Phase retrieval is the process of algorithmically finding solutions to the phase problem. Given a complex spectrum F(k), of amplitude |F(k)| and phase ψ(k):

F(k) = |F(k)| e^{iψ(k)} = ∫ f(x) e^{−2πi k·x} dx,

where x is an M-dimensional spatial coordinate and k is an M-dimensional spatial frequency coordinate. Phase retrieval consists of finding the phase that satisfies a set of constraints for a measured amplitude. Important applications of phase retrieval include X-ray crystallography, transmission electron microscopy and coherent diffractive imaging, for which M = 2. [1] Uniqueness theorems for both the 1-D and 2-D cases of the phase retrieval problem, including the phaseless 1-D inverse scattering problem, were proven by Klibanov and his collaborators (see References).
Here we consider the 1-D discrete Fourier transform (DFT) phase retrieval problem. The DFT of a complex signal f[n] is given by
F[k] = ∑_{n=0}^{N−1} f[n] e^{−j2πkn/N} = |F[k]| · e^{jψ[k]}, k = 0, 1, …, N − 1,
and the oversampled DFT of f[n] is given by
F[k] = ∑_{n=0}^{N−1} f[n] e^{−j2πkn/M}, k = 0, 1, …, M − 1,

where M > N.
Since the DFT operator is bijective, this is equivalent to recovering the phase ψ[k]. It is common to recover a signal from its autocorrelation sequence instead of from its Fourier magnitude. That is, denote by f̂ the vector f after padding with N − 1 zeros. The autocorrelation sequence of f̂ is then defined as
g[m] = ∑_{i=max{1, m+1}}^{N} f̂_i f̂*_{i−m}, m = −(N − 1), …, N − 1,
and the DFT of g[m], denoted by G[k], satisfies G[k] = |F[k]|².
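A minimal numerical check of this identity, assuming NumPy, is sketched below: the zero-padded autocorrelation is built directly from the definition above and its DFT is compared with the squared magnitude of the oversampled DFT.

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
M = 2 * N - 1                                  # oversampled length after padding with N-1 zeros

# autocorrelation g[m] = sum_i f[i] * conj(f[i-m]) for lags m = -(N-1), ..., N-1
g = np.zeros(M, dtype=complex)
for m in range(-(N - 1), N):
    for i in range(N):
        if 0 <= i - m < N:
            g[m + N - 1] += f[i] * np.conj(f[i - m])

F = np.fft.fft(f, n=M)                         # oversampled DFT of f
G = np.fft.fft(np.roll(g, -(N - 1)))           # DFT of g with lag 0 moved to index 0
print(np.allclose(G, np.abs(F) ** 2))          # True: G[k] = |F[k]|^2
```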
The error-reduction algorithm is a generalization of the Gerchberg–Saxton algorithm. It solves for f(x) from measurements of |F(u)| by iterating a four-step process. For the k-th iteration the steps are as follows:
Step (1): G_k(u), φ_k, and g_k(x) are estimates of, respectively, F(u), ψ and f(x). In the first step we calculate the Fourier transform of g_k(x):

G_k(u) = |G_k(u)| e^{iφ_k(u)} = FT{g_k(x)}.
Step (2): The experimental value of |F(u)|, calculated from the diffraction pattern via the signal equation, is then substituted for |G_k(u)|, giving an estimate of the Fourier transform:

G′_k(u) = |F(u)| e^{iφ_k(u)},
where the ' denotes an intermediate result that will be discarded later on.
Step (3): the estimate of the Fourier transform G′_k(u) is then inverse Fourier transformed:

g′_k(x) = FT⁻¹{G′_k(u)}.
Step (4): g′_k(x) must then be changed so that the new estimate of the object, g_{k+1}(x), satisfies the object constraints. g_{k+1}(x) is therefore defined piecewise as:

g_{k+1}(x) = g′_k(x) for x ∉ γ, and g_{k+1}(x) = 0 for x ∈ γ,

where γ is the domain in which g′_k(x) does not satisfy the object constraints. A new estimate g_{k+1}(x) is obtained and the four-step process is repeated.
This process is continued until both the Fourier constraint and the object constraint are satisfied. Theoretically, the process will always converge, [1] but the large number of iterations needed to produce a satisfactory image (generally >2000) means that the error-reduction algorithm by itself is unsuitable for practical applications.
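A minimal sketch of this loop, assuming NumPy and an object constrained to be real, non-negative, and confined to a known support, is given below; the initialization and iteration count are illustrative choices.

```python
import numpy as np

def error_reduction(measured_magnitude, support, n_iter=2000, seed=0):
    rng = np.random.default_rng(seed)
    g = rng.random(measured_magnitude.shape) * support            # initial object estimate
    for _ in range(n_iter):
        G = np.fft.fft2(g)                                        # step 1: Fourier transform
        G_prime = measured_magnitude * np.exp(1j * np.angle(G))   # step 2: impose measured |F(u)|
        g_prime = np.fft.ifft2(G_prime).real                      # step 3: inverse transform
        violates = (~support.astype(bool)) | (g_prime < 0)        # step 4: object constraints
        g = np.where(violates, 0.0, g_prime)                      # set violating pixels to zero
    return g
```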
The hybrid input-output algorithm is a modification of the error-reduction algorithm - the first three stages are identical. However, g_k(x) no longer acts as an estimate of f(x), but rather as the input function corresponding to the output function g′_k(x), which is an estimate of f(x). [1] In the fourth step, when the function g′_k(x) violates the object constraints, the value of g_{k+1}(x) is forced towards zero, but optimally not to zero. The chief advantage of the hybrid input-output algorithm is that the function g_k(x) contains feedback information concerning previous iterations, reducing the probability of stagnation. It has been shown that the hybrid input-output algorithm converges to a solution significantly faster than the error-reduction algorithm. Its convergence rate can be further improved through step-size optimization algorithms. [2]
In the fourth step the update is

g_{k+1}(x) = g′_k(x) for x ∉ γ, and g_{k+1}(x) = g_k(x) − β g′_k(x) for x ∈ γ.

Here β is a feedback parameter which can take a value between 0 and 1. For most applications, β ≈ 0.9 gives optimal results. [Scientific Reports 8, 6436 (2018)]
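Only step 4 changes relative to the error-reduction sketch above; a minimal HIO version, again assuming NumPy and the same real, non-negative, support-constrained object, could look like this.

```python
import numpy as np

def hio(measured_magnitude, support, beta=0.9, n_iter=2000, seed=0):
    rng = np.random.default_rng(seed)
    g = rng.random(measured_magnitude.shape) * support            # input function
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G_prime = measured_magnitude * np.exp(1j * np.angle(G))
        g_prime = np.fft.ifft2(G_prime).real                      # output function
        violates = (~support.astype(bool)) | (g_prime < 0)
        # keep the output where constraints hold; apply feedback elsewhere
        g = np.where(violates, g - beta * g_prime, g_prime)
    return g
```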
For a two-dimensional phase retrieval problem, there is a degeneracy of solutions, as f(x) and its conjugate f*(−x) have the same Fourier modulus. This leads to "image twinning", in which the phase retrieval algorithm stagnates, producing an image with features of both the object and its conjugate. [3] The shrinkwrap technique periodically updates the estimate of the support by low-pass filtering the current estimate of the object amplitude (by convolution with a Gaussian) and applying a threshold, leading to a reduction in the image ambiguity. [4]
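A minimal sketch of such a support update, assuming NumPy and SciPy; the blur width and threshold fraction are illustrative values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def update_support(g, sigma=3.0, threshold_fraction=0.2):
    blurred = gaussian_filter(np.abs(g), sigma=sigma)        # low-pass filter the amplitude
    return blurred > threshold_fraction * blurred.max()      # boolean support mask
```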
Phase retrieval is an ill-posed problem. To uniquely identify the underlying signal, in addition to methods that add prior information, such as the Gerchberg–Saxton algorithm, another approach is to add magnitude-only measurements, such as those of the short-time Fourier transform (STFT).
The method introduced below is mainly based on the work of Jaganathan et al. [5]
Consider a discrete signal x = (f[0], f[1], ..., f[N − 1])^T sampled from f(x). We use a window w = (w[0], w[1], ..., w[W − 1])^T of length W to compute the STFT of f, denoted by Y:
Y[m, r] = ∑_{n=0}^{N−1} f[n] w[rL − n] e^{−i2πmn/N}
for 0 ≤ m ≤ N − 1 and 0 ≤ r ≤ R − 1, where the parameter L denotes the separation in time between adjacent short-time sections and the parameter R = ⌈(N + W − 1)/L⌉ denotes the number of short-time sections considered.
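A minimal sketch, assuming NumPy, that evaluates the squared STFT magnitudes Z_w[m, r] directly from the definition above; the window w is treated as zero outside the indices 0, …, W − 1.

```python
import numpy as np

def stft_magnitude_squared(f, w, L):
    N, W = len(f), len(w)
    R = int(np.ceil((N + W - 1) / L))          # number of short-time sections
    n = np.arange(N)
    Z = np.zeros((N, R))
    for r in range(R):
        idx = r * L - n                        # window argument rL - n
        w_r = np.where((idx >= 0) & (idx < W), w[np.clip(idx, 0, W - 1)], 0.0)
        for m in range(N):
            Y_mr = np.sum(f * w_r * np.exp(-2j * np.pi * m * n / N))
            Z[m, r] = np.abs(Y_mr) ** 2        # magnitude-square measurement
    return Z
```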
The other interpretation (called the sliding-window interpretation) of the STFT can be used with the help of the discrete Fourier transform (DFT). Let w_r[n] = w[rL − n] denote the window elements obtained from the shifted and flipped window w. Then we have
Y = [Y_0, Y_1, ..., Y_{R−1}], where Y_r is the N-point DFT of x ∘ w_r (the element-wise product of the signal and the shifted, flipped window).
Let Z_w[m, r] = |Y[m, r]|² be the N × R measurements corresponding to the magnitude-square of the STFT of x, and let W_r be the N × N diagonal matrix with diagonal elements (w_r[0], w_r[1], …, w_r[N − 1]). STFT phase retrieval can be stated as:
Find x such that Z_w[m, r] = |⟨f_m, W_r x⟩|² for 0 ≤ m ≤ N − 1 and 0 ≤ r ≤ R − 1, where f_m is the m-th column of the N-point inverse DFT matrix.
Intuitively, the computational complexity growing with N would seem to make the method impractical. In fact, however, in most practical cases we only need to consider the measurements corresponding to 0 ≤ m ≤ M, for any parameter M satisfying 2W ≤ M ≤ N.
More specifically, if both the signal and the window are non-vanishing, that is, x[n] ≠ 0 for all 0 ≤ n ≤ N − 1 and w[n] ≠ 0 for all 0 ≤ n ≤ W − 1, the signal x can be uniquely identified from its STFT magnitude if the following requirements are satisfied:
The proof can be found in Jaganathan's work, [5] which reformulates STFT phase retrieval as the following least-squares problem:
min_x ∑_{r=0}^{R−1} ∑_{m=0}^{N−1} ( Z_w[m, r] − |⟨f_m, W_r x⟩|² )².
The algorithm, although without theoretical recovery guarantees, is empirically able to converge to the global minimum when there is substantial overlap between adjacent short-time sections.
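As a generic illustration (not the specific algorithm of the cited work), the least-squares objective above can be attacked with Wirtinger-type gradient descent, assuming NumPy; here a_{m,r} denotes the measurement vector conj(W_r) f_m so that the model value is |a^H x|², and the step size and random initialization are illustrative assumptions.

```python
import numpy as np

def stft_ls_descent(Z, A, mu=1e-3, n_iter=500, seed=0):
    # Z: (N, R) squared-magnitude measurements; A: (N, R, N), A[m, r] = a_{m,r}
    N, R = Z.shape
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    for _ in range(n_iter):
        grad = np.zeros(N, dtype=complex)
        for m in range(N):
            for r in range(R):
                a = A[m, r]
                inner = np.vdot(a, x)                          # a^H x
                grad += -2.0 * (Z[m, r] - abs(inner) ** 2) * a * inner
        x = x - mu * grad                                      # gradient step on the objective
    return x
```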
To establish recovery guarantees, one way is to formulate the problem as a semidefinite program (SDP) by embedding it in a higher-dimensional space using the transformation X = x x* and relaxing the rank-one constraint to obtain a convex program. The reformulated problem is stated below:
Obtain X̂ by solving:

minimize trace(X)
subject to Z[m, r] = trace(W_r* f_m f_m* W_r X) for 1 ≤ m ≤ M and 0 ≤ r ≤ R − 1,
X ⪰ 0.
Once X̂ is found, we can recover the signal x by a best rank-one approximation.
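A minimal sketch of this relaxation, assuming NumPy and CVXPY, with the same hypothetical measurement vectors a_{m,r} = conj(W_r) f_m as above; the solver choice and recovery by a leading eigenvector are illustrative.

```python
import numpy as np
import cvxpy as cp

def stft_sdp(Z, A):
    # Z: (N, R) measurements; A: (N, R, N), A[m, r] is the measurement vector a_{m,r}
    N, R = Z.shape
    X = cp.Variable((N, N), hermitian=True)
    constraints = [X >> 0]                                  # positive semidefinite
    for m in range(N):
        for r in range(R):
            a = A[m, r][:, None]
            constraints.append(cp.real(cp.trace((a @ a.conj().T) @ X)) == Z[m, r])
    cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints).solve()
    vals, vecs = np.linalg.eigh(X.value)                    # best rank-one approximation
    return np.sqrt(max(vals[-1], 0.0)) * vecs[:, -1]
```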
Phase retrieval is a key component of coherent diffraction imaging (CDI). In CDI, the intensity of the diffraction pattern scattered from a target is measured. The phase of the diffraction pattern is then obtained using phase retrieval algorithms and an image of the target is constructed. In this way, phase retrieval allows for the conversion of a diffraction pattern into an image without an optical lens .
Using phase retrieval algorithms, it is possible to characterize complex optical systems and their aberrations. [ 6 ] For example, phase retrieval was used to diagnose and repair the flawed optics of the Hubble Space Telescope . [ 7 ] [ 8 ]
Other applications of phase retrieval include X-ray crystallography [9] and transmission electron microscopy. (Source: https://en.wikipedia.org/wiki/Phase_retrieval)
In thermodynamics, the phase rule is a general principle governing multi-component, multi-phase systems in thermodynamic equilibrium. For a system without chemical reactions, it relates the number of freely varying intensive properties (F) to the number of components (C), the number of phases (P), and the number of ways of performing work on the system (N): [1][2][3]: 123–125

F = C − P + 1 + N.
Examples of intensive properties that count toward F are the temperature and pressure. For simple liquids and gases, pressure-volume work is the only type of work, in which case N = 1 .
The rule was derived by American physicist Josiah Willard Gibbs in his landmark paper titled On the Equilibrium of Heterogeneous Substances , published in parts between 1875 and 1878. [ 4 ]
The number of degrees of freedom F (also called the variance ) is the number of independent intensive properties, i.e. , the largest number of thermodynamic parameters such as temperature or pressure that can be varied simultaneously and independently of each other. [ 5 ]
An example of a one-component system ( C = 1 ) is a pure chemical. A two-component system ( C = 2 ) has two chemically independent components, like a mixture of water and ethanol. Examples of phases that count toward P are solids , liquids and gases .
The basis for the rule [3]: 122–126 is that equilibrium between phases places a constraint on the intensive variables. More rigorously, since the phases are in thermodynamic equilibrium with each other, the chemical potentials of the phases must be equal. The number of equality relationships determines the number of degrees of freedom. For example, if the chemical potentials of a liquid and of its vapour depend on temperature (T) and pressure (p), the equality of chemical potentials will mean that each of those variables will be dependent on the other. Mathematically, the equation μ_liq(T, p) = μ_vap(T, p), where μ is the chemical potential, defines temperature as a function of pressure or vice versa. (Caution: do not confuse p, the pressure, with P, the number of phases.)
To be more specific, the composition of each phase is determined by C − 1 intensive variables (such as mole fractions) in each phase. The total number of variables is ( C − 1) P + 2 , where the extra two are temperature T and pressure p . The number of constraints is C ( P − 1) , since the chemical potential of each component must be equal in all phases. Subtract the number of constraints from the number of variables to obtain the number of degrees of freedom as F = ( C − 1) P + 2 − C ( P − 1) = C − P + 2 .
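The counting argument above is easy to check numerically; the small Python snippet below, with illustrative inputs, reproduces F = C − P + 2.

```python
def degrees_of_freedom(C, P):
    """Gibbs phase rule with pressure-volume work only: (C-1)P + 2 variables minus C(P-1) constraints."""
    return (C - 1) * P + 2 - C * (P - 1)      # simplifies to C - P + 2

# pure substance: single phase, two-phase boundary, triple point
print(degrees_of_freedom(1, 1), degrees_of_freedom(1, 2), degrees_of_freedom(1, 3))  # 2 1 0
```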
The rule is valid provided the equilibrium between phases is not influenced by gravitational, electrical or magnetic forces, or by surface area, but only by temperature, pressure, and concentration.
For pure substances C = 1 so that F = 3 − P . In a single phase ( P = 1 ) condition of a pure component system, two variables ( F = 2 ), such as temperature and pressure, can be chosen independently to be any pair of values consistent with the phase. However, if the temperature and pressure combination ranges to a point where the pure component undergoes a separation into two phases ( P = 2 ), F decreases from 2 to 1. [ 6 ] When the system enters the two-phase region, it is no longer possible to independently control temperature and pressure.
In the phase diagram to the right, the boundary curve between the liquid and gas regions maps the constraint between temperature and pressure when the single-component system has separated into liquid and gas phases at equilibrium. The only way to increase the pressure on the two phase line is by increasing the temperature. If the temperature is decreased by cooling, some of the gas condenses, decreasing the pressure. Throughout both processes, the temperature and pressure stay in the relationship shown by this boundary curve unless one phase is entirely consumed by evaporation or condensation, or unless the critical point is reached. As long as there are two phases, there is only one degree of freedom, which corresponds to the position along the phase boundary curve.
The critical point is the black dot at the end of the liquid–gas boundary. As this point is approached, the liquid and gas phases become progressively more similar until, at the critical point, there is no longer a separation into two phases. Above the critical point and away from the phase boundary curve, F = 2 and the temperature and pressure can be controlled independently. Hence there is only one phase, and it has the physical properties of a dense gas, but is also referred to as a supercritical fluid .
Of the other two-boundary curves, one is the solid–liquid boundary or melting point curve which indicates the conditions for equilibrium between these two phases, and the other at lower temperature and pressure is the solid–gas boundary.
Even for a pure substance, it is possible that three phases, such as solid, liquid and vapour, can exist together in equilibrium ( P = 3 ). If there is only one component, there are no degrees of freedom ( F = 0 ) when there are three phases. Therefore, in a single-component system, this three-phase mixture can only exist at a single temperature and pressure, which is known as a triple point . Here there are two equations μ sol ( T , p ) = μ liq ( T , p ) = μ vap ( T , p ) , which are sufficient to determine the two variables T and p. In the diagram for CO 2 the triple point is the point at which the solid, liquid and gas phases come together, at 5.2 bar and 217 K. It is also possible for other sets of phases to form a triple point, for example in the water system there is a triple point where ice I , ice III and liquid can coexist.
If four phases of a pure substance were in equilibrium ( P = 4 ), the phase rule would give F = −1 , which is meaningless, since there cannot be −1 independent variables. This explains the fact that four phases of a pure substance (such as ice I, ice III, liquid water and water vapour) are not found in equilibrium at any temperature and pressure. In terms of chemical potentials there are now three equations, which cannot in general be satisfied by any values of the two variables T and p , although in principle they might be solved in a special case where one equation is mathematically dependent on the other two. In practice, however, the coexistence of more phases than allowed by the phase rule normally means that the phases are not all in true equilibrium.
For binary mixtures of two chemically independent components, C = 2 so that F = 4 − P . In addition to temperature and pressure, the other degree of freedom is the composition of each phase, often expressed as mole fraction or mass fraction of one component. [ 6 ]
As an example, consider the system of two completely miscible liquids such as toluene and benzene , in equilibrium with their vapours. This system may be described by a boiling-point diagram which shows the composition (mole fraction) of the two phases in equilibrium as functions of temperature (at a fixed pressure).
Four thermodynamic variables which may describe the system include temperature ( T ), pressure ( p ), mole fraction of component 1 (toluene) in the liquid phase ( x 1L ), and mole fraction of component 1 in the vapour phase ( x 1V ). However, since two phases are present ( P = 2 ) in equilibrium, only two of these variables can be independent ( F = 2 ). This is because the four variables are constrained by two relations: the equality of the chemical potentials of liquid toluene and toluene vapour, and the corresponding equality for benzene.
For given T and p , there will be two phases at equilibrium when the overall composition of the system ( system point ) lies in between the two curves. A horizontal line ( isotherm or tie line) can be drawn through any such system point, and intersects the curve for each phase at its equilibrium composition. The quantity of each phase is given by the lever rule (expressed in the variable corresponding to the x -axis, here mole fraction).
For the analysis of fractional distillation , the two independent variables are instead considered to be liquid-phase composition (x 1L ) and pressure. In that case the phase rule implies that the equilibrium temperature ( boiling point ) and vapour-phase composition are determined.
Liquid–vapour phase diagrams for other systems may have azeotropes (maxima or minima) in the composition curves, but the application of the phase rule is unchanged. The only difference is that the compositions of the two phases are equal exactly at the azeotropic composition.
Consider an aqueous solution containing sodium chloride (NaCl), potassium chloride (KCl), sodium bromide (NaBr), and potassium bromide (KBr), in equilibrium with their respective solid phases. Each salt, in solid form, is a different phase, because each possesses a distinct crystal structure and composition. The aqueous solution itself is another phase, because it forms a homogeneous liquid phase separate from the solid salts, with its own distinct composition and physical properties. Thus we have P = 5 phases.
There are 6 elements present (H, O, Na, K, Cl, Br), but we have 2 constraints: the amounts of H and O are fixed in the 2:1 ratio of water, and the solution as a whole must be electrically neutral. This gives C = 6 − 2 = 4 components. The Gibbs phase rule then states that F = C − P + 2 = 4 − 5 + 2 = 1. So, for example, if we plot the p–T phase diagram of the system, there is only one line along which all phases coexist. Any deviation from the line would either cause one of the salts to completely dissolve or one of the ions to completely precipitate from the solution.
For applications in materials science dealing with phase changes between different solid structures, pressure is often imagined to be constant (for example at 1 atmosphere), and is ignored as a degree of freedom, so the formula becomes: [7]

F = C − P + 1.
This is sometimes incorrectly called the "condensed phase rule", but it is not applicable to condensed systems subject to high pressures (for example, in geology), since the effects of these pressures are important. [ 8 ]
In colloidal mixtures, quintuple [9][10] and sextuple points [11][12] have been described in violation of the Gibbs phase rule, but it is argued that in these systems the rule can be generalized to F = M + C − P + 1, where M accounts for additional parameters of interaction among the components, such as the diameter of one type of particle in relation to the diameter of the other particles in the solution. (Source: https://en.wikipedia.org/wiki/Phase_rule)
Phase separation is the creation of two distinct phases from a single homogeneous mixture. [1] The most common type of phase separation is between two immiscible liquids, such as oil and water. This type of phase separation is known as liquid-liquid equilibrium. Colloids are formed by phase separation, though not all phase separations form colloids - for example, oil and water can form separated layers under gravity rather than remaining as microscopic droplets in suspension.
A common form of spontaneous phase separation is termed spinodal decomposition ; it is described by the Cahn–Hilliard equation . Regions of a phase diagram in which phase separation occurs are called miscibility gaps . There are two boundary curves of note: the binodal coexistence curve and the spinodal curve . On one side of the binodal, mixtures are absolutely stable. In between the binodal and the spinodal, mixtures may be metastable : staying mixed (or unmixed) absent some large disturbance. The region beyond the spinodal curve is absolutely unstable, and (if starting from a mixed state) will spontaneously phase-separate.
The upper critical solution temperature (UCST) and the lower critical solution temperature (LCST) are two critical temperatures , above which or below which the components of a mixture are miscible in all proportions. It is rare for systems to have both, but some exist: the nicotine -water system has an LCST of 61 °C, and also a UCST of 210 °C at pressures high enough for liquid water to exist at that temperature. The components are therefore miscible in all proportions below 61 °C and above 210 °C (at high pressure), and partially miscible in the interval from 61 to 210 °C. [ 2 ] [ 3 ]
Mixing is governed by the Gibbs free energy, with phase separation or mixing occurring for whichever case lowers the Gibbs free energy. The free energy G can be decomposed into two parts: G = H − TS, with H the enthalpy, T the temperature, and S the entropy. Thus, the change of the free energy in mixing is the sum of the enthalpy of mixing and the entropy of mixing. The enthalpy of mixing is zero for ideal mixtures, and ideal mixtures are enough to describe many common solutions. Thus, in many cases, mixing (or phase separation) is driven primarily by the entropy of mixing. It is generally the case that the entropy will increase whenever a particle (an atom, a molecule) has a larger space to explore; and thus, the entropy of mixing is generally positive: the components of the mixture can increase their entropy by sharing a larger common volume.
Phase separation is then driven by several distinct processes. In one case, the enthalpy of mixing is positive and the temperature is low: the increase in entropy is insufficient to lower the free energy. In another, considerably rarer case, the entropy of mixing is "unfavorable", that is to say, negative. In this case, even if the change in enthalpy is negative, phase separation will occur unless the temperature is low enough. It is this second case which gives rise to the idea of the lower critical solution temperature.
A mixture of two helium isotopes (helium-3 and helium-4) in a certain range of temperatures and concentrations separates into two phases. The initial mix of the two isotopes spontaneously separates into ⁴He-rich and ³He-rich regions. [4] Phase separation also exists in ultracold gas systems. [5] It has been shown experimentally in a two-component ultracold Fermi gas. [6][7] The phase separation can compete with other phenomena such as vortex lattice formation or an exotic Fulde-Ferrell-Larkin-Ovchinnikov phase. [8] (Source: https://en.wikipedia.org/wiki/Phase_separation)
Phase shift torque measurement involves the use of a shaft, which is either an integral part of the rotating machine under test - such as a turbine , compressor , or jet engine - or positioned between the machine and a dynamometer . The shaft has a pair of identical toothed disks attached at each end and often has a slender portion to enhance its angle of twist. The twist of the shaft can be determined from the phase difference of the magnetically or optically detected wave pattern from each of the disks. [ 1 ] [ 2 ] Under no-load the waves are synchronised and as a load is applied to the shaft their phase difference increases. The shaft's angle of twist is determined from the measured phase difference. Since the twist of a shaft is linearly proportional to the applied torque within the elastic limit (up to its yield strength ), the torque can be calculated using established formulas of torsion mechanics . Phase shift torque meters can measure shaft power to 0.1% accuracy in R & D applications, and to 1.0% when designed for permanent installation, both at confidence levels of 95%. [ 3 ]
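A minimal sketch, assuming NumPy, of the final conversion from twist angle to torque using the standard torsion formula T = GJθ/L; the shaft geometry, shear modulus, and twist angle below are purely illustrative values, not data from the cited instrumentation.

```python
import numpy as np

G = 80e9                        # shear modulus of a steel shaft, Pa (illustrative)
d = 0.05                        # shaft diameter, m (illustrative)
L = 0.3                         # length of the slender twisting section, m (illustrative)
J = np.pi * d**4 / 32           # polar second moment of area of a solid circular shaft, m^4

theta = np.deg2rad(0.2)         # twist angle determined from the measured phase difference
torque = G * J * theta / L      # valid within the elastic limit of the shaft
print(f"torque = {torque:.1f} N*m")
```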
As of 1991, phase shift torque measurement instrumentation had been installed on gas turbine systems with a total power of 2 GW, with over 2 million operational hours recorded, demonstrating good reliability. These systems operated at speeds of up to 90,000 rpm and achieved power outputs of up to 50 MW. [ 3 ]
(Source: https://en.wikipedia.org/wiki/Phase_shift_torque_measurement)
The phase space of a physical system is the set of all possible physical states of the system when described by a given parameterization. Each possible state corresponds uniquely to a point in the phase space. For mechanical systems, the phase space usually consists of all possible values of the position and momentum parameters. It is the direct product of direct space and reciprocal space. The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincaré, and Josiah Willard Gibbs. [1]
In a phase space, every degree of freedom or parameter of the system is represented as an axis of a multidimensional space; a one-dimensional system is called a phase line , while a two-dimensional system is called a phase plane . For every possible state of the system or allowed combination of values of the system's parameters, a point is included in the multidimensional space. The system's evolving state over time traces a path (a phase-space trajectory for the system) through the high-dimensional space. The phase-space trajectory represents the set of states compatible with starting from one particular initial condition , located in the full phase space that represents the set of states compatible with starting from any initial condition. As a whole, the phase diagram represents all that the system can be, and its shape can easily elucidate qualities of the system that might not be obvious otherwise. A phase space may contain a great number of dimensions. For instance, a gas containing many molecules may require a separate dimension for each particle's x , y and z positions and momenta (6 dimensions for an idealized monatomic gas), and for more complex molecular systems additional dimensions are required to describe vibrational modes of the molecular bonds, as well as spin around 3 axes. Phase spaces are easier to use when analyzing the behavior of mechanical systems restricted to motion around and along various axes of rotation or translation – e.g. in robotics, like analyzing the range of motion of a robotic arm or determining the optimal path to achieve a particular position/momentum result.
In classical mechanics, any choice of generalized coordinates q i for the position (i.e. coordinates on configuration space ) defines conjugate generalized momenta p i , which together define co-ordinates on phase space. More abstractly, in classical mechanics phase space is the cotangent bundle of configuration space, and in this interpretation the procedure above expresses that a choice of local coordinates on configuration space induces a choice of natural local Darboux coordinates for the standard symplectic structure on a cotangent space.
The motion of an ensemble of systems in this space is studied by classical statistical mechanics . The local density of points in such systems obeys Liouville's theorem , and so can be taken as constant. Within the context of a model system in classical mechanics, the phase-space coordinates of the system at any given time are composed of all of the system's dynamic variables. Because of this, it is possible to calculate the state of the system at any given time in the future or the past, through integration of Hamilton's or Lagrange's equations of motion.
For simple systems, there may be as few as one or two degrees of freedom. One degree of freedom occurs when one has an autonomous ordinary differential equation in a single variable, dy/dt = f(y), with the resulting one-dimensional system being called a phase line, and the qualitative behaviour of the system being immediately visible from the phase line. The simplest non-trivial examples are the exponential growth model/decay (one unstable/stable equilibrium) and the logistic growth model (two equilibria, one stable, one unstable).
The phase space of a two-dimensional system is called a phase plane , which occurs in classical mechanics for a single particle moving in one dimension, and where the two variables are position and velocity. In this case, a sketch of the phase portrait may give qualitative information about the dynamics of the system, such as the limit cycle of the Van der Pol oscillator shown in the diagram.
Here the horizontal axis gives the position, and vertical axis the velocity. As the system evolves, its state follows one of the lines (trajectories) on the phase diagram.
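A minimal sketch, assuming NumPy and SciPy, of tracing such trajectories for the Van der Pol oscillator: starting from different initial conditions, the (position, velocity) curves all wind onto the same limit cycle. The parameter value and initial conditions are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu=1.0):
    x, v = state                               # phase-plane coordinates: position and velocity
    return [v, mu * (1 - x**2) * v - x]

for x0, v0 in [(0.1, 0.0), (3.0, 0.0), (0.0, -4.0)]:
    sol = solve_ivp(van_der_pol, (0.0, 40.0), [x0, v0], max_step=0.01)
    x, v = sol.y                               # each column of sol.y is a point on the trajectory
    print(x[-1], v[-1])                        # late-time points lie on the limit cycle
# plotting x against v (e.g. with matplotlib) reproduces the phase portrait described above
```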
A plot of position and momentum variables as a function of time is sometimes called a phase plot or a phase diagram . However the latter expression, " phase diagram ", is more usually reserved in the physical sciences for a diagram showing the various regions of stability of the thermodynamic phases of a chemical system, which consists of pressure , temperature , and composition.
In mathematics , a phase portrait is a geometric representation of the orbits of a dynamical system in the phase plane . Each set of initial conditions is represented by a different point or curve .
Phase portraits are an invaluable tool in studying dynamical systems. They consist of a plot of typical trajectories in the phase space. This reveals information such as whether an attractor, a repellor or a limit cycle is present for the chosen parameter value. The concept of topological equivalence is important in classifying the behaviour of systems by specifying when two different phase portraits represent the same qualitative dynamic behavior. An attractor is a stable point, also called a "sink"; a repellor is an unstable point, also known as a "source".
In classical statistical mechanics (continuous energies) the concept of phase space provides a classical analog to the partition function (sum over states) known as the phase integral. [ 2 ] Instead of summing the Boltzmann factor over discretely spaced energy states (defined by appropriate integer quantum numbers for each degree of freedom), one may integrate over continuous phase space. Such integration essentially consists of two parts: integration of the momentum component of all degrees of freedom (momentum space) and integration of the position component of all degrees of freedom (configuration space). Once the phase integral is known, it may be related to the classical partition function by multiplication of a normalization constant representing the number of quantum energy states per unit phase space. This normalization constant is simply the inverse of the Planck constant raised to a power equal to the number of degrees of freedom for the system. [ 3 ]
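A minimal numerical sketch, assuming NumPy and SciPy, of this phase integral for a one-dimensional harmonic oscillator: integrating the Boltzmann factor over position and momentum and dividing by the Planck constant (one factor of h per degree of freedom) reproduces the classical partition function kT/(ħω). The mass, frequency, and temperature are illustrative values.

```python
import numpy as np
from scipy.integrate import dblquad

hbar = 1.054571817e-34
h = 2 * np.pi * hbar
kB = 1.380649e-23
m, omega, T = 1e-26, 1e12, 300.0               # illustrative oscillator and temperature
beta = 1.0 / (kB * T)

H = lambda p, q: p**2 / (2 * m) + 0.5 * m * omega**2 * q**2   # classical Hamiltonian
boltzmann = lambda p, q: np.exp(-beta * H(p, q))

# integrate over a box much wider than the thermal widths in p and q
p_max = 10 * np.sqrt(m / beta)
q_max = 10 / (omega * np.sqrt(m * beta))
phase_integral, _ = dblquad(boltzmann, -q_max, q_max, -p_max, p_max)

Z_classical = phase_integral / h               # one quantum state per h of phase space
print(Z_classical, kB * T / (hbar * omega))    # the two values agree
```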
Classic examples of phase diagrams from chaos theory are:
In quantum mechanics , the coordinates p and q of phase space normally become Hermitian operators in a Hilbert space .
But they may alternatively retain their classical interpretation, provided functions of them compose in novel algebraic ways (through Groenewold's 1946 star product ). This is consistent with the uncertainty principle of quantum mechanics.
Every quantum mechanical observable corresponds to a unique function or distribution on phase space, and conversely, as specified by Hermann Weyl (1927) and supplemented by John von Neumann (1931); Eugene Wigner (1932); and, in a grand synthesis, by H. J. Groenewold (1946).
With J. E. Moyal (1949), these completed the foundations of the phase-space formulation of quantum mechanics , a complete and logically autonomous reformulation of quantum mechanics. [ 4 ] (Its modern abstractions include deformation quantization and geometric quantization .)
Expectation values in phase-space quantization are obtained isomorphically to tracing operator observables with the density matrix in Hilbert space: they are obtained by phase-space integrals of observables, with the Wigner quasi-probability distribution effectively serving as a measure.
Thus, by expressing quantum mechanics in phase space (the same ambit as for classical mechanics), the Weyl map facilitates recognition of quantum mechanics as a deformation (generalization) of classical mechanics, with deformation parameter ħ/S, where S is the action of the relevant process. (Other familiar deformations in physics involve the deformation of classical Newtonian into relativistic mechanics, with deformation parameter v/c; or the deformation of Newtonian gravity into general relativity, with deformation parameter Schwarzschild radius/characteristic dimension.)
Classical expressions, observables, and operations (such as Poisson brackets ) are modified by ħ -dependent quantum corrections, as the conventional commutative multiplication applying in classical mechanics is generalized to the noncommutative star-multiplication characterizing quantum mechanics and underlying its uncertainty principle.
In thermodynamics and statistical mechanics contexts, the term "phase space" has two meanings: for one, it is used in the same sense as in classical mechanics. If a thermodynamic system consists of N particles, then a point in the 6 N -dimensional phase space describes the dynamic state of every particle in that system, as each particle is associated with 3 position variables and 3 momentum variables. In this sense, as long as the particles are distinguishable , a point in phase space is said to be a microstate of the system. (For indistinguishable particles a microstate consists of a set of N ! points, corresponding to all possible exchanges of the N particles.) N is typically on the order of the Avogadro number , thus describing the system at a microscopic level is often impractical. This leads to the use of phase space in a different sense.
The phase space can also refer to the space that is parameterized by the macroscopic states of the system, such as pressure, temperature, etc. For instance, one may view the pressure–volume diagram or temperature–entropy diagram as describing part of this phase space. A point in this phase space is correspondingly called a macrostate. There may easily be more than one microstate with the same macrostate. For example, for a fixed temperature, the system could have many dynamic configurations at the microscopic level. When used in this sense, a phase is a region of phase space where the system in question is in, for example, the liquid phase, or solid phase, etc.
Since there are many more microstates than macrostates, the phase space in the first sense is usually a manifold of much larger dimensions than in the second sense. Clearly, many more parameters are required to register every detail of the system down to the molecular or atomic scale than to simply specify, say, the temperature or the pressure of the system.
Phase space is extensively used in nonimaging optics , [ 5 ] the branch of optics devoted to illumination. It is also an important concept in Hamiltonian optics .
In medicine and bioengineering, the phase space method is used to visualize multidimensional physiological responses. [6][7] (Source: https://en.wikipedia.org/wiki/Phase_space)
Phase space crystal is the state of a physical system that displays discrete symmetry in phase space instead of real space . For a single-particle system, the phase space crystal state refers to the eigenstate of the Hamiltonian for a closed quantum system [ 1 ] or the eigenoperator of the Liouvillian for an open quantum system . [ 2 ] For a many-body system, phase space crystal is the solid-like crystalline state in phase space. [ 3 ] [ 4 ] The general framework of phase space crystals is to extend the study of solid state physics and condensed matter physics into phase space of dynamical systems . [ 5 ] While real space has Euclidean geometry , phase space is embedded with classical symplectic geometry or quantum noncommutative geometry .
In his celebrated book Mathematical Foundations of Quantum Mechanics , [ 6 ] John von Neumann constructed a phase space lattice by two commutative elementary displacement operators along position and momentum directions respectively, which is also called the von Neumann lattice nowadays. If the phase space is replaced a frequency-time plane, the von Neumann lattice is called Gabor lattice [ 7 ] and widely used for signal processing. [ 8 ]
The phase space lattice differs fundamentally from the real space lattice because the two coordinates of phase space are noncommutative in quantum mechanics. As a result, a coherent state moving along a closed path in phase space acquires an additional phase factor, which is similar to the Aharonov–Bohm effect of a charged particle moving in a magnetic field. [9][3] There is a deep connection between phase space and magnetic field. In fact, the canonical equation of motion can also be rewritten in the Lorentz-force form, reflecting the symplectic geometry of classical phase space. [5]
In the phase space of dynamical systems, the stable points together with their neighbouring regions form the so-called Poincaré-Birkhoff islands in the chaotic sea, which may form a chain or some regular two-dimensional lattice structures in phase space. For example, the effective Hamiltonian of the kicked harmonic oscillator (KHO) [10][11] can possess square-lattice, triangle-lattice and even quasi-crystal structures in phase space depending on the ratio of kicking number. In fact, any arbitrary phase space lattice can be engineered by selecting an appropriate kicking sequence for the KHO. [4]
The concept of phase space crystal was proposed by Guo et al. [ 1 ] and originally refers to the eigenstate of effective Hamiltonian of periodically driven (Floquet) dynamical system. Depending on whether interaction effect is included, phase space crystals can be classified into single-particle PSC and many-body PSC . [ 12 ]
Depending on the symmetry in phase space, a phase space crystal can be a one-dimensional (1D) state with n-fold rotational symmetry in phase space or a two-dimensional (2D) lattice state extended into the whole phase space. The concept of phase space crystal for a closed system has been extended to open quantum systems and is named dissipative phase space crystals. [2]
Phase space is fundamentally different from real space, as the two coordinates of phase space do not commute, i.e., [x̂, p̂] = iλ, where λ is the dimensionless Planck constant. The ladder operator is defined as â = (x̂ + ip̂)/√(2λ), such that [â, â†] = 1. The Hamiltonian of a physical system Ĥ = H(x̂, p̂) can also be written as a function of ladder operators, Ĥ = H(â, â†). Defining the rotational operator in phase space [1][13] by T̂_τ = e^{−iτâ†â}, where τ = 2π/n with n a positive integer, the system has n-fold rotational symmetry, or Z_n symmetry, if the Hamiltonian commutes with the rotational operator, [Ĥ, T̂_τ] = 0, i.e.,

Ĥ = T̂_τ† Ĥ T̂_τ  →  H(â, â†) = H(T̂_τ† â T̂_τ, T̂_τ† â† T̂_τ) = H(â e^{−iτ}, â† e^{iτ}).

In this case, one can apply Bloch's theorem to the n-fold symmetric Hamiltonian and calculate the band structure. [1][14] The discrete rotationally symmetric structure of the Hamiltonian is called a Z_n phase space lattice [15] and the corresponding eigenstates are called Z_n phase space crystals.
The discrete rotational symmetry can be extended to discrete translational symmetry in the whole phase space. For such a purpose, the displacement operator in phase space is defined by D̂(ξ) = exp[(ξ↠− ξ*â)/√(2λ)], which has the property D̂†(ξ) â D̂(ξ) = â + ξ, where ξ is a complex number corresponding to the displacement vector in phase space. The system has discrete translational symmetry if the Hamiltonian commutes with the translational operator, [Ĥ, D̂†(ξ)] = 0, i.e.,

Ĥ = D̂†(ξ) Ĥ D̂(ξ)  →  H(â, â†) = H(D̂†(ξ) â D̂(ξ), D̂†(ξ) ↠D̂(ξ)) = H(â + ξ, ↠+ ξ*).

If there exist two elementary displacements D̂(ξ₁) and D̂(ξ₂) that satisfy the above condition simultaneously, the phase space Hamiltonian possesses 2D lattice symmetry in phase space. However, the two displacement operators are not commutative in general, [D̂(ξ₁), D̂(ξ₂)] ≠ 0. In the non-commutative phase space, the concept of a "point" is meaningless. Instead, a coherent state |α⟩ is defined as the eigenstate of the lowering operator via â|α⟩ = α|α⟩. The displacement operator displaces the coherent state with an additional phase, i.e., D̂(ξ)|α⟩ = e^{i Im(ξα*)}|α + ξ⟩. A coherent state that is moved along a closed path, e.g., a triangle with three edges given by (ξ₁, ξ₂, −ξ₁ − ξ₂) in phase space, acquires a geometric phase factor [16][3]

D̂(−ξ₁ − ξ₂) D̂(ξ₂) D̂(ξ₁)|α⟩ = e^{iS/λ}|α⟩,

where S = (1/2) Im(ξ₂ξ₁*) is the enclosed area. This geometric phase is analogous to the Aharonov–Bohm phase of a charged particle in a magnetic field. If the magnetic unit cell and the lattice unit cell are commensurable, namely, if there exist two integers r and s such that [D̂^r(ξ₁), D̂^s(ξ₂)] = 0, one can calculate the band structure defined in a 2D Brillouin zone. For example, the spectrum of the square phase space lattice Hamiltonian Ĥ = cos x̂ + cos p̂ displays a Hofstadter's butterfly band structure [3][17] that describes the hopping of charged particles between tight-binding lattice sites in a magnetic field. [18] In this case, the eigenstates are called 2D lattice phase space crystals.
The concept of phase space crystals for closed quantum systems has been extended to open quantum systems. [2] In circuit QED systems, a microwave resonator combined with Josephson junctions and a voltage bias under n-photon resonance can be described by a rotating wave approximation (RWA) Hamiltonian Ĥ_RWA with the Z_n phase space symmetry described above. When single-photon loss is dominant, the dissipative dynamics of the resonator is described by the following master equation (Lindblad equation):

dρ/dt = −(i/ħ)[Ĥ_RWA, ρ] + (γ/2)(2âρ↠− â†âρ − ρâ†â) = 𝓛(ρ),

where γ is the loss rate and the superoperator 𝓛 is called the Liouvillian. One can calculate the eigenspectrum and the corresponding eigenoperators of the Liouvillian of the system, 𝓛ρ̂_m = λ_m ρ̂_m.
Notice that not only the Hamiltonian but also the Liouvillian is invariant under the n-fold rotational operation, i.e., [𝓛, 𝒯_τ] = 0 with 𝒯_τ Ô = T̂_τ† Ô T̂_τ and τ = 2π/n. This symmetry plays a crucial role in extending the concept of phase space crystals to open quantum systems. As a result, the Liouvillian eigenoperators ρ̂_m have a Bloch mode structure in phase space, which is called a dissipative phase space crystal. [2]
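A minimal numerical sketch, assuming NumPy, of this eigenproblem: the Liouvillian is built as a matrix acting on vectorized density operators and its spectrum is computed. The Kerr-like placeholder Hamiltonian, Fock-space truncation, and rates are illustrative assumptions, not the circuit-QED model of the cited work.

```python
import numpy as np

dim = 20                                       # truncated Fock space
a = np.diag(np.sqrt(np.arange(1, dim)), 1)     # annihilation operator
ad = a.conj().T
I = np.eye(dim)

H = 0.1 * (ad @ ad @ a @ a)                    # placeholder Hamiltonian (hbar = 1)
gamma = 0.05                                   # single-photon loss rate

# vectorization rule: vec(A X B) = (B^T kron A) vec(X)
def left(A): return np.kron(I, A)              # A acting from the left
def right(B): return np.kron(B.T, I)           # B acting from the right

Liouvillian = (-1j * (left(H) - right(H))
               + gamma / 2 * (2 * np.kron(a.conj(), a) - left(ad @ a) - right(ad @ a)))

eigvals = np.linalg.eigvals(Liouvillian)
print(eigvals[np.argsort(-eigvals.real)][:5])  # slowest-decaying Liouvillian modes (incl. steady state)
```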
The concept of phase space crystal can be extended to systems of interacting particles, where it refers to a many-body state having a solid-like crystalline structure in phase space. [3][4][12] In this case, the interaction of particles plays an important role. In real space, the many-body Hamiltonian subjected to a perturbative periodic drive (with period T) is given by

ℋ = ∑_i H(x_i, p_i, t) + ∑_{i<j} V(x_i − x_j).

Usually, the interaction potential V(x_i − x_j) is a function of the two particles' distance in real space. By transforming to the rotating frame at the driving frequency and adopting the rotating wave approximation (RWA), one obtains the effective Hamiltonian [15][5]

ℋ_RWA = ∑_i H_RWA(X_i, P_i, t) + ∑_{i<j} U(X_i, P_i; X_j, P_j).

Here, X_i, P_i are the stroboscopic position and momentum of the i-th particle, namely, they take the values of x_i(t), p_i(t) at integer multiples of the driving period, t = nT. To have a crystal structure in phase space, the effective interaction in phase space needs to be invariant under the discrete rotational or translational operations in phase space.
In classical dynamics, to leading order, the effective interaction potential in phase space is the time-averaged real-space interaction over one driving period,

U_ij = (1/T) ∫₀ᵀ V[x_i(t) − x_j(t)] dt.

Here, x_i(t) represents the trajectory of the i-th particle in the absence of the driving field. For the model power-law interaction potential V(x_i − x_j) = ε^{2n}/|x_i − x_j|^{2n} with integer and half-integer n ≥ 1/2, the direct integral given by the above time-average formula is divergent, i.e., U_ij = ∞. A renormalisation procedure was introduced to remove the divergence, [19] and the correct phase space interaction is a function of the phase space distance R_ij in the (X_i, P_i) plane. For the Coulomb potential, n = 1/2, the result U(R_ij) = 2π⁻¹ ε̃/R_ij still keeps the form of Coulomb's law up to a logarithmically renormalised "charge" ε̃ = ε ln(ε⁻¹ e² R_ij³/2), where e = 2.71828⋯ is Euler's number. For n = 1, 3/2, 2, 5/2, ⋯, the renormalised phase space interaction potential is [19]

U_ij = U(R_ij) = [2ε γ^{2n−1} 4^{1/(2n)−1} / (π(2n − 1))] R_ij^{1−1/n},

where γ = (4n − 1)^{1/(2n−1)} is the collision factor. For the special case of n = 1, there is no effective interaction in phase space, since U(R_ij) = √3 ε π⁻¹ is a constant with respect to the phase space distance. In general, for n > 1, the phase space interaction U(R_ij) grows with the phase space distance R_ij. For the hard-sphere interaction (n → ∞), the phase space interaction U(R_ij) = ε π⁻¹ R_ij behaves like the confinement interaction between quarks in quantum chromodynamics (QCD). The above phase space interaction is indeed invariant under the discrete rotational or translational operations in phase space. Combined with the phase space lattice potential from the driving, there exists a stable regime where the particles arrange themselves periodically in phase space, giving rise to many-body phase space crystals. [3][4][12]
In quantum mechanics, the point particle is replaced by a quantum wave packet and the divergence problem is naturally avoided. To the lowest-order Magnus expansion for the Floquet system, the quantum phase space interaction of two particles is the time-averaged real-space interaction over the periodic two-body quantum state Φ(x_i, x_j, t), as follows: [20][3]

U_ij = (1/T) ∫₀ᵀ ⟨Φ(x_i, x_j, t)| V(x_i − x_j) |Φ(x_i, x_j, t)⟩ dt.

In the coherent state representation, the quantum phase space interaction approaches the classical phase space interaction in the long-distance limit. [3] For N bosonic ultracold atoms with repulsive contact interaction bouncing on an oscillating mirror, it is possible to form a Mott insulator-like state in the Z_n phase space lattice. [20][15] In this case, there is a well-defined number of particles in each potential site, which can be viewed as an example of a 1D many-body phase space crystal.
If the two indistinguishable particles have spins , the total phase space interaction can be written in a sum of direct interaction and exchange interaction . [ 3 ] This means that the exchange effect during the collision of two particles can induce an effective spin-spin interaction. [ 5 ]
Solid crystals are defined by a periodic arrangement of atoms in real space; atoms subject to a time-periodic drive can also form crystals in phase space. [3] The interactions between these atoms give rise to collective vibrational modes similar to phonons in solid crystals. The honeycomb phase space crystal is particularly interesting because the vibrational band structure has two sub-lattice bands that can have nontrivial topological physics. [4] The vibrations of any two atoms are coupled via a pairing interaction with intrinsically complex couplings. Their complex phases have a simple geometrical interpretation and cannot be eliminated by a gauge transformation, leading to a vibrational band structure with non-trivial Chern numbers and chiral edge states in phase space. In contrast to all topological transport scenarios in real space, the chiral transport for phase space phonons can arise without breaking physical time-reversal symmetry.
Time crystals and phase space crystals are closely related but different concepts. [5] They both study subharmonic modes that emerge in periodically driven systems. Time crystals focus on the spontaneous symmetry breaking process of discrete time translational symmetry (DTTS) and the protection mechanism of subharmonic modes in quantum many-body systems. In contrast, the study of phase space crystals focuses on the discrete symmetries in phase space. The basic modes constructing a phase space crystal are not necessarily many-body states, and need not break DTTS, as for the single-particle phase space crystals. For many-body systems, phase space crystals study the interplay of the potential subharmonic modes that are arranged periodically in phase space. There is a trend to study the interplay of multiple time crystals, [21] which has been coined condensed matter physics in time crystals. [22][15][23] (Source: https://en.wikipedia.org/wiki/Phase_space_crystal)
In applied mathematics , the phase space method is a technique for constructing and analyzing solutions of dynamical systems , that is, solving time-dependent differential equations .
The method consists of first rewriting the equations as a system of differential equations that are first-order in time, by introducing additional variables. The original and the new variables form a vector in the phase space . The solution then becomes a curve in the phase space, parametrized by time. The curve is usually called a trajectory or an orbit . The (vector) differential equation is reformulated as a geometrical description of the curve, that is, as a differential equation in terms of the phase space variables only, without the original time parametrization. Finally, a solution in the phase space is transformed back into the original setting.
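As a brief illustration, the following Python sketch rewrites a second-order equation as a first-order system and traces the resulting trajectory in the phase plane. The damped pendulum equation, the integrator, and all parameter values are chosen for the example and are not taken from this text.

```python
# Minimal sketch of the phase space method for the damped pendulum
# theta'' + b*theta' + (g/L)*sin(theta) = 0.
# Introducing omega = theta' turns it into a first-order system in the
# phase space variables (theta, omega); the solution becomes a curve
# (trajectory) in that plane, parametrized by time.
import numpy as np

def rhs(state, b=0.2, g=9.81, L=1.0):
    theta, omega = state
    return np.array([omega, -b * omega - (g / L) * np.sin(theta)])

def trajectory(state0, dt=1e-3, steps=20000):
    """Integrate with a simple RK4 scheme and return the phase space curve."""
    points = [np.asarray(state0, dtype=float)]
    for _ in range(steps):
        s = points[-1]
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        points.append(s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)
    return np.array(points)  # columns: theta, omega

curve = trajectory([1.0, 0.0])
print(curve[-1])  # final point of the orbit in the (theta, omega) plane
```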
The phase space method is used widely in physics . It can be applied, for example, to find traveling wave solutions of reaction–diffusion systems . [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Phase_space_method |
Phase stretch transform ( PST ) is a computational approach to signal and image processing. One of its utilities is for feature detection and classification. [ 1 ] [ 2 ] PST is related to time stretch dispersive Fourier transform . [ 3 ] It transforms the image by emulating propagation through a diffractive medium with engineered 3D dispersive property (refractive index). The operation relies on symmetry of the dispersion profile and can be understood in terms of dispersive eigenfunctions or stretch modes. [ 4 ] PST performs similar functionality as phase-contrast microscopy , but on digital images. PST can be applied to digital images and temporal (time series) data. It is a physics-based feature engineering algorithm. [ 5 ]
Here the principle is described in the context of feature enhancement in digital images. The image is first filtered with a spatial kernel followed by application of a nonlinear frequency-dependent phase. The output of the transform is the phase in the spatial domain. The main step is the 2-D phase function, which is typically applied in the frequency domain. The amount of phase applied to the image is frequency dependent, with a larger amount of phase applied to higher-frequency features of the image. Since sharp transitions, such as edges and corners, contain higher frequencies, PST emphasizes the edge information. Features can be further enhanced by applying thresholding and morphological operations . PST is a pure phase operation whereas conventional edge detection algorithms operate on amplitude.
The photonic time stretch technique can be understood by considering the propagation of an optical pulse through a dispersive fiber. Disregarding loss and non-linearity in the fiber, the non-linear Schrödinger equation governing optical pulse propagation in fiber reduces, upon integration, [ 6 ] to:
where β 2 {\displaystyle \beta _{2}} is the group-velocity dispersion (GVD) parameter, z is the propagation distance, and E o ( z , t ) {\displaystyle E_{o}(z,t)} is the reshaped output pulse at distance z and time t . The response of this dispersive element in the time-stretch system can be approximated as a phase propagator, as presented in [ 4 ] H ( ω ) = e i φ ( ω ) = e i ∑ m = 0 ∞ φ m ( ω ) = ∏ m = 0 ∞ H m ( ω ) {\displaystyle H(\omega )=e^{i\varphi (\omega )}=e^{i\sum _{m=0}^{\infty }\varphi _{m}(\omega )}=\prod _{m=0}^{\infty }H_{m}(\omega )} (2)
Therefore, Eq. 1 can be written as follows for a pulse that propagates through the time-stretch system and is reshaped into a temporal signal with a complex envelope given by [ 4 ]
The time stretch operation is formulated as generalized phase and amplitude operations,
where e i φ ( ω ) {\displaystyle e^{i\varphi (\omega )}} is the phase filter and L ~ ( ω ) {\displaystyle {\tilde {L}}(\omega )} is the amplitude filter. Next the operator is converted to discrete domain,
where u {\displaystyle u} is the discrete frequency, K ~ ( u ) {\displaystyle {\tilde {K}}(u)} is the phase filter, L ~ ( u ) {\displaystyle {\tilde {L}}(u)} is the amplitude filter and FFT is fast Fourier transform.
The stretch operator S { } {\displaystyle \mathbb {S} \{\}} for a digital image is then
In the above equations, E i [ n , m ] {\displaystyle E_{i}[n,m]} is the input image, n {\displaystyle n} and m {\displaystyle m} are the spatial variables, F F T 2 {\displaystyle FFT^{2}} is the two-dimensional fast Fourier transform, and u {\displaystyle u} and v {\displaystyle v} are spatial frequency variables. The function K ~ ( u , v ) {\displaystyle {\tilde {K}}(u,v)} is the warped phase kernel and the function L ~ ( u , v ) {\displaystyle {\tilde {L}}(u,v)} is a localization kernel implemented in the frequency domain. The PST operator is defined as the phase of the warped stretch transform output as follows
where ∡ { } {\displaystyle \measuredangle \{\}} is the angle operator.
The warped phase kernel K ~ ( u , v ) {\displaystyle {\tilde {K}}(u,v)} can be described by a nonlinear frequency dependent phase
While arbitrary phase kernels can be considered for PST operation, here we study the phase kernels for which the kernel phase derivative is a linear or sublinear function with respect to frequency variables. A simple example for such phase derivative profiles is the inverse tangent function. Consider the phase profile in the polar coordinate system
From d φ ( r ) d r = tan − 1 ( r ) {\displaystyle {\frac {d\varphi (r)}{dr}}=\tan ^{-1}(r)} we have φ ( r ) = r tan − 1 ( r ) − 1 2 log ( r 2 + 1 ) {\displaystyle \varphi (r)=r\tan ^{-1}(r)-{\frac {1}{2}}\log(r^{2}+1)}
Therefore, the PST kernel is implemented as
where S {\displaystyle S} and W {\displaystyle W} are real-valued numbers related to the strength and warp of the phase profile.
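A minimal Python sketch of the operations described above is given below. The normalization of the phase kernel by its maximum value, the Gaussian form of the localization kernel, and all parameter values are assumptions made for illustration; published PST implementations (for example in PhyCV) differ in these details.

```python
# Hedged sketch of the phase stretch transform (PST): filter the image with a
# localization kernel L(u,v), apply the warped phase kernel K(u,v) in the
# frequency domain, and return the phase of the result in the spatial domain.
import numpy as np

def pst(image, S=0.5, W=12.0, sigma_lpf=0.2):
    rows, cols = image.shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    U, V = np.meshgrid(u, v)
    r = np.sqrt(U**2 + V**2)                      # polar frequency variable

    # Warped phase profile phi(r) = r*arctan(r) - 0.5*log(1 + r^2), evaluated
    # at W*r and normalized so that the maximum applied phase equals S
    # (assumed normalization for this sketch).
    wr = W * r
    phi = wr * np.arctan(wr) - 0.5 * np.log1p(wr**2)
    phi_max = phi.max() if phi.max() > 0 else 1.0
    K = np.exp(-1j * S * phi / phi_max)           # warped phase kernel K(u,v)

    L = np.exp(-(r / sigma_lpf)**2)               # illustrative Gaussian localization kernel

    spectrum = np.fft.fft2(image)
    out = np.fft.ifft2(K * L * spectrum)
    return np.angle(out)                          # PST output: phase in the spatial domain

edges = pst(np.random.rand(128, 128))             # stand-in input image
```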
PST has been used for edge detection in biological and biomedical images as well as synthetic-aperture radar (SAR) image processing. [ 7 ] [ 8 ] [ 9 ] PST has also been applied to improve the point spread function for single molecule imaging in order to achieve super-resolution. [ 10 ] The transform exhibits intrinsically superior properties compared to conventional edge detectors for feature detection in low-contrast, visually degraded images. [ 11 ]
The PST function can also be performed on 1-D temporal waveforms in the analog domain to reveal transitions and anomalies in real time. [ 4 ]
On February 9, 2016, a UCLA Engineering research group made public the computer code for the PST algorithm, which helps computers process images at high speeds and "see" them in ways that human eyes cannot. The researchers say the code could eventually be used in face , fingerprint , and iris recognition systems for high-tech security, as well as in self-driving cars' navigation systems or for inspecting industrial products. The Matlab implementation for PST can also be downloaded from Matlab Files Exchange. [ 12 ] However, it is provided for research purposes only, and a license must be obtained for any commercial applications. The software is protected under a US patent. The code was later significantly refactored and improved to support GPU acceleration. In May 2022, it became one of the algorithms in PhyCV , the first physics-inspired computer vision library. | https://en.wikipedia.org/wiki/Phase_stretch_transform
A phase telescope or Bertrand lens is an optical device used in aligning the various optical components of a light microscope . In particular, it allows observation of the back focal plane of the objective lens and its conjugate focal planes. The phase telescope/Bertrand lens is inserted into the microscope in place of an eyepiece to move the intermediate image plane to a point where it can be observed.
Phase telescopes are primarily used for aligning the optical components required for Köhler illumination and phase contrast microscopy. For Köhler illumination the light source and condenser diaphragm should appear in focus at the back focal plane of the objective lens. For phase contrast microscopy the phase ring (at the back focal plane of the objective) and the annulus (at the back focal plane of the condenser lens) should appear in focus and in alignment.
Bertrand lenses find use in creating interference figures and assisting in aligning a microscope to generate interference figures. The name Bertrand lens commemorates French mineralogist Emile Bertrand (1844-1909), for whom the mineral Bertrandite is also named. [ 1 ] | https://en.wikipedia.org/wiki/Phase_telescope
Phase transformation crystallography describes the orientation relationship and interface orientation after a phase transformation (such as martensitic transformation or precipitation). | https://en.wikipedia.org/wiki/Phase_transformation_crystallography
In physics , chemistry , and other related fields like biology, a phase transition (or phase change ) is the physical process of transition between one state of a medium and another. Commonly the term is used to refer to changes among the basic states of matter : solid , liquid , and gas , and in rare cases, plasma . A phase of a thermodynamic system and the states of matter have uniform physical properties . During a phase transition of a given medium, certain properties of the medium change as a result of the change of external conditions, such as temperature or pressure . This can be a discontinuous change; for example, a liquid may become gas upon heating to its boiling point , resulting in an abrupt change in volume. The identification of the external conditions at which a transformation occurs defines the phase transition point.
Phase transitions commonly refer to a substance transforming from one of the four states of matter to another. At the phase transition point for a substance, for instance the boiling point , the two phases involved (liquid and vapor ) have identical free energies and are therefore equally likely to exist. Below the boiling point, the liquid is the more stable state of the two, whereas above the boiling point the gaseous form is the more stable.
Common transitions between the solid, liquid, and gaseous phases of a single component, due to the effects of temperature and/or pressure are identified in the following table:
For a single component, the most stable phase at different temperatures and pressures can be shown on a phase diagram . Such a diagram usually depicts states in equilibrium. A phase transition usually occurs when the pressure or temperature changes and the system crosses from one region to another, like water turning from liquid to solid as soon as the temperature drops below the freezing point . As an exception to the usual case, it is sometimes possible to change the state of a system diabatically (as opposed to adiabatically ) in such a way that it can be brought past a phase transition point without undergoing a phase transition. The resulting state is metastable , i.e., less stable than the phase to which the transition would have occurred, but not unstable either. This occurs in superheating and supercooling , for example. Metastable states do not appear on usual phase diagrams.
Phase transitions can also occur when a solid changes to a different structure without changing its chemical makeup. In elements, this is known as allotropy , whereas in compounds it is known as polymorphism . The change from one crystal structure to another, from a crystalline solid to an amorphous solid , or from one amorphous structure to another ( polyamorphs ) are all examples of solid to solid phase transitions.
The martensitic transformation occurs as one of the many phase transformations in carbon steel and stands as a model for displacive phase transformations . Order-disorder transitions occur in materials such as alpha- titanium aluminides . As with states of matter, there is also a metastable to equilibrium phase transformation for structural phase transitions. A metastable polymorph which forms rapidly due to lower surface energy will transform to an equilibrium phase given sufficient thermal input to overcome an energetic barrier.
Phase transitions can also describe the change between different kinds of magnetic ordering . The most well-known is the transition between the ferromagnetic and paramagnetic phases of magnetic materials, which occurs at what is called the Curie point . Another example is the transition between differently ordered, commensurate or incommensurate, magnetic structures, such as in cerium antimonide . A simplified but highly useful model of magnetic phase transitions is provided by the Ising model .
Phase transitions involving solutions and mixtures are more complicated than transitions involving a single compound. While chemically pure compounds exhibit a single temperature melting point between solid and liquid phases, mixtures can either have a single melting point, known as congruent melting , or they have different liquidus and solidus temperatures resulting in a temperature span where solid and liquid coexist in equilibrium. This is often the case in solid solutions , where the two components are isostructural.
There are also a number of phase transitions involving three phases: a eutectic transformation, in which a two-component single-phase liquid is cooled and transforms into two solid phases; a eutectoid transformation, which is the same process but beginning with a solid instead of a liquid; a peritectic transformation, in which a two-component single-phase solid is heated and transforms into a solid phase and a liquid phase; and a peritectoid reaction, which is a peritectic reaction except involving only solid phases. A monotectic reaction consists of a change from a liquid to a combination of a solid and a second liquid, where the two liquids display a miscibility gap . [ 1 ]
Separation into multiple phases can occur via spinodal decomposition , in which a single phase is cooled and separates into two different compositions.
Non-equilibrium mixtures can occur, such as in supersaturation .
Other phase changes include:
Phase transitions occur when the thermodynamic free energy of a system is non-analytic for some choice of thermodynamic variables (cf. phases ). This condition generally stems from the interactions of a large number of particles in a system, and does not appear in systems that are small. Phase transitions can occur for non-thermodynamic systems, where temperature is not a parameter. Examples include: quantum phase transitions , dynamic phase transitions, and topological (structural) phase transitions. In these types of systems other parameters take the place of temperature. For instance, connection probability replaces temperature for percolating networks.
Paul Ehrenfest classified phase transitions based on the behavior of the thermodynamic free energy as a function of other thermodynamic variables. [ 5 ] Under this scheme, phase transitions were labeled by the lowest derivative of the free energy that is discontinuous at the transition. First-order phase transitions exhibit a discontinuity in the first derivative of the free energy with respect to some thermodynamic variable. [ 6 ] The various solid/liquid/gas transitions are classified as first-order transitions because they involve a discontinuous change in density, which is the (inverse of the) first derivative of the free energy with respect to pressure. Second-order phase transitions are continuous in the first derivative (the order parameter , which is the first derivative of the free energy with respect to the external field, is continuous across the transition) but exhibit discontinuity in a second derivative of the free energy. [ 6 ] These include the ferromagnetic phase transition in materials such as iron, where the magnetization , which is the first derivative of the free energy with respect to the applied magnetic field strength, increases continuously from zero as the temperature is lowered below the Curie temperature . The magnetic susceptibility , the second derivative of the free energy with the field, changes discontinuously. Under the Ehrenfest classification scheme, there could in principle be third, fourth, and higher-order phase transitions. For example, the Gross–Witten–Wadia phase transition in 2-d lattice quantum chromodynamics is a third-order phase transition, and the Tracy–Widom distribution can be interpreted as a third-order transition. [ 7 ] [ 8 ] The Curie points of many ferromagnets are also third-order transitions, as shown by their specific heat having a sudden change in slope. [ 9 ] [ 10 ]
In practice, only the first- and second-order phase transitions are typically observed. The second-order phase transition was for a while controversial, as it seems to require two sheets of the Gibbs free energy to osculate exactly, which is so unlikely as to never occur in practice. Cornelis Gorter replied to the criticism by pointing out that the Gibbs free energy surface might have two sheets on one side, but only one sheet on the other side, creating a forked appearance. [ 11 ] ( [ 9 ] pp. 146-150)
The Ehrenfest classification implicitly allows for continuous phase transformations, where the bonding character of a material changes, but there is no discontinuity in any free energy derivative. An example of this occurs at the supercritical liquid–gas boundaries .
The first example of a phase transition which did not fit into the Ehrenfest classification was the exact solution of the Ising model , discovered in 1944 by Lars Onsager . The exact specific heat differed from the earlier mean-field approximations, which had predicted that it has a simple discontinuity at critical temperature. Instead, the exact specific heat had a logarithmic divergence at the critical temperature. [ 12 ] In the following decades, the Ehrenfest classification was replaced by a simplified classification scheme that is able to incorporate such transitions.
In the modern classification scheme, phase transitions are divided into two broad categories, named similarly to the Ehrenfest classes: [ 5 ]
First-order phase transitions are those that involve a latent heat . During such a transition, a system either absorbs or releases a fixed (and typically large) amount of energy per volume. During this process, the temperature of the system will stay constant as heat is added: the system is in a "mixed-phase regime" in which some parts of the system have completed the transition and others have not. [ 13 ] [ 14 ]
Familiar examples are the melting of ice or the boiling of water (the water does not instantly turn into vapor , but forms a turbulent mixture of liquid water and vapor bubbles). Yoseph Imry and Michael Wortis showed that quenched disorder can broaden a first-order transition. That is, the transformation is completed over a finite range of temperatures, but phenomena like supercooling and superheating survive and hysteresis is observed on thermal cycling. [ 15 ] [ 16 ] [ 17 ]
Second-order phase transitions are also called "continuous phase transitions" . They are characterized by a divergent susceptibility, an infinite correlation length , and a power law decay of correlations near criticality . Examples of second-order phase transitions are the ferromagnetic transition, superconducting transition (for a Type-I superconductor the phase transition is second-order at zero external field and for a Type-II superconductor the phase transition is second-order for both normal-state–mixed-state and mixed-state–superconducting-state transitions) and the superfluid transition. In contrast to viscosity, thermal expansion and heat capacity of amorphous materials show a relatively sudden change at the glass transition temperature [ 18 ] which enables accurate detection using differential scanning calorimetry measurements. Lev Landau gave a phenomenological theory of second-order phase transitions.
Apart from isolated, simple phase transitions, there exist transition lines as well as multicritical points , when varying external parameters like the magnetic field or composition.
Several transitions are known as infinite-order phase transitions .
They are continuous but break no symmetries . The most famous example is the Kosterlitz–Thouless transition in the two-dimensional XY model . Many quantum phase transitions , e.g., in two-dimensional electron gases , belong to this class.
The liquid–glass transition is observed in many polymers and other liquids that can be supercooled far below the melting point of the crystalline phase. This is atypical in several respects. It is not a transition between thermodynamic ground states: it is widely believed that the true ground state is always crystalline. Glass is a quenched disorder state, and its entropy, density, and so on, depend on the thermal history. Therefore, the glass transition is primarily a dynamic phenomenon: on cooling a liquid, internal degrees of freedom successively fall out of equilibrium. Some theoretical methods predict an underlying phase transition in the hypothetical limit of infinitely long relaxation times. [ 19 ] [ 20 ] No direct experimental evidence supports the existence of these transitions.
A disorder-broadened first-order transition occurs over a finite range of temperatures where the fraction of the low-temperature equilibrium phase grows from zero to one (100%) as the temperature is lowered. This continuous variation of the coexisting fractions with temperature raised interesting possibilities. On cooling, some liquids vitrify into a glass rather than transform to the equilibrium crystal phase. This happens if the cooling rate is faster than a critical cooling rate, and is attributed to the molecular motions becoming so slow that the molecules cannot rearrange into the crystal positions. [ 21 ] This slowing down happens below a glass-formation temperature T g , which may depend on the applied pressure. [ 18 ] [ 22 ] If the first-order freezing transition occurs over a range of temperatures, and T g falls within this range, then there is an interesting possibility that the transition is arrested when it is partial and incomplete. Extending these ideas to first-order magnetic transitions being arrested at low temperatures, resulted in the observation of incomplete magnetic transitions, with two magnetic phases coexisting, down to the lowest temperature. First reported in the case of a ferromagnetic to anti-ferromagnetic transition, [ 23 ] such persistent phase coexistence has now been reported across a variety of first-order magnetic transitions. These include colossal-magnetoresistance manganite materials, [ 24 ] [ 25 ] magnetocaloric materials, [ 26 ] magnetic shape memory materials, [ 27 ] and other materials. [ 28 ] The interesting feature of these observations of T g falling within the temperature range over which the transition occurs is that the first-order magnetic transition is influenced by magnetic field, just like the structural transition is influenced by pressure. The relative ease with which magnetic fields can be controlled, in contrast to pressure, raises the possibility that one can study the interplay between T g and T c in an exhaustive way. Phase coexistence across first-order magnetic transitions will then enable the resolution of outstanding issues in understanding glasses.
In any system containing liquid and gaseous phases, there exists a special combination of pressure and temperature, known as the critical point , at which the transition between liquid and gas becomes a second-order transition. Near the critical point, the fluid is sufficiently hot and compressed that the distinction between the liquid and gaseous phases is almost non-existent. This is associated with the phenomenon of critical opalescence , a milky appearance of the liquid due to density fluctuations at all possible wavelengths (including those of visible light).
Phase transitions often involve a symmetry breaking process. For instance, the cooling of a fluid into a crystalline solid breaks continuous translation symmetry : each point in the fluid has the same properties, but each point in a crystal does not have the same properties (unless the points are chosen from the lattice points of the crystal lattice). Typically, the high-temperature phase contains more symmetries than the low-temperature phase due to spontaneous symmetry breaking , with the exception of certain accidental symmetries (e.g. the formation of heavy virtual particles , which only occurs at low temperatures). [ 29 ]
An order parameter is a measure of the degree of order across the boundaries in a phase transition system; it normally ranges between zero in one phase (usually above the critical point) and nonzero in the other. [ 30 ] At the critical point, the order parameter susceptibility will usually diverge.
An example of an order parameter is the net magnetization in a ferromagnetic system undergoing a phase transition. For liquid/gas transitions, the order parameter is the difference of the densities.
From a theoretical perspective, order parameters arise from symmetry breaking. When this happens, one needs to introduce one or more extra variables to describe the state of the system. For example, in the ferromagnetic phase, one must provide the net magnetization , whose direction was spontaneously chosen when the system cooled below the Curie point . However, note that order parameters can also be defined for non-symmetry-breaking transitions. [ citation needed ]
Some phase transitions, such as superconducting and ferromagnetic, can have order parameters for more than one degree of freedom. In such phases, the order parameter may take the form of a complex number, a vector, or even a tensor, the magnitude of which goes to zero at the phase transition. [ citation needed ]
There also exist dual descriptions of phase transitions in terms of disorder parameters. These indicate the presence of line-like excitations such as vortex - or defect lines.
Symmetry-breaking phase transitions play an important role in cosmology . As the universe expanded and cooled, the vacuum underwent a series of symmetry-breaking phase transitions. For example, the electroweak transition broke the SU(2)×U(1) symmetry of the electroweak field into the U(1) symmetry of the present-day electromagnetic field . This transition is important to explain the asymmetry between the amount of matter and antimatter in the present-day universe, according to electroweak baryogenesis theory.
Progressive phase transitions in an expanding universe are implicated in the development of order in the universe, as is illustrated by the work of Eric Chaisson [ 31 ] and David Layzer . [ 32 ]
See also relational order theories and order and disorder .
Continuous phase transitions are easier to study than first-order transitions due to the absence of latent heat , and they have been discovered to have many interesting properties. The phenomena associated with continuous phase transitions are called critical phenomena, due to their association with critical points.
Continuous phase transitions can be characterized by parameters known as critical exponents . The most important one is perhaps the exponent describing the divergence of the thermal correlation length as the transition is approached. For instance, let us examine the behavior of the heat capacity near such a transition. We vary the temperature T of the system while keeping all the other thermodynamic variables fixed and find that the transition occurs at some critical temperature T c . When T is near T c , the heat capacity C typically has a power law behavior:
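C ∝ | T c − T | − α {\displaystyle C\propto |T_{c}-T|^{-\alpha }} .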
The heat capacity of amorphous materials has such a behavior near the glass transition temperature, where the universal critical exponent α = 0.59. [ 33 ] A similar behavior, but with the exponent ν instead of α , applies for the correlation length.
The exponent ν is positive, unlike α . Its actual value depends on the type of phase transition under consideration.
The critical exponents are not necessarily the same above and below the critical temperature. When a continuous symmetry is explicitly broken down to a discrete symmetry by irrelevant (in the renormalization group sense) anisotropies, then some exponents (such as γ {\displaystyle \gamma } , the exponent of the susceptibility) are not identical. [ 34 ]
For −1 < α < 0, the heat capacity has a "kink" at the transition temperature. This is the behavior of liquid helium at the lambda transition from a normal state to the superfluid state, for which experiments have found α = −0.013 ± 0.003.
At least one experiment was performed in the zero-gravity conditions of an orbiting satellite to minimize pressure differences in the sample. [ 35 ] This experimental value of α agrees with theoretical predictions based on variational perturbation theory . [ 36 ]
For 0 < α < 1, the heat capacity diverges at the transition temperature (though, since α < 1, the enthalpy stays finite). An example of such behavior is the 3D ferromagnetic phase transition. In the three-dimensional Ising model for uniaxial magnets, detailed theoretical studies have yielded the exponent α ≈ +0.110.
Some model systems do not obey a power-law behavior. For example, mean field theory predicts a finite discontinuity of the heat capacity at the transition temperature, and the two-dimensional Ising model has a logarithmic divergence. However, these systems are limiting cases and an exception to the rule. Real phase transitions exhibit power-law behavior.
Several other critical exponents, β , γ , δ , ν , and η , are defined, examining the power law behavior of a measurable physical quantity near the phase transition. Exponents are related by scaling relations, such as
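α + 2 β + γ = 2 {\displaystyle \alpha +2\beta +\gamma =2} (the Rushbrooke relation) and γ = ν ( 2 − η ) {\displaystyle \gamma =\nu (2-\eta )} (the Fisher relation).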
It can be shown that there are only two independent exponents, e.g. ν and η .
It is a remarkable fact that phase transitions arising in different systems often possess the same set of critical exponents. This phenomenon is known as universality . For example, the critical exponents at the liquid–gas critical point have been found to be independent of the chemical composition of the fluid.
More impressively, but understandably from above, they are an exact match for the critical exponents of the ferromagnetic phase transition in uniaxial magnets. Such systems are said to be in the same universality class. Universality is a prediction of the renormalization group theory of phase transitions, which states that the thermodynamic properties of a system near a phase transition depend only on a small number of features, such as dimensionality and symmetry, and are insensitive to the underlying microscopic properties of the system. Again, the divergence of the correlation length is the essential point.
There are also other critical phenomena; e.g., besides static functions there is also critical dynamics . As a consequence, at a phase transition one may observe critical slowing down or speeding up . Connected to the previous phenomenon is also the phenomenon of enhanced fluctuations before the phase transition, as a consequence of lower degree of stability of the initial phase of the system. The large static universality classes of a continuous phase transition split into smaller dynamic universality classes. In addition to the critical exponents, there are also universal relations for certain static or dynamic functions of the magnetic fields and temperature differences from the critical value. [ citation needed ]
A variety of methods are applied for studying the various effects. Selected examples are:
Phase transitions play many important roles in biological systems. Examples include the lipid bilayer formation, the coil-globule transition in the process of protein folding and DNA melting , liquid crystal-like transitions in the process of DNA condensation , cooperative ligand binding to DNA and proteins with the character of phase transition [ 37 ] or the change in the process of genetic expression at the onset of eukaryotes, marked by an algorithmic phase transition. [ 38 ]
In biological membranes , gel to liquid crystalline phase transitions play a critical role in the physiological functioning of biomembranes. In the gel phase, due to the low fluidity of membrane lipid fatty-acyl chains, membrane proteins have restricted movement and are thus restrained in the exercise of their physiological role. Plants depend critically on photosynthesis by chloroplast thylakoid membranes, which are exposed to cold environmental temperatures. Thylakoid membranes retain innate fluidity even at relatively low temperatures because of the high degree of fatty-acyl disorder allowed by their high content of linolenic acid , an 18-carbon chain with three double bonds. [ 39 ] The gel-to-liquid crystalline phase transition temperature of biological membranes can be determined by many techniques, including calorimetry, fluorescence, spin label electron paramagnetic resonance and NMR, by recording measurements of the concerned parameter at a series of sample temperatures. A simple method for its determination from 13-C NMR line intensities has also been proposed. [ 40 ]
It has been proposed that some biological systems might lie near critical points. Examples include neural networks in the salamander retina, [ 41 ] bird flocks, [ 42 ] gene expression networks in Drosophila, [ 43 ] and protein folding. [ 44 ] However, it is not clear whether or not alternative reasons could explain some of the phenomena supporting arguments for criticality. [ 45 ] It has also been suggested that biological organisms share two key properties of phase transitions: the change of macroscopic behavior and the coherence of a system at a critical point. [ 46 ] Phase transitions are a prominent feature of motor behavior in biological systems. [ 47 ] Spontaneous gait transitions, [ 48 ] as well as fatigue-induced motor task disengagements, [ 49 ] show typical critical behavior as an intimation of the sudden qualitative change of the previously stable motor behavioral pattern.
The characteristic feature of second order phase transitions is the appearance of fractals in some scale-free properties. It has long been known that protein globules are shaped by interactions with water. The 20 amino acids that form side groups on protein peptide chains range from hydrophilic to hydrophobic, causing the former to lie near the globular surface, while the latter lie closer to the globular center. Twenty fractals were discovered in solvent associated surface areas of > 5000 protein segments. [ 50 ] The existence of these fractals proves that proteins function near critical points of second-order phase transitions.
In groups of organisms in stress (when approaching critical transitions), correlations tend to increase, while at the same time, fluctuations also increase. This effect is supported by many experiments and observations of groups of people, mice, trees, and grassy plants. [ 51 ]
Phase transitions have been hypothesised to occur in social systems viewed as dynamical systems. A hypothesis proposed in the 1990s and 2000s in the context of peace and armed conflict is that when a conflict that is non-violent shifts to a phase of armed conflict, this is a phase transition from latent to manifest phases within the dynamical system. [ 52 ] : 49 | https://en.wikipedia.org/wiki/Phase_transition
In biology , phase variation is a method for dealing with rapidly varying environments without requiring random mutation. It involves the variation of protein expression, frequently in an on-off fashion, within different parts of a bacterial population. As such the phenotype can switch at frequencies that are much higher (sometimes >1%) than classical mutation rates. Phase variation contributes to virulence by generating heterogeneity. Although it has been most commonly studied in the context of immune evasion , it is observed in many other areas as well and is employed by various types of bacteria, including Salmonella species.
Salmonella use this technique to switch between different types of the protein flagellin . As a result, flagella with different structures are assembled. Once an adaptive response has been mounted against one type of flagellin, or if a previous encounter has left the adaptive immune system ready to deal with one type of flagellin, switching types renders previously high-affinity antibodies, T-cell receptors, and B-cell receptors ineffective against the flagella.
Site-specific recombinations are usually short and occur at a single target site within the recombining sequence. For this to occur there are typically one or more cofactors (to name a few: DNA-binding proteins and the presence or absence of DNA binding sites) and a site-specific recombinase . [ 1 ] There is a change in orientation of the DNA that will affect gene expression or the structure of the gene product. [ 2 ] This is done by changing the spatial arrangement of the promoter or the regulatory elements. [ 1 ]
Through the utilization of specific recombinases, a particular DNA sequence is inverted, resulting in an ON to OFF switch and vice versa of the gene located within or next to this switch. Many bacterial species can utilize inversion to change the expression of certain genes for the benefit of the bacterium during infection. [ 1 ] The inversion event can be simple, involving the toggling of expression of one gene, as in E. coli pilin expression, or more complicated, involving multiple genes, as in the expression of multiple types of flagellin by Salmonella enterica serovar Typhimurium . [ 3 ] Fimbrial adhesion by the type I fimbriae in E. coli undergoes site-specific inversion to regulate the expression of fimA , the major subunit of the pili, depending on the stage of infection. The invertible element has a promoter within it that, depending on its orientation, turns transcription of fimA on or off. The inversion is mediated by two recombinases, FimB and FimE, and regulatory proteins H-NS, Integration Host Factor (IHF) and Leucine responsive protein (LRP). The FimE recombinase can only invert the element and turn expression from on to off, while FimB can mediate the inversion in both directions. [ 4 ]
If excision is precise and the original sequence of DNA is restored, reversible phase variation can be mediated by transposition . Phase variation mediated by transposition targets specific DNA sequences. [ 5 ] P. atlantica contains an eps locus that encodes extracellular polysaccharide and the ON or OFF expression of this locus is controlled by the presence or absence of IS492. Two recombinases encoded by MooV and Piv mediate the precise excision and insertion, respectively, of the insertion element IS492 in the eps locus. When IS492 is excised it becomes a circular extrachromosomal element that results in the restored expression of eps . [ 5 ] [ 6 ]
Another, more complex example of site-specific DNA rearrangement is used in the flagella of Salmonella Typhimurium. In the usual phase, a promoter sequence promotes the expression of the H2 flagella gene along with a repressor of H1 flagella gene. Once this promoter sequence is inverted by the hin gene the repressor is turned off as is H2 allowing H1 to be expressed.
Gene conversion is another example of a type of phase variation. Type IV pili of Neisseria gonorrhoeae are controlled in this way. There are several copies of the gene coding for these pili (the Pil gene) but only one is expressed at any given time. This is referred to as the PilE gene. The silent versions of this gene, PilS, can use homologous recombination to combine with parts of the PilE gene and thus create a different phenotype. This allows for up to 10,000,000 different phenotypes of the pili [ citation needed ] .
Unlike other mechanisms of phase variation, epigenetic modifications do not alter DNA sequence and therefore it is the phenotype that is altered not the genotype. The integrity of the genome is intact and the change incurred by methylation alters the binding of transcription factors. The outcome is the regulation of transcription resulting in switches in gene expression. [ 2 ] [ 5 ] An outer membrane protein Antigen 43 (Ag43) in E. coli is controlled by phase variation mediated by two proteins, DNA-methylating enzyme deoxyadenosine methyltransferase (Dam) and the oxidative stress regulator OxyR. Ag43, located on the cell surface, is encoded by the Agn43 gene (previously designated as flu ) and is important for biofilms and infection. The expression of Agn43 is dependent on the binding of the regulator protein OxyR. When OxyR is bound to the regulatory region of Agn43 , which overlaps with the promoter, it inhibits transcription. The ON phase of transcription is dependent upon Dam methylating the GATC sequences in the beginning of the Agn43 gene (which happens to overlap with the OxyR binding site). When the Dam methylates the GATC sites it inhibits the OxyR from binding, allowing transcription of Ag43. [ 7 ]
In this form of phase variation, the promoter region of the genome can move from one copy of a gene to another through homologous recombination . This occurs with Campylobacter fetus surface proteins. The several different surface antigen proteins are all silent apart from one, and all share a conserved region at the 5' end. The promoter sequence can then move between these conserved regions and allow expression of a different gene [ citation needed ] .
Slipped strand mispairing (SSM) is a process that produces mispairing of short repeat sequences between the mother and daughter strand during DNA synthesis . [ 1 ] This RecA -independent mechanism can transpire during either DNA replication or DNA repair and can be on the leading or lagging strand. SSM can result in an increase or decrease in the number of short repeat sequences. The short repeat sequences are 1 to 7 nucleotides and can be homogeneous or heterogeneous repetitive DNA sequences. [ 3 ]
Altered gene expression is a result of SSM; depending on where the increase or decrease in the short repeat sequences occurs in relation to the promoter, regulation occurs at the level of either transcription or translation. [ 8 ] The outcome is an ON or OFF phase of a gene or genes.
Transcriptional regulation (bottom portion of figure) occurs in several ways. One possible way is if the repeats are located in the promoter region at the RNA polymerase binding site, -10 and -35 upstream of the gene(s). The opportunistic pathogen H. influenzae has two divergently oriented promoters and fimbriae genes hifA and hifB . The overlapping promoter regions have repeats of the dinucleotide TA in the -10 and -35 sequences. Through SSM, the TA repeat region can undergo addition or subtraction of TA dinucleotides, which results in the reversible ON phase or OFF phase of transcription of hifA and hifB . [ 3 ] [ 9 ] The second way that SSM induces transcriptional regulation is by changing the short repeat sequences located outside the promoter. If there is a change in the short repeat sequence, it can affect the binding of a regulatory protein, such as an activator or repressor. It can also lead to differences in the post-transcriptional stability of mRNA. [ 5 ]
Translation of a protein can be regulated by SSM if the short repeat sequences are in the coding region of the gene (top portion of the figure). Changing the number of repeats in the open reading frame can affect the codon sequence by adding a premature stop codon or by changing the sequence of the protein. This often results in a truncated (in the case of a premature stop codon) and/or nonfunctional protein. | https://en.wikipedia.org/wiki/Phase_variation |
Phased adoption or phased implementation is a strategy of implementing an innovation (i.e., information systems , new technologies, processes, etc.) in an organization in a phased way, so that different parts of the organization are implemented in different subsequent time slots. Phased implementation is a method of system changeover from an existing system to a new one that takes place in stages. [ 1 ] [ 2 ] [ 3 ] Other concepts that are used are: phased conversion, phased approach, phased strategy, phased introduction and staged conversion. Other methods of system changeover include direct changeover and parallel running .
Information technology has revolutionized the way of working in organizations. [ 4 ] With the introduction of high-tech enterprise resource planning systems (ERP), content management systems (CMS), and customer and supplier relationship management systems (CRM and SRM) came the task of implementing these systems in the organizations that are about to use them. The following entry will discuss just a small fraction of what has to be done or can be done when implementing such a system in the organization.
The phased approach takes the conversion one step at a time. The implementation requires a thoroughly thought-out scenario for starting to use the new system, and at every milestone the employees and other users have to be instructed. The old system is taken over by the new system in predefined steps until it is totally abandoned. The actual installation of the new system can be done in several ways, per module or per product, and can be carried out in several instances. This may be done by introducing some of the functionalities of the system before the rest or by introducing some functionalities to certain users before introducing them to all the users. This gives the users the time to cope with the changes caused by the system.
It is common to organize an implementation team that moves from department to department. By moving, the team learns and so gains expertise and knowledge, so that each subsequent implementation will be a lot faster than the first one.
The visualizing technique used in this entry is a technique developed by the O&I group of the University of Utrecht . [ 5 ]
As can be seen in figure 1, phased adoption has a loop in it. Every department that is to be connected to the system goes through the same process. First, based on the previous training sessions, security levels are set (see ITIL ). In this way every unique user has their own profile, which describes which parts of the system are visible and/or usable to that specific user. Then the documents and policies are documented: all processes and procedures are described in process descriptions, which can be on paper or on the intranet. Then the actual conversion takes place. As described in the above text, certain departments and/or parts of an organization may be implemented in different time slots. In figure 1 that is depicted by implementing an additional module or even a total product: HRM needs different modules of an ERP system than Finance (module), or Finance may need an additional accounting software package (product). Tuning of the system occurs to solve existing problems. After a certain department has been converted the loop starts over, and another department or user group may be converted. If all of the departments or organization parts are converted and the system is totally implemented, the system is officially delivered to the organization and the implementation team may be dissolved.
Phased adoption makes it possible to introduce modules that are ready while programming the other future modules. This does make the implementation scenario more critical, since certain modules depend on one another. Project Management techniques can be adopted to tackle these problems. See the techniques section below.
However, the actual adoption of the system by the users can be more problematic. The system may work just fine, but if it is not used it is worthless. Users base their attitude towards the system on their first experience. [ 4 ] As this puts extra weight on the first interaction, the implementers should be concerned with making the first interaction an especially pleasant one.
In the technique used in this entry, each CONCEPT requires a proper definition, preferably copied from a standard glossary whose source is given, if applicable. All CONCEPT names in the text are written in capital characters. The concept definition list is presented in Table 1.
Table 1: Concept diagram (the concept definitions are cited from The American Heritage Dictionary of the English Language, Fourth Edition, 2000)
The phased adoption method has certain pros, cons and risks [ 8 ] [ 4 ]
Pros:
Cons:
Risks:
The following sections are supplemental to the entry about adoption (software implementation) and are specific to phased adoption:
The configuration and specification of the hardware in place used by the legacy system and to run the new system is delivered in the hardware specifications. The hardware configuration is tested to assure proper functioning. This is reported in the hardware configuration report.
The configuration and specification of the software in place, i.e., the legacy system and the future new system is made clear to assure proper functioning once the system is installed. [ 9 ] The act of specifying the system already installed is key to the implementation. Which parts or even total systems will be taken over by the new system? All this is reported in the software installation and software test reports.
The actual installation of the software of the new system is also done here in a confined area to support the training sessions described in the following section.
The system training will teach users the keystrokes and transactions required to run the system. [ 6 ] The pilot exercises the system and tests the users' understanding of it. The project team creates a skeletal business case test environment which takes the business processes from the beginning, when a customer order is received, to the end, when the customer order is shipped.
Training as such is not enough for adopting an information system. The users have learning needs. [ 4 ] One known learning need is emotional guidance: users need to make emotional steps in order to make cognitive steps. If they fear the system because it is difficult to handle, they may not be able to understand the cognitive steps needed to successfully carry out their tasks.
In the implementation field several techniques are used. A well-known method, specifically oriented toward the implementation field, is the Regatta method by Sogeti . Another technique is the SAP Implementation method, which is adapted to implementing SAP systems. Systems are installed in several different ways, and different organizations may have their own methods. When implementing a system, it is considered a project and thus must be handled as such. Well-known theories and methods are used in the field, such as the PRINCE2 method with all of its underlying techniques, such as PERT diagrams, Gantt charts and critical path methods .
The EMR implementation at the University Physicians Group (UPG) in Staten Island and Brooklyn , New York .
The University Physicians Group in New York went with a complete technical installation of an EMR (Electronic Medical Record) software package. The UPG found that some vendors of the EMR package recommended a rollout that would be done all at once, also called the Big Bang. But they found that the Big Bang would have overwhelmed the physicians and staff due to the following factors:
Thus they chose a phased approach: “ Hence, a phased adoption to us, offered the greatest chance of success, staff adoption, and opportunity for the expected return-on-investment once the system was completely adopted. ” (J. Hyman, M.D.)
There was also a group who were somewhat reluctant about any new system. By introducing the system to certain early adopters, the late majority would be able to get to know the system [ 10 ] as it was introduced in phases through the organisation. With each loop (see figure 5, A) another part of the UPG was introduced to the system.
As an example, think of a supermarket. In this supermarket the checkout system is being upgraded to a newer version. Imagine that only the checkout counters of the vegetable section are changed over to the new system, while the other counters carry on with the old system. If the new system does not work properly, it would not matter because only a small portion of the supermarket has been computerised. If it does work, staff can take turns working on the vegetable counters to get some practice using the new system.
After the vegetables section is working perfectly, the meat section might be next, then the confectionery section, and so on. Eventually all the various counters in the supermarket system would have been phased in, and everything would be running. This takes a long time as there are two systems working until the changeover is completed. However, the supermarket is never in danger of having to close and the staff are all able to get plenty of training in operating the new system, so it is a much friendlier method. | https://en.wikipedia.org/wiki/Phased_adoption |
In antenna theory, a phased array usually means an electronically scanned array , a computer-controlled array of antennas which creates a beam of radio waves that can be electronically steered to point in different directions without moving the antennas. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ]
In a phased array, the power from the transmitter is fed to the radiating elements through devices called phase shifters , controlled by a computer system, which can alter the phase or signal delay electronically, thus steering the beam of radio waves to a different direction. Since the size of an antenna array must extend many wavelengths to achieve the high gain needed for narrow beamwidth, phased arrays are mainly practical at the high frequency end of the radio spectrum, in the UHF and microwave bands, in which the operating wavelengths are conveniently small.
Phased arrays were originally invented for use in military radar systems, to detect fast moving planes and missiles, but are now widely used and have spread to civilian applications such as 5G MIMO for cell phones. The phased array principle is also used in acoustics in such applications as phased array ultrasonics , and in optics.
The term "phased array" is also used to a lesser extent for unsteered array antennas in which the radiation pattern of the antenna array is fixed, [ 5 ] [ 7 ] For example, AM broadcast radio antennas consisting of multiple mast radiators are also called "phased arrays".
A phased array is an electronically scanned array , a computer-controlled array of antennas which creates a beam of radio waves that can be electronically steered to point in different directions without moving the antennas. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] The general theory of an electromagnetic phased array also finds applications in ultrasonic and medical imaging ( phased array ultrasonics ) and in optics ( optical phased array ).
In a simple array antenna , the radio frequency current from the transmitter is fed to multiple individual antenna elements with the proper phase relationship so that the radio waves from the separate elements combine ( superpose ) to form beams, to increase power radiated in desired directions and suppress radiation in undesired directions.
In a phased array, the power from the transmitter is fed to the radiating elements through devices called phase shifters , controlled by a computer system. The computer can alter the phase or signal delay of each antenna element electronically, which steers the beam of radio waves in a different direction.
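As an illustrative sketch (the uniform linear array geometry, the broadside angle convention and all parameter values below are assumptions made for the example, not details from this text), the progressive phase shift applied to each element to steer the main beam can be computed as follows in Python:

```python
# Hedged sketch: phase shifts for electronically steering a uniform linear array.
# Element n is shifted in phase so that all element contributions add coherently
# in the desired direction theta (measured from broadside).
import numpy as np

def steering_phases(n_elements, element_spacing_m, freq_hz, theta_deg):
    c = 3.0e8                                   # speed of light, m/s
    lam = c / freq_hz                           # wavelength
    theta = np.radians(theta_deg)
    n = np.arange(n_elements)
    # Progressive phase shift: phi_n = -2*pi*n*d*sin(theta)/lambda
    return -2.0 * np.pi * n * element_spacing_m * np.sin(theta) / lam

# Example: 8 elements, half-wavelength spacing at 10 GHz, beam steered to 30 degrees.
phases = steering_phases(8, 0.015, 10e9, 30.0)
print(np.degrees(phases) % 360.0)
```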
Phased arrays were originally conceived for use in military radar systems, to steer a beam of radio waves quickly across the sky to detect planes and missiles. These systems are now widely used and have spread to civilian applications such as 5G MIMO for cell phones. The phased array principle is also used in acoustics , and phased arrays of acoustic transducers are used in medical ultrasound imaging scanners ( phased array ultrasonics ), oil and gas prospecting ( reflection seismology ), and military sonar systems.
The term "phased array" is also used to a lesser extent for non steerable array antennas in which the phase of the feed power and thus the radiation pattern of the antenna array is fixed. [ 5 ] [ 8 ] For example, AM broadcast radio antennas consisting of multiple mast radiators fed so as to create a specific radiation pattern are also called "phased arrays".
Phased arrays take multiple forms. However, the four most common are the passive electronically scanned array (PESA), active electronically scanned array (AESA), hybrid beam forming phased array, and digital beam forming (DBF) array. [ 9 ]
A passive phased array or passive electronically scanned array (PESA) is a phased array in which the antenna elements are connected to a single transmitter and/or receiver , as shown in the first animation at top. PESAs are the most common type of phased array. Generally speaking, a PESA uses one receiver/exciter for the entire array.
An active phased array or active electronically scanned array (AESA) is a phased array in which each antenna element has an analog transmitter/receiver (T/R) module [ 10 ] which creates the phase shifting required to electronically steer the antenna beam. Active arrays are a more advanced, second-generation phased-array technology, used in military applications; unlike PESAs they can radiate several beams of radio waves at multiple frequencies in different directions simultaneously. However, the number of simultaneous beams is limited, for practical reasons of electronic packaging of the beam formers, to approximately three simultaneous beams for an AESA [ citation needed ] . Each beam former has a receiver/exciter connected to it.
A digital beam forming (DBF) phased array has a digital receiver/exciter at each element in the array. The signal at each element is digitized by the receiver/exciter. This means that antenna beams can be formed digitally in a field programmable gate array (FPGA) or the array computer. This approach allows for multiple simultaneous antenna beams to be formed.
A hybrid beam forming phased array can be thought of as a combination of an AESA and a digital beam forming phased array. It uses subarrays that are active phased arrays (for instance, a subarray may be 64, 128 or 256 elements and the number of elements depends upon system requirements). The subarrays are combined to form the full array. Each subarray has its own digital receiver/exciter. This approach allows clusters of simultaneous beams to be created.
A conformal antenna [ 11 ] is a phased array in which the individual antennas, instead of being arranged in a flat plane, are mounted on a curved surface. The phase shifters compensate for the different path lengths of the waves due to the antenna elements' varying position on the surface, allowing the array to radiate a plane wave. Conformal antennas are used in aircraft and missiles, to integrate the antenna into the curving surface of the aircraft to reduce aerodynamic drag.
There are two main types of beamformers. These are time domain beamformers and frequency domain beamformers. From a theoretical point of view, both are in principle the same operation, with just a Fourier transform allowing conversion from one to the other type.
A graduated attenuation window is sometimes applied across the face of the array to improve side-lobe suppression performance, in addition to the phase shift.
A time domain beamformer works by introducing time delays. The basic operation is called "delay and sum": the incoming signal from each array element is delayed by a certain amount of time, and the delayed signals are then added together. A Butler matrix allows several beams to be formed simultaneously, or one beam to be scanned through an arc. The most common kind of time domain beamformer is the serpentine waveguide. Active phased array designs use individual delay lines that are switched on and off. Yttrium iron garnet phase shifters vary the phase delay using the strength of a magnetic field.
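As an illustration of the delay-and-sum idea, the following minimal sketch simulates a uniform linear array in the narrowband case, where each element's time delay is applied as a phase shift. The element count, spacing, operating frequency and steering angle are illustrative assumptions, not values from any particular system described above.

```python
import numpy as np

# Minimal delay-and-sum beamformer for a uniform linear array (narrowband case,
# where the per-element time delay is applied as a phase shift).
# All parameter values are illustrative assumptions.
c = 3e8                        # propagation speed (m/s)
f = 3e9                        # operating frequency (Hz)
lam = c / f                    # wavelength
N = 16                         # number of elements
d = lam / 2                    # element spacing
x = np.arange(N) * d           # element positions along the array

def received_snapshot(theta_deg):
    """Complex samples across the array from a far-field source at theta."""
    theta = np.radians(theta_deg)
    return np.exp(1j * 2 * np.pi * x * np.sin(theta) / lam)

def delay_and_sum(snapshot, steer_deg):
    """Delay (as a phase shift) each element's sample toward steer_deg, then sum."""
    theta0 = np.radians(steer_deg)
    delays = x * np.sin(theta0) / c
    weights = np.exp(-1j * 2 * np.pi * f * delays)
    return np.abs(np.sum(weights * snapshot)) / snapshot.size

# A beam steered to 20 degrees responds strongly to a source at 20 degrees
# and weakly to one well outside the main lobe.
print(round(delay_and_sum(received_snapshot(20.0), 20.0), 3))    # ~1.0 (main lobe)
print(round(delay_and_sum(received_snapshot(-40.0), 20.0), 3))   # much smaller
```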
There are two different types of frequency domain beamformers.
The first type separates the different frequency components that are present in the received signal into multiple frequency bins (using either a Discrete Fourier transform (DFT) or a filterbank ). When different delay and sum beamformers are applied to each frequency bin, the result is that the main lobe simultaneously points in multiple different directions at each of the different frequencies. This can be an advantage for communication links, and is used with the SPS-48 radar.
The other type of frequency domain beamformer makes use of Spatial Frequency. Discrete samples are taken from each of the individual array elements. The samples are processed using a DFT. The DFT introduces multiple different discrete phase shifts during processing. The outputs of the DFT are individual channels that correspond with evenly spaced beams formed simultaneously. A 1-dimensional DFT produces a fan of different beams. A 2-dimensional DFT produces beams with a pineapple configuration.
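A minimal sketch of this spatial-frequency idea, assuming a uniform linear array with half-wavelength element spacing: one snapshot of complex samples, one per element, is passed through an FFT, and each output bin corresponds to one of the evenly spaced, simultaneously formed beams. The source angle and array size are illustrative assumptions.

```python
import numpy as np

# DFT ("spatial frequency") beamforming sketch: an FFT across the element samples
# produces one output per beam, each pointing in a different fixed direction.
N = 16                                  # elements (assumed)
d_over_lambda = 0.5                     # element spacing in wavelengths (assumed)
source_angle = np.radians(14.5)         # direction of an incoming plane wave (assumed)

n = np.arange(N)
snapshot = np.exp(1j * 2 * np.pi * d_over_lambda * n * np.sin(source_angle))

beams = np.fft.fft(snapshot)            # N simultaneous beams from one DFT
k = np.arange(N)
# Beam k points where (d / lambda) * sin(theta_k) = k / N, wrapped into -0.5..0.5.
u = np.where(k <= N // 2, k / N, k / N - 1.0) / d_over_lambda   # u = sin(theta_k)
beam_angles = np.degrees(np.arcsin(np.clip(u, -1, 1)))

strongest = np.argmax(np.abs(beams))
print("strongest beam index:", strongest,
      "pointing near", round(beam_angles[strongest], 1), "deg")
```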
These techniques are used to create two kinds of phased array.
There are two further sub-categories that modify the kind of dynamic array or fixed array.
Each array element incorporates an adjustable phase shifter. These are collectively used to move the beam with respect to the array face.
Dynamic phased arrays require no physical movement to aim the beam. The beam is moved electronically. This can produce antenna motion fast enough to use a small pencil beam to simultaneously track multiple targets while searching for new targets using just one radar set, a capability known as track while search .
As an example, an antenna with a 2-degree beam with a pulse rate of 1 kHz will require approximately 8 seconds to cover an entire hemisphere consisting of 8,000 pointing positions. This configuration provides 12 opportunities to detect a 1,000 m/s (2,200 mph; 3,600 km/h) vehicle over a range of 100 km (62 mi), which is suitable for military applications. [ citation needed ]
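The figures in this example can be checked with a few lines of arithmetic, assuming one pointing position is dwelt on per pulse; the numbers below simply restate those in the paragraph.

```python
# Quick check of the scan-timing example above (numbers taken from the paragraph).
pointing_positions = 8000
pulse_rate_hz = 1000                       # one pointing position per pulse (assumed)
scan_time_s = pointing_positions / pulse_rate_hz
print(scan_time_s)                         # 8.0 s per hemisphere scan

target_speed_m_s = 1000
detection_range_m = 100e3
time_in_coverage_s = detection_range_m / target_speed_m_s
print(time_in_coverage_s / scan_time_s)    # ~12 scan opportunities
```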
The position of mechanically steered antennas can be predicted, which can be used to create electronic countermeasures that interfere with radar operation. The flexibility resulting from phased array operation allows beams to be aimed at random locations, which eliminates this vulnerability. This is also desirable for military applications.
Fixed phased array antennas are typically used to create an antenna with a more desirable form factor than the conventional parabolic reflector or cassegrain reflector . Fixed phased arrays incorporate fixed phase shifters. For example, most commercial FM Radio and TV antenna towers use a collinear antenna array , which is a fixed phased array of dipole elements.
In radar applications, this kind of phased array is physically moved during the track and scan process. There are two configurations.
The SPS-48 radar uses multiple transmit frequencies with a serpentine delay line along the left side of the array to produce vertical fan of stacked beams. Each frequency experiences a different phase shift as it propagates down the serpentine delay line, which forms different beams. A filter bank is used to split apart the individual receive beams. The antenna is mechanically rotated.
Semi-active radar homing uses monopulse radar that relies on a fixed phased array to produce multiple adjacent beams that measure angle errors. This form factor is suitable for gimbal mounting in missile seekers.
Active electronically-scanned arrays (AESA) elements incorporate transmit amplification with phase shift in each antenna element (or group of elements). Each element also includes receive pre-amplification. The phase shifter setting is the same for transmit and receive. [ 12 ]
Active phased arrays do not require phase reset after the end of the transmit pulse, which is compatible with Doppler radar and pulse-Doppler radar .
Passive phased arrays typically use large amplifiers that produce all of the microwave transmit signal for the antenna. Phase shifters typically consist of waveguide elements controlled by magnetic field, voltage gradient, or equivalent technology. [ 13 ] [ 14 ]
The phase shift process used with passive phased arrays typically puts the receive beam and transmit beam into diagonally opposite quadrants. The sign of the phase shift must be inverted after the transmit pulse is finished and before the receive period begins to place the receive beam into the same location as the transmit beam. That requires a phase impulse that degrades sub-clutter visibility performance on Doppler radar and Pulse-Doppler radar. As an example, Yttrium iron garnet phase shifters must be changed after transmit pulse quench and before receiver processing starts to align transmit and receive beams. That impulse introduces FM noise that degrades clutter performance.
Passive phased array design is used in the AEGIS Combat System [ 15 ] for direction-of-arrival estimation.
Phased array transmission was originally shown in 1905 by Nobel laureate Karl Ferdinand Braun , who demonstrated enhanced transmission of radio waves in one direction. [ 16 ] [ 17 ] During World War II , Nobel laureate Luis Alvarez used phased array transmission in a rapidly steerable radar system for " ground-controlled approach ", a system to aid in the landing of aircraft. At the same time, GEMA in Germany built the Mammut 1. [ 18 ] The technique was later adapted for radio astronomy, leading to Nobel Prizes in Physics for Antony Hewish and Martin Ryle after several large phased arrays, such as the Interplanetary Scintillation Array , were developed at the University of Cambridge. This design is also used for radar , and is generalized in interferometric radio antennas.
As of 1966, most phased-array radars used ferrite phase shifters or traveling-wave tubes to dynamically adjust the phase.
The AN/SPS-33 , installed on the nuclear-powered ships Long Beach and Enterprise around 1961, was claimed to be the only operational 3-D phased array in the world in 1966.
The AN/SPG-59 was designed to generate multiple tracking beams from the transmitting array and simultaneously program independent receiving arrays.
The first civilian 3D phased array was built in 1960 at the National Aviation Facilities Experimental Center, but was abandoned in 1961. [ 19 ]
In 2004, Caltech researchers demonstrated the first integrated silicon-based phased array receiver at 24 GHz with 8 elements. [ 20 ] This was followed by their demonstration of a CMOS 24 GHz phased array transmitter in 2005 [ 21 ] and a fully integrated 77 GHz phased array transceiver with integrated antennas in 2006 [ 22 ] [ 23 ] by the Caltech team. In 2007, DARPA researchers announced a 16-element phased-array radar antenna which was also integrated with all the necessary circuits on a single silicon chip and operated at 30–50 GHz. [ 24 ]
The relative amplitudes of—and constructive and destructive interference effects among—the signals radiated by the individual antennas determine the effective radiation pattern of the array. A phased array may be used to point a fixed radiation pattern, or to scan rapidly in azimuth or elevation. Simultaneous electrical scanning in both azimuth and elevation was first demonstrated in a phased array antenna at Hughes Aircraft Company , California in 1957. [ 25 ]
The total directivity of a phased array is a result of the gain of the individual array elements and the directivity due to their positioning in an array. This latter component is closely tied (but not equal [ 26 ] ) to the array factor . [ 27 ] [ page needed ] [ 26 ] In a (rectangular) planar phased array of dimensions M × N {\displaystyle M\times N} , with inter-element spacings d x {\displaystyle d_{x}} and d y {\displaystyle d_{y}} respectively, the array factor can be calculated as follows [ 2 ] [ 27 ] [ page needed ] :
A F = ∑ n = 1 N I n 1 [ ∑ m = 1 M I m 1 e j ( m − 1 ) ( k d x sin θ cos ϕ + β x ) ] e j ( n − 1 ) ( k d y sin θ sin ϕ + β y ) {\displaystyle AF=\sum _{n=1}^{N}I_{n1}\left[\sum _{m=1}^{M}I_{m1}\mathrm {e} ^{j\left(m-1\right)\left(kd_{x}\sin \theta \cos \phi +\beta _{x}\right)}\right]\mathrm {e} ^{j\left(n-1\right)\left(kd_{y}\sin \theta \sin \phi +\beta _{y}\right)}}
Here, θ {\displaystyle \theta } and ϕ {\displaystyle \phi } are the directions in which the array factor is evaluated, in the coordinate frame depicted to the right. The factors β x {\displaystyle \beta _{x}} and β y {\displaystyle \beta _{y}} are the progressive phase shifts used to steer the beam electronically. The factors I n 1 {\displaystyle I_{n1}} and I m 1 {\displaystyle I_{m1}} are the excitation coefficients of the individual elements.
Beam steering is specified in the same coordinate frame; the direction of steering is indicated with θ 0 {\displaystyle \theta _{0}} and ϕ 0 {\displaystyle \phi _{0}} , which are used in the calculation of the progressive phases:
β x = − k d x sin θ 0 cos ϕ 0 {\displaystyle \beta _{x}=-kd_{x}\sin \theta _{0}\cos \phi _{0}} and β y = − k d y sin θ 0 sin ϕ 0 {\displaystyle \beta _{y}=-kd_{y}\sin \theta _{0}\sin \phi _{0}}
In all above equations, the value k {\displaystyle k} describes the wavenumber of the frequency used in transmission.
These equations can be solved to predict the nulls, main lobe, and grating lobes of the array. Referring to the exponents in the array factor equation, we can say that major and grating lobes will occur at integer m , n = 0 , 1 , 2 , … {\displaystyle m,n=0,1,2,\dots } solutions to the following equations: [ 2 ] [ 27 ] [ page needed ]
It is common in engineering to provide phased array A F {\displaystyle AF} values in decibels through A F d B = 10 log 10 A F {\displaystyle AF_{dB}=10\log _{10}AF} . Recalling the complex exponential in the array factor equation above, often, what is really meant by array factor is the magnitude of the summed phasor produced at the end of array factor calculation. With this, we can produce the following equation: A F d B = 10 log 10 ‖ ∑ n = 1 N I 1 n [ ∑ m = 1 M I m 1 e j ( m − 1 ) ( k d x sin θ cos ϕ + β x ) ] e j ( n − 1 ) ( k d y sin θ sin ϕ + β y ) ‖ {\displaystyle AF_{dB}=10\log _{10}\left\|\sum _{n=1}^{N}I_{1n}\left[\sum _{m=1}^{M}I_{m1}\mathrm {e} ^{j\left(m-1\right)\left(kd_{x}\sin \theta \cos \phi +\beta _{x}\right)}\right]\mathrm {e} ^{j\left(n-1\right)\left(kd_{y}\sin \theta \sin \phi +\beta _{y}\right)}\right\|} For the ease of visualization, we will analyze array factor given an input azimuth and elevation , which we will map to the array frame θ {\displaystyle \theta } and ϕ {\displaystyle \phi } through the following conversion:
This represents a coordinate frame whose x {\displaystyle \mathbf {x} } axis is aligned with the array z {\displaystyle \mathbf {z} } axis, and whose y {\displaystyle \mathbf {y} } axis is aligned with the array x {\displaystyle \mathbf {x} } axis.
If we consider a 16 × 16 {\displaystyle 16\times 16} phased array, this process provides the following values for A F d B {\displaystyle AF_{dB}} , when steering to bore-sight ( θ 0 = 0 ∘ {\displaystyle \theta _{0}=0^{\circ }} , ϕ 0 = 0 ∘ {\displaystyle \phi _{0}=0^{\circ }} ):
These values have been clipped to a minimum A F {\displaystyle AF} of -50 dB; in reality, null points in the array factor pattern will have values significantly smaller than this.
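A minimal sketch of this calculation, assuming uniform excitation (all excitation coefficients equal to 1) and half-wavelength element spacing, evaluates the array factor of a 16 × 16 array steered to bore-sight and expresses it in decibels as above, with the same -50 dB clipping of nulls.

```python
import numpy as np

# Planar array factor for a 16 x 16 array steered to bore-sight, following the
# equation above. Uniform excitation and half-wavelength spacing are assumptions.
M = N = 16
k = 2 * np.pi                  # wavenumber, with the wavelength normalized to 1
dx = dy = 0.5                  # element spacing in wavelengths
beta_x = beta_y = 0.0          # zero progressive phase steers the beam to bore-sight

def af_db(theta_deg, phi_deg, floor_db=-50.0):
    theta, phi = np.radians(theta_deg), np.radians(phi_deg)
    m, n = np.arange(M), np.arange(N)
    ax = np.sum(np.exp(1j * m * (k * dx * np.sin(theta) * np.cos(phi) + beta_x)))
    ay = np.sum(np.exp(1j * n * (k * dy * np.sin(theta) * np.sin(phi) + beta_y)))
    af = np.abs(ax * ay)
    return max(10 * np.log10(max(af, 1e-12)), floor_db)   # clip nulls as in the text

print(round(af_db(0, 0), 1))              # 24.1 dB at bore-sight (10*log10(256))
theta_null = np.degrees(np.arcsin(0.125)) # first null of the 16-element factor
print(round(af_db(theta_null, 0), 1))     # -50.0: a null, clipped to the floor
```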
Phased arrays were invented for radar tracking of ballistic missiles, and because of their fast tracking abilities phased array radars are widely used in military applications. For example, because of the rapidity with which the beam can be steered , phased array radars allow a warship to use one radar system for surface detection and tracking (finding ships), air detection and tracking (finding aircraft and missiles) and missile uplink capabilities. Before using these systems, each surface-to-air missile in flight required a dedicated fire-control radar , which meant that radar-guided weapons could only engage a small number of simultaneous targets. Phased array systems can be used to control missiles during the mid-course phase of the missile's flight. During the terminal portion of the flight, continuous-wave fire control directors provide the final guidance to the target. Because the antenna pattern is electronically steered , phased array systems can direct radar beams fast enough to maintain a fire control quality track on many targets simultaneously while also controlling several in-flight missiles.
The AN/SPY-1 phased array radar, part of the Aegis Combat System deployed on modern U.S. cruisers and destroyers , "is able to perform search, track and missile guidance functions simultaneously with a capability of over 100 targets." [ 28 ] Likewise, the Thales Herakles phased array multi-function radar used in service with France and Singapore has a track capacity of 200 targets and is able to achieve automatic target detection, confirmation and track initiation in a single scan, while simultaneously providing mid-course guidance updates to the MBDA Aster missiles launched from the ship. [ 29 ] The German Navy and the Royal Dutch Navy have developed the Active Phased Array Radar System (APAR). The MIM-104 Patriot and other ground-based antiaircraft systems use phased array radar for similar benefits.
Phased arrays are used in naval sonar, both active (transmit and receive) and passive (receive only), and in both hull-mounted and towed array sonar .
The MESSENGER spacecraft was a space probe mission to the planet Mercury (2011–2015 [ 30 ] ). This was the first deep-space mission to use a phased-array antenna for communications . The radiating elements were circularly-polarized , slotted waveguides . The antenna, which operated in the X band , used 26 radiating elements and could degrade gracefully . [ 31 ]
The National Severe Storms Laboratory has been using a SPY-1A phased array antenna, provided by the US Navy, for weather research at its Norman, Oklahoma facility since April 23, 2003. It is hoped that research will lead to a better understanding of thunderstorms and tornadoes, eventually leading to increased warning times and enhanced prediction of tornadoes. Current project participants include the National Severe Storms Laboratory and National Weather Service Radar Operations Center, Lockheed Martin , United States Navy , University of Oklahoma School of Meteorology, School of Electrical and Computer Engineering, and Atmospheric Radar Research Center , Oklahoma State Regents for Higher Education, the Federal Aviation Administration , and Basic Commerce and Industries. The project includes research and development , future technology transfer and potential deployment of the system throughout the United States. It is expected to take 10 to 15 years to complete and initial construction was approximately $25 million. [ 32 ] A team from Japan's RIKEN Advanced Institute for Computational Science (AICS) has begun experimental work on using phased-array radar with a new algorithm for instant weather forecasts . [ 33 ]
Within the visible or infrared spectrum of electromagnetic waves it is possible to construct optical phased arrays . They are used in wavelength multiplexers and filters for telecommunication purposes, [ 34 ] laser beam steering , and holography. Synthetic array heterodyne detection is an efficient method for multiplexing an entire phased array onto a single element photodetector . The dynamic beam forming in an optical phased array transmitter can be used to electronically raster or vector scan images without using lenses or mechanically moving parts in a lensless projector. [ 35 ] Optical phased array receivers have been demonstrated to be able to act as lensless cameras by selectively looking at different directions. [ 36 ] [ 37 ]
Starlink is a low Earth orbit satellite constellation that is available in over a hundred countries. It provides broadband internet connectivity to consumers; the user terminals of the system use phased array antennas. [ 38 ]
By 2014, phased array antennas were integrated into RFID systems to increase the area of coverage of a single system by 100% to 76,200 m 2 (820,000 sq ft) while still using traditional passive UHF tags. [ 39 ]
A phased array of acoustic transducers, denominated airborne ultrasound tactile display (AUTD), was developed in 2008 at the University of Tokyo's Shinoda Lab to induce tactile feedback. [ 40 ] This system was demonstrated to enable a user to interactively manipulate virtual holographic objects. [ 41 ]
Phased Array Feeds (PAF) [ 42 ] have recently been used at the focus of radio telescopes to provide many beams, giving the radio telescope a very wide field of view . Three examples are the ASKAP telescope in Australia , the Apertif upgrade to the Westerbork Synthesis Radio Telescope in The Netherlands , and the Florida Space Institute in the United States .
In broadcast engineering , the term 'phased array' has a meaning different from its normal meaning: it means an ordinary array antenna , an array of multiple mast radiators designed to radiate a directional radiation pattern, as opposed to a single mast which radiates an omnidirectional pattern. Broadcast phased arrays have fixed radiation patterns and are not 'steered' during operation as other phased arrays are.
Phased arrays are used by many AM broadcast radio stations to enhance signal strength and therefore coverage in the city of license , while minimizing interference to other areas. Due to the differences between daytime and nighttime ionospheric propagation at mediumwave frequencies, it is common for AM broadcast stations to change between day ( groundwave ) and night ( skywave ) radiation patterns by switching the phase and power levels supplied to the individual antenna elements ( mast radiators ) daily at sunrise and sunset . For shortwave broadcasts many stations use arrays of horizontal dipoles. A common arrangement uses 16 dipoles in a 4×4 array. Usually this is in front of a wire grid reflector. The phasing is often switchable to allow beam steering in azimuth and sometimes elevation. | https://en.wikipedia.org/wiki/Phased_array |
The phases of clinical research are the stages in which scientists conduct experiments with a health intervention to obtain sufficient evidence for a process considered effective as a medical treatment . [ 1 ] For drug development , the clinical phases start with testing for drug safety in a few human subjects , then expand to many study participants (potentially tens of thousands) to determine if the treatment is effective. [ 1 ] Clinical research is conducted on drug candidates, vaccine candidates, new medical devices , and new diagnostic assays .
Clinical trials testing potential medical products are commonly classified into four phases. The drug development process will normally proceed through all four phases over many years. [ 1 ] When expressed specifically, a clinical trial phase is capitalized both in name and Roman numeral , such as "Phase I" clinical trial. [ 1 ]
If the drug successfully passes through Phases I, II, and III, it will usually be approved by the national regulatory authority for use in the general population. [ 1 ] Phase IV trials are 'post-marketing' or 'surveillance' studies conducted to monitor safety over several years. [ 1 ]
Before clinical trials are undertaken for a candidate drug, vaccine, medical device, or diagnostic assay, the product candidate is tested extensively in preclinical studies . [ 1 ] Such studies involve in vitro ( test tube or cell culture ) and in vivo ( animal model ) experiments using wide-ranging doses of the study agent to obtain preliminary efficacy , toxicity and pharmacokinetic information. Such tests assist the developer to decide whether a drug candidate has scientific merit for further development as an investigational new drug . [ 1 ]
Phase 0 is a designation for optional exploratory trials, originally introduced by the United States Food and Drug Administration's (FDA) 2006 Guidance on Exploratory Investigational New Drug (IND) Studies, but now generally adopted as standard practice. [ 3 ] [ 4 ] Phase 0 trials are also known as human microdosing studies and are designed to speed up the development of promising drugs or imaging agents by establishing very early on whether the drug or agent behaves in human subjects as was expected from preclinical studies. Distinctive features of Phase 0 trials include the administration of single subtherapeutic doses of the study drug to a small number of subjects (10 to 15) to gather preliminary data on the agent's pharmacokinetics (what the body does to the drugs). [ 5 ]
A Phase 0 study gives no data on safety or efficacy, being by definition a dose too low to cause any therapeutic effect. Drug development companies carry out Phase 0 studies to rank drug candidates to decide which has the best pharmacokinetic parameters in humans to take forward into further development. They enable go/no-go decisions to be based on relevant human models instead of relying on sometimes inconsistent animal data. [ 6 ]
Phase I trials were formerly referred to as "first-in-man studies" but the field generally moved to the gender-neutral language phrase "first-in-humans" in the 1990s; [ 7 ] these trials are the first stage of testing in human subjects. [ 8 ] They are designed to test the safety, side effects, best dose, and formulation method for the drug. [ 9 ] Phase I trials are not randomized, and thus are vulnerable to selection bias . [ 10 ]
Normally, a small group of 20–100 healthy volunteers will be recruited. [ 11 ] [ 8 ] These trials are often conducted in a clinical trial clinic, where the subject can be observed by full-time staff. These clinical trial clinics are often run by contract research organizations (CROs) that conduct these studies on behalf of pharmaceutical companies or other research investigators. [ citation needed ]
The subject who receives the drug is usually observed until several half-lives of the drug have passed. This phase is designed to assess the safety ( pharmacovigilance ), tolerability, pharmacokinetics , and pharmacodynamics of a drug. Phase I trials normally include dose-ranging , also called dose escalation studies, so that the best and safest dose can be found and to discover the point at which a compound is too poisonous to administer. [ 12 ] The tested range of doses will usually be a fraction [ quantify ] of the dose that caused harm in animal testing .
Phase I trials most often include healthy volunteers. However, there are some circumstances when clinical patients are used, such as patients who have terminal cancer or HIV and the treatment is likely to make healthy individuals ill. These studies are usually conducted in tightly controlled clinics called Central Pharmacological Units, where participants receive 24-hour medical attention and oversight. In addition to the previously mentioned unhealthy individuals, "patients who have typically already tried and failed to improve on the existing standard therapies" [ 13 ] may also participate in Phase I trials. Volunteers are paid a variable inconvenience fee for their time spent in the volunteer center.
Before beginning a Phase I trial, the sponsor must submit an Investigational New Drug application to the FDA detailing the preliminary data on the drug gathered from cellular models and animal studies. [ citation needed ]
Phase I trials can be further divided:
Single ascending dose (Phase Ia): In single ascending dose studies, small groups of subjects are given a single dose of the drug while they are observed and tested for a period of time to confirm safety. [ 8 ] [ 14 ] Typically, a small number of participants, usually three, are entered sequentially at a particular dose. [ 13 ] If they do not exhibit any adverse side effects, and the pharmacokinetic data are roughly in line with predicted safe values, the dose is escalated, and a new group of subjects is then given a higher dose. [ citation needed ]
If unacceptable toxicity is observed in any of the three participants, an additional number of participants, usually three, are treated at the same dose. [ 13 ] This is continued until pre-calculated pharmacokinetic safety levels are reached, or intolerable side effects start showing up (at which point the drug is said to have reached the maximum tolerated dose (MTD)). If an additional unacceptable toxicity is observed, then the dose escalation is terminated and that dose, or perhaps the previous dose, is declared to be the maximally tolerated dose. This particular design assumes that the maximally tolerated dose occurs when approximately one-third of the participants experience unacceptable toxicity. Variations of this design exist, but most are similar. [ 13 ]
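The escalation rule described above is often referred to as the "3+3" design. The following is a simplified, illustrative sketch of that rule; the cohort data are invented for the example, and the handling of edge cases varies between real protocols.

```python
# Simplified sketch of the classic "3+3" dose-escalation rule described above.
# toxic_by_dose[d] lists observed dose-limiting toxicities (True/False) for
# successive participants at dose level d; the data below are illustrative.

def three_plus_three(toxic_by_dose):
    """Return the index of the maximum tolerated dose (MTD), or None."""
    mtd = None
    for d, outcomes in enumerate(toxic_by_dose):
        first_three = outcomes[:3]
        if not any(first_three):
            mtd = d                      # 0 of 3 toxic: escalate to the next dose
            continue
        expanded = outcomes[:6]          # toxicity seen: treat 3 more at this dose
        if len(expanded) == 6 and sum(expanded) <= 1:
            mtd = d                      # at most 1 of 6 toxic: dose still tolerated
            continue
        return mtd                       # 2 or more toxic: previous dose is the MTD
    return mtd

# Example: dose levels 0 and 1 clean, level 2 has 1 of 6 toxic, level 3 has 2 of 3.
cohorts = [
    [False, False, False],
    [False, False, False],
    [True, False, False, False, False, False],
    [True, True, False],
]
print("MTD is dose level:", three_plus_three(cohorts))   # -> 2
```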
Multiple ascending dose (Phase Ib): Multiple ascending dose studies investigate the pharmacokinetics and pharmacodynamics of multiple doses of the drug, looking at safety and tolerability. In these studies, a group of patients receives multiple low doses of the drug, while samples (of blood, and other fluids) are collected at various time points and analyzed to acquire information on how the drug is processed within the body. The dose is subsequently escalated for further groups, up to a predetermined level. [ 8 ] [ 14 ]
A short trial designed to investigate any differences in absorption of the drug by the body, caused by eating before the drug is given. These studies are usually run as a crossover study , with volunteers being given two identical doses of the drug while fasted , and after being fed.
Once a dose or range of doses is determined, the next goal is to evaluate whether the drug has any biological activity or effect. [ 13 ] Phase II trials are performed on larger groups (50–300 individuals) and are designed to assess how well the drug works, as well as to continue Phase I safety assessments in a larger group of volunteers and patients. Genetic testing is common, particularly when there is evidence of variation in metabolic rate. [ 13 ] When the development process for a new drug fails, this usually occurs during Phase II trials when the drug is discovered not to work as planned, or to have toxic effects. [ citation needed ]
Phase II studies are sometimes divided into Phase IIa and Phase IIb. There is no formal definition for these two sub-categories, but generally:
Some Phase II trials are designed as case series , demonstrating a drug's safety and activity in a selected group of participants. Other Phase II trials are designed as randomized controlled trials , where some patients receive the drug/device and others receive placebo /standard treatment. Randomized Phase II trials have far fewer patients than randomized Phase III trials. [ citation needed ]
In the first stage, the investigator attempts to rule out drugs that have no or little biologic activity. For example, the researcher may specify that a drug must have some minimal level of activity, say, in 20% of participants. If the estimated activity level is less than 20%, the researcher chooses not to consider this drug further, at least not at that maximally tolerated dose. If the estimated activity level exceeds 20%, the researcher will add more participants to get a better estimate of the response rate. A typical study for ruling out a 20% or lower response rate enters 14 participants. If no response is observed in the first 14 participants, the drug is considered not likely to have a 20% or higher activity level. The number of additional participants added depends on the degree of precision desired, but ranges from 10 to 20. Thus, a typical cancer phase II study might include fewer than 30 people to estimate the response rate. [ 13 ]
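The choice of 14 participants can be motivated with a short binomial calculation; this is a reading of the paragraph above rather than a statement from the cited source: if the true response rate were 20%, the chance of seeing no responses at all in 14 participants is below 5%, so observing zero responses makes a 20% or higher activity level unlikely.

```python
# Binomial check behind the "14 participants" figure above.
p_response = 0.20
n = 14
p_zero_responses = (1 - p_response) ** n
print(round(p_zero_responses, 3))   # ~0.044, i.e. under 5%
```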
When a study assesses efficacy, it is looking at whether the drug given in the specific manner described in the study is able to influence an outcome of interest (e.g. tumor size) in the chosen population (e.g. cancer patients with no other ongoing diseases). When a study is assessing effectiveness, it is determining whether a treatment will influence the disease. In an effectiveness study, it is essential that participants are treated as they would be when the treatment is prescribed in actual practice. That would mean that there should be no aspects of the study designed to increase compliance above those that would occur in routine clinical practice. The outcomes in effectiveness studies are also more generally applicable than in most efficacy studies (for example does the patient feel better, come to the hospital less or live longer in effectiveness studies as opposed to better test scores or lower cell counts in efficacy studies). There is usually less rigid control of the type of participant to be included in effectiveness studies than in efficacy studies, as the researchers are interested in whether the drug will have a broad effect in the population of patients with the disease. [ citation needed ]
Phase II clinical programs historically have experienced the lowest success rate of the four development phases. In 2010, the percentage of Phase II trials that proceeded to Phase III was 18%, [ 16 ] and only 31% of developmental candidates advanced from Phase II to Phase III in a study of trials over 2006–2015. [ 17 ]
This phase is designed to assess the effectiveness of the new intervention and, thereby, its value in clinical practice. [ 13 ] Phase III studies are randomized controlled multicenter trials on large patient groups (300–3,000 or more depending upon the disease/medical condition studied) and are aimed at being the definitive assessment of how effective the drug is, in comparison with current 'gold standard' treatment. Because of their size and comparatively long duration, Phase III trials are the most expensive, time-consuming and difficult trials to design and run, especially in therapies for chronic medical conditions. Phase III trials of chronic conditions or diseases often have a short follow-up period for evaluation, relative to the period of time the intervention might be used in practice. [ 13 ] This is sometimes called the "pre-marketing phase" because it actually measures consumer response to the drug. [ citation needed ]
It is common practice that certain Phase III trials will continue while the regulatory submission is pending at the appropriate regulatory agency. This allows patients to continue to receive possibly lifesaving drugs until the drug can be obtained by purchase. Other reasons for performing trials at this stage include attempts by the sponsor at "label expansion" (to show the drug works for additional types of patients/diseases beyond the original use for which the drug was approved for marketing), to obtain additional safety data, or to support marketing claims for the drug. Studies in this phase are by some companies categorized as "Phase IIIB studies." [ 18 ]
While not required in all cases, it is typically expected that there be at least two successful Phase III trials, demonstrating a drug's safety and efficacy, to obtain approval from the appropriate regulatory agencies such as FDA (US), or the EMA (European Union).
Once a drug has proved satisfactory after Phase III trials, the trial results are usually combined into a large document containing a comprehensive description of the methods and results of human and animal studies, manufacturing procedures, formulation details, and shelf life. This collection of information makes up the "regulatory submission" that is provided for review to the appropriate regulatory authorities [ 19 ] in different countries. They will review the submission, and if it is acceptable, give the sponsor approval to market the drug.
Most drugs undergoing Phase III clinical trials can be marketed under FDA norms with proper recommendations and guidelines, through a New Drug Application (NDA) containing all manufacturing, preclinical, and clinical data. If any adverse effects are reported anywhere, the drugs must be recalled immediately from the market. While most pharmaceutical companies refrain from this practice, it is not unusual to see drugs that are still undergoing Phase III clinical trials on the market. [ 20 ]
The design of individual trials may be altered during a trial – usually during Phase II or III – to accommodate interim results for the benefit of the treatment, adjust statistical analysis, or to reach early termination of an unsuccessful design, a process called an "adaptive design". [ 21 ] [ 22 ] [ 23 ] Examples are the 2020 World Health Organization Solidarity trial , European Discovery trial , and UK RECOVERY Trial of hospitalized people with severe COVID-19 infection, each of which applies adaptive designs to rapidly alter trial parameters as results from the experimental therapeutic strategies emerge. [ 24 ] [ 25 ] [ 26 ]
Adaptive designs within ongoing Phase II–III clinical trials on candidate therapeutics may shorten trial durations and use fewer subjects, possibly expediting decisions for early termination or success, and coordinating design changes for a specific trial across its international locations. [ 23 ]
For vaccines, the probability of success ranges from 7% for non-industry-sponsored candidates to 40% for industry-sponsored candidates. [ 27 ]
A 2019 review of average success rates of clinical trials at different phases and diseases over the years 2005–15 found a success range of 5–14%. [ 28 ] Separated by diseases studied, cancer drug trials were on average only 3% successful, whereas ophthalmology drugs and vaccines for infectious diseases were 33% successful. [ 28 ] Trials using disease biomarkers , especially in cancer studies, were more successful than those not using biomarkers. [ 28 ]
A 2010 review found about 50% of drug candidates either fail during the Phase III trial or are rejected by the national regulatory agency. [ 29 ]
In the early 21st century, a typical Phase I trial conducted at a single clinic in the United States ranged from $1.4 million for pain or anesthesia studies to $6.6 million for immunomodulation studies. [ 30 ] Main expense drivers were operating and clinical monitoring costs of the Phase I site. [ 30 ]
The amount of money spent on Phase II or III trials depends on numerous factors, with therapeutic area being studied and types of clinical procedures as key drivers. [ 30 ] Phase II studies may cost as low as $7 million for cardiovascular projects, and as much as $20 million for hematology trials. [ 30 ]
Phase III trials for dermatology may cost as low as $11 million, whereas a pain or anesthesia Phase III trial may cost as much as $53 million. [ 30 ] An analysis of Phase III pivotal trials leading to 59 drug approvals by the US Food and Drug Administration over 2015–16 showed that the median cost was $19 million, but some trials involving thousands of subjects may cost 100 times more. [ 31 ]
Across all trial phases, the main expenses for clinical trials were administrative staff (about 20% of the total), clinical procedures (about 19%), and clinical monitoring of the subjects (about 11%). [ 30 ]
A Phase IV trial is also known as a postmarketing surveillance trial or drug monitoring trial to assure long-term safety and effectiveness of the drug, vaccine, device or diagnostic test. [ 1 ] Phase IV trials involve the safety surveillance ( pharmacovigilance ) and ongoing technical support of a drug after it receives regulatory approval to be sold. [ 8 ] Phase IV studies may be required by regulatory authorities or may be undertaken by the sponsoring company for competitive (finding a new market for the drug) or other reasons (for example, the drug may not have been tested for interactions with other drugs , or on certain population groups such as pregnant women, who are unlikely to subject themselves to trials). [ 11 ] [ 8 ] The safety surveillance is designed to detect any rare or long-term adverse effects over a much larger patient population and longer time period than was possible during the Phase I-III clinical trials. [ 8 ] Harmful effects discovered by Phase IV trials may result in a drug being withdrawn from the market or restricted to certain uses; examples include cerivastatin (brand names Baycol and Lipobay), troglitazone (Rezulin) and rofecoxib (Vioxx). [ citation needed ]
The entire process of developing a drug from preclinical research to marketing can take approximately 12 to 18 years and often costs well over $1 billion. [ 32 ] [ 33 ] | https://en.wikipedia.org/wiki/Phases_of_clinical_research |
Fluorine forms diatomic molecules ( F 2 ) that are gaseous at room temperature with a density about 1.3 times that of air. [ 1 ] [ note 1 ] Though sometimes cited as yellow-green, pure fluorine gas is actually a very pale yellow. The color can only be observed in concentrated fluorine gas when looking down the axis of long tubes, as it appears transparent when observed from the side in normal tubes or if allowed to escape into the atmosphere. [ 3 ] The element has a "pungent" characteristic odor that is noticeable in concentrations as low as 20 ppb . [ 4 ]
Fluorine condenses to a bright yellow liquid at −188 °C (−307 °F), [ 5 ] which is near the condensation temperatures of oxygen and nitrogen.
The solid state of fluorine relies on Van der Waals forces to hold molecules together, [ citation needed ] which, because of the small size of the fluorine molecules, are relatively weak. Consequently, the solid state of fluorine is more similar to that of oxygen [ 6 ] [ 7 ] or the noble gases than to those of the heavier halogens. [ citation needed ]
Fluorine solidifies at −220 °C (−363 °F) [ 5 ] into a cubic structure, called beta-fluorine. This phase is transparent and soft, with significant disorder of the molecules; its density is 1.70 g/cm 3 . At −228 °C (−378 °F) fluorine undergoes a solid–solid phase transition into a monoclinic structure called alpha-fluorine.
This phase is opaque and hard, with close-packed layers of molecules, and is denser at 1.97 g/cm 3 . [ 9 ] The solid state phase change requires more energy than the melting point transition and can be violent, shattering samples and blowing out sample holder windows. [ 10 ] [ 11 ]
Henri Moissan was the first to isolate the element in 1886, observing its gaseous phase. Eleven years later, Sir James Dewar first liquefied the element. For unclear reasons, Dewar measured a density for the liquid about 40% too small, a value that was not corrected until 1951. [ 12 ] : 4, 110 Solid fluorine received significant study in the 1920s and 30s, but relatively little again until the 1960s. The crystal structure of alpha-fluorine given above, which still has some uncertainty, dates to a 1970 paper by Linus Pauling .
In bacteria , phasevarions (also known as phase variable regulons ) mediate a coordinated change in the expression of multiple genes or proteins . [ 1 ] This occurs via phase variation of a single DNA methyltransferase . Phase variation of methyltransferase expression results in differential methylation throughout the bacterial genome , leading to variable expression of multiple genes through epigenetic mechanisms.
Phasevarions have been identified in several mucosal-associated human-adapted pathogens , which include; Haemophilus influenzae , [ 2 ] Neisseria meningitidis , [ 3 ] Neisseria gonorrhoeae , [ 3 ] Helicobacter pylori , [ 4 ] Moraxella catarrhalis , [ 5 ] and Streptococcus pneumoniae . [ 6 ] All described phasevarions regulate expression of proteins that are involved in host colonization , survival, and pathogenesis , and many regulate putative vaccine targets . [ 7 ] The presence of phasevarions complicates identification of stably expressed proteins, as the regulated genes do not contain any identifiable features. The only way to identify genes in a phasevarion is by detailed study of the organisms containing such systems. Study of the phasevarions, and identification of proteins they regulate, is therefore critical to generate effective and stable vaccines .
Many of the phasevarions described to date are controlled by Type III methyltransferases . [ 8 ] Mod genes are the methyltransferase component of type III restriction modification (R-M) systems in bacteria, and serve to protect host DNA from the action of the associated restriction enzyme . However, in many bacterial pathogens, mod genes contain simple sequence repeats (SSRs), and the associated restriction enzyme encoding gene (res) is inactive. In these organisms the DNA methyltransferase phase varies between two states (ON or OFF) by variation in the number of SSRs in the mod gene. [ 9 ] Multiple different mod genes have been identified. Each Mod methylates a different DNA sequence in the genome. Methylation of unique DNA sequences results in different Mod enzymes that regulate the expression of different sets of genes; i.e., they control different phasevarions. For example, twenty-one unique modA alleles have been identified in Haemophilus influenzae ; [ 10 ] [ 11 ] Neisseria species contain seven modB alleles; [ 12 ] and Helicobacter pylori contains seventeen modH alleles. [ 4 ] Individual strains of Neisseria gonorrhoeae and Neisseria meningitidis can contain multiple, independently switching mod genes; for example, N. gonorrhoeae can contain both modA and modB genes, and individual N. meningitidis strains that contain modA , modB and modD have been identified. [ 12 ] [ 13 ]
A phasevarion controlled by a methyltransferase associated with a Type I R-M system has been identified and studied in Streptococcus pneumoniae . [ 6 ] This phase-variable methyltransferase switches between six different methyltransferase specificities by shuffling between multiple, variable copies of the specificity subunit, hsdS , that dictates the sequence to be methylated. By shuffling DNA sequences, six different HsdS specificity proteins are produced in a pneumococcal population. This means six different DNA sequences are methylated by the functional methyltransferase. This genetic shuffling, or recombination, occurs between inverted repeat sequences located in the multiple, variable hsd genes present in the locus. Recombination is catalyzed by a recombinase that is associated with the type I locus. These six methyltransferase specificities (SpnD39IIIA-F) result in six differentiated cell types in a pneumococcal population. [ 14 ] [ 6 ]
A potential phasevarion controlled by a Type IIG R-M system has been recently described in the human gastric pathogen Campylobacter jejuni . [ 15 ]
Switching of mod genes is selected for under certain disease states or within specific host niches: for example, the non-typeable Haemophilus influenzae (NTHi) modA2 ON state is selected for within the middle ear during manifestation of experimental otitis media . [ 11 ] A switch from modA2 OFF to modA2 ON results in more severe middle ear disease in a model of otitis media than in a situation where switching from modA2 OFF to modA2 ON does not occur. [ 16 ] Phase-variation of the modA2 allele also results in NTHi populations with distinct advantages under oxidative stress and increased resistance to neutrophil killing. [ 17 ] In M. catarrhalis , the modM3 allele is associated with strains isolated from the middle ear of children. [ 5 ] In S. pneumoniae , selection of particular SpnD39III alleles (allele A) occurs when S. pneumoniae is present in blood, which implies that SpnD39III-A regulates genes that give a selective advantage in this in vivo niche. No selection for any SpnD39III allele was seen when S. pneumoniae was present in the nasopharynx. [ 6 ] | https://en.wikipedia.org/wiki/Phasevarion |
In physics, a phason is a form of collective excitation found in aperiodic crystal structures . Phasons are a type of quasiparticle : an emergent phenomenon of many-particle systems. The phason can also be seen as a degree of freedom unique to quasicrystals. Similar to phonons , phasons are quasiparticles associated with atomic motion. However, whereas phonons are related to the translation of atoms, phasons are associated with atomic rearrangement . As a result of this rearrangement, or modulation, the waves that describe the position of atoms in the crystal change phase -- hence the term "phason". In the language of the superspace picture commonly employed in the description of aperiodic crystals in which the aperiodic function is obtained via projection from a higher dimensional periodic function , the 'phason' displacement can be seen as displacement of the (higher-dimensional) lattice points in the perpendicular space. [ 1 ]
Phasons can travel faster than the speed of sound within quasicrystalline materials, giving these materials a higher thermal conductivity than materials in which the transfer of heat is carried out only by phonons. [ 2 ] Different phasonic modes can change the material properties of a quasicrystal. [ 3 ]
In the superspace representation, aperiodic crystals can be obtained from a periodic crystal of higher dimension by projection to a lower dimensional space– this is commonly referred to as the cut-and-project method. While phonons change the position of atoms relative to the crystal structure in space, phasons change the position of atoms relative to the quasicrystal structure and the cut-through superspace that defines it. Therefore, phonon modes are excitations of the "in-plane" real (also called parallel, direct, or external) space, whereas phasons are excitations of the perpendicular (also called internal or virtual) space. [ 4 ]
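A minimal sketch of the cut-and-project construction, using the standard textbook example of a one-dimensional quasicrystal (the Fibonacci chain) obtained from the two-dimensional square lattice; this example is not specific to any system discussed here. Shifting the acceptance strip in the perpendicular direction, as a phason does, rearranges which lattice points are selected.

```python
import numpy as np

# Cut-and-project sketch: keep the points of the 2-D square lattice whose
# perpendicular-space coordinate falls inside an acceptance window, and project
# them onto the parallel (physical) space. A phason corresponds to shifting the
# window in perpendicular space, which rearranges the selected points.
phi = (1 + np.sqrt(5)) / 2                      # golden ratio fixes the cut direction

def fibonacci_chain(perp_shift=0.0, n=30):
    pts = np.array([(i, j) for i in range(-n, n) for j in range(-n, n)], dtype=float)
    e_par = np.array([phi, 1.0]) / np.hypot(phi, 1.0)     # parallel (physical) space
    e_perp = np.array([-1.0, phi]) / np.hypot(phi, 1.0)   # perpendicular (internal) space
    par = pts @ e_par
    perp = pts @ e_perp + perp_shift
    half_window = np.abs(e_perp).sum() / 2                # projection of the unit cell
    return np.sort(par[np.abs(perp) <= half_window])

def tile_string(chain, n_tiles=20):
    """Label each spacing as a long (L) or short (S) tile."""
    return "".join("L" if gap > 0.7 else "S" for gap in np.diff(chain)[:n_tiles])

print(tile_string(fibonacci_chain(0.0)))
print(tile_string(fibonacci_chain(0.2)))   # phason-shifted: same tiles, rearranged
```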
Phasons may be described in terms of hydrodynamic theory: when going from a homogeneous fluid to a quasicrystal, hydrodynamic theory predicts six new modes arising from the translational symmetry breaking in the parallel and perpendicular spaces. Three of these modes (corresponding to the parallel space) are acoustic phonon modes, while the remaining three are diffusive phason modes. In incommensurately-modulated crystals, phasons may be constructed from a coherent superposition of phonons of the unmodulated parent structure, though this is not possible for quasicrystals. [ 1 ] Hydrodynamic analysis of quasicrystals predicts that, while the strain relaxation of phonons is relatively rapid, relaxation of phason strain is diffusive and is much slower. [ 5 ] Therefore, metastable quasicrystals grown by rapid quenching from the melt exhibit built-in phason strain [ 6 ] associated with shifts and anisotropic broadenings of X-ray and electron diffraction peaks. [ 7 ] [ 8 ]
Freedman, B., Lifshitz, R., Fleischer, J. et al. Phason dynamics in nonlinear photonic quasicrystals. Nature Mater 6, 776–781 (2007). https://doi.org/10.1038/nmat1981
| https://en.wikipedia.org/wiki/Phason
A phasor is a network of capacitors and variable inductors used to adjust the relative amplitude and phase of the current being distributed to each tower in a directional array. A typical phasor has separate controls to adjust the phase of the current going to each tower, adjustable power divider controls, and a common point impedance matching network to adjust the system input impedance to 50 ohms with no reactance without disturbing the phase or amplitude of the tower currents. [ 1 ]
| https://en.wikipedia.org/wiki/Phasor_(radio_broadcasting)
The phasor approach refers to a method used for the vectorial representation of sinusoidal waves such as alternating currents and voltages or electromagnetic waves . The amplitude and the phase of the waveform are transformed into a vector in which the phase is translated to the angle between the phasor vector and the X-axis, and the amplitude is translated to the vector length or magnitude.
With this concept, the representation and the analysis become very simple, and the addition of two waveforms is realized by their vectorial summation.
In fluorescence lifetime and spectral imaging, the phasor can be used to visualize spectra and decay curves. [ 1 ] [ 2 ] In this method the Fourier transformation of the spectrum or decay curve is calculated, and the resulting complex number is plotted on a 2D plot where the X-axis represents the real component and the Y-axis represents the imaginary component. This facilitates the analysis: each spectrum and decay is transformed into a unique position on the phasor plot, which depends on its spectral width or emission maximum, or on its average lifetime. Importantly, the analysis is fast and provides a graphical representation of the measured curve.
If we have a decay curve represented by an exponential function with lifetime τ:
d ( t ) = d 0 e − t / τ {\displaystyle d(t)={d_{0}{e}^{-t/\tau }}}
Then the Fourier transformation at frequency ω of d ( t ) {\displaystyle d(t)} (normalized to have area under the curve 1) is represented by the Lorentz function :
D ( ω ) = 1 1 + j ω τ = 1 1 + j ω τ 1 − j ω τ 1 − j ω τ = 1 − j ω τ 1 + ( ω τ ) 2 = 1 1 + ( ω τ ) 2 − j ω τ 1 + ( ω τ ) 2 {\displaystyle D(\omega )={\frac {1}{1+j\omega \tau }}={\frac {1}{1+j\omega \tau }}{\frac {1-j\omega \tau }{1-j\omega \tau }}={\frac {1-j\omega \tau }{1+(\omega \tau )^{2}}}={\frac {1}{1+(\omega \tau )^{2}}}-j{\frac {\omega \tau }{1+(\omega \tau )^{2}}}}
This is a complex function, and plotting the imaginary versus the real part of this function for all possible lifetimes traces a semicircle where the zero lifetime is located at (1,0) and the infinite lifetime at (0,0). By changing the lifetime from zero to infinity, the phasor point moves along the semicircle from (1,0) to (0,0). This suggests that by taking the Fourier transformation of a measured decay curve and mapping the result on the phasor plot, the lifetime can be estimated from the position of the phasor on the semicircle.
Explicitly, the lifetime can be measured from the magnitude of the phasor as follow:
τ = 1 ω Im D ( ω ) Re D ( ω ) {\displaystyle \tau ={\frac {1}{\omega }}{\frac {\operatorname {Im} D(\omega )}{\operatorname {Re} D(\omega )}}}
This is a much faster approach than methods where fitting is used to estimate the lifetime.
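A minimal numerical sketch of this estimate, assuming an ideal mono-exponential decay sampled over a finite window. The phasor coordinates are taken as the cosine and sine transforms of the normalized decay (a common sign convention in fluorescence lifetime imaging), so that τ = s / (g ω). The lifetime, window length and sampling below are illustrative assumptions.

```python
import numpy as np

# Estimate a fluorescence lifetime from the phasor of a mono-exponential decay,
# using tau = s / (g * omega) as in the text. Parameter values are illustrative.
tau_true = 2.5e-9                        # true lifetime: 2.5 ns
T = 50e-9                                # measurement window (s)
t = np.linspace(0, T, 20000, endpoint=False)
omega = 2 * np.pi / T                    # first harmonic of the window

decay = np.exp(-t / tau_true)
decay /= decay.sum()                     # normalize to unit area, as in the text

g = np.sum(decay * np.cos(omega * t))    # real part of the phasor
s = np.sum(decay * np.sin(omega * t))    # imaginary part of the phasor

tau_est = s / (g * omega)
print(round(tau_est * 1e9, 2), "ns")     # recovers ~2.5 ns without any fitting
```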
The semicircle represents all possible single exponential fluorescent decays. When the measured decay curve consists of a superposition of different mono-exponential decays, the phasor falls inside the semicircle depending on the fractional contributions of the components. For a bi-exponential case with lifetimes τ 1 and τ 2 , all phasor values fall on a line connecting the phasors of τ 1 and τ 2 on the semicircle, and the distance from the phasor to τ 1 determines the fraction α. Therefore, the phasor values of the pixels of an image with two lifetime components are distributed on a line connecting the phasors of τ 1 and τ 2 . Fitting a line with slope (v) and intercept (u) through these phasor points gives two intersections with the semicircle that determine the lifetimes τ 1 and τ 2 : [ 3 ]
τ 1 , 2 = 1 ± 1 − 4 u ( u + v ) 2 ω u {\displaystyle {{\tau }_{1,2}}={\frac {1\pm {\sqrt {1-4u(u+v)}}}{2\omega u}}}
This is a blind solution for unmixing two components based on their lifetimes, provided that the fluorescence decays of the individual components show a single exponential behavior.
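A small numerical sketch of this unmixing procedure, with two assumed lifetimes and an assumed 80 MHz modulation frequency: mixture phasors are generated on the chord between the two single-lifetime phasors, a line is fitted, and the formula above returns the two lifetimes. All simulation parameters are illustrative.

```python
import numpy as np

# Blind two-component unmixing on the phasor plot: fit a line (slope v, intercept u)
# through mixture phasors and apply tau_1,2 = (1 +/- sqrt(1 - 4u(u+v))) / (2*omega*u).
tau1, tau2 = 1.0e-9, 4.0e-9                  # assumed component lifetimes
omega = 2 * np.pi * 80e6                     # assumed 80 MHz modulation frequency

def phasor(tau):
    g = 1.0 / (1.0 + (omega * tau) ** 2)             # point on the universal semicircle
    s = omega * tau / (1.0 + (omega * tau) ** 2)
    return np.array([g, s])

fractions = np.linspace(0.1, 0.9, 9)         # fractional contributions of tau1
mix = np.array([f * phasor(tau1) + (1 - f) * phasor(tau2) for f in fractions])

v, u = np.polyfit(mix[:, 0], mix[:, 1], 1)   # s = v*g + u
disc = np.sqrt(1 - 4 * u * (u + v))
tau_est = (1 + np.array([disc, -disc])) / (2 * omega * u)
print(np.round(np.sort(tau_est) * 1e9, 2))   # ~[1.0, 4.0] ns
```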
For a system with a discrete number of gates and a limited time window, the phasor approach needs to be adapted. The equation for the reference semicircle is changed to: [ 4 ]
D ′ ( ω ) = sinh ( T 2 K τ ) sinh ( 1 − j ω τ 2 K τ T ) {\displaystyle {D}'(\omega )={\frac {\sinh \left({\frac {T}{2K\tau }}\right)}{\sinh \left({\frac {1-j\omega \tau }{\frac {2K\tau }{T}}}\right)}}}
Where K is the number of gates and T is the total measurement window. The average lifetimes are calculated by:
For a binary case, after fitting a line through the data set of phasors and finding the slope (v) and intercept (u), the lifetimes are calculated by:
τ 1 , 2 = T 2 K arccoth ( ± 1 − 2 u 2 − ( 4 u v + 2 u 2 ) cos ( n ω T 2 K ) ± 1 2 u sin ( n ω T 2 K ) ) {\displaystyle {{\tau }_{1,2}}={\frac {\frac {T}{2K}}{\operatorname {arccoth} \left(\pm {\frac {{\sqrt {1-2{{u}^{2}}-\left(4uv+2{{u}^{2}}\right)\cos \left(n\omega {\frac {T}{2K}}\right)}}\pm 1}{2u\sin \left(n\omega {\frac {T}{2K}}\right)}}\right)}}}
In non-ideal, real situations, the measured decay curve is the convolution of the instrument response (the laser pulse distorted by the system) with an exponential function, which makes the analysis more complicated. A large number of techniques have been developed to overcome this problem, but in the phasor approach it is simply solved by the fact that the Fourier transformation of a convolution is the product of the Fourier transforms. This allows the effect of the instrument response to be taken into account by taking the Fourier transformation of the instrument response function and dividing the total phasor by the instrument response transformation.
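A minimal sketch of this correction, assuming a Gaussian instrument response and an ideal mono-exponential decay: the measured signal is simulated as a convolution, and dividing its first-harmonic Fourier coefficient by that of the instrument response recovers the corrected phasor, from which the lifetime follows as before. The Gaussian shape and all parameters are illustrative assumptions.

```python
import numpy as np

# Instrument-response (IRF) correction in the phasor approach: since the FT of a
# convolution is the product of FTs, dividing the measured phasor by the IRF phasor
# removes the instrument response. All parameter values are illustrative.
tau = 3.0e-9
T = 50e-9
t = np.linspace(0, T, 4000, endpoint=False)
omega = 2 * np.pi / T

irf = np.exp(-0.5 * ((t - 5e-9) / 0.5e-9) ** 2)          # assumed Gaussian IRF
decay = np.exp(-t / tau)
measured = np.convolve(irf, decay)[: t.size]              # what the instrument records

def fourier_coeff(x):
    x = x / x.sum()                                       # normalize to unit area
    return np.sum(x * np.exp(1j * omega * t))             # first-harmonic phasor

D_corrected = fourier_coeff(measured) / fourier_coeff(irf)
tau_est = D_corrected.imag / (D_corrected.real * omega)
print(round(tau_est * 1e9, 2), "ns")                      # close to 3.0 ns
```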
Similar to the temporal phasor, the Fourier transform of a spectrum can be used to create a phasor. Consider a Gaussian spectrum with zero spectral width and a changing emission maximum from channel zero to K; the phasor rotates on a circle from small angles to larger angles. This corresponds to the shift theorem of Fourier transforms. Changing the spectral width from zero to infinity moves the phasor toward the center. This means that the phasor for the background signal, which can be considered a spectrum with infinite spectral width, is located at the center of the phasor with coordinates (0,0).
One of the interesting properties of the phasor approach is its linearity, where the superposition of different spectra or decay curves can be analyzed through the vectorial superposition of individual phasors. This is demonstrated in the figure, where adding two spectra with different emission maxima results in a phasor that lies on a line connecting the individual phasors. In a ternary system, adding three spectra results in a triangle formed by the phasors of the individual spectra or decays.
For a system with three components that have different spectra, the phasors of pixels with different fractional intensities fall inside a triangle whose vertices are the phasors of the pure components. The fractional intensities can then be estimated from the area of the triangle that each pixel's phasor forms with the phasors of the other two pure components.
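A possible implementation of this area-based unmixing, under the assumption that the fraction of each component equals the area of the sub-triangle opposite its vertex divided by the area of the full triangle (barycentric coordinates), is sketched below; the point layout and names are illustrative.

    import numpy as np

    def tri_area(a, b, c):
        # Area of the triangle spanned by three (g, s) phasor points
        return 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

    def ternary_fractions(p, p1, p2, p3):
        # Fractional intensities of three pure components with phasors p1, p2, p3
        # for a mixed phasor p, using area (barycentric) coordinates
        total = tri_area(p1, p2, p3)
        f1 = tri_area(p, p2, p3) / total
        f2 = tri_area(p, p1, p3) / total
        f3 = tri_area(p, p1, p2) / total
        return f1, f2, f3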
This feature is noteworthy because there is a one-to-one correspondence between the pixels in an image and their phasors on the phasor plot, determined by their spectrum or decay curve. Phasors corresponding to pixels with similar temporal-spectral properties cluster in specific regions of the phasor plot. This characteristic provides a method for categorizing image pixels based on their temporal-spectral properties. By selecting a region of interest on the phasor plot, a reciprocal transformation can be applied, projecting the selected phasors back onto the image. This process enables basic image segmentation. | https://en.wikipedia.org/wiki/Phasor_approach_to_fluorescence_lifetime_and_spectral_imaging |
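The reciprocal mapping described above can be sketched as follows, assuming a time-resolved image stack of shape (time, height, width) and a circular region of interest on the phasor plot; the stack layout, the circular ROI, and the names are assumptions for illustration only.

    import numpy as np

    def phasor_image(stack, t, omega):
        # Per-pixel first-harmonic phasor coordinates (g, s) of a (T, H, W) decay stack
        stack = np.asarray(stack, dtype=float)
        total = stack.sum(axis=0)
        g = np.tensordot(np.cos(omega * t), stack, axes=1) / total
        s = np.tensordot(np.sin(omega * t), stack, axes=1) / total
        return g, s

    def segment_by_phasor(g, s, center, radius):
        # Boolean image mask of pixels whose phasor falls inside a circular ROI
        return (g - center[0]) ** 2 + (s - center[1]) ** 2 <= radius ** 2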
A phene is an individual genetically determined characteristic or trait which can be possessed by an organism , such as eye colour, height, behavior, tooth shape or any other observable characteristic.
The term 'phene' was evidently coined as a parallel construct to 'gene': phene is to phenotype as gene is to genotype, and, similarly, phene is to phenome as gene is to genome. More specifically, a phene is an abstract concept describing a particular characteristic which can be possessed by an organism, whereas a phenotype refers to a collection of phenes possessed by a particular organism, and a phenome refers to the entire set of phenes that exist within an organism or species.
Genome-wide association studies use "phenes" or "traits" (symptoms) to distinguish groups in the human population. These groups are then employed to identify associations with genetic alleles that are more common in the symptomatic group than in the asymptomatic control group. Allen et al. report that, with respect to schizophrenia, "Research in molecular genetics has focused on detecting multiple genes of small effect". [ 1 ] This indicates the importance of discovering individual traits or "phenes" that are governed by single genes. Schizophrenia or bipolar disorder may be described as a phenotype, but how many individual traits or "phenes" contribute to these phenotypes? Very large genome-wide association studies have not found many significant gene linkages. On the contrary, the results of these studies implicate a large number of gene alleles, each with a very small effect (phene). [ 2 ]
It is important to note that the word phenotype was originally used to refer both to the trait or character itself (e.g. the blue-eyes phenotype) and to the set of traits or characteristics possessed by the organism (Claire's eye-colour phenotype is blue). While this definition is still used in many places, the lack of distinction can make in-depth explanations confusing, and thus use of the term phene becomes necessary. Indeed, it is extremely difficult to determine precisely what the fundamental building blocks of a phenome are. Since the term "phenotype" has been used to describe traits, syndromes, and population characteristics, [ 3 ] it is not helpful in the collective search for specific traits that could be a consequence of a single gene or gene–environment interaction. Phene has emerged as a candidate building block for the phenome.
Genes give rise to phenes. Genes are the biochemical instructions encoding what an organism can be, while phenes are what the organism is . In general it takes a combination of particular genes, environmental influences and random variation to give rise to any one phene in an organism. Both phenes and genes are subject to evolution. However, if one defines "genes" as "DNA sequences encoding polypeptides ", they are not directly accessible to natural selection ; the associated phenes are. Note that some, e.g. Richard Dawkins , have used a wider definition of "gene" than the one used in genetics on occasion, extending it to any DNA sequence with a function .
Due to the distinct chemical and physical properties of the nucleotides in the DNA and some mutations being " silent " (that is, not altering gene expression ), the DNA primary sequence may also be a phene. For example, A-T and C-G base pairs are differently resistant to heat (see also DNA-DNA hybridization ). In a thermophilic microorganism, "silent" mutations may have an effect on DNA stability and thus survival. While being subject to evolution , natural selection affects the primary sequence directly in this case, with or without it being expressed.
Consider, for example, a mutation that makes a zygote abort development as a young embryo . This mutation, obviously, will not spread, as it is quickly fatal. It is not the mutated nucleotide that is selected against, but the fact that due to this mutation, the phene (a key enzyme or developmental factor for example) does not get expressed.
Compare a (fictional) kind of mutation that breaks the DNA strand in a crucial position and defies all attempts to repair it, leading to cell death . Here, the mutated and unmutated DNA sequences would be phenes themselves; it is the changed primary sequence itself which by failing would cause death, not the corresponding polypeptide.
See also Dawkins' concept of the extended phenotype .
The term has been widely adopted by the academic community and appears in scientific literature. A quick keyword search of titles and abstracts containing "phene" at PubMed returns many articles. [ 4 ] It is a valuable concept in the genomic era where "phenes" or "traits" (symptoms) are used to distinguish groups with genetic disorders.
"Phene" is used as to refer to relevant phenotypic traits in the OMIA ( Online Mendelian Inheritance in Animals ) database. One of the objectives of the OMIA is to match genotypes to phenotypes. Lenffer et al. (2006) describe the OMIA as a "comparative biology resource" "(The) OMIA is a comprehensive resource of phenotypic information on heritable animal traits and genes in a strongly comparative context, relating traits to genes where possible. OMIA is modelled on and is complementary to Online Mendelian Inheritance in Man (OMIM)." [ 5 ] The term "phene" is equated with "trait". | https://en.wikipedia.org/wiki/Phene |
In biology , phenetics ( / f ɪ ˈ n ɛ t ɪ k s / ; from Ancient Greek φαίνειν (phainein) ' to appear ' ), also known as taximetrics , is an attempt to classify organisms based on overall similarity, usually with respect to morphology or other observable traits, regardless of their phylogeny or evolutionary relation. It is related closely to numerical taxonomy which is concerned with the use of numerical methods for taxonomic classification. Many people contributed to the development of phenetics, but the most influential were Peter Sneath and Robert R. Sokal . Their books are still primary references for this sub-discipline, although now out of print. [ 1 ]
Phenetics has been largely superseded by cladistics for research into evolutionary relationships among species. However, certain phenetic methods, such as neighbor-joining , are used for phylogenetics, as a reasonable approximation of phylogeny when more advanced methods (such as Bayesian inference ) are too expensive computationally.
Phenetic techniques include various forms of clustering and ordination . These are sophisticated methods of reducing the variation displayed by organisms to a manageable degree. In practice this means measuring dozens of variables, and then presenting them as two- or three-dimensional graphs. Much of the technical challenge of phenetics concerns balancing the loss of information due to such a reduction against the ease of interpreting the resulting graphs.
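As a hedged sketch of such an analysis, the code below clusters a small, made-up trait matrix by overall similarity (average-linkage clustering on Euclidean distances after standardizing each character so all receive equal weight) and ordinates the specimens onto two principal components; the data and the choices of distance, linkage, and ordination method are illustrative, not a prescription.

    import numpy as np
    from scipy.cluster.hierarchy import linkage
    from scipy.spatial.distance import pdist

    # Made-up trait matrix: rows are specimens, columns are measured characters
    traits = np.array([
        [5.1, 3.5, 1.4],
        [4.9, 3.0, 1.4],
        [6.3, 3.3, 6.0],
        [5.8, 2.7, 5.1],
    ])

    # Standardize each character so all receive equal weight
    z = (traits - traits.mean(axis=0)) / traits.std(axis=0)

    # Overall similarity: Euclidean distances clustered with UPGMA (average linkage)
    tree = linkage(pdist(z, metric="euclidean"), method="average")

    # Ordination: scores on the first two principal components for a 2-D plot
    u_, s_, vt = np.linalg.svd(z, full_matrices=False)
    scores = u_[:, :2] * s_[:2]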
The method can be traced back to 1763 and Michel Adanson (in his Familles des plantes ) because of two shared basic principles – overall similarity and equal weighting – and modern pheneticists are sometimes termed neo-Adansonians . [ 2 ]
Phenetic analyses are " unrooted ", that is, they do not distinguish between plesiomorphies , traits that are inherited from an ancestor, and apomorphies , traits that evolved anew in one or several lineages. A common problem with phenetic analysis is that basal evolutionary grades , which retain many plesiomorphies compared to more advanced lineages, seem to be monophyletic . Phenetic analyses are also liable to be rendered inaccurate by convergent evolution and adaptive radiation . Cladistic methods attempt to solve those problems.
Consider for example songbirds . These can be divided into two groups – Corvida , which retains ancient characteristics of phenotype and genotype , and Passerida , which has more modern traits. But only the latter are a group of closest relatives; the former are numerous independent and ancient lineages which are related about as distantly to each other as each single one of them is to the Passerida. For a phenetic analysis, the large degree of overall similarity found among the Corvida will make them seem to be monophyletic too, but their shared traits were present in the ancestors of all songbirds already. It is the loss of these ancestral traits rather than their presence that signifies which songbirds are more closely related to each other than to other songbirds. However, the requirement that taxa be monophyletic – rather than paraphyletic as for the case of the Corvida – is itself part of the cladistic method of taxonomy, not necessarily obeyed absolutely by other methods.
The two methods are not mutually exclusive. There is no reason why, e.g., species identified using phenetics cannot subsequently be subjected to cladistic analysis, to determine their evolutionary relationships. Phenetic methods can also be superior to cladistics when only the distinctness of related taxa is important, as the computational requirements are less. [ 3 ]
The history of pheneticism and cladism as rival taxonomic systems is analysed in David Hull 's 1988 book Science as a Process . [ 4 ]
Traditionally there was much debate between pheneticists and cladists, as both methods were proposed initially to resolve evolutionary relationships. One of the most noteworthy applications of phenetics were the DNA–DNA hybridization studies by Charles G. Sibley , Jon E. Ahlquist and Burt L. Monroe Jr. , from which resulted the 1990 Sibley-Ahlquist taxonomy for birds . Controversial at its time, some of its findings (e.g. the Galloanserae ) have been vindicated, while others (e.g. the all-inclusive " Ciconiiformes " or the " Corvida ") have been rejected. However, with computers growing increasingly powerful and widespread, more refined cladistic algorithms became available which could test the suggestions of Willi Hennig . The results of cladistic analyses were proven superior to those of phenetic methods, at least for resolving phylogenies.
Many systematists continue to use phenetic methods, particularly to address species-level questions. While a major goal of taxonomy remains describing the 'tree of life' – the evolutionary relationships of all species – for fieldwork one needs to be able to separate one taxon from another. Classifying diverse groups of closely related organisms that differ very subtly is difficult using a cladistic method. Phenetics provides numerical methods for examining patterns of variation, allowing researchers to identify discrete groups that can be classified as species.
Modern applications of phenetics are common for botany , and some examples can be found in most issues of the journal Systematic Botany . Indeed, due to the effects of horizontal gene transfer , polyploid complexes and other peculiarities of plant genomics , phenetic techniques of botany – though less informative altogether – may, for these special cases, be less prone to errors compared with cladistic analysis of DNA sequences .
In addition, many of the techniques developed by phenetic taxonomists have been adopted and extended by community ecologists , due to a similar need to deal with large amounts of data. [ 5 ] | https://en.wikipedia.org/wiki/Phenetics |
The Phenice method is a technique of determining the sex of a human skeleton from the innominate pelvis. In the procedure, sex is determined based on three features: the ventral arc, the subpubic concavity, and the medial aspect of the ischio-pubic ramus. As a non-metric absolute method, it relies on the recognition of discrete male and female traits. This makes the method objective, easily performable, and relatively quick [ 1 ] (although this has been challenged by those seeking to improve the method). [ 2 ] It is considered highly accurate, up to 96%, owing to the distinct biological differences between male and female anatomy in the pelvis, making it a highly useful method for those determining the sex of a skeleton. [ 1 ]
Determining the sex of a human skeleton has multiple uses. Within archaeology, it is essential for building a biological profile of an individual, which in turn might be used to make assumptions about sex-based roles and responsibilities or contrast life histories based on sex. It is also important for reconstructing demographics of past societies to estimate population size, family size, and other factors. Within the field of heritage, it may be useful in reconstructing the appearance and life of an individual for public presentation. It also has forensic uses where it can aid in the identification of bodies for legal purposes. [ 2 ]
While the pelvis has long been recognised as an important piece of skeletal morphology in determining sex, the Phenice method was proposed in 1969 by T.W. Phenice. Before Phenice's ideas, the study of the pubis focused on aspects such as the width of the pubis, the pre-auricular sulcus, and the greater sciatic notch, among others. Phenice considered these aspects highly relative and therefore dependent on the researcher; furthermore, they required experience to identify. Phenice's method was originally based on the differences between the areas of attachment of the crus penis or crus clitoris to the ischiopubic ramus; however, he determined this was not accurate enough and chose to consider two further aspects as well. [ 1 ] Phenice's principles have been tested and revised numerous times since their original publication, most notably by Klales et al. in 2012. This paper claimed that Phenice's original method did not acknowledge the prevalence of intermediate forms between extreme male and female features, did not appropriately consider the relative significance of the different features of the innominate, and did not calculate a posterior probability to quantify the likelihood of the individual belonging to either sex. As such, Klales et al. proposed an improved method that is often used today. [ 2 ]
Firstly, the innominate must be correctly orientated: the ventral surface must face the observer, with the pubic symphysis in the anterior-posterior plane. Phenice describes the ventral arc as ‘a slightly elevated ridge of bone which extends from the pubic crest and arcs inferiorly across the ventral surface to the lateral most extension of the subpubic concavity… where it blends with the medial border of the ischio-pubic ramus.’ Phenice suggests the ventral arc can only be found in females; therefore, its presence categorises the subject as female. While a similar ridge may be found on male examples, this is easily distinguishable because it will take an alternative path. As such, the ventral arc is the most objective indicator of sex. [ 1 ]
The pelvis must be orientated such that the observer is looking at the dorsal aspect of the pubis and ischio-pubic ramus. Phenice describes the subpubic concavity as ‘a lateral recurve which occurs in the ischio-pubic ramus… a short distance below the lower margin of the pubic symphysis.’ This is also only found in female examples. Although some males may show a slight subpubic concavity, this is unpronounced enough that the feature remains effectively diagnostic of sex. The subpubic concavity is a slightly less reliable indicator of sex than the ventral arc. [ 1 ]
The observer must orientate the hip such that they are directly facing the ischio-pubic ramus. A female hip will have a pronounced ridge on this face while a male hip will have a broad flat surface. This criterion is the least distinct of those that Phenice describes, with the highest similarity in male and female examples. It should only be relied upon in conjunction with the other two features. [ 1 ]
This improved method does away with the binary of Phenice's original. Instead, an ordinal system with five grades is provided, allowing for consideration of intermediate forms. This system has values of one to five, rather than female or male, to avoid a binary. It also makes use of statistical tests to calculate the posterior probability of each classification, quantifying the certainty of each observation, whether male or female. [ 2 ]
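Klales et al.'s exact statistical model is not reproduced here, but as an illustration of how ordinal trait scores can yield a posterior probability of sex, the sketch below fits a logistic regression to a hypothetical reference sample; the scores, the sample, and the choice of model are assumptions, not the published method.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical reference sample: ordinal scores (1-5) for the ventral arc,
    # subpubic concavity and medial ischio-pubic ramus, with known sex
    scores = np.array([[1, 1, 2], [2, 1, 1], [2, 2, 2], [4, 4, 5], [5, 4, 4], [3, 4, 4]])
    sex = np.array([0, 0, 0, 1, 1, 1])  # 0 = female, 1 = male

    model = LogisticRegression().fit(scores, sex)

    # Posterior probability that a newly scored individual [3, 2, 3] is male
    p_male = model.predict_proba([[3, 2, 3]])[0, 1]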
Phenice reported that his method could correctly sex individuals 96% of the time. [ 1 ] Subsequent tests have achieved very similar, though slightly lower, accuracy rates. Using Klales et al.'s revised method, an experienced researcher could achieve 95.5% accuracy, while an inexperienced individual could determine sex correctly 77% of the time. Males were more easily identified than females in this study, suggesting a sex bias in identification. The highest accuracy was found when combining only the ventral surface and ischiopubic ramus, though including the medial aspect of the ischio-pubic ramus did lower the sex bias. Furthermore, there was no significant difference between groups of different ancestries. As such, the Phenice method is a highly accurate and widely applicable test. [ 2 ]
The Phenice method has three major limitations. While it is not particularly limited by cost, experience, or time like other methods, it does rely on the preservation of an intact pelvis. It also presumes the absence of any pathology that might disturb normal anatomy. Most notably, as the features it uses to determine sex are secondary sexual characteristics that develop only post-puberty, it cannot be used on children. [ 2 ] Klales et al.'s attempt to adapt the method for subadults, lowering the number of ordinal grades to three, only found significant accuracy (above 75%) from early adolescence (ages 12.6–15.3). Meanwhile, young children (ages 1–3.5 years) could only be correctly identified 53.9% of the time, little more than random chance. [ 3 ]
Since most tests of the Phenice method are carried out on modern samples, there is a concern that it does not hold up when applied to past populations; as such, it is sometimes applied to historic collections to test its accuracy, as was done with the medieval skeleton collection of the Hospital of St John the Evangelist in Cambridge. The concern is that significant differences in diet, lifestyle, and pathogenic stress, among other factors, may result in differing anatomy that the Phenice method does not account for. This is not helped by the fact that most studies of the Phenice test are performed on relatively modern collections, rarely pre-18th-century, as the sex of the individuals must already be known in order to test the accuracy of the method. However, DNA testing of individuals from the Hospital of St John the Evangelist in Cambridge has provided this basis. In applying the Phenice method to this population, favourable results were found, suggesting that the method could be 83% accurate even for historical populations. [ 4 ]
The Phenice method can be used to determine the sex of individuals in cemeteries to help reconstruct the demography of the site. This has been enacted at the Eiden Phase cemetery from the Pearson Complex in Eastern North America. 124 of 311 adults could be sexed using this method. This shows how archaeological examples are rarely well preserved enough to accurately determine the sex of most skeletons. Furthermore, no children could be sexed from this site. However, the morphology of the individuals that were sexed was used to create a diagnostic framework based on humerus, femur and foot measurements that allowed the determination of another 113 individuals. Using this information, combined with other discoveries, some demographic factors could then be estimated. For example, a mean fertility rate of 0.0904, and a mean family size of 3.66. The utility of the Phenice method, recognised as quick, easy and accurate, despite its reliance on preservation of the pelvis, in part allowed the reconstruction of this demography. [ 5 ]
The simple binary produced by determining the sex of individuals using the Phenice method may predispose researchers to focus on the sexual binary at the expense of other horizontal and vertical social categories and roles. Original excavations at Durankulak in Bulgaria used osteology to sex all the burials that could be conclusively sexed, then used grave goods and burial positions to establish a method for determining the sex of the other graves. Males were often buried extended, with axes; females were often crouched, with jewellery. Analysis of further burial sites then built on this model, and the confidence with which further burials were sexed was highly related to how well they conformed to this burial hypothesis. Grave sites whose sexing appeared to contradict the previously determined burial binary were rated as less conclusive, leading to a self-fulfilling theory. [ 6 ] | https://en.wikipedia.org/wiki/Phenice_method
In phenomics , a phenocopy is a variation in phenotype (generally referring to a single trait ) which is caused by environmental conditions (often, but not necessarily, during the organism's development ), such that the organism's phenotype matches a phenotype which is determined by genetic factors. It is not a type of mutation , as it is non- hereditary .
The term was coined by German geneticist Richard Goldschmidt in 1935. [ 1 ] He used it to refer to forms, produced by some experimental procedure, whose appearance duplicates or copies the phenotype of some mutant or combination of mutants.
The butterfly genus Vanessa can change phenotype based on the local temperature. If introduced to Lapland, they mimic the butterflies native to that area; if introduced to Syria, they mimic the butterflies found there.
The larvae of Drosophila melanogaster have been found to be particularly vulnerable to environmental factors which produce phenocopies of known mutations; these factors include temperature, shock, radiation, and various chemical compounds. In the fruit fly Drosophila melanogaster , the normal body colour is brownish gray with black margins. A hereditary mutant with a yellow body colour was discovered by T. H. Morgan in 1910; this was a genotypic character that remained constant in both kinds of fly in all environments. However, in 1939, Rapoport discovered that if larvae of normal flies were fed silver salts, they developed into yellow-bodied flies irrespective of their genotype. [ 2 ] Such yellow-bodied flies, which are genetically brown, are phenocopies of the original yellow-bodied mutant.
Phenocopy can also be observed in Himalayan rabbits . When raised in moderate temperatures, Himalayan rabbits are white in colour with black tail, nose, and ears, making them phenotypically distinguishable from genetically black rabbits. However, when raised in cold temperatures, Himalayan rabbits show black colouration of their coats, resembling the genetically black rabbits. Hence this Himalayan rabbit is a phenocopy of the genetically black rabbit. [ 3 ]
Reversible and/or cosmetic modifications such as the use of hair bleach are not considered to be phenocopy, as they are not inherent traits. | https://en.wikipedia.org/wiki/Phenocopy |
Phenol extraction is a laboratory technique that purifies nucleic acid samples using a phenol solution. Phenol is a common reagent in extraction because its properties allow for effective nucleic acid extraction: it strongly denatures proteins, it preserves nucleic acids, and it is immiscible with water.
It may also refer to the process of extracting and isolating phenols from raw materials such as coal tar . These purified phenols are used in many industrial and medical compounds and are used as precursors in some synthesis reactions .
Phenol extraction is a widely used technique for purifying nucleic acid samples from cell lysates. [ 1 ] To obtain nucleic acids , the cell must be lysed , and the nucleic acids separated from other cell components.
Phenol is a polar substance with a higher density than water (1.07 g/cm 3 [ 2 ] compared to water's 1.00 g/cm 3 ). When suspended in a water-phenol solution, denatured proteins and unwanted cell components dissolve in the phenol, while polar nucleic acids dissolve in the water phase. [ 3 ] The solution may then be centrifuged to separate the phenol and water into distinct organic and aqueous phases. Purified nucleic acids can be precipitated from the aqueous phase of the solution.
Phenol is often used in combination with chloroform . [ 4 ] Adding an equal volume of chloroform and phenol ensures a distinct separation between the aqueous and organic phases. Chloroform and phenol are miscible and create a denser solution than phenol alone, aiding the separation of the organic and aqueous layers. This addition of chloroform is useful when removing the aqueous phase to obtain a purified nucleic acid sample.
The pH of the solution must be adjusted specifically for each type of extraction. For DNA extraction, the pH is adjusted to 7.0–8.0. For RNA -specific extraction, the pH is adjusted to 4.5. At pH 4.5, hydrogen ions neutralize the negative charges on the phosphate groups , causing DNA to dissolve in the organic phase while allowing RNA to be isolated separately in the aqueous phase. | https://en.wikipedia.org/wiki/Phenol_extraction |
Phenol red (also known as phenolsulfonphthalein or PSP ) is a pH indicator frequently used in cell biology laboratories.
Phenol red exists as a red crystal that is stable in air. Its solubility is 0.77 grams per liter (g/L) in water and 2.9 g/L in ethanol . [ 1 ] It is a weak acid with p K a = 8.00 at 20 °C (68 °F).
A solution of phenol red is used as a pH indicator, often in cell culture. Its color exhibits a gradual transition from yellow ( λ max = 443 nm [ 2 ] ) to red (λ max = 570 nm [ 3 ] ) over the pH range 6.8 to 8.2. Above pH 8.2, phenol red turns a bright pink ( fuchsia ) color. [ 4 ] [ 5 ]
In crystalline form, and in solution under very acidic conditions (low pH), the compound exists as a zwitterion as in the structure shown above, with the sulfonate group negatively charged and the ketone group carrying an additional proton. This form is sometimes written symbolically as H 2 + PS − and is orange-red. If the pH is increased (p K a = 1.2), the proton from the ketone group is lost, resulting in the yellow, negatively charged ion denoted as HPS − . At still higher pH (p K a = 7.7), the phenol 's hydroxy group loses its proton, resulting in the red ion denoted as PS 2− . [ 6 ]
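A small worked example of the resulting speciation, treating phenol red as a diprotic acid with the two dissociation constants quoted above (p K a 1.2 and 7.7) and ignoring the slow conversions at extreme pH, is sketched below; the function name and the simplification to three species are assumptions.

    import numpy as np

    def phenol_red_fractions(pH, pKa1=1.2, pKa2=7.7):
        # Equilibrium fractions of the orange-red zwitterion, the yellow HPS- form,
        # and the red PS2- form, from standard diprotic speciation
        h = 10.0 ** (-np.asarray(pH, dtype=float))
        k1, k2 = 10.0 ** -pKa1, 10.0 ** -pKa2
        denom = h ** 2 + k1 * h + k1 * k2
        return h ** 2 / denom, k1 * h / denom, k1 * k2 / denom

    # At pH 7.4, roughly a third of the dye is already in the red PS2- form:
    # acidic, yellow, red = phenol_red_fractions(7.4)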
In several sources, the structure of phenol red is shown with the sulfur atom being part of a cyclic group, similar to the structure of phenolphthalein . [ 1 ] [ 7 ] However, this cyclic structure could not be confirmed by X-ray crystallography . [ 8 ]
Several indicators share a similar structure to phenol red, including bromothymol blue , thymol blue , bromocresol purple , thymolphthalein , and phenolphthalein. (A table of other common chemical indicators is available in the article on pH indicators .)
Phenol red was used by Leonard Rowntree and John Geraghty in the phenolsulfonphthalein test to estimate the overall blood flow through the kidney in 1911. [ 9 ] It was the first test of kidney function and was used for almost a century but is now obsolete.
The test is based on the fact that phenol red is excreted almost entirely in the urine. Phenol red solution is administered intravenously ; the urine produced is collected. By measuring the amount of phenol red excreted colorimetrically , kidney function can be determined. [ 10 ]
Most living tissues prosper at a near-neutral pH—that is, a pH close to 7. The pH of blood ranges from 7.35 to 7.45, for instance. When cells are grown in tissue culture , the medium in which they grow is held close to this physiological pH. A small amount of phenol red added to this growth medium will have a pink-red color under normal conditions. Typically, 15 mg/L are used for cell culture.
In the event of problems, waste products produced by dying cells or overgrowth of contaminants will cause a change in pH, leading to a change in indicator color. For example, a culture of relatively slowly dividing mammalian cells can be quickly overgrown by bacterial contamination. This usually results in an acidification of the medium, turning it yellow. Many biologists find this a convenient way to rapidly check on the health of tissue cultures. In addition, the waste products produced by the mammalian cells themselves will slowly decrease the pH, gradually turning the solution orange and then yellow. This color change is an indication that even in the absence of contamination, the medium needs to be replaced (generally, this should be done before the medium has turned completely orange).
Since the color of phenol red can interfere with some spectrophotometric and fluorescent assays, many types of tissue culture media are also available without phenol red.
Phenol red is a weak estrogen mimic, and in cell cultures can enhance the growth of cells that express the estrogen receptor. [ 11 ] It has been used to induce ovarian epithelial cells from post-menopausal women to differentiate into cells with properties of oocytes (eggs), with potential implications for both fertility treatment and stem cell research. [ 12 ]
Phenol red, sometimes labelled with a different name, such as "Guardex Solution #2", is used as a pH indicator in home swimming pool test kits. [ 13 ]
Chlorine can result in the bleaching of the dye in the absence of thiosulfate to inhibit the oxidizing chlorine. High levels of bromine can convert phenol red to bromophenol red (dibromophenolsulfonephthalein, whose lowered p K a results in an indicator with a range shifted in the acidic direction – water at pH 6.8 will appear to test at 7.5). Even higher levels of bromine (>20 ppm) can result in the secondary conversion of bromophenol red to bromophenol blue with an even lower p K a , erroneously giving the impression that the water has an extremely high pH despite being dangerously low. [ 14 ] | https://en.wikipedia.org/wiki/Phenol_red |
Phenology is the study of periodic events in biological life cycles and how these are influenced by seasonal and interannual variations in climate , as well as habitat factors (such as elevation ). [ 1 ]
Examples include the date of emergence of leaves and flowers, the first flight of butterflies, the first appearance of migratory birds, the date of leaf colouring and fall in deciduous trees, the dates of egg-laying of birds and amphibia, or the timing of the developmental cycles of temperate -zone honey bee colonies. In the scientific literature on ecology , the term is used more generally to indicate the time frame for any seasonal biological phenomena, including the dates of last appearance (e.g., the seasonal phenology of a species may be from April through September).
Because many such phenomena are very sensitive to small variations in climate , especially to temperature, phenological records can be a useful proxy for temperature in historical climatology , especially in the study of climate change and global warming . For example, viticultural records of grape harvests in Europe have been used to reconstruct a record of summer growing season temperatures going back more than 500 years. [ 2 ] [ 3 ] In addition to providing a longer historical baseline than instrumental measurements, phenological observations provide high temporal resolution of ongoing changes related to global warming . [ 4 ] [ 5 ]
The word is derived from the Greek φαίνω ( phainō ), "to show, to bring to light, make to appear" [ 6 ] + λόγος ( logos ), amongst others "study, discourse, reasoning" [ 7 ] and indicates that phenology has been principally concerned with the dates of first occurrence of biological events in their annual cycle.
The term was first used by Charles François Antoine Morren , a professor of botany at the University of Liège ( Belgium ). [ 8 ] Morren was a student of Adolphe Quetelet . Quetelet made plant phenological observations at the Royal Observatory of Belgium in Brussels. He is considered "one of 19th century trendsetters in these matters." [ 9 ] In 1839, he started his first observations and created a network over Belgium and Europe that reached a total of about 80 stations in the period 1840–1870.
Morren participated in 1842 and 1843 in Quetelet's 'Observations of Periodical Phenomena' (Observations des Phénomènes périodiques), [ 10 ] and at first suggested calling the observations concerning botanical phenomena "anthochronological observations". That term had already been used in 1840 by Carl Joseph Kreutzer .
On 16 December 1849, Morren used the term 'phenology' for the first time in a public lecture at the Royal Academy of Science, Letters and Fine Arts of Belgium in Brussels, [ 11 ] [ 12 ] to describe "the specific science which has the goal to know the manifestation of life ruled by the time." [ 13 ]
Four years later, Morren published "Phenological Memories". [ 14 ] The term may not have been common in the decades to follow, as in an article in The Zoologist of 1899 describing an ornithological meeting in Sarajevo, where "questions of Phaenology" were discussed, a footnote by the Editor, William Lucas Distant , says: "This word is seldom used, and we have been informed by a very high authority that it may be defined as "Observational Biology", and as applied to birds, as it is here, may be taken to mean the study or science of observations on the appearance of birds". [ 15 ]
Observations of phenological events have provided indications of the progress of the natural calendar since ancient agricultural times. Many cultures have traditional phenological proverbs and sayings which indicate a time for action: "When the sloe tree is white as a sheet, sow your barley whether it be dry or wet" or attempt to forecast future climate: "If oak's before ash, you're in for a splash. If ash before oak, you're in for a soak". But the indications can be pretty unreliable, as an alternative version of the rhyme shows: "If the oak is out before the ash , 'Twill be a summer of wet and splash; If the ash is out before the oak, 'Twill be a summer of fire and smoke." Theoretically, though, these are not mutually exclusive, as one forecasts immediate conditions and one forecasts future conditions.
The North American Bird Phenology Program at USGS Patuxent Wildlife Research Center (PWRC) is in possession of a collection of millions of bird arrival and departure date records for over 870 species across North America, dating between 1880 and 1970. This program, originally started by Wells W. Cooke , involved over 3,000 observers including many notable naturalists of the time. The program ran for 90 years and came to a close in 1970 when other programs starting up at PWRC took precedence. The program was again started in 2009 to digitize the collection of records and now with the help of citizens worldwide, each record is being transcribed into a database which will be publicly accessible for use.
The English naturalists Gilbert White and William Markwick reported the seasonal events of more than 400 plant and animal species, Gilbert White in Selborne , Hampshire, and William Markwick in Battle, Sussex, over a 25-year period between 1768 and 1793. The data, reported in White's Natural History and Antiquities of Selborne , [ 17 ] are given only as the earliest and latest dates for each event over the 25 years, so annual changes cannot be determined.
In Japan and China the time of blossoming of cherry and peach trees is associated with ancient festivals and some of these dates can be traced back to the eighth century. Such historical records may, in principle, be capable of providing estimates of climate at dates before instrumental records became available. For example, records of the harvest dates of the pinot noir grape in Burgundy have been used in an attempt to reconstruct spring–summer temperatures from 1370 to 2003; [ 18 ] [ 19 ] the reconstructed values during 1787–2000 have a correlation with Paris instrumental data of about 0.75.
Robert Marsham , the founding father of modern phenological recording, was a wealthy landowner who kept systematic records of "Indications of spring" on his estate at Stratton Strawless , Norfolk , from 1736. These took the form of dates of the first occurrence of events such as flowering, bud burst, emergence or flight of an insect. Generations of Marsham's family maintained consistent records of the same events or "phenophases" over unprecedentedly long periods of time, eventually ending with the death of Mary Marsham in 1958, so that trends can be observed and related to long-term climate records. The data show significant variation in dates which broadly correspond with warm and cold years. Between 1850 and 1950 a long-term trend of gradual climate warming is observable, and during this same period the Marsham record of oak-leafing dates tended to become earlier. [ 20 ]
After 1960 the rate of warming accelerated, and this is mirrored by increasing earliness of oak leafing, recorded in the data collected by Jean Combes in Surrey. Over the past 250 years, the first leafing date of oak appears to have advanced by about 8 days, corresponding to overall warming on the order of 1.5 °C in the same period.
Towards the end of the 19th century the recording of the appearance and development of plants and animals became a national pastime, and between 1891 and 1948 the Royal Meteorological Society (RMS) organised a programme of phenological recording across the British Isles. Up to 600 observers submitted returns in some years, with numbers averaging a few hundred. During this period 11 main plant phenophases were consistently recorded over the 58 years from 1891 to 1948, and a further 14 phenophases were recorded for the 20 years between 1929 and 1948. The returns were summarised each year in the Quarterly Journal of the RMS as The Phenological Reports . Jeffree (1960) summarised the 58 years of data, [ 21 ] which show that flowering dates could be as many as 21 days early and as many as 34 days late, with extreme earliness greatest in summer-flowering species, and extreme lateness in spring-flowering species. In all 25 species, the timings of all phenological events are significantly related to temperature, [ 22 ] [ 23 ] indicating that phenological events are likely to get earlier as climate warms.
The Phenological Reports ended suddenly in 1948 after 58 years, and Britain remained without a national recording scheme for almost 50 years, just at a time when climate change was becoming evident. During this period, individual dedicated observers made important contributions. The naturalist and author Richard Fitter recorded the First Flowering Date (FFD) of 557 species of British flowering plants in Oxfordshire between about 1954 and 1990. Writing in Science in 2002, Richard Fitter and his son Alistair Fitter found that "the average FFD of 385 British plant species has advanced by 4.5 days during the past decade compared with the previous four decades." [ 24 ] [ 25 ] They note that FFD is sensitive to temperature, as is generally agreed, that "150 to 200 species may be flowering on average 15 days earlier in Britain now than in the very recent past" and that these earlier FFDs will have "profound ecosystem and evolutionary consequences". In Scotland, David Grisenthwaite meticulously recorded the dates he mowed his lawn since 1984. His first cut of the year was 13 days earlier in 2004 than in 1984, and his last cut was 17 days later, providing evidence for an earlier onset of spring and a warmer climate in general. [ 26 ] [ 27 ] [ 28 ]
National recording was resumed by Tim Sparks in 1998 [ 29 ] and, from 2000, [ 30 ] has been led by citizen science project Nature's Calendar [2] , run by the Woodland Trust and the Centre for Ecology and Hydrology . Latest research shows that oak bud burst has advanced more than 11 days since the 19th century and that resident and migrant birds are unable to keep up with this change. [ 31 ]
In Europe, phenological networks are operated in several countries, e.g. Germany's national meteorological service operates a very dense network with approx. 1200 observers, the majority of them on a voluntary basis. [ 32 ] The Pan European Phenology (PEP) project is a database that collects phenological data from European countries. Currently 32 European meteorological services and project partners from across Europe have joined and supplied data. [ 33 ]
In Geneva , Switzerland , the opening of the first leaf of an official chestnut tree (a horse chestnut ) has been observed and recorded since 1818, thus forming the oldest set of records of phenological events in Switzerland. [ 34 ] This task is conducted by the secretary of the Grand Council of Geneva (the local parliament), and the opening of the first leaf is announced publicly as indicating the beginning of the Spring . Data show a trend during the 20th century towards an opening that happens earlier and earlier. [ 35 ]
There is a USA National Phenology Network [3] in which both professional scientists and lay recorders participate.
Many other countries such as Canada (Alberta Plantwatch [4] and Saskatchewan PlantWatch [ 36 ] ), China and Australia [ 37 ] [ 38 ] also have phenological programs.
In eastern North America, almanacs are traditionally used by farmers for information on action phenology (in agriculture), taking into account the astronomical positions at the time.
William Felker has studied phenology in Ohio , US, since 1973 and now publishes "Poor Will's Almanack", a phenological almanac for farmers (not to be confused with a late 18th-century almanac by the same name).
In the Amazon rainforests of South America, the timing of leaf production and abscission has been linked to rhythms in gross primary production at several sites. [ 39 ] [ 40 ] Early in their lifespan, leaves reach a peak in their capacity for photosynthesis , [ 41 ] and in tropical evergreen forests of some regions of the Amazon basin (particularly regions with long dry seasons), many trees produce more young leaves in the dry season, [ 42 ] seasonally increasing the photosynthetic capacity of the forest. [ 43 ]
Recent technological advances in studying the earth from space have resulted in a new field of phenological research that is concerned with observing the phenology of whole ecosystems and stands of vegetation on a global scale using proxy approaches. These methods complement the traditional phenological methods which recorded the first occurrences of individual species and phenophases.
The most successful of these approaches is based on tracking the temporal change of a vegetation index (such as the Normalized Difference Vegetation Index, NDVI). NDVI makes use of the vegetation's typically low reflection in the red (red energy is mostly absorbed by growing plants for photosynthesis) and strong reflection in the near infrared (infrared energy is mostly reflected by plants due to their cellular structure). Due to its robustness and simplicity, NDVI has become one of the most popular remote sensing based products. Typically, a vegetation index is constructed in such a way that the attenuated reflected sunlight energy (1% to 30% of incident sunlight) is amplified by ratioing the red and NIR bands according to the equation NDVI = (NIR - Red) / (NIR + Red).
The evolution of the vegetation index through time exhibits a strong correlation with the typical stages of green vegetation growth (emergence, vigor/growth, maturity, and harvest/senescence). These temporal curves are analyzed to extract useful parameters about the vegetation growing season (start of season, end of season, length of growing season , etc.). Other growing season parameters could potentially be extracted, and global maps of any of these growing season parameters could then be constructed and used in all sorts of climatic change studies.
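A minimal sketch of this kind of analysis, assuming per-pixel red and near-infrared reflectances and a simple amplitude-threshold definition of the growing season (the 0.5 threshold and the names are illustrative choices, not a standard), is shown below.

    import numpy as np

    def ndvi(red, nir):
        # Normalized Difference Vegetation Index from red and near-infrared reflectance
        red, nir = np.asarray(red, dtype=float), np.asarray(nir, dtype=float)
        return (nir - red) / (nir + red)

    def growing_season(ndvi_series, days, threshold=0.5):
        # Crude start/end/length of season: first and last observation on which the
        # NDVI curve exceeds a fraction `threshold` of its seasonal amplitude
        v = np.asarray(ndvi_series, dtype=float)
        level = v.min() + threshold * (v.max() - v.min())
        above = np.where(v >= level)[0]
        start, end = days[above[0]], days[above[-1]]
        return start, end, end - start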
A noteworthy example of the use of remote sensing based phenology is the work of Ranga Myneni [ 46 ] from Boston University . This work [ 47 ] showed an apparent increase in vegetation productivity that most likely resulted from the increase in temperature and lengthening of the growing season in the boreal forest . [ 48 ] Another example based on the MODIS enhanced vegetation index (EVI) reported by Alfredo Huete [ 49 ] at the University of Arizona and colleagues showed that the Amazon Rainforest , as opposed to the long-held view of a monotonous growing season or growth only during the wet rainy season, does in fact exhibit growth spurts during the dry season. [ 50 ] [ 51 ]
However, these phenological parameters are only an approximation of the true biological growth stages. This is mainly due to the limitation of current space-based remote sensing, especially the spatial resolution, and the nature of vegetation index. A pixel in an image does not contain a pure target (like a tree, a shrub, etc.) but contains a mixture of whatever intersected the sensor's field of view.
Most species, including both plants and animals, interact with one another within ecosystems and habitats, known as biological interactions . [ 52 ] These interactions (whether it be plant-plant, animal-animal, predator-prey or plant-animal interactions) can be vital to the success and survival of populations and therefore species.
Many species experience changes in life-cycle development, migration, or some other process or behavior at different times in the season than previous patterns would predict, owing to warming temperatures. Phenological mismatches, in which interacting species change the timing of regularly repeated phases in their life cycles at different rates, disrupt the timing of the interaction and thereby harm it. [ 53 ] Mismatches can occur in many different biological interactions, including between species in one trophic level ( intratrophic interactions, i.e. plant-plant), between different trophic levels ( intertrophic interactions, i.e. plant-animal), or through creating competition ( intraguild interactions). [ 54 ] For example, if a plant species blooms earlier than in previous years, but the pollinators that feed on and pollinate its flowers do not arrive or develop earlier as well, then a phenological mismatch has occurred. This results in the plant population declining, as there are no pollinators to aid in its reproductive success. [ 55 ] Another example involves the interaction between plant species, where the presence of one species aids the pollination of another by attracting pollinators. If these plant species develop at mismatched times, this interaction will be negatively affected, and the plant species that relies on the other will be harmed.
Phenological mismatches mean the loss of many biological interactions, and ecosystem functions are therefore also at risk of being negatively affected or lost altogether. Phenological mismatches will affect species' and ecosystems' food webs , reproductive success, resource availability, and population and community dynamics in future generations, and therefore evolutionary processes and overall biodiversity . | https://en.wikipedia.org/wiki/Phenology
Phenolphthalein ( / f ɛ ˈ n ɒ l ( f ) θ ə l iː n / [ citation needed ] feh- NOL(F) -thə-leen ) is a chemical compound with the formula C 20 H 14 O 4 and is often written as " HIn ", " HPh ", " phph " or simply " Ph " in shorthand notation. Phenolphthalein is often used as an indicator in acid–base titrations . For this application, it turns colorless in acidic solutions and pink in basic solutions. It belongs to the class of dyes known as phthalein dyes .
Phenolphthalein is slightly soluble in water and usually is dissolved in alcohols for use in experiments. It is a weak acid, which can lose H + ions in solution. The nonionized phenolphthalein molecule is colorless and the doubly deprotonated phenolphthalein ion is fuchsia . Further proton loss at higher pH occurs slowly and leads to a colorless form. Phenolphthalein ion in concentrated sulfuric acid is orange-red due to sulfonation . [ 2 ]
Phenolphthalein's common use is as an indicator in acid-base titrations. It also serves as a component of universal indicator , together with methyl red , bromothymol blue , and thymol blue . [ 3 ]
Phenolphthalein adopts different forms in aqueous solution depending on the pH of the solution. [ 4 ] [ 2 ] [ 5 ] [ 6 ] Inconsistency exists in the literature about hydrated forms of the compounds and the color of sulfuric acid. Wittke reported in 1983 that it exists in protonated form (H 3 In + ) under strongly acidic conditions, providing an orange coloration. However, a later paper suggested that this color is due to sulfonation to phenolsulfonphthalein . [ 2 ]
The lactone form (H 2 In) is colorless between strongly acidic and slightly basic conditions. The doubly deprotonated (In 2- ) phenolate form (the anion form of phenol) gives the familiar pink color. In strongly basic solutions, phenolphthalein is converted to its In(OH) 3− form, and its pink color undergoes a rather slow fading reaction [ 6 ] and becomes completely colorless when pH is greater than 13.
The p K a values of phenolphthalein were found to be 9.05, 9.50 and 12 while those of phenolsulfonphthalein are 1.2 and 7.70. The p K a for the color change is 9.50. [ 2 ]
Phenolphthalein's pH sensitivity is exploited in other applications: concrete has naturally high pH due to the calcium hydroxide formed when Portland cement reacts with water. As the concrete reacts with carbon dioxide in the atmosphere, pH decreases to 8.5–9. When a 1% phenolphthalein solution is applied to normal concrete, it turns bright pink. However, if it remains colorless, it shows that the concrete has undergone carbonation . In a similar application, some spackling used to repair holes in drywall contains phenolphthalein. When applied, the basic spackling material retains a pink color; when the spackling has cured by reaction with atmospheric carbon dioxide, the pink color fades. [ 8 ]
In a highly basic solution, phenolphthalein's slow change from pink to colorless as it is converted to its Ph(OH) 3− form is used in chemistry classes for the study of reaction kinetics .
Phenolphthalein is used in toys, for example as a component of disappearing inks, or as a disappearing dye on the "Hollywood Hair" Barbie hair. In the ink, it is mixed with sodium hydroxide , which reacts with carbon dioxide in the air. This reaction leads to the pH falling below the color change threshold as hydrogen ions are released by the reaction of carbon dioxide with water: CO 2 + H 2 O ⇌ H 2 CO 3 ⇌ H + + HCO 3 −
To develop the hair and "magic" graphical patterns, the ink is sprayed with a solution of hydroxide, which leads to the appearance of the hidden graphics by the same mechanism described above for color change in alkaline solution. The pattern will eventually disappear again because of the reaction with carbon dioxide . Thymolphthalein is used for the same purpose and in the same way, when a blue color is desired. [ 9 ]
A reduced form of phenolphthalein, phenolphthalin, which is colorless, is used in a test to identify substances thought to contain blood, commonly known as the Kastle–Meyer test . A dry sample is collected with a swab or filter paper. A few drops of alcohol, then a few drops of phenolphthalein, and finally a few drops of hydrogen peroxide are dripped onto the sample. If the sample contains hemoglobin , it will turn pink immediately upon addition of the peroxide, because of the generation of phenolphthalein. A positive test indicates the sample contains hemoglobin and, therefore, is likely blood. A false positive can result from the presence of substances with catalytic activity similar to hemoglobin. This test is not destructive to the sample; it can be kept and used in further tests. This test has the same reaction with blood from any animal whose blood contains hemoglobin, including almost all vertebrates; further testing would be required to determine whether it originated from a human.
Phenolphthalein has been used for over a century as a laxative , but is now being removed from over-the-counter laxatives [ 10 ] over concerns of carcinogenicity . [ 11 ] [ 12 ] Laxative products formerly containing phenolphthalein have often been reformulated with alternative active ingredients: Feen-a-Mint [ 13 ] switched to bisacodyl , and Ex-Lax [ 14 ] was switched to a senna extract .
Thymolphthalein is a related laxative made from thymol .
Despite concerns regarding its carcinogenicity based on rodent studies, the use of phenolphthalein as a laxative is unlikely to cause ovarian cancer . [ 15 ] Some studies suggest a weak association with colon cancer , while others show none at all. [ 16 ]
Phenolphthalein is described as a stimulant laxative. [ 16 ] In addition, it has been found to inhibit human cellular calcium influx via store-operated calcium entry (SOCE, see Calcium release activated channel § Structure ) in vivo . This is effected by its inhibiting thrombin and thapsigargin , two activators of SOCE that increase intracellular free calcium. [ 17 ]
Phenolphthalein has been added to the European Chemicals Agency 's candidate list for substance of very high concern (SVHC). [ 18 ] It is on the IARC group 2B list for substances "possibly carcinogenic to humans". [ 19 ]
The discovery of phenolphthalein's laxative effect was due to an attempt by the Hungarian government to label [ clarification needed ] genuine local white wine with the substance in 1900. Phenolphthalein did not change the taste of the wine and would change color when a base is added, making it a good label in principle. However, it was found that ingestion of the substance led to diarrhea. Max Kiss, a Hungarian-born pharmacist residing in New York, heard about the news and launched Ex-Lax in 1906. [ 20 ] [ 19 ]
Phenolphthalein can be synthesized by condensation of phthalic anhydride with two equivalents of phenol under acidic conditions. It was discovered in 1871 by Adolf von Baeyer . [ 21 ] [ 22 ] [ 23 ] | https://en.wikipedia.org/wiki/Phenolphthalein |
In organic chemistry , phenols , sometimes called phenolics , are a class of chemical compounds consisting of one or more hydroxyl groups (− O H ) bonded directly to an aromatic hydrocarbon group. [ 1 ] The simplest is phenol , C 6 H 5 OH . Phenolic compounds are classified as simple phenols or polyphenols based on the number of phenol units in the molecule.
Phenols are both synthesized industrially and produced by plants and microorganisms. [ 2 ]
Phenols are more acidic than typical alcohols. The acidity of the hydroxyl group in phenols is commonly intermediate between that of aliphatic alcohols and carboxylic acids (their pK a is usually between 10 and 12). Deprotonation of a phenol forms a corresponding negative phenolate ion or phenoxide ion, and the corresponding salts are called phenolates or phenoxides (aryloxides, according to the IUPAC Gold Book). [ citation needed ]
Phenols are susceptible to electrophilic aromatic substitutions . Condensation with formaldehyde gives resinous materials, famously Bakelite . [ citation needed ]
Another industrial-scale electrophilic aromatic substitution is the production of bisphenol A , which is produced by the condensation with acetone . [ 3 ]
Phenol is readily alkylated at the ortho positions using alkenes in the presence of a Lewis acid such as aluminium phenoxide : [ citation needed ]
More than 100,000 tons of tert-butyl phenols are produced annually (as of 2000) in this way, using isobutylene (CH 2 =CMe 2 ) as the alkylating agent. Especially important is 2,6-di- tert -butylphenol , a versatile antioxidant . [ 3 ]
Phenols undergo esterification ; phenol esters are active esters , being prone to hydrolysis . Phenols are also reactive toward oxidation . Oxidative cleavage is possible, for instance the cleavage of 1,2-dihydroxybenzene to the monomethyl ester of 2,4-hexadienedioic acid with oxygen and copper chloride in pyridine . [ 4 ] Oxidative de-aromatization to quinones , also known as the Teuber reaction , uses oxidizing reagents such as Fremy's salt [ 5 ] and oxone . [ 6 ] For example, 3,4,5-trimethylphenol reacts with singlet oxygen , generated from oxone and sodium carbonate in an acetonitrile /water mixture, to give a para-peroxyquinol; this hydroperoxide is reduced to the quinol with sodium thiosulfate .
Phenols are oxidized to hydroquinones in the Elbs persulfate oxidation .
Naphthols react with hydrazines and sodium bisulfite in the Bucherer carbazole synthesis .
Many phenols of commercial interest are prepared by elaboration of phenol or cresols . They are typically produced by the alkylation of benzene or toluene with propylene to form cumene , which is then oxidized with O2 and cleaved with H2SO4 to form phenol (the Hock process ). In addition to the reactions above, many other more specialized reactions produce phenols:
There are various classification schemes. [ 15 ] : 2 A commonly used scheme is based on the number of carbons and was devised by Jeffrey Harborne and Simmonds in 1964 and published in 1980: [ 15 ] : 2 [ 16 ]
More than 371 drugs approved by the FDA between 1951 and 2020 contain either a phenol or a phenolic ether (a phenol in which the hydroxyl group bears an alkyl substituent), with nearly every class of small-molecule drugs represented, and natural products making up a large portion of this list. [ 17 ]
In chemical analysis , phenols can be detected using 2,6‑dibromoquinonechlorimide . [ 18 ] It reacts with phenols to form indophenols , resulting in a color change. [ 19 ] | https://en.wikipedia.org/wiki/Phenols |
Phenol–chloroform extraction is a liquid-liquid extraction technique in molecular biology used to separate nucleic acids from proteins and lipids . [ 1 ]
Aqueous samples, lysed cells, or homogenised tissue are mixed with equal volumes of a phenol : chloroform mixture. The mixture is then centrifuged. Because the phenol:chloroform mixture is immiscible with water, centrifugation causes two distinct phases to form: an upper aqueous phase and a lower organic phase. The aqueous phase rises to the top because it is less dense than the organic phase containing the phenol:chloroform. Because phenol on its own is only slightly denser than water, it is mixed with chloroform to form a mixture whose density is much higher than that of water, which ensures a clean separation of the two phases.
The hydrophobic lipids partition into the lower organic phase, and the proteins remain at the interphase between the two phases, while the nucleic acids (as well as other contaminants such as salts and sugars) remain in the upper aqueous phase. The upper aqueous phase can then be pipetted off. Care must be taken to avoid pipetting any of the organic phase or material at the interface . This procedure is often performed multiple times to increase the purity of the DNA. [ 2 ] The procedure yields large double-stranded DNA that can be used in PCR or RFLP .
If the mixture is acidic, DNA will precipitate into the organic phase while RNA remains in the aqueous phase. This is because DNA is more readily neutralized than RNA.
There are some disadvantages of this technique in forensic use. It is time-consuming and uses hazardous reagents. Also, because it is a two-step process involving transfer of reagents between tubes, it is at a greater risk of contamination . [ 3 ]
| https://en.wikipedia.org/wiki/Phenol–chloroform_extraction |
In genetics , the phenotype (from Ancient Greek φαίνω ( phaínō ) ' to appear, show ' and τύπος ( túpos ) ' mark, type ' ) is the set of observable characteristics or traits of an organism . [ 1 ] [ 2 ] The term covers the organism's morphology (physical form and structure), its developmental processes, its biochemical and physiological properties, its behavior , and the products of behavior. [ citation needed ] An organism's phenotype results from two basic factors: the expression of an organism's genetic code (its genotype ) and the influence of environmental factors. Both factors may interact, further affecting the phenotype. When two or more clearly different phenotypes exist in the same population of a species, the species is called polymorphic . A well-documented example of polymorphism is Labrador Retriever coloring ; while the coat color depends on many genes, it is clearly seen in the environment as yellow, black, and brown. Richard Dawkins in 1978 [ 3 ] and again in his 1982 book The Extended Phenotype suggested that one can regard bird nests and other built structures such as caddisfly larva cases and beaver dams as "extended phenotypes".
Wilhelm Johannsen proposed the genotype–phenotype distinction in 1911 to make clear the difference between an organism's hereditary material and what that hereditary material produces. [ 4 ] [ 5 ] The distinction resembles that proposed by August Weismann (1834–1914), who distinguished between germ plasm (heredity) and somatic cells (the body). More recently in The Selfish Gene (1976), Dawkins distinguished these concepts as replicators and vehicles.
Despite its seemingly straightforward definition, the concept of the phenotype has hidden subtleties. It may seem that anything dependent on the genotype is a phenotype, including molecules such as RNA and proteins . Most molecules and structures coded by the genetic material are not visible in the appearance of an organism, yet they are observable (for example by Western blotting ) and are thus part of the phenotype; human blood groups are an example. It may seem that this goes beyond the original intentions of the concept with its focus on the (living) organism in itself. Either way, the term phenotype includes inherent traits or characteristics that are observable or traits that can be made visible by some technical procedure. [ citation needed ]
The term "phenotype" has sometimes been incorrectly used as a shorthand for the phenotypic difference between a mutant and its wild type , which would lead to the false statement that a
"mutation has no phenotype". [ 6 ]
Behaviors and their consequences are also phenotypes, since behaviors are observable characteristics. Behavioral phenotypes include cognitive, personality, and behavioral patterns. Some behavioral phenotypes may characterize psychiatric disorders [ 7 ] or syndromes. [ 8 ] [ 9 ]
A phenome is the set of all traits expressed by a cell , tissue , organ , organism , or species . The term was first used by Davis in 1949, "We here propose the name phenome for the sum total of extragenic, non-autoreproductive portions of the cell, whether cytoplasmic or nuclear. The phenome would be the material basis of the phenotype, just as the genome is the material basis of the genotype ." [ 10 ] Although phenome has been in use for many years, the distinction between the use of phenome and phenotype is problematic. A proposed definition for both terms as the "physical totality of all traits of an organism or of one of its subsystems" was put forth by Mahner and Kary in 1997, who argue that although scientists tend to intuitively use these and related terms in a manner that does not impede research, the terms are not well defined and usage of the terms is not consistent. [ 11 ]
Some usages of the term suggest that the phenome of a given organism is best understood as a kind of matrix of data representing physical manifestation of phenotype. For example, discussions led by A. Varki among those who had used the term up to 2003 suggested the following definition: "The body of information describing an organism's phenotypes, under the influences of genetic and environmental factors". [ 12 ] Another team of researchers characterize "the human phenome [as] a multidimensional search space with several neurobiological levels, spanning the proteome, cellular systems (e.g., signaling pathways), neural systems and cognitive and behavioural phenotypes." [ 13 ] Plant biologists have begun to explore the phenome in the study of plant physiology. [ 14 ] In 2009, a research team demonstrated the feasibility of identifying genotype–phenotype associations using electronic health records (EHRs) linked to DNA biobanks . They called this method phenome-wide association study (PheWAS). [ 15 ]
Inspired by the evolution from genotype to genome to pan-genome , a concept of eventually exploring the relationship among the pan-phenome, pan-genome , and pan-envirome was proposed in 2023. [ 16 ]
Phenotypic variation (due to underlying heritable genetic variation ) is a fundamental prerequisite for evolution by natural selection . It is the living organism as a whole that contributes (or not) to the next generation, so natural selection affects the genetic structure of a population indirectly via the contribution of phenotypes. Without phenotypic variation, there would be no evolution by natural selection. [ 17 ]
The interaction between genotype and phenotype has often been conceptualized by the following relationship: genotype (G) + environment (E) → phenotype (P)
A more nuanced version of the relationship is: genotype (G) + environment (E) + genotype–environment interactions (GE) → phenotype (P)
Genotypes often have much flexibility in the modification and expression of phenotypes; in many organisms these phenotypes are very different under varying environmental conditions. The plant Hieracium umbellatum is found growing in two different habitats in Sweden . One habitat is rocky, sea-side cliffs , where the plants are bushy with broad leaves and expanded inflorescences ; the other is among sand dunes where the plants grow prostrate with narrow leaves and compact inflorescences. The habitats alternate along the coast of Sweden, and the habitat in which the seeds of Hieracium umbellatum land determines the phenotype that grows. [ 18 ]
An example of random variation in Drosophila flies is the number of ommatidia , which may vary (randomly) between left and right eyes in a single individual as much as they do between different genotypes overall, or between clones raised in different environments. [ citation needed ]
The concept of phenotype can be extended to variations below the level of the gene which affect an organism's fitness. For example, silent mutations that do not change the corresponding amino acid sequence of a gene may change the frequency of guanine - cytosine base pairs ( GC content ). The base pairs have a higher thermal stability ( melting point ) than adenine - thymine , a property that might convey, among organisms living in high-temperature environments, a selective advantage on variants enriched in GC content. [ citation needed ]
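As a minimal illustration of the property involved (the helper function and the codon choice below are illustrative assumptions, not taken from the cited literature), GC content can be computed directly from a sequence, and synonymous codons can differ in it even though the encoded amino acid is unchanged:

import typing

def gc_content(seq: str) -> float:
    # fraction of G and C bases in a DNA sequence
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# TTA and CTG both encode leucine, yet differ in GC content (0.0 vs ~0.67)
for codon in ("TTA", "CTG"):
    print(codon, round(gc_content(codon), 2))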
Richard Dawkins described a phenotype that included all effects that a gene has on its surroundings, including other organisms, as an extended phenotype, arguing that "An animal's behavior tends to maximize the survival of the genes 'for' that behavior, whether or not those genes happen to be in the body of the particular animal performing it." [ 3 ] For instance, an organism such as a beaver modifies its environment by building a beaver dam ; this can be considered an expression of its genes , just as its incisor teeth are—which it uses to modify its environment. Similarly, when a bird feeds a brood parasite such as a cuckoo , it is unwittingly extending its phenotype; and when genes in an orchid affect orchid bee behavior to increase pollination, or when genes in a peacock affect the copulatory decisions of peahens, again, the phenotype is being extended. Genes are, in Dawkins's view, selected by their phenotypic effects. [ 19 ]
Other biologists broadly agree that the extended phenotype concept is relevant, but consider that its role is largely explanatory, rather than assisting in the design of experimental tests. [ 20 ]
Phenotypes are determined by an interaction of genes and the environment, but the mechanism for each gene and phenotype is different. For instance, an albino phenotype may be caused by a mutation in the gene encoding tyrosinase which is a key enzyme in melanin formation. However, exposure to UV radiation can increase melanin production, hence the environment plays a role in this phenotype as well. For most complex phenotypes the precise genetic mechanism remains unknown. For instance, it is largely unclear how genes determine the shape of bones or the human ear. [ citation needed ]
Gene expression plays a crucial role in determining the phenotypes of organisms. The level of gene expression can affect the phenotype of an organism. For example, if a gene that codes for a particular enzyme is expressed at high levels, the organism may produce more of that enzyme and exhibit a particular trait as a result. On the other hand, if the gene is expressed at low levels, the organism may produce less of the enzyme and exhibit a different trait. [ 22 ] Gene expression is regulated at various levels and thus each level can affect certain phenotypes, including transcriptional and post-transcriptional regulation. [ citation needed ]
Changes in the levels of gene expression can be influenced by a variety of factors, such as environmental conditions, genetic variations, and epigenetic modifications. These modifications can be influenced by environmental factors such as diet, stress, and exposure to toxins, and can have a significant impact on an individual's phenotype. Some phenotypes may be the result of changes in gene expression due to these factors, rather than changes in genotype. A study applying machine learning methods to gene expression levels measured by RNA sequencing found that expression profiles can contain enough signal to separate individuals in the context of phenotype prediction. [ 23 ]
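A minimal sketch of that general idea (not the cited study's actual pipeline; the data are simulated and the choice of estimator is an assumption) is to train a classifier on an expression matrix whose rows are individuals and whose columns are genes, and to evaluate it with cross-validation:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_individuals, n_genes = 120, 500
expression = rng.normal(size=(n_individuals, n_genes))               # simulated expression levels
phenotype = (expression[:, 0] + expression[:, 1] > 0).astype(int)    # phenotype driven by two genes

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, expression, phenotype, cv=5)         # cross-validated accuracy
print(scores.mean())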
Although a phenotype is the ensemble of observable characteristics displayed by an organism, the word phenome is sometimes used to refer to a collection of traits, while the simultaneous study of such a collection is referred to as phenomics . [ 24 ] [ 25 ] Phenomics is an important field of study because it can be used to figure out which genomic variants affect phenotypes which then can be used to explain things like health, disease, and evolutionary fitness. [ 26 ] Phenomics forms a large part of the Human Genome Project . [ 27 ]
Phenomics has applications in agriculture. For instance, genomic variants underlying traits such as drought and heat resistance can be identified through phenomics to create more durable GMOs. [ 28 ] [ 14 ] Phenomics may be a stepping stone towards personalized medicine , particularly drug therapy . [ 29 ] Once the phenomic database has acquired enough data, a person's phenomic information can be used to select specific drugs tailored to the individual. [ 29 ]
Large-scale genetic screens can identify the genes or mutations that affect the phenotype of an organism. Analyzing the phenotypes of mutant genes can also aid in determining gene function. [ 30 ] Most genetic screens have used microorganisms, in which genes can be easily deleted. For instance, nearly all genes have been deleted in E. coli [ 31 ] and many other bacteria , but also in several eukaryotic model organisms such as baker's yeast [ 32 ] and fission yeast . [ 33 ] Among other discoveries, such studies have revealed lists of essential genes .
More recently, large-scale phenotypic screens have also been used in animals, e.g. to study lesser understood phenotypes such as behavior . In one screen, the role of mutations in mice was studied in areas including learning and memory , circadian rhythmicity , vision, responses to stress, and response to psychostimulants .
This experiment involves the progeny of mice treated with ENU , or N-ethyl-N-nitrosourea, a potent mutagen that causes point mutations . The mice were phenotypically screened for alterations in the different behavioral domains in order to find putative mutants. Putative mutants are then tested for heritability in order to help determine the inheritance pattern as well as to map the mutations. Once the mutations have been mapped, cloned, and identified, it can be determined whether a mutation represents a new gene or not.
These experiments show that mutations in the rhodopsin gene affected vision and can even cause retinal degeneration in mice. [ 34 ] The same amino acid change causes human familial blindness , showing how phenotyping in animals can inform medical diagnostics and possibly therapy.
The RNA world is the hypothesized pre-cellular stage in the evolutionary history of life on earth, in which self-replicating RNA molecules proliferated prior to the evolution of DNA and proteins. [ 35 ] The folded three-dimensional physical structure of the first RNA molecule that possessed ribozyme activity promoting replication while avoiding destruction would have been the first phenotype, and the nucleotide sequence of the first self-replicating RNA molecule would have been the original genotype. [ 35 ] | https://en.wikipedia.org/wiki/Phenome |
A phenomenological model is a scientific model that describes the empirical relationship of phenomena to each other, in a way which is consistent with fundamental theory, but is not directly derived from theory. In other words, a phenomenological model is not derived from first principles . A phenomenological model forgoes any attempt to explain why the variables interact the way they do, and simply attempts to describe the relationship, with the assumption that the relationship extends past the measured values. [ 1 ] [ page needed ] Regression analysis is sometimes used to create statistical models that serve as phenomenological models.
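A minimal sketch of how regression can produce such a model (the power-law form and the data below are illustrative assumptions, not drawn from any particular theory) is to fit y = a·x^b by ordinary least squares on log-transformed measurements:

import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = np.array([2.1, 3.0, 4.4, 6.1, 8.9])        # hypothetical measurements

# log-transform so the power law becomes linear: log y = log a + b * log x
b, log_a = np.polyfit(np.log(x), np.log(y), 1)
a = np.exp(log_a)
print(f"y ~ {a:.2f} * x**{b:.2f}")             # describes the relationship without explaining it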
Phenomenological models have been characterized as being completely independent of theories, [ 2 ] though many phenomenological models, while failing to be derivable from a theory, incorporate principles and laws associated with theories. [ 3 ] The liquid drop model of the atomic nucleus , for instance, portrays the nucleus as a liquid drop and describes it as having several properties (surface tension and charge, among others) originating in different theories (hydrodynamics and electrodynamics, respectively). Certain aspects of these theories—though usually not the complete theory—are then used to determine both the static and dynamical properties of the nucleus. | https://en.wikipedia.org/wiki/Phenomenological_model |
Phenomenological quantum gravity is the research field that deals with the phenomenology of quantum gravity . The relevance of this research area derives from the fact that none of the candidate theories for quantum gravity has yielded experimentally testable predictions. [ 1 ] Phenomenological models are designed to bridge this gap by allowing physicists to test for general properties that the hypothetical correct theory of quantum gravity has. Furthermore, due to this current lack of experiments, it is not known for sure that gravity is indeed quantum (i.e. that general relativity can be quantized ), and so evidence is required to determine whether this is the case. [ 2 ] Phenomenological models are also necessary to assess the promise of future quantum gravity experiments.
Direct experiments for quantum gravity (perhaps by detecting gravitons ) would require reaching the Planck energy — on the order of 10^28 eV , around 15 orders of magnitude higher than can be achieved with current particle accelerators — as well as needing a detector the size of a large planet . [ 3 ] [ 1 ] As a result, experimental investigation of quantum gravity was long thought to be impossible with current levels of technology. [ 4 ]
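For orientation, the Planck energy follows from the fundamental constants (the numerical values are standard figures, quoted here to three significant digits):

E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 1.96 \times 10^{9}\ \mathrm{J} \approx 1.22 \times 10^{19}\ \mathrm{GeV} \approx 1.22 \times 10^{28}\ \mathrm{eV}

Compared with roughly 10^13 eV per collision at the Large Hadron Collider, this is the source of the quoted gap of about 15 orders of magnitude.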
However, in the early 21st century, new experiment designs and technologies have arisen which suggest that indirect approaches to testing quantum gravity may be feasible over the next few decades. [ 1 ] [ 4 ] [ 5 ] [ 6 ]
| https://en.wikipedia.org/wiki/Phenomenological_quantum_gravity |
In physics , phenomenology is the application of theoretical physics to experimental data by making quantitative predictions based upon known theories. It is related to the philosophical notion of the same name in that these predictions describe anticipated behaviors for the phenomena in reality. Phenomenology stands in contrast with experimentation in the scientific method , in which the goal of the experiment is to test a scientific hypothesis instead of making predictions.
Phenomenology is commonly applied to the field of particle physics , where it forms a bridge between the mathematical models of theoretical physics (such as quantum field theories and theories of the structure of space-time ) and the results of the high-energy particle experiments. It is sometimes used in other fields such as in condensed matter physics [ 1 ] [ 2 ] and plasma physics , [ 3 ] [ 4 ] when there are no existing theories for the observed experimental data.
Within the well-tested and generally accepted Standard Model , phenomenology is the calculation of detailed predictions for experiments, usually at high precision (e.g., including radiative corrections ).
Examples include:
The CKM matrix is useful in these predictions:
In Physics beyond the Standard Model , phenomenology addresses the experimental consequences of new models : how their new particles could be searched for, how the model parameters could be measured, and how the model could be distinguished from other, competing models.
Phenomenological analyses study the experimental consequences of adding the most general set of beyond-the-Standard-Model effects in a given sector of the Standard Model , usually parameterized in terms of anomalous couplings and higher-dimensional operators. In this case, the term " phenomenological " is being used more in its philosophy of science sense. | https://en.wikipedia.org/wiki/Phenomenology_(physics) |
Phenomics is the systematic study of traits that make up an organism's phenotype , [ 1 ] [ 2 ] which changes over time due to development and aging, or through metamorphosis, such as when a caterpillar changes into a butterfly. The term "phenomics" was coined by scientist Steven A. Garan, working at UC Berkeley and LBNL . [ 3 ] [ 4 ] As such, it is a transdisciplinary area of research that involves biology , data sciences , engineering and other fields. Phenomics is concerned with the measurement of the phenome, the set of physical and biochemical traits that can be produced by a given organism over the course of development and in response to genetic mutation and environmental influences.
An organism's phenotype changes with time. The relationship between phenotype and genotype enables researchers to understand and study pleiotropy . [ 5 ] Phenomics concepts are used in functional genomics , pharmaceutical research , metabolic engineering , agricultural research , and increasingly in phylogenetics . [ 6 ]
Technical challenges involve improving, both qualitatively and quantitatively, the capacity to measure phenomes. [ 5 ]
In plant sciences, phenomics research occurs in both field and controlled environments. Field phenomics encompasses the measurement of phenotypes that occur in both cultivated and natural conditions, whereas controlled environment phenomics research involves the use of glass houses, growth chambers, and other systems where growth conditions can be manipulated. The University of Arizona 's Field Scanner [ 7 ] in Maricopa, Arizona is a platform developed to measure field phenotypes. Controlled environment systems include the Enviratron [ 8 ] at Iowa State University , the Plant Cultivation Hall under construction at IPK, and platforms at the Donald Danforth Plant Science Center , the University of Nebraska-Lincoln , and elsewhere.
A Minimal Information About a Plant Phenotyping Experiment (MIAPPE) standard [ 9 ] is available and in use among many researchers collecting and organizing plant phenomics data. A diverse set of computer vision methods exist to analyze 2D and 3D imaging data of plants. These methods are available to the community in various implementations, ranging from end-user ready cyber-platforms in the cloud such as DIRT [ 10 ] and PlantIt [ 11 ] to programming frameworks for software developers such as PlantCV. [ 12 ] Many research groups are focused on developing systems using the Breeding API, a Standardized RESTful Web Service API Specification for communicating Plant Breeding Data.
The Australian Plant Phenomics Facility (APPF), an initiative of the Australian government, has developed a number of new instruments for comprehensive and fast measurements of phenotypes in both the lab and the field.
The International Plant Phenotyping Network (IPPN) [ 13 ] is an organization that seeks to enable the exchange of knowledge, information, and expertise across the many disciplines involved in plant phenomics by providing a network linking members, platform operators, users, research groups, developers, and policy makers. Regional partners include the European Plant Phenotyping Network (EPPN), the North American Plant Phenotyping Network (NAPPN), [ 14 ] and others.
The European research infrastructure for plant phenotyping, EMPHASIS, [ 15 ] enables researchers to use facilities, services and resources for multi-scale plant phenotyping across Europe. EMPHASIS aims to promote future food security and agricultural business in a changing climate by enabling scientists to better understand plant performance and translate this knowledge into application. | https://en.wikipedia.org/wiki/Phenomics |
Phenoptosis (from pheno : showing or demonstrating; ptosis : programmed death, "falling off") is a conception of the self-programmed death of an organism proposed by Vladimir Skulachev in 1999.
In many species, including salmon and marsupial mice, under certain circumstances, especially following reproduction, an organism's genes will cause the organism to rapidly degenerate and die off. Recently this has been referred to as "fast phenoptosis", as aging is being explored as "slow phenoptosis". [ 1 ] Phenoptosis is a common feature of living species , whose ramifications for humans are still being explored. The concept of programmed cell death was used before, by Lockshin & Williams [ 2 ] in 1964 in relation to insect tissue development, around eight years before " apoptosis " was coined. The term 'phenoptosis' is a neologism associated with Skulachev's proposal.
In multicellular organisms, worn-out and ineffective cells are dismantled and recycled for the greater good of the whole organism in a process called apoptosis . [ 3 ] It is believed that phenoptosis is an evolutionary mechanism that culls out individuals that are damaged, aged, infectious, or in direct competition with their own offspring, [ 4 ] for the good of the species. Special circumstances need to exist for the "phenoptosis" strategy to be an evolutionarily stable strategy (ESS), let alone the only ESS. Examples of "phenoptosis" given below are really examples of semelparity – a life history with a single reproduction followed by death, which evolves not "for the good of the species" but as the ESS under conditions of a high adult-to-juvenile mortality ratio. The elimination of parts detrimental to the organism, or of individuals detrimental to the species, has been deemed "The samurai law of biology" – it is better to die than to be wrong. [ 5 ] Stress-induced, acute, or fast phenoptosis is the rapid deterioration of an organism induced by a life event such as breeding. Elimination of the parent provides space for fitter offspring. As a strategy this has been advantageous, particularly to species that die immediately after spawning. [ 4 ] Age-induced, soft, or slow phenoptosis is the slow deterioration and death of an organism due to accumulated stresses over long periods of time. In short, it has been proposed that aging, heart disease, cancer, and other age-related ailments are means of phenoptosis. "Death caused by aging clears the population of ancestors and frees space for progeny carrying new useful traits." [ 5 ] It has also been proposed that aging provides a selective advantage to brains over brawn. [ 6 ] An example given by V. P. Skulachev considers two hares, one faster and one smarter: the faster hare may have a selective advantage in youth, but as aging occurs and muscles deteriorate, it is the smarter hare that has the selective advantage. [ citation needed ]
Mitochondrial ROS – The production of ROS by the mitochondria. This causes oxidative damage to the inner compartment of the mitochondria and destruction of the mitochondria. [ 7 ]
Clk1 gene – the gene thought to be responsible for aging due to mitochondrial ROS. [ 13 ]
EF2 kinase – Blocks phosphorylation of elongation factor 2 thus blocking protein synthesis. [ 14 ]
Glucocorticoid regulation – A common route for phenoptosis is breakdown of glucocorticoid regulation and inhibition, leading to massive excess of these corticosteroids in the body. [ 5 ]
Robert Sapolsky discusses phenoptosis in his book Why Zebras Don't Get Ulcers , 3rd Ed., p. 245-247. He states that:
If you catch salmon right after they spawn ... you find they have huge adrenal glands , peptic ulcers , and kidney lesions , their immune systems have collapsed... [and they] have stupendously high glucocorticoid concentrations in their bloodstreams. When salmon spawn, regulation of their glucocorticoid secretion breaks down... But is the glucocorticoid excess really responsible for their death? Yup. Take a salmon right after spawning, remove its adrenals, and it will live for a year afterward.
The bizarre thing is that this sequence... not only occurs in five species of salmon, but also among a dozen species of Australian marsupial mice ... Pacific salmon and marsupial mice are not close relatives. At least twice in evolutionary history, completely independently, two very different sets of species have come up with the identical trick: if you want to degenerate very fast, secrete a ton of glucocorticoids . | https://en.wikipedia.org/wiki/Phenoptosis |
Phenothrin , also called sumithrin and d-phenothrin , [ 2 ] is a synthetic pyrethroid that kills adult fleas and ticks . It has also been used to kill head lice in humans. d-Phenothrin is used as a component of aerosol insecticides for domestic use. It is often used with methoprene , an insect growth regulator that interrupts the insect's biological lifecycle by killing the eggs.
Phenothrin is primarily used to kill fleas and ticks. [ 3 ] It is also used to kill head lice in humans, but studies conducted in Paris and the United Kingdom have shown widespread resistance to phenothrin. [ 3 ]
It is extremely toxic to bees. A U.S. Environmental Protection Agency (EPA) study found that 0.07 micrograms were enough to kill honey bees . [ 3 ] It is also extremely toxic to aquatic life, with a study showing concentrations of 0.03 ppb killing mysid shrimp. [ 3 ] Long-term exposure has increased the risk of liver cancer in rats and mice, at doses in the range of 100 milligrams per kilogram of body weight per day or above. [ 3 ] It is capable of killing mosquitoes , [ 4 ] but remains poisonous to cats and dogs, with seizures and deaths being reported due to poisoning. [ 3 ] Specific data on concentrations or exposure are lacking.
Phenothrin has been found to possess antiandrogen properties, and was responsible for a small epidemic of gynecomastia via isolated environmental exposure. [ 5 ] [ 6 ]
The EPA has not assessed its effect on cancer in humans. However, one study performed by the Mount Sinai School of Medicine linked sumithrin with breast cancer , attributing the link to its effect of increasing the expression of a gene responsible for mammary tissue proliferation. [ 3 ]
In 2005, the U.S. EPA cancelled permission to use phenothrin in several flea and tick products, at the request of the manufacturer, Hartz Mountain Industries . [ 7 ] [ 8 ] The products were linked to a range of adverse reactions, including hair loss, salivation, tremors , and numerous deaths in cats and kittens. In the short term, the agreement called for new warning labels on the products.
As of March 31, 2006, the sale and distribution of Hartz's phenothrin-containing flea and tick products for cats has been terminated. However, EPA's product cancellation order did not apply to Hartz flea and tick products for dogs, and Hartz continues to produce many of its flea and tick products for dogs. [ 9 ] | https://en.wikipedia.org/wiki/Phenothrin |
The phenotype microarray approach is a technology for high-throughput phenotyping of cells.
A phenotype microarray system enables one to monitor simultaneously the phenotypic reaction of cells to environmental challenges or exogenous compounds in a high-throughput manner.
The phenotypic reactions are recorded as either end-point measurements or respiration kinetics similar to growth curves .
High-throughput phenotypic testing is increasingly important for exploring the biology of bacteria , fungi , yeasts , and animal cell lines such as human cancer cells . Just as DNA microarrays and proteomic technologies have made it possible to assay the expression level of thousands of genes or proteins all at once, phenotype microarrays (PMs) make it possible to quantitatively measure thousands of cellular phenotypes simultaneously. [ 1 ] The approach also offers potential for testing gene function and improving genome annotation. [ 2 ] In contrast to many of the hitherto available molecular high-throughput technologies, phenotypic testing is performed with living cells, thus providing comprehensive information about the performance of entire cells. The major applications of the PM technology are in the fields of systems biology , microbial cell physiology , microbiology , and taxonomy , [ 3 ] and mammalian cell physiology, including clinical research such as on autism . [ 4 ] Advantages of PMs over standard growth curves are that cellular respiration can be measured in environmental conditions where cellular replication (growth) may not be possible, [ 5 ] and that it is more accurate than optical density , which can vary between different cellular morphologies. In addition, respiration reactions are usually detected much earlier than cellular growth. [ 6 ]
A sole carbon source that can be transported into a cell and metabolized to produce NADH engenders a redox potential and a flow of electrons that reduces a tetrazolium dye, [ 7 ] such as tetrazolium violet, which produces a purple color. The more rapid this metabolic flow, the more quickly the purple color forms. The formation of purple color is a positive reaction, interpreted to mean that the sole carbon source is used as an energy source. A microplate reader and incubation facility are needed to provide the appropriate incubation conditions and to automatically read the intensity of color formation during tetrazolium reduction at intervals of, e.g., 15 minutes.
The principal idea of retrieving information about the abilities of an organism and its special modes of action when making use of certain energy sources can be equivalently applied to other macro-nutrients such as nitrogen , sulfur , or phosphorus and their compounds and derivatives.
As an extension, the impact of auxotrophic supplements or antibiotics , heavy metals or other inhibitory compounds on the respiration behaviour of the cells can be determined.
During a positive reaction, the longitudinal kinetics are expected to appear as sigmoidal curves in analogy to typical bacterial growth curves . Comparable to bacterial growth curves, the respiration kinetic curves may provide valuable information coded in the length of the lag phase λ, the respiration rate μ (corresponding to the steepness of the slope), the maximum cell respiration A (corresponding to the maximum value recorded), and the area under the curve (AUC). In contrast to bacterial growth curves , there is typically no death phase in PMs, as the reduced tetrazolium dye is insoluble.
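A minimal sketch of how such curve parameters can be extracted from the readings (the Zwietering-style modified Gompertz form, the simulated data, and the parameter names are assumptions for illustration, not the specific model used by any particular PM software package):

import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

def gompertz(t, A, mu, lam):
    # modified Gompertz model: A = maximum signal, mu = maximum slope, lam = lag phase
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

# simulated color readings taken every 15 minutes over 24 hours
t = np.arange(0, 24.25, 0.25)
y = gompertz(t, A=250.0, mu=40.0, lam=3.0) + np.random.default_rng(0).normal(0, 5, t.size)

(A_hat, mu_hat, lam_hat), _ = curve_fit(gompertz, t, y, p0=[y.max(), 10.0, 1.0])
auc = trapezoid(y, t)   # area under the curve
print(A_hat, mu_hat, lam_hat, auc)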
Proprietary and commercially available software is available that provides a solution for storage, retrieval, and analysis of high throughput phenotype data. A powerful free and open source software is the "opm" package based on R . [ 8 ] [ 9 ] "opm" contains tools for analyzing PM data including management, visualization and statistical analysis of PM data, covering curve-parameter estimation, dedicated and customizable plots, metadata management, statistical comparison with genome and pathway annotations, automatic generation of taxonomic reports, data discretization for phylogenetic software and export in the YAML markup language. In conjunction with other R packages it was used to apply boosting to re-analyse autism PM data and detect more determining factors. [ 10 ] The "opm" package has been developed and is maintained at the Deutsche Sammlung von Mikroorganismen und Zellkulturen . Another free and open source software developed to analyze Phenotype Microarray data is "DuctApe", a Unix command-line tool that also correlates genomic data. [ 11 ] Other software tools are PheMaDB, [ 12 ] which provides a solution for storage, retrieval, and analysis of high throughput phenotype data, and the PMViewer software [ 13 ] which focuses on graphical display but does not enable further statistical analysis. The latter is not publicly available. | https://en.wikipedia.org/wiki/Phenotype_microarray |
Phenotype modification is the process of experimentally altering an organism's phenotype to investigate the impact of phenotype on the fitness. [ 1 ]
Phenotype modification has been used to assess the impact of parasite mechanical presence on fish host behaviour. [ 2 ]
| https://en.wikipedia.org/wiki/Phenotype_modification |
Phenotypic disparity , also known as morphological diversity , morphological variety , morphological disparity , morphodisparity or simply disparity , refers to the variation of observable characteristics within biological groups. It was originally proposed in paleontology , and has also been introduced into the study of extant organisms. Some biologists view phenotypic disparity as an important aspect of biodiversity, while others believe that they are two different concepts.
Biologists' interest in phenotypic disparity predates the formal concept. Douglas Erwin argued that it had been central to the organismal biology since Georges Cuvier , who utilized it as a criterion for animal classification . However, prior to the development of quantitative methods for measuring disparity, the disparity recognized within the Linnaean taxonomy faced criticism for being unnatural. [ 2 ]
This concept was first proposed in the 1980s, utilized to explore the evolutionary patterns of variation in anatomy, function, and ecology. [ 3 ] It arose from the efforts by paleobiologists to define the evolutionary origins of the body plans of animals and by comparative developmental biologists to offer causal explanations for the emergence of these body plans. [ 3 ] In 1989, Stephen Jay Gould published Wonderful Life , in which he used the fossils from the Middle Cambrian Burgess Shale to contend that the ancient arthropods at this site had a greater phenotypic disparity than all living arthropods. [ 2 ] This concept has since been introduced into the study of extant organisms. [ 4 ] [ 5 ]
Initially, phenotypic disparity was considered a sub-concept of biodiversity, referred to as "morphological diversity"; [ 6 ] subsequently it acquired its own name, "disparity", also known as "phenotypic disparity", "morphological disparity", "morphological variety" or "morphodisparity". [ 4 ]
In the narrower sense, the currently widely accepted concept of biodiversity means only taxonomic diversity, or species richness. However, some groups have a large number of species, while all of them are very similar in morphology; other groups have very few species, while they are highly heterogeneous. For example, there are nearly twice as many species of birds as there are of mammals , indicating greater species richness, but birds are more consistent in morphology, reproductive biology , and developmental biology . The range of their body plans is relatively narrow, with outliers like ratites (e.g. ostriches) and penguins , while mammals include such diverse forms as apes , armadillos , bats , giraffes , marsupials , moles , the platypus and whales . [ 1 ] Therefore, relying only on species richness to represent biodiversity is less comprehensive. [ 1 ]
The disparity is defined as the phenotypic differentiation within groups. [ 5 ] [ 7 ] "Groups" usually refers to the taxonomic groups, including species or higher taxa. [ 7 ] Some biologists believe that the concept of disparity should also be applied to other groups, including sexes, ages, biomorphs and the castes of social insects . [ 8 ]
Disparity has changed at different rates and independently of species richness in the evolutionary history . There are two main patterns in how disparity develops over time. Some groups have developed high disparity early on in their evolution (called "early-disparity"), while others take longer to reach their maximum disparity (called "later-disparity"). The early-disparity boom may happen because species quickly explore new habitats or take advantage of new ecological niches . On the other hand, later-disparity groups may have developed new morphological forms slowly, resulting in a delay in reaching their maximum disparity. [ 6 ]
Initially, there was no consensus on how to measure disparity. [ 9 ] In the 1980s, taxonomic metrics were an early approach to measuring disparity among groups: counting how many different families or genera there were in order to measure the diversity and disparity of a taxon. This was based on the assumption that higher-ranked taxa could represent specific morphological innovations. Although this approach was criticized because it relied on artificial and non- monophyletic taxa, it provided valuable insights into the evolution of disparity. Some conclusions have been confirmed by subsequent quantitative metrics. [ 2 ]
Currently, disparity is usually quantified using the morphospace, which is a multidimensional space covering the morphological variation within a taxon. [ 10 ] Due to the use of different mathematical tools, morphospaces may have different geometric structures and mathematical meanings. [ 11 ]
The initial step involves selecting multiple phenotypic descriptors (characteristics described in appropriate ways) that vary among different taxa. [ 7 ] All phenotypic characteristics can be used to evaluate the disparity of a group, but the morphological characteristics are mostly used, because they are more accessible than others. [ 1 ] Secondly, use the selected descriptors to construct a morphospace. Then, use standard statistical dispersion indicators, such as total range or total variance , to describe the dispersion and distribution of groups in morphospace. The morphospace is a multidimensional space, which is almost impossible to visualize, so the dimensionality of the morphospace should be reduced using principal component analysis , principal coordinate analysis , nonmetric multidimensional scaling , or other mathematical methods. Therefore, it could be projected onto a two-dimensional space to visualize it. [ 7 ] | https://en.wikipedia.org/wiki/Phenotypic_disparity |
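A minimal sketch of this workflow (the trait matrix is simulated, and the two dispersion indicators shown, sum of variances and sum of ranges, are common choices rather than the only ones):

import numpy as np
from sklearn.decomposition import PCA

# simulated trait matrix: rows are specimens, columns are standardized measurements
rng = np.random.default_rng(0)
traits = rng.normal(size=(60, 8))

# ordinate the specimens in a morphospace via principal component analysis
scores = PCA().fit_transform(traits)

sum_of_variances = scores.var(axis=0, ddof=1).sum()                # total variance
sum_of_ranges = (scores.max(axis=0) - scores.min(axis=0)).sum()    # total range
print(sum_of_variances, sum_of_ranges)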
Phenotypic integration is a metric for measuring the correlation among multiple functionally related traits. [ 1 ] Complex phenotypes often require multiple traits working together in order to function properly. Phenotypic integration is significant because it provides an explanation as to how phenotypes are sustained by relationships between traits. Every organism's phenotype is integrated, organized, and a functional whole. Integration is also associated with functional modules. Modules are complex character units that are tightly associated, such as a flower. [ 2 ] It is hypothesized that organisms with high correlations between traits in a module have the most efficient functions. [ 3 ] The fitness of a particular value for one phenotypic trait frequently depends on the value of the other phenotypic traits, making it important for those traits to evolve together. One trait can have a direct effect on fitness, and it has been shown that the correlations among traits can also change fitness, causing these correlations to be adaptive, rather than solely genetic. [ 4 ] Integration can be involved in multiple aspects of life, not just at the genetic level, but during development, or simply at a functional level.
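One commonly used way to quantify such integration is the relative variance of the eigenvalues of the trait correlation matrix, which ranges from 0 (traits uncorrelated) to 1 (all traits perfectly correlated). A minimal sketch on simulated data (the data, the scaling by p − 1, and the variable names are assumptions for illustration, not a prescription from the cited literature):

import numpy as np

# simulated measurements: rows are individuals, columns are functionally related traits
rng = np.random.default_rng(1)
shared = rng.normal(size=(100, 1))
traits = shared + rng.normal(scale=0.5, size=(100, 5))   # traits share a common factor

corr = np.corrcoef(traits, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)
p = eigvals.size
integration = eigvals.var() / (p - 1)   # relative eigenvalue variance, between 0 and 1
print(round(integration, 3))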
Integration can be caused by genetic , developmental, environmental, or physiological relationships among characters. [ 5 ] Environmental conditions can alter or cause integration, i.e. it may be plastic . [ 6 ] Correlational selection, a form of natural selection , can also produce integration. At the genetic level, integration can be caused by pleiotropy , close linkage , or linkage disequilibrium among unlinked genes. [ 7 ] At the developmental level it can be due to cell-cell signaling, such as in the development of the ectopic eyes in Drosophila. It is believed that the patterns of genetic covariance helped distinguish certain species. [ 8 ] Integration can create variation among phenotypes and can facilitate efficiency, which is significant because integration may play a large role in phenotypic evolution. Phenotypic integration and its evolution can create large amounts of variety among phenotypes, which can in turn cause variation among species. For example, the color patterns of garter snakes range widely and are caused by the covariance among multiple phenotypic traits.
Shortly after the structure of DNA was uncovered, Everett C. Olson and Robert L. Miller (1958) wrote the first book regarding the topic of phenotypic integration. [ 9 ] The term integration was first used in reference to genetics by Olson and Miller, referring to correlations among characters that are influenced by selection. [ 10 ] Following Olson and Miller, botanical studies on coherence between characters were carried out over many years. [ 11 ] The concept's first major expansion was the genetic model of morphological integration constructed by Russell Lande (1980). However, the term "phenotypic integration" was first coined by Massimo Pigliucci and Katherine Preston in their book, Phenotypic Integration , which helped elucidate the observed laws of correlation and some theoretical issues regarding the topic. [ 12 ]
Phenotypic integration can be favorable or unfavorable with respect to natural selection. It has been shown that certain combinations of correlated traits can be unfavorable to an organism. In an ontogenetic study of laboratory rats, certain covariances among developmental characters which produced differing functions in the skull and limb were less favorable than another set that contributed to skull and limb structure. [ 13 ] The most common form of selection on phenotypic integration is correlational selection. Correlational selection is a form of natural selection that favors certain combinations of traits (phenotypic integration). It can promote both genetic correlations and high levels of genetic variation. It has even been found that correlational selection may be the most common form of natural selection. [ 14 ] Occasionally, this form of selection will favor a group of traits at the expense of others; if it does favor a particular set of traits, it will include the most used traits, whose functional effectiveness is essential for their ability to work together, and whose successful interaction is needed for the fitness of the individual. [ 15 ] [ 16 ] Phenotypic integration may be the adaptive product of correlational selection. An example of natural selection favoring integration is in the color patterns and escape mechanisms of the garter snake, Thamnophis ordinoides . [ 17 ] Another example is in plants that have highly specific pollinators: natural selection favors plants whose highly specialized flowering matches their specific pollinators, and therefore favors high floral integration. [ 18 ]
Integration can be found at the genetic level due to genetic linkage. Genetic linkage involves multiple genes being inherited together during meiosis because they are close to each other on the same chromosome. Alleles at different loci can be inherited together if they are tightly linked. Large genetic correlations can only be upheld if the loci that influence different characters are tightly linked, or if high levels of inbreeding in the population occur. Even if selection favors the correlations, it will not be maintained unless those conditions are met. Selection will favor tight linkage because it is maintained better. Poorly linked genetic correlations will not last. [ 19 ] Transposition allows the loci at different locations on the chromosome to move so that they can become close to each other and be inherited together. This is significant to understanding the relationship between phenotypic integration and evolution because it is one of the mechanisms of how multiple traits that are connected to each other to evolve and change together. For instance, the Papilio dardanus butterflies come in three different forms, each mimicking a different distasteful butterfly species. [ 20 ] Multiple loci contribute to these different forms, and a butterfly with alleles for form A at one locus and B at another locus would have poor fitness. However, the multiple loci are tightly linked, so they are inherited together as a single allele. Through transposition, these multiple loci ended up close to each other. [ 21 ]
Mutations among these linked genes are the nonadaptive fuel which can create evolution. Evolution may also occur because the integration may have an adaptive advantage in a particular environment for an organism. It is also important to recognize that traits can not only be inherited together, but also inherited separately and selected together. Another important example of phenotypic integration evolving over time is the relationship between the neurocranium and the brain. Over the last 150 million years the number of bones in the mammalian cranium has decreased while the size of the brain has changed. Integration between the brain and the skull has evolved over this time period to reduce the number of bones in the cranium, while increasing the size of the brain. This relationship between correlated traits has played an important role in the evolution of mammalian cranium structure and brain size. [ 22 ] Finally, development is another crucial cause of phenotypic integration that has evolved over time. Cell-signaling pathways which utilize integration in the form of complex interactions among specific cells in the pathway are crucial to proper development in many organisms. The interactions among the cells in the pathway, and the interactions of the pathways with other pathways, have evolved over time to create complex structures. [ 23 ]
Aposematism in poison dart frogs has also shown that phenotypic integration may be involved. Aposematism is the use of warning colors to deter predators because it often conveys the organism being poisonous, and this study found that diet specialization, and chemical defense are integrated and help affect aposematism. [ 24 ]
In another study regarding the relationship of sexual ornaments and phenotypic integration, there seems to be a paradox where sexual traits are expected to be both less integrated for greater expression, and more integrated to better indicate physiological quality. However, in the case of the house finch , the female house finches select for males based on their likelihood to be a good parent. The females base their choice of male parental behaviors on the elaboration of the male's sexual ornamentation. Thus, female choice favors hormonally controlled integration of male sexual behaviors and male sexual ornamentation. [ 25 ]
Phylogenetically consistent patterns of phenotypic integration have also been recently reported in leaves, floral morphology, and dry fruits as well as in the morphology of some animal organs. [ 26 ] [ 27 ]
Understanding of phenotypic integration will continue to grow as more research is done on genetic, developmental, and physiological mechanisms, and as more is learned about the relationship between selection and complex phenotypes. [ 28 ] Research on this topic can even be beneficial to modern biomedicine. [ 29 ] | https://en.wikipedia.org/wiki/Phenotypic_integration |
Phenotypic plasticity refers to some of the changes in an organism 's behavior, morphology and physiology in response to a unique environment. [ 1 ] [ 2 ] Fundamental to the way in which organisms cope with environmental variation, phenotypic plasticity encompasses all types of environmentally induced changes (e.g. morphological , physiological , behavioural , phenological ) that may or may not be permanent throughout an individual's lifespan. [ 3 ]
The term was originally used to describe developmental effects on morphological characters, but is now more broadly used to describe all phenotypic responses to environmental change, such as acclimation ( acclimatization ), as well as learning . [ 3 ] The special case when differences in environment induce discrete phenotypes is termed polyphenism .
Generally, phenotypic plasticity is more important for immobile organisms (e.g. plants ) than mobile organisms (e.g. most animals ), as mobile organisms can often move away from unfavourable environments. [ 4 ] Nevertheless, mobile organisms also have at least some degree of plasticity in at least some aspects of the phenotype . [ 2 ] One mobile organism with substantial phenotypic plasticity is Acyrthosiphon pisum of the aphid family, which exhibits the ability to interchange between asexual and sexual reproduction, as well as growing wings between generations when plants become too populated. [ 5 ] Water fleas ( Daphnia magna ) have shown both phenotypic plasticity and the ability to genetically evolve to deal with the heat stress of warmer, urban pond waters. [ 2 ]
Phenotypic plasticity in plants includes the timing of transition from vegetative to reproductive growth stage, the allocation of more resources to the roots in soils that contain low concentrations of nutrients , the size of the seeds an individual produces depending on the environment, [ 7 ] and the alteration of leaf shape, size, and thickness. [ 8 ] Leaves are particularly plastic, and their growth may be altered by light levels. Leaves grown in the light tend to be thicker, which maximizes photosynthesis in direct light, and have a smaller area, which cools the leaf more rapidly (due to a thinner boundary layer ). Conversely, leaves grown in the shade tend to be thinner, with a greater surface area to capture more of the limited light. [ 9 ] [ 10 ] Dandelions are well known for exhibiting considerable plasticity in form when growing in sunny versus shaded environments. The transport proteins present in roots also change depending on the concentration of the nutrient and the salinity of the soil. [ 11 ] Some plants, Mesembryanthemum crystallinum for example, are able to alter their photosynthetic pathways to use less water when they become water- or salt-stressed. [ 12 ]
Because of phenotypic plasticity, it is hard to explain and predict traits when plants are grown in natural conditions unless an explicit environment index can be obtained to quantify the environments. Identification of explicit environmental indices from critical growth periods that are highly correlated with sorghum and rice flowering time enables such predictions. [ 6 ] [ 13 ] Additional work is being done to support the agricultural industry, which faces severe challenges in the prediction of crop phenotypic expression in changing environments. Since many crops supporting the global food supply are grown in a wide variety of environments, understanding and the ability to predict crop genotype-by-environment interaction will be essential for future food stability. [ 14 ]
Leaves are very important to a plant in that they create an avenue where photosynthesis and thermoregulation can occur. Evolutionarily, the environmental contribution to leaf shape allowed for a myriad of different types of leaves to be created. [ 15 ] Leaf shape can be determined by both genetics and the environment. [ 16 ] Environmental factors, such as light and humidity, have been shown to affect leaf morphology, [ 17 ] giving rise to the question of how this shape change is controlled at the molecular level. This means that different leaves could have the same gene but present a different form based on environmental factors. Plants are sessile, so this phenotypic plasticity allows the plant to take in information from its environment and respond without changing its location.
In order to understand how leaf morphology works, the anatomy of a leaf must be understood. The main part of the leaf, the blade or lamina, consists of the epidermis, mesophyll, and vascular tissue. The epidermis contains stomata , which allow for gas exchange and control transpiration of the plant. The mesophyll contains most of the chloroplasts, where photosynthesis can occur. Developing a wide blade/lamina can maximize the amount of light hitting the leaf, thereby increasing photosynthesis; however, too much sunlight can damage the plant. Wide laminae can also catch wind easily, which can cause stress to the plant, so finding a happy medium is imperative to the plant's fitness. The genetic regulatory network is responsible for creating this phenotypic plasticity and involves a variety of genes and proteins regulating leaf morphology.
Phytohormones have been shown to play a key role in signaling throughout the plant, and changes in concentration of the phytohormones can cause a change in development. [ 18 ]
Studies on the aquatic plant species Ludwigia arcuata have been done to look at the role of abscisic acid (ABA), as L. arcuata is known to exhibit phenotypic plasticity and has two different types of leaves, the aerial type (leaves that touch the air) and the submerged type (leaves that are underwater). [ 19 ] When ABA was added to the underwater shoots of L. arcuata , the plant was able to produce aerial-type leaves underwater, suggesting that increased concentrations of ABA in the shoots, likely caused by air contact or a lack of water, trigger the change from the submerged type of leaf to the aerial type. This suggests ABA's role in leaf phenotypic change and its importance in regulating stress through environmental change (such as adapting from being underwater to above water). In the same study, another phytohormone, ethylene, was shown to induce the submerged leaf phenotype, unlike ABA, which induced the aerial leaf phenotype. Because ethylene is a gas, it tends to stay endogenously within the plant when underwater – this increase in ethylene concentration induces a change from aerial to submerged leaves and has also been shown to inhibit ABA production, further increasing the growth of submerged-type leaves.
These factors (temperature, water availability, and phytohormones) contribute to changes in leaf morphology throughout a plant's lifetime and are vital to maximizing plant fitness.
The developmental effects of nutrition and temperature have been demonstrated. [ 20 ] The gray wolf ( Canis lupus ) has wide phenotypic plasticity. [ 21 ] [ 22 ] Additionally, male speckled wood butterflies have two morphs: one with three dots on its hindwings, and one with four dots on its hindwings. The development of the fourth dot is dependent on environmental conditions – more specifically, location and the time of year. [ 23 ] In amphibians , the mutable rain frog (Pristimantis mutabilis) has remarkable phenotypic plasticity, [ 24 ] as does the red-eyed tree frog (Agalychnis callidryas) , whose embryos exhibit phenotypic plasticity by hatching early to protect themselves in response to egg disturbance. Another example is the southern rockhopper penguin . [ 25 ] Rockhopper penguins are present in a variety of climates and locations: Amsterdam Island's subtropical waters, and the subantarctic coastal waters of the Kerguelen Archipelago and Crozet Archipelago . [ 25 ] Due to the species' plasticity, they are able to express different strategies and foraging behaviors depending on the climate and environment. [ 25 ] A main factor that has influenced the species' behavior is where food is located. [ 25 ]
Plastic responses to temperature are essential among ectothermic organisms , as all aspects of their physiology are directly dependent on their thermal environment. As such, thermal acclimation entails phenotypic adjustments that are found commonly across taxa , such as changes in the lipid composition of cell membranes . Temperature change influences the fluidity of cell membranes by affecting the motion of the fatty acyl chains of glycerophospholipids . Because maintaining membrane fluidity is critical for cell function, ectotherms adjust the phospholipid composition of their cell membranes such that the strength of van der Waals forces within the membrane is changed, thereby maintaining fluidity across temperatures. [ 26 ]
Phenotypic plasticity of the digestive system allows some animals to respond to changes in dietary nutrient composition, [ 27 ] [ 28 ] diet quality, [ 29 ] [ 30 ] and energy requirements. [ 31 ] [ 32 ] [ 33 ]
Changes in the nutrient composition of the diet (the proportion of lipids, proteins and carbohydrates) may occur during development (e.g. weaning) or with seasonal changes in the abundance of different food types. These diet changes can elicit plasticity in the activity of particular digestive enzymes on the brush border of the small intestine . For example, in the first few days after hatching, nestling house sparrows ( Passer domesticus ) transition from an insect diet, high in protein and lipids, to a seed-based diet that contains mostly carbohydrates; this diet change is accompanied by a two-fold increase in the activity of the enzyme maltase , which digests carbohydrates. [ 27 ] Acclimatizing animals to high-protein diets can increase the activity of aminopeptidase -N, which digests proteins. [ 28 ] [ 34 ]
Poor quality diets (those that contain a large amount of non-digestible material) have lower concentrations of nutrients, so animals must process a greater total volume of poor-quality food to extract the same amount of energy as they would from a high-quality diet. Many species respond to poor quality diets by increasing their food intake, enlarging digestive organs, and increasing the capacity of the digestive tract (e.g. prairie voles , [ 33 ] Mongolian gerbils , [ 30 ] Japanese quail , [ 29 ] wood ducks , [ 35 ] mallards [ 36 ] ). Poor quality diets also result in lower concentrations of nutrients in the lumen of the intestine, which can cause a decrease in the activity of several digestive enzymes. [ 30 ]
Animals often consume more food during periods of high energy demand (e.g. lactation or cold exposure in endotherms ); this is facilitated by an increase in digestive organ size and capacity, which is similar to the phenotype produced by poor-quality diets. During lactation, common degus ( Octodon degus ) increase the mass of their liver, small intestine, large intestine and cecum by 15–35%. [ 31 ] Increases in food intake do not cause changes in the activity of digestive enzymes because nutrient concentrations in the intestinal lumen are determined by food quality and remain unaffected. [ 31 ] Intermittent feeding also represents a temporary increase in food intake and can induce dramatic changes in the size of the gut; the Burmese python ( Python molurus bivittatus ) can triple the size of its small intestine just a few days after feeding. [ 37 ]
AMY2B (Alpha-Amylase 2B) is a gene that codes for a protein that assists with the first step in the digestion of dietary starch and glycogen . An expansion of this gene in dogs would enable early dogs to exploit a starch-rich diet as they fed on refuse from agriculture. Data indicated that wolves and the dingo had just two copies of the gene, and that the Siberian Husky , which is associated with hunter-gatherers, had just three or four copies, whereas the Saluki , which is associated with the Fertile Crescent where agriculture originated, had 29 copies. The results show that, on average, modern dogs have a high copy number of the gene, whereas wolves and dingoes do not. The high copy number of AMY2B variants likely already existed as a standing variation in early domestic dogs, but expanded more recently with the development of large agriculturally based civilizations. [ 38 ]
Infection with parasites can induce phenotypic plasticity as a means to compensate for the detrimental effects caused by parasitism. Commonly, invertebrates respond to parasitic castration or increased parasite virulence with fecundity compensation in order to increase their reproductive output, or fitness . For example, water fleas ( Daphnia magna ) exposed to microsporidian parasites produce more offspring in the early stages of exposure to compensate for future loss of reproductive success. [ 39 ] A reduction in fecundity may also occur as a means of re-directing nutrients to an immune response, [ 40 ] or of increasing the longevity of the host. [ 41 ] This particular form of plasticity has been shown in certain cases to be mediated by host-derived molecules (e.g. schistosomin in snails Lymnaea stagnalis infected with trematodes Trichobilharzia ocellata ) that interfere with the action of reproductive hormones on their target organs. [ 42 ] Changes in reproductive effort during infection are also thought to be a less costly alternative to mounting resistance or defence against invading parasites, although they can occur in concert with a defence response. [ 43 ]
Hosts can also respond to parasitism through plasticity in physiology aside from reproduction. House mice infected with intestinal nematodes experience decreased rates of glucose transport in the intestine. To compensate for this, mice increase the total mass of mucosal cells, cells responsible for glucose transport, in the intestine. This allows infected mice to maintain the same capacity for glucose uptake and body size as uninfected mice. [ 44 ]
Phenotypic plasticity can also be observed as changes in behaviour. In response to infection, both vertebrates and invertebrates practice self-medication , which can be considered a form of adaptive plasticity. [ 45 ] Various species of non-human primates infected with intestinal worms engage in leaf-swallowing, in which they ingest rough, whole leaves that physically dislodge parasites from the intestine. Additionally, the leaves irritate the gastric mucosa , which promotes the secretion of gastric acid and increases gut motility , effectively flushing parasites from the system. [ 46 ] The term "self-induced adaptive plasticity" has been used to describe situations in which a behavior under selection causes changes in subordinate traits that in turn enhance the ability of the organism to perform the behavior. [ 47 ] For example, birds that engage in altitudinal migration might make "trial runs" lasting a few hours that would induce physiological changes that would improve their ability to function at high altitude. [ 47 ]
Woolly bear caterpillars ( Grammia incorrupta ) infected with tachinid flies increase their survival by ingesting plants containing toxins known as pyrrolizidine alkaloids . The physiological basis for this change in behaviour is unknown; however, it is possible that, when activated, the immune system sends signals to the taste system that trigger plasticity in feeding responses during infection. [ 45 ]
Reproduction
The red-eyed tree frog, Agalychnis callidryas , is an arboreal frog (hylid) that resides in the tropics of Central America. Unlike many frogs, the red-eyed tree frog has arboreal eggs which are laid on leaves hanging over ponds or large puddles and, upon hatching, the tadpoles fall into the water below. One of the most common predators encountered by these arboreal eggs is the cat-eyed snake, Leptodeira septentrionalis . In order to escape predation, the red-eyed tree frog has developed a form of adaptive plasticity, which can also be considered phenotypic plasticity, in hatching age; the clutch is able to hatch prematurely and survive outside of the egg five days after oviposition when faced with an immediate threat of predation. The egg clutches take in important information from the vibrations felt around them and use it to determine whether or not they are at risk of predation. In the event of a snake attack, the clutch identifies the threat by the vibrations given off, which, in turn, stimulates hatching almost instantaneously. In a controlled experiment conducted by Karen Warkentin, the hatching rates and ages of red-eyed tree frogs were observed in clutches that were and were not attacked by the cat-eyed snake. When a clutch was attacked at six days of age, the entire clutch hatched at the same time, almost instantaneously. However, when a clutch was not presented with the threat of predation, the eggs hatched gradually over time, with the first few hatching around seven days after oviposition and the last of the clutch hatching around day ten. Karen Warkentin's study further explores the benefits and trade-offs of hatching plasticity in the red-eyed tree frog. [ 48 ]
Plasticity is usually thought to be an evolutionary adaptation to environmental variation that is reasonably predictable and occurs within the lifespan of an individual organism, as it allows individuals to 'fit' their phenotype to different environments. [ 49 ] [ 50 ] If the optimal phenotype in a given environment changes with environmental conditions, then the ability of individuals to express different traits should be advantageous and thus selected for . Hence, phenotypic plasticity can evolve if Darwinian fitness is increased by changing phenotype. [ 51 ] [ 52 ] A similar logic should apply in artificial evolution attempting to introduce phenotypic plasticity to artificial agents. [ 53 ] However, the fitness benefits of plasticity can be limited by the energetic costs of plastic responses (e.g. synthesizing new proteins, adjusting the expression ratio of isozyme variants, maintaining sensory machinery to detect changes) as well as by the predictability and reliability of environmental cues [ 54 ] (see Beneficial acclimation hypothesis ).
Freshwater snails ( Physa virgata ) provide an example of when phenotypic plasticity can be either adaptive or maladaptive . In the presence of a predator, the bluegill sunfish , these snails make their shell shape more rotund and reduce growth. This makes them more crush-resistant and better protected from predation. However, these snails cannot tell the difference in chemical cues between predatory and non-predatory sunfish. Thus, the snails respond inappropriately to non-predatory sunfish by producing an altered shell shape and reducing growth. These changes, in the absence of a predator, make the snails susceptible to other predators and limit fecundity . Therefore, these freshwater snails produce either an adaptive or maladaptive response to the environmental cue depending on whether predatory sunfish are present or not. [ 55 ] [ 56 ]
Given the profound ecological importance of temperature and its predictable variability over large spatial and temporal scales, adaptation to thermal variation has been hypothesized to be a key mechanism dictating the capacity of organisms for phenotypic plasticity. [ 57 ] The magnitude of thermal variation is thought to be directly proportional to plastic capacity, such that species that have evolved in the warm, constant climate of the tropics have a lower capacity for plasticity compared to those living in variable temperate habitats. Termed the "climatic variability hypothesis", this idea has been supported by several studies of plastic capacity across latitude in both plants and animals. [ 58 ] [ 59 ] However, recent studies of Drosophila species have failed to detect a clear pattern of plasticity over latitudinal gradients, suggesting this hypothesis may not hold true across all taxa or for all traits. [ 60 ] Some researchers propose that direct measures of environmental variability, using factors such as precipitation, are better predictors of phenotypic plasticity than latitude alone. [ 61 ]
Selection experiments and experimental evolution approaches have shown that plasticity is a trait that can evolve when under direct selection and also as a correlated response to selection on the average values of particular traits. [ 62 ]
Temporal plasticity , also known as fine-grained environmental adaptation, [ 63 ] is a type of phenotypic plasticity that involves the phenotypic change of organisms in response to changes in the environment over time. Animals can respond to short-term environmental changes with physiological (reversible) and behavioral changes; plants, which are sedentary, respond to short-term environmental changes with both physiological and developmental (non-reversible) changes. [ 64 ]
Unprecedented rates of climate change are predicted to occur over the next 100 years as a result of human activity. [ 67 ] Phenotypic plasticity is a key mechanism with which organisms can cope with a changing climate, as it allows individuals to respond to change within their lifetime. [ 68 ] This is thought to be particularly important for species with long generation times, as evolutionary responses via natural selection may not produce change fast enough to mitigate the effects of a warmer climate.
The North American red squirrel ( Tamiasciurus hudsonicus ) has experienced an increase in average temperature over this last decade of almost 2 °C. This increase in temperature has caused an increase in abundance of white spruce cones, the main food source for winter and spring reproduction. In response, the mean lifetime parturition date of this species has advanced by 18 days. Food abundance showed a significant effect on the breeding date with individual females, indicating a high amount of phenotypic plasticity in this trait. [ 69 ] | https://en.wikipedia.org/wiki/Phenotypic_plasticity |
Phenotypic response surfaces (PRS) is an artificial intelligence -guided personalized medicine platform that relies on combinatorial optimization principles to quantify drug interactions and efficacies in order to develop optimized combination therapies for a broad spectrum of illnesses.
Phenotypic response surfaces fit a parabolic surface to a set of drug doses and biomarker values, based on the understanding that the relationship between drugs, their interactions, and their effect on the measured biomarker can be modeled by a quadratic surface . [ 1 ] The resulting surface allows for the omission of both in-vitro and in-silico screening of multi-drug combinations, relying instead on a patient's unique phenotypic response. [ 1 ] [ 2 ] This provides a method, independent of the disease or drug mechanism, for using small data sets to create time-critical personalized therapies. [ 1 ] [ 3 ] The adaptable nature of the platform allows it to tackle a wide range of applications, from isolating novel combination therapies to predicting daily drug regimen adjustments to support in-patient treatments. [ 4 ] [ 5 ]
Modern medical practice since its inception in the early 19th to 20th centuries has been seen as "a science of uncertainty and art of probability", as mused by one of its founders, Sir William Osler . [ 6 ] The lack of a concrete mechanism for the relationship between drug dosing and its efficacy led largely to the use of population averages as a metric for determining optimal doses for patients. [ 7 ] This issue is further compounded by the introduction of combination therapies, as there is an exponential growth in the number of possible combinations and outcomes as the number of drugs increases. [ 1 ] Combination therapy treatments provide significant benefits over monotherapy alternatives, including greater efficacy and lower side-effect and fatality rates, making them ideal candidates to optimize. [ 8 ] In 2011 the PRS methodology was developed by a team led by Dr. Ibrahim Al-Shyoukh and Dr. Chih Ming Ho of the University of California Los Angeles to provide a platform that would allow for a comparatively small number of calibration tests to optimize multi-drug combination therapies based on measurement of cellular biomarkers . [ 1 ] Since its inception the PRS platform has been applied to a broad range of disease areas including organ transplants , oncology , and infectiology . [ 4 ] [ 5 ] [ 9 ] The PRS platform has since become the basis for a commercial optimization platform marketed by Singapore-based Kyan Therapeutics in partnership with Kite Pharma and the National University of Singapore to provide personalized combination therapies for oncological applications. [ 10 ]
The PRS platform utilizes a neural network to fit data sets to a regression function, resulting in a parabolic surface that provides a direct quantitative relationship between drug dose and efficacy. [ 1 ] The governing function for the PRS platform is given as follows:
E ( C , t ) = x 0 + ∑ i = 1 M x i C i + ∑ i = 1 M y i i C i 2 + ∑ i = 1 M − 1 ∑ j = i + 1 M z i j C i C j {\displaystyle E(C,t)=x_{0}+\sum _{i=1}^{M}x_{i}C_{i}+\sum _{i=1}^{M}y_{ii}C_{i}^{2}+\sum _{i=1}^{M-1}\sum _{j=i+1}^{M}z_{ij}C_{i}C_{j}} [ 1 ]
where E ( C , t ) is the measured phenotypic (biomarker) response for the dose combination C at time t ; C i is the dose of drug i ; M is the number of drugs in the combination; x 0 is the baseline response; x i and y ii are the first- and second-order coefficients for drug i ; and z ij are the pairwise drug–drug interaction coefficients.
The parabolic nature of the relationship allows a minimal number of calibration tests to be used to fit the PRS regression over a search space of N^M combinations, where N is the number of dosing regimens and M is the number of drugs in the combination. [ 1 ]
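The fitting and search step can be sketched as follows. This is an illustrative reconstruction based only on the quadratic form given above, not the actual PRS/CURATE.AI implementation; the dose levels, calibration responses, and the choice to minimize the biomarker are all hypothetical.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Hypothetical setting: M = 2 drugs, N = 4 dose levels each -> N**M = 16 combinations.
doses = np.array([0.0, 0.5, 1.0, 2.0])
combos = np.array(list(product(doses, repeat=2)))

def design_matrix(C):
    """Second-order terms of the quadratic surface: 1, C1, C2, C1^2, C2^2, C1*C2."""
    c1, c2 = C[:, 0], C[:, 1]
    return np.column_stack([np.ones(len(C)), c1, c2, c1**2, c2**2, c1 * c2])

# A small set of synthetic calibration measurements (dose pair -> measured biomarker).
calib = combos[rng.choice(len(combos), size=9, replace=False)]
true_coeffs = np.array([80.0, -15.0, -10.0, 3.0, 2.0, -8.0])
response = design_matrix(calib) @ true_coeffs + rng.normal(0.0, 1.5, size=len(calib))

# Fit the parabolic response surface by ordinary least squares.
coeffs, *_ = np.linalg.lstsq(design_matrix(calib), response, rcond=None)

# Evaluate the fitted surface over all N**M combinations and pick the optimum
# (here taken as the combination minimizing the predicted biomarker).
predicted = design_matrix(combos) @ coeffs
best = combos[np.argmin(predicted)]
print("fitted coefficients:", np.round(coeffs, 2))
print("predicted optimal dose pair:", best)
```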
The mechanism-independent nature of the PRS platform makes it applicable to the treatment of a broad spectrum of diseases, including cancers, infectious diseases, and organ transplantation. [ 4 ] [ 5 ] [ 9 ]
Optimization of combination therapies is of particular importance in oncology. Conventional cancer treatments often rely on the sequential use of chemotherapy drugs, with each new drug starting as soon as the previous agent loses efficacy. [ 8 ] This methodology allows cancerous cells, due to their rapid rate of mutation, to develop resistance to chemotherapy drugs in instances where those drugs fail to be effective. [ 8 ] Combination therapies are therefore vital to preventing the development of drug-resistant tumors and thereby decreasing the likelihood of relapse among cancer patients. [ 8 ] The PRS platform alleviates the principal difficulty in developing combination therapies to treat cancer, as it omits the need to perform the in-vitro high-throughput screening currently employed to determine the most effective regimen. [ 11 ] PRS-based therapy has been used to successfully derive an optimized three-drug combination to treat multiple myeloma and overcome drug resistance. [ 4 ] The PRS-derived CURATE.AI platform has also been used to optimize a two-drug combination of a bromodomain inhibitor and enzalutamide to successfully treat and prevent the progression of prostate cancer. [ 12 ]
Drug resistance is a particular challenge when attempting to treat infectious diseases, as monotherapy solutions carry the risk of increasing drug resistance while combination therapy demonstrates lower mortality rates. [ 13 ] Highly contagious infectious diseases like tuberculosis have become the leading cause of death by infectious disease worldwide. [ 9 ] Tuberculosis treatment requires the sustained use of antibiotics over an extended period of time, with high rates of noncompliance among patients, which increases the risk of development of drug-resistant forms of tuberculosis. [ 9 ] The PRS platform has been successfully used to develop combinatory regimens that reduce tuberculosis treatment time by 75% and can be employed on both drug-sensitive and drug-resistant variants of the disease. [ 9 ] The PRS-derived IDENTIF.AI platform has been used in Singapore to identify viable SARS-CoV-2 delta variant treatments on behalf of the Singapore Ministry of Health . [ 2 ] The platform identified the metabolite EIDD-1931 as having strong antiviral properties that can be used in combination with other commercial antiviral agents to create an effective therapy for the treatment of the SARS-CoV-2 delta variant. [ 2 ]
The PRS-derived phenotypic personalized dosing platform developed in 2016 has been used to provide personalized tacrolimus and prednisone dosing for liver transplant procedures and post-transplant care to prevent transplant rejection events. [ 5 ] This methodology is able to use a minimal number of calibration tests and as a result provides physicians with a rolling window in which the daily optimized drug dose can be predicted. [ 5 ] The platform is recalibrated daily to take into consideration the patient's changing physiological responses to the drug regimen, providing physicians with accessible personalized treatment tools and eliminating the need for population-average-based dosing. [ 5 ] [ 7 ] The platform is actively being considered for other transplant uses, including kidney and heart transplants. [ 5 ] | https://en.wikipedia.org/wiki/Phenotypic_response_surfaces
Phenotypic screening is a type of screening used in biological research and drug discovery to identify substances such as small molecules , peptides , or RNAi that alter the phenotype of a cell or an organism in a desired manner. [ 1 ] Phenotypic screening must be followed up with target identification (sometimes referred to as target deconvolution) and validation, [ 2 ] often through the use of chemoproteomics , to identify the mechanisms through which a phenotypic hit works. [ 3 ]
Phenotypic screening historically has been the basis for the discovery of new drugs. [ 4 ] Compounds are screened in cellular or animal disease models to identify compounds that cause a desirable change in phenotype. Only after the compounds have been discovered are efforts made to determine the biological targets of the compounds - a process known as target deconvolution. This overall strategy is referred to as " classical pharmacology ", "forward pharmacology" or "phenotypic drug discovery" (PDD). [ 4 ]
More recently it has become popular to develop a hypothesis that a certain biological target is disease modifying, and then screen for compounds that modulate the activity of this purified target. Afterwards, these compounds are tested in animals to see if they have the desired effect. This approach is known as " reverse pharmacology " or "target based drug discovery" (TDD). [ 5 ] However, recent statistical analysis reveals that a disproportionate number of first-in-class drugs with novel mechanisms of action come from phenotypic screening, [ 6 ] which has led to a resurgence of interest in this method. [ 1 ] [ 7 ] [ 8 ]
The simplest phenotypic screens employ cell lines and monitor a single parameter such as cellular death or the production of a particular protein. High-content screening where changes in the expression of several proteins can be simultaneously monitored is also often used. [ 9 ] [ 10 ] High-content imaging of dye-labeled cellular components can also reveal effects of compounds on cell cultures in vitro, distinguishing the phenotypic effects of a broad variety of drugs. [ 11 ]
In whole animal-based approaches, phenotypic screening is best exemplified where a substance is evaluated for potential therapeutic benefit across many different types of animal models representing different disease states. [ 12 ] Phenotypic screening in animal-based systems utilize model organisms to evaluate the effects of a test agent in fully assembled biological systems. Example organisms used for high-content screening include the fruit fly ( Drosophila melanogaster ), zebrafish ( Danio rerio ) and mice ( Mus musculus ). [ 13 ] In some instances the term phenotypic screening is used to include the serendipitous findings that occur in clinical trial settings particularly when new and unanticipated therapeutic effects of a therapeutic candidate are uncovered. [ 6 ]
Screening in model organisms offers the advantage of interrogating test agents, or alterations in targets of interest, in the context of fully integrated, assembled biological systems, providing insights that could otherwise not be obtained in cellular systems. Some have argued that cellular-based systems are unable to adequately model human disease processes that involve many different cell types across many different organ systems, and that this type of complexity can only be emulated in model organisms. [ 14 ] [ 15 ] The productivity of drug discovery by phenotypic screening in organisms, including serendipitous findings in the clinic, is consistent with this notion. [ 6 ] [ 16 ]
Animal based approaches to phenotypic screening are not as amenable to screening libraries containing thousands of small molecules. Therefore, these approaches have found more utility in evaluating already approved drugs or late stage drug candidates for drug repositioning . [ 12 ]
A number of companies including Melior Discovery , [ 17 ] [ 18 ] Phylonix , and Sosei have specialized in using phenotypic screening in animal disease models for drug positioning. Many other companies are involved in phenotypic screening research approaches, including Eurofins Discovery Phenotypic Services, Evotec , Dharmacon, Inc. , ThermoScientific , Cellecta, and Persomics . [ citation needed ]
The pharmaceutical company Eli Lilly has formalized collaborative efforts with various 3rd parties aimed at conducting phenotypic screening of selected small molecules. [ 19 ] | https://en.wikipedia.org/wiki/Phenotypic_screening |
Phenotypic switching is switching between multiple cellular morphologies. David R. Soll described two such systems: the first, a high-frequency switching system between several morphological stages, and the second, a high-frequency switching system between opaque and white cells. The latter is an epigenetic switching system. [ 1 ] [ 2 ]
Phenotypic switching in Candida albicans is often used to refer to the epigenetic white-to-opaque switching system. C. albicans needs this switch for sexual mating. [ 3 ] In addition to the two switching systems mentioned above, many other switching systems are known in C. albicans . [ 4 ]
A second example occurs in melanoma , where malignantly transformed pigment cells switch back-and-forth between phenotypes of proliferation and invasion in response to changing microenvironments, driving metastatic progression. [ 5 ] [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/Phenotypic_switching |
In microbiology , the phenotypic testing of mycobacteria uses a number of methods. The most-commonly used phenotypic tests to identify and distinguish Mycobacterium strains and species from each other are described below.
Media: KH2PO4 (0.5 g), MgSO4·7H2O (0.5 g), purified agar (20 g), distilled water (1000 ml). The medium is supplemented with acetamide to a final concentration of 0.02 M, adjusted to a pH of 7.0 and sterilized by autoclaving at 115°C for 30 minutes. After sloping , the medium is inoculated with one loop of the cultures and incubated. Growth is read after incubation for two weeks (rapid growers) or four weeks (slow growers). [ 1 ]
Arylsulfatase enzyme is present in most mycobacteria. The rate at which the arylsulfatase enzyme breaks down phenolphthalein disulfate into phenolphthalein (which forms a red color in the presence of sodium bicarbonate) and other salts is used to differentiate certain strains of mycobacteria. The 3-day arylsulfatase test is used to identify potentially pathogenic rapid growers such as M. fortuitum and M. chelonae. The slow-growing M. marinum and M. szulgai are positive in the 14-day arylsulfatase test. [ 2 ]
Most mycobacteria produce the enzyme catalase , but they vary in the quantity produced. Also, some forms of catalase are inactivated by heating at 68°C for 20 minutes (others are stable). Organisms producing the enzyme catalase have the ability to decompose hydrogen peroxide into water and free oxygen. The test differs from that used to detect catalase in other types of bacteria by using 30% hydrogen peroxide in a strong detergent solution (10% polysorbate 80 ). [ 1 ]
Sole carbon source [ 1 ]
Growth on Löwenstein–Jensen medium (LJ medium)
Sole carbon and nitrogen source [ 1 ]
The growth rate is the length of time required to form mature colonies visible without magnification on solid media. Mycobacteria forming colonies visible to the naked eye within seven days on subculture are known as rapid growers, while those requiring longer periods are termed slow growers. [ 3 ]
The ability to take up iron from an inorganic iron containing reagent helps differentiate some species of mycobacteria. [ 1 ]
Lebek is a semisolid medium used to test the oxygen preferences of mycobacterial isolates. Aerophilic growth is indicated by growth on (and above) the surface of the glass wall of the tube; microaerophilic growth is indicated by growth below the surface. [ 4 ]
Niacin is formed as a metabolic byproduct by all mycobacteria, but some species possess an enzyme that converts free niacin to niacin ribonucleotide . M. tuberculosis (and some other species) lack this enzyme, and accumulate niacin as a water-soluble byproduct in the culture medium. [ 1 ]
Mycobacteria containing nitroreductase catalyze the reduction of nitrate to nitrite . The presence of nitrite in the test medium is detected by addition of sulfanilamide and N -naphthylethylenediamine. If nitrite is present, a red diazonium dye is formed. [ 1 ]
Some mycobacteria produce carotenoid pigments without light; others require photoactivation for pigment production. Photochromogens produce non-pigmented colonies when grown in the dark, and pigmented colonies after exposure to light and re-incubation. Scotochromogens produce deep-yellow-to-orange colonies when grown in either light or darkness. Non-photochromogens are non-pigmented in light and darkness or have a pale-yellow, buff or tan pigment which does not intensify after light exposure. [ 3 ]
Grows on Sauton agar containing picric acid (0.2% w/v) after three weeks [ 1 ]
Some mycobacteria produce carotenoid pigments without light; others require photoactivation for pigment production (see photoreactivity, above). [ 3 ]
The deamidation of pyrazinamide to pyrazinoic acid (assumed to be the active component of the drug pyrazinamide ) in four days is a useful physiologic characteristic by which M. tuberculosis -complex members can be distinguished. [ 1 ]
Growth on LJ medium containing 5% NaCl [ 1 ]
The growth of M. bovis and M. africanum subtype II is inhibited by thiophene-2-carboxylic acid hydrazide ; growth of M. tuberculosis and M. africanum subtype I is uninhibited. [ 1 ]
A test for lipase using polysorbate 80 (polyoxyethylene sorbitan monooleate, a detergent). Certain mycobacteria possess a lipase that splits it into oleic acid and polyoxyethylated sorbitol . The test solution also contains phenol red , which is stabilised by the polysorbate 80; when the latter is hydrolysed, the phenol red changes from yellow to pink. [ 1 ]
With an inoculation loop, several loopfuls of mycobacteria test colonies are transferred to 0.5 mL of urease substrate, mixed to emulsify and incubated at 35 °C for three days; a colour change (from amber-yellow to pink-red) is sought. [ 1 ] | https://en.wikipedia.org/wiki/Phenotypic_testing_of_mycobacteria |
Phenoxymethylpenicillin , also known as penicillin V ( PcV ) and penicillin VK , is an antibiotic useful for the treatment of a number of bacterial infections . [ 2 ] Specifically it is used for the treatment of strep throat , otitis media , and cellulitis . [ 2 ] It is also used to prevent rheumatic fever and to prevent infections following removal of the spleen . [ 2 ] It is given by mouth. [ 2 ]
Side effects include diarrhea , nausea , and allergic reactions including anaphylaxis . [ 2 ] It is not recommended in those with a history of penicillin allergy . [ 2 ] It is relatively safe for use during pregnancy . [ 3 ] It is in the penicillin and beta lactam family of medications. [ 4 ] It usually results in bacterial death . [ 4 ]
Phenoxymethylpenicillin was first made in 1948 by Eli Lilly . [ 5 ] : 121 It is on the World Health Organization's List of Essential Medicines . [ 6 ] It is available as a generic medication . [ 4 ] In 2022, it was the 259th most commonly prescribed medication in the United States, with more than 1 million prescriptions. [ 7 ] [ 8 ]
Specific uses for phenoxymethylpenicillin include: [ 9 ] [ 10 ]
Penicillin V is sometimes used in the treatment of odontogenic infections. [ citation needed ]
It is less active than benzylpenicillin (penicillin G) against Gram-negative bacteria . [ 11 ] [ 12 ] Phenoxymethylpenicillin has a range of antimicrobial activity against Gram-positive bacteria that is similar to that of benzylpenicillin and a similar mode of action, but it is substantially less active than benzylpenicillin against Gram-negative bacteria . [ 11 ] [ 12 ]
Phenoxymethylpenicillin is more acid-stable than benzylpenicillin, which allows it to be given orally. [ citation needed ]
Phenoxymethylpenicillin is usually used only for the treatment of mild to moderate infections, and not for severe or deep-seated infections since absorption can be unpredictable. Except for the treatment or prevention of infection with Streptococcus pyogenes (which is uniformly sensitive to penicillin), therapy should be guided by bacteriological studies (including sensitivity tests) and by clinical response. [ 13 ] People treated initially with parenteral benzylpenicillin may continue treatment with phenoxymethylpenicillin by mouth once a satisfactory response has been obtained. [ 9 ]
It is not active against beta-lactamase -producing bacteria, which include many strains of Staphylococci . [ 13 ]
Phenoxymethylpenicillin is usually well tolerated but may occasionally cause transient nausea , vomiting, epigastric distress, diarrhea , constipation, acidic smell to urine and black hairy tongue . A previous hypersensitivity reaction to any penicillin is a contraindication . [ 9 ] [ 13 ]
The mechanism of phenoxymethylpenicillin is identical to that of all other penicillins. It exerts a bactericidal action against penicillin-sensitive microorganisms during the stage of active multiplication. It acts by inhibiting the biosynthesis of cell-wall peptidoglycan . [ 14 ]
The Austrian pharmaceutical company Biochemie was founded in Kundl in July 1946 at the site of a derelict brewery, at the suggestion of a French officer, Michel Rambaud (a chemist), who was able to obtain a small amount of Penicillium start culture from France. Contamination of the fermentation tanks was a persistent problem and in 1951, the company biologist, Ernst Brandl , attempted to solve this by adding phenoxyethanol to the tanks as an anti-bacterial disinfectant. This unexpectedly resulted in an increase in penicillin production; the penicillin produced, however, was not benzylpenicillin but phenoxymethylpenicillin. Phenoxyethanol was fermented to phenoxyacetic acid [ 16 ] in the tanks, which was then incorporated into penicillin via biosynthesis. Importantly, Brandl realised that phenoxymethylpenicillin is not destroyed by stomach acid and can therefore be given by mouth. Phenoxymethylpenicillin was originally discovered by Eli Lilly in 1948 as part of their efforts to study penicillin precursors, but was not further exploited, and there is no evidence that Lilly understood the significance of their discovery at the time. [ 5 ] : 119–121 [ 17 ]
Biochemie is part of Sandoz . [ citation needed ]
There were four named penicillins at the time penicillin V was discovered ( penicillins I, II, III, and IV ); however, penicillin V was named "V" for Vertraulich (German for confidential ); [ 5 ] : 121 it was not named for the Roman numeral "5".
Penicillin VK is the potassium salt of penicillin V (K is the chemical symbol for potassium). [ citation needed ] | https://en.wikipedia.org/wiki/Phenoxymethylpenicillin |
The enzyme phenylalanine racemase ( EC 5.1.1.11 , phenylalanine racemase , phenylalanine racemase (adenosine triphosphate-hydrolysing) , gramicidin S synthetase I ) is an enzyme that acts on amino acids and derivatives. It activates both the L- and D- stereoisomers of phenylalanine to form L-phenylalanyl adenylate and D-phenylalanyl adenylate, which are bound to the enzyme. These bound compounds are then transferred to the thiol group of the enzyme, followed by conversion of configuration, the D-isomer being the more favorable configuration of the two, with a 7 to 3 ratio between the two isomers. The racemisation reaction of phenylalanine is coupled with the highly favorable hydrolysis of adenosine triphosphate (ATP) to adenosine monophosphate (AMP) and pyrophosphate (PP), thermodynamically allowing it to proceed. This reaction is then drawn forward by further hydrolyzing PP to inorganic phosphate (P i ), via Le Chatelier's principle .
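As a rough, back-of-the-envelope illustration of this coupling (assuming the 7:3 D:L ratio quoted above reflects the equilibrium of the enzyme-bound isomers at 25 °C, and using common textbook values for the hydrolysis free energies, neither of which is stated in this article):

```latex
% Illustrative estimate only; assumptions are stated in the preceding paragraph.
\Delta G^{\circ}_{\mathrm{rac}} = -RT\ln\frac{[\mathrm{D}]}{[\mathrm{L}]}
  \approx -(8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})\ln\frac{7}{3}
  \approx -2.1\ \mathrm{kJ\,mol^{-1}},
\qquad
\Delta G^{\circ\prime}_{\mathrm{ATP\,\to\,AMP\,+\,PP_i}} \approx -45\ \mathrm{kJ\,mol^{-1}}.
```

The racemisation step itself is thus close to thermoneutral; it is the large negative free energy of ATP hydrolysis (and of the subsequent hydrolysis of pyrophosphate) that pulls the overall reaction forward, as described above.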
Problems in the conversion of phenylalanine (Phe) to tyrosine (Tyr) lead to the buildup of both Phe and phenylpyruvate, in a disease called phenylketonuria (PKU). These two compounds build up in the bloodstream and cerebrospinal fluid, which can lead to mental retardation if left untreated. Treatment consists of a restricted diet of foods that contain Phe or compounds that can break down into Phe. Children in the US are routinely tested for this at birth. For more information, see the Phenylketonuria page.
Compound C00079 at KEGG Pathway Database. Compound C00002 at KEGG Pathway Database. Enzyme 5.1.1.11 at KEGG Pathway Database. Compound C00020 at KEGG Pathway Database. Compound C00013 at KEGG Pathway Database. Compound C00001 at KEGG Pathway Database. Reaction R00686 at KEGG Pathway Database. Pathway MAP00360 at KEGG Pathway Database. Compound C00018 at KEGG Pathway Database. | https://en.wikipedia.org/wiki/Phenylalanine_racemase_(ATP-hydrolysing)
Phenylboronic acid , or benzeneboronic acid , abbreviated as PhB(OH) 2 where Ph is the phenyl group C 6 H 5 -, is a boronic acid containing a phenyl substituent and two hydroxyl groups attached to boron . Phenylboronic acid is a white powder and is commonly used in organic synthesis . Boronic acids are mild Lewis acids which are generally stable and easy to handle, making them important to organic synthesis.
Phenylboronic acid is soluble in most polar organic solvents and is poorly soluble in hexanes and carbon tetrachloride . This planar compound has idealized C 2V molecular symmetry . The boron atom is sp 2 -hybridized and contains an empty p-orbital . The orthorhombic crystals use hydrogen bonding to form units made up of two molecules. [ 3 ] These dimeric units are combined to give an extended hydrogen-bonded network . The molecule is planar with a minor bend around the C-B bond of 6.6° and 21.4° for the two PhB(OH) 2 molecules. [ 4 ]
Numerous methods exist to synthesize phenylboronic acid. One of the most common synthesis uses phenylmagnesium bromide and trimethyl borate to form the ester PhB(OMe) 2 , which is then hydrolyzed to the product. [ 5 ]
Other routes to phenylboronic acid involve electrophilic borates to trap phenylmetal intermediates from phenyl halides or from directed ortho- metalation . [ 4 ] Phenylsilanes and phenylstannanes transmetalate with BBr 3 , followed by hydrolysis form phenylboronic acid. Aryl halides or triflates can be coupled with diboronyl reagents using transition metal catalysts. Aromatic C-H functionalization can also be done using transition metal catalysts .
The dehydration of boronic acids gives boroxines , the trimeric anhydrides of phenylboronic acid. The dehydration reaction is driven thermally, sometimes with a dehydration agent . [ 6 ]
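Schematically, this trimerization to triphenylboroxine can be written as the following equilibrium, which is pulled to the right as water is removed thermally or by a dehydrating agent:

```latex
3\,\mathrm{PhB(OH)_2} \;\rightleftharpoons\; \mathrm{(PhBO)_3} + 3\,\mathrm{H_2O}
```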
Phenylboronic acid participates in numerous cross coupling reactions where it serves as a source of a phenyl group. One example is the Suzuki reaction where, in the presence of a Pd(0) catalyst and base, phenylboronic acid and vinyl halides are coupled to produce phenyl alkenes . [ 7 ] This method was generalized to a route producing biaryls by coupling phenylboronic acid with aryl halides.
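In schematic form (with R–X standing for the vinyl or aryl halide and the inorganic boron- and halide-containing by-products omitted), the coupling described above can be sketched as:

```latex
\mathrm{PhB(OH)_2} \;+\; \mathrm{R\!-\!X}
  \;\xrightarrow[\text{base}]{\mathrm{Pd(0)\ cat.}}\;
  \mathrm{Ph\!-\!R}
```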
C-C bond forming processes commonly use phenylboronic acid as a reagent. Alpha-amino acids can be generated using the uncatalyzed reaction between alpha-ketoacids , amines , and phenylboronic acid. [ 8 ] Heck-type cross coupling of phenylboronic acid and alkenes and alkynes has been demonstrated. [ 9 ]
Aryl azides and nitroaromatics can also be generated from phenylboronic acid. [ 4 ] Phenylboronic acid can also be regioselectively halodeboronated using aqueous bromine , chlorine , or iodine ; for example, with bromine: PhB(OH) 2 + Br 2 + H 2 O → PhBr + B(OH) 3 + HBr. [ 10 ]
Boronic esters result from the condensation of boronic acids with alcohols . This transformation is simply the replacement of the hydroxyl groups by alkoxy or aryloxy groups. [ 4 ] This reversible reaction is commonly driven toward the product by the use of a Dean-Stark apparatus or a dehydrating agent to remove water.
As an extension of this reactivity, PhB(OH) 2 can be used as a protecting group for diols and diamines . This reactivity is the basis of phenylboronic acid's use as a receptor and sensor for carbohydrates, antimicrobial agents, and enzyme inhibitors , as well as in neutron capture therapy for cancer , transmembrane transport , and the bioconjugation and labeling of proteins and cell surfaces. [ 4 ] | https://en.wikipedia.org/wiki/Phenylboronic_acid
The biosynthesis of phenylpropanoids involves a number of enzymes.
In plants, all phenylpropanoids are derived from the amino acids phenylalanine and tyrosine .
Phenylalanine ammonia-lyase (PAL, a.k.a. phenylalanine/tyrosine ammonia-lyase) is an enzyme that transforms L- phenylalanine and tyrosine into trans- cinnamic acid and p -coumaric acid , respectively.
Trans-cinnamate 4-monooxygenase (cinnamate 4-hydroxylase) is the enzyme that transforms trans-cinnamate into 4-hydroxycinnamate ( p -coumaric acid). 4-Coumarate-CoA ligase is the enzyme that transforms 4-coumarate ( p -coumaric acid) into 4-coumaroyl-CoA . [ 1 ]
These enzymes conjugate phenylpropanoids to other molecules.
An alternative bacterial ketosynthase -directed stilbenoids biosynthesis pathway exists in Photorhabdus bacterial symbionts of Heterorhabditis nematodes, producing 3,5-dihydroxy-4-isopropyl-trans-stilbene for antibiotic purposes. [ 2 ]
4-Coumaroyl-CoA can be combined with malonyl-CoA to yield the true backbone of flavonoids, a group of compounds called chalconoids , which contain two phenyl rings. Naringenin-chalcone synthase is an enzyme that catalyzes the following conversion: 4-coumaroyl-CoA + 3 malonyl-CoA → naringenin chalcone + 3 CO 2 + 4 CoA.
Conjugate ring-closure of chalcones results in the familiar form of flavonoids , the three-ringed structure of a flavone . | https://en.wikipedia.org/wiki/Phenylpropanoids_metabolism |
Phenylpropanolamine ( PPA ), sold under many brand names, is a sympathomimetic agent used as a decongestant and appetite suppressant . [ 9 ] [ 1 ] [ 10 ] [ 11 ] It was once common in prescription and over-the-counter cough and cold preparations . The medication is taken orally . [ 4 ] [ 12 ]
Side effects of phenylpropanolamine include increased heart rate and blood pressure . [ 13 ] [ 14 ] [ 15 ] [ 12 ] Rarely, PPA has been associated with hemorrhagic stroke . [ 11 ] [ 16 ] [ 13 ] PPA acts as a norepinephrine releasing agent , indirectly activating adrenergic receptors . [ 17 ] [ 18 ] [ 19 ] As such, it is an indirectly acting sympathomimetic . [ 17 ] [ 18 ] [ 19 ] [ 10 ] It was once thought to act as a sympathomimetic with additional direct agonist action on adrenergic receptors, but this proved wrong. [ 17 ] [ 18 ] [ 19 ] Chemically, phenylpropanolamine is a substituted amphetamine and is closely related to ephedrine , pseudoephedrine , amphetamine , and cathinone . [ 20 ] [ 21 ] [ 22 ] [ 11 ] It is usually a racemic mixture of the (1 R ,2 S )- and (1 S ,2 R )- enantiomers of β-hydroxyamphetamine and is also known as dl -norephedrine. [ 21 ] [ 9 ] [ 1 ]
Phenylpropanolamine was first synthesized around 1910 and its effects on blood pressure were characterized around 1930. [ 21 ] [ 11 ] It was introduced as a medicine by the 1930s. [ 23 ] [ 11 ] It was withdrawn from many markets starting in 2000 after it was found to be associated with an increased risk of hemorrhagic stroke. [ 23 ] [ 11 ] It was previously available both over-the-counter and by prescription . [ 23 ] [ 2 ] [ 24 ] [ 25 ] Phenylpropanolamine is available for human and/or veterinary use in some countries. [ 2 ]
Phenylpropanolamine is used as a decongestant to treat nasal congestion . [ 13 ] [ 14 ] It has also been used to suppress appetite and promote weight loss in the treatment of obesity and has shown effectiveness for this indication. [ 26 ] [ 27 ] [ 28 ]
Phenylpropanolamine was previously available in the United States over-the-counter and in certain combination drug forms by prescription . [ 24 ] [ 25 ] One such example of the latter was a combination of phenylpropanolamine and chlorpheniramine , which combined decongestant and antihistamine effects, marketed by Tutag as ' Vernate' . These forms have all been discontinued in the U.S., [ 24 ] [ 25 ] [ 2 ] although phenylpropanolamine remains available in some countries. [ 2 ]
Phenylpropanolamine produces sympathomimetic effects and can cause side effects such as increased heart rate and blood pressure . [ 13 ] [ 14 ] [ 15 ] [ 12 ] It has been associated rarely with incidence of hemorrhagic stroke . [ 23 ] [ 16 ] [ 13 ]
Certain drugs increase the chances of déjà vu occurring in the user, resulting in a strong sensation that an event or experience currently being experienced has already been experienced in the past. Some pharmaceutical drugs, when taken together, have also been implicated in the cause of déjà vu . [ 29 ] The Journal of Clinical Neuroscience reported the case of an otherwise healthy male who started experiencing intense and recurrent sensations of déjà vu upon taking the drugs amantadine and phenylpropanolamine together to relieve flu symptoms. [ 30 ] He found the experience so interesting that he completed the full course of his treatment and reported it to the psychologists to write up as a case study. Because of the dopaminergic action of the drugs and previous findings from electrode stimulation of the brain, [ 31 ] it was speculated that déjà vu occurs as a result of hyperdopaminergic action in the mesial temporal areas of the brain.
There has been very little research on drug interactions with phenylpropanolamine. [ 4 ] In one study, phenylpropanolamine taken with caffeine was found to quadruple caffeine levels. [ 4 ] In another study, phenylpropanolamine reduced theophylline clearance by 50%. [ 4 ]
Phenylpropanolamine acts primarily as a selective norepinephrine releasing agent . [ 19 ] It also acts as a dopamine releasing agent with around 10-fold lower potency . [ 19 ] The stereoisomers of the drug have only weak or negligible affinity for α- and β-adrenergic receptors . [ 19 ]
Phenylpropanolamine was originally thought to act as a direct agonist of adrenergic receptors and hence to act as a mixed-acting sympathomimetic. [ 21 ] [ 22 ] However, phenylpropanolamine was subsequently found to show only weak or negligible affinity for these receptors and has instead been characterized as an exclusively indirectly acting sympathomimetic. [ 10 ] [ 17 ] [ 18 ] [ 19 ] It acts by inducing norepinephrine release and thereby indirectly activating adrenergic receptors. [ 17 ] [ 18 ] [ 19 ]
Many sympathetic hormones and neurotransmitters are based on the phenethylamine skeleton, and function generally in "fight or flight" type responses, such as increasing heart rate, blood pressure, dilating the pupils, increased energy, drying of mucous membranes, increased sweating, and a significant number of additional effects. [ citation needed ]
Phenylpropanolamine has relatively low potency as a sympathomimetic. [ 21 ] It is about 100 to 200 times less potent than epinephrine (adrenaline) or norepinephrine (noradrenaline) in its sympathomimetic effects, although responses are variable depending on tissue . [ 21 ]
Phenylpropanolamine is readily and well absorbed with oral administration . [ 6 ] [ 7 ] [ 5 ] Immediate-release forms of the drug reach peak levels about 1.5 hours (range 1.0 to 2.3 hours) following administration. [ 4 ] [ 7 ] Conversely, extended-release forms of phenylpropanolamine reach peak levels after 3.0 to 4.5 hours. [ 4 ] The pharmacokinetics of phenylpropanolamine are linear across an oral dose range of 25 to 100 mg. [ 4 ] Steady-state levels of phenylpropanolamine are achieved within 12 hours when the drug is taken once every 4 hours. [ 4 ] There is 62% accumulation of phenylpropanolamine at steady state in terms of peak levels, whereas area-under-the-curve levels are not increased at steady state. [ 4 ]
The volume of distribution of phenylpropanolamine is 3.0 to 4.5 L/kg. [ 4 ] Levels of phenylpropanolamine in the brain are about 40% of those in the heart and 20% of those in the lungs . [ 6 ] The hydroxyl group of phenylpropanolamine at the β carbon increases its hydrophilicity , reduces its permeation through the blood–brain barrier , and limits its central nervous system (CNS) effects. [ 6 ] Hence, phenylpropanolamine crosses into the brain only to some extent, has only weak CNS effects, and most of its effects are peripheral. [ 14 ] [ 6 ] [ 5 ] [ 21 ] In any case, phenylpropanolamine can produce amphetamine -like psychostimulant effects at very high doses. [ 21 ] [ 6 ] [ 5 ] Phenylpropanolamine is more lipophilic than structurally related sympathomimetics with hydroxyl groups on the phenyl ring like epinephrine (adrenaline) and phenylephrine and has greater brain permeability than these agents. [ 5 ] [ 22 ]
The plasma protein binding of phenylpropanolamine is approximately 20%. [ 5 ] [ 4 ] However, it has been said that no recent studies have substantiated this value. [ 4 ]
Phenylpropanolamine is not substantially metabolized . [ 7 ] [ 5 ] It also does not undergo significant first-pass metabolism . [ 7 ] Only about 3 to 4% of an oral dose of phenylpropanolamine is metabolized. [ 5 ] Metabolites include hippuric acid (via oxidative deamination of the side chain ) and 4-hydroxynorephedrine (via para - hydroxylation ). [ 4 ] [ 5 ] [ 6 ] The methyl group at the α carbon of phenylpropanolamine blocks metabolism by monoamine oxidases (MAOs). [ 6 ] [ 5 ] [ 14 ] Phenylpropanolamine is also not a substrate of catechol O -methyltransferase . [ 14 ] The hydroxyl group at the β carbon of phenylpropanolamine also helps to increase metabolic stability . [ 5 ]
Approximately 90% of a dose of phenylpropanolamine is excreted in the urine unchanged within 24 hours. [ 4 ] [ 6 ] [ 7 ] [ 5 ] About 4% of excreted material is in the form of metabolites . [ 4 ]
The elimination half-life of immediate-release phenylpropanolamine is about 4 hours, with a range in different studies of 3.7 to 4.9 hours. [ 6 ] [ 7 ] [ 4 ] The half-life of extended-release phenylpropanolamine has ranged from 4.3 to 5.8 hours. [ 4 ]
The elimination of phenylpropanolamine is dependent on urinary pH . [ 4 ] [ 5 ] At a more acidic urinary pH, the elimination of phenylpropanolamine is accelerated and its half-life and duration are shortened, whereas at more basic urinary pH, the elimination of phenylpropanolamine is reduced and its half-life and duration are extended. [ 5 ] [ 4 ] Urinary acidifying agents like ascorbic acid and ammonium chloride can increase the excretion of and thereby reduce exposure to amphetamines including phenylpropanolamine, whereas urinary alkalinizing agents including antacids like sodium bicarbonate as well as acetazolamide can reduce the excretion of these agents and thereby increase exposure to them. [ 36 ] [ 5 ] [ 37 ]
Total body clearance of phenylpropanolamine has been reported to be 0.546 L/h/kg, while renal clearance was 0.432 L/h/kg. [ 4 ]
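As a rough consistency check (a back-of-the-envelope calculation using the figures quoted in this section, not a value reported in the cited studies), the quoted clearance and volume of distribution imply an elimination half-life in the same range as the values given above:

```latex
t_{1/2} \;=\; \frac{\ln 2 \cdot V_d}{CL}
  \;\approx\; \frac{0.693 \times (3.0\text{--}4.5\ \mathrm{L/kg})}{0.546\ \mathrm{L\,h^{-1}\,kg^{-1}}}
  \;\approx\; 3.8\text{--}5.7\ \mathrm{h}.
```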
As phenylpropanolamine is not extensively metabolized, it would probably not be affected by hepatic impairment . [ 4 ] Conversely, there is likely to be accumulation of phenylpropanolamine with renal impairment due to its dependence on urinary excretion. [ 4 ]
Norephedrine is a minor metabolite of amphetamine and methamphetamine , as shown below. [ 4 ] It is also a minor metabolite of ephedrine and a major metabolite of cathinone . [ 4 ] [ 6 ] [ 5 ]
Phenylpropanolamine, also known as (1 RS ,2 SR )-α-methyl-β-hydroxyphenethylamine or as (1 RS ,2 SR )-β-hydroxyamphetamine, is a substituted phenethylamine and amphetamine derivative . [ 9 ] [ 20 ] [ 49 ] It is closely related to the cathinones (β-ketoamphetamines). [ 20 ] β-Hydroxyamphetamine exists as four stereoisomers , which include d - ( dextrorotatory ) and l -norephedrine ( levorotatory ), and d - and l -norpseudoephedrine . [ 49 ] [ 10 ] d -Norpseudoephedrine is also known as cathine , [ 9 ] [ 49 ] and is found naturally in Catha edulis ( khat ). [ 50 ] Pharmaceutical drug preparations of phenylpropanolamine have varied in their stereoisomer composition in different countries, which may explain differences in misuse and side effect profiles. [ 10 ] In any case, racemic dl -norephedrine, or (1 RS ,2 SR )-phenylpropanolamine, appears to be the most commonly used formulation of phenylpropanolamine pharmaceutically. [ 21 ] [ 9 ] [ 1 ] Analogues of phenylpropanolamine include ephedrine , pseudoephedrine , amphetamine , methamphetamine , and cathinone . [ 20 ]
Phenylpropanolamine, structurally, is in the substituted phenethylamine class, consisting of a cyclic benzene or phenyl group, a two carbon ethyl moiety, and a terminal nitrogen, hence the name phen-ethyl-amine . [ 51 ] The methyl group on the alpha carbon (the first carbon before the nitrogen group) also makes this compound a member of the substituted amphetamine class. [ 51 ] Ephedrine is the N -methyl analogue of phenylpropanolamine.
Exogenous compounds in this family are degraded too rapidly by monoamine oxidase to be active at all but the highest doses. [ 51 ] However, the addition of the α-methyl group allows the compound to avoid metabolism and confer an effect. [ 51 ] In general, N -methylation of primary amines increases their potency, whereas β-hydroxylation decreases CNS activity, but conveys more selectivity for adrenergic receptors. [ 51 ]
Phenylpropanolamine is a small-molecule compound with the molecular formula C 9 H 13 NO and a molecular weight of 151.21 g/mol. [ 52 ] [ 8 ] It has an experimental log P of 0.67, while its predicted log P values range from 0.57 to 0.89. [ 52 ] [ 8 ] The compound is relatively lipophilic , [ 5 ] but is also more hydrophilic than other amphetamines. [ 6 ] The lipophilicity of amphetamines is closely related to their brain permeability. [ 53 ] For comparison to phenylpropanolamine, the experimental log P of methamphetamine is 2.1, [ 54 ] of amphetamine is 1.8, [ 55 ] [ 54 ] of ephedrine is 1.1, [ 56 ] of pseudoephedrine is 0.7, [ 57 ] of phenylephrine is -0.3, [ 58 ] and of norepinephrine is -1.2. [ 59 ] Methamphetamine has high brain permeability, [ 54 ] whereas phenylephrine and norepinephrine are peripherally selective drugs . [ 60 ] [ 61 ] The optimal log P for brain permeation and central activity is about 2.1 (range 1.5–2.7). [ 62 ]
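The comparison in the preceding paragraph can be summarized programmatically; the short Python sketch below simply tabulates the log P values quoted above against the stated optimal window of about 1.5–2.7 (the values and window are taken from the text, and the script itself is illustrative rather than part of any cited source):

```python
# log P values quoted in the paragraph above (experimental values; illustrative only)
log_p = {
    "methamphetamine": 2.1,
    "amphetamine": 1.8,
    "ephedrine": 1.1,
    "pseudoephedrine": 0.7,
    "phenylpropanolamine": 0.67,
    "phenylephrine": -0.3,
    "norepinephrine": -1.2,
}

OPTIMAL = (1.5, 2.7)  # stated optimal log P range for brain permeation

# Print the compounds from most to least lipophilic, flagging the optimal window
for name, value in sorted(log_p.items(), key=lambda kv: kv[1], reverse=True):
    status = "within" if OPTIMAL[0] <= value <= OPTIMAL[1] else "outside"
    print(f"{name:20s} log P = {value:5.2f}  ({status} the optimal range)")
```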
Phenylpropanolamine has been used pharmaceutically exclusively as the hydrochloride salt . [ 9 ] [ 1 ]
Phenylpropanolamine was first synthesized in the early 20th century, in or around 1910. [ 21 ] [ 11 ] It was patented as a mydriatic in 1913. [ 21 ] The pressor effects of phenylpropanolamine were characterized in the late 1920s and the 1930s. [ 21 ] Phenylpropanolamine was first introduced for medical use by the 1930s. [ 23 ] [ 11 ]
In the United States, phenylpropanolamine is no longer sold due to an increased risk of haemorrhagic stroke . [ 16 ] In a few countries in Europe , however, it is still available either by prescription or sometimes over-the-counter. In Canada , it was withdrawn from the market on 31 May 2001. [ 63 ] It was voluntarily withdrawn from the Australian market by July 2001. [ 64 ] In India , human use of phenylpropanolamine and its formulations was banned on 10 February 2011, [ 65 ] but the ban was overturned by the judiciary in September 2011. [ 66 ]
Phenylpropanolamine is the generic name of the drug and its INN (International Nonproprietary Name), BAN (British Approved Name), and DCF (Dénomination Commune Française), while phenylpropanolamine hydrochloride is its USAN (United States Adopted Name) and BANM (British Approved Name) in the case of the hydrochloride salt . [ 9 ] [ 1 ] [ 10 ] [ 2 ] It is also known by the synonym norephedrine . [ 9 ] [ 1 ] [ 2 ]
Brand names of phenylpropanolamine include Acutrim, Appedrine, Capton Diet, Control, Dexatrim , Emagrin Plus A.P., Glifentol, Kontexin, Merex , Monydrin, Mydriatine, Prolamine, Propadrine, Propagest, Recatol, Rinexin, Tinaroc, and Westrim, among many others. [ 9 ] [ 1 ] [ 2 ] It has also been used in combinations under brand names including Allerest , Demazin , Dimetapp , and Sinarest, among others. [ 1 ] [ 2 ]
Phenylpropanolamine is available for medical and veterinary use in some countries. [ 1 ] [ 2 ]
There has been interest in phenylpropanolamine as a performance-enhancing drug in exercise and sports . [ 67 ] However, clinical studies suggest that phenylpropanolamine is not effective in this regard. [ 67 ] [ 6 ] Phenylpropanolamine is not on the World Anti-Doping Agency (WADA) list of prohibited substances as of 2024. [ 68 ]
In Sweden, phenylpropanolamine is still available in prescription decongestants. [ 69 ] It is also still available in Germany, where it is used in some polypill medications such as Wick DayMed capsules.
In the United Kingdom, phenylpropanolamine was available in many "all in one" cough and cold medications, which usually also contained paracetamol (also known as acetaminophen) or another analgesic together with caffeine; it could also be purchased on its own. It is no longer approved for human use, however, and a European Category 1 Licence is required to purchase or acquire phenylpropanolamine for academic or research use.
In the United States, the Food and Drug Administration (FDA) issued a public health advisory [ 70 ] in November 2000 recommending against the use of the drug. In this advisory, the FDA requested, but did not require, that all drug companies discontinue marketing products containing phenylpropanolamine. The agency estimated that phenylpropanolamine caused between 200 and 500 strokes per year among 18-to-49-year-old users. In 2005, the FDA removed phenylpropanolamine from over-the-counter sale and removed its " generally recognized as safe and effective " (GRASE) status. [ 71 ] Under the 2020 CARES Act , phenylpropanolamine requires FDA approval before it can be marketed again, effectively banning the drug, even as a prescription product. [ 72 ]
Because of its potential use in amphetamine manufacture, phenylpropanolamine is controlled by the Combat Methamphetamine Epidemic Act of 2005 . However, it is still available for veterinary use in dogs as a treatment for urinary incontinence .
Internationally, an item on the agenda of the 2000 Commission on Narcotic Drugs session called for including the stereoisomer norephedrine in Table I of the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances . [ 73 ]
Drugs containing phenylpropanolamine were banned in India on 27 January 2011. [ 74 ] On 13 September 2011, Madras High Court revoked a ban on the manufacture and sale of pediatric drugs phenylpropanolamine and nimesulide . [ 75 ]
Phenylpropanolamine is available for use in veterinary medicine . [ 25 ] It is used to control urinary incontinence in dogs. [ 76 ] [ 77 ]
In June 2024, the US Food and Drug Administration (FDA) approved phenylpropanolamine hydrochloride chewable tablets for the control of urinary incontinence due to a weakening of the muscles that control urination (urethral sphincter hypotonus) in dogs. [ 78 ] [ 79 ] [ 80 ] These are the first generic phenylpropanolamine hydrochloride chewable tablets for dogs. [ 78 ]
Urinary incontinence happens when a dog loses its ability to control when it urinates. [ 78 ] Urinary incontinence due to urethral sphincter hypotonus can occur as a dog ages and the muscle in its urethra (the tube that leads from the bladder to outside the body) weakens and loses its ability to hold urine. [ 78 ]
Phenylpropanolamine hydrochloride chewable tablets contain the same active ingredient (phenylpropanolamine hydrochloride) in the same concentration and dosage form as the approved brand name drug product, Proin chewable tablets, which were first approved in August 2011. [ 78 ] In addition, the FDA determined that phenylpropanolamine hydrochloride chewable tablets contain no inactive ingredients that may significantly affect the bioavailability of the active ingredient. [ 78 ] | https://en.wikipedia.org/wiki/Phenylpropanolamine
1-Phenyl-2-propylaminopentane ( PPAP ), also known as α, N -dipropylphenethylamine ( DPPEA ) and by the developmental code name MK-306 , is an experimental drug related to selegiline which acts as a catecholaminergic activity enhancer (CAE). [ 1 ] [ 2 ] [ 3 ] [ 4 ]
PPAP is a CAE and enhances the nerve impulse propagation-mediated release of norepinephrine and dopamine . [ 1 ] [ 3 ] [ 4 ] [ 5 ] It produces psychostimulant -like effects in animals. [ 4 ] The drug is a phenethylamine and amphetamine derivative and was derived from selegiline. [ 3 ] [ 4 ]
PPAP was first described in the literature in 1988 [ 6 ] and in the first major paper in 1992. [ 4 ] [ 7 ] It led to the development of the improved monoaminergic activity enhancer (MAE) benzofuranylpropylaminopentane (BPAP) in 1999. [ 1 ] [ 3 ] PPAP was a reference compound for studying the MAE system for many years. [ 1 ] [ 2 ] [ 3 ] However, it was superseded by BPAP, which is more potent , selective , and also enhances serotonin . [ 8 ] [ 1 ] [ 2 ] [ 3 ] [ 9 ] [ 10 ] There has been interest in PPAP for potential clinical use in humans, including in the treatment of depression , attention deficit hyperactivity disorder (ADHD), and Alzheimer's disease . [ 4 ]
PPAP is classified as a catecholaminergic activity enhancer (CAE), a drug that stimulates the impulse propagation-mediated release of the catecholamine neurotransmitters norepinephrine and dopamine in the brain . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 11 ]
Unlike stimulants such as amphetamine , which release a flood of monoamine neurotransmitters in an uncontrolled manner, (–)-PPAP instead only increases the amount of neurotransmitters that get released when a neuron is stimulated by receiving an impulse from a neighboring neuron. [ 11 ] [ 5 ] Both amphetamine and (–)-PPAP promote the release of monoamines; however, while amphetamine causes neurons to release neurotransmitter stores into the synapse regardless of external input, (–)-PPAP does not alter the pattern of neurotransmitter release but instead causes a larger amount of neurotransmitter than normal to be released when the neuron fires. [ 11 ] [ 5 ]
Recent findings have suggested that known synthetic monoaminergic activity enhancers (MAEs) like PPAP, BPAP, and selegiline may exert their effects via trace amine-associated receptor 1 (TAAR1) agonism . [ 12 ] [ 13 ] This was evidenced by the TAAR1 antagonist EPPTB reversing the MAE effects of BPAP and selegiline, among other findings. [ 12 ] [ 13 ] Another compound, rasagiline , has likewise been found to reverse the effects of MAEs, and has been proposed as a possible TAAR1 antagonist. [ 13 ]
The therapeutic index for PPAP in animal models is greater than that of amphetamine while producing comparable improvements in learning , retention , and antidepressant effects. [ 4 ] It has been found to reduce deficits induced by the dopamine depleting agent tetrabenazine in the shuttle box learning test in rats. [ 4 ] [ 14 ]
PPAP and selegiline are much less potent than BPAP as MAEs. [ 3 ] [ 10 ] Whereas PPAP and selegiline are active at doses of 1 to 5 mg/kg in vivo in rats, BPAP is active at doses of 0.05 to 10 mg/kg. [ 3 ] BPAP is 130 times as potent as selegiline in the shuttle box test. [ 1 ] In contrast to BPAP however, the MAE effects of PPAP and selegiline are not reversed by the BPAP antagonist 3-F-BPAP . [ 2 ] In addition, whereas PPAP and selegiline are selective as MAEs of norepinephrine and dopamine, BPAP is a MAE of not only norepinephrine and dopamine but also of serotonin . [ 1 ] [ 10 ] [ 2 ] [ 4 ]
Unlike the related CAE selegiline , (–)-PPAP has no activity as a monoamine oxidase inhibitor . [ 8 ] [ 15 ]
PPAP, also known as α, N -dipropylphenethylamine (DPPEA) or as α-desmethyl-α, N -dipropylamphetamine, is a substituted phenethylamine and amphetamine derivative . [ 4 ] It was derived from structural modification of selegiline ( L -deprenyl; ( R )-(–)- N ,α-dimethyl- N -2-propynylphenethylamine). [ 4 ]
Both racemic PPAP and subsequently its more active (–)- or (2 R )- enantiomer (–)-PPAP have been employed in the literature. [ 4 ] [ 14 ] [ 1 ] [ 2 ] [ 5 ] [ 16 ]
PPAP is similar in chemical structure to propylamphetamine ( N -propylamphetamine; NPA; PAL-424), but has an α- propyl chain instead of an α- methyl group . It is also similar in structure to α-propylphenethylamine (APPEA; PAL-550), but has an N -propyl chain instead of no substitution. PPAP can be thought of as the combined derivative of NPA and APPEA. NPA and APPEA are known to be low-potency dopamine reuptake inhibitors ( IC 50 (half-maximal inhibitory concentration) = 1,013 nM and 2,596 nM, respectively) and are inactive as dopamine releasing agents in vitro . [ 17 ] Another similar analogue of PPAP is N ,α-diethylphenethylamine (DEPEA), which is a norepinephrine–dopamine releasing agent and/or reuptake inhibitor . [ 18 ] [ 19 ] [ 12 ] A more well-known derivative of APPEA related to PPAP is the cathinone pentedrone (α-propyl-β-keto- N -methylphenethylamine), which is a norepinephrine–dopamine reuptake inhibitor.
A related MAE, BPAP, is a substituted benzofuran derivative and tryptamine relative that was derived from structural modification of PPAP. [ 1 ] It was developed by replacement of the benzene ring in PPAP with a benzofuran ring. [ 10 ] [ 20 ] Another related MAE, indolylpropylaminopentane (IPAP), is a tryptamine derivative that is the analogue of PPAP in which the benzene ring has been replaced with an indole ring. [ 20 ] [ 12 ] [ 13 ]
PPAP (MK-306) and its (–)-enantiomer (–)-PPAP must not be confused with the sigma receptor ligand R (−)- N -(3-phenyl-n-propyl)-1-phenyl-2-aminopropane ((–)-PPAP—same acronym) [ 21 ] or with the cephamycin antibiotic cefoxitin (MK-306—same developmental code name). [ 22 ] [ 23 ] [ 24 ]
Racemic PPAP (MK-306) was first described in the scientific literature in 1988 [ 6 ] and a series of papers characterizing it were published in the early 1990s. [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 7 ] [ 4 ] [ 31 ] The first major paper on the drug was published in 1992. [ 4 ] It was synthesized by József Knoll and colleagues. [ 7 ] [ 4 ] The potencies of the different enantiomers of PPAP were assessed in 1994. [ 14 ] Subsequent papers have employed (–)-PPAP. [ 1 ] [ 2 ] [ 5 ] [ 16 ]
Several patents covering PPAP have been published. [ 32 ] [ 33 ] [ 34 ]
The development of PPAP was critical in elucidating that the CAE effects of selegiline are unrelated to its monoamine oxidase inhibition . [ 8 ] [ 1 ] [ 2 ] [ 3 ] For many years, PPAP served as a reference compound in studying MAEs. [ 1 ] [ 2 ] [ 3 ] However, it was eventually superseded by BPAP, which was discovered in 1999. [ 8 ] [ 1 ] [ 2 ] [ 3 ] [ 9 ] [ 10 ] This MAE is more potent and selective than PPAP and, in contrast to PPAP and selegiline, also enhances serotonin . [ 8 ] [ 1 ] [ 2 ] [ 3 ] [ 9 ]
PPAP has been proposed as a potential therapeutic agent for attention deficit hyperactivity disorder (ADHD), Alzheimer's disease , and depression based on preclinical findings . [ 4 ] The developers of PPAP attempted to have it clinically studied, but were unsuccessful and it was never assessed in humans. [ 1 ] | https://en.wikipedia.org/wiki/Phenylpropylaminopentane |
Phenytoin ( PHT ), sold under the brand name Dilantin among others, [ 1 ] is an anti-seizure medication . [ 3 ] It is useful for the prevention of tonic-clonic seizures (also known as grand mal seizures) and focal seizures , but not absence seizures . [ 3 ] The intravenous form, fosphenytoin , is used for status epilepticus that does not improve with benzodiazepines . [ 3 ] It may also be used for certain heart arrhythmias or neuropathic pain . [ 3 ] It can be taken intravenously or by mouth. [ 3 ] The intravenous form generally begins working within 30 minutes and is effective for roughly 24 hours. [ 4 ] Blood levels can be measured to determine the proper dose. [ 3 ]
Common side effects include nausea, stomach pain, loss of appetite, poor coordination, increased hair growth , and enlargement of the gums . [ 3 ] Potentially serious side effects include sleepiness , self harm , liver problems, bone marrow suppression , low blood pressure , toxic epidermal necrolysis , [ 3 ] and atrophy of the cerebellum . [ 6 ] [ 7 ] [ 8 ] There is evidence that use during pregnancy results in abnormalities in the baby. [ 3 ] It appears to be safe to use when breastfeeding . [ 3 ] Alcohol may interfere with the medication's effects. [ 3 ]
Phenytoin was first made in 1908 by the German chemist Heinrich Biltz and found useful for seizures in 1936. [ 9 ] [ 10 ] It is on the World Health Organization's List of Essential Medicines . [ 11 ] Phenytoin is available as a generic medication . [ 12 ] In 2020, it was the 260th most commonly prescribed medication in the United States, with more than 1 million prescriptions. [ 13 ] [ 14 ]
Though phenytoin has been used to treat seizures in infants, as of 2023, its effectiveness in this age group has been evaluated in only one study. Due to the lack of a comparison group, the evidence is inconclusive. [ 17 ]
Severe low blood pressure and abnormal heart rhythms can be seen with rapid infusion of IV phenytoin. IV infusion should not exceed 50 mg/min in adults or 1–3 mg/kg/min (or 50 mg/min, whichever is slower) in children. Heart monitoring should occur during and after IV infusion. Due to these risks, oral phenytoin should be used if possible. [ 21 ]
At therapeutic doses, phenytoin may produce nystagmus on lateral gaze. At toxic doses, patients experience vertical nystagmus, double vision , sedation , slurred speech, cerebellar ataxia , and tremor . [ 22 ] If phenytoin is stopped abruptly, this may result in increased seizure frequency, including status epilepticus . [ 21 ] [ 20 ]
Phenytoin may accumulate in the cerebral cortex over long periods of time, which can cause atrophy of the cerebellum . The degree of atrophy is related to the duration of phenytoin treatment and is not related to the dosage of the medication. [ 23 ]
Phenytoin is known to be a causal factor in the development of peripheral neuropathy . [ 24 ]
Folate is present in food in a polyglutamate form, which is then converted into monoglutamates by intestinal conjugase to be absorbed by the jejunum. Phenytoin acts by inhibiting this enzyme, thereby causing folate deficiency , and thus megaloblastic anemia . [ 25 ] Other side effects may include: agranulocytosis , [ 26 ] aplastic anemia , [ 27 ] decreased white blood cell count , [ 28 ] and a low platelet count . [ 29 ]
Phenytoin is a known teratogen : children exposed to phenytoin are at a higher risk of birth defects than children born to women without epilepsy or to women with untreated epilepsy. [ 30 ] [ 31 ] The birth defects, which occur in approximately 6% of exposed children, include neural tube defects , heart defects and craniofacial abnormalities , including broad nasal bridge, cleft lip and palate, and smaller than normal head . [ 31 ] [ 32 ] The effect on IQ cannot be determined, as no study involves phenytoin as monotherapy; however, poorer language abilities and delayed motor development may be associated with maternal use of phenytoin during pregnancy. [ 30 ] This syndrome resembles the well-described fetal alcohol syndrome [ 33 ] and has been referred to as " fetal hydantoin syndrome ". Some recommend avoiding polytherapy and maintaining the minimal dose possible during pregnancy, but acknowledge that current data fail to demonstrate a dose effect on the risk of birth defects. [ 30 ] [ 31 ] Data now being collected by the Epilepsy and Antiepileptic Drug Pregnancy Registry may one day answer this question definitively.
There is no good evidence to suggest that phenytoin is a human carcinogen . [ 34 ] [ 35 ] However, lymph node abnormalities have been observed, including malignancies. [ 36 ]
Phenytoin has been associated with drug-induced gingival enlargement (overgrowth of the gums), probably due to the above-mentioned folate deficiency; indeed, evidence from a randomized controlled trial suggests that folic acid supplementation can prevent gingival enlargement in children who take phenytoin. [ 37 ] Plasma concentrations needed to induce gingival lesions have not been clearly defined. Effects consist of the following: bleeding upon probing, increased gingival exudate , and a pronounced gingival inflammatory response to plaque levels, associated in some instances with bone loss but without tooth detachment.
Hypertrichosis , Stevens–Johnson syndrome , purple glove syndrome , rash, exfoliative dermatitis , itching , excessive hairiness , and coarsening of facial features can be seen in those taking phenytoin.
Phenytoin therapy has been linked to the life-threatening skin reactions Stevens–Johnson syndrome (SJS) and toxic epidermal necrolysis (TEN). These conditions are significantly more common in patients with a particular HLA-B allele , HLA-B*1502 . [ 38 ] This allele occurs almost exclusively in patients with ancestry across broad areas of Asia, including South Asian Indians.
Phenytoin is primarily metabolized to its inactive form by the enzyme CYP2C9 . Variations within the CYP2C9 gene that result in decreased enzymatic activity have been associated with increased phenytoin concentrations, as well as reports of drug toxicities due to these increased concentrations. [ 39 ] The U.S. Food and Drug Administration (FDA) notes on the phenytoin drug label that since strong evidence exists linking HLA-B*1502 with the risk of developing SJS or TEN in patients taking carbamazepine , consideration should be given to avoiding phenytoin as an alternative to carbamazepine in patients carrying this allele. [ 40 ]
Phenytoin has been known to cause drug-induced lupus . [ 41 ]
Phenytoin is also associated with induction of reversible IgA deficiency . [ 42 ]
Phenytoin may increase the risk of suicidal thoughts or behavior. People on phenytoin should be monitored for any changes in mood, the development or worsening of depression, and/or any thoughts or behavior of suicide. [ 20 ]
Chronic phenytoin use has been associated with decreased bone density and increased bone fractures. Phenytoin induces metabolizing enzymes in the liver, which leads to increased metabolism of vitamin D and thus decreased vitamin D levels. Vitamin D deficiency , as well as low calcium and phosphate in the blood, causes decreased bone mineral density. [ 20 ]
Phenytoin is an inducer of the CYP3A4 and CYP2C9 families of the P450 enzyme responsible for the liver's degradation of various drugs. [ 43 ]
A 1981 study by the National Institutes of Health showed that antacids administered concomitantly with phenytoin "altered not only the extent of absorption but also appeared to alter the rate of absorption. Antacids administered in a peptic ulcer regimen may decrease the AUC of a single dose of phenytoin. Patients should be cautioned against concomitant use of antacids and phenytoin." [ 44 ]
Warfarin and trimethoprim increase serum phenytoin levels and prolong the serum half-life of phenytoin by inhibiting its metabolism. Consider using other options if possible. [ 45 ]
Phenytoin is believed to protect against seizures by causing voltage-dependent block of voltage-gated sodium channels . [ 46 ] This blocks sustained high-frequency repetitive firing of action potentials . This is accomplished by reducing the amplitude of sodium-dependent action potentials through enhancing steady-state inactivation. Sodium channels exist in three main conformations: the resting state, the open state, and the inactive state.
Phenytoin binds preferentially to the inactive form of the sodium channel. Because it takes time for the bound drug to dissociate from the inactive channel, there is a time-dependent block of the channel. Since the fraction of inactive channels is increased by membrane depolarization as well as by repetitive firing, the binding to the inactive state by phenytoin sodium can produce voltage-dependent, use-dependent and time-dependent block of sodium-dependent action potentials. [ 47 ]
The primary site of action appears to be the motor cortex where spread of seizure activity is inhibited. [ 48 ] Possibly by promoting sodium efflux from neurons, phenytoin tends to stabilize the threshold against hyperexcitability caused by excessive stimulation or environmental changes capable of reducing membrane sodium gradient. This includes the reduction of post-tetanic potentiation at synapses which prevents cortical seizure foci from detonating adjacent cortical areas. Phenytoin reduces the maximal activity of brain stem centers responsible for the tonic phase of generalized tonic-clonic seizures. [ 21 ]
Phenytoin elimination kinetics show mixed-order, non-linear behaviour at therapeutic concentrations. At low concentrations phenytoin is cleared by first-order kinetics , and at high concentrations by zero-order kinetics . A small increase in dose may therefore lead to a large increase in drug concentration as elimination becomes saturated. The time to reach steady state is often longer than 2 weeks. [ 49 ] [ 50 ] [ 51 ] [ 52 ]
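This concentration-dependent behaviour is conventionally described with Michaelis–Menten elimination kinetics; the following is a generic sketch of that model (no phenytoin-specific parameter values are implied by the text above):

-\frac{dC}{dt} = \frac{V_{\max} \, C}{K_m + C}

When the concentration C is well below K_m, the rate reduces to approximately (V_{\max}/K_m)\,C and elimination is first-order; when C is well above K_m, the rate approaches the constant V_{\max} and elimination is zero-order. Because therapeutic phenytoin concentrations approach the saturating regime, a small increase in dose can produce a disproportionately large rise in steady-state concentration, as noted above.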
Phenytoin (diphenylhydantoin) was first synthesized by German chemist Heinrich Biltz in 1908. [ 53 ] Biltz sold his discovery to Parke-Davis, which did not find an immediate use for it. In 1938, other physicians, including H. Houston Merritt and Tracy Putnam , discovered phenytoin's usefulness for controlling seizures, without the sedative effects associated with phenobarbital . [ 54 ]
According to Goodman and Gilman's Pharmacological Basis of Therapeutics :
In contrast to the earlier accidental discovery of the antiseizure properties of potassium bromide and phenobarbital, phenytoin was the product of a search among nonsedative structural relatives of phenobarbital for agents capable of suppressing electroshock convulsions in laboratory animals. [ 55 ]
It was approved by the FDA in 1953 for use in seizures. [ citation needed ]
Jack Dreyfus , founder of the Dreyfus Fund , became a major proponent of phenytoin as a means to control nervousness and depression when he received a prescription for Dilantin in 1966. He has claimed to have supplied large amounts of the drug to Richard Nixon throughout the late 1960s and early 1970s, although this is disputed by former White House aides [ 56 ] and Presidential historians. [ 57 ] Dreyfus' experience with phenytoin is outlined in his book, A Remarkable Medicine Has Been Overlooked . [ 58 ] Despite more than $70 million in personal financing, his push to see phenytoin evaluated for alternative uses has had little lasting effect on the medical community. This was partially because Parke-Davis was reluctant to invest in a drug nearing the end of its patent life, and partially due to mixed results from various studies. [ citation needed ]
In 2008, the drug was put on the FDA's Potential Signals of Serious Risks List to be further evaluated for approval. The list identifies medications with which the FDA has identified potential safety issues, but has not yet identified a causal relationship between the drug and the listed risk. To address this concern, the Warnings and Precautions section of the labeling for Dilantin injection was updated to include additional information about Purple glove syndrome in November 2011. [ 59 ]
Phenytoin is available as a generic medication. [ 12 ]
Since September 2012, the marketing licence in the UK has been held by Flynn Pharma Ltd, of Dublin, Ireland , and the product, although identical, has been called Phenytoin Sodium xx mg Flynn Hard Capsules. (The xx mg in the name refers to the strength—for example "Phenytoin sodium 25 mg Flynn Hard Capsules"). [ 60 ] The capsules are still made by Pfizer 's Goedecke subsidiary's plant in Freiburg , Germany, and they still have Epanutin printed on them. [ 61 ] After Pfizer's sale of the UK marketing licence to Flynn Pharma, the price of a 28-pack of 25 mg phenytoin sodium capsules marked Epanutin rose from 66p (about $0.88) to £15.74 (about $25.06). Capsules of other strengths also went up in price by the same factor—2,384%, [ 62 ] costing the UK's National Health Service an extra £43 million (about $68.44 million) a year. [ 63 ] The companies were referred to the Competition and Markets Authority (CMA) who found that they had exploited their dominant position in the market to charge "excessive and unfair" prices. [ 64 ]
The CMA imposed a record £84.2 million fine on the manufacturer Pfizer, and a £5.2 million fine on the distributor Flynn Pharma and ordered the companies to reduce their prices. [ 65 ]
Phenytoin is marketed under many brand names worldwide. [ 1 ]
In the US, Dilantin is marketed by Viatris after Upjohn was spun off from Pfizer. [ 66 ] [ 67 ] [ 68 ]
Tentative evidence suggests that topical phenytoin is useful in wound healing in people with chronic skin wounds. [ 69 ] [ 70 ] A meta-analysis also supported the use of phenytoin in managing various ulcers. [ 71 ] Phenytoin is incorporated into compounded medications to optimize wound treatment, often in combination with misoprostol . [ 72 ] [ 73 ]
Some clinical trials have explored whether phenytoin can be used as a neuroprotective agent in multiple sclerosis . [ 74 ] | https://en.wikipedia.org/wiki/Phenytoin
Pheophorbide or phaeophorbide is a product of chlorophyll breakdown and a derivative of pheophytin where both the central magnesium has been removed and the phytol tail has been hydrolyzed . It is used as a photosensitizer in photodynamic therapy . [ 1 ]
Pheophorbide may be generated by digestion of ingested plant matter. Both worm ( Caenorhabditis elegans ) and mouse mitochondria are able to use the molecule in a form of ad hoc photoheterotrophy . [ 2 ]
| https://en.wikipedia.org/wiki/Pheophorbide
Pherecydes of Syros ( / f ə ˈ r ɛ s ɪ ˌ d iː z / ; Ancient Greek : Φερεκύδης ὁ Σύριος ; fl. 6th century BCE) was an Ancient Greek mythographer and proto- philosopher from the island of Syros . Little is known about his life and death. Some ancient testimonies counted Pherecydes among the Seven Sages of Greece , although he is generally believed to have lived in the generation after them. Others claim he may have been a teacher of Pythagoras , a student of Pittacus , or a well-traveled autodidact who had studied secret Phoenician books .
Pherecydes wrote a book on cosmogony , known as the "Pentemychos" [ a ] or "Heptamychos" [ b ] . He was considered the first writer to communicate philosophical ideas in prose as opposed to verse. Other than a few short fragments preserved in quotations from other ancient philosophers and a long fragment discovered on an Egyptian papyrus, however, his work is lost. It nonetheless survived into the Hellenistic period, and a significant amount of its content can be conjectured indirectly through ancient testimonies. His cosmogony was derived from three divine principles: Zas ( Life ), Chthonie ( Earth ), and Chronos ( Time ). In the narrative, Chronos creates the Classical elements and other gods in cavities within the earth. Later, Zas defeats the dragon Ophion in a battle for supremacy and throws him into Oceanus . Zas marries Chthoniê, who then becomes the recognizable Earth ( Gê ) with forests and mountains. Chronos retires from the world as creator, and Zas succeeds him as ruler and assigns all beings their place.
Pherecydes' cosmogony forms a bridge between the mythological thought of Hesiod and pre-Socratic Greek philosophy ; Aristotle considered him one of the earliest thinkers to abandon traditional mythology in order to arrive at a systematic explanation of the world, although Plutarch , as well as many other writers, still gave him the title of theologus , as opposed to the later physiologoi of the Ionian school . Later hellenistic doxographers also considered him as one of the first thinkers to introduce a doctrine of the transmigration of souls to the Ancient Greek religion , which influenced the metempsychosis of Pythagoreanism , and the theogonies of Orphism . Various legends and miracles were ascribed to him, many of which tie him to the development of Pythagoreanism or Orphism .
Although it is relatively certain that Pherecydes was a native of the island of Syros , and that he lived in the 6th century BCE, almost nothing else is known about his life. There is even some discrepancy among the ancient sources as to when exactly within the 6th century he lived. The Suda places his date of birth during the reign of King Alyattes in Lydia (c. 605-560 BCE), [ c ] which would place him as a contemporary of the Seven Sages of Greece , [ d ] among whose number he was occasionally included. Alternatively, Apollodorus places his floruit a generation later, in the 59th Olympiad (544–541 BCE). Assuming that Pherecydes was born in this later generation, younger than the philosopher Thales (624-545 BC) and thus an older contemporary of Anaximander , he would also be approximately the correct age for the Pythagorean tradition in which he is regarded as a teacher of Pythagoras. [ 1 ] [ 2 ] Most of the other biographical information is probably fiction, and the ambiguity and contradictions in the surviving testimonies suggest that any reliable biographical data that may have existed was no longer available in the Hellenistic period. [ 3 ] The identity of Pherecydes was also unclear in ancient times because there were two authors of that name who both wrote about mythology: Pherecydes of Syros and Pherecydes of Athens (fl. 5th century BC ). [ e ]
According to a forged letter attributed to Thales, Pherecydes never traveled, [ f ] but according to other sources he traveled throughout the Greek cultural area, to Delphi , the Peloponnese , Ephesus and Samos . According to Josephus and Byzantine writers, Pherecydes also made a journey to Egypt . Such a journey, however, is a common tale that is also part of other biographies of philosophers. [ 4 ] A sun-dial ( heliotropion ), supposedly made by Pherecydes, was said by Diogenes Laërtius to be "preserved on the island of Syros." [ g ] Several miraculous deeds were also attributed to Pherecydes, such as that he accurately predicted an earthquake on Syros after drinking from a well, or that he predicted the sinking of a ship that he saw along the coast of Syros, which then proceeded to sink. In Messene he allegedly warned his friend Perilaus that the city would be conquered. Finally, Heracles was said to have visited him in a dream and told him to tell the Spartans not to value silver or gold, and that same night Heracles is said to have told the king of Sparta in his sleep to listen to Pherecydes. [ h ] Many of those miracles, however, were also attributed to other legendary philosophers such as Pythagoras or Epimenides .
There are many conflicting legends that purport to be an account of the death of Pherecydes. According to one story, Pherecydes was killed and skinned as a sacrifice by the Spartans, and their king kept the skin out of respect for Pherecydes' wisdom. [ i ] However, the same story was also told about Epimenides. [ 5 ] Other accounts have the philosopher perishing in a battle between the Ephesians and Magnesians , or throwing himself from Mount Corycus in Delphi, or succumbing to typhoid fever . [ j ] According to Aelianus, typhoid fever was a punishment for his wickedness . [ k ] The latter story was already known to Aristotle and may have arisen from the idea that wise men paid little attention to the care of their bodies. [ 6 ] Other stories connect Pherecydes' death to Pythagoras. However, the historicity of all this is debatable. [ 7 ]
Pherecydes was designated as 'wise' ( sophos ), but only Servius calls him a philosopher ( philosophus ). [ l ] Aristotle places him between theologians and philosophers, because he no longer expressed himself in purely mythical terms in his research. [ m ] No teacher of Pherecydes was consistently known by name in late antiquity ; [ 3 ] according to the doxographer Diogenes Laërtius, Pherecydes was taught by Pittacus, [ n ] but according to the Suda he taught himself after he got his hands on 'the secret books of the Phoenicians '. [ o ]
Although this latter claim is almost certainly fictitious, it may be based on the similarity between Pherecydes' ideas and Eastern religious motifs. For example, in his book he describes an important battle in the earliest times between Kronos and Ophion , and this motif occurs in the Middle East . [ 8 ] His father was named Babys, a name that presumably originated from southern Anatolia , based on linguistic evidence. [ 9 ] Eternal time as a god is also a Middle Eastern motif. [ 10 ] In addition, Pherecydes has been associated with Zoroastrianism . Isidore the Gnostic claimed that Pherecydes based his allegorical work on a 'prophecy of Ham'. [ p ] Ham, as referred to here, may be Zoroaster , who was quite well known in the Greek world of late antiquity. Isidore may have concluded this because the Zoroastrian literature available to him was influenced by Hellenization , or because Pherecydes' work influenced it. [ 11 ] There is also a short fragment in which Pherecydes talks about ambrosia of the moon, the potion of the gods. [ q ] This representation has parallels in the Samaveda , where the moon is a vessel from which the gods drink soma (the drink of the gods) and is important in reincarnation theory as guardian of heaven. [ 12 ]
Pherecydes wrote a cosmogony (explanatory model for the origin of the universe) that contained a theogony, an explanatory model for the gods and their properties. This work broke with the mythological and theological tradition and shows Eastern influences. Pherecydes, along with Anaximander and Anaximenes , has long been regarded as one of the first Greek writers to compose his work in prose rather than hexameter verse. [ 13 ] Martin Litchfield West notes that the subject matter that all three of these authors wrote on, the nature of the universe and how it came to be, had been written in verse prior to these authors. [ 13 ] West speculates, based on the word choice that early logographers used ("words I have heard" instead of "I have read"), that the original intent of a book written in prose was essentially a "write-up" of a lecture that a person interested in topics such as cosmology gave as a speech or public discourse. [ 13 ] The book was known variously under titles such as Seven niches ( Heptamychos , Ἑπτάμυχος), "Five niches" (Pentemychos, Πεντέμυχος), and Mixing of the Gods ( Theokrasia , Θεοκρασία). [ r ] [ 14 ]
In this work, Pherecydes taught his philosophy through the medium of mythic representations. Although it is lost, it was extant in the Hellenistic period , and the fragments and testimony that survive from works that describe it are enough to reconstruct a basic outline. The opening sentence is given by Diogenes Laertius, [ s ] and two fragments in the middle of the text have also been preserved in fragments from a 3rd century Egyptian papyrus discovered by Bernard Pyne Grenfell and Arthur Surridge Hunt , which was identified thanks to a comment by Clement of Alexandria about the contents of Pherecydes' book: 'Pherecydes of Syros says: "Zas made a great and beautiful robe, and made the earth and Ogenus on it, and the palace of Ogenus".' [ 15 ]
Pherecydes developed a unique, syncretistic theogony with a new beginning stage, in which Zas, Chronos, and Chthoniê, the first gods, had existed for all time. He was probably the first to do this. [ 16 ] There is no creation out of nothing ( creatio ex nihilo ). The cosmogony is justified through etymology, a new understanding of the deity Kronos as Chronos, and the insertion of a creator god ( demiurge ). Also, Pherecydes combined Greek mythology with non-Greek myths and religions. According to Aristotle, he was innovative in his approach, because he broke with the theological tradition and combined mythology with philosophy. Pherecydes' creation story therefore had to be more rational and concrete than Hesiod's Theogony . [ 17 ] He wrote that first Chaos came to be ( genetos ) without explanation, while Zas, Chronos and Chthoniê existed eternally ( êsan aeí ). The adoption of an eternal principle ( arche ) for the cosmos was characteristic of Pre-Socratic thinkers. [ 18 ]
The sequence of Pherecydes' creation myth is as follows. First, there are the eternal gods Zas (Zeus), Chthoniê (Gaia) and Chronos (Kronos). Then Chronos creates elements in niches in the earth with his seed, from which other gods arise. This is followed by the three-day wedding of Zas and Chthonie. On the third day Zas makes the robe of the world, which he hangs from a winged oak and then presents as a wedding gift to Chthonie, and wraps around her. The "winged oak" in this cosmology has no precedent in Greek tradition. [ 19 ] The stories are different but not mutually exclusive, because much is lacking in the fragments, but it seems clear that creation is hindered by chaotic forces. Before the world is ordered, a cosmic battle takes place, with Cronus as the head of one side and Ophion as the leader of the other. [ 20 ] Ophion then attacks Kronos, who defeats him and throws him into Ogenos. [ 21 ] Sometime after his battle with Ophion, Kronos is succeeded by Zas. This is implied by the fact that Zas/Zeus is ultimately the one who assigns the gods their domain in the world. For example, the Harpies are assigned to guard Tartarus. [ 21 ] The fact that Kronos disappears into the background is due to his great magnificence. The argument for this is that Aristotle conceives Pherecydes as a semi-philosopher in that he connects the philosophical Good and Beautiful with the first, prevailing principle ( arche ) of the theologians, and eternity, according to Aristotle, is connected with the good. The three primordial gods are eternal, equal and wholly responsible for the world order. [ 22 ]
Pherecydes was interested in etymology and word associations. Like Thales, he associated chaos with the primordial elemental water , presumably because he associated the word 'chaos' with the verb 'cheesthai', 'to flow out', and because chaos is an undefined, disorderly state. [ t ] By that approach he adapted the names of the gods , although Pherecydes probably saw his gods as traditional deities. [ 23 ] He called Rhea, for example, Rhê, [ u ] presumably by association with rhein , 'to flow (out)'. [ 24 ] The common names were already traditional by the 6th century BC. In addition, the deviant names do not belong to any Greek dialect; the reason for such forms is to make the names resemble other words and to construct an original form. [ 25 ]
The sequence of Pherecydes' cosmogony begins with the eternal gods Zas (Zeus), Chthoniê (Gê) and Chronos (Kronos), who "always existed." The first creation is an act of ordering in the cosmos through niches and division of the world. That creation coincides with the dichotomy of eternity-temporality and being-becoming. Chronos must step out of eternity to create, and creation means becoming. [ 39 ] Later on Plato also used the distinction between eternal being and temporal genesis. [ x ] This is opposed to the older cosmogony of Hesiod (8th–7th century BCE) where the initial state of the universe is Chaos, a dark void considered as a divine primordial condition and the creation is ex nihilo (out of nothing).
The titles Penta -/ Heptamychos and Theokrasia of the work indicate that niches ( mychoi ) and mixing are an important part of the creation story. [ 40 ] Pherecydes first identified five niches ( mychoi ). If there were five niches in the story, they correspond to the five parts ( moirai ) of the cosmos: the sea, underworld and heaven (the Homeric three-part division), plus the earth and Mount Olympus . Therefore Damascius calls the five niches 'five worlds' and the Suda mentions the alternative title Pentamychos . [ 41 ] Once Chronos fills them to create the worlds, they turn into the five cosmic regions ( moirai ): Uranus ("heaven"), Tartarus, Chaos, Ether/Aer ("sky") and Nyx ("night"). According to Porphyry, there were all kinds of caves and gates in the world. In classical antiquity caves were associated with sexuality and birth. However, the niches here are not stone caves in mountains, because the world has yet to be shaped; they are cavities in the still primitive, undifferentiated mass of the Earth. At an early stage, Chronos creates with his seed the three elements fire, air ( pneuma ) and water. The Earth element already existed with Chthoniê. Warmth, humidity and 'airiness' were, according to ancient Greek medicine, three properties of seed, and through those principles the embryo developed. [ 42 ] The first three concepts are traditional and appear in the Pherecydes fragments (e.g. fragment DK 7 B4 below). Later writers such as Probus and Hermias equated Pherecydes' Zas with Aether, because Zeus is the Greek sky god and would thus have had Aether as his domain. The title Heptamychos in the Suda is explained by including Gê and Ogenos (hepta = seven). Pherecydes writes that Tartarus lies below the earth ("gê"), so that gê is considered a separate region that could be seen. [ 43 ]
Fire, air and water are placed in the niches by Chronos and mixed ( krasis ). Mixing elements in five niches only makes sense if the mixtures are in different proportions. Contrary to the later philosophy of Anaxagoras , it is not the world that is created from the mixtures but a second generation of gods ( theokrasia ), including Ophion. The gods so formed derive their characteristics from the dominant element in each mixture and are possibly associated with the five regions. [ 41 ] [ 44 ] The elements may also be a later, Stoic reinterpretation of the text, as the elements, especially air/pneuma, appear anachronistic and fit within Aristotelian and Stoic physiology . In that case Chronos' seed would go straight into the niches. This representation is possible because, in a scholium on the Iliad , for example, it is said that Chronos smeared two eggs with his seed and gave them to Hera. She had to keep the eggs underground ( kata gês ) so that Typhon, the enemy of Zeus, was born. Typhon is a parallel of Pherecydes' serpent god Ophion.
It is quite possible that in the course of the theogony the primeval trio changed into the traditional Zeus, Kronos and Hera. Such changes have Orphic parallels: Rhea is Demeter after she becomes Zeus' mother, [ y ] and Phanes simultaneously becomes Zeus and Eros . [ 46 ] In Pherecydes, Chthoniê becomes Gê through marriage, after which she becomes the protector of the marriage, and that was traditionally the domain of Hera. Hera is also associated with the earth in some sources. [ z ] [ 47 ]
The marriage of the gods is a union ( hieros gamos ) in which Zas makes a robe ( pharos ) depicting Gaia and Ogenos. This is an allegory for the acts of creation ( mellonta dêmiourgein ). Zas is a demiurge and creates by turning into Eros. [ aa ] The robe is a covering of Chthoniê, the earth's mass, who thus takes as her domain the varied surface of the earth and the encircling ocean. [ 48 ] [ 49 ] The marriage is also etiological , because it explains the origin of the ritual unveiling of the bride ( anakalypteria ). [ 49 ] The cloth makes Chthoniê vivid and alive. She is the base matter, but Gê is the form of it. [ 50 ]
The robe hangs on a winged oak. [ ab ] This passage is unique and has several interpretations. [ 44 ] The robust oak was traditionally dedicated to Zeus and presumably indicates the solid structure and foundation of the earth. The roots and branches support the earth's surface. Below is Tartarus, and above it, according to Hesiod, grow "the roots of the earth and the barren sea". [ ac ] Pherecydes followed this archaic representation. [ ad ] The wings refer to the broad spreading branches of the oak. Over this hangs the cloth, which as the earth's surface is thus both smooth and varied in shape. [ 51 ] [ 52 ] The robe as a mythical image for the earth's surface also appears in some Orphic texts. In the Homeric Hymn to Demeter , Persephone is weaving a rich robe representing the cosmos when she is carried off by Hades to the underworld. Finally, the proverb 'The face of the earth is the garment of Persephone' is in the style of early Pythagoreans, who had sayings like 'tears of Zeus' for rain and 'The sea is the tear of Kronos'. [ 53 ]
The mythical images of the tree as an earthly structure and a robe as a gift at marriage have Greek cultic counterparts. In Plataeae , for example, the Daedala festival was celebrated, in which an oak was cut down to make a statue of a girl dressed as a bride. [ 54 ] Zeus gave Persephone Sicily or Thebes , while Cadmus gave a robe to Harmonia . Still, the images may be oriental in origin. [ 55 ] There are Mesopotamian parallels in which a palace with a complex of spaces reserved for the bride and groom is built. There are also myths such as the one in which Anu takes heaven as his portion, whereupon Enlil takes the earth and gives it as a dowry to Ereshkigal , 'mistress of the great deep' ( chthoniê ). [ 56 ]
Pherecydes described a battle between Kronos and Ophion similar to that of Zeus and Typhon in Hesiod's older "Theogony". The stakes of the battle are cosmic supremacy, and the conflict is reminiscent of the Titanomachy and Gigantomachy of traditional theogony, in which successive conflicts between gods are described, with the current world order as the result. In Pherecydes' cosmogony, however, no initial chaos or tyranny is overcome, followed by the establishment of a new order. The creative gods are eternal and co-equal. Their order is temporarily threatened by Ophion, but that threat becomes a (re)affirmation of the divine order, with Kronos as the first king. [ ae ] [ 57 ] The battle is also etiological, for it explained the myths about ancient sea monsters in Greece as well as in Asia Minor and the Middle East . [ 58 ] The battle is described by Celsus [ af ] :
'Pherecydes told the myth that an army was lined up against army, and he mentioned Kronos as leader of one, Ophion of the other, and he related their challenges and struggles, and that they agreed that the one who fell into Ogenos was the loser, while those who cast them out and conquered should possess the sky'.
Chronos has become Kronos here. Presumably, as a prominent second creator, Zas also participates in the battle, after which he becomes Zeus. [ 59 ] Ophion did not exist from the beginning but was born and had progeny of his own ( Ophionidai ). [ ag ] He is serpentine, because his name is derived from ophis , 'snake'. Traditionally, Gaia (Gê) was regarded as the mother of Typhon, and Chthoniê/Gê may be the mother of Ophion here. She may also have produced Ophion on her own in Tartarus, the cave under the earth. [ 60 ] Typhon also originated in a cave. Otherwise the father may be Chronos, because his seed is in the niches of the earth. [ 61 ]
Ophion and his brood are often depicted as ruling the birthing cosmos for some time before falling from power. The chaotic forces are eternal and cannot be destroyed; instead they are thrown out from the ordered world and locked away in Tartaros in a kind of "apportionment of the spheres", in which the victor (Zeus-Cronus) takes possession of the sky and of space and time. [ 62 ] Cronus (or Zeus in the more popularly known version) orders the offspring out from the cosmos to Tartaros. There they are kept behind locked gates, fashioned in iron and bronze. We are told about chaotic beings put into the pentemychos, and we are told that the Darkness has an offspring that is cast into the recesses of Tartaros. No surviving fragment makes the connection, but it is possible that the prison-house in Tartaros and the pentemychos are ways of referring to essentially the same thing. According to Celsus , Pherecydes said that: "Below that portion is the portion of Tartaros; the daughters of Boreas (the north wind), the Harpies and Thuella (Storm), guard it; there Zeus banished any of the gods whenever one behaves with insolence." [ 34 ] Thus the identity between Zeus' prison-house and the pentemychos seems likely. Judging from some ancient fragments, Ophion is thrown into Oceanus , not into Tartaros. Exactly what entities or forces were locked away in Pherecydes' story cannot be known for sure. There may have been five principal figures. Ophion and Typhon are one and the same, and Eurynome fought on the side of Ophion against Cronus. [ 20 ] Chthonie is a principal "thing" of the underworld, but whether she is to be counted as one of the five or as the five "sum-total" is an open question. Apart from these it is known that Ophion-Typhon mated with Echidna , and that Echidna herself was somehow mysteriously "produced" by Callirhoe . If Pherecydes counted five principal entities in association with the pentemychos doctrine, then Ophion, Eurynome, Echidna, Callirhoe and Chthonie are the main contenders.
Pherecydes is seen as a transitional figure between the mythological cosmogonies of Hesiod and the first pre-Socratic philosophers. Aristotle wrote in his Metaphysics [ ah ] that Pherecydes was partially a mythological writer, and Plutarch, in his Parallel Lives , [ ai ] instead wrote of him as a theologian. Pherecydes contributed to pre-Socratic philosophy of nature by denying creation out of nothing and by describing the mixture of three elements. Mixture ( krasis ) plays a role in later cosmologies, such as those of Anaxagoras and Plato ( Timaeus ) and in the Orphic poem Krater attributed to the Pythagorean philosopher Zopyrus of Tarentum. [ 63 ]
Of all the philosophers who were historical predecessors of Pythagoras, Pherecydes was the one most often linked with him as one of his teachers. [ 64 ] Not many prose treatises existed in the 6th century, and Pythagoras may have learned of Pherecydes' work and adopted the idea of reincarnation. [ 65 ] In Pythagoras' youth, when he still lived on Samos , he is said to have visited Pherecydes on Delos and later buried him. [ aj ] An early variant of this story places this event later in Pythagoras' life, when he lived in Croton . His visit to the sick Pherecydes was used to explain his absence during Cylon's rebellion in that city. [ 66 ] These stories may have evolved from the story that Pythagoras was a student of Pherecydes. [ 67 ] According to Apollonius, Pythagoras imitated Pherecydes in his 'miracles'. [ ak ] [ 68 ] The historicity of the connection between the two has been debated, however, because their philosophies are otherwise unrelated, and because all kinds of teachers have been attributed to Pythagoras over time. [ 64 ] The confusion among later authors about the attribution of the miracles can perhaps be traced back to the poem of Ion of Chios . [ 69 ] [ al ] Aristotle nevertheless stated in the 4th century BC that both were friends, and the story about their friendship certainly dates back to the 5th century BC . It is believed that both philosophers once met. [ 64 ]
Pherecydes' book was thought to have contained a mystical esoteric teaching, treated allegorically. A comparatively large number of sources say Pherecydes was the first to teach the Pythagorean doctrine of metempsychosis , the transmigration of human souls. [ 70 ] [ 71 ] Both Cicero and Augustine thought of him as having given the first teaching of the "immortality of the soul". [ 40 ] The Christian Apponius mentioned Pherecydes' belief in metempsychosis in his argument against murder and executions, because a good life is rewarded and a bad life is punished in the afterlife. [ am ] The Middle Platonist Numenius , like Apponius, referred to the idea that the soul enters the body through the seed, and mentions a river in Pherecydes' representation of the underworld. [ an ] The Neoplatonist Porphyry added 'corners, pits, caves, doors and gates' through which souls travel. [ ao ] Finally, the orator Themistius reported that Pherecydes, like Pythagoras, considered killing a great sin. [ ap ] This suggests that impure deeds must be expiated in a next life or after death. [ 72 ] Pherecydes may have regarded the soul as at least an immortal part of the sky or aether . [ 73 ] [ 74 ] That he was the first to teach such a thing is doubtful, but Schibli concludes that Pherecydes likely "included in his book ["Pentemychos"] at least a rudimentary treatment of the immortality of the soul, its wanderings in the underworld, and the reasons for the soul's incarnations". [ 75 ]
The theogony of Pherecydes also shows similarities with Orphic theogonies such as the Orphic Hymns . Both feature primordial serpents, the weaving of a cosmic robe and eternal Time as a god who creates with his own seed by masturbation. [ 76 ] [ 77 ] [ 78 ] Such Orphic aspects also appear in Epimenides' Theogony . Pherecydes probably influenced the early Orphics, or possibly an earlier sect of Orphic practitioners influenced him. [ 79 ] The battle between Kronos and Ophion also influenced the Bibliotheca of pseudo-Apollodorus, who drew on several previous theogonies, such as those of Hesiod and the Orphic religion. The story was also a source for the Argonautica by Apollonius of Rhodes , in which Orpheus sings about Ophion and Eurynome, who were overthrown by Kronos and Rhea. The association of Kronos with Chronos by the Greeks can probably also be traced back to Pherecydes. [ 80 ] There are also many significant parallels between Pherecydes's cosmogony, Orphic theogonies, and the preserved accounts of Zoroastrian, Phoenician and Vedic cosmogonies. [ 81 ] According to West, these myths have a common source that originates in the Levant . The basic form is as follows. In the beginning there is no heaven and no earth, but a limitless abyss of water, shrouded in deep darkness. This condition has existed for centuries. Then the hermaphrodite and eternal Time makes love to itself and thus produces an egg. From that egg appears a radiant creator god, who makes heaven and earth out of it. [ 78 ]
In the Diels-Kranz numbering for testimony and fragments of Pre-Socratic philosophy , Pherecydes of Syros is catalogued as number 7 . The most recent edition of this catalogue is | https://en.wikipedia.org/wiki/Pherecydes_of_Syros |
A pheromone (from Ancient Greek φέρω ( phérō ) ' to bear ' and hormone ) is a secreted or excreted chemical factor [ jargon ] that triggers a social response in members of the same species . Pheromones are chemicals capable of acting like hormones outside the body of the secreting individual, to affect the behavior of the receiving individuals. [ 1 ] There are alarm pheromones , food trail pheromones , sex pheromones , and many others that affect behavior or physiology. Pheromones are used by many organisms, from basic unicellular prokaryotes to complex multicellular eukaryotes . [ 2 ] Their use among insects has been particularly well documented. In addition, some vertebrates , plants and ciliates communicate by using pheromones. The ecological functions and evolution of pheromones are a major topic of research in the field of chemical ecology . [ 3 ]
The portmanteau word "pheromone" was coined by Peter Karlson and Martin Lüscher in 1959, based on the Greek φέρω phérō ( ' I carry ' ) and ὁρμων hórmōn ( ' stimulating ' ). [ 4 ] Pheromones are also sometimes classified as ecto-hormones ("ecto-" meaning "outside" [ 5 ] ). They were researched earlier by various scientists, including Jean-Henri Fabre , Joseph A. Lintner , Adolf Butenandt , and the ethologist Karl von Frisch , who called them by various names, such as "alarm substances". These chemical messengers are transported outside of the body and affect neurocircuits , including the autonomic nervous system , with hormone- or cytokine-mediated physiological changes, inflammatory signaling, immune system changes and/or behavioral change in the recipient. [ 6 ] Karlson and Lüscher proposed the term to describe chemical signals from conspecifics that elicit innate behaviors, soon after the German biochemist Adolf Butenandt had characterized the first such chemical, bombykol , a chemically well-characterized pheromone released by the female silkworm to attract mates. [ 7 ]
Aggregation pheromones function in mate choice , overcoming host resistance by mass attack, and defense against predators. A group of individuals at one location is referred to as an aggregation, whether consisting of one sex or both sexes. Male-produced sex attractants have been called aggregation pheromones, because they usually result in the arrival of both sexes at a calling site and increase the density of conspecifics surrounding the pheromone source. Most sex pheromones are produced by the females; only a small percentage of sex attractants are produced by males. [ 8 ] Aggregation pheromones have been found in members of the Coleoptera , Collembola , [ 9 ] Diptera , Hemiptera , Dictyoptera , and Orthoptera . In recent decades, aggregation pheromones have proven useful in the management of many pests, such as the boll weevil ( Anthonomus grandis ), the pea and bean weevil ( Sitona lineatus ), and stored product weevils (e.g. Sitophilus zeamais , Sitophilus granarius , and Sitophilus oryzae ). Aggregation pheromones are among the most ecologically selective pest suppression methods. They are non-toxic and effective at very low concentrations. [ 10 ]
Some species release a volatile substance when attacked by a predator that can trigger flight (in aphids ) or aggression (in ants , bees , termites , and wasps ) [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] in members of the same species. For example, Vespula squamosa use alarm pheromones to alert others to a threat. [ 16 ] In Polistes exclamans , alarm pheromones are also used as an alert to incoming predators. [ 17 ] Pheromones also exist in plants: Certain plants emit alarm pheromones when grazed upon, resulting in tannin production in neighboring plants. [ 18 ] These tannins make the plants less appetizing to herbivores . [ 18 ]
An alarm pheromone has been documented in a mammalian species. Alarmed pronghorn ( Antilocapra americana ) flare their white rump hair and expose two highly odoriferous glands that release a compound described as having an odor "reminiscent of buttered popcorn". This sends a message to other pronghorns, by both sight and smell, about a present danger. The scent has been detected by humans 20 to 30 meters downwind from alarmed animals. The major odour compound identified from this gland is 2-pyrrolidinone . [ 19 ]
In insects, epideictic pheromones are distinct from territorial pheromones. Fabre observed and noted how "females who lay their eggs in these fruits deposit these mysterious substances in the vicinity of their clutch to signal to other females of the same species they should clutch elsewhere." It may be helpful to note that the word epideictic , having to do with display or show (from the Greek 'deixis'), has a different but related meaning in rhetoric, the human art of persuasion by means of words.
Laid down in the environment, territorial pheromones mark the boundaries and identity of an organism's territory. Cats and dogs deposit these pheromones by urinating on landmarks that mark the perimeter of the claimed territory. In social seabirds, the preen gland is used to mark nests, nuptial gifts, and territory boundaries with behavior formerly described as ' displacement activity '. [ 21 ]
Social insects commonly use trail pheromones. For example, ants mark their paths with pheromones consisting of volatile hydrocarbons . Certain ants lay down an initial trail of pheromones as they return to the nest with food. This trail attracts other ants and serves as a guide. [ 22 ] As long as the food source remains available, visiting ants will continuously renew the pheromone trail. The pheromone requires continuous renewal because it evaporates quickly. When the food supply begins to dwindle, the trail-making ceases. Pharaoh ants ( Monomorium pharaonis ) mark trails that no longer lead to food with a repellent pheromone, which causes avoidance behaviour in ants. [ 23 ] Repellent trail markers may help ants to undertake more efficient collective exploration. [ 24 ] The army ant Eciton burchellii provides an example of using pheromones to mark and maintain foraging paths. When species of wasps such as Polybia sericea found new nests, they use pheromones to lead the rest of the colony to the new nesting site.
Gregarious caterpillars, such as the forest tent caterpillar , lay down pheromone trails that are used to achieve group movement. [ 25 ]
In animals, sex pheromones indicate the availability of the female for breeding. Male animals may also emit pheromones that convey information about their species and genotype .
At the microscopic level, a number of bacterial species (e.g. Bacillus subtilis , Streptococcus pneumoniae , Bacillus cereus ) release specific chemicals into the surrounding media to induce the "competent" state in neighboring bacteria. [ 26 ] Competence is a physiological state that allows bacterial cells to take up DNA from other cells and incorporate this DNA into their own genome, a sexual process called transformation.
Among eukaryotic microorganisms, pheromones promote sexual interaction in numerous species. [ 27 ] These species include the yeast Saccharomyces cerevisiae , the filamentous fungi Neurospora crassa and Mucor mucedo , the water mold Achlya ambisexualis , the aquatic fungus Allomyces macrogynus , the slime mold Dictyostelium discoideum , the ciliate protozoan Blepharisma japonicum and the multicellular green algae Volvox carteri . In addition, male copepods can follow a three-dimensional pheromone trail left by a swimming female, and male gametes of many animals use a pheromone to help find a female gamete for fertilization . [ 28 ]
Many well-studied insect species, such as the ant Leptothorax acervorum , the moths Helicoverpa zea and Agrotis ipsilon , the bee Xylocopa sonorina , the frog Pseudophryne bibronii , and the butterfly Edith's checkerspot release sex pheromones to attract a mate, and some lepidopterans (moths and butterflies) can detect a potential mate from as far away as 10 km (6.2 mi). [ 29 ] [ 30 ] Some insects, such as ghost moths , use pheromones during lek mating . [ 31 ] Traps containing pheromones are used by farmers to detect and monitor insect populations in orchards. In addition, Colias eurytheme butterflies release pheromones, an olfactory cue important for mate selection. [ 32 ] In mealworm beetles, Tenebrio molitor , the female preference of pheromones is dependent on the nutritional condition of the males.
Hz-2V virus infection alters the reproductive physiology and behavior of female Helicoverpa zea moths: in the absence of males, infected females exhibited calling behavior and called as often as, but for shorter periods on average than, control females. Even after mating contacts, virus-infected females made frequent contacts with males and continued to call; they were found to produce five to seven times more pheromone and attracted twice as many males as did control females in flight tunnel experiments. [ 33 ]
Pheromones are also utilized by bee and wasp species. Some pheromones can be used to suppress the sexual behavior of other individuals, allowing for a reproductive monopoly – the wasp R. marginata uses this. [ 34 ] With regard to the Bombus hyperboreus species, males, otherwise known as drones, patrol circuits of scent marks (pheromones) to find queens. [ 35 ] In particular, pheromones for Bombus hyperboreus include octadecenol , 2,3-dihydro-6-transfarnesol, citronellol, and geranylcitronellol. [ 36 ]
Sea urchins release pheromones into the surrounding water, sending a chemical message that triggers other urchins in the colony to eject their sex cells simultaneously.
In plants, some homosporous ferns release a chemical called antheridiogen , which affects sex expression and acts in a manner very similar to a pheromone.
This classification, based on the effects on behavior, remains artificial. Pheromones fill many additional functions.
Releaser pheromones are pheromones that cause an alteration in the behavior of the recipient. For example, some organisms use powerful attractant molecules to attract mates from a distance of two miles or more. In general, this type of pheromone elicits a rapid response, but is quickly degraded. In contrast, a primer pheromone has a slower onset and a longer duration. For example, rabbit mothers release mammary pheromones that trigger immediate nursing behavior by their babies. [ 21 ]
Primer pheromones trigger a change of developmental events (in which they differ from all the other pheromones, which trigger a change in behavior). They were first described in Schistocerca gregaria by Maud Norris in 1954. [ 39 ]
Signal pheromones cause short-term changes, such as the neurotransmitter release that activates a response. For instance, the GnRH molecule functions as a neurotransmitter in rats to elicit lordosis behavior . [ 6 ]
The human trace amine-associated receptors are a group of six G protein-coupled receptors (i.e., TAAR1 , TAAR2 , TAAR5 , TAAR6 , TAAR8 , and TAAR9 ) that – with exception for TAAR1 – are expressed in the human olfactory epithelium . [ 40 ] In humans and other animals, TAARs in the olfactory epithelium function as olfactory receptors that detect volatile amine odorants , including certain pheromones; [ 40 ] [ 41 ] these TAARs putatively function as a class of pheromone receptors involved in the olfactive detection of social cues. [ 40 ] [ 41 ]
A review of studies involving non-human animals indicated that TAARs in the olfactory epithelium can mediate attractive or aversive behavioral responses to a receptor agonist . [ 41 ] This review also noted that the behavioral response evoked by a TAAR can vary across species (e.g., TAAR5 mediates attraction to trimethylamine in mice and aversion to trimethylamine in rats). [ 41 ] In humans, hTAAR5 presumably mediates aversion to trimethylamine, which is known to act as an hTAAR5 agonist and to possess a foul, fishy odor that is aversive to humans; [ 41 ] [ 42 ] however, hTAAR5 is not the only olfactory receptor that is responsible for trimethylamine olfaction in humans. [ 41 ] [ 42 ] As of December 2015, [update] hTAAR5-mediated trimethylamine aversion has not been examined in published research. [ 42 ]
In reptiles , amphibia and non-primate mammals pheromones are detected by regular olfactory membranes, and also by the vomeronasal organ (VNO), or Jacobson's organ, which lies at the base of the nasal septum between the nose and mouth and is the first stage of the accessory olfactory system . [ 43 ] While the VNO is present in most amphibia, reptiles, and non-primate mammals, [ 44 ] it is absent in birds , adult catarrhine monkeys (downward facing nostrils, as opposed to sideways), and apes . [ 45 ] An active role for the human VNO in the detection of pheromones is disputed; while it is clearly present in the fetus it appears to be atrophied , shrunk or completely absent in adults. Three distinct families of vomeronasal receptors , putatively pheromone sensing, have been identified in the vomeronasal organ named V1Rs, V2Rs, and V3Rs. All are G protein-coupled receptors but are only distantly related to the receptors of the main olfactory system, highlighting their different role. [ 43 ]
Olfactory processing of chemical signals like pheromones exists in all animal phyla and is thus the oldest of the senses. [ citation needed ] It has been suggested that it serves survival by generating appropriate behavioral responses to the signals of threat, sex and dominance status among members of the same species. [ 46 ]
Furthermore, it has been suggested that in the evolution of unicellular prokaryotes to multicellular eukaryotes , primordial pheromone signaling between individuals may have evolved to paracrine and endocrine signaling within individual organisms. [ 47 ]
Some authors assume that approach-avoidance reactions in animals, elicited by chemical cues, form the phylogenetic basis for the experience of emotions in humans. [ 48 ]
Mice can distinguish close relatives from more distantly related individuals on the basis of scent signals, [ 49 ] which enables them to avoid mating with close relatives and minimizes deleterious inbreeding . [ 50 ]
In addition to mice, two species of bumblebee, in particular Bombus bifarius and Bombus frigidus , have been observed to use pheromones as a means of kin recognition to avoid inbreeding. [ 51 ] For example, B. bifarius males display "patrolling" behavior in which they mark specific paths outside their nests with pheromones and subsequently "patrol" these paths. [ 51 ] Unrelated reproductive females are attracted to the pheromones deposited by males on these paths, and males that encounter these females while patrolling can mate with them. [ 51 ] Other bees of the Bombus species are found to emit pheromones as precopulatory signals, such as Bombus lapidarius . [ 52 ]
Pheromones of certain pest insect species, such as the Japanese beetle , acrobat ant , and the spongy moth , can be used to trap the respective insect for monitoring purposes, to control the population by creating confusion, to disrupt mating, and to prevent further egg laying.
Pheromones are used in the detection of oestrus in sows . Boar pheromones are sprayed into the sty , and those sows that exhibit sexual arousal are known to be currently available for breeding.
While humans are highly dependent upon visual cues, when in close proximity smells also play a role in sociosexual behaviors. An inherent difficulty in studying human pheromones is the need for cleanliness and odorlessness in human participants. [ 53 ] Though various researchers have investigated the possibility of their existence, no pheromonal substance has ever been demonstrated to directly influence human behavior in a peer reviewed study. [ 54 ] [ 55 ] [ 56 ] [ 57 ] Experiments have focused on three classes of possible human pheromones: axillary steroids, vaginal aliphatic acids, and stimulators of the vomeronasal organ , including a 2018 study claiming pheromones affect men's sexual cognition. [ 58 ]
Axillary steroids are produced by the testes , ovaries , apocrine glands, and adrenal glands . [ 59 ] These chemicals are not biologically active until puberty when sex steroids influence their activity. [ 60 ] The change in activity during puberty suggests that humans may communicate through odors. [ 59 ] Several axillary steroids have been described as possible human pheromones: androstadienol , androstadienone , androstenol , androstenone , and androsterone .
While it may be expected on evolutionary grounds that humans have pheromones, these molecules have yet to be rigorously proven to act as such. Research in this field has suffered from small sample sizes, publication bias , false positives, and poor methodology. [ 68 ]
A class of aliphatic acids (volatile fatty acids, a kind of carboxylic acid ) comprising six types was found in the vaginal fluids of female rhesus monkeys. [ 69 ] The combination of these acids is referred to as "copulins". One of the acids, acetic acid, was found in all of the sampled females' vaginal fluid. [ 69 ] Even in humans, one-third of women have all six types of copulins, which increase in quantity before ovulation. [ 69 ] Copulins are used to signal ovulation; however, as human ovulation is concealed , it is thought that they may be used for reasons other than sexual communication. [ 59 ]
The human vomeronasal organ has epithelia that may be able to serve as a chemical sensory organ; however, the genes that encode the VNO receptors are nonfunctional pseudogenes in humans. [ 53 ] Also, while there are sensory neurons in the human VNO there seem to be no connections between the VNO and the central nervous system. The associated olfactory bulb is present in the fetus, but regresses and vanishes in the adult brain. There have been some reports that the human VNO does function, but only responds to hormones in a "sex-specific manner". There also have been pheromone receptor genes found in olfactory mucosa. [ 53 ] There have been no experiments that compare people lacking the VNO with people that have it. It is disputed whether the chemicals are reaching the brain through the VNO or other tissues. [ 59 ]
In 2006, it was shown that a second mouse receptor sub-class is found in the olfactory epithelium . Called the trace amine-associated receptors (TAAR), some are activated by volatile amines found in mouse urine, including one putative mouse pheromone. [ 70 ] Orthologous receptors exist in humans, providing, the authors propose, evidence for a mechanism of human pheromone detection. [ 71 ]
Although there are disputes about the mechanisms by which pheromones function, there is evidence that pheromones do affect humans. [ 72 ] Despite this evidence, it has not been conclusively shown that humans have functional pheromones. Those experiments suggesting that certain pheromones have a positive effect on humans are countered by others indicating they have no effect whatsoever. [ 59 ]
A possible theory being studied now is that these axillary odors are being used to provide information about the immune system. Milinski and colleagues found that the artificial odors that people chose are determined in part by their major histocompatibility complexes (MHC) combination. [ 73 ] Information about an individual's immune system could be used as a way of "sexual selection" so that the female could obtain good genes for her offspring. [ 53 ] Claus Wedekind and colleagues found that both men and women prefer the axillary odors of people whose MHC is different from their own. [ 74 ]
Some body spray advertisers claim that their products contain human sexual pheromones that act as an aphrodisiac . Despite these claims, no pheromonal substance has ever been demonstrated to directly influence human behavior in a peer reviewed study. [ 59 ] [ 56 ] [ disputed – discuss ] Thus, the role of pheromones in human behavior remains speculative and controversial. [ 75 ] | https://en.wikipedia.org/wiki/Pheromone |
A pheromone trap is a type of insect trap that uses pheromones to lure insects . Sex pheromones and aggregating pheromones are the most common types used. A pheromone-impregnated lure is encased in a conventional trap such as a bottle trap , delta trap, water-pan trap , or funnel trap. Pheromone traps are used both to count insect populations by sampling, and to trap pests such as clothes moths to destroy them.
Pheromone traps are very sensitive, meaning they attract insects present at very low densities. They are often used to detect presence of exotic pests , or for sampling, monitoring, or to determine the first appearance of a pest in an area. They can be used for legal control, and are used to monitor the success of the Boll Weevil Eradication Program and the spread of the spongy moth . The high species-specificity of pheromone traps can also be an advantage, and they tend to be inexpensive and easy to implement. This sensitivity is especially suited to some investigations of invasive species : Flying males are easily blown off course by winds. Rather than introducing noise , Frank et al. 2013 find this can actually help detect isolated nests or populations and determine the length of time necessary between introduction and establishment . (Although any trap can answer the same questions, high sensitivity such as provided by pheromone traps does so more accurately.) [ 1 ]
However, it is impractical in most cases to completely remove or "trap out" pests using a pheromone trap. Some pheromone-based pest control methods have been successful, usually those designed to protect enclosed areas such as households or storage facilities. There has also been some success in mating disruption . In one form of mating disruption, males are attracted to a powder containing female attractant pheromones. The pheromones stick to the males' bodies, and when they fly off, the pheromones make them attractive to other males. It is hoped that if enough males chase other males instead of females, egg-laying will be severely impeded. [ 2 ]
Some difficulties surrounding pheromone traps include sensitivity to bad weather, their ability to attract pests from neighboring areas, and that they generally only attract adults, although it is the juveniles in many species that are pests. [ 3 ] They are also generally limited to one sex.
Though certainly not all insect pheromones have been discovered, many are known and many more are discovered every year. Some sites curate large lists of insect pheromones. [ 4 ] Pheromones are frequently used to monitor and control lepidopteran and coleopteran species, with many available commercially. [ 5 ] Pheromones are available for insects including: | https://en.wikipedia.org/wiki/Pheromone_trap |
A φ Josephson junction (pronounced phi Josephson junction ) is a particular type of the Josephson junction , which has a non-zero Josephson phase φ across it in the ground state. A π Josephson junction , which has the minimum energy corresponding to the phase of π, is a specific example of it.
The Josephson energy U {\displaystyle U} depends on the superconducting phase difference (Josephson phase) ϕ {\displaystyle \phi } periodically, with the period 2 π {\displaystyle 2\pi } . Therefore, let us focus only on one period, e.g. − π < ϕ ≤ + π {\displaystyle -\pi <\phi \leq +\pi } . In the ordinary Josephson junction the dependence U ( ϕ ) {\displaystyle U(\phi )} has the minimum at ϕ = 0 {\displaystyle \phi =0} . The function {\displaystyle U(\phi )={\frac {\Phi _{0}I_{c}}{2\pi }}[1-\cos(\phi )]} ,
where I c is the critical current of the junction, and Φ 0 {\displaystyle \Phi _{0}} is the flux quantum , is a good example of conventional U ( ϕ ) {\displaystyle U(\phi )} .
Instead, when the Josephson energy U ( ϕ ) {\displaystyle U(\phi )} has a minimum (or more than one minimum per period) at ϕ ≠ 0 {\displaystyle \phi \neq 0} , this minimum (or these minima) corresponds to the lowest energy state(s) (ground states) of the junction, and one speaks about a "φ Josephson junction". Consider two examples.
First, consider the junction with the Josephson energy U ( ϕ ) {\displaystyle U(\phi )} having two minima at ϕ = ± φ {\displaystyle \phi =\pm \varphi } within each period, where φ {\displaystyle \varphi } (such that 0 < φ < π {\displaystyle 0<\varphi <\pi } ) is some number. For example, this is the case for
U ( ϕ ) = Φ 0 2 π { I c 1 [ 1 − cos ( ϕ ) ] + 1 2 I c 2 [ 1 − cos ( 2 ϕ ) ] } {\displaystyle U(\phi )={\frac {\Phi _{0}}{2\pi }}\left\{I_{c1}[1-\cos(\phi )]+{\frac {1}{2}}I_{c2}[1-\cos(2\phi )]\right\}} ,
which corresponds to the current-phase relation
I s ( ϕ ) = I c 1 sin ( ϕ ) + I c 2 sin ( 2 ϕ ) {\displaystyle I_{s}(\phi )=I_{c1}\sin(\phi )+I_{c2}\sin(2\phi )} .
If I c1 > 0 and I c2 < − I c1 / 2 < 0 , the minima of the Josephson energy occur at ϕ = ± φ {\displaystyle \phi =\pm \varphi } , where φ = arccos ( − I c1 / 2 I c2 ) {\displaystyle \varphi =\arccos \left(-{\frac {I_{c1}}{2I_{c2}}}\right)} . Note that the ground state of such a Josephson junction is doubly degenerate because U ( − φ ) = U ( + φ ) {\displaystyle U(-\varphi )=U(+\varphi )} .
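The location of these degenerate minima can be checked numerically. The following is a minimal sketch (assuming NumPy is available); the parameter values I c1 = 1, I c2 = −1 and the choice Φ 0 = 1 are illustrative assumptions, not values taken from the text.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the text): I_c1 > 0 and
# I_c2 < -I_c1/2, the regime in which two minima per period appear.
I_c1, I_c2 = 1.0, -1.0
Phi_0 = 1.0  # flux quantum set to 1 for convenience

def U(phi):
    """Josephson energy of the two-harmonic junction from the first example."""
    return (Phi_0 / (2 * np.pi)) * (
        I_c1 * (1 - np.cos(phi)) + 0.5 * I_c2 * (1 - np.cos(2 * phi))
    )

# Locate the minimum numerically on one period (-pi, pi]
phi = np.linspace(-np.pi, np.pi, 200001)
phi_min = phi[np.argmin(U(phi))]

# Analytical prediction: minima at +/- arccos(-I_c1 / (2 I_c2))
phi_pred = np.arccos(-I_c1 / (2 * I_c2))
print(abs(phi_min), phi_pred)  # both close to pi/3 for these parameters
```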
Another example is the junction with the Josephson energy similar to conventional one, but shifted along ϕ {\displaystyle \phi } -axis, for example U ( ϕ ) = Φ 0 I c 2 π [ 1 − cos ( ϕ − φ 0 ) ] {\displaystyle U(\phi )={\frac {\Phi _{0}I_{c}}{2\pi }}[1-\cos(\phi -\varphi _{0})]} ,
and the corresponding current-phase relation
I s ( ϕ ) = I c sin ( ϕ − φ 0 ) {\displaystyle I_{s}(\phi )=I_{c}\sin(\phi -\varphi _{0})} .
In this case the ground state is ϕ = φ 0 {\displaystyle \phi =\varphi _{0}} and it is not degenerate.
The above two examples show that the Josephson energy profile in a φ Josephson junction can be rather different, resulting in different physical properties. Often, to distinguish which particular type of current-phase relation is meant, researchers use different names. At the moment there is no well-accepted terminology. However, some researchers use the terminology after A. Buzdin: [ 1 ] Josephson junctions with a doubly degenerate ground state ϕ = ± φ {\displaystyle \phi =\pm \varphi } , similar to the first example above, are indeed called φ Josephson junctions, while junctions with a non-degenerate ground state, similar to the second example above, are called φ 0 {\displaystyle \varphi _{0}} Josephson junctions.
The first indications of φ junction behavior (degenerate ground states [ 2 ] or unconventional temperature dependence of its critical current [ 3 ] ) were reported in the beginning of the 21st century. These junctions were made of d-wave superconductors.
The first experimental realization of controllable φ junction was reported in September 2012 by the group of Edward Goldobin at University of Tübingen . [ 4 ] It is based on a combination of 0 and π segments in one superconducting-insulator-ferromagnetic-superconductor hybrid device and clearly demonstrates two critical currents corresponding to two junction states ϕ = ± φ {\displaystyle \phi =\pm \varphi } . The proposal to construct a φ Josephson junction out of (infinitely) many 0 and π segments has appeared in the works by R. Mints and coauthors, [ 5 ] [ 6 ] although at that time there was no term φ junction. For the first time the word φ Josephson junction appeared in the work of Buzdin and Koshelev, [ 1 ] whose idea was similar. Following this idea, it was further proposed to use a combination of only two 0 and π segments. [ 7 ]
In 2016, a φ 0 {\displaystyle \varphi _{0}} junction based on the nanowire quantum dot was reported by the group of Leo Kouwenhoven at Delft University of Technology . The InSb nanowire has strong spin-orbit coupling , and magnetic field was applied leading to Zeeman effect . This combination breaks both inversion and time-reversal symmetries creating finite current at zero phase difference. [ 8 ]
Other theoretically proposed realizations include geometric φ junctions. There is a theoretical prediction that one can construct the so-called geometric φ junction based on a nano-structured d-wave superconductor. [ 9 ] As of 2013, this had not been demonstrated experimentally. | https://en.wikipedia.org/wiki/Phi_Josephson_junction
Phi Sigma Rho ( ΦΣΡ ; also known as Phi Rho or PSR ) is a social sorority for individuals who identify as female or non-binary in engineering and technology. The sorority was founded in 1984 at Purdue University . It has since expanded to more than 40 colleges across the United States.
Phi Sigma Rho was founded on September 24, 1984, at Purdue University . [ 1 ] [ 2 ] Its founders were Rashmi Khanna and Abby McDonald who were unable to participate in traditional sorority rush due to the demands of the sororities and their engineering program; they decided to start a new sorority that would take their academic program's demands into consideration. [ 2 ]
The Alpha chapter at Purdue University was founded with ten charter members: Gail Bonney, Anita Chatterjea, Ann Cullinan, Pam Kabbes, Rashmi Khanna, Abby McDonald, Christine Mooney, Tina Kershner, Michelle Self, and Kathy Vargo. [ 3 ]
Phi Sigma Rho is a social sorority that accepts students pursuing degrees in engineering and technology who identify as female or who identify as non-binary. [ 3 ] The sorority made the decision to include non-binary students in all chapters in the summer of 2021. [ 3 ] [ 4 ]
The sorority's headquarters is based in Seattle, Washington. [ 5 ]
Phi Sigma Rho's core values or pillars are Friendship, Scholarship, and Encouragement. [ 6 ] Its motto is "Together we build the future." [ 2 ] [ 6 ]
The colors of Phi Sigma Rho are wine red and silver. [ 2 ] The sorority's flower is the orchid. [ 6 ] [ 2 ] Its jewel is the pearl. [ 2 ] [ 6 ] Its mascot is Sigmand the penguin. [ 2 ] [ 6 ] Its online magazine is The Key . [ 7 ]
Phi Sigma Rho's national philanthropy is the Leukemia & Lymphoma Society . [ 8 ]
The Phi Sigma Rho Foundation was established as a separate nonprofit organization in 2005. [ 8 ] It supports the educational and philanthropic efforts of the sorority's members and offers merit-based scholarships to sorority members. [ 8 ] [ 9 ]
As of 2025, Phi Sigma Rho has chartered 53 chapters in the United States, with 48 being active. [ 1 ] | https://en.wikipedia.org/wiki/Phi_Sigma_Rho
In chemistry , phi bonds ( φ bonds ) are usually covalent chemical bonds , where six lobes of one involved atomic orbital overlap six lobes of the other involved atomic orbital. This overlap leads to the formation of a bonding molecular orbital with three nodal planes which contain the internuclear axis and go through both atoms.
The Greek letter φ in their name refers to f orbitals , since the orbital symmetry of the φ bond is the same as that of the usual (6-lobed) type of f orbital when seen down the bond axis.
There was one possible candidate known in 2005 of a molecule with phi bonding (a U−U bond, in the molecule U 2 ). [ 1 ] However, later studies that accounted for spin orbit interactions found that the bonding was only of fourth order . [ 2 ] [ 3 ] [ 4 ] Experimental evidence for phi bonding between a thorium atom and cyclooctatetraene in thorocene has been supported by computational analysis, though this mixed-orbital bond has strong ionic character and is not a traditional phi bond. [ 5 ] | https://en.wikipedia.org/wiki/Phi_bond |
In statistics , the phi coefficient (or mean square contingency coefficient and denoted by φ or r φ ) is a measure of association for two binary variables .
In machine learning , it is known as the Matthews correlation coefficient (MCC) and used as a measure of the quality of binary (two-class) classifications , introduced by biochemist Brian W. Matthews in 1975. [ 1 ]
Introduced by Karl Pearson , [ 2 ] and also known as the Yule phi coefficient from its introduction by Udny Yule in 1912, [ 3 ] this measure is similar to the Pearson correlation coefficient in its interpretation.
In meteorology , the phi coefficient, [ 4 ] or its square (the latter aligning with M. H. Doolittle's original proposition from 1885 [ 5 ] ), is referred to as the Doolittle Skill Score or the Doolittle Measure of Association.
A Pearson correlation coefficient estimated for two binary variables will return the phi coefficient. [ 6 ]
Two binary variables are considered positively associated if most of the data falls along the diagonal cells. In contrast, two binary variables are considered negatively associated if most of the data falls off the diagonal.
If we have a 2×2 table for two random variables x and y
where n 11 , n 10 , n 01 , n 00 , are non-negative counts of numbers of observations that sum to n , the total number of observations, and n 1 ∙ {\displaystyle n_{1\bullet }} , n 0 ∙ {\displaystyle n_{0\bullet }} , n ∙ 1 {\displaystyle n_{\bullet 1}} , n ∙ 0 {\displaystyle n_{\bullet 0}} denote the corresponding row and column totals. The phi coefficient that describes the association of x and y is {\displaystyle \phi ={\frac {n_{11}n_{00}-n_{10}n_{01}}{\sqrt {n_{1\bullet }n_{0\bullet }n_{\bullet 1}n_{\bullet 0}}}}} .
Phi is related to the point-biserial correlation coefficient and Cohen's d and estimates the extent of the relationship between two variables (2×2). [ 7 ]
The phi coefficient can also be expressed using only n {\displaystyle n} , n 11 {\displaystyle n_{11}} , n 1 ∙ {\displaystyle n_{1\bullet }} , and n ∙ 1 {\displaystyle n_{\bullet 1}} , as {\displaystyle \phi ={\frac {n\,n_{11}-n_{1\bullet }n_{\bullet 1}}{\sqrt {n_{1\bullet }n_{\bullet 1}(n-n_{1\bullet })(n-n_{\bullet 1})}}}} .
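A minimal sketch in Python (assuming NumPy; the cell counts are made-up illustrative values, not data from the article) computes the phi coefficient from both forms and checks them against the Pearson correlation of the underlying binary vectors, as stated earlier:

```python
import numpy as np

# Hypothetical 2x2 table of counts (illustrative values, not from the text):
# n11 = #(x=1, y=1), n10 = #(x=1, y=0), n01 = #(x=0, y=1), n00 = #(x=0, y=0)
n11, n10, n01, n00 = 20, 10, 5, 15
n = n11 + n10 + n01 + n00

# Marginal totals
n1_, n0_ = n11 + n10, n01 + n00   # totals for x = 1 and x = 0
n_1, n_0 = n11 + n01, n10 + n00   # totals for y = 1 and y = 0

# Cell-count form
phi_cells = (n11 * n00 - n10 * n01) / np.sqrt(n1_ * n0_ * n_1 * n_0)

# Form using only n, n11 and the two marginals
phi_marg = (n * n11 - n1_ * n_1) / np.sqrt(n1_ * n_1 * (n - n1_) * (n - n_1))

# Pearson correlation of the expanded binary vectors gives the same value
x = np.array([1] * n11 + [1] * n10 + [0] * n01 + [0] * n00)
y = np.array([1] * n11 + [0] * n10 + [1] * n01 + [0] * n00)
pearson = np.corrcoef(x, y)[0, 1]

print(phi_cells, phi_marg, pearson)  # all three ~0.408 for these counts
```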
Although computationally the Pearson correlation coefficient reduces to the phi coefficient in the 2×2 case, they are not in general the same. The Pearson correlation coefficient ranges from −1 to +1, where ±1 indicates perfect agreement or disagreement, and 0 indicates no relationship. The phi coefficient has a maximum value that is determined by the distribution of the two variables if one or both variables can take on more than two values. [ further explanation needed ] See Davenport and El-Sanhury (1991) [ 8 ] for a thorough discussion.
The MCC is defined identically to phi coefficient, introduced by Karl Pearson , [ 2 ] [ 9 ] also known as the Yule phi coefficient from its introduction by Udny Yule in 1912. [ 3 ] Despite these antecedents which predate Matthews's use by several decades, the term MCC is widely used in the field of bioinformatics and machine learning.
The coefficient takes into account true and false positives and negatives and is generally regarded as a balanced measure which can be used even if the classes are of very different sizes. [ 10 ] The MCC is in essence a correlation coefficient between the observed and predicted binary classifications; it returns a value between −1 and +1. A coefficient of +1 represents a perfect prediction, 0 no better than random prediction and −1 indicates total disagreement between prediction and observation. However, if MCC equals neither −1, 0, or +1, it is not a reliable indicator of how similar a predictor is to random guessing because MCC is dependent on the dataset. [ 11 ] MCC is closely related to the chi-square statistic for a 2×2 contingency table: {\displaystyle |{\text{MCC}}|={\sqrt {\frac {\chi ^{2}}{n}}}}
where n is the total number of observations.
While there is no perfect way of describing the confusion matrix of true and false positives and negatives by a single number, the Matthews correlation coefficient is generally regarded as being one of the best such measures. [ 12 ] Other measures, such as the proportion of correct predictions (also termed accuracy ), are not useful when the two classes are of very different sizes. For example, assigning every object to the larger set achieves a high proportion of correct predictions, but is not generally a useful classification.
The MCC can be calculated directly from the confusion matrix using the formula: {\displaystyle {\text{MCC}}={\frac {TP\times TN-FP\times FN}{\sqrt {(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}}}
In this equation, TP is the number of true positives , TN the number of true negatives , FP the number of false positives and FN the number of false negatives . If exactly one of the four sums in the denominator is zero, the denominator can be arbitrarily set to one; this results in a Matthews correlation coefficient of zero, which can be shown to be the correct limiting value. In case two or more sums are zero (e.g. both labels and model predictions are all positive or negative), the limit does not exist.
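A minimal sketch of this calculation in Python (the helper name mcc and the example counts are illustrative, not from the source), including the zero-denominator convention just described:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts.

    Follows the convention described above: if exactly one of the four sums
    in the denominator is zero, the denominator is set to 1, giving an MCC
    of 0 (the correct limiting value). With two or more zero sums the limit
    does not exist, so an error is raised.
    """
    num = tp * tn - fp * fn
    sums = [tp + fp, tp + fn, tn + fp, tn + fn]
    zeros = sums.count(0)
    if zeros >= 2:
        raise ValueError("MCC is undefined: two or more marginal sums are zero")
    den = math.sqrt(math.prod(sums)) if zeros == 0 else 1.0
    return num / den

print(mcc(tp=6, tn=3, fp=1, fn=2))  # ~0.478, the cat/dog example below
print(mcc(tp=5, tn=0, fp=0, fn=7))  # one zero sum (TN+FP=0): returns 0.0 by the convention
```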
The MCC can be calculated with the formula: {\displaystyle {\text{MCC}}={\sqrt {PPV\times TPR\times TNR\times NPV}}-{\sqrt {FDR\times FNR\times FPR\times FOR}}}
using the positive predictive value, the true positive rate, the true negative rate, the negative predictive value, the false discovery rate, the false negative rate, the false positive rate, and the false omission rate.
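As a numerical check of this identity, the following sketch (plain Python; the rate definitions such as PPV = TP/(TP+FP) are the standard ones and are spelled out as assumptions in the comments) reproduces the same value as the count-based formula, using the cat/dog counts from the worked example later in the article:

```python
import math

# Cat/dog counts from the worked example below; the rate-based identity
# should reproduce the same MCC (~0.478) as the count-based formula.
TP, TN, FP, FN = 6, 3, 1, 2

PPV = TP / (TP + FP)   # positive predictive value (precision)
TPR = TP / (TP + FN)   # true positive rate (recall)
TNR = TN / (TN + FP)   # true negative rate (specificity)
NPV = TN / (TN + FN)   # negative predictive value
FDR, FNR, FPR, FOR = 1 - PPV, 1 - TPR, 1 - TNR, 1 - NPV

mcc_rates = math.sqrt(PPV * TPR * TNR * NPV) - math.sqrt(FDR * FNR * FPR * FOR)
mcc_counts = (TP * TN - FP * FN) / math.sqrt(
    (TP + FP) * (TP + FN) * (TN + FP) * (TN + FN)
)
print(mcc_rates, mcc_counts)  # both ~0.478
```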
The original formula as given by Matthews was: [ 1 ] {\displaystyle N=TN+TP+FN+FP} , {\displaystyle S={\frac {TP+FN}{N}}} , {\displaystyle P={\frac {TP+FP}{N}}} , {\displaystyle {\text{MCC}}={\frac {TP/N-S\times P}{\sqrt {PS(1-S)(1-P)}}}}
This is equal to the formula given above. As a correlation coefficient , the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual . The component regression coefficients of the Matthews correlation coefficient are Markedness (Δp) and Youden's J statistic ( Informedness or Δp'). [ 12 ] [ 13 ] Markedness and Informedness correspond to different directions of information flow and generalize Youden's J statistic , the δ {\displaystyle \delta } p statistics, while their geometric mean generalizes the Matthews Correlation Coefficient to more than two classes. [ 12 ]
Some scientists claim the Matthews correlation coefficient to be the most informative single score to establish the quality of a binary classifier prediction in a confusion matrix context. [ 14 ] [ 15 ]
Given a sample of 12 pictures, 8 of cats and 4 of dogs, where cats belong to class 1 and dogs belong to class 0,
assume that a classifier that distinguishes between cats and dogs is trained, and we take the 12 pictures and run them through the classifier, and the classifier makes 9 accurate predictions and misses 3: 2 cats wrongly predicted as dogs (first 2 predictions) and 1 dog wrongly predicted as a cat (last prediction).
With these two labelled sets (actual and predictions) we can create a confusion matrix that will summarize the results of testing the classifier:
In this confusion matrix, of the 8 cat pictures, the system judged that 2 were dogs, and of the 4 dog pictures, it predicted that 1 was a cat. All correct predictions are located in the diagonal of the table (highlighted in bold), so it is easy to visually inspect the table for prediction errors, as they will be represented by values outside the diagonal.
In abstract terms, the confusion matrix is as follows:
where P = Positive; N = Negative; TP = True Positive; FP = False Positive; TN = True Negative; FN = False Negative.
Plugging the numbers from the formula: {\displaystyle {\text{MCC}}={\frac {6\times 3-1\times 2}{\sqrt {(6+1)(6+2)(3+1)(3+2)}}}={\frac {16}{\sqrt {1120}}}\approx 0.478}
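As a cross-check, a short sketch (assuming scikit-learn is installed) reproduces this value directly from label vectors; the particular ordering of the labels below is an assumption chosen to match the description of the example (first two predictions wrong for cats, last prediction wrong for a dog):

```python
from sklearn.metrics import confusion_matrix, matthews_corrcoef

# 1 = cat, 0 = dog; ordering chosen to match the example's description
y_true = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1]

print(confusion_matrix(y_true, y_pred, labels=[1, 0]))
# [[6 2]
#  [1 3]]
print(matthews_corrcoef(y_true, y_pred))  # ~0.478
```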
Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix , as follows:
The Matthews correlation coefficient has been generalized to the multiclass case. The generalization called the R K {\displaystyle R_{K}} statistic (for K different classes) was defined in terms of a K × K {\displaystyle K\times K} confusion matrix C {\displaystyle C} [ 24 ] . [ 25 ]
When there are more than two labels the MCC will no longer range between −1 and +1. Instead the minimum value will be between −1 and 0 depending on the true distribution. The maximum value is always +1.
This formula can be more easily understood by defining intermediate variables: [ 26 ] t k {\displaystyle t_{k}} , the number of times class k truly occurred; p k {\displaystyle p_{k}} , the number of times class k was predicted; c {\displaystyle c} , the total number of samples correctly predicted; and s {\displaystyle s} , the total number of samples. The coefficient can then be written as {\displaystyle {\text{MCC}}={\frac {c\,s-\sum _{k}p_{k}t_{k}}{{\sqrt {s^{2}-\sum _{k}p_{k}^{2}}}\,{\sqrt {s^{2}-\sum _{k}t_{k}^{2}}}}}} .
Using the above formula to compute the MCC for the dog and cat example discussed above, with the confusion matrix treated as a multiclass problem with two classes: the intermediate variables are t = ( 8 , 4 ) {\displaystyle t=(8,4)} , p = ( 7 , 5 ) {\displaystyle p=(7,5)} , c = 9 {\displaystyle c=9} and s = 12 {\displaystyle s=12} , which give {\displaystyle {\text{MCC}}=(9\times 12-76)/{\sqrt {(144-74)(144-80)}}\approx 0.478} , the same value as the binary calculation (see the sketch below).
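A sketch of this computation (assuming NumPy; the variable names t, p, c and s follow the intermediate variables defined above):

```python
import numpy as np

# Confusion matrix for the cat/dog example, rows = true class, cols = predicted
# class, ordered [cat, dog]; treated here as a multiclass problem with K = 2.
C = np.array([[6, 2],
              [1, 3]])

t = C.sum(axis=1)   # t_k: how often class k truly occurred -> [8, 4]
p = C.sum(axis=0)   # p_k: how often class k was predicted  -> [7, 5]
c = np.trace(C)     # correctly predicted samples            -> 9
s = C.sum()         # total number of samples                -> 12

r_k = (c * s - (p * t).sum()) / np.sqrt(
    (s**2 - (p**2).sum()) * (s**2 - (t**2).sum())
)
print(r_k)  # ~0.478, matching the binary MCC computed earlier
```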
An alternative generalization of the Matthews Correlation Coefficient to more than two classes was given by Powers [ 12 ] by the definition of Correlation as the geometric mean of Informedness and Markedness .
Several generalizations of the Matthews Correlation Coefficient to more than two classes along with new Multivariate Correlation Metrics for multinary classification have been presented by P Stoica and P Babu. [ 27 ]
As explained by Davide Chicco in his paper "Ten quick tips for machine learning in computational biology " [ 14 ] ( BioData Mining , 2017) and "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation" [ 28 ] ( BMC Genomics , 2020), the Matthews correlation coefficient is more informative than F1 score and accuracy in evaluating binary classification problems, because it takes into account the balance ratios of the four confusion matrix categories (true positives, true negatives, false positives, false negatives). [ 14 ] [ 28 ]
The former article explains, for Tip 8 : [ excessive quote ]
In order to have an overall understanding of your prediction, you decide to take advantage of common statistical scores, such as accuracy, and F1 score.
{\displaystyle {\text{Accuracy}}={\frac {TP+TN}{TP+TN+FP+FN}}} (Equation 1, accuracy: worst value = 0; best value = 1)
{\displaystyle F_{1}{\text{ score}}={\frac {2\,TP}{2\,TP+FP+FN}}} (Equation 2, F1 score: worst value = 0; best value = 1)
However, even if accuracy and F1 score are widely employed in statistics, both can be misleading, since they do not fully consider the size of the four classes of the confusion matrix in their final score computation.
Suppose, for example, you have a very imbalanced validation set made of 100 elements, 95 of which are positive elements, and only 5 are negative elements (as explained in Tip 5). And suppose also you made some mistakes in designing and training your machine learning classifier, and now you have an algorithm which always predicts positive. Imagine that you are not aware of this issue.
By applying your only-positive predictor to your imbalanced validation set, therefore, you obtain values for the confusion matrix categories: TP = 95, FP = 5, TN = 0, FN = 0.
These values lead to the following performance scores: accuracy = 95%, and F1 score = 97.44%. By reading these over-optimistic scores, then you will be very happy and will think that your machine learning algorithm is doing an excellent job. Obviously, you would be on the wrong track.
On the contrary, to avoid these dangerous misleading illusions, there is another performance score that you can exploit: the Matthews correlation coefficient [40] (MCC).
{\displaystyle {\text{MCC}}={\frac {TP\times TN-FP\times FN}{\sqrt {(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}}} (Equation 3, MCC: worst value = −1; best value = +1).
By considering the proportion of each class of the confusion matrix in its formula, its score is high only if your classifier is doing well on both the negative and the positive elements.
In the example above, the MCC score would be undefined (since TN and FN would be 0, therefore the denominator of Equation 3 would be 0). By checking this value, instead of accuracy and F1 score, you would then be able to notice that your classifier is going in the wrong direction, and you would become aware that there are issues you ought to solve before proceeding.
Consider this other example. You ran a classification on the same dataset which led to the following values for the confusion matrix categories: TP = 90, FP = 4, TN = 1, FN = 5.
In this example, the classifier has performed well in classifying positive instances, but was not able to correctly recognize negative data elements. Again, the resulting F1 score and accuracy scores would be extremely high: accuracy = 91%, and F1 score = 95.24%. Similarly to the previous case, if a researcher analyzed only these two score indicators, without considering the MCC, they would wrongly think the algorithm is performing quite well in its task, and would have the illusion of being successful.
On the other hand, checking the Matthews correlation coefficient would be pivotal once again. In this example, the value of the MCC would be 0.14 (Equation 3), indicating that the algorithm is performing similarly to random guessing. Acting as an alarm, the MCC would be able to inform the data mining practitioner that the statistical model is performing poorly.
For these reasons, we strongly encourage to evaluate each test performance through the Matthews correlation coefficient (MCC), instead of the accuracy and the F1 score, for any binary classification problem.
Chicco's passage might be read as endorsing the MCC score in cases with imbalanced data sets. This, however, is contested; in particular, Zhu (2020) offers a strong rebuttal. [ 29 ]
Note that the F1 score depends on which class is defined as the positive class. In the first example above, the F1 score is high because the majority class is defined as the positive class. Inverting the positive and negative classes results in the following confusion matrix: TP = 0, FP = 0, TN = 95, FN = 5.
This gives an F1 score = 0%.
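A small sketch of this asymmetry (assuming scikit-learn; the label vectors reproduce the 95-positive/5-negative examples from the quoted passage): swapping which class is treated as positive collapses the F1 score, while the MCC is unchanged when both vectors are relabelled.

```python
from sklearn.metrics import f1_score, matthews_corrcoef

# First quoted example: 95 positives, 5 negatives, classifier always predicts positive.
y_true = [1] * 95 + [0] * 5
y_pred = [1] * 100

print(f1_score(y_true, y_pred, pos_label=1))  # ~0.974: looks excellent
print(f1_score(y_true, y_pred, pos_label=0))  # 0.0 once the positive class is inverted
                                              # (scikit-learn warns: precision undefined)

# The MCC has no notion of a "positive" class: relabelling both vectors (1 <-> 0)
# leaves it unchanged. Second quoted example (TP=90, FP=4, TN=1, FN=5):
y_true2 = [1] * 95 + [0] * 5
y_pred2 = [1] * 90 + [0] * 5 + [1] * 4 + [0] * 1

print(matthews_corrcoef(y_true2, y_pred2))                                     # ~0.135
print(matthews_corrcoef([1 - v for v in y_true2], [1 - v for v in y_pred2]))   # same value
```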
The MCC doesn't depend on which class is the positive one, which has the advantage over the F1 score to avoid incorrectly defining the positive class. | https://en.wikipedia.org/wiki/Phi_coefficient |
Edwin Philip Pister (January 15, 1929 – January 17, 2023) was an American fishery biologist who worked for the California Department of Fish and Game. [ 1 ] He was a pioneer of desert fish conservation, and is credited with saving the Owens pupfish ( Cyprinodon radiosus ) by transferring the entire remaining population into several buckets and transporting them to a safe location. [ 2 ] [ 3 ]
Pister was born in Stockton, California and lived in Bishop, California . A volume compiling studies of desert fishes has been published in his honor. [ 4 ] He has written and published scientific and popular papers and has also written about environmental ethics . [ 5 ]
Pister helped found the non-profit Desert Fishes Council in 1969, serving as its first president, then as its Executive Secretary until his death. [ 6 ]
Audio interviews of him are available in the Bancroft Library at the University of California, Berkeley . [ 7 ]
Pister died in Bishop, California on January 17, 2023, two days after his 94th birthday. [ 8 ] | https://en.wikipedia.org/wiki/Phil_Pister |
Philip David Radford (born January 2, 1976) is an American consumer and environmental leader who is currently the President and CEO of Consumer Reports . [ 3 ] He previously served as Chief Strategy Officer of the Sierra Club , [ 4 ] [ 5 ] and as the executive director of Greenpeace USA. [ 6 ] Radford started his career working for the nonpartisan organizations Public Interest Research Group and Public Citizen, working for consumer protection, fair trade, and public health. [ 7 ] He was the founder and President of Progressive Power Lab, [ 8 ] an organization that incubates companies and non-profits that build capacity for progressive organizations, [ 9 ] including a donor advisory organization [ 10 ] Champion.us, [ 11 ] the Progressive Multiplier Fund [ 12 ] and Membership Drive. [ 13 ] Radford is a co-founder of the Democracy Initiative . He has a background in grassroots organizing, corporate social responsibility , [ 14 ] and clean energy . [ 15 ]
Radford began his civic engagement as a high school student at Oak Park and River Forest High School in Oak Park , a Chicago suburb, volunteering to stop the building of trash incinerators in the West Side of Chicago near his family's Oak Park home. [ 16 ]
His first job in community engagement was canvassing door to door for nonpartisan Illinois PIRG . While studying political science at Washington University in St. Louis , he directed campaign and canvass offices during summers for the Fund for Public Interest Research for clients including PIRGIM and Ohio PIRG . [ 7 ] Radford took time off of school to work for Public Citizen on global trade issues. After graduating college in 1998, Radford became a lead organizer at Green Corps , the field school for environmental organizing. [ 17 ]
Radford received his B.A. from Washington University in St. Louis in 1998. [ 1 ]
From 1999 to 2001 Radford was field director for Ozone Action, an organization dedicated to working on the atmospheric threats of global warming and ozone depletion. As field director, Radford planned and executed a number of grassroots campaigns, including a campaign during the 2000 presidential primaries, which was the initial impetus for Senator John McCain sponsoring the Climate Stewardship Act. [ 18 ] [ 19 ]
Radford also managed the grassroots mobilization for the Global Warming Divestiture Campaign, which resulted in Ford , General Motors , Texaco , and other companies ending their funding of the Global Climate Coalition , which spread misinformation about global warming. [ 20 ] According to The New York Times , the result of the campaign was "the latest sign of divisions within heavy industry over how to respond to global warming." [ 21 ]
In 2001, Radford founded Power Shift, [ 7 ] a non-governmental organization dedicated to driving clean energy market breakthroughs and building the grassroots base to stop global warming . [ 22 ]
As executive director of Power Shift, Radford worked closely with the cities of San Diego , Chula Vista, California , and Berkeley, California , as well as nine other municipalities, to secure investments for installation of solar energy systems and implementation of energy efficiency measures in municipal buildings. [ 7 ] Radford also helped to convince Citigroup to adopt innovative new means of financing clean energy infrastructure for wind and solar installations that made them affordable to average Americans. [ 6 ] [ 23 ]
In 2009, at the age of 33, Radford was selected as the youngest ever executive director of Greenpeace . [ 24 ] [ 25 ] Radford's tenure at Greenpeace USA is best known for convincing over 100 corporations to change their environmental practices; [ 26 ] exposing the anti-environmental influence of the Koch Brothers , making them a household name; [ 27 ] increasing the organization's net income by 80%; [ 28 ] launching the organization's grassroots organizing and significantly growing the canvass programs; [ 29 ] and serving as a founder of the Democracy Initiative , [ 2 ] a national coalition of major unions, environmental groups, civil rights and government reform organizations working for universal voter registration, to get money out of politics, and to reform Senate rules. In September 2013, Radford announced that he would step down on April 30, 2014, once he had completed five years of service as executive director. [ 6 ]
New York Times reporter Andrew Revkin referred to a Greenpeace campaign during Radford's tenure as "Activism at Its Best." [ 30 ] [ 31 ]
Ben Jealous , former president and chief executive officer of the NAACP as well as co-founder of the Democracy Initiative with Radford, described Radford at the helm of Greenpeace as "a modern movement building giant. He has built powerful diverse coalitions to bolster the fights for the environment and voting rights. In the process he has shown himself to be unmatched in mobilizing everyday people to fund their movements directly." Environmental leader Bill McKibben stated: "During Radford's tenure, Greenpeace has been helping the whole environmental movement shift back towards its roots: local, connected, tough." [ 28 ]
Before becoming executive director of Greenpeace USA, Radford served as the director of the organization's Grassroots Program. [ 32 ] In that capacity, he directed and significantly grew the organization's street canvass and launched and directed the door-to-door canvasses, online-to-offline organizing team, social media team, the Greenpeace Student Network, and the Greenpeace Semester. [ 33 ] Under Radford, the street and door-to-door canvassing programs grew to include nearly 400 canvassers in almost 20 cities across the country and was responsible for doubling the organization's budget. [ 33 ]
After leaving Greenpeace, Radford launched Progressive Power Lab, which starts and manages organizations that work to move millions of dollars and people into progressive causes. Through Progressive Power Lab, Radford launched the Progressive Multiplier Fund, [ 34 ] Membership Drive, a Salesforce App developer [ 35 ] which built Apps including The Field, [ 36 ] and Champion.us, a donor advisor firm for small donors focused on democracy and climate change. [ 37 ]
During Radford's tenure at Greenpeace, his theory of change shifted from viewing governments as arbitrators between public and private interests on environmental issues, to finding that most governments are captured by industry. Rather than fighting first for new laws, which could be blocked by industries, he has focused on pressuring large companies to change their practices and enlisted them as allies in pushing for strong environmental protections. [ 38 ] [ 28 ] [ 39 ] Examples include Greenpeace campaigns that convinced Apple Inc. and other tech companies to shift to 100% clean energy and lobby utilities and regulators to make that possible, as well as work to protect both the Indonesian rainforest and the Bering Sea Canyons . [ 40 ] [ 41 ] Radford argues that the combination of creating industry champions and "outside pressure" focused on the government are the keys to passing new laws to protect the environment. [ 38 ] However, Radford has also been a vocal leader calling for the United States to pass campaign finance reform and respect all Americans' voting rights to shift power in politics from corporations towards people and fulfill "the promise of American democracy." [ 42 ] [ 43 ] Radford played a major role in several initiatives to influence corporations such as the Global Climate Coalition, Citigroup, Kimberley-Clark, Asia Pulp and Paper, and the tech industry.
Radford managed the grassroots efforts of a national divestment / disinvestment campaign, [ 44 ] which forced Ford , General Motors , Texaco , and other companies to stop funding the Global Climate Coalition , which spread misinformation about global warming. [ 20 ] Soon thereafter, the GCC ended operations. [ 45 ]
In 2001, while running Power Shift, Radford launched a campaign to push Citibank to offer and promote Energy Efficient Mortgages (EEMs). [ 46 ] Citi was "missing the opportunity to help stop global warming by phasing out fossil fuel investments and promoting clean energy now," Radford said. "The irony is that if Citi financed solar for people's homes, solar energy could be made immediately affordable for millions of Americans today." [ 47 ] In 2004, Citigroup agreed to offer and promote EEMs for residential wind, energy efficiency, and solar installations that would make clean energy affordable for millions of Americans. [ 48 ]
Radford oversaw the grassroots mobilization efforts on the Kleercut Campaign in the United States and, later, the entire U.S. component of the global campaign when he became Greenpeace's executive director, [ 32 ] targeting Kimberly-Clark for sourcing 22% of its paper pulp from Canadian boreal forests containing 200-year-old trees. The campaign included intervening in Kleenex commercial shoots, [ 49 ] convincing twenty-two universities and colleges to take action such as cancelling contracts, [ 50 ] [ 51 ] recruiting 500 companies to boycott Kimberly-Clark, over 1,000 protests of the company, and more. [ 51 ] [ 52 ] On August 5, 2009, Kimberly-Clark announced that it would source 40% of its paper fiber from recycled content or other sustainable sources – a 71% increase from 2007 levels. [ 53 ] The demand created by Kimberly-Clark for sustainably logged fiber was greater than the supply, enabling the company to convince logging companies to change their practices. [ 54 ]
From 2010 to 2013, Radford managed the Greenpeace team that persuaded major U.S. companies to cancel their contracts with Asia Pulp and Paper (APP) – the world's third largest paper company [ 55 ] – to push APP to stop destroying ancient forests. [ 56 ] Greenpeace and its allies succeeded in convincing more than 100 corporate customers of APP to sever their ties with the company, [ 26 ] including Mattel , [ 57 ] Hasbro , [ 58 ] Lego , Kmart , [ 59 ] IGA , Kroger , Food Lion , National Geographic , and Xerox . [ 60 ] The campaign against APP cut nearly 80% of APP's U.S. market. On February 5, 2013, Asia Pulp and Paper announced a deforestation policy protecting Indonesian rainforests. [ 61 ] Referring to the victory, New York Times reporter Andrew Revkin heralded the campaign with a piece titled: "Activism at Its Best: Greenpeace's Push to Stop the Pulping of Rainforests". [ 30 ]
On April 21, 2011, Greenpeace released a report highlighting data centers, which consumed up to 2% of all global electricity and this amount was projected to increase. Radford stated "we are concerned that this new explosion in electricity use could lock us into old, polluting energy sources instead of the clean energy available today." [ 62 ] Business Insider reported that after Greenpeace USA campaigns, "tech giants like Apple, Google, Facebook, and Salesforce have promised to power their data centers with renewable energy, a pledge that led Duke Energy, the nation's largest power utility and one of the most flagrant emitters of CO2, to begin providing clean energy to win their business." [ 39 ]
In 2014, deforestation in Indonesia, which accounts for 0.1% of the world's surface, caused 4% of global warming pollution. One of the major drivers of deforestation was clearing the forest to grow palm oil plantations. [ 39 ] Under Radford, the Greenpeace USA team persuaded Procter & Gamble , Colgate Palmolive , Mondelez , and other major companies to demand sustainably grown palm oil. [ 63 ] [ 64 ]
Under Radford, Greenpeace ran a campaign targeting supermarket chains to convince them to stop selling threatened fish, adopt sustainable seafood policies, and lobby for policies such as marine reserves to protect the oceans. Whole Foods , Safeway Inc. , Wegmans , Target , Harris Teeter , Meijer , and Kroger implemented sustainable seafood purchasing policies; [ 40 ] [ 65 ] [ 66 ] Trader Joe's , Aldi , Costco , Target Corporation , and A&P have dramatically cut the threatened fish that they sell; Whole Foods, Safeway Inc., Trader Joe's, Walmart , and Hy-Vee introduced sustainably caught canned tuna; [ 67 ] and Wegmans, Whole Foods, Safeway Inc., Target, and Trader Joe's have lobbied for strong ocean policies, such as protecting the Ross Sea and Bering Sea Canyons as marine reserves . [ 40 ] [ 65 ] [ 66 ] | https://en.wikipedia.org/wiki/Phil_Radford |
Philibert Nang (born 1967 [ 1 ] ) is a Gabonese mathematician known for his work in algebra ( D-modules , Riemann–Hilbert correspondence ).
Nang won the 2011 ICTP Ramanujan Prize for his research in mathematics, and because he conducted it in Gabon the ICTP declared: "It is hoped that his example will inspire other young African mathematicians working at the highest levels while based in Africa." [ 2 ] He was awarded the African Mathematics Millennium Science Initiative -Phillip Griffiths Prize in 2017. [ 3 ]
He obtained his Ph.D. from the Pierre and Marie Curie University in 1996 under the supervision of Louis Boutet de Monvel . [ 4 ]
Nang currently serves as president of the Gabon Mathematical Society . [ 5 ]
He has been a visiting member at the Max Planck Institute for Mathematics and at the Tata Institute of Fundamental Research . [ 6 ] Currently he is employed as an associate professor at the University of Pretoria [ 7 ] in South Africa .
| https://en.wikipedia.org/wiki/Philibert_Nang |
Philip Coppens (October 24, 1930 – June 21, 2017) [ 2 ] was a Dutch -born American chemist and crystallographer known for his work on charge density analysis using X-ray crystallography [ 3 ] and for pioneering work in the field of photocrystallography . [ 4 ]
The Amersfoort -born Coppens received his B.S. and Ph.D. degrees from the University of Amsterdam in 1954 and 1960, where he was supervised by Carolina MacGillavry . In 1968, following appointments at the Weizmann Institute and Brookhaven National Laboratory , he was appointed to the chemistry department at the State University of New York at Buffalo . He was a SUNY Distinguished Professor and holder of the Henry M. Woodburn Chair of Chemistry. Among the many 3-dimensional structures Coppens characterized is the nitroprusside ion . [ 5 ]
Coppens was a corresponding member of the Royal Netherlands Academy of Arts and Sciences from 1979 [ 6 ] and a fellow of the American Association for the Advancement of Science from 1993. [ 7 ] Additionally, he was awarded the Gregori Aminoff Prize of the Royal Swedish Academy of Sciences in 1996, the Ewald Prize of the International Union of Crystallography in 2005, [ 8 ] and the Kołos Medal in 2013. | https://en.wikipedia.org/wiki/Philip_Coppens_(chemist) |
Philip Hall FRS [ 1 ] (11 April 1904 – 30 December 1982) was an English mathematician . His major work was on group theory , notably on finite groups and solvable groups . [ 2 ] [ 3 ]
He was educated first at Christ's Hospital , where he won the Thompson Gold Medal for mathematics, and later at King's College, Cambridge . He was elected a Fellow of the Royal Society in 1951 and awarded its Sylvester Medal in 1961. He was President of the London Mathematical Society from 1955 to 1957, and was awarded its Berwick Prize in 1958 and De Morgan Medal in 1965. [ 4 ] [ 5 ]
| https://en.wikipedia.org/wiki/Philip_Hall |
Philip Stuart Kitcher (born 20 February 1947) is a British philosopher who is the John Dewey Professor Emeritus of philosophy at Columbia University . [ 4 ] He specialises in the philosophy of science , the philosophy of biology , the philosophy of mathematics , the philosophy of literature , and more recently pragmatism .
Born in London, Kitcher spent his early life in Eastbourne , East Sussex , on the south coast of the United Kingdom, where another distinguished philosopher of an earlier generation ( A. J. Ayer ) was also at school. Kitcher himself went to school at Christ's Hospital , Horsham, West Sussex . [ 5 ] [ 6 ] He earned his BA in mathematics/history and philosophy of science from Christ's College, Cambridge , in 1969, and his PhD in history and philosophy of science from Princeton University in 1974, where he worked closely with Carl Hempel and Thomas Kuhn .
Kitcher is currently John Dewey Professor of Philosophy Emeritus at Columbia University . As chair of Columbia's Contemporary Civilization program (part of its undergraduate Core Curriculum ), he also held the James R. Barker Professorship of Contemporary Civilization. Before moving to Columbia, Kitcher held tenure-track positions at the University of Vermont , the University of Minnesota , and University of California, San Diego , where he held the position of Presidential Professor of Philosophy.
Kitcher is past president of the American Philosophical Association . In 2002, Kitcher was named a fellow of the American Academy of Arts and Sciences , and he was awarded the inaugural Prometheus Prize from the American Philosophical Association in 2006 in honour of extended achievement in the philosophy of science . He was elected to the American Philosophical Society in 2018. [ 7 ] Kitcher was Editor-in-Chief of the journal Philosophy of Science from 1994 to 1999, and was also a member of the NIH / DOE Working Group on the Ethical, Legal, and Social Implications of the Human Genome Project from 1995 to 1997.
He has trained a number of philosophers of science, including Peter Godfrey-Smith ( University of Sydney ), Kyle Stanford ( University of California, Irvine ), and Michael R. Dietrich ( University of Pittsburgh ). He also taught C. Kenneth Waters (University of Calgary) and Michael Weisberg (University of Pennsylvania) as undergraduates. [ citation needed ]
He is married to Patricia Kitcher . She is a Kant scholar and philosopher of mind who has been the Mark Van Doren Professor of Humanities at Columbia. Their son, Charles Kitcher, is the associate general counsel for the Federal Election Commission . [ 8 ] [ 9 ]
Within philosophy, Kitcher is best known for his work in philosophy of biology , science , and mathematics , and outside academia for his work examining creationism and sociobiology . His works attempt to connect the questions raised in philosophy of biology and philosophy of mathematics with the central philosophical issues of epistemology , metaphysics , and ethics . He has also published papers on John Stuart Mill , Kant and other figures in the history of philosophy . His 2012 book [ 10 ] documented his developing interest in John Dewey and a pragmatic approach to philosophical issues. He sees pragmatism as providing a unifying and reconstructive approach to traditional philosophy issues. He had, a year earlier, published a book outlining a naturalistic approach to ethics, The Ethical Project (Harvard University Press, 2011). [ 11 ] He has also done work on the philosophy of climate change . [ 12 ] [ 13 ]
Kitcher has proposed three criteria for good science. [ 14 ]
He increasingly recognised the role of values in practical decisions about scientific research. [ 15 ]
Kitcher is the author of Abusing Science: The Case Against Creationism . He has commented on the way creationists have misinterpreted Kuhn:
Thomas Kuhn 's book The Structure of Scientific Revolutions has probably been more widely read—and more widely misinterpreted—than any other book in the recent philosophy of science. The broad circulation of his views has generated a popular caricature of Kuhn's position. According to this popular caricature, scientists working in a field belong to a club. All club members are required to agree on main points of doctrine. Indeed, the price of admission is several years of graduate education , during which the chief dogmas are inculcated. The views of outsiders are ignored. Now I want to emphasize that this is a hopeless caricature, both of the practice of scientists and of Kuhn's analysis of the practice. Nevertheless, the caricature has become commonly accepted as a faithful representation, thereby lending support to the Creationists' claims that their views are arrogantly disregarded. [ 16 ] | https://en.wikipedia.org/wiki/Philip_Kitcher |
Philip Joseph Kocienski FRS (born 23 December 1946 [ 3 ] ) is a British organic chemist . He is an Emeritus Professor at the University of Leeds . [ 4 ]
Kocienski has made contributions to the design and development of new organometallic reagents in synthesis , and the applications of synthetic methods to complex natural products. Early work with Basil Lythgoe on the scope and stereochemistry of the Julia olefination with alpha-metallated sulphone reagents emphasised the value of this method in organic chemistry. His major contribution has been to research the synthesis and chemistry of novel metallated (lithium, copper and nickel) enol ethers , and to develop the uses of these intermediates in the synthesis of oxacyclic and geometrically defined alkene units in natural products of biological significance. Kocienski has synthesised the insecticide milbemycin beta 3, the potassium channel blocker talaromycin B, the hypotensive agent lacrimin, and the antihypertensive agent zoapatanol. His total synthesis of the insect toxin pederin , and his synthetic work toward the immunosuppressant FK 506 , have established him as one of the leading organic chemists in the field. [ 5 ]
In 1984 Kocienski received the Hickinbottom Fellowship of the Royal Society of Chemistry. [ 6 ] He was elected a Fellow of the Royal Society (FRS) in 1997. [ 5 ] He won the Marie Curie Medal in 1997. Since 2000 he has been a foreign member of the Polish Academy of Sciences . [ 3 ] | https://en.wikipedia.org/wiki/Philip_Kocienski |
Philip Reed is a renowned model ship scratch builder and published author on the subject of model ship construction. [ 1 ] [ 2 ] He is known for his models of ships from the First and Second World Wars , as well as the Napoleonic era and the 17th century . [ 1 ] [ 3 ] Regarded as one of the finest model shipbuilders in the world, [ 4 ] he was awarded the Championship Cup by the Mechanical Engineerium Museum . [ 5 ]
Philip Reed began his career in fine art and education but shifted his focus after being captivated by a model ship-building kit displayed in a hobby shop window. [ 1 ] [ 6 ] Determined to master the craft, he started constructing models and selling them at the American Marine Model Gallery in Massachusetts and the West End Gallery. He describes his passion for building model ships as an obsession, noting that at its best, it offers a ‘meditative absorption,’ while at its worst, it can lead to ‘total frustration.’ [ 1 ]
Since beginning his career in model making in 1980, Reed has authored numerous articles and four books focused on ship model construction. [ 1 ] [ 7 ] His publications are primarily aimed at advanced scratch builders, [ 8 ] [ 9 ] but can significantly improve readers' model shipbuilding skills, while showing them that persistence and trial and error can produce a fine model. [ 7 ] His books can also be valuable for nautical archaeologists who may benefit from using modeling as a tool to interpret information. [ 10 ] In his book Building a Miniature Navy Board Model, Reed guides readers through each stage of the process, accompanied by 400 photographs and detailed explanations. [ 11 ] [ 10 ] The book marked the first comprehensive examination of early 18th-century Navy board models in many years. [ 12 ] [ 13 ] He said that, "One of my great hopes is that someone will take up the baton and use what I’ve written to extend the life of this particular art form." [ 1 ]
Reed has built many model ships from the First and Second World Wars and the Napoleonic era, but in recent years he has shifted his focus to ships from the second half of the 17th century, a period he admired in his childhood. [ 1 ] Each of his models is handcrafted, taking a minimum of 6 months to complete, with many requiring a year or longer. [ 5 ] He works in a small and simple workshop, using hand tools and small machine tools. [ 2 ] His models are built to a scale of 1/16 inch to the foot, a quarter of the more commonly used 1:48 (1/4 inch to the foot). [ 1 ] [ 14 ] Although the tools and materials that Reed employs are no different from those other modelers use, it is the way he uses them and his outside-the-box thinking that distinguish him. [ 7 ] Roger Cole of The Northern Mariner said that pictures of Reed's models appear like they "could be photos of the original vessels ." [ 2 ]
Early on in Reed’s career, he realized that it was very difficult to make accurate models from just plans . Fortunately, he was able to go to nearby London museums and take photographs. He said that without those photographs, he would not have been able to complete his early models. He believes that his time as an art student was critical in developing his mindset and abilities as a model ship builder. [ 15 ] He also emphasizes the importance of pursuing what you “absolutely love doing,” suggesting that everyone has a purpose in this world. [ 16 ] He wrote the foreword for Alistair Roach's The Life and Ship Models of Norman Ough and called Ough a great inspiration for his own career. [ 17 ] As a child, he visited the Imperial War Museum and was particularly fond of Ough's HMS Curacao . [ 18 ] Notably, Reed conveyed that he might be "the last model ship maker working in this particular format." [ 19 ] Country Life magazine has recognized Philip Reed as a living national treasure. [ 1 ] [ 20 ]
Philip Reed crafted The Anne of 1678 to depict it as it was transporting Maria Sofia of Neuburg to marry Don Pedro the Second in Lisbon . He credits Richard Endsor's book, The Warship Anne, as a vital resource in his research for constructing the model, which he built to a scale of 1/16 inch. [ 15 ] The Mordaunt represents a further example of a miniature Navy board model crafted by Reed. The contributions of Richard Endsor were also instrumental in the development of the plans for this model. In its construction, Reed utilized Brazilian boxwood for both the framing and planking , while opting for carved boxwood for the decorative work. [ 9 ] Another noteworthy example of a Navy board model is the Royal George , for which Reed selected yellow cedar for the planking. In order to create the intricate decorative carvings for this model, Reed employed wire armatures , sculpted boxwood, and artist's gesso . [ 9 ]
While most of the models he constructed have been sailing ships, he has also built sixteen modern warship models, with the majority of them being WW1 and WW2 ships. [ 18 ] One such model is HMS Caesar . Before commencing its construction, Reed studied plans drawn to 1/16 inch scale while using John Lambert's work for larger components. Additionally, he used a copy of Anthony Preston's Warship # 32, HMS Cavalier, and the Ca Class Destroyers . He also utilized the numerous photographs he captured of the HMS Cavalier at Chatham , keeping in mind that certain modifications made to the Cavalier may not have been made to the Caesar. [ 2 ]
He was awarded the Championship Cup by the Mechanical Engineerium Museum in Brighton , England . His models have been exhibited at the Peabody Essex Museum in Salem , MA, the Parker Gallery, and the Philadelphia Maritime Museum , [ 5 ] and other leading museums and galleries in North America and Europe . [ 6 ]
Ship model
Scratch building
Scale model
Norman A. Ough
Model makers
Zen and The Art of Model Making - The Story of Philip Reed
Britain's Last Model Ship Maker Will Never Give Up Craft
Philip Reed's Ship Models No#1 - The Anne
Philip Reed's Ship Models No#5 - HMS Cavalier | https://en.wikipedia.org/wiki/Philip_Reed_(model_ship_maker) |
Philip Warren (born 1930 or 1931) is an English ship model maker best known for building a matchstick Maritime Fleet. His collection includes models of over 500 vessels and 1,000 aircraft , as well as of all the Royal Navy ships since 1945. [ 1 ] [ 2 ]
Philip Warren was born in Dorset , England , and was a director of a stationery wholesale company before his retirement. [ 3 ] He started building models of naval boats at the age of 17 due to a fascination with ships . [ 4 ] Like many children of his era, he developed an interest in warships as a result of growing up during World War 2. [ 5 ]
When he first began model making , he used balsa wood to make models. [ 4 ] He switched to matchsticks because he found that material more suitable for static waterline warship models, [ 4 ] and because matches were common. [ 3 ] His models are hand-built to a scale of 1:300, using only a few building materials, including matchboxes , matchsticks , a razor blade , and glue . [ 6 ] [ 7 ] Completing his models requires him to study photographs , drawings , and plans of real ships. [ 8 ] His largest model is 1m (3ft) long. [ 3 ]
When Warren began matchstick model making, matchboxes were easy to get a hold of, but in recent years, he has relied on donations to keep up with his work. [ 6 ] He has continued model making into his 90s. [ 1 ]
Philip Warren's earliest model was the Royal Navy's HMS Scorpion, which was less detailed compared to later models. [ 4 ] Following its completion, he built a different destroyer , a battleship , a cruiser , and later an aircraft carrier . [ 4 ] As the years passed, his attention to detail and accuracy improved, making models with many moving parts, including missile launchers , radars , gun turrets , swing wings, and helicopter rotors . [ 9 ] His collection of models includes 500 vessels and 1000 aircraft from the very last World War 2 battleships to nuclear-powered submarines and modern aircraft carriers. [ 1 ] [ 10 ]
In his over 70 years [ 11 ] of model making, he built one or more of each class of Royal Navy ships from 1945 to the present day, including 7 Leander class frigates . [ 5 ] [ 10 ] In addition, he has also built Commonwealth ships. [ 10 ] Also, Warren has constructed 60 US ships so far, including four giant supercarriers , two battleships , and various cruisers , demonstrating the evolution from gun-armed vessels to those armed with missiles. Additionally, he built around 50 ships from the navies of various other nations. [ 12 ]
His aircraft models span from older aircraft, such as the Swordfish , to modern supersonic jets . [ 12 ] Warren's model of the HMS Queen Elizabeth was completed before the original. [ 5 ]
It can take Warren over a year to finish a model. [ 12 ] He very rarely gets rid of models and has never made money from his hobby, [ 10 ] despite being told by numerous museum curators that his models have considerable value. [ 12 ] His models were not built in historical sequence, but are so well preserved that it is difficult to tell which are older. [ 9 ] Warren's ships each take him approximately 1,500 matches to build. [ 13 ] Adam Aspinall from The Mirror states, “Each vessel is correct to the tiniest detail.” [ 12 ] In 1989, Philip gave the United Kingdom's Prince Andrew a model of the frigate Campbeltown . [ 9 ] Trend Hunter named Warren the “ Hobby King of Hobbydom” due to having built models of all of Britain's warships since the end of the Second World War. [ 14 ]
Various museums and charities have displayed Philip Warren's work, including the Fleet Air Arm Museum and Nothe Fort . [ 10 ] [ 2 ] [ 15 ] The Duke of Gloucester honored him with a glass trophy for his service to the Nothe Fort community through his yearly display of his matchstick fleet. [ 15 ]
Philip Warren was married to his wife Anita for 47 years until her death. [ 3 ] | https://en.wikipedia.org/wiki/Philip_Warren |
Philipp Kukura FRSC (born 26 March 1978) is Professor of Chemistry at the University of Oxford , and a Fellow of Exeter College, Oxford . [ 1 ] He is best known for pioneering contributions to femtosecond stimulated Raman spectroscopy (FSRS), interferometric scattering microscopy (iSCAT) and the development of mass photometry.
He was born in Bratislava , then Czechoslovakia , [ 2 ] into the family of the Slovak actor Juraj Kukura . In 1984 the family emigrated to Germany. In 2002 he graduated with a Master of Chemistry from the University of Oxford and competed in the 2001 and 2002 Rugby League Varsity matches. In 2006 he completed his PhD in Chemistry at the University of California, Berkeley College of Chemistry .
After completing his PhD, Philipp Kukura moved to Zürich . There he worked at the Swiss Federal Institute of Technology as a postdoctoral research assistant under the supervision of Professor Vahid Sandoghdar on nano-optics until 2010. [ 2 ] He returned to Oxford in 2010 to work initially as an EPSRC Career Acceleration Fellow. In 2011 he was elected to a tutorial fellowship at Exeter College. [ 2 ] In 2016 he was promoted to Full Professor of Chemistry. [ 1 ]
In 2018 Philipp Kukura founded Refeyn Ltd. together with Justin Benesch, Daniel Cole, and Gavin Young to commercialise mass photometry. [ 1 ] | https://en.wikipedia.org/wiki/Philipp_Kukura |
Philippa Marion Wiggins ( née Glasgow ) FRSNZ (16 July 1925 – 16 March 2017) was a New Zealand academic, who made significant contributions to the understanding of the structure of water in living cells. [ 1 ]
Wiggins studied science at the University of Canterbury , but although she wanted to continue in physics, women at the university were not allowed to progress past stage one. Having switched to chemistry, Wiggins then won a scholarship to research at the Davy-Faraday Laboratory at the Royal Institution in London. [ 2 ] She then completed a PhD at King's College London . Wiggins took time off to have a family and did not return to full-time work until the age of 48. [ 3 ]
Upon returning to New Zealand, Wiggins worked at the University of Canterbury with Walter Metcalf from 1962 to 1966. [ 1 ] After this she worked at the University of Otago , and began working on water in living cells. [ 1 ]
Wiggins was awarded a Career Fellowship by the New Zealand Medical Research Council . From 1970, she continued her research in the Department of Medicine at the University of Auckland , as Professor of Membrane Physiology. [ 1 ]
In 1994 Wiggins co-founded BiostoreNZ, which commercialised preservation and storage technology for cells. BiostoreNZ was later acquired by Genesis Research and Development. [ 4 ] Wiggins worked as a research scientist for Genesis Research in 1997, and continued to publish until 2009. She held more than 40 patents. [ 1 ]
Wiggins died in Auckland on 16 March 2017 aged 91. [ 1 ]
Wiggins realised that water can exist in two different states, and that the existence of these states explains the way that living cells work, and has implications for DNA and protein structure. [ 3 ] [ 1 ]
Wiggins was appointed a Fellow of the Royal Society Te Apārangi in 1991. [ 5 ] She received a medal for her research from the Health Research Council of New Zealand . [ 1 ]
In 2017 Wiggins was featured as one of the Royal Society Te Apārangi's 150 women in 150 words . [ 3 ] | https://en.wikipedia.org/wiki/Philippa_Wiggins |
Philips Hue is a line of color-changing LED lamps and white bulbs which can be controlled wirelessly . The Philips Hue line of bulbs was the first smart bulb of its kind on the market. [ 3 ] The lamps are currently created and manufactured by Signify N.V. , formerly the Philips Lighting division of Royal Philips N.V. [ 1 ] [ 4 ]
The Hue Bridge is the central controller of the lighting system; it allows the bulbs to "communicate" with Apple HomeKit [ 5 ] and the app. In 2016, Philips released a new square-shaped v2 bridge with increased memory and processor speed which replaced the round v1 bridge. The first-generation bridge received a final software update in April 2020, and support from the Philips web servers was discontinued. Functionality that depended on Philips servers to pack instructions into a form the bridge could execute, such as grouping lights into rooms and scheduling scenes, could no longer be created. Users of the earlier v1 bridge had to upgrade to a v2 bridge to be able to control their configuration.
The Hue system was released in October 2012 on Apple Store , [ 3 ] and was marketed as the first iOS-controlled lighting appliance. [ 6 ] Products released before 2019 use the Zigbee Light Link protocol, a compatible subset of Zigbee 3.0, to communicate, while lighting products released later use either Bluetooth or Zigbee 3.0. Smart switches, motion detectors, and other accessory devices such as the Hue HDMI sync originally used only the Zigbee Home Automation protocol, but later supported Zigbee 3.0. Hue system components can be controlled over the Internet, typically by smartphone apps over cellular or WiFi networks, or a Home Automation voice command interface. Commands are delivered to the bridge via a wired Ethernet connection which transmits the commands to the devices over the Zigbee mesh network. [ 3 ] The initial system had bulbs capable of producing up to 600 lumens , a limit later increased to 1600 lumens.
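The local control path described in this paragraph (an app command sent to the bridge over the local network, then relayed by the bridge to the bulb over Zigbee) can be illustrated with a short script. The following is a minimal sketch in Python, assuming a bridge reachable at 192.168.0.10 and an already-registered application key (both placeholders), and using the bridge's widely documented local HTTP interface; it is an illustration rather than official Signify sample code.

```python
import json
import urllib.request

# Placeholders: substitute a real bridge address and a registered application key.
BRIDGE_IP = "192.168.0.10"
APP_KEY = "your-app-key"
BASE_URL = f"http://{BRIDGE_IP}/api/{APP_KEY}"

def set_light_state(light_id: int, state: dict) -> list:
    """Send a state change (on/off, brightness, colour) to one bulb via the bridge.

    The bridge receives the HTTP request over the local network and relays the
    command to the bulb over its Zigbee mesh network.
    """
    request = urllib.request.Request(
        f"{BASE_URL}/lights/{light_id}/state",
        data=json.dumps(state).encode("utf-8"),
        method="PUT",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

if __name__ == "__main__":
    # Example: turn light 1 on at roughly half brightness (brightness range is 1-254).
    print(set_light_state(1, {"on": True, "bri": 127}))
```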
In July 2018, an outdoor version of the Philips Hue suite was introduced, [ 7 ] and in October 2018 a suite of entertainment-focused, free-standing light fittings. [ 8 ] In January 2019 Philips announced outdoor sensors and lights. [ 9 ]
Three different Philips Hue color types are available, all dimmable: White, White Ambiance, and White and Color Ambiance. The White bulbs produce white light with a color temperature of 2700 K (warm); the White Ambiance bulbs produce white light of color temperature adjustable between 2200 K (warm soft white) and 6500 K (daylight). The White and Color Ambiance range can generate white light adjustable from 2000 K to 6500 K, and also adjustable colored light.
Since June 2019, all Philips Hue bulbs support Bluetooth through the Philips Hue Bluetooth app, [ 10 ] so that a Philips Hue Bridge is no longer necessary for basic operation, though the Bridge enables further features. [ 11 ] Up to ten bulbs can be controlled by Bluetooth (which requires location services to be enabled) over a range stated to be 30 feet (9.1 m).
Use of the Hue Bridge enables control of up to 50 lights, assignment of room names, full voice control, configuration of Hue smart accessories, setting of timers and schedules, away-from-home control, routines to switch on and off, and synchronisation of lights with entertainment devices. [ 12 ]
A security flaw was found and resolved in 2016: the bulbs, and potentially other ZigBee devices, could be remotely controlled by anyone, using inexpensive equipment. Researchers tricked the lights into installing a malicious firmware update enabling them to be controlled from 70 metres (230 ft) away. [ 13 ]
In an article in Forbes , Seth Porges called Philips Hue the "best product of 2012". [ 6 ] PC Magazine reviewed the white variation and named it as an editors' choice, saying it was bright and affordable and had many features. [ 14 ] | https://en.wikipedia.org/wiki/Philips_Hue |
Phillip A. Porras is a computer scientist and security researcher known for his work combating the Conficker worm. Porras leads the Internet Security Group in SRI International 's Computer Science Laboratory.
He was previously a manager of the Trusted Computer Systems Department of The Aerospace Corporation . Porras holds 12 U.S. patents, and was named an SRI Fellow in 2013. [ 1 ]
Porras attended the University of California, Irvine .
Porras was an author of patents involved in the 2008 case SRI International, Inc. v. Internet Security Systems, Inc. [ 2 ]
During the Conficker worm's initial attack, Porras was running a honeypot and was one of the first security researchers to notice it; he was also part of the "Conficker Cabal" that helped combat the worm. [ 3 ] [ 4 ] Porras' team in SRI published an extensive analysis of the worm. [ 5 ] In 2010, Porras was a co-author of BLADE , a collaboration between SRI and Georgia Tech researchers designed to prevent drive-by download malware attacks. [ 6 ] [ 7 ] [ 8 ]
Porras was named an SRI Fellow in 2013 for his long-term work in information security and malware analysis, and his recent research on OpenFlow . [ 9 ] | https://en.wikipedia.org/wiki/Phillip_Porras |
The Phillips Machine , also known as the MONIAC ( Monetary National Income Analogue Computer ), Phillips Hydraulic Computer and the Financephalograph , is an analogue computer which uses fluidic logic to model the workings of an economy. The name "MONIAC" was suggested by the association of money and ENIAC , an early electronic digital computer .
It was created in 1949 by the New Zealand economist Bill Phillips to model the national economic processes of the United Kingdom , while Phillips was a student at the London School of Economics (LSE). While designed as a teaching tool, it was discovered to be quite accurate, and thus an effective economic simulator.
At least twelve machines were built, donated to or purchased by various organisations around the world. As of 2023 [update] , several are in working order.
Phillips scrounged materials to create his prototype computer, including bits and pieces of war surplus parts from old Lancaster bombers . [ 1 ] The first MONIAC was created in his landlady's garage in Croydon at a cost of £ 400 (equivalent to £18,000 in 2023).
According to Anna Corkhill:
Phillips discussed the idea with Walter Newlyn, a junior academic at Leeds University who had studied with Phillips at the LSE, and proceeded to build a prototype (with Newlyn’s assistance) over one summer in a garage in Croydon. Newlyn persuaded the head of department at Leeds to advance £100 towards building the prototype. Newlyn helped as a craftsman’s mate—sanding and gluing together pieces of acrylic and supplementing Phillips’ economic knowledge. [ 2 ]
Phillips first demonstrated the machine to leading economists at the London School of Economics (LSE), where he was a student, in 1949. It was very well received and Phillips was soon offered a teaching position at the LSE.
The machine had been designed as a teaching aid but was also discovered to be an effective economic simulator. [ 3 ] When the machine was created, electronic digital computers that could run complex economic simulations were unavailable. In 1949, the few computers in existence were restricted to government and military use and their lack of adequate visual displays made them unable to illustrate the operation of complex models. Observing the machine in operation made it much easier for students to understand the interrelated processes of a national economy. The range of organisations that acquired a machine showed that it was used in both capacities. [ original research? ]
The machine was approximately 2 m (6 ft 7 in) high, 1.2 m (3 ft 11 in) wide and almost 1 m (3 ft 3 in) deep, and consisted of a series of transparent plastic tanks and pipes which were fastened to a wooden board. Each tank represented some aspect of the UK national economy and the flow of money around the economy was illustrated by coloured water. At the top of the board was a large tank called the treasury. Water (representing money) flowed from the treasury to other tanks representing the various ways in which a country could spend its money. For example, there were tanks for health and education. To increase spending on health care a tap could be opened to drain water from the treasury to the tank which represented health spending. Water then ran further down the model to other tanks, representing other interactions in the economy. Water could be pumped back to the treasury from some of the tanks to represent taxation . Changes in tax rates were modeled by increasing or decreasing pumping speeds.
Savings reduce the funds available to consumers and investment income increases those funds. [ citation needed ] The machine showed it by draining water (savings) from the expenditure stream and by injecting water (investment income) into that stream. When the savings flow exceeds the investment flow, the level of water in the savings and investment tank (the surplus-balances tank) would rise to reflect the accumulated balance. When the investment flow exceeds the savings flow for any length of time, the surplus-balances tank would run dry. Import and export were represented by water draining from the model and by additional water being poured into the model.
The flow of the water was automatically controlled through a series of floats, counterweights, electrodes, and cords. When the water reached a certain level in a tank, pumps and drains would be activated. To their surprise, Phillips and his associate Walter Newlyn found that the machine could be calibrated to an accuracy of 2%.
The flow of water between the tanks was determined by economic principles and the settings for various parameters. Different economic parameters, such as tax rates and investment rates, could be entered by setting the valves which controlled the flow of water about the computer. Users could experiment with different settings and note their effects. The machine's ability to model the subtle interaction of a number of variables made it a powerful tool for its time. [ citation needed ] When a set of parameters resulted in a viable economy the model would stabilise and the results could be read from scales. The output from the computer could also be sent to a rudimentary plotter .
It is thought that twelve to fourteen machines were built.
The Terry Pratchett novel Making Money contains a similar device as a major plot point. However, after the device is fully perfected, it magically becomes directly coupled to the economy it was intended to simulate, with the result that the machine cannot then be adjusted without causing a change in the actual economy (in parodic resemblance to Goodhart's law ). [ improper synthesis? ]
Economist Kate Raworth 's book Doughnut Economics critiques the use of an electric pump as the power source, claiming that because its power consumption was not considered, it left an important component out of the economic model it was portraying: [ 11 ] [ 12 ]
"This is where Bill Phillips’s MONIAC machine was fundamentally flawed. While brilliantly demonstrating the economy’s circular flow of income, it completely overlooked its throughflow of energy. To make his hydraulic computer start up, Phillips had to flip a switch on the back of it to turn on its electric pump. Like any real economy it relied upon an external source of energy to make it run, but neither Phillips nor his contemporaries spotted that the machine’s power source was a critical part of what made the model work. That lesson from the MONIAC applies to all of macroeconomics: the role of energy deserves a far more prominent place in economic theories that hope to explain what drives economic activity." | https://en.wikipedia.org/wiki/Phillips_Machine |
The Phillips catalyst , or the Phillips supported chromium catalyst, is the catalyst used to produce approximately half of the world's polyethylene . A heterogeneous catalyst , it consists of a chromium oxide supported on silica gel . [ 1 ] Polyethylene, the most-produced synthetic polymer, is produced industrially by the polymerization of ethylene : n CH 2 =CH 2 → [−CH 2 −CH 2 −] n
Although exergonic (i.e., thermodynamically favorable), the reaction requires catalysts. Three main catalysts are employed commercially: the Phillips catalyst, Ziegler–Natta catalysts (based on titanium trichloride ), and, for specialty polymers, metallocene -based catalysts.
The Phillips catalyst is prepared by impregnating high surface area silica gel with chromium trioxide or related chromium compounds. The solid precatalyst is then calcined in air to give the active catalyst. Only a fraction of the chromium is catalytically active, a fact that interferes with elucidation of the catalytic mechanism. The active catalyst is often depicted as a chromate ester bound to the silica surface. The mechanism for the polymerization process is the subject of much research, the central question being the structure of the active species, which is assumed to be an organochromium compound . [ 2 ] Robert L. Banks and J. Paul Hogan , both at Phillips Petroleum , filed the first patents on the Phillips catalyst in 1953. Four years later, the process was commercialized. [ 3 ] | https://en.wikipedia.org/wiki/Phillips_catalyst |
In astrophysics , the Phillips relationship is the relationship between the peak luminosity of a Type Ia supernova and the speed of luminosity evolution after maximum light. The relationship was independently discovered by the American statistician and astronomer Bert Woodard Rust and the Soviet astronomer Yury Pavlovich Pskovskii [ ru ] in the 1970s. [ 1 ] [ 2 ] [ 3 ] They found that the faster the supernova faded from maximum light, the fainter its peak magnitude was. As a main parameter characterizing the light curve shape, Pskovskii used β, the mean rate of decline in photographic brightness from maximum light to the point at which the luminosity decline rate changes. β is
measured in magnitudes per 100-day interval. Selection of this parameter is justified by the fact that, at that time, the probability of discovering a supernova before maximum light, and thus of obtaining the full light curve, was small. Moreover, the existing light curves were mostly incomplete. On the other hand, determining the decline after maximum light was rather simple for most observed supernovae.
In the early 1980s CCD cameras appeared, and the number of SNe discoveries increased substantially. Moreover, the probability of discovering SNe before they reached maximum light and following their brightness evolution for longer also increased. The first light curves of SNe Ia obtained using CCD photometry showed that some supernovae had faster decline rates than others. Later, the low-luminosity Type Ia supernova SN 1991bg, with a fast decline rate, was discovered.
All this motivated the American astronomer Mark M. Phillips to revise this relationship precisely during the course of the Calán/Tololo Supernova Survey . [ 5 ] The correlation had been difficult to prove because Pskovskii's slope (β) parameter was difficult to measure with precision in practice, a necessary condition to prove the correlation. Rather than trying to determine the slope, Phillips used a simpler and more robust procedure that consisted in "measuring the total amount in magnitudes that the light curve decays from its peak brightness during some specified period following maximum light." It was defined as the decline in the B -magnitude light curve from maximum light to the magnitude 15 days after B -maximum, a parameter he called Δ m 15 {\displaystyle \Delta {m}_{15}} . The lead sentence of the acknowledgments section of Phillips' paper states: "I am indebted to George Jacoby for suggesting the Δ m 15 {\displaystyle \Delta {m}_{15}} parameter as an alternative to Pskovskii's β." The relation states that the maximum intrinsic B-band magnitude is given by
M m a x ( B ) = − 21.726 + 2.698 Δ m 15 ( B ) . {\displaystyle M_{\mathrm {max} }(B)=-21.726+2.698\Delta m_{15}(B).} [ 6 ]
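As a numerical illustration of this relation, the short Python sketch below evaluates the peak absolute B magnitude for an assumed decline rate and, combined with an assumed apparent peak magnitude, the implied distance modulus. The input numbers are arbitrary examples, not measurements from the cited papers.

```python
def peak_absolute_b(delta_m15: float) -> float:
    """Phillips relation: peak absolute B magnitude from the 15-day decline rate."""
    return -21.726 + 2.698 * delta_m15

def distance_modulus(apparent_b_max: float, delta_m15: float) -> float:
    """mu = m - M, using the Phillips relation for the absolute magnitude."""
    return apparent_b_max - peak_absolute_b(delta_m15)

if __name__ == "__main__":
    dm15 = 1.1     # assumed decline in B over the 15 days after maximum (magnitudes)
    m_max = 14.0   # assumed observed apparent peak B magnitude
    print(f"M_max(B) = {peak_absolute_b(dm15):.3f}")                   # about -18.758
    print(f"distance modulus = {distance_modulus(m_max, dm15):.2f}")   # about 32.76
```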
Phillips dedicated the journal article confirming Yuri Pskovskii's proposed correlation to Pskovskii, who died a few weeks after Phillips' evidence confirming the relationship was published.
The relation has since been recast to include the evolution in multiple photometric bandpasses, with a significantly shallower slope [ 7 ] [ 8 ] and as a stretch in the time axis relative to a standard template. [ 9 ] The relation is typically used to bring any Type Ia supernova peak magnitude to a standard candle value. | https://en.wikipedia.org/wiki/Phillips_relationship |
Philopatry is the tendency of an organism to stay in or habitually return to a particular area. [ 1 ] The causes of philopatry are numerous, but natal philopatry , where animals return to their birthplace to breed, may be the most common. [ 2 ] The term derives from the Greek roots philo , "liking, loving" and patra , "fatherland", [ 3 ] although in recent years the term has been applied to more than just the animal's birthplace. Recent usage refers to animals returning to the same area to breed despite not being born there, and migratory species that demonstrate site fidelity: reusing stopovers, staging points, and wintering grounds. [ 3 ]
Known reasons for organisms to be philopatric include mating (reproduction), survival, migration, parental care, and access to resources. In most species of animals, individuals benefit from living in groups, [ 4 ] because, depending on the species, solitary individuals are more vulnerable to predation and more likely to have difficulty finding resources and food. Living in groups therefore increases a species' chances of survival, which is linked to finding resources and reproducing. Again, depending on the species, returning to the birthplace, where that species already occupies the territory, is often the more favorable option. The birthplaces of these animals serve as a territory to which they return for feeding and refuge, as with fish on a coral reef . [ 5 ] In an animal behavior study conducted by Paul Greenwood, female mammals are overall more likely to be philopatric, while male mammals are more likely to disperse; male birds are more likely to be philopatric, while female birds are more likely to disperse. Philopatry will favor the evolution of cooperative traits because the direction of the sex bias in dispersal follows from the particular mating system . [ 6 ]
One type of philopatry is breeding philopatry , or breeding-site fidelity , and involves an individual, pair, or colony returning to the same location to breed, year after year. An animal can live and reproduce in that area; although animals can reproduce anywhere, they may live longer in their birth area. Among animals that are largely sedentary, breeding-site philopatry is common. It is advantageous to reuse a breeding site, as there may be territorial competition outside of the individual’s home range, and since the area evidently meets the requirements of breeding. Such advantages are compounded among species that invest heavily in the construction of a nest or associated courtship area. For example, the megapodes (large, ground-dwelling birds such as the Australian malleefowl , Leipoa ocellata ) construct a large mound of vegetation and soil or sand to lay their eggs in. Megapodes often reuse the same mound for many years, only abandoning it when it is damaged beyond repair, or due to disturbance. Nest fidelity is highly beneficial as reproducing is time and energy consuming (malleefowl will tend a mound for five to six months per year). [ 7 ] In colonial seabirds, it has been shown that nest fidelity depends on multi-scale information, including the breeding success of the focal breeding pair, the average breeding success of the rest of the colony, and the interaction of these two scales. [ 8 ]
Breeding fidelity is also well documented among species that migrate or disperse after reaching maturity. Birds, in particular, that disperse as fledglings will take advantage of exceptional navigational skills to return to a previous site. [ 9 ] Philopatric individuals exhibit learning behaviour, and do not return to a location in following years if a breeding attempt is unsuccessful. [ 10 ] The evolutionary benefits of such learning are evident: individuals that risk searching for a better site will not have lower fitness than those that persist with a poor site. Philopatry is not homogenous within a species, with individuals far more likely to exhibit philopatry if the breeding habitat is isolated. [ 11 ] Similarly, non-migratory populations are more likely to be philopatric than those that migrate. [ 12 ]
In species that exhibit lifelong monogamous pair bonds, even outside of the breeding season, there is no bias in the sex that is philopatric. [ 13 ] However, among polygynous species that disperse (including those that find only a single mate per breeding season), there is a much higher rate of breeding-site philopatry in males than females among birds, and the opposite bias among mammals. [ 6 ] Many possible explanations for this sex bias have been posited, with the earliest accepted hypothesis attributing the bias to intrasexual competition, and territory choice. [ 13 ] The most widely accepted hypothesis is that proposed by Greenwood (1980). [ 6 ] Among birds, males invest highly in protecting resources – a territory – against other males. Over consecutive seasons, a male that returns to the same territory has higher fitness than one that is not philopatric. [ 6 ] Females are free to disperse, and assess males. Conversely, in mammals, the predominant mating system is one of matrilineal social organisation . [ 14 ]
Males generally invest little in the raising of offspring, and compete with each other for mates rather than resources. Thus, dispersing can result in reproductive enhancement, as greater access to females is available. On the other hand, the cost of dispersal to females is high, and thus they are philopatric. This hypothesis also applies to natal philopatry, but is primarily concerned with breeding-site fidelity. A more recent hypothesis builds on Greenwood’s findings, suggesting that parental influence may play a large role. Because birds lay eggs, adult females are at risk of being cuckolded by their daughters, and thus would drive them out. On the other hand, young male mammals pose a threat to their dominant father, and so are driven to disperse while young. [ 15 ]
Natal philopatry commonly refers to the return to the area the animal was born in, or to animals remaining in their natal territory. It is a form of breeding-site philopatry. The debate over the evolutionary causes remains unsettled. The outcomes of natal philopatry may be speciation, and, in cases of non-dispersing animals, cooperative breeding. Natal philopatry is the most common form of philopatry in females because it decreases competition for mating and increases the rate of reproduction and the survival rate of offspring. [ 2 ] Natal philopatry also leads to a kin-structured population, in which individuals are more closely genetically related to one another than individuals drawn at random from the species. This can also lead to inbreeding and a higher rate of natural and sexual selection within a population. [ 10 ]
The exact causes for the evolution of natal philopatry are unknown. Two major hypotheses have been proposed. Shields (1982) suggested that philopatry was a way of ensuring inbreeding , in a hypothesis known as the optimal-inbreeding hypothesis. [ 16 ] He argued that, since philopatry leads to the concentration of related individuals in their birth areas, and thus reduced genetic diversity, there must be some advantage to inbreeding – otherwise the process would have been evolutionarily detrimental and would not be so prevalent. The major beneficial outcome under this hypothesis is the protection of a local gene complex that is finely adapted to the local environment. [ 16 ] Another proposed benefit is the reduction of the cost of meiosis and recombination events. [ 9 ] Under this hypothesis, non-philopatric individuals would be maladapted and over multi-generational time, philopatry within a species could become fixed. Evidence for the optimal-inbreeding hypothesis is found in outbreeding depression . Outbreeding depression involves reduced fitness as a result of random mating, which occurs due to the breakdown of coadapted gene complexes by combining alleles that do not cross well with those from a different subpopulation. [ 17 ] However, it is important to note that outbreeding depression becomes more detrimental the longer (temporally) that subpopulations have been separated, and that this hypothesis does not provide an initial mechanism for the evolution of natal philopatry. [ 17 ]
A second hypothesis explains the evolution of natal philopatry as a method of reducing the high costs of dispersal among offspring. A review of records of natal philopatry among passerine birds found that migrant species showed significantly less site fidelity than sedentary birds. [ 9 ] Among migratory species, the cost of dispersal is paid either way. If the optimal-inbreeding hypothesis were correct, the benefits of inbreeding should result in philopatry among all species. Inbreeding depression is a phenomenon whereby deleterious alleles become fixed more easily within an inbreeding population. [ 17 ] Inbreeding depression is demonstrably costly and accepted by most scientists as a greater cost than that of outbreeding depression. [ 13 ] Within a species, there has also been found to be variation in rates of philopatry, with migratory populations exhibiting low levels of philopatry – further suggesting that the ecological cost of dispersal, rather than genetic benefits of either inbreeding or outbreeding, is the driver of natal philopatry. [ citation needed ]
A number of other hypotheses exist. One such is that philopatry is a method, in migratory species, of ensuring that the sexes interact in breeding areas, and that breeding actually occurs. [ 18 ] A second is that philopatry provides a much higher chance of breeding success. Strict habitat requirements – whether due to a precisely adapted genome or not – mean that individuals that return to a site are more familiar with it, and may have more success in either defending it, or locating mates. [ 9 ] This hypothesis does not address whether philopatry is due to an innate behaviour in each individual, or to learning; however it has been shown that, in most species, older individuals show higher site fidelity. [ 19 ] Neither of these hypotheses is as widely accepted as the optimal-inbreeding or dispersal hypotheses, but their existence indicates that the evolutionary causes of natal philopatry have still not been conclusively demonstrated. [ citation needed ]
A major outcome of multi-generational natal philopatry is genetic divergence and, ultimately, speciation . Without genetic exchange, geographically and reproductively isolated populations may undergo genetic drift . Such speciation is most evident on islands. For mobile island-breeding animals, finding a new breeding location may be beyond their means. In combination with a small population, as may occur due to recent colonisation, or simply restricted space, genetic drift can occur on shorter timescales than is observable in mainland species. The high levels of endemism on islands have been attributed to these factors. [ 20 ]
Substantial evidence for speciation due to natal philopatry has been gathered in studies of island-nesting albatross . Genetic difference is most often detected in microsatellites in mitochondrial DNA . Animals that spend much of their time at sea, but which return to land to breed exhibit high levels of natal philopatry and subsequent genetic drift between populations. Many species of albatross do not breed until 6–16 years of age. [ 21 ] Between leaving their birth island, and their return, they fly hundreds of thousands of kilometres. High levels of natal philopatry have been demonstrated via mark-recapture data. For example, more than 99% of Laysan albatross ( Phoebastria immutabilis ) in a study returned to exactly the same nest in consecutive years. [ 22 ] Such site-specificity can lead to speciation, and has also been observed in the earliest stages of this process. The shy albatross ( Thalassarche [cauta] cauta ) was shown to have genetic differences in its microsatellites between three breeding colonies located off the coast of Tasmania. [ 23 ] The differences are not currently sufficient to propose identifying the populations as distinct species; however divergence is likely to continue without outbreeding.
Not all isolated populations will show evidence of genetic drift. [ 24 ] Genetic homogeneity can be attributed to one of two explanations, both of which indicate that natal philopatry is not absolute within a species. Firstly, a lack of divergence may be due to founder effects , which explains how individuals that start new populations carry the genes of their source population. If only a short (in evolutionary timescales) period of time has passed, insufficient divergence may have occurred. For example, a study of mitochondrial DNA microsatellites found no significant difference between colonies of black-browed albatross ( T. melanophrys ) on the Falkland Islands and Campbell Island, despite the sites being thousands of kilometres apart. [ 25 ] Observational evidence of white-capped albatross ( T. [cauta] steadi ) making attempts to build nests on a south Atlantic island, where the species had never been previously recorded, demonstrates that range extension by roaming sub-adult birds is possible. [ 26 ] Secondly, there may be sufficient gene exchange as to prevent divergence. For example, isolated (yet geographically close) populations of the Buller’s albatross ( T. bulleri bulleri ) have been shown to be genetically similar. [ 24 ] This evidence has only recently, for the first time, been supported by mark-recapture data, which showed one bird marked on one of the two breeding islands was nesting on the other island. [ citation needed ]
Due to the dispersal capabilities of albatross, distance between populations does not appear to be a determining factor in divergence. [ 24 ] Actual speciation is likely to occur very slowly, as the selective pressures on the animals are the same for the vast majority of their lives, which is spent at sea. Small mutational changes in non-nuclear DNA that become fixed in small populations are likely to be the major driver of speciation. That there is minimal structural morphological difference between the genetically distinct populations is evidence for random genetic drift, rather than directional evolution due to natural selective pressure. [ 27 ]
Speciation through natal philopatry is a self-reinforcing process. Once genetic differences are sufficient, different species may be unable to interbreed to produce viable offspring. As a result, breeding could not occur anywhere except the natal island, strengthening philopatry and ultimately leading to even greater genetic divergence. [ citation needed ]
Philopatric species that do not migrate may evolve to breed cooperatively. Kin selection , of which cooperative breeding is a form, explains how individual offspring provide care for further offspring produced by their relatives. [ 28 ] [ 29 ] Animals that are philopatric to birthsites have increased association with family members, and, in situations where inclusive fitness is increased through cooperative breeding, may evolve such behaviour, as it will confer evolutionary benefits on families that do. [ 28 ] Inclusive fitness is the sum of all direct and indirect fitness, where direct fitness is defined as the amount of fitness gained through producing offspring. Indirect fitness is defined as the amount of fitness gained through aiding related individuals' offspring. [ 30 ]
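The trade-off described in this paragraph is commonly summarised by Hamilton's rule, which is not quoted in the source but is the standard condition under which kin selection favours helping; in LaTeX form, with r the relatedness between helper and recipient, B the fitness benefit to the recipient, and C the fitness cost to the helper:

```latex
% Hamilton's rule: helping relatives is favoured when the indirect benefit outweighs the cost.
\[
  rB > C,
  \qquad
  W_{\mathrm{inclusive}} = W_{\mathrm{direct}} + W_{\mathrm{indirect}}
\]
```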
Cooperative breeding is a hierarchical social system characterized by a dominant breeding pair surrounded by subordinate helpers. The dominant breeding pair and their helpers experience costs and benefits from using this system. [ 31 ]
Costs for helpers include a fitness reduction, increased territory defense, offspring guarding and an increased cost of growth. Benefits for helpers include a reduced chance of predation, increased foraging time, territory inheritance, improved environmental conditions and gains in inclusive fitness. [ citation needed ]
For the breeding pair, costs include increased mate guarding and suppression of subordinate mating. Breeders receive benefits such as reductions in offspring care and territory maintenance. Their primary benefit is an increased reproductive rate and survival. [ citation needed ] [ 32 ]
Cooperative breeding causes the reproductive success of all sexually mature adults to be skewed towards one mating pair. This means the reproductive fitness of the group is held within a select few breeding members, and helpers have little to no reproductive fitness. [ 33 ] With this system, breeders gain increased reproductive fitness, while helpers gain increased inclusive fitness. [ 33 ]
Cooperative breeding, like speciation, can become a self-reinforcing process for a species. If the fitness benefits result in higher inclusive fitness of a family than the fitness of a non-cooperative family, the trait will eventually become fixed in the population. Over time, this may lead to the evolution of obligate cooperative breeding, as exhibited by the Australian mudnesters and Australo-Papuan babblers. Obligate cooperative breeding requires natally philopatric offspring to assist in raising offspring – breeding is unsuccessful without such help. [ 34 ]
Migrating animals also exhibit philopatry to certain important areas on their route; staging areas, stop-overs, molting areas and wintering grounds. Philopatry is generally believed to help maintain the adaptation of a population to a very specific environment (i.e., if a set of genes has evolved in a specific area, individuals that fail to return to that area may do poorly elsewhere, so natural selection will favor those who exhibit fidelity). [ citation needed ]
The level of philopatry varies within migratory families and species. [ citation needed ]
The term is sometimes also applied to animals that live in nests but do not remain in them during an unfavorable season (e.g., the winter in the temperate zone, or the dry season in the tropics), and leave to find hiding places nearby to pass the inactive period (common in various bees and wasps ); this is not migration in the usual sense, as the location of the hiding place is effectively random and unique (never located or revisited except by accident), though the navigation skills required to relocate the old nest site may be similar to those of migrating animals. [ citation needed ] | https://en.wikipedia.org/wiki/Philopatry |
The philosopher's stone [ a ] is a mythic alchemical substance capable of turning base metals such as mercury into gold or silver; [ b ] it was also known as "the tincture" and "the powder". Alchemists additionally believed that it could be used to make an elixir of life which made possible rejuvenation and immortality . [ 1 ] [ 2 ]
For many centuries, it was the most sought-after goal in alchemy . The philosopher's stone was the central symbol of the mystical terminology of alchemy, symbolizing perfection at its finest, divine illumination , and heavenly bliss. Efforts to discover the philosopher's stone were known as the Magnum Opus ("Great Work"). [ 3 ]
The earliest known written mention of the philosopher's stone is in the Cheirokmeta by Zosimos of Panopolis ( c. 300 AD ). [ 4 ] : 66 Alchemical writers assign a longer history. Elias Ashmole and the anonymous author of Gloria Mundi (1620) claim that its history goes back to Adam , who acquired the knowledge of the stone directly from God. This knowledge was said to have been passed down through biblical patriarchs, giving them their longevity. The legend of the stone was also compared to the biblical history of the Temple of Solomon and the rejected cornerstone described in Psalm 118 . [ 5 ] : 19
The theoretical roots outlining the stone's creation can be traced to Greek philosophy. Alchemists later used the classical elements , the concept of anima mundi , and Creation stories presented in texts like Plato 's Timaeus as analogies for their process. [ 6 ] : 29 According to Plato , the four elements are derived from a common source or prima materia (first matter), associated with chaos . Prima materia is also the name alchemists assign to the starting ingredient for the creation of the philosopher's stone. The importance of this philosophical first matter persisted throughout the history of alchemy. In the seventeenth century, Thomas Vaughan writes, "the first matter of the stone is the very same with the first matter of all things." [ 7 ] : 211
In the Byzantine Empire and the Arab empires , early medieval alchemists built upon the work of Zosimos. Byzantine and Muslim alchemists were fascinated by the concept of metal transmutation and attempted to carry out the process. [ 8 ] The eighth-century Muslim alchemist Jabir ibn Hayyan ( Latinized as Geber ) analysed each classical element in terms of the four basic qualities. Fire was both hot and dry, earth cold and dry, water cold and moist, and air hot and moist. He theorized that every metal was a combination of these four principles, two of them interior and two exterior. From this premise, it was reasoned that the transmutation of one metal into another could be effected by the rearrangement of its basic qualities. This change would be mediated by a substance, which came to be called xerion in Greek and al-iksir in Arabic (from which the word elixir is derived). It was often considered to exist as a dry red powder (also known as al-kibrit al-ahmar , red sulfur) made from a legendary stone—the philosopher's stone. [ 9 ] [ 10 ] The elixir powder came to be regarded as a crucial component of transmutation by later Arab alchemists. [ 8 ]
In the 11th century, there was a debate among Muslim world chemists on whether the transmutation of substances was possible. A leading opponent was the Persian polymath Avicenna (Ibn Sina), who discredited the theory of the transmutation of substances, stating, "Those of the chemical craft know well that no change can be effected in the different species of substances, though they can produce the appearance of such change." [ 11 ] : 196–197
According to legend, the 13th-century scientist and philosopher Albertus Magnus discovered the philosopher's stone. Magnus does not claim in his writings that he discovered the stone, but he did record that he witnessed the creation of gold by "transmutation". [ 12 ] : 28–30
The 16th-century Swiss alchemist Paracelsus ( Philippus Aureolus Theophrastus Bombastus von Hohenheim ) believed in the existence of alkahest , which he thought to be an undiscovered element from which all other elements (earth, fire, water, air) were simply derivative forms. Paracelsus believed that this element was, in fact, the philosopher's stone.
The English philosopher Sir Thomas Browne , in his spiritual testament Religio Medici (1643), identified the religious aspect of the quest for the philosopher's stone when he declared:
The smattering I have of the Philosophers stone, (which is something more than the perfect exaltation of gold) hath taught me a great deale of Divinity.
A mystical text published in the 17th century called the Mutus Liber appears to be a symbolic instruction manual for concocting a philosopher's stone. [ 14 ] [ 15 ] [ 16 ] Called the "wordless book", it was a collection of 15 illustrations.
The equivalent of the philosopher's stone in Buddhism and Hinduism is the Cintamani , also spelled as Chintamani . [ 17 ] : 277 [ better source needed ] It is also referred to as Paras/Parasmani ( Sanskrit : पारसमणि , Hindi : पारस ) or Paris ( Marathi : परिस ).
In Mahayana Buddhism, Chintamani is held by the bodhisattvas , Avalokiteshvara and Ksitigarbha . It is also seen carried upon the back of the Lung ta (wind horse) which is depicted on Tibetan prayer flags . By reciting the Dharani of Chintamani, Buddhist tradition maintains that one attains the Wisdom of Buddhas, is able to understand the truth of the Buddhas, and turns afflictions into Bodhi . It is said to allow one to see the Holy Retinue of Amitabha and his assembly upon one's deathbed. In Tibetan Buddhist tradition the Chintamani is sometimes depicted as a luminous pearl and is in the possession of several different forms of the Buddha. [ 18 ] : 170
Within Hinduism, it is connected with the gods Vishnu and Ganesha . In Hindu tradition it is often depicted as a fabulous jewel in the possession of the Nāga king or as on the forehead of the Makara . [ citation needed ] The Yoga Vasistha , originally written in the tenth century AD, contains a story about the philosopher's stone. [ 19 ] : 346–353
The Hindu sage Sant Jnaneshwar (1275–1296) wrote about the spiritual accomplishment of gnosis using the metaphor of the philosopher's stone: his commentary contains 17 references to the stone, which explicitly transmutes base metal into gold. [ citation needed ] The seventh-century Siddhar Thirumoolar , in his classic Tirumandhiram , explains man's path to immortal divinity. In verse 2709 he declares that the name of God, Shiva , is an alchemical vehicle that turns the body into immortal gold. [ citation needed ]
Another depiction of the philosopher's stone is the Shyāmantaka Mani ( श्यामन्तक मणि ). [ citation needed ] According to Hindu mythology, the Shyāmantaka Mani is a ruby, capable of preventing all natural calamities such as droughts, floods, etc. around its owner, as well as producing eight bhāras (≈1700 pounds or 700 kilograms) of gold, every day. [ citation needed ]
The most commonly mentioned properties are the ability to transmute base metals into gold or silver, and the ability to heal all forms of illness and prolong the life of any person who consumes a small part of the philosopher's stone diluted in wine. [ 20 ] Other mentioned properties include: creation of perpetually burning lamps, [ 20 ] transmutation of common crystals into precious stones and diamonds, [ 20 ] reviving of dead plants, [ 20 ] creation of flexible or malleable glass, [ 21 ] and the creation of a clone or homunculus . [ 22 ]
Numerous synonyms were used to make oblique reference to the stone, such as "white stone" ( calculus albus , identified with the calculus candidus of Revelation 2:17 which was taken as a symbol of the glory of heaven [ 23 ] ), vitriol (as expressed in the backronym Visita Interiora Terrae Rectificando Invenies Occultum Lapidem ), also lapis noster , lapis occultus , in water at the box , and numerous oblique, mystical or mythological references such as Adam , Aer, Animal, Alkahest, Antidotus, Antimonium , Aqua benedicta, Aqua volans per aeram, Arcanum , Atramentum, Autumnus, Basilicus, Brutorum cor, Bufo, Capillus, Capistrum auri, Carbones, Cerberus , Chaos , Cinis cineris, Crocus , Dominus philosophorum, Divine quintessence, Draco elixir, Filius ignis, Fimus, Folium, Frater, Granum, Granum frumenti, Haematites, Hepar, Herba, Herbalis, Kimia , Lac, Melancholia, Ovum philosophorum, Panacea salutifera, Pandora , Phoenix , Philosophic mercury, Pyrites, Radices arboris solares, Regina, Rex regum, Sal metallorum, Salvator terrenus, Talcum, Thesaurus, Ventus hermetis . [ 24 ] Many of the medieval allegories of Christ were adopted for the lapis , and the Christ and the Stone were indeed taken as identical in a mystical sense. The name of "Stone" or lapis itself is informed by early Christian allegory, such as Priscillian (4th century), who stated,
Unicornis est Deus, nobis petra Christus, nobis lapis angularis Jesus, nobis hominum homo Christus (One-horned is God, Christ the rock to us, Jesus the cornerstone to us, Christ the man of men to us.) [ 25 ]
In some texts, it is simply called "stone", or our stone, or in the case of Thomas Norton's Ordinal, "oure delycious stone". [ 26 ] The stone was frequently praised and referred to in such terms.
It may be noted that the Latin expression lapis philosophorum , as well as the Arabic ḥajar al-falāsifa from which the Latin derives, both employ the plural form of the word for philosopher . Thus a literal translation would be philosophers' stone rather than philosopher's stone . [ 27 ]
Descriptions of the philosopher's stone are numerous and various. [ 28 ] According to alchemical texts, the stone of the philosophers came in two varieties, prepared by an almost identical method: white (for the purpose of making silver) and red (for the purpose of making gold), the white stone being a less matured version of the red stone. [ 29 ] Some ancient and medieval alchemical texts leave clues to the physical appearance of the stone of the philosophers, specifically the red stone. It is often said to be orange (saffron coloured) or red when ground to powder, or, in solid form, an intermediate between red and purple, transparent and glass-like. [ 30 ] It is described as heavier than gold, [ 31 ] soluble in any liquid, and incombustible in fire. [ 32 ]
Alchemical authors sometimes suggest that the stone's descriptors are metaphorical. [ 33 ] The appearance is expressed geometrically in Atalanta Fugiens Emblem XXI :
Make of a man and woman a circle; then a quadrangle; out of this a triangle; make again a circle, and you will have the Stone of the Wise. Thus is made the stone, which thou canst not discover, unless you, through diligence, learn to understand this geometrical teaching.
The author further describes the metaphysical meaning of the emblem as a divine union of feminine and masculine principles: [ 34 ]
In like manner the Philosophers would have the quadrangle reduced into a triangle, that is, into body, Spirit, and Soul, which three do appear in three previous colors before redness, for example, the body or earth in the blackness of Saturn, the Spirit in a lunar whiteness, as water, the Soul or air in a solar citrinity: then will the triangle be perfect, but this likewise must be changed into a circle, that is, into an invariable redness: By which operation the woman is converted into the man, and made one with him, and the senary the first number of the perfect completed by one, two, having returned again to a unit, in which is eternal rest and peace.
Rupescissa uses the imagery of the Christian passion, saying that it ascends "from the sepulcher of the Most Excellent King, shining and glorious, resuscitated from the dead and wearing a red diadem...". [ 35 ]
The various names and attributes assigned to the philosopher's stone have led to long-standing speculation on its composition and source. Exoteric candidates have been found in metals, plants, rocks, chemical compounds, and bodily products such as hair, urine, and eggs. Justus von Liebig states that 'it was indispensable that every substance accessible... should be observed and examined'. [ 36 ] Alchemists once thought a key component in the creation of the stone was a mythical element named carmot. [ 37 ] [ 38 ]
Esoteric hermetic alchemists may reject work on exoteric substances, instead directing their search for the philosopher's stone inward. [ 39 ] Though esoteric and exoteric approaches are sometimes mixed, it is clear that some authors "are not concerned with material substances but are employing the language of exoteric alchemy for the sole purpose of expressing theological, philosophical, or mystical beliefs and aspirations". [ 40 ] New interpretations continue to be developed around spagyric , chemical, and esoteric schools of thought.
The transmutation mediated by the stone has also been interpreted as a psychological process. Idries Shah devotes a chapter of his book The Sufis to a detailed analysis of the symbolic significance of alchemical work with the philosopher's stone. His analysis is based in part on a linguistic interpretation through Arabic equivalents of one of the terms for the stone ( Azoth ), as well as for sulfur, salt, and mercury. [ 41 ]
The philosopher's stone is created by the alchemical method known as The Magnum Opus or The Great Work. Often expressed as a series of color changes or chemical processes, the instructions for creating the philosopher's stone are varied. When expressed in colours, the work may pass through phases of nigredo , albedo , citrinitas , and rubedo . When expressed as a series of chemical processes it often includes seven or twelve stages concluding in multiplication , and projection .
The philosopher's stone has been an inspiration, plot feature, or subject of innumerable artistic works: animations, comics, films, musical compositions, novels, and video games. Examples include Harry Potter and the Philosopher's Stone , As Above, So Below , Fullmetal Alchemist , The Flash and The Mystery of Mamo .
The philosopher's stone is an important motif in Gothic fiction , a use that originated in William Godwin 's 1799 novel St. Leon . [ 42 ] | https://en.wikipedia.org/wiki/Philosopher's_stone |
Understood in a narrow sense, philosophical logic is the area of logic that studies the application of logical methods to philosophical problems, often in the form of extended logical systems like modal logic . Some theorists conceive philosophical logic in a wider sense as the study of the scope and nature of logic in general. In this sense, philosophical logic can be seen as identical to the philosophy of logic , which includes additional topics like how to define logic or a discussion of the fundamental concepts of logic. The current article treats philosophical logic in the narrow sense, in which it forms one field of inquiry within the philosophy of logic.
An important issue for philosophical logic is the question of how to classify the great variety of non-classical logical systems, many of which are of rather recent origin. One form of classification often found in the literature is to distinguish between extended logics and deviant logics. Logic itself can be defined as the study of valid inference . Classical logic is the dominant form of logic and articulates rules of inference in accordance with logical intuitions shared by many, like the law of excluded middle , the double negation elimination , and the bivalence of truth.
Extended logics are logical systems that are based on classical logic and its rules of inference but extend it to new fields by introducing new logical symbols and the corresponding rules of inference governing these symbols. In the case of alethic modal logic , these new symbols are used to express not just what is true simpliciter , but also what is possibly or necessarily true . It is often combined with possible worlds semantics, which holds that a proposition is possibly true if it is true in some possible world while it is necessarily true if it is true in all possible worlds. Deontic logic pertains to ethics and provides a formal treatment of ethical notions, such as obligation and permission . Temporal logic formalizes temporal relations between propositions. This includes ideas like whether something is true at some time or all the time and whether it is true in the future or in the past. Epistemic logic belongs to epistemology . It can be used to express not just what is the case but also what someone believes or knows to be the case. Its rules of inference articulate what follows from the fact that someone has these kinds of mental states . Higher-order logics do not directly apply classical logic to certain new sub-fields within philosophy but generalize it by allowing quantification not just over individuals but also over predicates.
Deviant logics , in contrast to these forms of extended logics, reject some of the fundamental principles of classical logic and are often seen as its rivals. Intuitionistic logic is based on the idea that truth depends on verification through a proof. This leads it to reject certain rules of inference found in classical logic that are not compatible with this assumption. Free logic modifies classical logic in order to avoid existential presuppositions associated with the use of possibly empty singular terms, like names and definite descriptions. Many-valued logics allow additional truth values besides true and false . They thereby reject the principle of bivalence of truth. Paraconsistent logics are logical systems able to deal with contradictions. They do so by avoiding the principle of explosion found in classical logic. Relevance logic is a prominent form of paraconsistent logic. It rejects the purely truth-functional interpretation of the material conditional by introducing the additional requirement of relevance: for the conditional to be true, its antecedent has to be relevant to its consequent.
The term "philosophical logic" is used by different theorists in slightly different ways. [ 1 ] When understood in a narrow sense, as discussed in this article, philosophical logic is the area of philosophy that studies the application of logical methods to philosophical problems. This usually happens in the form of developing new logical systems to either extend classical logic to new areas or to modify it to include certain logical intuitions not properly addressed by classical logic. [ 2 ] [ 1 ] [ 3 ] [ 4 ] In this sense, philosophical logic studies various forms of non-classical logics, like modal logic and deontic logic. This way, various fundamental philosophical concepts, like possibility, necessity, obligation, permission, and time, are treated in a logically precise manner by formally expressing the inferential roles they play in relation to each other. [ 5 ] [ 4 ] [ 1 ] [ 3 ] Some theorists understand philosophical logic in a wider sense as the study of the scope and nature of logic in general. On this view, it investigates various philosophical problems raised by logic, including the fundamental concepts of logic. In this wider sense, it can be understood as identical to the philosophy of logic , where these topics are discussed. [ 6 ] [ 7 ] [ 8 ] [ 1 ] The current article discusses only the narrow conception of philosophical logic. In this sense, it forms one area of the philosophy of logic. [ 1 ]
Central to philosophical logic is an understanding of what logic is and what role philosophical logics play in it. Logic can be defined as the study of valid inferences. [ 4 ] [ 6 ] [ 9 ] An inference is the step of reasoning in which one moves from premises to a conclusion. [ 10 ] Often the term "argument" is used instead. An inference is valid if it is impossible for the premises to be true and the conclusion to be false. In this sense, the truth of the premises ensures the truth of the conclusion. [ 11 ] [ 10 ] [ 12 ] [ 1 ] This can be expressed in terms of rules of inference : an inference is valid if its structure, i.e. the way its premises and its conclusion are formed, follows a rule of inference. [ 4 ] Different systems of logic provide different accounts of when an inference is valid. This means that they use different rules of inference. The traditionally dominant approach to validity is called classical logic. But philosophical logic is concerned with non-classical logic: it studies alternative systems of inference. [ 2 ] [ 1 ] [ 3 ] [ 4 ] The motivations for doing so can roughly be divided into two categories. For some, classical logic is too narrow: it leaves out many philosophically interesting issues. This can be solved by extending classical logic with additional symbols to give a logically strict treatment of further areas. [ 6 ] [ 13 ] [ 14 ] Others see some flaw with classical logic itself and try to give a rival account of inference. This usually leads to the development of deviant logics, each of which modifies the fundamental principles behind classical logic in order to rectify their alleged flaws. [ 6 ] [ 13 ] [ 14 ]
Modern developments in the area of logic have resulted in a great proliferation of logical systems. [ 13 ] This stands in stark contrast to the historical dominance of Aristotelian logic , which was treated as the one canon of logic for over two thousand years. [ 1 ] Treatises on modern logic often treat these different systems as a list of separate topics without providing a clear classification of them. However, one classification frequently mentioned in the academic literature is due to Susan Haack and distinguishes between classical logic , extended logics, and deviant logics . [ 6 ] [ 13 ] [ 15 ] This classification is based on the idea that classical logic, i.e. propositional logic and first-order logic, formalizes some of the most common logical intuitions. In this sense, it constitutes a basic account of the axioms governing valid inference. [ 4 ] [ 9 ] Extended logics accept this basic account and extend it to additional areas. This usually happens by adding new vocabulary, for example, to express necessity, obligation, or time. [ 13 ] [ 1 ] [ 4 ] [ 9 ] These new symbols are then integrated into the logical mechanism by specifying which new rules of inference apply to them, like that possibility follows from necessity. [ 15 ] [ 13 ] Deviant logics, on the other hand, reject some of the basic assumptions of classical logic. In this sense, they are not mere extensions of it but are often formulated as rival systems that offer a different account of the laws of logic. [ 13 ] [ 15 ]
Expressed in a more technical language, the distinction between extended and deviant logics is sometimes drawn in a slightly different manner. On this view, a logic is an extension of classical logic if two conditions are fulfilled: (1) all well-formed formulas of classical logic are also well-formed formulas in it and (2) all valid inferences in classical logic are also valid inferences in it. [ 13 ] [ 15 ] [ 16 ] For a deviant logic, on the other hand, (a) its class of well-formed formulas coincides with that of classical logic, while (b) some valid inferences in classical logic are not valid inferences in it. [ 13 ] [ 15 ] [ 17 ] The term quasi-deviant logic is used if (i) it introduces new vocabulary but all well-formed formulas of classical logic are also well-formed formulas in it and (ii) even when it is restricted to inferences using only the vocabulary of classical logic, some valid inferences in classical logic are not valid inferences in it. [ 13 ] [ 15 ] The term "deviant logic" is often used in a sense that includes quasi-deviant logics as well. [ 13 ]
A philosophical problem raised by this plurality of logics concerns the question of whether there can be more than one true logic. [ 13 ] [ 1 ] Some theorists favor a local approach in which different types of logic are applied to different areas. Early intuitionists, for example, saw intuitionistic logic as the correct logic for mathematics but allowed classical logic in other fields. [ 13 ] [ 18 ] But others, like Michael Dummett , prefer a global approach by holding that intuitionistic logic should replace classical logic in every area. [ 13 ] [ 18 ] Monism is the thesis that there is only one true logic. [ 6 ] This can be understood in different ways, for example, that only one of all the suggested logical systems is correct or that the correct logical system is yet to be found as a system underlying and unifying all the different logics. [ 1 ] Pluralists, on the other hand, hold that a variety of different logical systems can all be correct at the same time. [ 19 ] [ 6 ] [ 1 ]
A closely related problem concerns the question of whether all of these formal systems actually constitute logical systems. [ 1 ] [ 4 ] This is especially relevant for deviant logics that stray very far from the common logical intuitions associated with classical logic. In this sense, it has been argued, for example, that fuzzy logic is a logic only in name but should be considered a non-logical formal system instead since the idea of degrees of truth is too far removed from the most fundamental logical intuitions. [ 13 ] [ 20 ] [ 4 ] So not everyone agrees that all the formal systems discussed in this article actually constitute logics , when understood in a strict sense.
Classical logic is the dominant form of logic used in most fields. [ 21 ] The term refers primarily to propositional logic and first-order logic . [ 6 ] Classical logic is not an independent topic within philosophical logic. But a good familiarity with it is still required since many of the logical systems of direct concern to philosophical logic can be understood either as extensions of classical logic, which accept its fundamental principles and build on top of it, or as modifications of it, rejecting some of its core assumptions. [ 5 ] [ 14 ] Classical logic was initially created in order to analyze mathematical arguments and was applied to various other fields only afterward. [ 5 ] For this reason, it neglects many topics of philosophical importance not relevant to mathematics, like the difference between necessity and possibility, between obligation and permission, or between past, present, and future. [ 5 ] These and similar topics are given a logical treatment in the different philosophical logics extending classical logic. [ 14 ] [ 1 ] [ 3 ] Classical logic by itself is only concerned with a few basic concepts and the role these concepts play in making valid inferences. [ 22 ] The concepts pertaining to propositional logic include propositional connectives, like "and", "or", and "if-then". [ 4 ] Characteristic of the classical approach to these connectives is that they follow certain laws, like the law of excluded middle , the double negation elimination , the principle of explosion , and the bivalence of truth. [ 21 ] This sets classical logic apart from various deviant logics, which deny one or several of these principles. [ 13 ] [ 5 ]
In first-order logic , the propositions themselves are made up of subpropositional parts, like predicates , singular terms , and quantifiers . [ 8 ] [ 23 ] Singular terms refer to objects and predicates express properties of objects and relations between them. [ 8 ] [ 24 ] Quantifiers constitute a formal treatment of notions like "for some" and "for all". They can be used to express whether predicates have an extension at all or whether their extension includes the whole domain. [ 25 ] Quantification is only allowed over individual terms but not over predicates, in contrast to higher-order logics. [ 26 ] [ 4 ]
Alethic modal logic has been very influential in logic and philosophy. It provides a logical formalism to express what is possibly or necessarily true . [ 12 ] [ 9 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 14 ] It constitutes an extension of first-order logic, which by itself is only able to express what is true simpliciter . This extension happens by introducing two new symbols: " ◊ {\displaystyle \Diamond } " for possibility and " ◻ {\displaystyle \Box } " for necessity. These symbols are used to modify propositions. For example, if " W ( s ) {\displaystyle W(s)} " stands for the proposition "Socrates is wise", then " ◊ W ( s ) {\displaystyle \Diamond W(s)} " expresses the proposition "it is possible that Socrates is wise". In order to integrate these symbols into the logical formalism, various axioms are added to the existing axioms of first-order logic. [ 27 ] [ 28 ] [ 30 ] They govern the logical behavior of these symbols by determining how the validity of an inference depends on the fact that these symbols are found in it. They usually include the idea that if a proposition is necessary then its negation is impossible, i.e. that " ◻ A {\displaystyle \Box A} " is equivalent to " ¬ ◊ ¬ A {\displaystyle \lnot \Diamond \lnot A} " . Another such principle is that if something is necessary, then it must also be possible. This means that " ◊ A {\displaystyle \Diamond A} " follows from " ◻ A {\displaystyle \Box A} " . [ 27 ] [ 28 ] [ 30 ] There is disagreement about exactly which axioms govern modal logic. The different forms of modal logic are often presented as a nested hierarchy of systems in which the most fundamental systems, like system K , include only the most fundamental axioms while other systems, like the popular system S5 , build on top of it by including additional axioms. [ 27 ] [ 28 ] [ 30 ] In this sense, system K is an extension of first-order logic while system S5 is an extension of system K. Important discussions within philosophical logic concern the question of which system of modal logic is correct. [ 27 ] [ 28 ] [ 30 ] It is usually advantageous to have the strongest system possible in order to be able to draw many different inferences. But this brings with it the problem that some of these additional inferences may contradict basic modal intuitions in specific cases. This usually motivates the choice of a more basic system of axioms. [ 27 ] [ 28 ] [ 30 ]
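For concreteness, the nested hierarchy of systems mentioned above can be sketched with a few standard axiom schemas. The grouping and labels below are one common textbook presentation, given here only as an illustration; individual authors divide and name the axioms differently.

```latex
% One common presentation of modal axiom schemas (labels vary by author):
\begin{align*}
\text{Duality:} &\quad \Box A \leftrightarrow \lnot\Diamond\lnot A \\
\text{K:}       &\quad \Box (A \to B) \to (\Box A \to \Box B) \\
\text{T:}       &\quad \Box A \to A \\
\text{4:}       &\quad \Box A \to \Box \Box A \\
\text{5:}       &\quad \Diamond A \to \Box \Diamond A
\end{align*}
% System K uses only the K schema (plus the necessitation rule);
% adding T yields system T, adding 4 as well yields S4, and adding 5 yields S5.
```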
Possible worlds semantics is a very influential formal semantics in modal logic that brings with it system S5. [ 27 ] [ 28 ] [ 30 ] A formal semantics of a language characterizes the conditions under which the sentences of this language are true or false. Formal semantics play a central role in the model-theoretic conception of validity . [ 4 ] [ 10 ] They are able to provide clear criteria for when an inference is valid or not: an inference is valid if and only if it is truth-preserving, i.e. if whenever its premises are true then its conclusion is also true. [ 9 ] [ 10 ] [ 31 ] Whether they are true or false is specified by the formal semantics. Possible worlds semantics specifies the truth conditions of sentences expressed in modal logic in terms of possible worlds. [ 27 ] [ 28 ] [ 30 ] A possible world is a complete and consistent way things could have been. [ 32 ] [ 33 ] On this view, a sentence modified by the ◊ {\displaystyle \Diamond } -operator is true if it is true in at least one possible world while a sentence modified by the ◻ {\displaystyle \Box } -operator is true if it is true in all possible worlds. [ 27 ] [ 28 ] [ 30 ] So the sentence " ◊ W ( s ) {\displaystyle \Diamond W(s)} " (it is possible that Socrates is wise) is true since there is at least one world where Socrates is wise. But " ◻ W ( s ) {\displaystyle \Box W(s)} " (it is necessary that Socrates is wise) is false since Socrates is not wise in every possible world. Possible worlds semantics has been criticized as a formal semantics of modal logic since it seems to be circular. [ 8 ] The reason for this is that possible worlds are themselves defined in modal terms, i.e. as ways things could have been. In this way, it itself uses modal expressions to determine the truth of sentences containing modal expressions. [ 8 ]
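The possible-worlds truth conditions for the two operators can be illustrated with a minimal sketch. The code below assumes the simplest case, in which every world is accessible from every other (as in S5); the world names, the atomic sentence, and its valuation are invented for the example and are not part of any particular formal semantics.

```python
# Minimal sketch of possible-worlds evaluation for an atomic sentence,
# assuming universal accessibility between worlds (the S5 case).
# World names and valuations are illustrative only.

worlds = {
    "w1": {"socrates_is_wise": True},
    "w2": {"socrates_is_wise": False},
    "w3": {"socrates_is_wise": True},
}

def possibly(atom, worlds):
    # Diamond: true if the atom holds in at least one possible world.
    return any(valuation[atom] for valuation in worlds.values())

def necessarily(atom, worlds):
    # Box: true if the atom holds in every possible world.
    return all(valuation[atom] for valuation in worlds.values())

print(possibly("socrates_is_wise", worlds))     # True: possible that Socrates is wise
print(necessarily("socrates_is_wise", worlds))  # False: not necessary
```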
Deontic logic extends classical logic to the field of ethics . [ 34 ] [ 14 ] [ 35 ] Of central importance in ethics are the concepts of obligation and permission , i.e. which actions the agent has to do or is allowed to do. Deontic logic usually expresses these ideas with the operators O {\displaystyle O} and P {\displaystyle P} . [ 34 ] [ 14 ] [ 35 ] [ 27 ] So if " J ( r ) {\displaystyle J(r)} " stands for the proposition "Ramirez goes jogging", then " O J ( r ) {\displaystyle OJ(r)} " means that Ramirez has the obligation to go jogging and " P J ( r ) {\displaystyle PJ(r)} " means that Ramirez has the permission to go jogging.
Deontic logic is closely related to alethic modal logic in that the axioms governing the logical behavior of their operators are identical. This means that obligation and permission behave in regards to valid inference just like necessity and possibility do. [ 34 ] [ 14 ] [ 35 ] [ 27 ] For this reason, sometimes even the same symbols are used as operators. [ 36 ] Just as in alethic modal logic, there is a discussion in philosophical logic concerning which is the right system of axioms for expressing the common intuitions governing deontic inferences. [ 34 ] [ 14 ] [ 35 ] But the arguments and counterexamples here are slightly different since the meanings of these operators differ. For example, a common intuition in ethics is that if the agent has the obligation to do something then they automatically also have the permission to do it. This can be expressed formally through the axiom schema " O A → P A {\displaystyle OA\to PA} " . [ 34 ] [ 14 ] [ 35 ] Another question of interest to philosophical logic concerns the relation between alethic modal logic and deontic logic. An often discussed principle in this respect is that ought implies can . This means that the agent can only have the obligation to do something if it is possible for the agent to do it. [ 37 ] [ 38 ] Expressed formally: " O A → ◊ A {\displaystyle OA\to \Diamond A} " . [ 34 ]
Temporal logic , or tense logic, uses logical mechanisms to express temporal relations. [ 39 ] [ 14 ] [ 35 ] [ 40 ] In its simplest form, it contains one operator to express that something happened at one time and another to express that something is happening all the time. These two operators behave in the same way as the operators for possibility and necessity in alethic modal logic. Since the difference between past and future is of central importance to human affairs, these operators are often modified to take this difference into account. Arthur Prior 's tense logic, for example, realizes this idea using four such operators: P {\displaystyle P} (it was the case that...), F {\displaystyle F} (it will be the case that...), H {\displaystyle H} (it has always been the case that...), and G {\displaystyle G} (it will always be the case that...). [ 39 ] [ 14 ] [ 35 ] [ 40 ] So to express that it will always be rainy in London one could use " G ( R a i n y ( l o n d o n ) ) {\displaystyle G(Rainy(london))} " . Various axioms are used to govern which inferences are valid depending on the operators appearing in them. According to them, for example, one can deduce " F ( R a i n y ( l o n d o n ) ) {\displaystyle F(Rainy(london))} " (it will be rainy in London at some time) from " G ( R a i n y ( l o n d o n ) ) {\displaystyle G(Rainy(london))} " . In more complicated forms of temporal logic, binary operators linking two propositions are also defined, for example to express that something happens until something else happens. [ 39 ]
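How the four operators work can be pictured with a small sketch over a finite, discrete timeline. The timeline, the atomic sentence, and the choice of "now" below are assumptions made for the illustration; Prior's own systems are not restricted to discrete or finite time.

```python
# Sketch of Prior-style tense operators over a finite, discrete timeline.
# The timeline, the atom "rainy", and the index of "now" are illustrative.

timeline = [
    {"rainy": False},  # t = 0
    {"rainy": True},   # t = 1
    {"rainy": False},  # t = 2  <- treated as "now"
    {"rainy": True},   # t = 3
]
now = 2

def P(atom):  # it was the case that ...
    return any(state[atom] for state in timeline[:now])

def F(atom):  # it will be the case that ...
    return any(state[atom] for state in timeline[now + 1:])

def H(atom):  # it has always been the case that ...
    return all(state[atom] for state in timeline[:now])

def G(atom):  # it will always be the case that ...
    return all(state[atom] for state in timeline[now + 1:])

print(P("rainy"), F("rainy"), H("rainy"), G("rainy"))  # True True False True
```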
Temporal modal logic can be translated into classical first-order logic by treating time in the form of a singular term and increasing the arity of one's predicates by one. [ 40 ] For example, the tense-logic-sentence " d a r k ∧ P ( l i g h t ) ∧ F ( l i g h t ) {\displaystyle dark\land P(light)\land F(light)} " (it is dark, it was light, and it will be light again) can be translated into pure first-order logic as " d a r k ( t 1 ) ∧ ∃ t 0 ( t 0 < t 1 ∧ l i g h t ( t 0 ) ) ∧ ∃ t 2 ( t 1 < t 2 ∧ l i g h t ( t 2 ) ) {\displaystyle dark(t_{1})\land \exists t_{0}(t_{0}<t_{1}\land light(t_{0}))\land \exists t_{2}(t_{1}<t_{2}\land light(t_{2}))} " . [ 41 ] While similar approaches are often seen in physics, logicians usually prefer an autonomous treatment of time in terms of operators. This is also closer to natural languages, which mostly use grammar, e.g. by conjugating verbs, to express the pastness or futurity of events. [ 40 ]
Epistemic logic is a form of modal logic applied to the field of epistemology . [ 42 ] [ 43 ] [ 35 ] [ 9 ] It aims to capture the logic of knowledge and belief . The modal operators expressing knowledge and belief are usually expressed through the symbols " K {\displaystyle K} " and " B {\displaystyle B} " . So if " W ( s ) {\displaystyle W(s)} " stands for the proposition "Socrates is wise", then " K W ( s ) {\displaystyle KW(s)} " expresses the proposition "the agent knows that Socrates is wise" and " B W ( s ) {\displaystyle BW(s)} " expresses the proposition "the agent believes that Socrates is wise". Axioms governing these operators are then formulated to express various epistemic principles. [ 35 ] [ 42 ] [ 43 ] For example, the axiom schema " K A → A {\displaystyle KA\to A} " expresses that whenever something is known, then it is true. This reflects the idea that one can only know what is true, otherwise it is not knowledge but another mental state. [ 35 ] [ 42 ] [ 43 ] Another epistemic intuition about knowledge concerns the fact that when the agent knows something, they also know that they know it. This can be expressed by the axiom schema " K A → K K A {\displaystyle KA\to KKA} " . [ 35 ] [ 42 ] [ 43 ] An additional principle linking knowledge and belief states that knowledge implies belief, i.e. " K A → B A {\displaystyle KA\to BA} " . Dynamic epistemic logic is a distinct form of epistemic logic that focuses on situations in which changes in belief and knowledge happen. [ 44 ]
Higher-order logics extend first-order logic by including new forms of quantification . [ 12 ] [ 26 ] [ 45 ] [ 46 ] In first-order logic, quantification is restricted to singular terms. It can be used to talk about whether a predicate has an extension at all or whether its extension includes the whole domain. This way, propositions like " ∃ x ( A p p l e ( x ) ∧ S w e e t ( x ) ) {\displaystyle \exists x(Apple(x)\land Sweet(x))} " ( there are some apples that are sweet) can be expressed. In higher-order logics, quantification is allowed not just over individual terms but also over predicates. This way, it is possible to express, for example, whether certain individuals share some or all of their predicates, as in " ∃ Q ( Q ( m a r y ) ∧ Q ( j o h n ) ) {\displaystyle \exists Q(Q(mary)\land Q(john))} " ( there are some qualities that Mary and John share). [ 12 ] [ 26 ] [ 45 ] [ 46 ] Because of these changes, higher-order logics have more expressive power than first-order logic. This can be helpful for mathematics in various ways since different mathematical theories have a much simpler expression in higher-order logic than in first-order logic. [ 12 ] For example, Peano arithmetic and Zermelo-Fraenkel set theory need an infinite number of axioms to be expressed in first-order logic. But they can be expressed in second-order logic with only a few axioms. [ 12 ]
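The gain in expressive power can be made concrete with the standard example of induction, stated here informally as an illustration. First-order Peano arithmetic needs an induction schema with one axiom for every formula, whereas second-order arithmetic can quantify over properties directly and state induction as a single axiom:

```latex
% First-order induction schema: one axiom for every formula \varphi
\bigl(\varphi(0) \land \forall n\,(\varphi(n) \to \varphi(n+1))\bigr) \to \forall n\,\varphi(n)

% Single second-order induction axiom: quantification over the property P itself
\forall P\,\bigl[\bigl(P(0) \land \forall n\,(P(n) \to P(n+1))\bigr) \to \forall n\,P(n)\bigr]
```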
But despite this advantage, first-order logic is still much more widely used than higher-order logic. One reason for this is that higher-order logic is incomplete . [ 12 ] This means that, for theories formulated in higher-order logic, it is not possible to prove every true sentence pertaining to the theory in question. [ 4 ] Another disadvantage is connected to the additional ontological commitments of higher-order logics. It is often held that the usage of the existential quantifier brings with it an ontological commitment to the entities over which this quantifier ranges. [ 9 ] [ 47 ] [ 48 ] [ 49 ] In first-order logic, this concerns only individuals, which is usually seen as an unproblematic ontological commitment. In higher-order logic, quantification concerns also properties and relations. [ 9 ] [ 26 ] [ 6 ] This is often interpreted as meaning that higher-order logic brings with it a form of Platonism , i.e. the view that universal properties and relations exist in addition to individuals. [ 12 ] [ 45 ]
Intuitionistic logic is a more restricted version of classical logic. [ 18 ] [ 50 ] [ 14 ] It is more restricted in the sense that certain rules of inference used in classical logic do not constitute valid inferences in it. This concerns specifically the law of excluded middle and the double negation elimination . [ 18 ] [ 50 ] [ 14 ] The law of excluded middle states that for every sentence, either it or its negation is true. Expressed formally: A ∨ ¬ A {\displaystyle A\lor \lnot A} . The law of double negation elimination states that if a sentence is not not true, then it is true, i.e. " ¬ ¬ A → A {\displaystyle \lnot \lnot A\to A} " . [ 18 ] [ 14 ] Due to these restrictions, many proofs are more complicated and some proofs otherwise accepted become impossible. [ 50 ]
These modifications of classical logic are motivated by the idea that truth depends on verification through a proof . This has been interpreted in the sense that "true" means "verifiable". [ 50 ] [ 14 ] It was originally only applied to the area of mathematics but has since then been used in other areas as well. [ 18 ] On this interpretation, the law of excluded middle would involve the assumption that every mathematical problem has a solution in the form of a proof. In this sense, the intuitionistic rejection of the law of excluded middle is motivated by the rejection of this assumption. [ 18 ] [ 14 ] This position can also be expressed by stating that there are no unexperienced or verification-transcendent truths. [ 50 ] In this sense, intuitionistic logic is motivated by a form of metaphysical idealism. Applied to mathematics, it states that mathematical objects exist only to the extent that they are constructed in the mind. [ 50 ]
Free logic rejects some of the existential presuppositions found in classical logic. [ 51 ] [ 52 ] [ 53 ] In classical logic, every singular term has to denote an object in the domain of quantification. [ 51 ] This is usually understood as an ontological commitment to the existence of the named entity. But many names are used in everyday discourse that do not refer to existing entities, like "Santa Claus" or "Pegasus". This threatens to preclude such areas of discourse from a strict logical treatment. Free logic avoids these problems by allowing formulas with non-denoting singular terms. [ 52 ] This applies to proper names as well as definite descriptions and functional expressions. [ 51 ] [ 53 ] Quantifiers, on the other hand, are treated in the usual way as ranging over the domain. This allows for expressions like " ¬ ∃ x ( x = s a n t a ) {\displaystyle \lnot \exists x(x=santa)} " (Santa Claus does not exist) to be true even though it is self-contradictory in classical logic. [ 51 ] It also brings with it the consequence that certain valid forms of inference found in classical logic are not valid in free logic. For example, one may infer from " B e a r d ( s a n t a ) {\displaystyle Beard(santa)} " (Santa Claus has a beard) that " ∃ x ( B e a r d ( x ) ) {\displaystyle \exists x(Beard(x))} " (something has a beard) in classical logic but not in free logic. [ 51 ] In free logic, an existence-predicate is often used to indicate whether a singular term denotes an object in the domain or not. But the usage of existence-predicates is controversial. They are often opposed, based on the idea that existence is required if any predicates should apply to the object at all. In this sense, existence cannot itself be a predicate. [ 9 ] [ 54 ] [ 55 ]
Karel Lambert , who coined the term "free logic", has suggested that free logic can be understood as a generalization of classical predicate logic just as predicate logic is a generalization of Aristotelian logic. On this view, classical predicate logic introduces predicates with an empty extension while free logic introduces singular terms of non-existing things. [ 51 ]
An important problem for free logic consists in how to determine the truth value of expressions containing empty singular terms, i.e. of formulating a formal semantics for free logic. [ 56 ] Formal semantics of classical logic can define the truth of their expressions in terms of their denotation. But this option cannot be applied to all expressions in free logic since not all of them have a denotation. [ 56 ] Three general approaches to this issue are often discussed in the literature: negative semantics , positive semantics , and neutral semantics . [ 53 ] Negative semantics hold that all atomic formulas containing empty terms are false. On this view, the expression " B e a r d ( s a n t a ) {\displaystyle Beard(santa)} " is false. [ 56 ] [ 53 ] Positive semantics allows that at least some expressions with empty terms are true. This usually includes identity statements, like " s a n t a = s a n t a {\displaystyle santa=santa} " . Some versions introduce a second, outer domain for non-existing objects, which is then used to determine the corresponding truth values. [ 56 ] [ 53 ] Neutral semantics , on the other hand, hold that atomic formulas containing empty terms are neither true nor false. [ 56 ] [ 53 ] This is often understood as a three-valued logic , i.e. that a third truth value besides true and false is introduced for these cases. [ 57 ]
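The difference between these options can be illustrated with a toy model of the negative semantics. The domain, the denotation map, and the predicate below are invented for the example; positive and neutral semantics would assign the problematic atomic sentence a different value (true, or neither true nor false) rather than false.

```python
# Toy sketch of a negative semantics for free logic:
# an atomic predication whose singular term fails to denote is simply false.
# The domain, denotations, and predicate extension are illustrative only.

domain = {"obama"}                                 # the existing objects
denotation = {"obama": "obama", "santa": None}     # "santa" is an empty term
bearded = {"obama"}                                # extension of Beard(...)

def beard(term):
    referent = denotation.get(term)
    if referent is None or referent not in domain:
        return False        # negative semantics: empty term -> false atom
    return referent in bearded

print(beard("obama"))                      # True
print(beard("santa"))                      # False, since "santa" does not denote
# Quantifiers still range only over the domain, so "something has a beard"
# is checked against existing objects:
print(any(x in bearded for x in domain))   # True
```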
Many-valued logics are logics that allow for more than two truth values. [ 58 ] [ 14 ] [ 59 ] They reject one of the core assumptions of classical logic: the principle of the bivalence of truth. The simplest versions of many-valued logics are three-valued logics: they contain a third truth value. In Stephen Cole Kleene 's three-valued logic, for example, this third truth value is "undefined". [ 58 ] [ 59 ] According to Nuel Belnap 's four-valued logic, there are four possible truth values: "true", "false", "neither true nor false", and "both true and false". This can be interpreted, for example, as indicating the information one has concerning whether a state obtains: information that it does obtain, information that it does not obtain, no information, and conflicting information. [ 58 ] One of the most extreme forms of many-valued logic is fuzzy logic. It allows truth to come in any degree between 0 and 1. [ 60 ] [ 58 ] [ 14 ] 0 corresponds to completely false, 1 corresponds to completely true, and the values in between correspond to truth in some degree, e.g. as a little true or very true. [ 60 ] [ 58 ] It is often used to deal with vague expressions in natural language. For example, saying that "Petr is young" fits better (i.e. is "more true") if "Petr" refers to a three-year-old than if it refers to a 23-year-old. [ 60 ] Many-valued logics with a finite number of truth-values can define their logical connectives using truth tables, just like classical logic. The difference is that these truth tables are more complex since more possible inputs and outputs have to be considered. [ 58 ] [ 59 ] In Kleene's three-valued logic, for example, the inputs "true" and "undefined" for the conjunction-operator " ∧ {\displaystyle \land } " result in the output "undefined". The inputs "false" and "undefined", on the other hand, result in "false". [ 61 ] [ 59 ]
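The strong Kleene tables mentioned above can be sketched compactly by ordering the three values and taking conjunction as the minimum and disjunction as the maximum. The string encoding of the truth values is an assumption made for the example.

```python
# Sketch of strong Kleene three-valued connectives.
# Values are ordered F < U < T; conjunction is the minimum, disjunction the
# maximum, and negation swaps T and F while leaving U ("undefined") fixed.

ORDER = {"F": 0, "U": 1, "T": 2}

def conj(a, b):
    return min(a, b, key=ORDER.get)

def disj(a, b):
    return max(a, b, key=ORDER.get)

def neg(a):
    return {"T": "F", "F": "T", "U": "U"}[a]

print(conj("T", "U"))  # U  -- "true and undefined" comes out undefined
print(conj("F", "U"))  # F  -- "false and undefined" comes out false
print(neg("U"))        # U
```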
Paraconsistent logics are logical systems that can deal with contradictions without leading to all-out absurdity. [ 62 ] [ 14 ] [ 63 ] They achieve this by avoiding the principle of explosion found in classical logic. According to the principle of explosion, anything follows from a contradiction. This is the case because of two rules of inference, which are valid in classical logic: disjunction introduction and disjunctive syllogism . [ 62 ] [ 14 ] [ 63 ] According to the disjunction introduction, any proposition can be introduced in the form of a disjunction when paired with a true proposition. [ 64 ] So since it is true that "the sun is bigger than the moon", it is possible to infer that "the sun is bigger than the moon or Spain is controlled by space-rabbits". According to the disjunctive syllogism , one can infer that one of these disjuncts is true if the other is false. [ 64 ] So if the logical system also contains the negation of this proposition, i.e. that "the sun is not bigger than the moon", then it is possible to infer any proposition from this system, like the proposition that "Spain is controlled by space-rabbits". Paraconsistent logics avoid this by using different rules of inference that make inferences in accordance with the principle of explosion invalid. [ 62 ] [ 14 ] [ 63 ]
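The informal space-rabbits argument above can be compressed into a short derivation, valid in classical logic; a paraconsistent logic blocks the result by rejecting one of the two steps, most commonly disjunctive syllogism. The propositional letters are placeholders for the example.

```latex
% Explosion derived from a contradiction in classical logic:
\begin{align*}
1.\ & p        && \text{premise (e.g. the sun is bigger than the moon)} \\
2.\ & \lnot p  && \text{premise (its negation)} \\
3.\ & p \lor q && \text{disjunction introduction from 1} \\
4.\ & q        && \text{disjunctive syllogism from 2 and 3}
\end{align*}
```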
An important motivation for using paraconsistent logics is dialetheism, i.e. the belief that contradictions are not just introduced into theories due to mistakes but that reality itself is contradictory and contradictions within theories are needed to accurately reflect reality. [ 63 ] [ 65 ] [ 62 ] [ 66 ] Without paraconsistent logics, dialetheism would be hopeless since everything would be both true and false. [ 66 ] Paraconsistent logics make it possible to keep contradictions local, without exploding the whole system. [ 14 ] But even with this adjustment, dialetheism is still highly contested. [ 63 ] [ 66 ] Another motivation for paraconsistent logic is to provide a logic for discussions and group beliefs where the group as a whole may have inconsistent beliefs if its different members are in disagreement. [ 63 ]
Relevance logic is one type of paraconsistent logic. As such, it also avoids the principle of explosion even though this is usually not the main motivation behind relevance logic. Instead, it is usually formulated with the goal of avoiding certain unintuitive applications of the material conditional found in classical logic. [ 67 ] [ 14 ] [ 68 ] Classical logic defines the material conditional in purely truth-functional terms, i.e. " p → q {\displaystyle p\to q} " is false if " p {\displaystyle p} " is true and " q {\displaystyle q} " is false, but otherwise true in every case. According to this formal definition, it does not matter whether " p {\displaystyle p} " and " q {\displaystyle q} " are relevant to each other in any way. [ 67 ] [ 14 ] [ 68 ] For example, the material conditional "if all lemons are red then there is a sandstorm inside the Sydney Opera House" is true even though the two propositions are not relevant to each other.
The fact that this usage of material conditionals is highly unintuitive is also reflected in informal logic , which categorizes such inferences as fallacies of relevance . Relevance logic tries to avoid these cases by requiring that for a true material conditional, its antecedent has to be relevant to the consequent. [ 67 ] [ 14 ] [ 68 ] A difficulty faced for this issue is that relevance usually belongs to the content of the propositions while logic only deals with formal aspects. This problem is partially addressed by the so-called variable sharing principle . It states that antecedent and consequent have to share a propositional variable. [ 67 ] [ 68 ] [ 14 ] This would be the case, for example, in " ( p ∧ q ) → q {\displaystyle (p\land q)\to q} " but not in " ( p ∧ q ) → r {\displaystyle (p\land q)\to r} " . A closely related concern of relevance logic is that inferences should follow the same requirement of relevance, i.e. that it is a necessary requirement of valid inferences that their premises are relevant to their conclusion. [ 67 ] | https://en.wikipedia.org/wiki/Philosophical_logic |
Presentism (sometimes 'philosophical presentism') is the view of time which states that only present entities exist (or, equivalently, that everything which is exists presently) and what is present (i.e., what exists) changes as time passes. [ 1 ] According to presentism, there are no past or future entities at all, though some entities have existed and other entities will exist. In a sense, the past and the future do not exist for presentists—past events have happened (have existed, or have been present) and future events will happen (will exist, or will be present), but neither exist at all since they do not exist now. Presentism is a view about temporal ontology, i.e., a view about what exists in time, that contrasts with eternalism —the view that past, present and future entities exist (that is, the ontological thesis of the 'block universe')—and with no-futurism —the view that only past and present entities exist (that is, the ontological thesis of the ' growing block universe '). [ 2 ]
Augustine of Hippo proposed that the present is analogous to a knife edge placed exactly between the perceived past and the imaginary future, and that it is itself unextended in time. Proponents claim this should be self-evident because, if the present were extended, it would have to have separate parts; but these parts must be simultaneous if they are truly part of the present. According to these early philosophers, a time cannot be simultaneously past and present, and hence the present is not extended. Contrary to Saint Augustine, some philosophers propose that conscious experience is extended in time. For instance, William James said that time is "the short duration of which we are immediately and incessantly sensible". [ 3 ] Early presentist views are also found in the Indian Buddhist tradition [ clarification needed ] [ citation needed ] . Fyodor Shcherbatskoy , a leading modern scholar of Buddhist philosophy , has written extensively on Buddhist presentism: "Everything past is unreal, everything future is unreal, everything imagined, absent, mental... is unreal. Ultimately, real is only the present moment of physical efficiency [i.e., causation ]." [ 4 ]
According to J. M. E. McTaggart 's " The Unreality of Time ", there are two ways of referring to events: the 'A Series' (or 'tensed time': yesterday , today , tomorrow ) and the 'B Series' (or 'untensed time': Monday, Tuesday, Wednesday). Presentism posits that the A Series is fundamental and that the B Series alone is not sufficient. Presentists maintain that temporal discourse requires the use of tenses, whereas the "Old B-Theorists" argued that tensed language could be reduced to tenseless facts (Dyke, 2004).
Arthur N. Prior has argued against un-tensed theories with the following ideas: the meaning of statements such as "Thank goodness that's over" is much easier to see in a tensed theory with a distinguished, present now . [ 5 ] Similar arguments can be made to support the theory of egocentric presentism (or perspectival realism ), which holds that there is a distinguished, present self . Vincent Conitzer has made a similar argument connecting A-theory with the vertiginous question . According to Conitzer, arguments in favor of A-theory are more effective as arguments for the combined position of both A-theory being true and the "I" being metaphysically privileged from other perspectives. [ 6 ]
One main objection to presentism comes from the idea that what is true substantively depends upon what exists (or, that truth depends or ' supervenes ' upon being). According to this critique, presentism is said to be in conflict with truth-maker theory . Truth-maker theory looks to capture the dependence of truth upon being with the idea that truths (e.g., true propositions) are true in virtue of the existence of some entity or entities ('truth-makers'). The conflict arises because most presentists accept that there are evidence-transcendent and objective truths about the past (and some accept that there are truths about the future, pace concerns about fatalism ), but presentists deny the existence of the past and the future. For instance, most presentists accept that it is true that Marie Curie discovered polonium , but they deny that the event of her discovery exists (because it is a wholly past event). Since the mid-1990s, truth-maker theorists have been accusing presentists of violating their principle (that truths require truth-makers) and of ontologically 'cheating'. To answer the truth-maker theorists' objection, presentists can argue that there are truth-makers for the past, but that these exist either presently or outside of time. For the second option, some presentists posit the existence of " atemporal " objects which function as truth-makers, though some justification would be needed for how something outside of time would not conflict with the proposition that only present entities exist. [ 7 ]
Presentists can as well reject that propositions about the past are made true by truth-makers. However, this leaves unclear what exactly makes truths about the past true. As a result, few philosophers support this method of resolving the objection. [ 8 ]
Presentists who make the claim that there are “atemporal” entities (atemporal in a similar sense as numbers) which are truth-making endorse a view called “ersatz presentism.” Ersatz ( German for "substitute"/"alternative") presentists believe that propositions about the past, like “Churchill existed,” are made true by a theoretical time which is a representation of how things were (i.e. ersatz rather than concrete). [ 9 ]
Ersatz times are, in a sense, akin to ersatz possible worlds. Alyssa Ney describes ersatz modal realism as positing “that there are possible worlds (worlds that can play a similar role to the concrete worlds of the modal realist), but that these are not additional universes in the same sense as our universe.” [ 10 ] In a similar way, ersatz times would exist, but not in the same sense that actual time currently exists. Rather, they would be theoretical times which represent the moment at which the proposition was true. Ersatz presentists must, though, postulate an ordering relationship between ersatz times which is equivalent to an earlier/later-than relation (such that, for example, one ersatz time is later than another). [ 11 ]
For example, when ersatz presentists claim “ Churchill existed,” such a proposition is true only if a) there is an ersatz time (t2) which represents the present time and b) there is an ersatz time (t1) which represents a prior time when Churchill existed. For the ersatz presentist, the theoretical ersatz times ground or make true propositions about the past. As a result of postulating presently existing entities which ground past entities, ersatz presentists do not need to accept the existence of anything which does not presently exist in order to explain the distinction between consecutive moments. [ 12 ]
Many philosophers have argued that relativity implies eternalism , the idea that the past and future exist in a real sense, not only as changes that occurred or will occur to the present. [ 13 ] Philosopher of science Dean Rickles disagrees with some qualifications, but notes that "the consensus among philosophers seems to be that special and general relativity are incompatible with presentism". [ 14 ] Some philosophers view time as a dimension equal to spatial dimensions, that future events are "already there" in the same sense different places exist, and that there is no objective flow of time; however, this view is disputed. [ 15 ] Since relativity has been confirmed by experiment, and it posits that time is a coordinate or "dimension" between two points in spacetime, it gave rise to a philosophical viewpoint known as four dimensionalism . [ 16 ]
Observers in motion with respect to each other are said to be in different frames of reference . These observers may disagree on whether two events at different locations occurred simultaneously, which is referred to as the relativity of simultaneity . [ 17 ]
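The relativity of simultaneity can be made numerically concrete with the Lorentz transformation t' = γ(t − vx/c²). The sketch below is an illustrative addition, not part of the original article; the velocity and distances are made-up values chosen only to show that two events judged simultaneous by one observer are assigned different times by an observer in relative motion.

```python
# Illustrative sketch (assumed, made-up numbers): the relativity of simultaneity
# via the Lorentz transformation t' = gamma * (t - v*x / c**2).
import math

c = 299_792_458.0            # speed of light in m/s
v = 0.6 * c                  # assumed relative velocity of the moving observer

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def t_prime(t, x):
    """Time coordinate the moving observer assigns to the event (t, x)."""
    return gamma * (t - v * x / c ** 2)

# Two events the stationary observer judges simultaneous (both at t = 0):
print(t_prime(0.0, 0.0))     # 0.0 seconds: still "now" for the moving observer
print(t_prime(0.0, 1.0e9))   # about -2.5 seconds: already past for the moving observer
```

The Alice-and-Bob example in the next paragraph is exactly this disagreement, applied to the question of what presentism counts as existing.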
Presentism in classical spacetime deems that only the present exists; this is not reconcilable with the relativity of simultaneity in special relativity, shown in the following example: Alice and Bob are simultaneous observers of event O . For Alice, some event E is simultaneous with O , but for Bob, event E is in the past or future. Therefore, Alice and Bob disagree about what exists in the present, which contradicts classical presentism. "Here-now presentism" attempts to reconcile this by only acknowledging the time and space of a single point; this is unsatisfactory because objects coming and going from the "here-now" alternate between real and unreal, in addition to the lack of a privileged "here-now" that would be the "real" present. "Relativized presentism" acknowledges that there are infinite frames of reference, each of them having a different set of simultaneous events, which makes it impossible to distinguish a single "real" present, and hence either all events in time are real—blurring the difference between presentism and eternalism—or each frame of reference exists in its own reality. Options for presentism in special relativity appear to be exhausted, but Gödel and others suspect presentism may be valid for some forms of general relativity. [ 17 ] Generally, the idea of absolute time and space is considered incompatible with general relativity; there is no universal truth about the absolute position of events which occur at different times, and thus no way to determine which point in space at one time is at the universal "same position" at another time, [ 18 ] and all coordinate systems are on equal footing as given by the principle of diffeomorphism invariance . [ 19 ] | https://en.wikipedia.org/wiki/Philosophical_presentism |
Philosophy & Technology is a quarterly peer-reviewed academic journal covering philosophy of technology . It is published by Springer Science+Business Media and the editor-in-chief is Luciano Floridi ( University of Oxford ). Besides regular issues, the journal publishes occasional special issues and topical collections on particular philosophical topics.
The journal is abstracted and indexed in EBSCO databases , PhilPapers , ProQuest databases , and Scopus . [ 1 ]
| https://en.wikipedia.org/wiki/Philosophy_&_Technology
Philosophy of Arithmetic: Psychological and Logical Investigations ( German : Philosophie der Arithmetik. Psychologische und logische Untersuchungen ) is an 1891 book about the philosophy of mathematics by the philosopher Edmund Husserl . Husserl's first published book, it is a synthesis of his studies in mathematics, under Karl Weierstrass , with his studies in philosophy and psychology, under Franz Brentano , to whom it is dedicated, and Carl Stumpf .
The Philosophy of Arithmetic constitutes the first volume of a work which Husserl intended to comprise two volumes, of which the second was never published. In its complete form it would have encompassed four parts and an appendix.
The first volume is divided into two parts, in the first of which Husserl purports to analyse the "Proper concepts of multiplicity, unity and amount" ( Die eigentlichen Begriffe von Vielheit, Einheit und Anzahl ) and in the second "The symbolic amount-concepts and the logical sources of amount-arithmetic" ( Die symbolischen Anzahlbegriffe und die logischen Quellen der Anzahlen-Arithmetik ).
The basic issue of the book is a philosophical analysis of the concept of number , which is the most basic concept on which the entire edifice of arithmetic and mathematics can be founded. In order to proceed with this analysis, Husserl, following Brentano and Stumpf, uses the tools of psychology to look for the "origin and content" of the concept of number. He begins with the classical definition, already given by Euclid , Thomas Hobbes and Gottfried Wilhelm Leibniz , that "number is a multiplicity of unities" and then asks himself: what is multiplicity and what is unity? Anything that we can think of, anything we can present, can be considered at its most basic level to be "something". Multiplicity is then the "collective connection" of "something and something and something etc." In order to get a number instead of a mere quantity, we can also think of these featureless, abstract "somethings" as "ones" and then get "one and one and one etc." as the basic definition of number in abstracto . However, these are just the proper numbers, i.e. numbers which we can conceive of properly, without the help of instruments or symbols. Psychologically, we are limited to just the very first few numbers if we want to conceive of them properly; with higher numbers our short-term memory is not enough to hold them all together while still keeping each identical to itself and different from all the others. Husserl contends that as a result, we must proceed to the analysis of symbolically conceived numbers, which are in essence the numbers used in mathematics.
The book is a product of Husserl's years of study with Weierstrass (in Berlin) and his student Leo Königsberger (in Vienna) on the mathematical side and his studies with Brentano (in Vienna) and Stumpf (in Halle) on the psychological/philosophical side. The book is mostly based on his habilitationsschrift of 1887 "On the Concept of Number" ( Über den Begriff der Zahl ). Husserl also lectured on the concept of number between 1889 and 1891, much in the same vein. He continued working on the second volume up to at least 1894.
Gottlob Frege was critical of Philosophy of Arithmetic , and accused Husserl of relying too much on the metaphysical and not enough on the logical aspects of mathematics. Frege's criticisms negatively influenced the young mathematician's career as a professor. Husserl's Logical Investigations secured his reputation ten years later, but Frege and others never accepted Husserl as a practitioner of true logic. | https://en.wikipedia.org/wiki/Philosophy_of_Arithmetic
The philosophy of computer science is concerned with the philosophical questions that arise within the study of computer science . There is still no common understanding of the content, aims, focus, or topics of the philosophy of computer science, [ 1 ] despite some attempts to develop a philosophy of computer science like the philosophy of physics or the philosophy of mathematics . Due to the abstract nature of computer programs and the technological ambitions of computer science, many of the conceptual questions of the philosophy of computer science are also comparable to the philosophy of science , philosophy of mathematics , and the philosophy of technology . [ 2 ]
Many of the central philosophical questions of computer science are centered on the logical, ethical, methodological, ontological and epistemological issues that concern it. [ 3 ] Some of these questions may include:
The Church–Turing thesis and its variations are central to the theory of computation . Since, as an informal notion, the concept of effective calculability does not have a formal definition, the thesis, although it has near-universal acceptance, cannot be formally proven. The implications of this thesis are also of philosophical concern. Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind . [ 6 ] [ 7 ]
The P versus NP problem is an unsolved problem in computer science and mathematics. It asks whether every problem whose solution can be verified in polynomial time (and so defined to belong to the class NP ) can also be solved in polynomial time (and so defined to belong to the class P ). Most computer scientists believe that P ≠ NP . [ 8 ] [ 9 ] Besides the fact that, after decades of study, no one has been able to find a polynomial-time algorithm for any of the more than 3000 important known NP -complete problems, philosophical reasons concerning its implications may also motivate this belief.
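The contrast between verifying and finding a solution, on which the definition of NP turns, can be made concrete with a small sketch. The example below is an illustrative addition, not drawn from the article; it uses the NP-complete subset-sum problem with a made-up instance. Checking a proposed certificate takes time roughly linear in its size, while the only obvious way to find one is to try exponentially many subsets.

```python
# Illustrative sketch: verification vs. search for subset sum (an NP-complete problem).
# The instance below is made up for illustration only.
from itertools import combinations

def verify(numbers, target, certificate):
    """Polynomial-time check that `certificate` is a sub-multiset of `numbers` summing to `target`."""
    remaining = list(numbers)
    for x in certificate:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(certificate) == target

def solve(numbers, target):
    """Brute-force search over up to 2^n subsets; no polynomial-time algorithm is known."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums, target = [3, 34, 4, 12, 5, 2], 9
certificate = solve(nums, target)                        # exponential-time in the worst case
print(certificate, verify(nums, target, certificate))    # e.g. [4, 5] True
```

The quotation that follows takes up the philosophical stakes of the question rather than its technical content.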
For instance, according to Scott Aaronson , the American computer scientist then at MIT :
If P = NP , then the world would be a profoundly different place than we usually assume it to be. There would be no special value in "creative leaps", no fundamental gap between solving a problem and recognizing the solution once it's found. Everyone who could appreciate a symphony would be Mozart ; everyone who could follow a step-by-step argument would be Gauss . [ 10 ] | https://en.wikipedia.org/wiki/Philosophy_of_computer_science |
Philosophy of design is the study of definitions of design , and the assumptions, foundations, and implications of design. The field, which is mostly a sub-discipline of aesthetics , is defined by an interest in a set of problems, or an interest in central or foundational concerns in design. In addition to these central problems for design as a whole, many philosophers of design consider these problems as they apply to particular disciplines (e.g. philosophy of art ).
Although most practitioners are philosophers specialized in aesthetics (i.e., aestheticians), several prominent designers and artists have contributed to the field. For an introduction to the philosophy of design see the article by Per Galle [ 1 ] at the Royal Danish Academy of Art .
Philosophers of design, or philosophers relevant to the philosophical study of design:
| https://en.wikipedia.org/wiki/Philosophy_of_design
Philosophy of ecology is a concept under the philosophy of science , which is a subfield of philosophy . Its main concerns centre on the practice and application of ecology , its moral issues, and the intersectionality between the position of humans and other entities. [ 1 ] This topic also overlaps with metaphysics , ontology , and epistemology , for example, as it attempts to answer metaphysical, epistemic and moral issues surrounding environmental ethics and public policy. [ 2 ]
The aim of the philosophy of ecology is to clarify and critique the 'first principles', which are the fundamental assumptions present in science and the natural sciences. Although there is not yet a consensus about what the philosophy of ecology presupposes, and the definition of ecology is itself up for debate, there are some central issues that philosophers of ecology consider when examining the role and purpose of what ecologists practice. For example, this field considers the 'nature of nature', [ 2 ] the methodological and conceptual issues surrounding ecological research, and the problems associated with these studies within their contextual environment. [ 3 ]
Philosophy addresses the questions that make up ecological studies, and presents a different perspective into the history of ecology, environmental ethics in ecological science, and the application of mathematical models. [ 3 ]
Ecology is considered as a relatively new scientific discipline, having been acknowledged as a formal scientific field in the late nineteenth and early twentieth century. Although an established definition of ecology has yet to be presented, there are some commonalities in the questions proposed by ecologists.
Ecology was considered “the science of the economy [and] habits,” [ 4 ] according to Stauffer, and was concerned with understanding the external interrelations between organisms. The term was introduced formally in 1866 by German zoologist Ernst Haeckel (1834-1919). Haeckel coined ‘ecology’ in his book, Generelle Morphologie der Organismen (1866), [ 4 ] [ 5 ] in an attempt to present a synthesis of morphology, taxonomy, and the evolution of animals. [ 6 ]
Haeckel aimed to refine the notion of ecology and to propose a new area of study investigating population growth and stability, [ 7 ] influenced by Charles Darwin and his work in Origin of Species (1859). [ 4 ] He had first treated ecology as a term interchangeable with a branch of biology, an aspect of the ‘physiology of relationships’. [ 4 ] In the English translation by Stauffer, Haeckel defined ecology as “the whole science of the relationship of organism to environment including, in the broad sense, all the ‘conditions for existence.'” [ 4 ] [ 7 ] This neologism was used to distinguish studies conducted in the field, as opposed to those conducted within the laboratory. [ 8 ] He expanded upon this definition of ecology after considering the Darwinian theory of evolution and natural selection.
There is yet to be an established consensus amongst philosophers about the exact definition of ecology; however, there are commonalities in the research agendas that help differentiate this discipline from other natural sciences.
Ecology underlies an ecological worldview, [ 9 ] wherein interaction and connectedness are emphasized and developed through several themes:
There are three main disciplinary categories of ecology: Romantic ecology, political ecology , and scientific ecology . Romantic ecology, also called aesthetic or literary ecology, was a counter-movement to the increasingly anthropocentric and mechanistic ideology presented in modern Europe and America of the nineteenth century, especially during the Industrial Revolution. [ 13 ] Some notable figures of this period include William Wordsworth (1770-1850), [ 14 ] John Muir (1838-1914), [ 15 ] and Ralph Waldo Emerson (1803-1882). [ 16 ] The influence of Romantic ecology also extends into politics, where its interrelation with ethics underlies political ecology. [ 2 ]
Political ecology, also known as axiological or values-based ecology, considers the socio-political implications surrounding the ecological landscape. [ 17 ] [ 18 ] Some fundamental questions political ecologists ask generally focus on the ethics between nature and society. [ 19 ] American environmentalist Aldo Leopold (1887-1948) affirmed that ethics should be extended to encompass the land and biotic communities as well, rather than pertaining exclusively to individuals. [ 20 ] In this sense, political ecology can be denoted as a form of environmental ethics.
Finally, scientific ecology, more commonly known simply as ecology, addresses central concerns such as understanding the role of ecologists and what they study, the types of methodological and conceptual issues that surround the development of these studies, and the problems these may present.
Defining contemporary ecology requires looking at certain fundamental principles, namely principles of system and evolution. System entails understanding processes whose interconnected parts establish a holistic identity that is neither separable from nor predictable from its components. [ 6 ] Evolution results from the 'generation of variety' as a means to produce change. Certain entities that interact with their environments create evolution through survival, and it is the production of changes that shapes ecological systems. This evolutionary process is central to ecology and biology. [ 21 ] There are three main concerns that ecologists generally concur on: naturalism, scientific realism, and the comprehensive scope of ecology.
Philosopher Frederick Ferre defines two different primary meanings for nature in Being and Value: Toward a Constructive Postmodern Metaphysics (1996). [ 22 ] The first definition excludes 'artifacts of human manipulation': [ 2 ] nature, in this sense, comprises whatever is not of artificial origin. The second definition takes nature to be whatever is not of supernatural conception, which in this case includes artifacts of human manipulation. [ 13 ] [ 2 ] However, there is confusion of meaning, as the two connotations are used interchangeably in different contexts by different ecologists.
There is yet to be a defined explanation of naturalism within the philosophy of ecology; however, its current usage connotes the idea of a reality wholly subsumed by nature, independent of any 'supernatural' world or existence. [ 11 ] Naturalism asserts the notion that scientific methodology is sufficient to obtain knowledge about reality. Naturalists who support this perspective view mental, biological, and social operations as physical entities. For example, considering a pebble or a human being, these existences occur concurrently within the same space and time. Applications of these scientific methods remain relevant and sufficient as they explain the spatiotemporal processes that physical entities undergo as spatiotemporal beings. [ 11 ]
The holism-reductionism debate encompasses ontological, methodological and epistemic concerns. [ 23 ] Common questions involve examining whether the means to understanding an object is through critical analysis of its constituents (reductionism) or 'contextualisation' of its components (holism) to retain phenomenological value. [ 24 ] Holists maintain that certain unique properties are attributed to the abiotic or biotic entity, such as an ecosystem, and that these characteristics are not intrinsically applicable to its separate components. Analysis of just the parts is insufficient for obtaining knowledge of the entire unit. [ 23 ] At the other end of the spectrum, reductionists argue that these parts are independent of each other, [ 25 ] and that knowledge of the components provides understanding of the composite entity. This approach, however, has been criticised, as the entity denotes not just the unity of its aggregates but rather a synthesis between the whole and its parts.
Rationalism within scientific ecology holds that such methodologies remain necessary and relevant in their role of establishing ecological theory as a guide. Methodology employed under rationalist approaches became pronounced in the 1920s with Alfred Lotka 's (1956) and Vito Volterra's (1926) models, now known as the Lotka-Volterra equations. Empiricism establishes the need for observational and empirical testing. An obvious consequence of this paradigm is the presence and usage of pluralistic methodology, although no unifying model adequate for application in ecology has yet been established, nor has a pluralistic theory.
Environmental ethics emerged in the 1970s in response to traditional anthropocentrism. It studies the moral implications of interactions between society and the environment, prompted by concerns about environmental degradation, and it has challenged the ethical positionality of humans. [ 26 ] A common belief among environmental philosophers is the view that biological entities are morally valuable independently of human standards. [ 27 ] Within this field, there is the shared assumption that environmental issues are prominently anthropogenic, and that this stems from an anthropocentric argument . The basis for rejecting anthropocentrism is to refute the belief that non-human entities are not worthy of value. [ 28 ]
A main concern in environmental ethics is anthropogenically induced mass extinction within the biosphere. The attempt to interpret it non-anthropocentrically is vital to the foundations of environmental ethics. [ 28 ] Paleontology , for example, details mass extinction as pivotal and a precursor to major radiations. Those with non-anthropocentric views interpret the death of the dinosaurs as enabling the preservation of biodiversity, rather than by appeal to anthropocentric values. As ecology is closely entwined with ethics, understanding environmental approaches requires understanding the world, which is the role of ecology and environmental ethics. The main issue is also to incorporate natural entities within its ethical concern, which involves conscious, sentient, living and existing beings. [ 29 ]
Mathematical models play a role in questioning the issues presented in ecology and conservation biology . There are mainly two types of models used to explore the relationship between applications of mathematics and practice within ecology. [ 30 ] The first are descriptive models, which detail, for example, single-species population growth, and multi-species models like the Lotka-Volterra predator-prey model [ 30 ] or the Nicholson-Bailey host-parasitoid model. [ 31 ] These models explain behavioural activity through the idealisation of the intended target. The second type are normative models, which describe the current state of variables and how certain variables should behave. [ 27 ] [ 7 ]
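As an illustration of such a descriptive model (an added sketch, not from the article, with made-up parameter values), the Lotka-Volterra predator-prey equations dx/dt = αx − βxy and dy/dt = δxy − γy can be integrated numerically with a simple Euler step:

```python
# Illustrative sketch: Euler integration of the Lotka-Volterra predator-prey model
#   dx/dt = alpha*x - beta*x*y   (prey)
#   dy/dt = delta*x*y - gamma*y  (predator)
# All parameter values below are assumed, chosen only for illustration.

alpha, beta, gamma, delta = 1.1, 0.4, 0.4, 0.1
x, y = 10.0, 5.0             # initial prey and predator densities
dt, steps = 0.001, 100_000

samples = []
for step in range(steps):
    dx = (alpha * x - beta * x * y) * dt
    dy = (delta * x * y - gamma * y) * dt
    x, y = x + dx, y + dy
    if step % 10_000 == 0:
        samples.append((round(x, 2), round(y, 2)))

print(samples)   # the two populations cycle, with predator peaks lagging prey peaks
```

In this idealisation the coupled oscillations continue indefinitely; the criticism discussed below is precisely that real prey populations can oscillate even without the predator, which the model cannot reproduce.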
In ecology, complicated biological interactions require explanation, which is where models are used to investigate hypotheses. For example, the identification and explanation of certain organisms and of population abundance are essential for understanding the role of ecology and biodiversity. Applications of equations provide a prediction, or a model to suggest an answer, for the questions that come up. Mathematical models in particular also provide contextual supporting information regarding factors on a wider, more global scale. [ 30 ]
The difference between normative models and scientific models is that their differing standards entail different applications. [ 32 ] These models aid in illustrating decision-making outcomes, and also aid in tackling group decisions. For example, mathematical models can incorporate the environmental decisions of people within a group holistically: the model helps represent the values of each member, and the weightings accorded to them in a matrix, and then delivers the final result. In the case of conflict about procedure or about how to represent certain quantities, the model may be limited in that it would be deemed not of use. Furthermore, the number of idealisations in the model also matters. [ 30 ]
The process of mathematical modelling presents a distinction between reality and theory, or more specifically, between the application of models and the genuine phenomena these models aim to represent. [ 33 ] Critics of the employment of mathematical models within ecology question their use and the extent of their relevance, prompted by an imbalance between investigative procedure and theoretical propositions. According to Weiner (1995), deterministic models have been ineffectual within ecology. [ 33 ] The Lotka-Volterra models, Weiner argues, have not yielded testable predictions. [ 34 ] In cases where theoretical models within ecology have produced testable predictions, these have been refuted. [ 35 ]
The purpose of the Lotka-Volterra models is to track the predator and prey interaction and their population cycles. The usual pattern maintains that the predator population follows the prey population's fluctuations. [ 21 ] For example, as the prey population increases, so does the predator population; likewise, as the prey population decreases, the predator population decreases. However, Weiner argues that, in reality, prey populations still maintain their oscillating cycles even if the predator is removed, so the model is an inaccurate representation of natural phenomena. [ 34 ] Critics also argue that the idealisation inherent in modelling, and its application, is methodologically deficient. They also maintain that mathematical modelling within ecology is an oversimplification of reality, and a misrepresentation or insufficient representation of the biological system. [ 1 ]
The application of simple versus complex models is also up for debate. There is concern regarding model results, wherein the complexities of a system cannot be replicated or adequately captured even with a complicated model. | https://en.wikipedia.org/wiki/Philosophy_of_ecology
Philosophy of mathematics is the branch of philosophy that deals with the nature of mathematics and its relationship to other areas of philosophy, particularly epistemology and metaphysics . Central questions posed include whether or not mathematical objects are purely abstract entities or are in some way concrete, and in what the relationship such objects have with physical reality consists. [ 1 ]
Major themes that are dealt with in philosophy of mathematics include:
The connection between mathematics and material reality has led to philosophical debates since at least the time of Pythagoras . The ancient philosopher Plato argued that abstractions that reflect material reality have themselves a reality that exists outside space and time. As a result, the philosophical view that mathematical objects somehow exist on their own in abstraction is often referred to as Platonism . Independently of their possible philosophical opinions, modern mathematicians may be generally considered as Platonists, since they think of and talk of their objects of study as real objects. [ 2 ]
Armand Borel summarized this view of mathematical reality as follows, and provided quotations from G. H. Hardy , Charles Hermite , Henri Poincaré and Albert Einstein that support his views. [ 3 ]
Something becomes objective (as opposed to "subjective") as soon as we are convinced that it exists in the minds of others in the same form as it does in ours and that we can think about it and discuss it together. [ 4 ] Because the language of mathematics is so precise, it is ideally suited to defining concepts for which such a consensus exists. In my opinion, that is sufficient to provide us with a feeling of an objective existence, of a reality of mathematics ...
Mathematical reasoning requires rigor . This means that the definitions must be absolutely unambiguous and the proofs must be reducible to a succession of applications of syllogisms or inference rules , [ a ] without any use of empirical evidence and intuition . [ b ] [ 6 ]
The rules of rigorous reasoning have been established by the ancient Greek philosophers under the name of logic . Logic is not specific to mathematics, but, in mathematics, the standard of rigor is much higher than elsewhere.
For many centuries, logic, although used for mathematical proofs, belonged to philosophy and was not specifically studied by mathematicians. [ 7 ] Circa the end of the 19th century, several paradoxes called into question the logical foundation of mathematics, and consequently the validity of the whole of mathematics. This has been called the foundational crisis of mathematics . Some of these paradoxes consist of results that seem to contradict common intuition, such as the possibility of constructing valid non-Euclidean geometries in which the parallel postulate is wrong, the Weierstrass function that is continuous but nowhere differentiable , and the study by Georg Cantor of infinite sets , which led to the consideration of several sizes of infinity (infinite cardinals ). Even more striking, Russell's paradox shows that the notion of "the set of all sets that are not members of themselves" is self-contradictory.
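For readers who want the argument spelled out, a compact rendering of Russell's paradox, added here for illustration (∈ denotes set membership), is:

```latex
% Russell's paradox, assuming unrestricted comprehension.
% Let R be the set of all sets that are not members of themselves:
\[
  R = \{\, x \mid x \notin x \,\}.
\]
% Then for every set x,  x \in R \iff x \notin x.  Taking x = R gives
\[
  R \in R \iff R \notin R,
\]
% a contradiction, so no such set R can exist.
```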
Several methods have been proposed to solve the problem by changing the logical framework, such as constructive mathematics and intuitionistic logic . Roughly speaking, the first one consists of requiring that every existence theorem provide an explicit example, and the second one excludes from mathematical reasoning the law of excluded middle and double negation elimination .
These logics have fewer inference rules than classical logic. On the other hand, the classical logic then in use was a first-order logic , which means roughly that quantifiers range over individual elements and cannot be applied to sets of them. This means, for example, that the sentence "every non-empty set of natural numbers has a least element" cannot be directly expressed in such a first-order formalization. This led to the introduction of higher-order logics , which are presently used commonly in mathematics.
The problems of the foundations of mathematics were eventually resolved with the rise of mathematical logic as a new area of mathematics. In this framework, a mathematical or logical theory consists of a formal language that defines the well-formed assertions , a set of basic assertions called axioms and a set of inference rules that allow producing new assertions from one or several known assertions. A theorem of such a theory is either an axiom or an assertion that can be obtained from previously known theorems by the application of an inference rule. The Zermelo–Fraenkel set theory with the axiom of choice , generally called ZFC , is such a theory in which all of mathematics has been restated; it is used implicitly in all mathematics texts that do not specify explicitly on which foundations they are based. Moreover, the other proposed foundations can be modeled and studied inside ZFC.
It follows that "rigor" is no longer a relevant concept within mathematics itself, as a proof is either correct or erroneous, and a "rigorous proof" is simply a pleonasm . Where a special concept of rigor comes into play is in the socialized aspects of a proof. In particular, proofs are rarely written in full detail, and some steps of a proof are generally considered as trivial , easy , or straightforward , and are therefore left to the reader. As most proof errors occur in these skipped steps, a new proof needs to be verified by other specialists of the subject, and can be considered reliable only after having been accepted by the community of specialists, which may take several years. [ 8 ]
Also, the concept of "rigor" may remain useful for teaching beginners what a mathematical proof is. [ 9 ]
Mathematics is used in most sciences for modeling phenomena, which then allows predictions to be made from experimental laws. [ 10 ] The independence of mathematical truth from any experimentation implies that the accuracy of such predictions depends only on the adequacy of the model. [ 11 ] Inaccurate predictions, rather than being caused by invalid mathematical concepts, imply the need to change the mathematical model used. [ 12 ] For example, the perihelion precession of Mercury could only be explained after the emergence of Einstein 's general relativity , which replaced Newton's law of gravitation as a better mathematical model. [ 13 ]
There is still a philosophical debate whether mathematics is a science. However, in practice, mathematicians are typically grouped with scientists, and mathematics shares much in common with the physical sciences. Like them, it is falsifiable , which means in mathematics that if a result or a theory is wrong, this can be proved by providing a counterexample . Similarly as in science, theories and results (theorems) are often obtained from experimentation . [ 14 ] In mathematics, the experimentation may consist of computation on selected examples or of the study of figures or other representations of mathematical objects (often mind representations without physical support). For example, when asked how he came about his theorems, Gauss once replied "durch planmässiges Tattonieren" (through systematic experimentation). [ 15 ] However, some authors emphasize that mathematics differs from the modern notion of science by not relying on empirical evidence. [ 16 ] [ 17 ] [ 18 ] [ 19 ]
The unreasonable effectiveness of mathematics is a phenomenon that was named and first made explicit by physicist Eugene Wigner . [ 20 ] It is the fact that many mathematical theories (even the "purest") have applications outside their initial object. These applications may be completely outside their initial area of mathematics, and may concern physical phenomena that were completely unknown when the mathematical theory was introduced. [ 21 ] Examples of unexpected applications of mathematical theories can be found in many areas of mathematics.
A notable example is the prime factorization of natural numbers that was discovered more than 2,000 years before its common use for secure internet communications through the RSA cryptosystem . [ 22 ] A second historical example is the theory of ellipses . They were studied by the ancient Greek mathematicians as conic sections (that is, intersections of cones with planes). It was almost 2,000 years later that Johannes Kepler discovered that the trajectories of the planets are ellipses. [ 23 ]
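To make the link between prime factorization and RSA concrete, here is a deliberately tiny, insecure sketch added for illustration (the primes and message are toy values, not from the article): the pair (n, e) can be published because recovering the private exponent d requires knowing the factorization of n.

```python
# Illustrative toy RSA sketch with tiny, insecure numbers (for exposition only).
# Real RSA uses primes hundreds of digits long; its security rests on the
# difficulty of factoring n = p * q.  Requires Python 3.8+ for pow(e, -1, phi).

p, q = 61, 53                  # secret toy primes
n = p * q                      # 3233, published as part of the public key
phi = (p - 1) * (q - 1)        # 3120, computable only if the factorization is known
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e modulo phi

def encrypt(m: int) -> int:    # anyone can encrypt with the public key (n, e)
    return pow(m, e, n)

def decrypt(c: int) -> int:    # only the holder of d can decrypt
    return pow(c, d, n)

message = 123
ciphertext = encrypt(message)
print(ciphertext, decrypt(ciphertext) == message)   # prints the ciphertext and True
```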
In the 19th century, the internal development of geometry (pure mathematics) led to definition and study of non-Euclidean geometries, spaces of dimension higher than three and manifolds . At this time, these concepts seemed totally disconnected from the physical reality, but at the beginning of the 20th century, Albert Einstein developed the theory of relativity that uses fundamentally these concepts. In particular, spacetime of special relativity is a non-Euclidean space of dimension four, and spacetime of general relativity is a (curved) manifold of dimension four. [ 24 ] [ 25 ]
A striking aspect of the interaction between mathematics and physics is when mathematics drives research in physics. This is illustrated by the discoveries of the positron and the baryon Ω − . {\displaystyle \Omega ^{-}.} In both cases, the equations of the theories had unexplained solutions, which led to conjecture of the existence of an unknown particle , and the search for these particles. In both cases, these particles were discovered a few years later by specific experiments. [ 26 ] [ 27 ] [ 28 ]
The origin of mathematics is a subject of argument and disagreement. Whether the birth of mathematics was by chance or induced by necessity during the development of similar subjects, such as physics, remains an area of contention. [ 29 ] [ 30 ]
Many thinkers have contributed their ideas concerning the nature of mathematics. Today, some [ who? ] philosophers of mathematics aim to give accounts of this form of inquiry and its products as they stand, while others emphasize a role for themselves that goes beyond simple interpretation to critical analysis. There are traditions of mathematical philosophy in both Western philosophy and Eastern philosophy . Western philosophies of mathematics go as far back as Pythagoras , who described the theory "everything is mathematics" ( mathematicism ), Plato , who paraphrased Pythagoras, and studied the ontological status of mathematical objects, and Aristotle , who studied logic and issues related to infinity (actual versus potential).
Greek philosophy on mathematics was strongly influenced by their study of geometry . For example, at one time, the Greeks held the opinion that 1 (one) was not a number , but rather a unit of arbitrary length. A number was defined as a multitude. Therefore, 3, for example, represented a certain multitude of units, and was thus "truly" a number. At another point, a similar argument was made that 2 was not a number but a fundamental notion of a pair. These views come from the heavily geometric straight-edge-and-compass viewpoint of the Greeks: just as lines drawn in a geometric problem are measured in proportion to the first arbitrarily drawn line, so too are the numbers on a number line measured in proportion to the arbitrary first "number" or "one". [ citation needed ]
These earlier Greek ideas of numbers were later upended by the discovery of the irrationality of the square root of two. Hippasus , a disciple of Pythagoras , showed that the diagonal of a unit square was incommensurable with its (unit-length) edge: in other words he proved there was no existing (rational) number that accurately depicts the proportion of the diagonal of the unit square to its edge. This caused a significant re-evaluation of Greek philosophy of mathematics. According to legend, fellow Pythagoreans were so traumatized by this discovery that they murdered Hippasus to stop him from spreading his heretical idea. [ 31 ] Simon Stevin was one of the first in Europe to challenge Greek ideas in the 16th century. Beginning with Leibniz , the focus shifted strongly to the relationship between mathematics and logic. This perspective dominated the philosophy of mathematics through the time of Boole , Frege and Russell , but was brought into question by developments in the late 19th and early 20th centuries.
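The incommensurability result attributed to Hippasus above is equivalent to the irrationality of the square root of two; a compact version of the standard argument, added here for illustration, runs:

```latex
% Suppose, for contradiction, that \sqrt{2} = p/q with p, q coprime integers.
\[
  \sqrt{2} = \tfrac{p}{q} \;\Longrightarrow\; p^{2} = 2q^{2},
\]
% so p^2 is even, hence p is even; write p = 2r. Then
\[
  4r^{2} = 2q^{2} \;\Longrightarrow\; q^{2} = 2r^{2},
\]
% so q is even as well, contradicting the assumption that p and q are coprime.
```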
A perennial issue in the philosophy of mathematics concerns the relationship between logic and mathematics at their joint foundations. While 20th-century philosophers continued to ask the questions mentioned at the outset of this article, the philosophy of mathematics in the 20th century was characterized by a predominant interest in formal logic , set theory (both naive set theory and axiomatic set theory ), and foundational issues.
It is a profound puzzle that on the one hand mathematical truths seem to have a compelling inevitability, but on the other hand the source of their "truthfulness" remains elusive. Investigations into this issue are known as the foundations of mathematics program.
At the start of the 20th century, philosophers of mathematics were already beginning to divide into various schools of thought about all these questions, broadly distinguished by their pictures of mathematical epistemology and ontology . Three schools, formalism , intuitionism , and logicism , emerged at this time, partly in response to the increasingly widespread worry that mathematics as it stood, and analysis in particular, did not live up to the standards of certainty and rigor that had been taken for granted. Each school addressed the issues that came to the fore at that time, either attempting to resolve them or claiming that mathematics is not entitled to its status as our most trusted knowledge.
Surprising and counter-intuitive developments in formal logic and set theory early in the 20th century led to new questions concerning what was traditionally called the foundations of mathematics . As the century unfolded, the initial focus of concern expanded to an open exploration of the fundamental axioms of mathematics, the axiomatic approach having been taken for granted since the time of Euclid around 300 BCE as the natural basis for mathematics. Notions of axiom , proposition and proof , as well as the notion of a proposition being true of a mathematical object (see Assignment ) , were formalized, allowing them to be treated mathematically. The Zermelo–Fraenkel axioms for set theory were formulated which provided a conceptual framework in which much mathematical discourse would be interpreted. In mathematics, as in physics, new and unexpected ideas had arisen and significant changes were coming. With Gödel numbering , propositions could be interpreted as referring to themselves or other propositions, enabling inquiry into the consistency of mathematical theories. This reflective critique in which the theory under review "becomes itself the object of a mathematical study" led Hilbert to call such study metamathematics or proof theory . [ 32 ]
At the middle of the century, a new mathematical theory was created by Samuel Eilenberg and Saunders Mac Lane , known as category theory , and it became a new contender for the natural language of mathematical thinking. [ 33 ] As the 20th century progressed, however, philosophical opinions diverged as to just how well-founded were the questions about foundations that were raised at the century's beginning. Hilary Putnam summed up one common view of the situation in the last third of the century by saying:
When philosophy discovers something wrong with science, sometimes science has to be changed— Russell's paradox comes to mind, as does Berkeley 's attack on the actual infinitesimal —but more often it is philosophy that has to be changed. I do not think that the difficulties that philosophy finds with classical mathematics today are genuine difficulties; and I think that the philosophical interpretations of mathematics that we are being offered on every hand are wrong, and that "philosophical interpretation" is just what mathematics doesn't need. [ 34 ] : 169–170
Philosophy of mathematics today proceeds along several different lines of inquiry, by philosophers of mathematics, logicians, and mathematicians, and there are many schools of thought on the subject. The schools are addressed separately in the next section, and their assumptions explained.
Contemporary schools of thought in the philosophy of mathematics include: artistic, Platonism, mathematicism, logicism, formalism, conventionalism, intuitionism, constructivism, finitism, structuralism, embodied mind theories (Aristotelian realism, psychologism, empiricism), fictionalism, social constructivism, and non-traditional schools.
However, many of these schools of thought are mutually compatible. For example, most living mathematicians are at once Platonists and formalists, attach great importance to aesthetics , and consider that axioms should be chosen for the results they produce, not for their coherence with human intuition of reality (conventionalism). [ 26 ]
This view claims that mathematics is the aesthetic combination of assumptions, and therefore that mathematics is an art . A famous mathematician who made this claim is the British G. H. Hardy . [ 35 ] For Hardy, in his book A Mathematician's Apology , the definition of mathematics was more like the aesthetic combination of concepts. [ 36 ]
Max Tegmark 's mathematical universe hypothesis (or mathematicism ) goes further than Platonism in asserting that not only do all mathematical objects exist, but nothing else does. Tegmark's sole postulate is: All structures that exist mathematically also exist physically . That is, in the sense that "in those [worlds] complex enough to contain self-aware substructures [they] will subjectively perceive themselves as existing in a physically 'real' world". [ 37 ] [ 38 ]
Logicism is the thesis that mathematics is reducible to logic, and hence nothing but a part of logic. [ 39 ] : 41 Logicists hold that mathematics can be known a priori , but suggest that our knowledge of mathematics is just part of our knowledge of logic in general, and is thus analytic , not requiring any special faculty of mathematical intuition. In this view, logic is the proper foundation of mathematics, and all mathematical statements are necessary logical truths .
Rudolf Carnap (1931) presents the logicist thesis in two parts: [ 39 ]
Gottlob Frege was the founder of logicism. In his seminal Die Grundgesetze der Arithmetik ( Basic Laws of Arithmetic ) he built up arithmetic from a system of logic with a general principle of comprehension, which he called "Basic Law V" (for concepts F and G , the extension of F equals the extension of G if and only if for all objects a , Fa equals Ga ), a principle that he took to be acceptable as part of logic.
Frege's construction was flawed. Bertrand Russell discovered that Basic Law V is inconsistent (this is Russell's paradox ). Frege abandoned his logicist program soon after this, but it was continued by Russell and Whitehead . They attributed the paradox to "vicious circularity" and built up what they called ramified type theory to deal with it. In this system, they were eventually able to build up much of modern mathematics but in an altered, and excessively complex form (for example, there were different natural numbers in each type, and there were infinitely many types). They also had to make several compromises in order to develop much of mathematics, such as the " axiom of reducibility ". Even Russell said that this axiom did not really belong to logic.
Modern logicists (like Bob Hale , Crispin Wright , and perhaps others) have returned to a program closer to Frege's. They have abandoned Basic Law V in favor of abstraction principles such as Hume's principle (the number of objects falling under the concept F equals the number of objects falling under the concept G if and only if the extension of F and the extension of G can be put into one-to-one correspondence ). Frege required Basic Law V to be able to give an explicit definition of the numbers, but all the properties of numbers can be derived from Hume's principle. This would not have been enough for Frege because (to paraphrase him) it does not exclude the possibility that the number 3 is in fact Julius Caesar. In addition, many of the weakened principles that they have had to adopt to replace Basic Law V no longer seem so obviously analytic, and thus purely logical.
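Stated symbolically (an illustrative rendering added here, using ε for the extension of a concept, # for "the number of", and ≈ for equinumerosity, i.e. the existence of a one-to-one correspondence), the two principles contrasted above are:

```latex
% Basic Law V (Frege): the extensions of F and G coincide iff F and G are coextensive.
\[
  \varepsilon F = \varepsilon G \;\longleftrightarrow\; \forall a\,(Fa \leftrightarrow Ga)
\]
% Hume's principle: the number of F's equals the number of G's iff
% the F's and the G's can be put into one-to-one correspondence.
\[
  \#F = \#G \;\longleftrightarrow\; F \approx G
\]
```

Basic Law V, so stated, is the principle that Russell's paradox showed to be inconsistent; Hume's principle is the weaker replacement from which, as noted above, the properties of the numbers can still be derived.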
Formalism holds that mathematical statements may be thought of as statements about the consequences of certain string manipulation rules. For example, in the "game" of Euclidean geometry (which is seen as consisting of some strings called "axioms", and some "rules of inference" to generate new strings from given ones), one can prove that the Pythagorean theorem holds (that is, one can generate the string corresponding to the Pythagorean theorem). According to formalism, mathematical truths are not about numbers and sets and triangles and the like—in fact, they are not "about" anything at all.
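Formalism's picture of a "game" of strings and rules can be made concrete with a toy example. The sketch below is an illustrative addition, not from the article; it uses Hofstadter's MIU system, whose single axiom is the string MI and whose four rewriting rules play the role of inference rules.

```python
# Illustrative sketch of "theorems as strings generated by rules": the MIU system
# (a toy formal system due to Hofstadter, used here only as an example of formalism).
# Axiom: "MI".  Rules: (1) xI -> xIU, (2) Mx -> Mxx, (3) III -> U, (4) UU -> (nothing).

def successors(s):
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                          # rule 1
    if s.startswith("M"):
        out.add("M" + s[1:] * 2)                  # rule 2
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])      # rule 3
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])            # rule 4
    return out

# Generate every "theorem" derivable from the axiom in at most 5 rule applications.
theorems = {"MI"}
frontier = {"MI"}
for _ in range(5):
    frontier = {t for s in frontier for t in successors(s)} - theorems
    theorems |= frontier

print(sorted(theorems, key=len)[:10])   # a few of the shortest derivable strings
```

Within such a game, a "theorem" is simply any string reachable from the axiom; whether the strings mean anything is, for the formalist, a separate question.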
Another version of formalism is known as deductivism . [ 40 ] In deductivism, the Pythagorean theorem is not an absolute truth, but a relative one, if it follows deductively from the appropriate axioms. The same is held to be true for all other mathematical statements.
Formalism need not mean that mathematics is nothing more than a meaningless symbolic game. It is usually hoped that there exists some interpretation in which the rules of the game hold. (Compare this position to structuralism .) But it does allow the working mathematician to continue in his or her work and leave such problems to the philosopher or scientist. Many formalists would say that in practice, the axiom systems to be studied will be suggested by the demands of science or other areas of mathematics.
A major early proponent of formalism was David Hilbert , whose program was intended to be a complete and consistent axiomatization of all of mathematics. [ 41 ] Hilbert aimed to show the consistency of mathematical systems from the assumption that the "finitary arithmetic" (a subsystem of the usual arithmetic of the positive integers , chosen to be philosophically uncontroversial) was consistent. Hilbert's goals of creating a system of mathematics that is both complete and consistent were seriously undermined by the second of Gödel's incompleteness theorems , which states that sufficiently expressive consistent axiom systems can never prove their own consistency. Since any such axiom system would contain the finitary arithmetic as a subsystem, Gödel's theorem implied that it would be impossible to prove the system's consistency relative to that (since it would then prove its own consistency, which Gödel had shown was impossible). Thus, in order to show that any axiomatic system of mathematics is in fact consistent, one needs to first assume the consistency of a system of mathematics that is in a sense stronger than the system to be proven consistent.
Hilbert was initially a deductivist, but, as may be clear from above, he considered certain metamathematical methods to yield intrinsically meaningful results and was a realist with respect to the finitary arithmetic. Later, he held the opinion that there was no other meaningful mathematics whatsoever, regardless of interpretation.
Other formalists, such as Rudolf Carnap , Alfred Tarski , and Haskell Curry , considered mathematics to be the investigation of formal axiom systems . Mathematical logicians study formal systems but are just as often realists as they are formalists.
Formalists are relatively tolerant and inviting to new approaches to logic, non-standard number systems, new set theories, etc. The more games we study, the better. However, in all three of these examples, motivation is drawn from existing mathematical or philosophical concerns. The "games" are usually not arbitrary.
The main critique of formalism is that the actual mathematical ideas that occupy mathematicians are far removed from the string manipulation games mentioned above. Formalism is thus silent on the question of which axiom systems ought to be studied, as none is more meaningful than another from a formalistic point of view.
Recently, some [ who? ] formalist mathematicians have proposed that all of our formal mathematical knowledge should be systematically encoded in computer-readable formats, so as to facilitate automated proof checking of mathematical proofs and the use of interactive theorem proving in the development of mathematical theories and computer software. Because of their close connection with computer science , this idea is also advocated by mathematical intuitionists and constructivists in the "computability" tradition— see QED project for a general overview .
The French mathematician Henri Poincaré was among the first to articulate a conventionalist view. Poincaré's use of non-Euclidean geometries in his work on differential equations convinced him that Euclidean geometry should not be regarded as a priori truth. He held that axioms in geometry should be chosen for the results they produce, not for their apparent coherence with human intuitions about the physical world.
In mathematics, intuitionism is a program of methodological reform whose motto is that "there are no non-experienced mathematical truths" ( L. E. J. Brouwer ). From this springboard, intuitionists seek to reconstruct what they consider to be the corrigible portion of mathematics in accordance with Kantian concepts of being, becoming, intuition, and knowledge. Brouwer, the founder of the movement, held that mathematical objects arise from the a priori forms of the volitions that inform the perception of empirical objects. [ 42 ]
A major force behind intuitionism was L. E. J. Brouwer , who rejected the usefulness of formalized logic of any sort for mathematics. His student Arend Heyting postulated an intuitionistic logic , different from the classical Aristotelian logic ; this logic does not contain the law of the excluded middle and therefore frowns upon proofs by contradiction . The axiom of choice is also rejected in most intuitionistic set theories, though in some versions it is accepted.
In intuitionism, the term "explicit construction" is not cleanly defined, and that has led to criticisms. Attempts have been made to use the concepts of Turing machine or computable function to fill this gap, leading to the claim that only questions regarding the behavior of finite algorithms are meaningful and should be investigated in mathematics. This has led to the study of the computable numbers , first introduced by Alan Turing . Not surprisingly, then, this approach to mathematics is sometimes associated with theoretical computer science .
Like intuitionism, constructivism involves the regulative principle that only mathematical entities which can be explicitly constructed in a certain sense should be admitted to mathematical discourse. In this view, mathematics is an exercise of the human intuition, not a game played with meaningless symbols. Instead, it is about entities that we can create directly through mental activity. In addition, some adherents of these schools reject non-constructive proofs, such as using proof by contradiction when showing the existence of an object or when trying to establish the truth of some proposition. Important work was done by Errett Bishop , who managed to prove versions of the most important theorems in real analysis as constructive analysis in his 1967 Foundations of Constructive Analysis. [ 43 ]
Finitism is an extreme form of constructivism , according to which a mathematical object does not exist unless it can be constructed from natural numbers in a finite number of steps. In her book Philosophy of Set Theory , Mary Tiles characterized those who allow countably infinite objects as classical finitists, and those who deny even countably infinite objects as strict finitists.
The most famous proponent of finitism was Leopold Kronecker , [ 44 ] who said:
God created the natural numbers, all else is the work of man.
Ultrafinitism is an even more extreme version of finitism, which rejects not only infinities but finite quantities that cannot feasibly be constructed with available resources. Another variant of finitism is Euclidean arithmetic, a system developed by John Penn Mayberry in his book The Foundations of Mathematics in the Theory of Sets . [ 45 ] Mayberry's system is Aristotelian in general inspiration and, despite his strong rejection of any role for operationalism or feasibility in the foundations of mathematics, comes to somewhat similar conclusions, such as, for instance, that super-exponentiation is not a legitimate finitary function.
Structuralism is a position holding that mathematical theories describe structures, and that mathematical objects are exhaustively defined by their places in such structures, consequently having no intrinsic properties . For instance, it would maintain that all that needs to be known about the number 1 is that it is the first whole number after 0. Likewise all the other whole numbers are defined by their places in a structure, the number line . Other examples of mathematical objects might include lines and planes in geometry, or elements and operations in abstract algebra .
Structuralism is an epistemologically realistic view in that it holds that mathematical statements have an objective truth value. However, its central claim only relates to what kind of entity a mathematical object is, not to what kind of existence mathematical objects or structures have (not, in other words, to their ontology ). The kind of existence mathematical objects have would clearly be dependent on that of the structures in which they are embedded; different sub-varieties of structuralism make different ontological claims in this regard. [ 46 ]
The ante rem structuralism ("before the thing") has a similar ontology to Platonism . Structures are held to have a real but abstract and immaterial existence. As such, it faces the standard epistemological problem of explaining the interaction between such abstract structures and flesh-and-blood mathematicians (see Benacerraf's identification problem ) .
The in re structuralism ("in the thing") is the equivalent of Aristotelian realism . Structures are held to exist inasmuch as some concrete system exemplifies them. This incurs the usual issues that some perfectly legitimate structures might accidentally happen not to exist, and that a finite physical world might not be "big" enough to accommodate some otherwise legitimate structures.
The post rem structuralism ("after the thing") is anti-realist about structures in a way that parallels nominalism . Like nominalism, the post rem approach denies the existence of abstract mathematical objects with properties other than their place in a relational structure. According to this view mathematical systems exist, and have structural features in common. If something is true of a structure, it will be true of all systems exemplifying the structure. However, it is merely instrumental to talk of structures being "held in common" between systems: they in fact have no independent existence.
Embodied mind theories hold that mathematical thought is a natural outgrowth of the human cognitive apparatus, which finds itself in our physical universe. For example, the abstract concept of number springs from the experience of counting discrete objects (which requires human senses such as sight and touch for detecting the objects, together with signalling from the brain). On this view, mathematics is not universal and does not exist in any real sense other than in human brains. Humans construct, but do not discover, mathematics.
The cognitive processes of pattern-finding and of distinguishing objects are also open to study by neuroscience , at least if mathematics is taken to be relevant to a natural world (as on realism or some degree of it, as opposed to pure solipsism ).
The actual relevance of mathematics to reality, while accepted as a trustworthy approximation (it has also been suggested that the evolution of perception, the body, and the senses may have been necessary for survival), does not necessarily amount to full realism: it remains subject to flaws such as illusion , assumption (including the foundations and axioms that humans have chosen for mathematics), generalisation, deception, and hallucination . This may also raise questions about the modern scientific method and its compatibility with general mathematics: while relatively reliable, it is still limited by what can be measured empirically, which may not be as reliable as previously assumed (see also: 'counterintuitive' concepts such as quantum nonlocality and action at a distance ).
Another issue is that a single number system is not necessarily applicable to every kind of problem solving. Subjects such as complex numbers or imaginary numbers require specific extensions of the more commonly used axioms of mathematics; otherwise they cannot be adequately understood.
Alternatively, computer programmers may use hexadecimal for its 'human-friendly' representation of binary-coded values, rather than decimal (which is convenient for counting because humans have ten fingers). The axioms or logical rules behind mathematics have also varied through time (consider, for example, the adoption and invention of zero ).
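A minimal illustration of this point (Python is used here purely for concreteness; the claim is only that the same value admits different human-facing notations):

value = 0b1111_1111   # a binary-coded value: eight bits, all set
print(value)          # 255  -- decimal, convenient for everyday counting
print(hex(value))     # 0xff -- hexadecimal, one digit per four bits

Each line reports the same underlying quantity; only the numeral system used to write it down changes.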
As perceptions produced by the human brain are subject to illusions , assumptions, deceptions, (induced) hallucinations , and cognitive errors in a general context, it can be questioned whether they are accurate or strictly indicative of truth (see also: philosophy of being ), and what the nature of empiricism itself is in relation to the universe, including whether it is independent of the senses and of the universe.
The human mind has no special claim on reality or approaches to it built out of math. If such constructs as Euler's identity are true then they are true as a map of the human mind and cognition .
Embodied mind theorists thus explain the effectiveness of mathematics—mathematics was constructed by the brain in order to be effective in this universe.
The most accessible, famous, and infamous treatment of this perspective is Where Mathematics Comes From , by George Lakoff and Rafael E. Núñez . In addition, mathematician Keith Devlin has investigated similar concepts with his book The Math Instinct , as has neuroscientist Stanislas Dehaene with his book The Number Sense . For more on the philosophical ideas that inspired this perspective, see cognitive science of mathematics .
Aristotelian realism holds that mathematics studies properties such as symmetry, continuity and order that can be literally realized in the physical world (or in any other world there might be). It contrasts with Platonism in holding that the objects of mathematics, such as numbers, do not exist in an "abstract" world but can be physically realized. For example, the number 4 is realized in the relation between a heap of parrots and the universal "being a parrot" that divides the heap into so many parrots. [ 47 ] [ 48 ] Aristotelian realism is defended by James Franklin and the Sydney School in the philosophy of mathematics and is close to the view of Penelope Maddy that when an egg carton is opened, a set of three eggs is perceived (that is, a mathematical entity realized in the physical world). [ 49 ] A problem for Aristotelian realism is what account to give of higher infinities, which may not be realizable in the physical world.
The Euclidean arithmetic developed by John Penn Mayberry in his book The Foundations of Mathematics in the Theory of Sets [ 45 ] also falls into the Aristotelian realist tradition. Mayberry, following Euclid, considers numbers to be simply "definite multitudes of units" realized in nature—such as "the members of the London Symphony Orchestra" or "the trees in Birnam wood". Whether or not there are definite multitudes of units for which Euclid's Common Notion 5 (the whole is greater than the part) fails and which would consequently be reckoned as infinite is for Mayberry essentially a question about Nature and does not entail any transcendental suppositions.
Psychologism in the philosophy of mathematics is the position that mathematical concepts and/or truths are grounded in, derived from or explained by psychological facts (or laws).
John Stuart Mill seems to have been an advocate of a type of logical psychologism, as were many 19th-century German logicians such as Sigwart and Erdmann, as well as a number of psychologists , past and present: for example, Gustave Le Bon . Psychologism was famously criticized by Frege in his The Foundations of Arithmetic and in many of his works and essays, including his review of Husserl 's Philosophy of Arithmetic . Edmund Husserl, in the first volume of his Logical Investigations , called "The Prolegomena of Pure Logic", criticized psychologism thoroughly and sought to distance himself from it. The "Prolegomena" is considered [ by whom? ] a more concise, fair, and thorough refutation of psychologism than Frege's criticisms, and is regarded today by many [ by whom? ] as a memorable refutation that dealt a decisive blow to psychologism. Psychologism was also criticized by Charles Sanders Peirce and Maurice Merleau-Ponty .
Mathematical empiricism is a form of realism that denies that mathematics can be known a priori at all. It says that we discover mathematical facts by empirical research , just like facts in any of the other sciences. It is not one of the classical three positions advocated in the early 20th century, but primarily arose in the middle of the century. However, an important early proponent of a view like this was John Stuart Mill . Mill's view was widely criticized, because, according to critics, such as A.J. Ayer, [ 50 ] it makes statements like "2 + 2 = 4" come out as uncertain, contingent truths, which we can only learn by observing instances of two pairs coming together and forming a quartet.
Karl Popper was another philosopher to point out empirical aspects of mathematics, observing that "most mathematical theories are, like those of physics and biology, hypothetico-deductive: pure mathematics therefore turns out to be much closer to the natural sciences whose hypotheses are conjectures, than it seemed even recently." [ 51 ] Popper also noted he would "admit a system as empirical or scientific only if it is capable of being tested by experience." [ 52 ]
Contemporary mathematical empiricism, formulated by W. V. O. Quine and Hilary Putnam , is primarily supported by the indispensability argument : mathematics is indispensable to all empirical sciences, and if we want to believe in the reality of the phenomena described by the sciences, we ought also to believe in the reality of those entities required for this description. That is, since physics needs to talk about electrons to say why light bulbs behave as they do, then electrons must exist . Since physics needs to talk about numbers in offering any of its explanations, then numbers must exist. In keeping with Quine and Putnam's overall philosophies, this is a naturalistic argument. It argues for the existence of mathematical entities as the best explanation for experience, thus stripping mathematics of any status as distinct from the other sciences.
Putnam strongly rejected the term " Platonist " as implying an over-specific ontology that was not necessary to mathematical practice in any real sense. He advocated a form of "pure realism" that rejected mystical notions of truth and accepted much quasi-empiricism in mathematics . This grew from the increasingly popular assertion in the late 20th century that no one foundation of mathematics could be ever proven to exist. It is also sometimes called "postmodernism in mathematics" although that term is considered overloaded by some and insulting by others. Quasi-empiricism argues that in doing their research, mathematicians test hypotheses as well as prove theorems. A mathematical argument can transmit falsity from the conclusion to the premises just as well as it can transmit truth from the premises to the conclusion. Putnam has argued that any theory of mathematical realism would include quasi-empirical methods. He proposed that an alien species doing mathematics might well rely on quasi-empirical methods primarily, being willing often to forgo rigorous and axiomatic proofs, and still be doing mathematics—at perhaps a somewhat greater risk of failure of their calculations. He gave a detailed argument for this in New Directions . [ 53 ] Quasi-empiricism was also developed by Imre Lakatos .
The most important criticism of empirical views of mathematics is approximately the same as that raised against Mill. If mathematics is just as empirical as the other sciences, then this suggests that its results are just as fallible as theirs, and just as contingent. In Mill's case the empirical justification comes directly, while in Quine's case it comes indirectly, through the coherence of our scientific theory as a whole, i.e. consilience after E.O. Wilson . Quine suggests that mathematics seems completely certain because the role it plays in our web of belief is extraordinarily central, and that it would be extremely difficult for us to revise it, though not impossible.
For a philosophy of mathematics that attempts to overcome some of the shortcomings of Quine and Gödel's approaches by taking aspects of each see Penelope Maddy 's Realism in Mathematics . Another example of a realist theory is the embodied mind theory .
For experimental evidence suggesting that human infants can do elementary arithmetic, see Brian Butterworth .
Mathematical fictionalism was brought to fame in 1980 when Hartry Field published Science Without Numbers , [ 54 ] which rejected and in fact reversed Quine's indispensability argument. Where Quine suggested that mathematics was indispensable for our best scientific theories, and therefore should be accepted as a body of truths talking about independently existing entities, Field suggested that mathematics was dispensable, and therefore should be considered as a body of falsehoods not talking about anything real. He did this by giving a complete axiomatization of Newtonian mechanics with no reference to numbers or functions at all. He started with the "betweenness" of Hilbert's axioms to characterize space without coordinatizing it, and then added extra relations between points to do the work formerly done by vector fields . Hilbert's geometry is mathematical, because it talks about abstract points, but in Field's theory, these points are the concrete points of physical space, so no special mathematical objects at all are needed.
Having shown how to do science without using numbers, Field proceeded to rehabilitate mathematics as a kind of useful fiction . He showed that mathematical physics is a conservative extension of his non-mathematical physics (that is, every physical fact provable in mathematical physics is already provable from Field's system), so that mathematics is a reliable process whose physical applications are all true, even though its own statements are false. Thus, when doing mathematics, we can see ourselves as telling a sort of story, talking as if numbers existed. For Field, a statement like "2 + 2 = 4" is just as fictitious as " Sherlock Holmes lived at 221B Baker Street"—but both are true according to the relevant fictions.
Another fictionalist, Mary Leng , expresses the perspective succinctly by dismissing any seeming connection between mathematics and the physical world as "a happy coincidence". This rejection separates fictionalism from other forms of anti-realism, which see mathematics itself as artificial but still bounded or fitted to reality in some way. [ 55 ]
By this account, there are no metaphysical or epistemological problems special to mathematics. The only worries left are the general worries about non-mathematical physics, and about fiction in general. Field's approach has been very influential, but is widely rejected. This is in part because of the requirement of strong fragments of second-order logic to carry out his reduction, and because the statement of conservativity seems to require quantification over abstract models or deductions. [ citation needed ]
Social constructivism sees mathematics primarily as a social construct , as a product of culture, subject to correction and change. Like the other sciences, mathematics is viewed as an empirical endeavor whose results are constantly evaluated and may be discarded. However, while on an empiricist view the evaluation is some sort of comparison with "reality", social constructivists emphasize that the direction of mathematical research is dictated by the fashions of the social group performing it or by the needs of the society financing it. However, although such external forces may change the direction of some mathematical research, there are strong internal constraints—the mathematical traditions, methods, problems, meanings and values into which mathematicians are enculturated—that work to conserve the historically defined discipline.
This runs counter to the traditional beliefs of working mathematicians, that mathematics is somehow pure or objective. But social constructivists argue that mathematics is in fact grounded by much uncertainty: as mathematical practice evolves, the status of previous mathematics is cast into doubt, and is corrected to the degree it is required or desired by the current mathematical community. This can be seen in the development of analysis from reexamination of the calculus of Leibniz and Newton. They argue further that finished mathematics is often accorded too much status, and folk mathematics not enough, due to an overemphasis on axiomatic proof and peer review as practices.
The social nature of mathematics is highlighted in its subcultures . Major discoveries can be made in one branch of mathematics and be relevant to another, yet the relationship goes undiscovered for lack of social contact between mathematicians. Social constructivists argue that each speciality forms its own epistemic community and often has great difficulty communicating, or motivating the investigation of unifying conjectures that might relate different areas of mathematics. Social constructivists see the process of "doing mathematics" as actually creating the meaning, while social realists see a deficiency either in the human capacity for abstraction, in humans' cognitive biases , or in mathematicians' collective intelligence as preventing the comprehension of a real universe of mathematical objects. Social constructivists sometimes reject the search for foundations of mathematics as bound to fail, as pointless or even meaningless.
Contributions to this school have been made by Imre Lakatos and Thomas Tymoczko , although it is not clear that either would endorse the title. [ clarification needed ] More recently Paul Ernest has explicitly formulated a social constructivist philosophy of mathematics. [ 56 ] Some consider the work of Paul Erdős as a whole to have advanced this view (although he personally rejected it) because of his uniquely broad collaborations, which prompted others to see and study "mathematics as a social activity", e.g., via the Erdős number . Reuben Hersh has also promoted the social view of mathematics, calling it a "humanistic" approach, [ 57 ] similar to but not quite the same as that associated with Alvin White; [ 58 ] one of Hersh's co-authors, Philip J. Davis , has expressed sympathy for the social view as well.
Rather than focus on narrow debates about the true nature of mathematical truth , or even on practices unique to mathematicians such as the proof , a growing movement from the 1960s to the 1990s began to question the idea of seeking foundations or finding any one right answer to why mathematics works. The starting point for this was Eugene Wigner 's famous 1960 paper " The Unreasonable Effectiveness of Mathematics in the Natural Sciences ", in which he argued that the happy coincidence of mathematics and physics being so well matched seemed to be unreasonable and hard to explain.
Realist and constructivist theories are normally taken to be contraries. However, Karl Popper [ 59 ] argued that a number statement such as "2 apples + 2 apples = 4 apples" can be taken in two senses. In one sense it is irrefutable and logically true. In the second sense it is factually true and falsifiable. Another way of putting this is to say that a single number statement can express two propositions: one of which can be explained on constructivist lines; the other on realist lines. [ 60 ]
Innovations in the philosophy of language during the 20th century renewed interest in whether mathematics is, as is often said, [ citation needed ] the language of science. Although some [ who? ] mathematicians and philosophers would accept the statement "mathematics is a language" (most consider that the language of mathematics is a part of mathematics to which mathematics cannot be reduced), [ citation needed ] linguists [ who? ] believe that the implications of such a statement must be considered. For example, the tools of linguistics are not generally applied to the symbol systems of mathematics, that is, mathematics is studied in a markedly different way from other languages. If mathematics is a language, it is a different type of language from natural languages . Indeed, because of the need for clarity and specificity, the language of mathematics is far more constrained than natural languages studied by linguists. However, the methods developed by Frege and Tarski for the study of mathematical language have been extended greatly by Tarski's student Richard Montague and other linguists working in formal semantics to show that the distinction between mathematical language and natural language may not be as great as it seems.
Mohan Ganesalingam has analysed mathematical language using tools from formal linguistics. [ 61 ] Ganesalingam notes that some features of natural language are not necessary when analysing mathematical language (such as tense ), but many of the same analytical tools can be used (such as context-free grammars ). One important difference is that mathematical objects have clearly defined types , which can be explicitly defined in a text: "Effectively, we are allowed to introduce a word in one part of a sentence, and declare its part of speech in another; and this operation has no analogue in natural language." [ 61 ] : 251
This argument, associated with Willard Quine and Hilary Putnam , is considered by Stephen Yablo to be one of the most challenging arguments in favor of the acceptance of the existence of abstract mathematical entities, such as numbers and sets. [ 62 ] The form of the argument is as follows: (1) we ought to have ontological commitment to all and only those entities that are indispensable to our best scientific theories; (2) mathematical entities are indispensable to our best scientific theories; therefore (3) we ought to have ontological commitment to mathematical entities.
The justification for the first premise is the most controversial. Both Putnam and Quine invoke naturalism to justify the exclusion of all non-scientific entities, and hence to defend the "only" part of "all and only". The assertion that "all" entities postulated in scientific theories, including numbers, should be accepted as real is justified by confirmation holism . Since theories are not confirmed in a piecemeal fashion, but as a whole, there is no justification for excluding any of the entities referred to in well-confirmed theories. This puts the nominalist who wishes to exclude the existence of sets and non-Euclidean geometry , but to include the existence of quarks and other undetectable entities of physics, for example, in a difficult position. [ 63 ]
The anti-realist " epistemic argument" against Platonism has been made by Paul Benacerraf and Hartry Field . Platonism posits that mathematical objects are abstract entities. By general agreement, abstract entities cannot interact causally with concrete, physical entities ("the truth-values of our mathematical assertions depend on facts involving Platonic entities that reside in a realm outside of space-time" [ 64 ] ). Whilst our knowledge of concrete, physical objects is based on our ability to perceive them, and therefore to causally interact with them, there is no parallel account of how mathematicians come to have knowledge of abstract objects. [ 65 ] [ 66 ] [ 67 ] Another way of making the point is that if the Platonic world were to disappear, it would make no difference to the ability of mathematicians to generate proofs , etc., which is already fully accountable in terms of physical processes in their brains.
Field developed his views into fictionalism . Benacerraf also developed the philosophy of mathematical structuralism , according to which there are no mathematical objects. Nonetheless, some versions of structuralism are compatible with some versions of realism.
The argument hinges on the idea that a satisfactory naturalistic account of thought processes in terms of brain processes can be given for mathematical reasoning along with everything else. One line of defense is to maintain that this is false, so that mathematical reasoning uses some special intuition that involves contact with the Platonic realm. A modern form of this argument is given by Sir Roger Penrose . [ 68 ]
Another line of defense is to maintain that abstract objects are relevant to mathematical reasoning in a way that is non-causal, and not analogous to perception. This argument is developed by Jerrold Katz in his 2000 book Realistic Rationalism .
A more radical defense is denial of physical reality, i.e. the mathematical universe hypothesis . In that case, a mathematician's knowledge of mathematics is one mathematical object making contact with another.
Many practicing mathematicians have been drawn to their subject because of a sense of beauty they perceive in it. One sometimes hears the sentiment that mathematicians would like to leave philosophy to the philosophers and get back to mathematics—where, presumably, the beauty lies.
In his work on the divine proportion , H.E. Huntley relates the feeling of reading and understanding someone else's proof of a theorem of mathematics to that of a viewer of a masterpiece of art—the reader of a proof has a similar sense of exhilaration at understanding as the original author of the proof, much as, he argues, the viewer of a masterpiece has a sense of exhilaration similar to the original painter or sculptor. Indeed, one can study mathematical and scientific writings as literature .
Philip J. Davis and Reuben Hersh have commented that the sense of mathematical beauty is universal amongst practicing mathematicians. By way of example, they provide two proofs of the irrationality of √ 2 . The first is the traditional proof by contradiction , ascribed to Euclid ; the second is a more direct proof involving the fundamental theorem of arithmetic that, they argue, gets to the heart of the issue. Davis and Hersh argue that mathematicians find the second proof more aesthetically appealing because it gets closer to the nature of the problem.
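A sketch of the second style of argument (one standard way of running it, not necessarily Davis and Hersh's exact presentation): suppose \sqrt{2} = a/b for positive integers a and b. Then

a^{2} = 2\,b^{2},

and by the fundamental theorem of arithmetic the prime 2 occurs an even number of times in the factorization of a^{2} but an odd number of times in that of 2b^{2}. No such integers can therefore exist, so \sqrt{2} is irrational.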
Paul Erdős was well known for his notion of a hypothetical "Book" containing the most elegant or beautiful mathematical proofs. There is not universal agreement that a result has one "most elegant" proof; Gregory Chaitin has argued against this idea.
Philosophers have sometimes criticized mathematicians' sense of beauty or elegance as being, at best, vaguely stated. By the same token, however, philosophers of mathematics have sought to characterize what makes one proof more desirable than another when both are logically sound.
Another aspect of aesthetics concerning mathematics is mathematicians' views towards the possible uses of mathematics for purposes deemed unethical or inappropriate. The best-known exposition of this view occurs in G. H. Hardy 's book A Mathematician's Apology , in which Hardy argues that pure mathematics is superior in beauty to applied mathematics precisely because it cannot be used for war and similar ends. | https://en.wikipedia.org/wiki/Philosophy_of_mathematics |
In contemporary education , mathematics education —known in Europe as the didactics or pedagogy of mathematics —is the practice of teaching , learning , and carrying out scholarly research into the transfer of mathematical knowledge.
Although research into mathematics education is primarily concerned with the tools, methods, and approaches that facilitate practice or the study of practice, it also covers an extensive field of study encompassing a variety of different concepts, theories and methods. National and international organisations regularly hold conferences and publish literature in order to improve mathematics education.
Elementary mathematics was a core part of education in many ancient civilisations, including ancient Egypt , ancient Babylonia , ancient Greece , ancient Rome , and Vedic India . [ citation needed ] In most cases, formal education was only available to male children with sufficiently high status, wealth, or caste . [ citation needed ] The oldest known mathematics textbook is the Rhind papyrus , dated from circa 1650 BCE. [ 1 ]
Historians of Mesopotamia have confirmed that use of the Pythagorean rule dates back to the Old Babylonian Empire (20th–16th centuries BC) and that it was being taught in scribal schools over one thousand years before the birth of Pythagoras . [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ]
In Plato 's division of the liberal arts into the trivium and the quadrivium , the quadrivium included the mathematical fields of arithmetic and geometry . This structure was continued in the structure of classical education that was developed in medieval Europe. The teaching of geometry was almost universally based on Euclid's Elements . Apprentices to trades such as masons, merchants, and moneylenders could expect to learn such practical mathematics as was relevant to their profession.
In the Middle Ages , the academic status of mathematics declined, because it was strongly associated with trade and commerce, and considered somewhat un-Christian. [ 7 ] Although it continued to be taught in European universities , it was seen as subservient to the study of natural , metaphysical , and moral philosophy . The first modern arithmetic curriculum (starting with addition , then subtraction , multiplication , and division ) arose at reckoning schools in Italy in the 1300s. [ 8 ] Spreading along trade routes, these methods were designed to be used in commerce. They contrasted with Platonic math taught at universities, which was more philosophical and concerned numbers as concepts rather than calculating methods. [ 8 ] They also contrasted with mathematical methods learned by artisan apprentices, which were specific to the tasks and tools at hand. For example, the division of a board into thirds can be accomplished with a piece of string, instead of measuring the length and using the arithmetic operation of division. [ 7 ]
The first mathematics textbooks to be written in English and French were published by Robert Recorde , beginning with The Grounde of Artes in 1543. However, there are many different writings on mathematics and mathematics methodology that date back to 1800 BCE. These were mostly located in Mesopotamia, where the Sumerians were practicing multiplication and division. There are also artifacts demonstrating their methodology for solving equations like the quadratic equation . After the Sumerians, some of the most famous ancient works on mathematics came from Egypt in the form of the Rhind Mathematical Papyrus and the Moscow Mathematical Papyrus . The more famous Rhind Papyrus has been dated back to approximately 1650 BCE, but it is thought to be a copy of an even older scroll. This papyrus was essentially an early textbook for Egyptian students.
The social status of mathematical study was improving by the seventeenth century, with the University of Aberdeen creating a Mathematics Chair in 1613, followed by the Chair in Geometry being set up in University of Oxford in 1619 and the Lucasian Chair of Mathematics being established by the University of Cambridge in 1662.
In the 18th and 19th centuries, the Industrial Revolution led to an enormous increase in urban populations. Basic numeracy skills, such as the ability to tell the time, count money, and carry out simple arithmetic , became essential in this new urban lifestyle. Within the new public education systems, mathematics became a central part of the curriculum from an early age.
By the twentieth century, mathematics was part of the core curriculum in all developed countries .
During the twentieth century, mathematics education was established as an independent field of research. Main events in this development include the following:
Midway through the twentieth century, the cultural impact of the " electronic age " (McLuhan) was also taken up by educational theory and the teaching of mathematics. While the previous approach focused on "working with specialized 'problems' in arithmetic ", the emerging structural approach to knowledge had "small children meditating about number theory and ' sets '." [ 10 ] Since the 1980s, there have been a number of efforts to reform the traditional curriculum, which focuses on continuous mathematics and relegates even some basic discrete concepts to advanced study, to better balance coverage of the continuous and discrete sides of the subject: [ 11 ]
Similar efforts are also underway to shift more focus to mathematical modeling as well as its relationship to discrete math. [ 12 ]
At different times and in different cultures and countries, mathematics education has attempted to achieve a variety of different objectives. These objectives have included:
The method or methods used in any particular context are largely determined by the objectives that the relevant educational system is trying to achieve. Methods of teaching mathematics include the following:
Different levels of mathematics are taught at different ages and in somewhat different sequences in different countries. Sometimes a class may be taught at an earlier age than typical as a special or honors class .
Elementary mathematics in most countries is taught similarly, though there are differences. Most countries tend to cover fewer topics in greater depth than in the United States. [ 26 ] During the primary school years, children learn about whole numbers and arithmetic, including addition, subtraction, multiplication, and division. [ 27 ] Comparisons and measurement are taught, in both numeric and pictorial form, as well as fractions and proportionality , patterns, and various topics related to geometry. [ 28 ]
At high school level in most of the US, algebra , geometry , and analysis ( pre-calculus and calculus ) are taught as separate courses in different years.
On the other hand, in most other countries (and in a few US states), mathematics is taught as an integrated subject, with topics from all branches of mathematics studied every year; students thus undertake a pre-defined course, entailing several topics, rather than choosing courses à la carte as in the United States. Even in these cases, however, several "mathematics" options may be offered, selected based on the student's intended studies after high school. (In South Africa, for example, the options are Mathematics, Mathematical Literacy and Technical Mathematics.) Thus, a science-oriented curriculum typically overlaps the first year of university mathematics, and includes differential calculus and trigonometry at age 16–17, with integral calculus , complex numbers , analytic geometry , exponential and logarithmic functions , and infinite series in the final year of secondary school; probability and statistics are similarly often taught.
At college and university level, science and engineering students will be required to take multivariable calculus , differential equations , and linear algebra ; at several US colleges, the minor or AS in mathematics substantively comprises these courses. Mathematics majors study additional areas of pure mathematics —and often applied mathematics—with the requirement of specified advanced courses in analysis and modern algebra . Other topics in pure mathematics include differential geometry , set theory , and topology . Applied mathematics may be taken as a major subject in its own right, covering partial differential equations , optimization , and numerical analysis among other topics. Courses here are also taught within other programs: for example, civil engineers may be required to study fluid mechanics , [ 29 ] and "math for computer science" might include graph theory , permutation , probability, and formal mathematical proofs . [ 30 ] Pure and applied math degrees often include modules in probability theory or mathematical statistics , as well as stochastic processes .
( Theoretical ) physics is mathematics-intensive, often overlapping substantively with the pure or applied math degree. Business mathematics is usually limited to introductory calculus and (sometimes) matrix calculations; economics programs additionally cover optimization , often differential equations and linear algebra , and sometimes analysis. Business and social science students also typically take statistics and probability courses.
Throughout most of history, standards for mathematics education were set locally, by individual schools or teachers, depending on the levels of achievement that were relevant to, realistic for, and considered socially appropriate for their pupils.
In modern times, there has been a move towards regional or national standards, usually under the umbrella of a wider standard school curriculum. In England , for example, standards for mathematics education are set as part of the National Curriculum for England, [ 31 ] while Scotland maintains its own educational system. Many other countries have centralized ministries which set national standards or curricula, and sometimes even textbooks.
Ma (2000) summarized the research of others who found, based on nationwide data, that students with higher scores on standardized mathematics tests had taken more mathematics courses in high school. This led some states to require three years of mathematics instead of two. But because this requirement was often met by taking another lower-level mathematics course, the additional courses had a “diluted” effect in raising achievement levels. [ 32 ]
In North America, the National Council of Teachers of Mathematics (NCTM) published the Principles and Standards for School Mathematics in 2000 for the United States and Canada, which boosted the trend towards reform mathematics . In 2006, the NCTM released Curriculum Focal Points , which recommend the most important mathematical topics for each grade level through grade 8. However, these standards were guidelines to implement as American states and Canadian provinces chose. In 2010, the National Governors Association Center for Best Practices and the Council of Chief State School Officers published the Common Core State Standards for US states, which were subsequently adopted by most states. Adoption of the Common Core State Standards in mathematics is at the discretion of each state, and is not mandated by the federal government. [ 33 ] "States routinely review their academic standards and may choose to change or add onto the standards to best meet the needs of their students." [ 34 ] The NCTM has state affiliates that have different education standards at the state level. For example, Missouri has the Missouri Council of Teachers of Mathematics (MCTM) which has its pillars and standards of education listed on its website. The MCTM also offers membership opportunities to teachers and future teachers so that they can stay up to date on the changes in math educational standards. [ 35 ]
The Programme for International Student Assessment (PISA), created by the Organisation for Economic Co-operation and Development (OECD), is a global program studying the reading, science, and mathematics abilities of 15-year-old students. [ 36 ] The first assessment was conducted in the year 2000 with 43 countries participating. [ 37 ] PISA has repeated this assessment every three years to provide comparable data, helping to guide global education to better prepare youth for future economies. There have been many ramifications following the results of triennial PISA assessments due to implicit and explicit responses of stakeholders, which have led to education reform and policy change. [ 37 ] [ 38 ] [ 23 ]
According to Hiebert and Grouws, "Robust, useful theories of classroom teaching do not yet exist." [ 39 ] However, there are useful theories on how children learn mathematics, and much research has been conducted in recent decades to explore how these theories can be applied to teaching. The following results are examples of some of the current findings in the field of mathematics education.
As with other educational research (and the social sciences in general), mathematics education research depends on both quantitative and qualitative studies. Quantitative research includes studies that use inferential statistics to answer specific questions, such as whether a certain teaching method gives significantly better results than the status quo. The best quantitative studies involve randomized trials where students or classes are randomly assigned different methods to test their effects. They depend on large samples to obtain statistically significant results.
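As a hedged sketch of the kind of inferential comparison such a study might make (the group names, score distributions, and sample sizes below are invented purely for illustration), post-test scores from two randomly assigned groups could be compared with a two-sample t-test:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical post-test scores for two randomly assigned classes
method_a = rng.normal(loc=72, scale=10, size=60)  # new teaching method
method_b = rng.normal(loc=68, scale=10, size=60)  # status quo

# Welch's two-sample t-test: is the observed mean difference statistically significant?
t_stat, p_value = stats.ttest_ind(method_a, method_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A small p-value would count as evidence that the difference in group means is unlikely under the null hypothesis of no effect, which is the sense of "significantly better results" used above.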
Qualitative research , such as case studies , action research , discourse analysis , and clinical interviews , depends on small but focused samples in an attempt to understand student learning and to look at how and why a given method gives the results it does. Such studies cannot conclusively establish that one method is better than another, as randomized trials can, but unless it is understood why treatment X is better than treatment Y, application of results of quantitative studies will often lead to "lethal mutations" [ 39 ] of the finding in actual classrooms. Exploratory qualitative research is also useful for suggesting new hypotheses , which can eventually be tested by randomized experiments. Both qualitative and quantitative studies, therefore, are considered essential in education—just as in the other social sciences. [ 47 ] Many studies are “mixed”, simultaneously combining aspects of both quantitative and qualitative research, as appropriate.
There has been some controversy over the relative strengths of different types of research. Because of an opinion that randomized trials provide clear, objective evidence on “what works”, policymakers often consider only those studies. Some scholars have pushed for more randomized experiments in which teaching methods are randomly assigned to classes. [ 48 ] [ 49 ] In other disciplines concerned with human subjects—like biomedicine , psychology , and policy evaluation—controlled, randomized experiments remain the preferred method of evaluating treatments. [ 50 ] [ 51 ] Educational statisticians and some mathematics educators have been working to increase the use of randomized experiments to evaluate teaching methods. [ 49 ] On the other hand, many scholars in schools of education have argued against increasing the number of randomized experiments, often because of philosophical objections, such as the ethical difficulty of randomly assigning students to various treatments when such treatments are not yet known to be effective, [ 52 ] or the difficulty of assuring rigid control of the independent variable in fluid, real school settings. [ 53 ]
In the United States, the National Mathematics Advisory Panel (NMAP) published a report in 2008 based on studies, some of which used randomized assignment of treatments to experimental units , such as classrooms or students. The NMAP report's preference for randomized experiments received criticism from some scholars. [ 54 ] In 2010, the What Works Clearinghouse (essentially the research arm for the Department of Education ) responded to ongoing controversy by extending its research base to include non-experimental studies, including regression discontinuity designs and single-case studies . [ 55 ] | https://en.wikipedia.org/wiki/Philosophy_of_mathematics_education |
In philosophy , the philosophy of physics deals with conceptual and interpretational issues in physics , many of which overlap with research done by certain kinds of theoretical physicists . Historically, philosophers of physics have engaged with questions such as the nature of space, time, matter and the laws that govern their interactions, as well as the epistemological and ontological basis of the theories used by practicing physicists. The discipline draws upon insights from various areas of philosophy, including metaphysics , epistemology , and philosophy of science , while also engaging with the latest developments in theoretical and experimental physics.
Contemporary work focuses on issues at the foundations of the three pillars of modern physics: quantum mechanics, relativity (with its account of space and time), and thermal and statistical physics.
Other areas of focus include the nature of physical laws , symmetries , and conservation principles ; the role of mathematics; and philosophical implications of emerging fields like quantum gravity , quantum information , and complex systems . Philosophers of physics have argued that conceptual analysis clarifies foundations, interprets implications, and guides theory development in physics.
The existence and nature of space and time (or space-time) are central topics in the philosophy of physics. [ 1 ] Issues include (1) whether space and time are fundamental or emergent, and (2) how space and time are operationally different from one another.
In classical mechanics, time is taken to be a fundamental quantity (that is, a quantity which cannot be defined in terms of other quantities). However, certain theories such as loop quantum gravity claim that spacetime is emergent. As Carlo Rovelli , one of the founders of loop quantum gravity, has said: "No more fields on spacetime: just fields on fields". [ 2 ] Time is defined via measurement—by its standard time interval. Currently, the standard time interval (called the "conventional second ", or simply the "second") is defined as 9,192,631,770 periods of the radiation corresponding to a hyperfine transition of the caesium-133 atom ( ISO 31-1 ). What time is and how it works follows from the above definition. Time can then be combined mathematically with the fundamental quantities of space and mass to define concepts such as velocity , momentum , energy , and fields .
Both Isaac Newton and Galileo Galilei , [ 3 ] as well as most people up until the 20th century, thought that time was the same for everyone everywhere. [ 4 ] The modern conception of time is based on Albert Einstein 's theory of relativity and Hermann Minkowski 's spacetime , in which rates of time run differently in different inertial frames of reference, and space and time are merged into spacetime . Einstein's general relativity as well as the redshift of the light from receding distant galaxies indicate that the entire Universe and possibly space-time itself began about 13.8 billion years ago in the Big Bang . For most philosophers (though not all), Einstein's theory of special relativity made theories of time on which there is something metaphysically special about the present seem much less plausible, since the reference-frame dependence of simultaneity appears to leave no room for a privileged present moment.
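As a simple quantitative illustration of frame-dependent time (the standard special-relativistic formula, stated here without derivation), a clock moving at speed v relative to an inertial frame accumulates proper time \Delta\tau related to that frame's coordinate time \Delta t by

\Delta t = \frac{\Delta\tau}{\sqrt{1 - v^{2}/c^{2}}} = \gamma\,\Delta\tau ,

so that moving clocks run slow as judged from the frame in which they move.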
Space is one of the few fundamental quantities in physics, meaning that it cannot be defined via other quantities because there is nothing more fundamental known at present. Thus, similar to the definition of other fundamental quantities (like time and mass ), space is defined via measurement. Currently, the standard space interval, called a standard metre or simply metre, is defined as the distance traveled by light in a vacuum during a time interval of 1/299792458 of a second (exact).
In classical physics , space is a three-dimensional Euclidean space where any position can be described using three coordinates and parameterised by time. Special and general relativity use four-dimensional spacetime rather than three-dimensional space; and currently there are many speculative theories which use more than three spatial dimensions.
Quantum mechanics is a large focus of contemporary philosophy of physics, specifically concerning the correct interpretation of quantum mechanics. Very broadly, much of the philosophical work that is done in quantum theory is trying to make sense of superposition states: [ 5 ] the property that particles seem to not just be in one determinate position at one time, but are somewhere 'here', and also 'there' at the same time. Such a radical view turns many common sense metaphysical ideas on their head. Much of contemporary philosophy of quantum mechanics aims to make sense of what the very empirically successful formalism of quantum mechanics tells us about the physical world.
The uncertainty principle is a mathematical relation asserting a fundamental limit to the precision with which any pair of conjugate variables , e.g. position and momentum, can be simultaneously measured. In the formalism of operator notation , this limit is expressed in terms of the commutator of the variables' corresponding operators.
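As a brief illustration (in standard textbook form, not tied to any particular interpretation), the position–momentum case and the general Robertson relation for two observables A and B read

\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}, \qquad \Delta A \,\Delta B \;\ge\; \frac{1}{2}\left|\langle [\hat{A},\hat{B}] \rangle\right| ,

where [\hat{A},\hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A} is the commutator and \hbar is the reduced Planck constant.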
The uncertainty principle arose as an answer to the question: How does one measure the location of an electron around a nucleus if an electron is a wave? When quantum mechanics was developed, it was seen to be a relation between the classical and quantum descriptions of a system using wave mechanics.
Bell's theorem is a term encompassing a number of closely related results in physics, all of which determine that quantum mechanics is incompatible with local hidden-variable theories given some basic assumptions about the nature of measurement. "Local" here refers to the principle of locality , the idea that a particle can only be influenced by its immediate surroundings, and that interactions mediated by physical fields cannot propagate faster than the speed of light . " Hidden variables " are putative properties of quantum particles that are not included in the theory but nevertheless affect the outcome of experiments. In the words of physicist John Stewart Bell , for whom this family of results is named, "If [a hidden-variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local." [ 6 ]
The term is broadly applied to a number of different derivations, the first of which was introduced by Bell in a 1964 paper titled "On the Einstein Podolsky Rosen Paradox ". Bell's paper was a response to a 1935 thought experiment that Albert Einstein , Boris Podolsky and Nathan Rosen proposed, arguing that quantum physics is an "incomplete" theory. [ 7 ] [ 8 ] By 1935, it was already recognized that the predictions of quantum physics are probabilistic . Einstein, Podolsky and Rosen presented a scenario that involves preparing a pair of particles such that the quantum state of the pair is entangled , and then separating the particles to an arbitrarily large distance. The experimenter has a choice of possible measurements that can be performed on one of the particles. When they choose a measurement and obtain a result, the quantum state of the other particle apparently collapses instantaneously into a new state depending upon that result, no matter how far away the other particle is. This suggests that either the measurement of the first particle somehow also influenced the second particle faster than the speed of light, or that the entangled particles had some unmeasured property which pre-determined their final quantum states before they were separated. Therefore, assuming locality, quantum mechanics must be incomplete, as it cannot give a complete description of the particle's true physical characteristics. In other words, quantum particles, like electrons and photons , must carry some property or attributes not included in quantum theory, and the uncertainties in quantum theory's predictions would then be due to ignorance or unknowability of these properties, later termed "hidden variables".
Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named the Bell inequality . Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles can carry non-classical correlations no matter how widely they ever become separated. [ 9 ] [ 10 ]
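One widely used Bell-type constraint, the CHSH inequality (a later variant rather than Bell's original 1964 inequality, given here only as an illustration), considers measurement settings a, a' on one particle and b, b' on the other, with E denoting the correlation of outcomes. Any local hidden-variable account requires

S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 ,

whereas quantum mechanics predicts values of |S| up to 2\sqrt{2} (the Tsirelson bound) for suitable settings on an entangled pair, in agreement with experiment.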
Multiple variations on Bell's theorem were put forward in the following years, introducing other closely related conditions generally known as Bell (or "Bell-type") inequalities. The first rudimentary experiment designed to test Bell's theorem was performed in 1972 by John Clauser and Stuart Freedman . [ 11 ] More advanced experiments, known collectively as Bell tests , have been performed many times since. To date, Bell tests have consistently found that physical systems obey quantum mechanics and violate Bell inequalities; which is to say that the results of these experiments are incompatible with any local hidden variable theory. [ 12 ] [ 13 ]
The exact nature of the assumptions required to prove a Bell-type constraint on correlations has been debated by physicists and by philosophers. While the significance of Bell's theorem is not in doubt, its full implications for the interpretation of quantum mechanics remain unresolved.
In March 1927, working in Niels Bohr 's institute, Werner Heisenberg formulated the principle of uncertainty, thereby laying the foundation of what became known as the Copenhagen interpretation of quantum mechanics. Heisenberg had been studying the papers of Paul Dirac and Pascual Jordan . He discovered a problem with the measurement of basic variables in the equations. His analysis showed that uncertainties, or imprecisions, always turned up if one tried to measure the position and the momentum of a particle at the same time. Heisenberg concluded that these uncertainties or imprecisions in the measurements were not the fault of the experimenter but were fundamental in nature, being inherent mathematical properties of operators in quantum mechanics that arise from the definitions of these operators. [ 14 ]
The Copenhagen interpretation is somewhat loosely defined, as many physicists and philosophers of physics have advanced similar but not identical views of quantum mechanics. It is principally associated with Heisenberg and Bohr, despite their philosophical differences. [ 15 ] [ 16 ] Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule , and the principle of complementarity , which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously. [ 17 ] Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object, except according to the results of its measurement . Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of any arbitrary factors in the physicist's mind. [ 18 ] : 85–90
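For concreteness, the Born rule mentioned above assigns probabilities to measurement outcomes from the quantum state: for a system in state |\psi\rangle and an outcome a associated with eigenstate |a\rangle,

P(a) = \left|\langle a|\psi\rangle\right|^{2} .

This is the standard statement of the rule; nothing interpretation-specific is assumed in writing it down.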
The many-worlds interpretation of quantum mechanics by Hugh Everett III claims that the wave-function of a quantum system describes the reality of that physical system. It denies wavefunction collapse, and holds that superposition states should be interpreted literally as describing the reality of many worlds in which objects are located, not as simply indicating the indeterminacy of those variables. This is sometimes argued as a corollary of scientific realism , [ 19 ] which states that scientific theories aim to give us literally true descriptions of the world.
One issue for the Everett interpretation is the role that probability plays on this account. The Everettian account is completely deterministic, whereas probability seems to play an ineliminable role in quantum mechanics. [ 20 ] Contemporary Everettians have argued that one can get an account of probability that follows the Born rule through certain decision-theoretic proofs, [ 21 ] but there is as yet no consensus about whether any of these proofs are successful. [ 22 ] [ 23 ] [ 24 ]
Physicist Roland Omnès noted that it is impossible to experimentally differentiate between Everett's view, on which the wave-function decoheres into distinct worlds that each exist equally, and the more traditional view on which a decoherent wave-function leaves only one unique real result. Hence, the dispute between the two views represents a great "chasm": "Every characteristic of reality has reappeared in its reconstruction by our theoretical model; every feature except one: the uniqueness of facts." [ 25 ]
The philosophy of thermal and statistical physics is concerned with the foundational issues and conceptual implications of thermodynamics and statistical mechanics . These branches of physics deal with the macroscopic behavior of systems comprising a large number of microscopic entities, such as particles, and the nature of laws that emerge from these systems like irreversibility and entropy . Interest of philosophers in statistical mechanics first arose from the observation of an apparent conflict between the time-reversal symmetry of fundamental physical laws and the irreversibility observed in thermodynamic processes, known as the arrow of time problem. Philosophers have sought to understand how the asymmetric behavior of macroscopic systems, such as the tendency of heat to flow from hot to cold bodies, can be reconciled with the time-symmetric laws governing the motion of individual particles.
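A standard formula linking the two levels of description is Boltzmann's entropy relation (quoted here as the textbook statement), which connects the thermodynamic entropy S of a macrostate to the number W of microstates realizing it:

S = k_{\mathrm{B}} \ln W ,

where k_{\mathrm{B}} is the Boltzmann constant. On this picture, irreversible behaviour is understood statistically: isolated systems overwhelmingly evolve toward macrostates with vastly larger W, even though the underlying microscopic dynamics is time-symmetric.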
Another key issue is the interpretation of probability in statistical mechanics , which is primarily concerned with the question of whether probabilities in statistical mechanics are epistemic , reflecting our lack of knowledge about the precise microstate of a system, or ontic , representing an objective feature of the physical world. The epistemic interpretation, also known as the subjective or Bayesian view, holds that probabilities in statistical mechanics are a measure of our ignorance about the exact state of a system. According to this view, we resort to probabilistic descriptions only due to the practical impossibility of knowing the precise properties of all its micro-constituents, like the positions and momenta of particles. As such, the probabilities are not objective features of the world but rather arise from our ignorance. In contrast, the ontic interpretation, also called the objective or frequentist view, asserts that probabilities in statistical mechanics are real, physical properties of the system itself. Proponents of this view argue that the probabilistic nature of statistical mechanics is not merely a reflection of our ignorance but an intrinsic feature of the physical world, and that even if we had complete knowledge of the microstate of a system, the macroscopic behavior would still be best described by probabilistic laws.
Aristotelian physics viewed the universe as a sphere with a center. Matter, composed of the classical elements (earth, water, air, and fire), sought to go down towards the center of the universe, the center of the Earth, or up, away from it. Things in the aether such as the Moon, the Sun, planets, or stars circled the center of the universe. [ 26 ] Movement is defined as change in place, [ 26 ] i.e. space. [ 27 ]
The implicit axioms of Aristotelian physics with respect to movement of matter in space were superseded in Newtonian physics by Newton's first law of motion . [ 28 ]
Every body perseveres in its state either of rest or of uniform motion in a straight line, except insofar as it is compelled to change its state by impressed forces.
"Every body" includes the Moon, and an apple; and includes all types of matter, air as well as water, stones, or even a flame. Nothing has a natural or inherent motion. [ 29 ] Absolute space being three-dimensional Euclidean space , infinite and without a center. [ 29 ] Being "at rest" means being at the same place in absolute space over time. [ 30 ] The topology and affine structure of space must permit movement in a straight line at a uniform velocity; thus both space and time must have definite, stable dimensions . [ 31 ]
Gottfried Wilhelm Leibniz , 1646–1716, was a contemporary of Newton. He contributed a fair amount to the statics and dynamics emerging around him, often disagreeing with Descartes and Newton . He devised a new theory of motion ( dynamics ) based on kinetic energy and potential energy , which posited space as relative, whereas Newton was thoroughly convinced that space was absolute. An important example of Leibniz's mature physical thinking is his Specimen Dynamicum of 1695. [ 32 ]
Until the discovery of subatomic particles and the quantum mechanics governing them, many of Leibniz's speculative ideas about aspects of nature not reducible to statics and dynamics made little sense.
He anticipated Albert Einstein by arguing, against Newton, that space , time and motion are relative, not absolute: [ 33 ] "As for my own opinion, I have said more than once, that I hold space to be something merely relative, as time is, that I hold it to be an order of coexistences, as time is an order of successions." [ 34 ] | https://en.wikipedia.org/wiki/Philosophy_of_physics |
The philosophy of statistics is the study of the mathematical, conceptual, and philosophical foundations and analyses of statistics and statistical inference. For example, Dennis Lindley argues for the more general analysis of statistics as the study of uncertainty . [ 1 ] The subject involves the meaning , justification , utility , use and abuse of statistics and its methodology , and ethical and epistemological issues involved in the consideration of the choice and interpretation of data and methods of statistics . [ 2 ] | https://en.wikipedia.org/wiki/Philosophy_of_statistics |
The philosophy of technology is a sub-field of philosophy that studies the nature of technology and its social effects.
Philosophical discussion of questions relating to technology (or its Greek ancestor techne ) dates back to the very dawn of Western philosophy . [ 1 ] The phrase "philosophy of technology" was first used in the late 19th century by German-born philosopher and geographer Ernst Kapp , who published a book titled Elements of a Philosophy of Technology (German title: Grundlinien einer Philosophie der Technik ). [ 2 ] [ 3 ] [ 4 ]
The western term 'technology' comes from the Greek term techne (τέχνη) (art, or craft knowledge) and philosophical views on technology can be traced to the very roots of Western philosophy . A common theme in the Greek view of techne is that it arises as an imitation of nature (for example, weaving developed out of watching spiders). Greek philosophers such as Heraclitus and Democritus endorsed this view. [ 1 ] In his Physics , Aristotle agreed that this imitation was often the case, but also argued that techne can go beyond nature and complete "what nature cannot bring to a finish." [ 5 ] Aristotle also argued that nature ( physis ) and techne are ontologically distinct: natural things have an inner principle of generation and motion, as well as an inner teleological final cause, whereas techne is shaped by an outside cause and directed toward an outside telos (goal or end). [ 6 ] Natural things strive for some end and reproduce themselves, while techne does not. In Plato 's Timaeus , the world is depicted as being the work of a divine craftsman ( Demiurge ) who created the world in accordance with eternal forms as an artisan makes things using blueprints. Moreover, Plato argues in the Laws that what a craftsman does is imitate this divine craftsman.
During the period of the Roman empire and late antiquity, authors produced practical works such as Vitruvius ' De Architectura (1st century BC); this tradition of practical treatises was later continued in works such as Agricola 's De Re Metallica (1556). Medieval Scholastic philosophy generally upheld the traditional view of technology as imitation of nature. During the Renaissance, Francis Bacon became one of the first modern authors to reflect on the impact of technology on society. In his utopian work New Atlantis (1627), Bacon put forth an optimistic worldview in which a fictional institution ( Salomon's House ) uses natural philosophy and technology to extend man's power over nature – for the betterment of society, through works which improve living conditions. The goal of this fictional foundation is "...the knowledge of causes, and secret motions of things; and the enlarging of the bounds of human empire, to the effecting of all things possible". [ citation needed ]
The German-born philosopher and geographer Ernst Kapp , who was based in Texas , published the fundamental book "Grundlinien einer Philosophie der Technik" in 1877. [ 3 ] Kapp was deeply inspired by the philosophy of Hegel and regarded technique as a projection of human organs. In the European context, Kapp is referred to as the founder of the philosophy of technology.
Another, more materialistic position on technology which became very influential in the 20th-century philosophy of technology was centered on the ideas of Benjamin Franklin and Karl Marx . [ citation needed ]
Five early and prominent 20th-century philosophers to directly address the effects of modern technology on humanity include John Dewey , Martin Heidegger , Herbert Marcuse , Günther Anders and Hannah Arendt . They all saw technology as central to modern life, although Heidegger, Anders, [ 7 ] Arendt [ 8 ] and Marcuse were more ambivalent and critical than Dewey. The problem for Heidegger was the hidden nature of technology's essence, Gestell or Enframing, which posed for humans what he called its greatest danger and thus its greatest possibility. Heidegger's major work on technology is found in The Question Concerning Technology .
Technological determinists such as Jacques Ellul have argued that modern technology constitutes a unified monolithic and deterministic force, and that the notion of technology being simply a tool is a serious error. Ellul views the modern technological world-system as being motivated by the needs of its own efficiency and power, not the welfare of the human race or the integrity of the biosphere. [ 9 ]
While a number of important individual works were published in the second half of the twentieth century, Paul Durbin has identified two books published at the turn of the century as marking the development of the philosophy of technology as an academic subdiscipline with canonical texts. [ 10 ] Those were Technology and the Good Life (2000), edited by Eric Higgs , Andrew Light, and David Strong, and American Philosophy of Technology (2001) by Hans Achterhuis . Several collected volumes with topics in philosophy of technology have come out over the past decade, and the journals Techne: Research in Philosophy and Technology (the journal of the Society for Philosophy and Technology , published by the Philosophy Documentation Center ) and Philosophy & Technology ( Springer ) publish exclusively works in philosophy of technology. Philosophers of technology reflect broadly on and work in the area, taking an interest in diverse topics such as geoengineering , internet data and privacy, our understandings of internet cats, technological function and the epistemology of technology, computer ethics, biotechnology and its implications, transcendence in space, and technological ethics more broadly. [ citation needed ]
Bernard Stiegler argued in his Technics and Time , as well as in his other works, that the question of technology has been repressed (in the sense of Freud) by the history of philosophy. Instead, Stiegler showed how the question of technology constitutes the fundamental question of philosophy. Stiegler shows, for example in Plato's Meno , that technology is that which makes anamnesis, namely the access to truth, possible. Stiegler's deconstruction of the history of philosophy through technology as the supplement opens a different path to understanding the place of technology in philosophy than the established field of philosophy of technology. In the same vein, philosophers – such as Alexander Galloway , Eugene Thacker , and McKenzie Wark in their book Excommunication – argue that advances in and the pervasiveness of digital technologies transform the philosophy of technology into a new 'first philosophy'. Citing examples such as the analysis of writing and speech in Plato's dialogue The Phaedrus , Galloway et al. suggest that instead of considering technology as secondary to ontology, technology be understood as prior to the very possibility of philosophy: "Does everything that exists, exist to be presented and represented, to be mediated and remediated, to be communicated and translated? There are mediative situations in which heresy, exile, or banishment carry the day, not repetition, communion, or integration. There are certain kinds of messages that state 'there will be no more messages'. Hence for every communication there is a correlative excommunication." [ 11 ]
There has been additional reflection focusing on the philosophy of engineering , as a sub-field within philosophy of technology. Ibo van de Poel and David E. Goldberg edited a volume, Philosophy and Engineering: An Emerging Agenda (2010) which contains a number of research articles focused on design, epistemology, ontology and ethics in engineering .
Technological determinism is the idea that "features of technology [determine] its use and the role of a progressive society was to adapt to [and benefit from] technological change." [ 12 ] The alternative perspective would be social determinism, which looks upon society as being at fault for the "development and deployment" [ 13 ] of technologies. Lelia Green used recent gun massacres such as the Port Arthur Massacre and the Dunblane Massacre to illustrate both technological determinism and social determinism . According to Green, a technology can be thought of as a neutral entity only when the sociocultural context and issues surrounding the specific technology are removed. Only then does it become visible that there is a relationship between social groups and the power conferred through the possession of technologies. A compatibilist position between these two is the interactional stance on technology proposed by Batya Friedman, which states that social forces and technology co-construct and co-vary with one another.
| https://en.wikipedia.org/wiki/Philosophy_of_technology |
The phlogiston theory , a superseded scientific theory , postulated the existence of a fire-like element dubbed phlogiston ( / f l ɒ ˈ dʒ ɪ s t ən , f l oʊ -, - ɒ n / ) [ 1 ] [ 2 ] contained within combustible bodies and released during combustion . The name comes from the Ancient Greek φλογιστόν phlogistón ( burning up ), from φλόξ phlóx ( flame ). The idea of a phlogistic substance was first proposed in 1667 by Johann Joachim Becher and later put together more formally in 1697 by Georg Ernst Stahl . Phlogiston theory attempted to explain chemical processes such as combustion and rusting , now collectively known as oxidation . The theory was challenged by the concomitant weight increase and was abandoned before the end of the 18th century following experiments by Antoine Lavoisier in the 1770s and by other scientists. Phlogiston theory led to experiments that ultimately resulted in the identification ( c. 1771 ), and naming (1777), of oxygen by Joseph Priestley and Antoine Lavoisier , respectively.
Phlogiston theory states that phlogisticated substances contain phlogiston and that they dephlogisticate when burned, releasing stored phlogiston, which is absorbed by the air. Growing plants then absorb this phlogiston, which is why air does not spontaneously combust and also why plant matter burns. This method of accounting for combustion was the inverse of the oxygen theory of Antoine Lavoisier.
In general, substances that burned in the air were said to be rich in phlogiston; the fact that combustion soon ceased in an enclosed space was taken as clear-cut evidence that air had the capacity to absorb only a finite amount of phlogiston. When the air had become completely phlogisticated it would no longer serve to support the combustion of any material, nor would a metal heated in it yield a calx ; nor could phlogisticated air support life. Breathing was thought to take phlogiston out of the body. [ 3 ]
Joseph Black 's Scottish student Daniel Rutherford discovered nitrogen in 1772, and the pair used the theory to explain his results. The residue of air left after burning, in fact a mixture of nitrogen and carbon dioxide, was sometimes referred to as phlogisticated air, having taken up all of the phlogiston. Conversely, when Joseph Priestley discovered oxygen , he believed it to be dephlogisticated air, capable of combining with more phlogiston and thus supporting combustion for longer than ordinary air. [ 4 ]
Empedocles had formulated the classical theory that there were four elements—water, earth, fire, and air—and Aristotle reinforced this idea by characterising them as moist, dry, hot, and cold. Fire was thus thought of as a substance, and burning was seen as a process of decomposition that applied only to compounds. Experience had shown that burning was not always accompanied by a loss of material, and a better theory was needed to account for this. [ 5 ]
In 1667, Johann Joachim Becher published his book Physica subterranea , which contained the first instance of what would become the phlogiston theory. In his book, Becher eliminated fire and air from the classical element model and replaced them with three forms of the earth: terra lapidea , terra fluida , and terra pinguis . [ 6 ] [ 7 ] Terra pinguis was the element that imparted oily, sulphurous , or combustible properties. [ 8 ] Becher believed that terra pinguis was a key feature of combustion and was released when combustible substances were burned. [ 6 ] Becher did not have much to do with phlogiston theory as we know it now, but he had a large influence on his student Stahl. Becher's main contribution was the start of the theory itself; however, much of it was changed after him. [ 9 ] Becher's idea was that combustible substances contain an ignitable matter, the terra pinguis . [ 10 ]
In 1703, Georg Ernst Stahl , a professor of medicine and chemistry at Halle , proposed a variant of the theory in which he renamed Becher's terra pinguis to phlogiston , and it was in this form that the theory probably had its greatest influence. [ 11 ] The term 'phlogiston' itself was not something that Stahl invented. There is evidence that the word was used as early as 1606, and in a way that was very similar to what Stahl was using it for. [ 9 ] The term was derived from a Greek word meaning inflame. The following paragraph describes Stahl's view of phlogiston:
To Stahl, metals were compounds containing phlogiston in combination with metallic oxides (calces); when ignited, the phlogiston was freed from the metal leaving the oxide behind. When the oxide was heated with a substance rich in phlogiston, such as charcoal, the calx again took up phlogiston and regenerated the metal. Phlogiston was a definite substance, the same in all its combinations. [ 10 ]
Stahl's definition of phlogiston first appeared in his Zymotechnia fundamentalis , published in 1697. His most quoted definition was found in the treatise on chemistry entitled Fundamenta chymiae in 1723. [ 9 ] According to Stahl, phlogiston was a substance that was not able to be put into a bottle but could be transferred nonetheless. To him, wood was just a combination of ash and phlogiston, and making a metal was as simple as getting a metal calx and adding phlogiston. [ 10 ] Soot was almost pure phlogiston, which is why heating it with a metallic calx transforms the calx into the metal. Stahl attempted to prove that the phlogiston in soot and sulphur was identical by converting sulphates to liver of sulphur using charcoal . He did not account for the increase in weight on combustion of tin and lead that was known at the time. [ 12 ]
Johann Heinrich Pott , a student of one of Stahl's students, expanded the theory and attempted to make it much more understandable to a general audience . He compared phlogiston to light or fire, saying that all three were substances whose natures were widely understood but not easily defined . He thought that phlogiston should not be considered as a particle but as an essence that permeates substances, arguing that in a pound of any substance, one could not simply pick out the particles of phlogiston. [ 9 ] Pott also observed the fact that when certain substances are burned they increase in mass instead of losing the mass of the phlogiston as it escapes; according to him, phlogiston was the basic fire principle and could not be obtained by itself. Flames were considered to be a mix of phlogiston and water, while a phlogiston-and-earthy mixture could not burn properly. Phlogiston was thought to permeate everything in the universe, and it could be released as heat when combined with an acid. Pott proposed the following properties:
Pott's formulations proposed little new theory; he merely supplied further details and rendered existing theory more approachable to the common man.
Johann Juncker also created a very complete picture of phlogiston. When reading Stahl's work, he assumed that phlogiston was in fact very material. He, therefore, came to the conclusion that phlogiston has the property of levity, or that it makes the compound that it is in much lighter than it would be without the phlogiston. He also showed that air was needed for combustion by putting substances in a sealed flask and trying to burn them. [ 9 ]
Guillaume-François Rouelle brought the theory of phlogiston to France, where he was a very influential scientist and teacher, popularizing the theory very quickly. Many of his students became very influential scientists in their own right, Lavoisier included. [ 10 ] The French viewed phlogiston as a very subtle principle that vanishes in all analysis, yet it is in all bodies. Essentially they followed straight from Stahl's theory. [ 9 ]
Giovanni Antonio Giobert introduced Lavoisier's work in Italy. Giobert won a prize competition from the Academy of Letters and Sciences of Mantua in 1792 for his work refuting phlogiston theory. He presented a paper at the Académie royale des Sciences of Turin on 18 March 1792, entitled Examen chimique de la doctrine du phlogistique et de la doctrine des pneumatistes par rapport à la nature de l'eau ("Chemical examination of the doctrine of phlogiston and the doctrine of pneumatists in relation to the nature of water"), which is considered the most original defence of Lavoisier's theory of water composition to appear in Italy. [ 14 ]
Eventually, quantitative experiments revealed problems, including the fact that some metals gained weight after they burned, even though they were supposed to have lost phlogiston.
Some phlogiston proponents, like Robert Boyle , [ 15 ] explained this by concluding that phlogiston has negative mass; others, such as Louis-Bernard Guyton de Morveau , gave the more conventional argument that it is lighter than air. However, a more detailed analysis based on Archimedes' principle and the densities of magnesium and its combustion product showed that just being lighter than air could not account for the increase in weight. [ citation needed ] Stahl himself did not address the problem of the metals that burn gaining weight, but those who followed his school of thought were the ones that worked on this problem. [ 9 ]
During the eighteenth century, as it became clear that metals gained weight after they were oxidized, phlogiston was increasingly regarded as a principle rather than a material substance. [ 16 ] By the end of the eighteenth century, for the few chemists who still used the term phlogiston, the concept was linked to hydrogen . Joseph Priestley , for example, in referring to the reaction of steam on iron, fully acknowledged that the iron gains weight after it binds with oxygen to form a calx , iron oxide, but held that the iron also loses "the basis of inflammable air ( hydrogen ), and this is the substance or principle, to which we give the name phlogiston". [ 17 ] Following Lavoisier's description of oxygen as the oxidizing principle (hence its name, from Ancient Greek: oksús , "sharp"; génos , "birth", referring to oxygen's supposed role in the formation of acids), Priestley described phlogiston as the alkaline principle. [ 18 ]
Phlogiston remained the dominant theory until the 1770s when Antoine-Laurent de Lavoisier showed that combustion requires a gas that has weight (specifically, oxygen ) and could be measured by means of weighing closed vessels. [ 19 ] The use of closed vessels by Lavoisier and earlier by the Russian scientist Mikhail Lomonosov also negated the buoyancy that had disguised the weight of the gases of combustion, and culminated in the principle of mass conservation . These observations solved the mass paradox and set the stage for the new oxygen theory of combustion. [ 20 ] The British chemist Elizabeth Fulhame demonstrated through experiment that many oxidation reactions occur only in the presence of water, that they directly involve water, and that water is regenerated and is detectable at the end of the reaction. Based on her experiments, she disagreed with some of the conclusions of Lavoisier as well as with the phlogiston theorists that he critiqued. Her book on the subject appeared in print soon after Lavoisier's execution for Farm-General membership during the French Revolution . [ 21 ] [ 22 ]
Experienced chemists who supported Stahl's phlogiston theory attempted to respond to the challenges suggested by Lavoisier and the newer chemists. In doing so, they made the theory more complicated and forced it to assume too much, contributing to its overall demise. [ 20 ] Many people tried to remodel their theories on phlogiston to have the theory work with what Lavoisier was doing in his experiments. Pierre Macquer reworded his theory many times, and even though he is said to have thought the theory of phlogiston was doomed, he stood by phlogiston and tried to make the theory work. [ 23 ] | https://en.wikipedia.org/wiki/Phlogiston_theory |
The Phosphate ( Pho ) regulon is a regulatory mechanism used for the conservation and management of inorganic phosphate within the cell . It was first discovered in Escherichia coli , where it operates as a regulatory system of the bacterium, and was later identified in other species. [ 1 ] The Pho system is composed of various components including extracellular enzymes and transporters that are capable of phosphate assimilation in addition to extracting inorganic phosphate from organic sources. [ 2 ] This is an essential process since phosphate plays an important role in cellular membranes, genetic expression , and metabolism within the cell. Under low nutrient availability, the Pho regulon helps the cell survive and thrive despite a depletion of phosphate within the environment. When this occurs, phosphate starvation-inducible ( psi ) genes are expressed, producing proteins that aid in the transport of inorganic phosphate. [ 3 ]
The Pho regulon is controlled by a two-component regulatory system composed of a histidine kinase sensor protein (PhoR) within the inner membrane and a transcriptional response regulator (PhoB) on the cytoplasmic side of the membrane. [ 2 ] These proteins bind to upstream promoters in the pho regulon in order to induce a general change in gene transcription . This occurs when the cell senses low concentrations of phosphate within its internal environment, causing the response regulator to be phosphorylated and triggering an overall change in gene transcription. This mechanism is ubiquitous within gram-positive and gram-negative bacteria, cyanobacteria , yeasts , and archaea . [ 3 ]
Depletion of inorganic phosphate within the cell is required for activation of the Pho regulon in most prokaryotes . In the most commonly studied bacterium, E. coli , seven total proteins are used to detect intracellular levels of inorganic phosphate and to transduce that signal appropriately. [ 2 ] Of the seven proteins, one is a metal-binding protein (PhoU) and four are phosphate-specific transporters (PstS, PstC, PstA, and PstB). The histidine kinase PhoR activates the response regulator PhoB when it senses low intracellular inorganic phosphate levels. [ 2 ]
Although inorganic phosphate is primarily used in the Pho regulon system, there are several species of bacteria that can utilize varying forms of phosphate. One example is seen in E. coli , which can use both inorganic and organic phosphate, as well as naturally occurring or synthetic phosphonates (Phn). [ 3 ] Several enzymes break down these alternative phosphorus compounds, allowing the organism to use the phosphate via the C-P lyase pathway. [ 3 ] Other species of bacteria like Pseudomonas aeruginosa and Salmonella typhimurium use a different pathway called the phosphonatase pathway, whereas the bacterium Enterobacter aerogenes can use either one of the pathways to cleave the C-P bond found in the alternative phosphorus sources. [ 3 ]
Although the Pho regulon system is most widely studied in Escherichia coli , it is found in other bacterial species such as Pseudomonas fluorescens and Bacillus subtilis . In Pseudomonas fluorescens , the response regulator and histidine kinase (PhoB/PhoR) retain the same functions they have in E. coli . [ 4 ] Bacillus subtilis also shares some similarities when encountering low intracellular phosphate concentrations. Under phosphate-starved conditions, B. subtilis binds its transcription regulator PhoP and the histidine kinase PhoR to Pho regulon genes, which induces production of teichuronic acid. [ 5 ] Furthermore, recent studies have suggested the critical role that teichoic acid plays in the cell wall of B. subtilis , acting as a phosphate reservoir that stores inorganic phosphate for use in phosphate-starved conditions. [ 6 ]
Because bacteria use the Pho regulon to maintain homeostasis of Pi, it has the added effect of being used to control other genes. Many of the other genes activated or repressed by the Pho regulon cause virulence in bacterial pathogens. Three ways that this regulon affects virulence and pathogenicity are toxin production, biofilm formation, and acid tolerance. [ 2 ]
Pseudomonas aeruginosa is a known opportunistic pathogen. [ 2 ] One of its virulence factors is its ability to produce pyocyanin , a toxin released to kill both microbes and mammalian cells alike. Pyocyanin production is activated by PhoB. [ 2 ] This implies that P. aeruginosa uses low Pi as a signal that the host has been damaged and starts producing toxin to improve its chances of survival.
In contrast to P. aeruginosa , Vibrio cholerae has its toxin genes repressed by PhoB. It is thought that PhoB in V. cholerae is activated when Pi is low to prevent the production of toxins. [ 7 ] It could be activated by other signals in the environment, [ 7 ] but it has been shown that PhoB directly inhibits toxin production by binding to the tcpPH promoter and stopping the ToxR regulon from being activated. [ 7 ] Evidence supporting Pi as the signal is given by how the regulon is not repressed under high Pi conditions. The regulatory cascade is only repressed under low Pi conditions. [ 2 ]
Biofilms are a mixture of microorganisms, layered together and usually adhered to a surface. The advantages of a biofilm include resistance to environmental stresses and antibiotics, and the ability to more easily obtain nutrients. [ 2 ] PhoB is used to enhance biofilm formation in environments where Pi is not in sufficient supply. This has been shown in multiple microbes including Pseudomonas , V. cholerae , and E. coli. [ 4 ] This is not always the effect of the Pho regulon, as for other species in different environments it is more advantageous not to be in a biofilm when Pi is low. In these cases PhoB represses biofilm formation. [ 2 ]
E. coli has a protein, the Asr protein, that protects other periplasmic proteins from low-pH environments. The gene responsible for this protein is PhoB-dependent, and can only be turned on when the Pho regulon is activated by low Pi concentration. [ 8 ] Synthesis of the Asr protein imparts acid shock resistance to E. coli , enabling it to survive in environments like the stomach, which has a low pH. [ 2 ] Many acid tolerance genes are induced by more than just the low-pH environment and require other environmental signals to be present in order to be activated. These signals include specific nutrients being present or in low concentrations, anaerobiosis , and host-produced factors. [ 8 ] | https://en.wikipedia.org/wiki/Pho_regulon |
Phobos ( / ˈ f oʊ b ə s / ; systematic designation : Mars I ) is the innermost and larger of the two natural satellites of Mars , the other being Deimos . The two moons were discovered in 1877 by American astronomer Asaph Hall . Phobos is named after the Greek god of fear and panic , who is the son of Ares (Mars) and twin brother of Deimos .
Phobos is a small, irregularly shaped object with a mean radius of 11 km (7 mi). It orbits 6,000 km (3,700 mi) from the Martian surface, closer to its primary body than any other known natural satellite to a planet. It orbits Mars much faster than Mars rotates and completes an orbit in just 7 hours and 39 minutes. As a result, from the surface of Mars it appears to rise in the west, move across the sky in 4 hours and 15 minutes or less, and set in the east, twice each Martian day . Phobos is one of the least reflective bodies in the Solar System , with an albedo of 0.071. Surface temperatures range from about −4 °C (25 °F) on the sunlit side to −112 °C (−170 °F) on the shadowed side. Its most notable surface feature is the large impact crater Stickney , which takes up a substantial proportion of the moon's surface. The surface is also marked by many grooves, and there are numerous theories as to how these grooves were formed.
Images and models indicate that Phobos may be a rubble pile held together by a thin crust that is being torn apart by tidal interactions. Phobos gets closer to Mars by about 2 centimetres (0.79 in) per year.
Phobos was discovered by the American astronomer Asaph Hall on 18 August 1877 at the United States Naval Observatory in Washington, D.C. , at about 09:14 Greenwich Mean Time . (Contemporary sources, using the pre-1925 astronomical convention that began the day at noon, [ 11 ] give the time of discovery as 17 August at 16:06 Washington Mean Time , meaning 18 August 04:06 in the modern convention.) [ 12 ] [ 13 ] [ 14 ] Hall had discovered Deimos , Mars' other moon, a few days earlier. [ 15 ] The discoveries were made using the world's largest refracting telescope , the 26-inch "Great Equatorial". [ 16 ]
The names, originally spelled Phobus and Deimus respectively, were suggested by the British academic Henry Madan , a science master at Eton College , who based them on Greek mythology , in which Phobos is a companion to the god, Ares . [ 17 ] [ 18 ]
Planetary moons other than Earth's were never given symbols in the astronomical literature. Denis Moskowitz, a software engineer who designed most of the dwarf planet symbols, proposed a Greek phi (the initial of Phobos) combined with Mars' spear as the symbol of Phobos. This symbol is not widely used. [ 19 ]
Phobos has dimensions of 26 by 23 by 18 kilometres (16 mi × 14 mi × 11 mi), [ 7 ] and retains too little mass to be rounded under its own gravity. Phobos does not have an atmosphere due to its low mass and low gravity. [ 20 ] It is one of the least reflective bodies in the Solar System, with an albedo of about 0.071. [ 21 ] Infrared spectra show that it has carbon-rich material found in carbonaceous chondrites , and its composition shows similarities to that of Mars' surface. [ 22 ] Phobos's density is too low to be solid rock, and it is known to have significant porosity . [ 23 ] [ 24 ] [ 25 ] These results led to the suggestion that Phobos might contain a substantial reservoir of ice. Spectral observations indicate that the surface regolith layer lacks hydration, [ 26 ] [ 27 ] but ice below the regolith is not ruled out. [ 28 ] [ 29 ] Surface temperatures range from about −4 °C (25 °F) on the sunlit side to −112 °C (−170 °F) on the shadowed side. [ 30 ]
Unlike Deimos, Phobos is heavily cratered, [ 31 ] with one of the craters near the equator having a central peak despite the moon's small size. [ 32 ] The most prominent of these is the crater Stickney , an impact crater 9 km (5.6 mi) in diameter, which takes up a substantial proportion of the moon's surface area. As with the Saturnian moon Mimas 's crater Herschel , the impact that created Stickney probably almost shattered Phobos. [ 33 ]
Many grooves and streaks cover the oddly shaped surface. The grooves are typically less than 30 meters (98 ft) deep, 100 to 200 meters (330 to 660 ft) wide, and up to 20 kilometers (12 mi) in length, and were originally assumed to have been the result of the same impact that created Stickney. Analysis of results from the Mars Express spacecraft revealed that the grooves are not radial to Stickney, but are centered on the leading apex of Phobos in its orbit (which is not far from Stickney). Researchers suspected that they had been excavated by material ejected into space by impacts on the surface of Mars. The grooves thus formed as crater chains , and all of them fade away as the trailing apex of Phobos is approached. They have been grouped into 12 or more families of varying age, presumably representing at least 12 Martian impact events. [ 34 ] In November 2018, based on computational probability analysis, astronomers concluded that the many grooves on Phobos were caused by boulders ejected from the asteroid impact that created Stickney crater. These boulders rolled in a predictable pattern on the surface of the moon. [ 35 ] [ 36 ]
Faint dust rings produced by Phobos and Deimos have long been predicted but attempts to observe these rings have, to date, failed. [ 37 ] Images from Mars Global Surveyor indicate that Phobos is covered with a layer of fine-grained regolith at least 100 meters thick; it is hypothesized to have been created by impacts from other bodies, but it is not known how the material stuck to an object with almost no gravity. [ 38 ]
The unique Kaidun meteorite that fell on a Soviet military base in Yemen in 1980 has been hypothesized to be a piece of Phobos, but this could not be verified because little is known about the exact composition of Phobos. [ 39 ] [ 40 ]
In the late 1950s and 1960s, the unusual orbital characteristics of Phobos led to speculations that it might be hollow. [ 41 ] Around 1958, Russian astrophysicist Iosif Samuilovich Shklovsky , studying the secular acceleration of Phobos's orbital motion, suggested a "thin sheet metal" structure for Phobos, a suggestion which led to speculations that Phobos was of artificial origin. [ 42 ] Shklovsky based his analysis on estimates of the upper Martian atmosphere's density, and deduced that for the weak braking effect to be able to account for the secular acceleration, Phobos had to be very light—one calculation yielded a hollow iron sphere 16 kilometers (9.9 mi) across but less than 6 centimetres (2.4 in) thick. [ 42 ] [ 43 ] In a February 1960 letter to the journal Astronautics , [ 44 ] Fred Singer , then science advisor to U.S. President Dwight D. Eisenhower , said of Shklovsky's theory:
If the satellite is indeed spiraling inward as deduced from astronomical observation, then there is little alternative to the hypothesis that it is hollow and therefore Martian made. The big 'if' lies in the astronomical observations; they may well be in error. Since they are based on several independent sets of measurements taken decades apart by different observers with different instruments, systematic errors may have influenced them. [ 44 ]
Subsequently, the systematic data errors that Singer predicted were found to exist, the claim was called into doubt, [ 45 ] and accurate measurements of the orbit available by 1969 showed that the discrepancy did not exist. [ 46 ] Singer's critique was justified when earlier studies were discovered to have used an overestimated value of 5 centimetres (2.0 in) per year for the rate of altitude loss, which was later revised to 1.8 centimetres (0.71 in) per year. [ 47 ] The secular acceleration is now attributed to tidal effects, which create drag on the moon and therefore cause it to spiral inward. [ 48 ]
The density of Phobos has now been directly measured by spacecraft to be 1.887 g/cm 3 (0.0682 lb/cu in). [ 49 ] Current observations are consistent with Phobos being a rubble pile . [ 49 ] Images obtained by the Viking probes in the 1970s showed a natural object, not an artificial one. Nevertheless, mapping by the Mars Express probe and subsequent volume calculations do suggest the presence of voids and indicate that it is not a solid chunk of rock but a porous body. [ 50 ] The porosity of Phobos was calculated to be 30% ± 5%, or a quarter to a third being empty. [ 51 ]
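The quoted porosity can be reproduced from the measured bulk density with a one-line calculation. The short sketch below is illustrative only; the grain (solid-rock) density of about 2.7 g/cm³, typical of carbonaceous-chondrite-like material, is an assumed value that is not stated in the article.

```python
# Rough check of Phobos's porosity from its bulk density.
# Assumption (not from the article): a grain density of ~2.7 g/cm^3,
# typical of carbonaceous-chondrite-like rock.
bulk_density = 1.887    # g/cm^3, spacecraft-measured bulk density of Phobos
grain_density = 2.7     # g/cm^3, assumed density of the solid material

porosity = 1.0 - bulk_density / grain_density
print(f"Estimated porosity: {porosity:.0%}")    # ~30%, consistent with the 30% +/- 5% figure
```

Choosing a somewhat lower or higher grain density (roughly 2.5 to 2.9 g/cm³) moves the estimate across the quoted 25 to 35 percent range.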
Geological features on Phobos are named after astronomers who studied Phobos and people and places from Jonathan Swift 's Gulliver's Travels . [ 52 ]
Some craters have been named, and are listed in the following map and table. [ 53 ]
There is one named regio , Laputa Regio , and one named planitia , Lagado Planitia ; both are named after places in Gulliver's Travels (the fictional Laputa , a flying island, and Lagado , imaginary capital of the fictional nation Balnibarbi ). [ 54 ] The only named ridge on Phobos is Kepler Dorsum , named after the astronomer Johannes Kepler . [ 55 ]
The orbital motion of Phobos has been intensively studied, making it "the best studied natural satellite in the Solar System" in terms of orbits completed. [ 56 ] Its close orbit around Mars produces some distinct effects. With an altitude of 5,989 km (3,721 mi), Phobos orbits Mars below the synchronous orbit radius, meaning that it moves around Mars faster than Mars itself rotates. [ 24 ] Therefore, from the point of view of an observer on the surface of Mars, it rises in the west, moves comparatively rapidly across the sky (in 4 h 15 min or less) and sets in the east, approximately twice each Martian day (every 11 h 6 min). Because it is close to the surface and in an equatorial orbit, it cannot be seen above the horizon from latitudes greater than 70.4°. Its orbit is so low that its angular diameter , as seen by an observer on Mars, varies visibly with its position in the sky. Seen at the horizon, Phobos is about 0.14° wide; at zenith , it is 0.20°, one-third as wide as the full Moon as seen from Earth. By comparison, the Sun has an apparent size of about 0.35° in the Martian sky. Phobos's phases, inasmuch as they can be observed from Mars, take 0.3191 days (Phobos's synodic period) to run their course, a mere 13 seconds longer than Phobos's sidereal period .
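The figures in this paragraph can be checked with a short calculation. The sketch below is illustrative only: the orbital radius (about 9,376 km from the centre of Mars), Mars's radius (about 3,390 km), the two periods, and Phobos's mean diameter (twice the roughly 11 km mean radius) are assumed approximate values rather than figures given here, and the angular sizes use the small-angle approximation.

```python
import math

# Assumed approximate values (not all stated in the article):
P_phobos = 7.654      # h, sidereal orbital period of Phobos (~7 h 39 min)
P_mars   = 24.623     # h, sidereal rotation period of Mars
R_mars   = 3390.0     # km, mean radius of Mars
a_phobos = 9376.0     # km, orbital radius of Phobos (from the centre of Mars)
d_phobos = 22.0       # km, mean diameter of Phobos (twice the ~11 km mean radius)

# Interval between successive risings as seen from the Martian surface:
# both motions are prograde, so the relative angular rate is the difference of the two rates.
P_pass = 1.0 / (1.0 / P_phobos - 1.0 / P_mars)
print(f"Phobos passes overhead every {P_pass:.2f} h")          # ~11.11 h, about 11 h 6 min

# Apparent angular diameter at zenith (observer directly beneath Phobos) ...
ang_zenith = math.degrees(d_phobos / (a_phobos - R_mars))
# ... and near the horizon, where the line of sight is roughly tangent to the planet.
ang_horizon = math.degrees(d_phobos / math.sqrt(a_phobos**2 - R_mars**2))
print(f"Angular diameter: {ang_zenith:.2f} deg at zenith, "
      f"{ang_horizon:.2f} deg at the horizon")                 # ~0.21 deg and ~0.14 deg
```

The results land close to the 11 h 6 min interval and the roughly 0.20° and 0.14° angular sizes quoted above; the small remaining differences come from the rounded input values.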
An observer situated on the Martian surface, in a position to observe Phobos, would see regular transits of Phobos across the Sun. Several of these transits have been photographed by the Mars Rover Opportunity . During the transits, Phobos casts a shadow on the surface of Mars; this event has been photographed by several spacecraft. Phobos is not large enough to cover the Sun's disk, and so cannot cause a total eclipse . [ 57 ]
Tidal deceleration is gradually decreasing the orbital radius of Phobos by approximately 2 m (6 ft 7 in) every 100 years, [ 58 ] and with decreasing orbital radius the likelihood of breakup due to tidal forces increases; breakup is estimated to occur in approximately 30–50 million years, [ 58 ] [ 56 ] or about 43 million years in one study's estimate. [ 59 ]
Phobos's grooves were long thought to be fractures caused by the impact that formed the Stickney crater. Other modelling suggested since the 1970s supports the idea that the grooves are more like "stretch marks" that occur when Phobos gets deformed by tidal forces, but in 2015 when the tidal forces were calculated and used in a new model, the stresses were too weak to fracture a solid moon of that size, unless Phobos is a rubble pile surrounded by a layer of powdery regolith about 100 m (330 ft) thick. Stress fractures calculated for this model line up with the grooves on Phobos. The model is supported by the discovery that some of the grooves are younger than others, implying that the process that produces the grooves is ongoing. [ 58 ] [ 60 ] [ inconsistent ]
Given Phobos's irregular shape and assuming that it is a pile of rubble (specifically a Mohr–Coulomb body ), it will eventually break up due to tidal forces when it reaches approximately 2.1 Mars radii. [ 61 ] When Phobos is broken up, it will form a planetary ring around Mars. [ 62 ] This predicted ring may last from 1 million to 100 million years. The fraction of the mass of Phobos that will form the ring depends on the unknown internal structure of Phobos. Loose, weakly bound material will form the ring. Components of Phobos with strong cohesion will escape tidal breakup and will enter the Martian atmosphere. [ 63 ]
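For a sense of scale, the decay rate and breakup distance quoted above can be combined in a naive constant-rate extrapolation. The sketch below assumes an orbital radius of about 9,376 km and a Mars radius of about 3,390 km (values not stated in this article) and deliberately ignores the fact that tidal decay speeds up as the orbit shrinks.

```python
# Naive constant-rate extrapolation of Phobos's orbital decay (illustrative only,
# not one of the models cited above).
orbit_radius_km   = 9376.0              # assumed current orbital radius of Phobos
breakup_radius_km = 2.1 * 3390.0        # ~2.1 Mars radii, the approximate breakup distance
decay_cm_per_year = 2.0                 # ~2 m per 100 years, as quoted above

distance_to_fall_cm = (orbit_radius_km - breakup_radius_km) * 1e5   # km -> cm
years_linear = distance_to_fall_cm / decay_cm_per_year
print(f"Constant-rate estimate: ~{years_linear / 1e6:.0f} million years")   # ~113 Myr
```

The constant-rate answer, roughly 110 million years, is several times longer than the 30–50 million year estimates above, which illustrates how strongly the decay accelerates as Phobos spirals inward; the detailed models account for that acceleration.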
The origin of the Martian moons has been disputed. [ 64 ] Phobos and Deimos both have much in common with carbonaceous C-type asteroids , with spectra , albedo , and density very similar to those of C- or D-type asteroids. [ 65 ] Based on their similarity, one hypothesis is that both moons may be captured main-belt asteroids . [ 66 ] [ 67 ] Both moons have very circular orbits which lie almost exactly in Mars' equatorial plane , and hence a capture origin requires a mechanism for circularizing the initially highly eccentric orbit, and adjusting its inclination into the equatorial plane, most probably by a combination of atmospheric drag and tidal forces, [ 68 ] although it is not clear that sufficient time is available for this to occur for Deimos. [ 64 ] Capture also requires dissipation of energy. The current Martian atmosphere is too thin to capture a Phobos-sized object by atmospheric braking. [ 64 ] Geoffrey A. Landis has pointed out that the capture could have occurred if the original body was a binary asteroid that separated under tidal forces. [ 67 ] [ 69 ]
Phobos could be a second-generation Solar System object that coalesced in orbit after Mars formed, rather than forming concurrently out of the same birth cloud as Mars. [ 70 ]
Another hypothesis is that Mars was once surrounded by many Phobos- and Deimos-sized bodies, perhaps ejected into orbit around it by a collision with a large planetesimal . [ 71 ] The high porosity of the interior of Phobos (based on the density of 1.88 g/cm 3 , voids are estimated to comprise 25 to 35 percent of Phobos's volume) is inconsistent with an asteroidal origin. [ 51 ] Observations of Phobos in the thermal infrared suggest a composition containing mainly phyllosilicates , which are well known from the surface of Mars. The spectra are distinct from those of all classes of chondrite meteorites, again pointing away from an asteroidal origin. [ 72 ] Both sets of findings support an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, [ 73 ] similar to the prevailing theory for the origin of Earth's moon.
Some areas of the surface are reddish in color, while others are bluish. The hypothesis is that the gravitational pull from Mars makes the reddish regolith move over the surface, exposing relatively fresh, unweathered, bluish material from the moon, while the regolith covering it has been weathered over time by exposure to solar radiation. Because the blue rock differs from known Martian rock, it could contradict the theory that the moon is formed from leftover planetary material after the impact of a large object. [ 74 ]
In February 2021, Amirhossein Bagheri ( ETH Zurich ), Amir Khan (ETH Zurich), Michael Efroimsky (US Naval Observatory) and their colleagues proposed a new hypothesis on the origin of the moons. By analyzing the seismic and orbital data from Mars InSight Mission and other missions, they proposed that the moons are born from disruption of a common parent body around 1 to 2.7 billion years ago. The common progenitor of Phobos and Deimos was most probably hit by another object and shattered to form both moons. [ 75 ]
Phobos has been photographed in close-up by several spacecraft whose primary mission has been to photograph Mars. The first was Mariner 7 in 1969, followed by Mariner 9 in 1971, Viking 1 in 1977, Phobos 2 in 1989 [ 76 ] Mars Global Surveyor in 1998 and 2003, Mars Express in 2004, 2008, 2010 [ 77 ] and 2019, and Mars Reconnaissance Orbiter in 2007 and 2008. On 25 August 2005, the Spirit rover , with an excess of energy due to wind blowing dust off of its solar panels, took several short-exposure photographs of the night sky from the surface of Mars, and was able to successfully photograph both Phobos and Deimos. [ 78 ]
The Soviet Union undertook the Phobos program with two probes, both launched successfully in July 1988. Phobos 1 was shut down by an erroneous command from ground control issued in September 1988 and lost while still en route. Phobos 2 arrived at the Mars system in January 1989 and, after transmitting a small amount of data and imagery shortly before beginning its detailed examination of Phobos's surface, abruptly ceased transmission due either to failure of the onboard computer or of the radio transmitter, already operating on backup power. Other Mars missions collected more data, but no dedicated sample return mission has been successfully performed.
The Russian Space Agency launched a sample return mission to Phobos in November 2011, called Fobos-Grunt . The return capsule also included a life science experiment of The Planetary Society , called Living Interplanetary Flight Experiment , or LIFE. [ 79 ] A second contributor to this mission was the China National Space Administration , which supplied a surveying satellite called " Yinghuo-1 ", which would have been released in the orbit of Mars, and a soil-grinding and sieving system for the scientific payload of the Phobos lander. [ 80 ] [ 81 ] After achieving Earth orbit , the Fobos-Grunt probe failed to initiate subsequent burns that would have sent it to Mars. Attempts to recover the probe were unsuccessful and it crashed back to Earth in January 2012. [ 82 ]
On 1 July 2020, the Mars orbiter of the Indian Space Research Organisation was able to capture photos of the body from 4,200 km away. [ 83 ]
In 1997 and 1998, the Aladdin mission was selected as a finalist in the NASA Discovery Program . The plan was to visit both Phobos and Deimos, and launch projectiles at the satellites. The probe would collect the ejecta as it performed a slow flyby (~1 km/s). [ 84 ] These samples would be returned to Earth for study three years later. [ 85 ] [ 86 ] The Principal Investigator was Dr. Carle Pieters of Brown University . The total mission cost, including launch vehicle and operations was $247.7 million. [ 87 ] Ultimately, the mission chosen to fly was MESSENGER , a probe to Mercury. [ 88 ]
In 2007, the European aerospace subsidiary EADS Astrium was reported to have been developing a mission to Phobos as a technology demonstrator . Astrium was involved in developing a European Space Agency plan for a sample return mission to Mars, as part of the ESA's Aurora programme , and sending a mission to Phobos with its low gravity was seen as a good opportunity for testing and proving the technologies required for an eventual sample return mission to Mars. The mission, envisioned to start in 2016, was to last for three years. The company planned to use a "mothership", which would be propelled by an ion engine , releasing a lander to the surface of Phobos. The lander would perform some tests and experiments, gather samples in a capsule, then return to the mothership and head back to Earth where the samples would be jettisoned for recovery on the surface. [ 89 ]
In 2007, the Canadian Space Agency funded a study by Optech and the Mars Institute for an uncrewed mission to Phobos known as Phobos Reconnaissance and International Mars Exploration (PRIME). A proposed landing site for the PRIME spacecraft is at the " Phobos monolith ", a prominent object near Stickney crater . [ 90 ] [ 91 ] [ 92 ] The PRIME mission would be composed of an orbiter and lander, and each would carry 4 instruments designed to study various aspects of Phobos's geology. [ 93 ]
In 2008, NASA Glenn Research Center began studying a Phobos and Deimos sample return mission that would use solar electric propulsion . The study gave rise to the "Hall" mission concept, a New Frontiers -class mission under further study as of 2010. [ 94 ]
Another concept of a sample return mission from Phobos and Deimos is OSIRIS-REx II , which would use heritage technology from the first OSIRIS-REx mission. [ 95 ]
As of January 2013, a new Phobos Surveyor mission is under development by a collaboration of Stanford University , NASA's Jet Propulsion Laboratory , and the Massachusetts Institute of Technology . [ 96 ] The mission is in the testing phases, and the team at Stanford plans to launch the mission between 2023 and 2033. [ 96 ]
In March 2014, a Discovery class mission was proposed to place an orbiter in Mars orbit by 2021 to study Phobos and Deimos through a series of close flybys. The mission is called Phobos And Deimos & Mars Environment (PADME). [ 97 ] [ 98 ] [ 99 ] Two other Phobos missions proposed for the Discovery 13 selection were Merlin , which would fly by Deimos but orbit and land on Phobos, and Pandora , which would orbit both Deimos and Phobos. [ 100 ]
The Japanese Aerospace Exploration Agency (JAXA) unveiled on 9 June 2015 the Martian Moons Exploration (MMX), a sample return mission targeting Phobos. [ 101 ] MMX will land and collect samples from Phobos multiple times, along with conducting Deimos flyby observations and monitoring Mars' climate. By using a corer sampling mechanism, the spacecraft aims to retrieve a minimum of 10 g of samples. [ 102 ] NASA, ESA, DLR, and CNES [ 103 ] are also participating in the project, and will provide scientific instruments. [ 104 ] [ 105 ] The U.S. will contribute the Neutron and Gamma-Ray Spectrometer (NGRS), and France the Near IR Spectrometer (NIRS4/MacrOmega). [ 102 ] [ 106 ] Although the mission has been selected for implementation [ 107 ] [ 108 ] and is now beyond the proposal stage, formal project approval by JAXA has been postponed following the Hitomi mishap. [ 109 ] Development and testing of key components, including the sampler, is ongoing. [ 110 ] As of 2017, MMX is scheduled to be launched in 2026, and will return to Earth five years later. [ 102 ]
Russia plans to repeat the Fobos-Grunt mission in the late 2020s, and the European Space Agency is assessing a sample-return mission for 2024 called Phootprint . [ 111 ] [ 112 ]
Phobos has been proposed as an early target for a human mission to Mars . The teleoperation of robotic scouts on Mars by humans on Phobos could be conducted without significant time delay, and planetary protection concerns in early Mars exploration might be addressed by such an approach. [ 113 ]
A landing on Phobos would be considerably less difficult and expensive than a landing on the surface of Mars itself. A lander bound for Mars would need to be capable of atmospheric entry and subsequent return to orbit without any support facilities, or would require the creation of support facilities in-situ . A lander instead bound for Phobos could be based on equipment designed for lunar and asteroid landings. [ 114 ] Furthermore, due to Phobos's very weak gravity, the delta-v required to land on Phobos and return is only 80% of that required for a trip to and from the surface of the Moon. [ 115 ]
It has been proposed that the sands of Phobos could serve as a valuable material for aerobraking during a Mars landing. A relatively small amount of chemical fuel brought from Earth could be used to lift a large amount of sand from the surface of Phobos to a transfer orbit. This sand could be released in front of a spacecraft during the descent maneuver causing a densification of the atmosphere just in front of the spacecraft. [ 116 ] [ 117 ]
While human exploration of Phobos could serve as a catalyst for the human exploration of Mars, it could be scientifically valuable in its own right. [ 118 ]
Phobos has been proposed as a future site for space elevator construction, an idea first discussed in fiction by Fontenay in 1956. [ 119 ] This would involve a pair of space elevators: one extending 6,000 km from the Mars-facing side to the edge of Mars' atmosphere, the other extending 6,000 km (3,700 mi) from the other side and away from Mars. A spacecraft launching from Mars' surface to the lower space elevator would only need a delta-v of 0.52 km/s (0.32 mi/s), as opposed to the over 3.6 km/s (2.2 mi/s) needed to launch to low Mars orbit. The spacecraft could be lifted up using electrical power and then released from the upper space elevator with a hyperbolic velocity of 2.6 km/s (1.6 mi/s), enough to reach Earth and a significant fraction of the velocity needed to reach the asteroid belt . The space elevators could also work in reverse to help spacecraft enter the Martian system. The great mass of Phobos means that any forces from space elevator operation would have minimal effect on its orbit. Additionally, materials from Phobos could be used for space industry. [ 120 ]
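The delta-v figures quoted in this paragraph can be reproduced with a short back-of-the-envelope calculation. The sketch below assumes standard approximate values for Mars's gravitational parameter, radius, and rotation period and for Phobos's orbital radius and period (none of which are given in this article), treats the tether tips as moving with Phobos's angular rate, and ignores orbital eccentricity and the small radial velocity components.

```python
import math

# Assumed approximate constants (not stated in the article):
GM_MARS  = 4.2828e13      # m^3/s^2, gravitational parameter of Mars
R_MARS   = 3.390e6        # m, mean radius of Mars
T_MARS   = 88_643.0       # s, sidereal rotation period of Mars (~24.62 h)
A_PHOBOS = 9.376e6        # m, orbital radius of Phobos
T_PHOBOS = 27_554.0       # s, orbital period of Phobos (~7 h 39 min)
ARM      = 6.0e6          # m, length of each elevator arm

omega = 2.0 * math.pi / T_PHOBOS          # angular rate of Phobos and of both tethers

# Lower tether tip, hanging toward Mars to the edge of its atmosphere:
r_lo = A_PHOBOS - ARM
v_tip_lo  = omega * r_lo                          # speed of the lower tip
v_surface = 2.0 * math.pi * R_MARS / T_MARS       # equatorial speed of Mars's surface
print(f"Delta-v to reach the lower tip: ~{(v_tip_lo - v_surface)/1000:.2f} km/s")   # ~0.53

# Upper tether tip, pointing away from Mars; released payloads leave on a hyperbola:
r_hi = A_PHOBOS + ARM
v_tip_hi = omega * r_hi
v_escape = math.sqrt(2.0 * GM_MARS / r_hi)        # escape speed at the release radius
v_inf = math.sqrt(v_tip_hi**2 - v_escape**2)      # hyperbolic excess speed after release
print(f"Hyperbolic excess speed from the upper tip: ~{v_inf/1000:.1f} km/s")        # ~2.6
```

Both results come out close to the 0.52 km/s and 2.6 km/s figures quoted above: Phobos's orbital motion, rather than rocket propellant, supplies most of the velocity in this scheme.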
| https://en.wikipedia.org/wiki/Phobos_(moon) |
The Phobos monolith is a large rock on the surface of Mars 's moon Phobos . [ 1 ] It is a boulder , about 85 m (279 ft) across and 90 m (300 ft) tall. [ 2 ] [ 3 ] A monolith is a geological feature consisting of a single massive piece of rock. Monoliths also occur naturally on Earth, but it has been suggested that the Phobos monolith may be a piece of impact ejecta . The monolith is a bright object near Stickney crater , described as a "building sized" boulder, which casts a prominent shadow. [ 4 ] [ 5 ] It was discovered by Efrain Palermo, who did extensive surveys of Martian probe imagery, and later confirmed by Lan Fleming, an imaging sub-contractor at NASA Johnson Space Center . [ 6 ]
The general vicinity of the monolith is a proposed landing site by Optech and the Mars Institute , for a robotic mission to Phobos known as PRIME (Phobos Reconnaissance and International Mars Exploration). [ 4 ] The PRIME mission would be composed of an orbiter and lander, and each would carry four instruments designed to study various aspects of Phobos's geology. [ 7 ] At present, PRIME has not been funded and does not have a projected launch date. Former astronaut Buzz Aldrin has spoken about the Phobos monolith and his support for a mission to Phobos. [ 8 ]
The object appears in Mars Global Surveyor images SP2-52603 [ 9 ] and SP2-55103, [ 10 ] dated 1998. The object is unrelated to another monolith located on the surface of Mars, which NASA noted as an example of a common surface feature in that region. [ 11 ]
The Phobos monolith features in Alastair Reynolds 's 2012 science-fiction novel Blue Remembered Earth , wherein its surface has been entirely carved by visiting astronauts into the semblance of a wrecked spaceship.
Monolith of Phobos is the title of the debut studio album by The Claypool Lennon Delirium , the duo of American multi-instrumentalists Sean Lennon and Primus 's Les Claypool , released on June 3, 2016. [ 12 ]
| https://en.wikipedia.org/wiki/Phobos_monolith |
The phomoxanthones are a loosely defined class of natural products . The two founding members of this class are phomoxanthone A and phomoxanthone B . Other compounds were later also classified as phomoxanthones, although a unifying nomenclature has not yet been established. [ 1 ] The structure of all phomoxanthones is derived from a dimer of two covalently linked tetrahydroxanthones, and they differ mainly in the position of this link as well as in the acetylation status of their hydroxy groups. The phomoxanthones are structurally closely related to other tetrahydroxanthone dimers such as the secalonic acids and the eumitrins. While most phomoxanthones were discovered in fungi of the genus Phomopsis , most notably in the species Phomopsis longicolla , some have also been found in Penicillium sp. [ 2 ] | https://en.wikipedia.org/wiki/Phomoxanthone |
The mycotoxin phomoxanthone A , or PXA for short, is a toxic natural product that affects the mitochondria . It is the most toxic and the best studied of the naturally occurring phomoxanthones . PXA has recently been shown to induce rapid, non-canonical mitochondrial fission by causing the mitochondrial matrix to fragment while the outer mitochondrial membrane can remain intact. This process was shown to be independent from the mitochondrial fission and fusion regulators DRP1 and OPA1 . [ 1 ]
The phomoxanthones are named after the fungus Phomopsis , from which they were first isolated, and after their xanthonoid structure, which means they have structures similar to the compound xanthone (pictured on the left). Chemically, the phomoxanthones are dimers of two tetrahydroxanthones, meaning that they consist of two subunits of xanthonoids that have four hydroxy groups each. The two subunits of the phomoxanthones are covalently linked to each other. PXA itself is a homodimer, meaning that it consists of two identical subunits. Both of these subunits are diacetylated tetrahydroxanthones, so two of their hydroxy groups have been replaced by acetyl groups . The position of the link between the two dimer subunits is the only structural difference between PXA and its less toxic isomers phomoxanthone B (PXB) and dicerandrol C : In PXA, the two xanthonoid monomers are symmetrically linked at the position C-4,4’, while in PXB, they are asymmetrically linked at C-2,4’, and in dicerandrol C, they are symmetrically linked at C-2,2’. Otherwise, these three compounds are structurally identical. [ 2 ] [ 3 ] The phomoxanthones are structurally closely related to the secalonic acids , another class of dimeric tetrahydroxanthone mycotoxins, with which they share several properties. Notably, both the phomoxanthones and the secalonic acids are unstable when dissolved in polar solvents such as DMSO , with the covalent bond between the two monomers shifting between 2,2′-, 2,4′-, and 4,4′-linkage. [ 4 ] The two phomoxanthones PXA and PXB can thus slowly isomerise into each other as well as into the essentially non-toxic dicerandrol C, resulting in a loss of activity of PXA over time when dissolved in a polar solvent. [ 1 ]
As natural products , PXA and other phomoxanthones occur as secondary metabolites in fungi of the eponymous genus Phomopsis , most notably in the species Phomopsis longicolla . [ 2 ] [ 3 ] This fungus is an endophyte of the mangrove plant Sonneratia caseolaris . [ 5 ] [ 3 ] However, it has also been identified as a pathogen in other plants, such as the soybean plant in which it causes a disease called Phomopsis seed decay (PSD) . [ 6 ] [ 7 ]
Both PXA and PXB were discovered in 2001, and their preparation by isolation from Phomopsis fungal cultures was described in the corresponding publication. [ 2 ] Briefly, a MeOH extract of a Phomopsis culture is mixed with H 2 O and washed with hexane . The aqueous phase is then dried and the residue is dissolved in EtOAc , washed with H 2 O, concentrated and repeatedly purified by size-exclusion chromatography . The resulting mixture of PXA and PXB is separated by HPLC . A modified method, in which the initial extraction is done with EtOAc instead of MeOH and the drying step is skipped, was described in 2013. [ 3 ]
Phomoxanthone A was first identified in a screening for antimalarial compounds. [ 2 ] It showed strong antibiotic activity against a multidrug-resistant strain of the main causative agent of malaria , the protozoan parasite Plasmodium falciparum . The same study also reported antibiotic activity of PXA against Mycobacterium tuberculosis and against three animal cell lines, two of which were derived from human cancer cells. [ 2 ] These findings not only showed that PXA has antibiotic activity against very diverse organisms, but they also sparked further studies that investigated PXA as a potential antibiotic or anti-cancer drug . A later study also reported antibiotic activity for PXA against the alga Chlorella fusca , the fungus Ustilago violacea , and the bacterium Bacillus megaterium . [ 8 ] This broad range of activity disqualified it as a specific antibiotic that could be used in the treatment of infectious diseases; however, the hope that it could be used as an anti-cancer drug remained. Preliminary results from a study in human cancer cells and non-cancer cells suggested that PXA might be more toxic to the former than to the latter, although results from in vivo studies have not yet been presented. [ 3 ] [ 9 ]
Aside from a potential medical use, recent findings indicate that PXA might have an application as a research tool in the study of mitochondrial membrane dynamics, particularly non-canonical mitochondrial fission and remodelling of the mitochondrial matrix. [ 1 ]
Since PXA has antibiotic activity against organisms as diverse as bacteria, protozoans, fungi, plants and animal cells including human cancer cells, it has to affect a cellular feature that is evolutionarily highly conserved. A recent study has shown that PXA directly affects the mitochondria by disrupting both their biochemical functions and their membrane architecture. [ 1 ] The mitochondria are cellular organelles that are present in almost all eukaryotes . According to the theory of symbiogenesis , they are derived from bacteria and share many characteristics with them, including several properties of their membrane composition. [ 10 ] [ 11 ]
One of the main functions of the mitochondria is to produce the cellular energy currency ATP through the process of oxidative phosphorylation (OxPhos). OxPhos depends on the mitochondrial membrane potential , which is generated by the electron transport chain (ETC) via the consumption of oxygen . PXA was shown to interfere with all of these functions of the mitochondria: not only does it decrease ATP synthesis and depolarise the mitochondria, but it also inhibits the ETC and cellular oxygen consumption. This sets it apart from uncoupling agents such as protonophores . While these also decrease ATP synthesis and depolarise the mitochondria, they increase respiration at the same time due to increased ETC activity in an attempt to restore the membrane potential. [ 1 ]
In addition to this inhibition of the function of mitochondria, PXA also disrupts their membrane architecture. In many cell types, the mitochondria normally form an intricate tubular network that undergoes a constant process of balanced mitochondrial fission and mitochondrial fusion . Treatment with PXA or many other mitochondrial stressors, such as protonophores, causes excessive fission that results in mitochondrial fragmentation. In the case of PXA, however, this fragmentation process was shown to be different from canonical fragmentation, caused by other agents such as protonophores, in several ways: first, it is considerably faster, resulting in complete fragmentation within a minute as opposed to about 30–60 minutes for canonical fragmentation; second, it is independent from the mitochondrial fission and fusion regulators DRP1 and OPA1; and third, while PXA causes fragmentation of both the outer mitochondrial membrane (OMM) and the mitochondrial matrix in wild type cells, it causes exclusive fragmentation of the matrix in cells that lack DRP1. [ 1 ] This last feature is especially unusual since no active mechanism for exclusive matrix fission is known in higher eukaryotes. [ 12 ] Examination of the mitochondrial ultrastructure revealed that PXA causes cristae disruption and complete distortion of the mitochondrial matrix. It is probably through this effect that PXA induces programmed cell death in the form of apoptosis . [ 1 ] | https://en.wikipedia.org/wiki/Phomoxanthone_A |
The mycotoxin phomoxanthone B , or PXB for short, is a toxic natural product . It is a less toxic isomer of phomoxanthone A and one of the two founding members of the class of phomoxanthone compounds. The phomoxanthones are named after the fungus Phomopsis , from which they were first isolated, and after their xanthonoid structure. Chemically, they are dimers of two tetrahydroxanthones that are covalently linked to each other. PXB itself is a homodimer of two identical diacetylated tetrahydroxanthones. The position of the link between the two tetrahydroxanthones is the only structural difference between PXB and its isomers PXA and dicerandrol C : In PXA, the two xanthonoid monomers are symmetrically linked at C-4,4’, while in PXB, they are asymmetrically linked at C-2,4’, and in dicerandrol C, they are symmetrically linked at C-2,2’. [ 2 ] | https://en.wikipedia.org/wiki/Phomoxanthone_B |
The phoneME project is Sun Microsystems ' reference implementation of the Java virtual machine and associated libraries of Java ME , with source code licensed under the GNU General Public License .
The phoneME library includes implementations of Connected Limited Device Configuration (CLDC) and Mobile Information Device Profile (MIDP) as well as complete or partial implementations for some optional package JSRs .
phoneME provides complete or partial implementations for the following JSRs :
Supported platforms are Linux/ARM, Linux/x86 and Windows/i386.
| https://en.wikipedia.org/wiki/PhoneME |
A phone connector is a family of cylindrically shaped electrical connectors used primarily for analog audio signals . Invented in the late 19th century for telephone switchboards , the phone connector remains in use for interfacing wired audio equipment , such as headphones , speakers , microphones , mixing consoles , and electronic musical instruments (e.g. electric guitars , keyboards , and effects units ). A male connector (a plug) is mated into a female connector (a socket), though other terminology is used.
Plugs have 2 to 5 electrical contacts . The tip contact is indented with a groove. The sleeve contact is nearest the ( conductive or insulated ) handle . [ 1 ] Contacts are insulated from each other by a band of non-conductive material. Between the tip and sleeve are 0 to 3 ring contacts. Since phone connectors have many uses, it is common to simply name the connector according to its number of rings: TS (no rings), TRS (one ring), TRRS (two rings), or TRRRS (three rings).
The sleeve is usually a common ground reference voltage or return current for signals in the tip and any rings . Thus, the number of transmittable signals is less than the number of contacts.
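A minimal sketch of this naming convention follows. The TS/TRS/TRRS/TRRRS labels and contact counts are the standard ones used later in this article; the "usable signals" figure simply applies the rule of thumb just stated, that one contact serves as the common ground or return.

```python
# Phone-connector families, named by their number of ring contacts.
# "Usable signals" assumes the sleeve is the common ground/return,
# so signal paths = contacts - 1 (the rule of thumb given above).
CONNECTORS = {
    "TS":    {"rings": 0, "contacts": 2},  # tip + sleeve (unbalanced mono)
    "TRS":   {"rings": 1, "contacts": 3},  # + one ring (stereo or balanced mono)
    "TRRS":  {"rings": 2, "contacts": 4},  # + second ring (headset: stereo + mic)
    "TRRRS": {"rings": 3, "contacts": 5},  # + third ring (e.g. stereo + two mics)
}

for name, c in CONNECTORS.items():
    print(f"{name}: {c['contacts']} contacts, up to {c['contacts'] - 1} signals")
```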
The outside diameter of the sleeve is 6.35 millimetres ( 1 ⁄ 4 inch) for full-sized connectors, 3.5 mm ( 1 ⁄ 8 in) for " mini " connectors, and only 2.5 mm ( 1 ⁄ 10 in) for " sub-mini " connectors. Rings are typically the same diameter as the sleeve.
The 1902 International Library of Technology simply uses jack for the female and plug for the male connector. [ 3 ] The 1989 Sound Reinforcement Handbook uses phone jack for the female and phone plug for the male connector. [ 4 ] Robert McLeish, who worked at the BBC , uses jack or jack socket for the female and jack plug for the male connector in his 2005 book Radio Production . [ 5 ] The American Society of Mechanical Engineers , as of 2007, says the more fixed electrical connector is the jack, while the less fixed connector is the plug, without regard to the gender of the connector contacts. [ 6 ] The Institute of Electrical and Electronics Engineers in 1975 also made a standard that was withdrawn in 1997. [ 7 ]
The intended application for a phone connector has also resulted in names such as audio jack , headphone jack , stereo plug , microphone jack , aux input , etc. Among audio engineers, the connector may often simply be called a quarter-inch to distinguish it from XLR , another frequently-used audio connector. These naming variations are also used for the 3.5 mm connectors, which have been called mini-phone , mini-stereo , mini jack , etc.
RCA connectors are differently-shaped, but confusingly are similarly-named as phono plugs and phono jacks (or in the UK, phono sockets). 3.5 mm connectors are sometimes—counter to the connector manufacturers' nomenclature [ 8 ] —referred to as mini phonos . [ 9 ]
Confusion also arises because phone jack and phone plug may sometimes refer to the RJ11 and various older telephone sockets and plugs that connect wired telephones to wall outlets.
The original 1 ⁄ 4 -inch (6.35 mm) version descends from as early as 1877 in Boston when the first telephone switchboard was installed [ 10 ] or 1878, when an early switchboard was used for the first commercial manual telephone exchange [ 11 ] [ 12 ] in New Haven created by George W. Coy . [ 13 ] [ 14 ]
Charles E. Scribner filed a patent [ 15 ] in 1878 to facilitate switchboard operation using his spring-jack switch . In it, a conductive lever pushed by a spring is normally connected to one contact. But when a cable with a conductive plug is inserted into a hole and makes contact with that lever, the lever pivots and breaks its normal connection. The receptacle was called a jack-knife because of its resemblance to a pocket clasp-knife . [ 16 ] This is said to be the origin of calling the receptacle a jack . Scribner filed a patent [ 17 ] in 1880 which removes the lever and resembles the modern connector and made improvements to switchboard design in subsequent patents [ 18 ] [ 19 ] filed in 1882.
Henry P. Clausen filed a patent [ 20 ] in 1901 for improved construction of the telephone switchboard-plug with today's 1 ⁄ 4 inch TS form still used on audio equipment.
Western Electric was the manufacturing arm of the Bell System , and thus originated or refined most of the engineering designs, including the telephone jacks and plugs which were later adopted by other industries, including the US military .
By 1907, Western Electric had designed a number of models for different purposes, including: [ 21 ]
By 1950, the two main plug designs were:
Several modern designs have descended from those earlier versions:
U.S. military versions of the Western Electric plugs were initially specified in Amendment No.1, MIL-P-642, and included:
The 3.5 mm or miniature size was originally designed in the 1950s as two-conductor connectors for earpieces on transistor radios , and remains a standard still used today. [ 24 ] This roughly half-sized version of the original, popularized by the Sony EFM-117J radio (released in 1964), [ 25 ] [ 26 ] [ failed verification ] is still commonly used in portable applications and has a length of 15 millimetres (0.59 in). The three-conductor version became very popular with its application on the Walkman in 1979, as unlike earlier transistor radios, these devices had no speaker of their own; the usual way to listen to them was to plug in headphones. There is also an EIA standard for 0.141-inch miniature phone jacks.
The 2.5 mm or sub-miniature sizes were similarly popularized on small portable electronics. They often appeared next to a 3.5 mm microphone jack for a remote control on-off switch on early portable tape recorders; the microphone provided with such machines had the on-off switch and used a two-pronged connector with both the 3.5 and 2.5 mm plugs. They were also used for low-voltage DC power input from wall adapters. In the latter role, they were soon replaced by coaxial DC power connectors . 2.5 mm phone jacks have also been used as headset jacks on mobile telephones (see § Mobile devices ).
The 1 ⁄ 8 in and 1 ⁄ 10 in sizes correspond approximately to 3.5 mm and 2.5 mm respectively, though those metric figures are only approximations. [ 27 ] All sizes are now readily available in two-conductor (unbalanced mono) and three-conductor ( balanced mono or unbalanced stereo) versions.
Four-conductor versions of the 3.5 mm plug and jack are used for certain applications. A four-conductor version is often used in compact camcorders and portable media players, providing stereo sound and composite analog video. It is also used for a combination of stereo audio, a microphone, and controlling media playback, calls, volume and/or a virtual assistant on some laptop computers and most mobile phones , [ 28 ] and some handheld amateur radio transceivers from Yaesu . [ 29 ] Some headphone amplifiers have used it to connect balanced stereo headphones, which require two conductors per audio channel as the channels do not share a common ground. [ 30 ]
By the 1940s, broadcast radio stations were using Western Electric Code No. 103 plugs and matching jacks for patching audio throughout studios. This connector was used because of its use in AT&T 's Long Line circuits for the distribution of audio programs over the radio networks' leased telephone lines. [ citation needed ] Because of the large amount of space these patch panels required, the industry began switching to 3-conductor plugs and jacks in the late 1940s, using the WE Type 291 plug with WE type 239 jacks. The type 291 plug was used instead of the standard type 110 switchboard plug because the location of the large bulb shape on this TRS plug would have resulted in both audio signal connections being shorted together for a brief moment while the plug was being inserted and removed. The Type 291 plug avoids this by having a shorter tip. [ 31 ]
Professional audio and the telecommunication industry use a 0.173 in (4.4 mm) diameter plug, associated with trademarked names including Bantam , TT, Tini-Telephone, and Tini-Tel. They are not compatible with standard EIA RS-453/IEC 60603-11 1 ⁄ 4 -inch jacks. In addition to a slightly smaller diameter, they have a slightly different geometry. [ 32 ] The three-conductor TRS versions are capable of handling balanced signals and are used in professional audio installations. Though unable to handle as much power, and less reliable than a 6.35 mm ( 1 ⁄ 4 in) jack, [ 33 ] Bantam connectors are used for mixing console and outboard patchbays in recording studio and live sound applications, where large numbers of patch points are needed in a limited space. [ 32 ] The slightly different shape of Bantam plugs is also less likely to cause shorting as they are plugged in. [ citation needed ]
A two-pin version, known to the telecom industry as a "310 connector", consists of two 1 ⁄ 4 -inch phone plugs at a centre spacing of 5 ⁄ 8 inch (16 mm). The socket versions of these can be used with normal phone plugs provided the plug bodies are not too large, but the plug version will only mate with two sockets at 5 ⁄ 8 inches centre spacing, or with line sockets, again with sufficiently small bodies. These connectors are still used today in telephone company central offices on "DSX" patch panels for DS1 circuits . A similar type of 3.5 mm connector is often used in the armrests of older aircraft, as part of the on-board in-flight entertainment system. Plugging a stereo plug into one of the two mono jacks typically results in the audio coming into only one ear. Adapters are available.
A short-barrelled version of the phone plug was used for 20th-century high-impedance mono headphones, and in particular those used in World War II aircraft . These have become rare. It is physically possible to use a normal plug in a short socket, but a short plug will neither lock into a normal socket nor complete the tip circuit.
Less commonly used sizes, both diameters and lengths, are also available from some manufacturers, and are used when it is desired to restrict the availability of matching connectors, such as 0.210-inch (5.3 mm) inside diameter jacks for fire safety communication in public buildings. [ a ]
While phone connectors remain a standard connector type in some fields, such as desktop computers, musical instrument amplification, [ 35 ] and live audio and recording equipment, [ 36 ] [ 37 ] they have been removed from many smartphones. [ 38 ]
Digital audio is now common and may be transmitted via USB sound cards , USB headphones, Bluetooth , display connectors with integrated sound (e.g. DisplayPort and HDMI ). Digital devices may also have internal speakers and mics. Thus the phone connector is sometimes considered redundant and a waste of space, particularly on thinner mobile devices . And while low-profile surface-mount sockets waterproofed up to 1 meter exist, [ 39 ] removing the socket entirely facilitates waterproofing . [ 40 ]
Chinese phone manufacturers were early in not using a phone socket: first with Oppo 's Finder in July 2012 (which came packaged with micro-USB headphones and supported Bluetooth headphones ), followed by Vivo 's X5Max in 2014, LeEco in April 2016 and Lenovo 's Moto Z in September 2016. [ 41 ] Apple 's removal of the socket in the September 2016 iPhone 7 was initially mocked by other manufacturers such as Samsung and Google, who eventually followed suit. [ 42 ] The socket is also not present in some tablets and thin laptops (e.g. Lenovo Duet Chromebook and Asus ZenBook 13 in 2020 [ 43 ] ).
The US military uses a variety of phone connectors including 9 ⁄ 32 -inch (0.281-inch, 7.14 mm) and 1 ⁄ 4 -inch (0.25 inch, 6.35 mm) diameter plugs. [ 44 ]
Commercial and general aviation (GA) civil aircraft headsets often use a pair of phone connectors. A standard 1 ⁄ 4 -inch (6.3 mm) 2 or 3-conductor plug, type PJ-055, is used for headphones. For the microphone, a smaller 3 ⁄ 16 -inch (0.206 inch / 5.23 mm) diameter 3-conductor plug, type PJ-068, is used.
Military aircraft and civil helicopters have another type termed the U-174/U (Nexus TP-101), [ 45 ] also known as U-93A/U (Nexus TP-102) [ 46 ] and Nexus TP-120. [ 47 ] These are also known as US NATO plugs. These have a 0.281 in (7.1 mm) diameter shaft with four conductors, allowing two for the headphones, and two for the microphone. Also used is the U-384/U (Nexus TP-105), which has the same diameter as the U-174/U but is slightly longer and has 5 conductors instead of 4. [ 48 ] [ 49 ]
There is a confusingly similar four-conductor British connector, Type 671 (10H/18575), with a slightly larger diameter of 7.57 mm (0.298 in) [ 50 ] used for headsets in many UK military aircraft and often referred to as a UK NATO or European NATO connector. [ 51 ]
In the most common arrangement, consistent with the original intention of the design, the male plug is connected to a cable, and the female socket is mounted in a piece of equipment. A considerable variety of line plugs and panel sockets is available, including plugs suiting various cable sizes, right-angle plugs, and both plugs and sockets in a variety of price ranges and with current capacities up to 15 amperes for certain heavy-duty 1 ⁄ 4 in versions intended for loudspeaker connections. [ 52 ]
Common uses of phone plugs and their matching sockets include:
Any number of 3.5 mm sockets for input and output may be found on personal computers , either from integrated sound hardware common on motherboards or from insertable sound cards . The 1999 PC System Design Guide's color code for 3.5 mm TRS sockets is common: pink for microphone , light blue for line in , and lime for line out . AC'97 and its 2004 successor Intel High Definition Audio have been widely adopted specifications that, while not mandating physical sockets, do provide specifications for a front panel connector with pin assignments for two ports with jack detection. Front panels commonly have a stereo output socket for headphones and (slightly less commonly) a stereo input socket for a mic. The back panel may have additional sockets, most commonly for line out , mic , line in , and less commonly for multiple surround sound outs. Laptops and tablets tend to have fewer sockets than desktops due to size constraints.
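As an illustration, the color assignments mentioned above can be represented as a simple lookup table. Only the three colors named in the text are included; real motherboards define several more, so the fallback value here is an assumption for this sketch.

```python
# PC System Design Guide (PC 99) colors mentioned above for 3.5 mm TRS sockets.
PC99_COLORS = {
    "pink":       "microphone input",
    "light blue": "line in",
    "lime":       "line out (front speakers / headphones)",
}

def socket_function(color: str) -> str:
    # Colors not covered by the guide (or by this partial table) are vendor-specific.
    return PC99_COLORS.get(color.lower(), "unknown / vendor-specific")

print(socket_function("lime"))  # -> line out (front speakers / headphones)
```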
Some computers include a 3.5 mm TRS socket for mono microphone that delivers a 5 V bias voltage on the ring to power an electret microphone 's integrated buffer amplifier , though details depend on the manufacturer. [ 53 ] The Apple PlainTalk microphone socket is a historical variant that accepts either a 3.5 mm line input or an elongated 3.5 mm TRS plug whose tip carries the amplifier's power.
Some newer computers, especially laptops, have 3.5 mm TRRS headset sockets, which are compatible with phone headsets and may be distinguished by a headset icon instead of the usual headphones or microphone icons. These are particularly used for voice over IP .
Sound cards that output 5.1 surround sound have three sockets to accommodate six channels: front left and right; surround left and right; and center and subwoofer. 6.1 and 7.1 channel sound cards from Creative Labs, however, use a single three-conductor socket (for the front speakers) and two four-conductor sockets. [ b ] This is to accommodate rear-center (6.1) or rear left and right (7.1) channels without the need for additional sockets on the sound card.
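A sketch of how six channels might map onto the three sockets of a typical 5.1 sound card, following the grouping described above. The socket colors and the tip/ring ordering within each pair are assumptions for illustration only; vendors differ.

```python
# Hypothetical mapping of 5.1 surround channels onto three TRS sockets,
# following the pairing described above (two channels per socket).
# Socket colors and tip/ring ordering are assumptions for this sketch.
SURROUND_5_1 = {
    "socket 1 (lime)":   ("front left", "front right"),
    "socket 2 (black)":  ("surround left", "surround right"),
    "socket 3 (orange)": ("center", "subwoofer/LFE"),
}

for socket, (tip, ring) in SURROUND_5_1.items():
    print(f"{socket}: tip = {tip}, ring = {ring}")
```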
Some portable computers have a combined 3.5 mm TRS/ TOSLINK jack, supporting stereo audio output using either a TRS connector or TOSLINK (stereo or 5.1 Dolby Digital / DTS ) digital output using a suitable optical adapter. Most iMac computers have this digital/analog combo output feature as standard, with early MacBooks having two ports, one for analog/digital audio input and the other for output. Support for input was dropped on various later models. [ 54 ] [ 55 ]
The original application for the 6.35 mm ( 1 ⁄ 4 in) phone jack was in manual telephone exchanges. [ 56 ] Many different configurations of these phone plugs were used, some accommodating five or more conductors, with several tip profiles. Of these many varieties, only the two-conductor version with a rounded tip profile was compatible between different manufacturers, and this was the design that was at first adopted for use with microphones , electric guitars, headphones , loudspeakers , and other audio equipment .
When a three-conductor version of the 6.35 mm plug was introduced for use with stereo headphones, it was given a sharper tip profile to make it possible to manufacture jacks that would accept only stereo plugs, to avoid short-circuiting the right channel of the amplifier. This attempt has long been abandoned, and now the convention is that all plugs fit all sockets of the same size, regardless of whether they are balanced or unbalanced, mono or stereo. Most 6.35 mm plugs, mono or stereo, now have the profile of the original stereo plug, although a few rounded mono plugs are still produced. The profiles of stereo miniature and sub-miniature plugs have always been identical to the mono plugs of the same size.
The results of this physical compatibility are:
Equipment aware of this possible shorting allows, for instance:
Some devices for an even higher number of rings might possibly be backwards-compatible with an opposite-gendered device with fewer rings, or may cause damage. For example, 3.5 mm TRRS sockets that accept TRRS headsets (stereo headphones with a mic) are often compatible with standard TRS stereo headphones, whereby the contact that expects a mic signal will instead simply become shorted to ground and thus will provide a zero signal. Conversely, those TRRS headsets can plug into TRS sockets, in which case its speakers may still work even though its mic won't work (the mic's signal contact will be disconnected). [ 57 ]
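The cross-plugging behaviour just described can be summarised in a small decision sketch. This is a simplification that ignores non-standard wirings and devices with explicit jack detection; it only restates the outcomes given in the paragraph above.

```python
# Simplified summary of TRS/TRRS cross-plugging behaviour as described above.
def headphone_mic_behaviour(plug: str, socket: str) -> str:
    """plug/socket are 'TRS' (stereo) or 'TRRS' (headset with mic)."""
    if plug == socket:
        return "stereo audio and (if present) microphone work normally"
    if plug == "TRS" and socket == "TRRS":
        # The socket's mic contact lands on the plug's sleeve: mic input is
        # shorted to ground, so audio works but the mic signal is zero.
        return "stereo audio works; socket sees a silent (grounded) mic"
    if plug == "TRRS" and socket == "TRS":
        # The plug's mic contact is left unconnected: speakers may still work,
        # the headset's microphone does not.
        return "speakers may work; headset microphone is disconnected"
    return "unknown combination"

print(headphone_mic_behaviour("TRS", "TRRS"))
```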
Because of a lack of standardization in the past regarding the dimensions (length) given to the ring conductor and the insulating portions on either side of it in 6.35 mm ( 1 ⁄ 4 in) phone connectors and the width of the conductors in different brands and generations of sockets, there are occasional issues with compatibility between differing brands of plug and socket. This can result in a contact in the socket bridging (shorting) the ring and sleeve contacts on a phone connector.
Equipment requiring video with stereo audio input or output sometimes uses 3.5 mm TRRS connectors. Two incompatible variants exist, of 15 millimetres (0.59 in) and 17 mm (0.67 in) length, and using the wrong variant may either simply not work, or could cause physical damage.
Attempting to fully insert the longer (17 mm) plug into a receptacle designed for the shorter (15 mm) plug may damage the receptacle, and may damage any electronics located immediately behind the receptacle. However, partially inserting the plug will work as the tip/ring/ring distances are the same for both variants.
A shorter plug in a socket designed for the longer connector may not be retained firmly and may result in wrong signal routing or a short circuit inside the equipment (e.g. the plug tip may cause the contacts inside the receptacle – tip/ring 1, etc. – to short together).
The shorter 15 mm TRRS variant is more common and physically compatible with standard 3.5 mm TRS and TS connectors.
Many small video cameras, laptops, recorders and other consumer devices use a 3.5 mm microphone connector for attaching a microphone to the system. These fall into three categories: [ citation needed ]
Three- or four-conductor (TRS or TRRS) 2.5 mm and 3.5 mm sockets were common on older cell phones and smartphones respectively, providing mono (three-conductor) or stereo (four-conductor) sound and a microphone input, together with signaling (e.g., push a button to answer a call). These are used both for handsfree headsets and for stereo headphones.
3.5 mm TRRS (stereo-plus-mic) sockets became particularly common on smartphones , and have been used by Nokia and others since 2006, and as mentioned in the compatibility section, they are often compatible with standard 3.5 mm stereo headphones. Many computers, especially laptops, also include a TRRS headset socket compatible with the headsets intended for smartphones.
The four conductors of a TRRS connector are assigned to different purposes by different manufacturers. Any 3.5 mm plug can be plugged mechanically into any socket, but many combinations are electrically incompatible. For example, plugging TRRS headphones into a TRS headset socket, a TRS headset into a TRRS socket, or plugging TRRS headphones from one manufacturer into a TRRS socket from another may not function correctly, or at all. Mono audio will usually work, but stereo audio or the microphone may not work, or the pause/play controls may be inactive, as is common when trying to use headphones with controls for iPhones on an Android device, or vice versa .
Two different forms are frequently found. Both place left audio on the tip and right audio on the first ring, same as stereo connectors. They differ in the placement of the microphone and return contacts.
The OMTP standard places the ground return on the sleeve and the microphone on the second ring. [ 58 ] It has been accepted as a national Chinese standard YDT 1885–2009. In the West, it is mostly used on older devices, such as older Nokia mobiles, older Samsung smartphones, and some Sony Ericsson phones. [ 59 ] It is widely used in products meant for the Chinese market. [ 60 ] [ 61 ] Headsets using this wiring are sometimes indicated by black plastic separators between the rings. [ 62 ] [ 61 ]
The CTIA / AHJ standard reverses these contacts, putting the microphone on the sleeve. It is used by Apple 's iPhone line until the 6S and SE (1st) . In the West, these products made it the de facto TRRS standard. [ 63 ] [ 64 ] [ 65 ] It is now used by HTC devices, recent Samsung , Nokia , and Sony phones , among others. It has the disadvantage that the microphone gets shorted to ground if the device has a metal body and the sleeve has a flange, touching the body. Headsets using this wiring are sometimes indicated by white plastic separators between the rings. [ 62 ] [ 61 ]
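The two wiring schemes described above differ only in their last two contacts; a compact way to express this, with contacts listed from the tip towards the sleeve, is the following sketch.

```python
# Pin assignments for the two 3.5 mm TRRS headset wirings described above.
OMTP = {"tip": "left audio", "ring1": "right audio",
        "ring2": "microphone", "sleeve": "ground"}

CTIA = {"tip": "left audio", "ring1": "right audio",
        "ring2": "ground", "sleeve": "microphone"}

# The schemes agree on tip and ring1 and simply swap the last two contacts.
differs = [pos for pos in OMTP if OMTP[pos] != CTIA[pos]]
print("Contacts that differ between OMTP and CTIA:", differs)  # ['ring2', 'sleeve']
```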
If a CTIA headset is connected to an OMTP device, the missing ground effectively connects the speakers in series, out-of-phase. This removes the singer's voice on typical popular music recordings, which place the singers in the center. If the main microphone button is held down, shorting across the microphone and restoring ground, the correct sound may be audible. [ 61 ]
The 4-pole 3.5 mm connector is defined by the Japanese standard JEITA/EIAJ RC-5325A, "4-Pole miniature concentric plugs and jacks", originally published in 1993. [ 78 ] 3-pole 3.5 mm TRS connectors are defined in JIS C 6560. See also JIS C 5401 and IEC 60130-8.
Apple 's iPod Shuffle 2G reuses its TRRS socket not just for audio but also for charging and syncing over USB when docked. [ 79 ]
The USB Type-C Cable and Connector Specification specifies a mapping from a USB-C jack to a 4-pole TRRS jack, for the use of headsets, and supports both CTIA and OMTP (YD/T 1885–2009) modes. [ 80 ] Some devices transparently handle many jack standards, [ 81 ] [ 82 ] and there are hardware implementations of this available as components. [ 83 ] This is accomplished in some cases by applying a voltage to the sleeve and second ring to detect the wiring. The last two conductors may then be switched to allow a device made to one standard to be used with a headset made to the other. [ 84 ]
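A highly simplified sketch of the auto-detection idea mentioned above, in which a device probes the last two contacts to decide whether a CTIA or OMTP headset is attached and swaps them if necessary. The probe method and the 100-ohm threshold are assumptions for illustration only; real detection ICs are considerably more involved.

```python
# Illustrative sketch of CTIA/OMTP auto-detection as described above.
# A microphone element presents a finite impedance, while the ground return
# measures near zero ohms; probing ring2 and sleeve tells the wirings apart.
# The 100-ohm threshold is an arbitrary assumption for this sketch.

def detect_wiring(ring2_ohms: float, sleeve_ohms: float) -> str:
    MIC_THRESHOLD = 100.0
    if ring2_ohms > MIC_THRESHOLD and sleeve_ohms <= MIC_THRESHOLD:
        return "OMTP (mic on ring2, ground on sleeve)"
    if sleeve_ohms > MIC_THRESHOLD and ring2_ohms <= MIC_THRESHOLD:
        return "CTIA (ground on ring2, mic on sleeve)"
    return "ambiguous - fall back to a default or report no headset"

print(detect_wiring(ring2_ohms=1600.0, sleeve_ohms=0.5))  # -> OMTP (...)
```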
A TRRRS standard for 3.5 mm connectors was developed by ITU-T. [ 85 ] The standard, called P.382 (formerly P.MMIC), outlines technical requirements and test methods for a 5-conductor socket and plug configuration. Compared to the TRRS standard, TRRRS provides one extra conductor that can be used for connecting a second microphone or providing power to or from the audio accessory.
P.382 requires compliant sockets and plugs to be backward compatible with legacy TRRS and TRS connectors. Therefore, P.382-compliant TRRRS connectors should allow for seamless integration when used on new products. TRRRS connectors enable the following audio applications: active noise canceling, binaural recording and others, where dual analog microphone lines can be directly connected to a host device. It was commonly found on Sony phones starting with the Xperia Z1 , Xperia XZ1 and Xperia 1 II .
Another TRRRS standard for 4.4 mm connectors following JEITA RC-8141C was introduced in 2015 and is used for balanced audio connections, in particular for headphone cables. This connector is often called a Pentaconn connector, following the brand name of Nippon DICS (NDICS). It is used by some Sony products like the M1Z Walkman of their Signature series and by some Sennheiser products like the HD 820 headphone or the HDV 820 DAC headphone amplifier. [ 86 ] [ 87 ]
Panel-mounted jacks may include switch contacts. Most commonly, a mono jack is provided with one normally closed (NC) contact, which is connected to the tip (live) connection when no plug is in the socket, and disconnected when a plug is inserted. Stereo sockets commonly provide two such NC contacts, one for the tip (left channel) and one for the ring or collar (right channel). Some jacks also have such a connection on the sleeve. As this contact is usually ground, it is not much use for signal switching but could be used to indicate to electronic circuitry that the jack is in use. Less commonly, jacks may feature normally open (NO) or change-over contacts or the switch contacts may be isolated from the connector signals.
The original purpose of these contacts was for switching in telephone exchanges, for which there were many patterns. Two sets of change-over contacts, isolated from the connector contacts, were common. The more recent pattern of one NC contact for each signal path, internally attached to the connector contact, stems from their use as headphone jacks. In many amplifiers and equipment containing them, such as electronic organs, a headphone jack is provided that disconnects the loudspeakers when in use. This is done by means of these switch contacts. In other equipment, a dummy load is provided when the headphones are not connected. This is also easily provided by means of these NC contacts.
Other uses for these contacts have been found. One is to interrupt a signal path in a mixing console to insert an effects processor. This is accomplished by using one NC contact of a stereo jack to connect the tip and ring together to effect a bypass when no plug is inserted. A similar arrangement is used in patch panels for normalization (see Patch panel § Normalization ).
Where a 3.5 mm or 2.5 mm jack is used as a DC power inlet connector, a switch contact may be used to disconnect an internal battery whenever an external power supply is connected, to prevent incorrect recharging of the battery.
To eliminate the need for a separate power switch, a standard stereo jack is used on most battery-powered guitar effects pedals . The internal battery has its negative terminal wired to the sleeve contact of the jack. When the user plugs in a two-conductor (mono) plug, the resulting short circuit between the sleeve and ring connects an internal battery to the unit's circuitry, ensuring that it powers up or down automatically whenever a signal lead is inserted or removed.
The connector assembly is usually made up of one or more hollow pins and one solid pin. The jack is then assembled with the pins separated by an insulating material.
Connectors that are tarnished, or that were not manufactured within tight tolerances, are prone to cause poor connections. [ 88 ] Depending upon the surface material of the connectors, tarnished ones can be cleaned with a burnishing agent (typical for solid brass contacts) or a contact cleaner (for plated contacts). [ 88 ]
A great number of jack configurations have been used, including the following, though the simple mono and stereo jack (examples A and B) are most common: [ 89 ]
When a phone connector is used to make a balanced audio connection, the two active conductors are used for differential versions of a monaural signal. The ring, used for the right channel in stereo systems, is used instead for the inverting input.
Where space is a premium, TRS connectors offer a more compact alternative to XLR connectors , and so are common in small audio mixing desks .
Another advantage offered by TRS connectors used for balanced microphone inputs is that a standard unbalanced signal lead using a TS phone jack can simply be plugged into such an input. The inverting input on the ring contact gets correctly grounded when it makes contact with the plug body.
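A small numerical sketch of why the balanced (differential) connection described above rejects interference, and why a TS plug with its ring grounded still delivers the signal. The values are idealised and purely illustrative.

```python
# Idealised illustration of balanced (differential) signalling on a TRS input.
# hot = tip (non-inverting), cold = ring (inverting); the receiver takes hot - cold.
signal = 0.5   # wanted audio sample, in volts
noise = 0.2    # interference picked up equally by both conductors

# Balanced source: the cold conductor carries the inverted signal.
hot, cold = signal + noise, -signal + noise
print("balanced receiver output:", hot - cold)    # 1.0 -> noise cancels (2x signal)

# Unbalanced TS plug: the ring (cold) is grounded by the plug body.
hot, cold = signal + noise, 0.0
print("unbalanced receiver output:", hot - cold)  # 0.7 -> signal passes, noise remains
```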
When using non-switching phone connectors to make balanced audio connections, the socket grounds the plug tip and ring when inserting or disconnecting the plug, and the ground mates last. This causes bursts of hum, cracks and pops and may stress some outputs as they will be short circuited briefly, or longer if the plug is left half in.
This problem does not occur with XLR or when using gauge B [ 96 ] which although it is of 0.25 in (6.35 mm) diameter has a smaller tip and a recessed ring so that the ground contact of the socket never touches the tip or ring of the plug. This type was designed for balanced audio use, being the original telephone switchboard connector and is still common in broadcast, telecommunications and many professional audio applications where it is vital that permanent circuits being monitored are not interrupted by the insertion or removal of connectors. This same tapered shape used in the gauge B plug can be seen also in aviation and military applications on various diameters of jack connector including the PJ-068 and Bantam plugs. The more common straight-sided profile used in domestic and commercial applications and discussed in most of this article is known as gauge A .
Alternatively, some switched audio jacks contain built-in isolated switches that only activate when the plug is fully inserted. [ 97 ] This can be used to avoid the insertion issue, for instance by wiring the connectors through a double pole, double throw switch that activates only upon full insertion. Or for instance by having the switch control a circuit that gracefully ramps up the audio once the plug is fully-inserted and mutes the audio when not fully-inserted.
Phone connectors with three conductors are also commonly used as unbalanced audio patch points (or insert points , or simply inserts ), with the output on many mixers found on the tip and the input on the ring. This is often expressed as tip send, ring return . [ c ] Older mixers and some outboard gear [ d ] have unbalanced insert points with ring send, tip return . [ e ]
In many implementations, the switch contact within the panel socket is used to close the circuit between send and return when the patch point has no plug inserted. Combining send and return functions via single 1 ⁄ 4 in TRS connectors halves the space needed for insert jack fields which would otherwise require two jacks, one for send and one for return . [ f ]
In some three-conductor TRS phone inserts, the concept is extended by using specially designed phone jacks that will accept a mono phone plug partly inserted to the first click and will then connect the tip to the signal path without breaking it. Standard TRS connectors may also be used in this way with varying success.
In some very compact equipment including modular synthesizers , 3.5 mm TS phone connectors are used for patch points. | https://en.wikipedia.org/wiki/Phone_connector_(audio) |