The product of exponentials (POE) method is a robotics convention for mapping the links of a spatial kinematic chain . It is an alternative to Denavit–Hartenberg parameterization. While the latter method uses the minimal number of parameters to represent joint motions, the former method has a number of advantages: uniform treatment of prismatic and revolute joints, definition of only two reference frames, and an easy geometric interpretation from the use of screw axes for each joint. [ 1 ]
The POE method was introduced by Roger W. Brockett in 1984. [ 2 ]
The following method is used to determine the product of exponentials for a kinematic chain, with the goal of parameterizing an affine transformation matrix between the base and tool frames in terms of the joint angles θ 1 . . . θ N . {\textstyle \theta _{1}...\theta _{N}.}
The first step is to select a "zero configuration" where all the joint angles are defined as being zero. The 4x4 matrix g s t ( 0 ) {\textstyle g_{st}(0)} describes the transformation from the base frame to the tool frame in this configuration. It is an affine transform consisting of the 3x3 rotation matrix R and the 3x1 translation vector p . The matrix is augmented to create a 4x4 square matrix.
g s t ( 0 ) = [ R p 0 1 ] {\displaystyle g_{st}(0)=\left[{\begin{array}{cc}R&p\\0&1\\\end{array}}\right]}
The following steps should be followed for each of N joints to produce an affine transform for each.
For each joint of the kinematic chain, an origin point q and an axis of action are selected for the zero configuration, using the coordinate frame of the base. In the case of a prismatic joint , the axis of action v is the vector along which the joint extends; in the case of a revolute joint , the axis of action ω is the vector normal to the plane of rotation.
A 6x1 twist vector is composed to describe the movement of each joint.
For a revolute joint, ξ i = ( ω i − ω i × q i ) . {\displaystyle \xi _{i}=\left({\begin{array}{c}\omega _{i}\\-\omega _{i}\times q_{i}\\\end{array}}\right).}
For a prismatic joint, ξ i = ( 0 v i ) . {\displaystyle \xi _{i}=\left({\begin{array}{c}0\\v_{i}\\\end{array}}\right).}
The resulting twist has two 3x1 vector components: linear motion along an axis ( v {\displaystyle v} ) and rotational motion about the same axis ( ω ). ξ = ( ω v ) . {\displaystyle \xi =\left({\begin{array}{c}\omega \\v\\\end{array}}\right).}
The 3x1 vector ω is rewritten in cross product matrix notation: ω ^ = [ 0 − ω 3 ω 2 ω 3 0 − ω 1 − ω 2 ω 1 0 ] . {\displaystyle {\hat {\omega }}=\left[{\begin{array}{ccc}0&-\omega _{3}&\omega _{2}\\\omega _{3}&0&-\omega _{1}\\-\omega _{2}&\omega _{1}&0\\\end{array}}\right].}
Per Rodrigues' rotation formula , the rotation matrix is calculated from the rotational component: e ω ^ θ = I + ω ^ sin θ + ω ^ 2 ( 1 − cos θ ) . {\displaystyle e^{{\hat {\omega }}\theta }=I+{\hat {\omega }}\sin {\theta }+{\hat {\omega }}^{2}(1-\cos {\theta }).}
The 3x1 translation vector is calculated from the components of the twist. t = ( I − e ω ^ θ ) ( ω × v ) + ω ω T v θ {\displaystyle t=(I-e^{{\hat {\omega }}\theta })(\omega \times v)+\omega \omega ^{T}v\theta } where I is the 3x3 identity matrix . [ 3 ]
For each joint i , the matrix exponential e ξ ^ i θ i {\textstyle e^{{\hat {\xi }}_{i}\theta _{i}}} for a given joint angle θ {\textstyle \theta } is composed from the rotation matrix and translation vector, combined into an augmented 4x4 matrix: e ξ ^ i θ i = [ e ω ^ θ t 0 1 ] {\displaystyle e^{{\hat {\xi }}_{i}\theta _{i}}=\left[{\begin{array}{cc}e^{{\hat {\omega }}\theta }&t\\0&1\\\end{array}}\right]}
The matrix exponentials are multiplied to produce a 4 × 4 affine transform g d ( θ 1 , … , θ n ) {\textstyle g_{d}(\theta _{1},\ldots ,\theta _{n})} from the base frame to the tool frame in a given configuration. g d = e ξ ^ 1 θ 1 ⋯ e ξ ^ n θ n g s t ( 0 ) . {\displaystyle g_{d}=e^{{\hat {\xi }}_{1}\theta _{1}}\cdots e^{{\hat {\xi }}_{n}\theta _{n}}g_{st}(0).}
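The construction above maps directly onto a few lines of code. The following Python sketch (using NumPy and SciPy; the two-joint planar arm, its screw axes and its zero-configuration pose are invented for illustration) builds the 4x4 matrix form of each twist, exponentiates it for the given joint angle, and multiplies the factors in order. It is a minimal sketch of the procedure, not a production implementation.

```python
import numpy as np
from scipy.linalg import expm

def hat(w):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def twist_matrix(xi):
    """4x4 matrix form of a 6-vector twist xi = (omega, v)."""
    w, v = xi[:3], xi[3:]
    T = np.zeros((4, 4))
    T[:3, :3] = hat(w)
    T[:3, 3] = v
    return T

def revolute_twist(omega, q):
    """Twist of a revolute joint with unit axis omega passing through point q."""
    return np.concatenate([omega, -np.cross(omega, q)])

def poe_forward_kinematics(twists, thetas, g_st0):
    """g(theta) = exp(xi_1 theta_1) ... exp(xi_n theta_n) g_st(0)."""
    g = np.eye(4)
    for xi, th in zip(twists, thetas):
        g = g @ expm(twist_matrix(xi) * th)
    return g @ g_st0

# Hypothetical planar 2R arm: both joints rotate about z, unit link lengths.
xi1 = revolute_twist(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 0.0]))
xi2 = revolute_twist(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))
g_st0 = np.eye(4)
g_st0[0, 3] = 2.0   # tool frame lies 2 units along x in the zero configuration
print(poe_forward_kinematics([xi1, xi2], [np.pi / 2, -np.pi / 2], g_st0))
```

Using the general matrix exponential here reproduces the closed-form rotation (Rodrigues' formula) and translation given above; in practice those closed forms are often used directly for speed.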
Forward kinematics may be computed directly from the POE chain for a given manipulator. This allows complex trajectories of the end-effector to be generated in Cartesian space ( Cartesian coordinate system ) given trajectories in the joint space. [ 4 ] Inverse kinematics for most common robot manipulators can be solved with the use of Paden–Kahan subproblems . The problem of inverse kinematics can also be approached with the use of nonlinear root-finding methods, such as the Newton–Raphson iterative method ( Newton's method ). [ 4 ]
The product of exponentials method uses only two frames of reference : the base frame S and the tool frame T . Constructing the Denavit–Hartenberg parameters for a robot requires the careful selection of tool frames in order to enable particular cancellations, such that the twists can be represented by four parameters instead of six. In the product of exponentials method, the joint twists can be constructed directly without considering adjacent joints in the chain. This makes the joint twists easier to construct, and easier to process by computer. [ 3 ] In addition, revolute and prismatic joints are treated uniformly in the POE method, while they are treated separately when using the Denavit–Hartenberg parameters. Moreover, there are multiple conventions for assigning link frames when using the Denavit–Hartenberg parameters.
There is no one-to-one mapping between the twist coordinates of the two methods, but an algorithmic mapping from POE to Denavit–Hartenberg parameters has been demonstrated. [ 5 ]
When analyzing parallel robots , the kinematic chain of each leg is analyzed individually and the tool frames are set equal to one another. This method is extensible to grasp analyses.
| https://en.wikipedia.org/wiki/Product_of_exponentials_formula
In NMR spectroscopy , the product operator formalism is a method used to determine the outcome of pulse sequences in a rigorous but straightforward way. With this method it is possible to predict how the bulk magnetization evolves with time under the action of pulses applied in different directions. It is a net improvement over the semi-classical vector model, which is not able to predict many of the results in NMR spectroscopy, and it is a simplification of the complete density matrix formalism.
In this model, for a single spin , four base operators exist: I x {\displaystyle I_{x}} , I y {\displaystyle I_{y}} , I z {\displaystyle I_{z}} and E / 2 {\displaystyle E/2} . Here I z {\displaystyle I_{z}} represents polarization (the population difference between the two spin states), I x {\displaystyle I_{x}} and I y {\displaystyle I_{y}} represent single quantum coherence (magnetization in the xy plane), and E / 2 {\displaystyle E/2} is half the unit operator. Many other, non-classical operators exist for coupled systems . Using this approach, the evolution of the magnetization under free precession is generated by I z {\displaystyle I_{z}} and corresponds to a rotation about the z-axis with a phase angle proportional to the chemical shift of the spin in question:
I x → ω τ I z cos ( ω τ ) I x − sin ( ω τ ) I y {\displaystyle I_{x}{\xrightarrow {\omega \tau I_{z}}}\cos(\omega \tau )I_{x}-\sin(\omega \tau )I_{y}}
Pulses about the x and y axes can be represented by I x {\displaystyle I_{x}} and I y {\displaystyle I_{y}} respectively; these allow the magnetization to be interconverted between planes and ultimately to be observed at the end of a sequence. Since every spin will evolve differently depending on its shift, with this formalism it is possible to calculate exactly where the magnetization will end up and hence devise pulse sequences to measure the desired signal while excluding others.
The product operator formalism is particularly useful in describing two-dimensional experiments such as COSY, HSQC and HMBC.
Throughout this section, the reduced Planck constant is set to ℏ = 1 {\displaystyle \hbar =1} for convenience.
The product operator formalism is usually applied to sets of spin-1/2 particles, since the fact that the individual operators satisfy L x 2 = L y 2 = L z 2 ∝ 1 {\displaystyle L_{x}^{2}=L_{y}^{2}=L_{z}^{2}\propto \mathbf {1} } , where 1 {\displaystyle \mathbf {1} } is the identity operator , makes the commutation relations of product operators particularly simple. In principle the formalism could be extended to higher spins, but in practice the general irreducible spherical tensor treatment is more often used. As such, we consider only the spin-1/2 case below.
The main idea of the formalism is to make it easier to follow the system density operator ρ {\displaystyle \rho } , which evolves under a Hamiltonian H {\displaystyle H} according to the Liouville–von Neumann equation as d ρ d t = − i [ H , ρ ] . {\displaystyle {\frac {d\rho }{dt}}=-\mathrm {i} \,[H,\rho ].}
For a time-independent Hamiltonian, the density operator inherits its solutions from the Schrödinger time-evolution operator U ( t ) = exp ( − i H t ) {\displaystyle U(t)=\exp(-\mathrm {i} Ht)} as ρ ( t ) = U ( t ) ρ ( 0 ) U † ( t ) . {\displaystyle \rho (t)=U(t)\,\rho (0)\,U^{\dagger }(t).}
Suppose a single spin-1/2 L {\displaystyle L} is in the state | ↑ ⟩ {\displaystyle |\uparrow \,\rangle } , which is an eigenstate of the z-spin operator L z {\displaystyle L_{z}} , that is L z | ↑ ⟩ = 1 2 | ↑ ⟩ {\displaystyle L_{z}|\uparrow \,\rangle ={\frac {1}{2}}|\uparrow \,\rangle } . Similarly L z | ↓ ⟩ = − 1 2 | ↓ ⟩ {\displaystyle L_{z}|\downarrow \,\rangle =-{\frac {1}{2}}|\downarrow \,\rangle } . Making use of the expansion of a Hermitian operator A {\displaystyle A} in terms of projections onto its eigenkets | a ⟩ {\displaystyle |a\rangle } with eigenvalues a {\displaystyle a} as A = ∑ a | a ⟩ ⟨ a | {\displaystyle A=\sum a|a\rangle \langle a|} , the associated density operator is ρ ↑ = | ↑ ⟩ ⟨ ↑ | = 1 2 1 + L z , {\displaystyle \rho _{\uparrow }=|\uparrow \,\rangle \langle \uparrow \,|={\tfrac {1}{2}}\mathbf {1} +L_{z},}
where 1 {\displaystyle \mathbf {1} } is the identity operator. Similarly, the density operator for the state | ↓ ⟩ {\displaystyle |\downarrow \,\rangle } is ρ ↓ = | ↓ ⟩ ⟨ ↓ | = 1 2 1 − L z . {\displaystyle \rho _{\downarrow }=|\downarrow \,\rangle \langle \downarrow \,|={\tfrac {1}{2}}\mathbf {1} -L_{z}.}
Since the spin operators L x , L y , L z {\displaystyle L_{x},L_{y},L_{z}} are all traceless and the expectation value of an operator A {\displaystyle A} for a system with density operator ρ {\displaystyle \rho } is ⟨ A ⟩ = tr ( ρ A ) {\displaystyle \langle A\rangle =\operatorname {tr} (\rho A)} , the terms proportional to the unit operator 1 {\displaystyle \mathbf {1} } do not affect the expectations of the spin operators. Additionally those parts do not evolve in time, since they trivially commute with the Hamiltonian. Therefore those terms can be ignored, and the state | ↑ ⟩ {\displaystyle |\uparrow \,\rangle } corresponds to a density operator + L z {\displaystyle +L_{z}} , while the state | ↓ ⟩ {\displaystyle |\downarrow \,\rangle } corresponds to a density operator − L z {\displaystyle -L_{z}} . In exactly the same manner, polarisation along the positive x-axis, that is a state | ↑ x ⟩ {\displaystyle |\uparrow _{x}\,\rangle } , corresponds to a density operator + L x {\displaystyle +L_{x}} . This idea extends naturally to multiple spins, where the states and operators are direct products of single-spin states and operators. Hence operator terms in the density operator have a direct duality with states.
In the case of two spins L , S {\displaystyle L,S} , the terms in the density operator (ignoring the identity on its own) can be interpreted as representing
where eg L z {\displaystyle L_{z}} is a shorthand for the Kronecker product L z ⊗ 1 S {\displaystyle L_{z}\otimes \mathbf {1} _{S}} , where 1 S {\displaystyle \mathbf {1} _{S}} is the identity operator on the S {\displaystyle S} spin, and similarly L x S z {\displaystyle L_{x}S_{z}} is a shorthand for L x ⊗ S z {\displaystyle L_{x}\otimes S_{z}} .
The factors of two in the 'true' two-spin operators are to allow for convenient commutation relations in this specific spin-1/2 case - see below. Note also that we could instead choose to expand the density operator in the basis L z , L ± = L x ± i L y {\displaystyle L_{z},L_{\pm }=L_{x}\pm \mathrm {i} \,L_{y}} etc, where the transverse operators have been replaced with raising and lowering operators . With quadrature detection, the observable associated with an individual spin is effectively the non-Hermitian L ± {\displaystyle L_{\pm }} , so this is sometimes more convenient.
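The direct-product construction and the commutation relations that follow are easy to check numerically. The following Python sketch (NumPy; the variable names are ours) builds the single-spin operators from the Pauli matrices, forms two-spin product operators with np.kron, and verifies one commutator of the cyclic type used below, [2LxSz, 2LySz] = i Lz.

```python
import numpy as np

# Spin-1/2 operators L = sigma/2 (with hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
one = np.eye(2, dtype=complex)

# Two-spin product operators: Lz is shorthand for Lz (x) 1_S, etc.
Lx, Ly, Lz = (np.kron(s, one) for s in (sx, sy, sz))
Sx, Sy, Sz = (np.kron(one, s) for s in (sx, sy, sz))

def comm(a, b):
    return a @ b - b @ a

# Single-spin operators are traceless and square to (1/4) * identity
assert abs(np.trace(sx)) < 1e-12 and np.allclose(sx @ sx, np.eye(2) / 4)

# Cyclic commutator of the 'true' two-spin operators: [2LxSz, 2LySz] = i Lz
lhs = comm(2 * Lx @ Sz, 2 * Ly @ Sz)
print(np.allclose(lhs, 1j * Lz))   # True
```

The factor of two in the two-spin operators is exactly what makes such commutators close onto another product operator with a simple factor of i, which is what the arrow notation below exploits.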
Consider operators A , B , C {\displaystyle A,B,C} that obey the cyclic commutation relations [ A , B ] = i C , [ B , C ] = i A , [ C , A ] = i B . {\displaystyle [A,B]=\mathrm {i} \,C,\qquad [B,C]=\mathrm {i} \,A,\qquad [C,A]=\mathrm {i} \,B.}
In fact only the first two relations are necessary for the following derivation, but since we are usually working with operators associated with Cartesian directions, such as the individual angular momentum operators, the third commutator follows by a symmetry argument.
Introduce also the commutation superoperator F ^ {\displaystyle {\hat {F}}} of an operator F {\displaystyle F} (in our case, this is more formally related to the adjoint representation of the Lie algebra whose elements are A , B , C {\displaystyle A,B,C} ), which acts as F ^ G = [ F , G ] . {\displaystyle {\hat {F}}\,G=[F,G].}
In particular, for the cyclic operators, we have
and consequently for integer n ≥ 0 {\displaystyle n\geq 0}
An identity for two operators F , G {\displaystyle F,G} is
which can be derived by putting F → t F {\displaystyle F\to tF} where t {\displaystyle t} is a scalar parameter, differentiating both sides with respect to t {\displaystyle t} , and noting that both sides satisfy the same differential equation in that parameter, with the same initial condition at t = 0 {\displaystyle t=0} . In particular, for some scalar parameter θ {\displaystyle \theta } , we have
where the final equality follows from recognising the Taylor series for sine and cosine. Now suppose that the density operator at time zero is ρ ( 0 ) = A {\displaystyle \rho (0)=A} , and it is allowed to freely evolve under the Hamiltonian H = α B {\displaystyle H=\alpha \,B} where α {\displaystyle \alpha } is some scalar. Using the results above, the density operator at some later time t {\displaystyle t} will be given by
The interpretation of this is that although nuclear spin angular momentum itself is not connected to rotations in three-dimensional space in the same way that angular momentum is, the evolution of the density operator can be viewed as rotations in an abstract space, in which the operators A , B , C {\displaystyle A,B,C} are the generators of rotations about the axes. An example of such a set of generators is just the spin operators L x , L y , L z {\displaystyle L_{x},L_{y},L_{z}} themselves.
We now also introduce the 'arrow notation' typically used in NMR, which writes the general evolution given above as the shorthand
With more specific reference to the radiofrequency pulses applied during NMR experiments, a hard pulse with tip angle θ {\displaystyle \theta } around a direction q {\displaystyle q} is written as ( θ ) q {\displaystyle (\theta )_{q}} above the arrow and corresponds to taking B = L q {\displaystyle B=L_{q}} as the rotation generator in Equation 1 . When there is no ambiguity, the arrow label may be omitted, or be eg text instead.
Note that a more complicated calculation has now been reduced to a simpler procedure that requires no knowledge of the underlying quantum mechanics, especially since the subspaces of cyclic operators can be tabulated in advance.
The Hamiltonian for a single spin L {\displaystyle L} evolving under a chemical shift of angular frequency ω {\displaystyle \omega } is H = ω L z , {\displaystyle H=\omega L_{z},}
which means that in an ensemble of many such spins with slightly different chemical shifts, there is a dephasing of the magnetisation in the x {\displaystyle x} - y {\displaystyle y} plane. Consider the pulse sequence ( π 2 ) x − τ − ( π ) x − τ , {\displaystyle \left({\tfrac {\pi }{2}}\right)_{x}\;-\;\tau \;-\;\left(\pi \right)_{x}\;-\;\tau ,}
where τ {\displaystyle \tau } is a time interval. Starting in an equilibrium state with all the polarisation along the z {\displaystyle z} -axis, the evolution of an individual spin in the ensemble is
Hence this sequence refocuses the transverse magnetisation produced by the first pulse, independent of the value of the chemical shift.
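A small numerical check of this refocusing property is possible by propagating the density operator directly with matrix exponentials. The sketch below (Python with NumPy and SciPy; the sign conventions and names are ours, not taken from the article) applies a 90° pulse about x, a free-precession period, a 180° pulse about x and a second free-precession period, and confirms that the final operator is the same for two different chemical-shift offsets.

```python
import numpy as np
from scipy.linalg import expm

Lx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Lz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def evolve(rho, H, t):
    """rho -> exp(-iHt) rho exp(+iHt)."""
    U = expm(-1j * H * t)
    return U @ rho @ U.conj().T

def echo(omega, tau=1.0):
    rho = Lz.copy()                      # equilibrium polarisation along z
    rho = evolve(rho, Lx, np.pi / 2)     # (pi/2)_x pulse
    rho = evolve(rho, omega * Lz, tau)   # free precession for tau
    rho = evolve(rho, Lx, np.pi)         # (pi)_x pulse
    rho = evolve(rho, omega * Lz, tau)   # free precession for tau
    return rho

# The final operator does not depend on the chemical shift offset:
print(np.allclose(echo(omega=3.7), echo(omega=-1.2)))   # True
```

The product operator calculation reaches the same conclusion in three lines of arrow notation, which is the point of the formalism.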
As an indication of the utility of the formalism, suppose instead that we tried to reach the same result using states only and therefore the Schrödinger time evolution operators. This amounts to trying to simplify the unitary propagator U {\displaystyle U} taking the initial state | ψ 0 ⟩ {\displaystyle |\psi _{0}\rangle } to the final state | ψ ⟩ {\displaystyle |\psi \rangle } as | ψ ⟩ = U | ψ 0 ⟩ {\displaystyle |\psi \rangle =U|\psi _{0}\rangle } , where explicitly U = e − i ω τ L z e − i π L x e − i ω τ L z . {\displaystyle U=e^{-\mathrm {i} \omega \tau L_{z}}\,e^{-\mathrm {i} \pi L_{x}}\,e^{-\mathrm {i} \omega \tau L_{z}}.}
Essentially we want to find the propagator in the form U = exp C {\displaystyle U=\exp C} , that is as a single exponential of a combination of operators, because that gives the effective Hamiltonian acting during the sequence. Since the arguments of the exponentials in the original form of the propagator do not commute, this amounts to solving a specific example of the Baker–Campbell–Hausdorff (BCH) problem. In this relatively simple case we can solve the BCH problem using the fact that U f ( A ) U † = f ( U A U † ) {\displaystyle Uf(A)U^{\dagger }=f(UAU^{\dagger })} for unitary operator U {\displaystyle U} , operator A {\displaystyle A} and function f {\displaystyle f} , as well as the mathematical similarity of the spin operators with the physical rotation generators, which allow us to write
Hence U = exp ( − i π L x ) {\displaystyle U=\exp(-\mathrm {i} \pi L_{x})} and only the effect of the 180° pulse remains, which agrees with the product operator treatment. For larger sequences of pulses this state treatment quickly becomes even more unwieldy, unless more advanced methods such as exact effective Hamiltonian theory (which gives closed-form expressions for the entangled propagators via the Cayley–Hamilton theorem and eigendecompositions) are used.
As an extension of the refocussing pulse treated above, consider a set of two pulses with arbitrary flip angles α 1 {\displaystyle \alpha _{1}} and α 2 {\displaystyle \alpha _{2}} , that is the sequence ( α 1 ) x − τ − ( α 2 ) x − τ , {\displaystyle \left(\alpha _{1}\right)_{x}\;-\;\tau \;-\;\left(\alpha _{2}\right)_{x}\;-\;\tau ,}
where again τ {\displaystyle \tau } is a time interval. Liberally dropping irrelevant terms, the evolution for a single spin with offset ω {\displaystyle \omega } up to just after the second pulse is
Now consider an ensemble of spins in a magnetic field that is sufficiently inhomogeneous to completely dephase the spins in the interval between the pulses. After the second pulse, we can decompose the remaining terms into a sum of two spin populations differing only in the sign of the L y {\displaystyle L_{y}} term, in the sense that for an individual spin we have
where we used the identities cos 2 θ + sin 2 θ = 1 {\displaystyle \cos ^{2}\theta +\sin ^{2}\theta =1} and cos 2 θ − sin 2 θ = cos 2 θ {\displaystyle \cos ^{2}\theta -\sin ^{2}\theta =\cos 2\theta } .
It is the spins in the new population generated by the second pulse, namely the one with + L y {\displaystyle +L_{y}} , that will lead to the formation of an echo after evolution for the next τ {\displaystyle \tau } interval. Therefore, remembering to include the sin α 1 {\displaystyle \sin \alpha _{1}} introduced by the first pulse, the amplitude of the resulting Hahn echo relative to that produced by an ideal 90°–180° refocussing pulse sequence is roughly sin α 1 sin 2 ( α 2 2 ) . {\displaystyle \sin \alpha _{1}\,\sin ^{2}\!\left({\tfrac {\alpha _{2}}{2}}\right).}
Note that this is not an exact result, because it considers only the refocussing of polarisation that was transverse immediately before the second pulse. In reality there will be further transverse components originating from the tipping of the longitudinal magnetisation that remained after the first pulse. However, for many tip angles, this is a good rule of thumb.
To instead arrive at this result using the state formalism, we would have had to non-trivially evaluate the rotation propagator as
and then evaluate a transition probability by considering the result of applying this to a state representing polarisation in the transverse plane.
DEPT (Distortionless Enhancement by Polarisation Transfer) is a pulse sequence used to distinguish the multiplicity of hydrogens bonded to carbon, that is, it can separate C, CH, CH 2 and CH 3 groups. It does this by exploiting the heteronuclear carbon-hydrogen J {\displaystyle J} - coupling and varying the tip angle of the final pulse in the sequence. The basic pulse sequence is shown below.
Under the weak coupling assumption, the chemical shift terms commute with the J {\displaystyle J} -coupling term in the Hamiltonian. Hence we can ignore the refocussed chemical shift (see § The 180°-refocussing pulse ) in the two intervals containing π {\displaystyle \pi } -pulses, namely ( 1 ) → ( 4 ) {\displaystyle (1)\to (4)} and ( 3 ) → ( 6 ) {\displaystyle (3)\to (6)} , and additionally refrain from evaluating the chemical shift evolution in the last τ 2 {\displaystyle {\frac {\tau }{2}}} period ( 5 ) → ( 6 ) {\displaystyle (5)\to (6)} . The pulse separation time τ 2 {\displaystyle {\frac {\tau }{2}}} is adjusted to the coupling strength J {\displaystyle J} (with associated Hamiltonian coefficient α = π J {\displaystyle \alpha =\pi J} ) such that it satisfies α τ 2 = π 2 , {\displaystyle \alpha \,{\frac {\tau }{2}}={\frac {\pi }{2}},}
because then the first term in the evolved density operator in Equation 2 vanishes under the pure coupling evolution between the pulses.
Label the hydrogen spin as L {\displaystyle L} , and the carbon spin by S {\displaystyle S} . For illustrative purposes, we assume that the equilibrium state only has polarisation on the L {\displaystyle L} -spin (in reality, there will also be polarisation on the S {\displaystyle S} spin, with the relative populations determined by the thermal Boltzmann factors ). The J {\displaystyle J} -coupling Hamiltonian is H = π J 2 L z S z , {\displaystyle H=\pi J\,2L_{z}S_{z},}
which gives the following evolution
The non-trivial commutators used to identify the cyclic subspace for ( 1 ) → ( 2 ) {\displaystyle (1)\to (2)} are
and consequently the next cyclic rotation
where we used the 'mixed-product identity' ( A ⊗ B ) ( C ⊗ D ) = A C ⊗ B D {\displaystyle (A\otimes B)(C\otimes D)=AC\otimes BD} , which relates the matrix and Kronecker products for compatible dimensions of A , B , C , D {\displaystyle A,B,C,D} , and also the fact that since the two eigenvalues of any of the spin-1/2 operators S x , S y , S z {\displaystyle S_{x},S_{y},S_{z}} are s = ± 1 2 {\displaystyle s=\pm {\frac {1}{2}}} , any of their squares are given by s 2 1 S {\displaystyle s^{2}\mathbf {1} _{S}} by the Cayley–Hamilton theorem .
Note also that the 2 L x L y {\displaystyle 2L_{x}L_{y}} term is invariant under the J {\displaystyle J} -coupling evolution. That is that the term commutes with the Hamiltonian, and in this case, that can be manually confirmed by evaluating the commutator [ 2 L x L y , 2 L z S z ] = 0 {\displaystyle [2L_{x}L_{y},2L_{z}S_{z}]=0} using the matrix representations of the spin operators.
Now label the two hydrogen spins as L , L ′ {\displaystyle L,L'} and the carbon spin by S {\displaystyle S} . The J {\displaystyle J} -coupling Hamiltonian is now H = π J ( 2 L z S z + 2 L z ′ S z ) , {\displaystyle H=\pi J\left(2L_{z}S_{z}+2L_{z}'S_{z}\right),}
which gives the following evolution
{\displaystyle {\begin{aligned}(0)&:\ L_{z}+L_{z}'\\(0)\to (1)&:\ L_{z}+L_{z}'\xrightarrow {\left({\frac {\pi }{2}}\right)_{xL}} -L_{y}+L_{z}'\xrightarrow {\left({\frac {\pi }{2}}\right)_{xL'}} -L_{y}-L_{y}'\\(1)\to (2)&:\ {-L_{y}-L_{y}'}\xrightarrow {J-{\text{coup.}}} 2L_{x}S_{z}+2L_{x}'S_{z}\\(2)\to (3)&:\ 2L_{x}S_{z}+2L_{x}'S_{z}\xrightarrow {\left({\frac {\pi }{2}}\right)_{xS}} -2L_{x}S_{y}-2L_{x}'S_{y}\xrightarrow {\left(\pi \right)_{xL}} -2L_{x}S_{y}-2L_{x}'S_{y}\xrightarrow {\left(\pi \right)_{xL'}} -2L_{x}S_{y}-2L_{x}'S_{y}\\(3)\to (4)&:\ -2L_{x}S_{y}-2L_{x}'S_{y}\xrightarrow {J-{\text{coup.}}} 4L_{x}L_{z}'S_{x}+4L_{z}L_{x}'S_{x}\\(4)\to (5)&:\ 4L_{x}L_{z}'S_{x}+4L_{z}L_{x}'S_{x}\xrightarrow {\left(\pi \right)_{xS}} 4L_{x}L_{z}'S_{x}+4L_{z}L_{x}'S_{x}\xrightarrow {(\theta )_{yL}} 4L_{z}L_{x}'S_{x}\cos \theta -4L_{z}L_{z}'S_{x}\sin \theta +{\text{others}}\xrightarrow {(\theta )_{yL'}} -8L_{z}L_{z}'S_{x}\cos \theta \sin \theta +{\text{others}}\\(5)\to (6)&:\ -8L_{z}L_{z}'S_{x}\cos \theta \sin \theta +{\text{others}}\xrightarrow {J_{LS}-{\text{coup.}}} -4L_{z}'S_{y}\cos \theta \sin \theta +{\text{others}}\xrightarrow {J_{L'S}-{\text{coup.}}} 2S_{x}\cos \theta \sin \theta +{\text{others}}\end{aligned}}}
where 'others' denotes various terms that can safely be ignored because they will not evolve into observable transverse polarisation on the target spin S {\displaystyle S} . The required cyclic commutators for dealing with the J {\displaystyle J} -coupling evolution are the following three sets (and their L ↔ L ′ {\displaystyle L\leftrightarrow L'} versions if needed)
A similar (but more lengthy) treatment gives the final observable term as 3 S x cos 2 θ sin θ {\displaystyle 3S_{x}\cos ^{2}\theta \sin \theta } .
Refer to § DEPT (Distortionless Enhancement by Polarisation Transfer) for the notation used in this example.
APT is similar to DEPT in that it detects carbon multiplicity. However, it has additional degeneracies: it gives identical positive signals for C and CH 2 , and identical negative signals for CH and CH 3 . One variation on the basic pulse sequence is shown below.
The key observation is that since we can again ignore the refocussed chemical shift, the only relevant dynamics occur in the interval with no hydrogen decoupling, where we can consider solely the J {\displaystyle J} -coupling. By using an interval twice as long as in the DEPT case, we ensure that a density operator of L y {\displaystyle L_{y}} at the start of the interval just has its sign inverted following the coupling (since this corresponds to α t = π {\displaystyle \alpha t=\pi } in the general treatment, and cos π = − 1 , sin π = 0 {\displaystyle \cos \pi =-1,\,\sin \pi =0} ). The Hamiltonians for the couplings to each of the n {\displaystyle n} separate neighbouring hydrogen atoms commute, so the overall effect is to multiply by a factor ( − 1 ) n {\displaystyle (-1)^{n}} . This motivates the alternating sign of the signal mentioned above. | https://en.wikipedia.org/wiki/Product_operator_formalism |
In mathematics , given partial orders ⪯ {\displaystyle \preceq } and ⊑ {\displaystyle \sqsubseteq } on sets A {\displaystyle A} and B {\displaystyle B} , respectively, the product order [ 1 ] [ 2 ] [ 3 ] [ 4 ] (also called the coordinatewise order [ 5 ] [ 3 ] [ 6 ] or componentwise order [ 2 ] [ 7 ] ) is a partial order ≤ {\displaystyle \leq } on the Cartesian product A × B . {\displaystyle A\times B.} Given two pairs ( a 1 , b 1 ) {\displaystyle \left(a_{1},b_{1}\right)} and ( a 2 , b 2 ) {\displaystyle \left(a_{2},b_{2}\right)} in A × B , {\displaystyle A\times B,} declare that ( a 1 , b 1 ) ≤ ( a 2 , b 2 ) {\displaystyle \left(a_{1},b_{1}\right)\leq \left(a_{2},b_{2}\right)} if a 1 ⪯ a 2 {\displaystyle a_{1}\preceq a_{2}} and b 1 ⊑ b 2 . {\displaystyle b_{1}\sqsubseteq b_{2}.}
Another possible order on A × B {\displaystyle A\times B} is the lexicographical order . It is a total order if both A {\displaystyle A} and B {\displaystyle B} are totally ordered. However the product order of two total orders is not in general total; for example, the pairs ( 0 , 1 ) {\displaystyle (0,1)} and ( 1 , 0 ) {\displaystyle (1,0)} are incomparable in the product order of the order 0 < 1 {\displaystyle 0<1} with itself. The lexicographic combination of two total orders is a linear extension of their product order, and thus the product order is a subrelation of the lexicographic order. [ 3 ]
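As a concrete illustration, the product order and the lexicographic order on pairs can be written in a few lines of Python (a sketch using plain tuples and Python's built-in comparison on the components):

```python
def product_le(p, q):
    """(a1, b1) <= (a2, b2) in the product order: compare componentwise."""
    return all(x <= y for x, y in zip(p, q))

def lex_le(p, q):
    """Lexicographic order induced by the same component orders."""
    for x, y in zip(p, q):
        if x != y:
            return x <= y
    return True

# (0, 1) and (1, 0) are incomparable in the product order ...
print(product_le((0, 1), (1, 0)), product_le((1, 0), (0, 1)))   # False False
# ... but the lexicographic order, a linear extension, does compare them:
print(lex_le((0, 1), (1, 0)))                                   # True
```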
The Cartesian product with the product order is the categorical product in the category of partially ordered sets with monotone functions . [ 7 ]
The product order generalizes to arbitrary (possibly infinitary) Cartesian products.
Suppose A ≠ ∅ {\displaystyle A\neq \varnothing } is a set and for every a ∈ A , {\displaystyle a\in A,} ( I a , ≤ ) {\displaystyle \left(I_{a},\leq \right)} is a preordered set.
Then the product preorder on ∏ a ∈ A I a {\displaystyle \prod _{a\in A}I_{a}} is defined by declaring for any i ∙ = ( i a ) a ∈ A {\displaystyle i_{\bullet }=\left(i_{a}\right)_{a\in A}} and j ∙ = ( j a ) a ∈ A {\displaystyle j_{\bullet }=\left(j_{a}\right)_{a\in A}} in ∏ a ∈ A I a , {\displaystyle \prod _{a\in A}I_{a},} that i ∙ ≤ j ∙ if and only if i a ≤ j a for every a ∈ A . {\displaystyle i_{\bullet }\leq j_{\bullet }\quad {\text{if and only if}}\quad i_{a}\leq j_{a}{\text{ for every }}a\in A.}
If every ( I a , ≤ ) {\displaystyle \left(I_{a},\leq \right)} is a partial order then so is the product preorder.
Furthermore, given a set A , {\displaystyle A,} the product order over the Cartesian product ∏ a ∈ A { 0 , 1 } {\displaystyle \prod _{a\in A}\{0,1\}} can be identified with the inclusion order of subsets of A . {\displaystyle A.} [ 4 ]
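This identification can also be seen computationally: a subset corresponds to its 0/1 indicator tuple, and the componentwise order on indicators is exactly set inclusion. A short Python sketch (the ground set and subsets are arbitrary examples):

```python
A = ["x", "y", "z"]

def indicator(subset):
    """0/1 tuple of a subset of A, indexed in the order of A."""
    return tuple(1 if a in subset else 0 for a in A)

def product_le(p, q):
    return all(x <= y for x, y in zip(p, q))

S, T = {"x"}, {"x", "z"}
# S is a subset of T exactly when indicator(S) <= indicator(T) componentwise
print(S <= T, product_le(indicator(S), indicator(T)))   # True True
```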
The notion applies equally well to preorders . The product order is also the categorical product in a number of richer categories, including lattices and Boolean algebras . [ 7 ]
| https://en.wikipedia.org/wiki/Product_order
In calculus , the product rule (or Leibniz rule [ 1 ] or Leibniz product rule ) is a formula used to find the derivatives of products of two or more functions . For two functions, it may be stated in Lagrange's notation as ( u ⋅ v ) ′ = u ′ ⋅ v + u ⋅ v ′ {\displaystyle (u\cdot v)'=u'\cdot v+u\cdot v'} or in Leibniz's notation as d d x ( u ⋅ v ) = d u d x ⋅ v + u ⋅ d v d x . {\displaystyle {\frac {d}{dx}}(u\cdot v)={\frac {du}{dx}}\cdot v+u\cdot {\frac {dv}{dx}}.}
The rule may be extended or generalized to products of three or more functions, to a rule for higher-order derivatives of a product, and to other contexts.
Discovery of this rule is credited to Gottfried Leibniz , who demonstrated it using "infinitesimals" (a precursor to the modern differential ). [ 2 ] (However, J. M. Child, a translator of Leibniz's papers, [ 3 ] argues that it is due to Isaac Barrow .) Here is Leibniz's argument: [ 4 ] Let u and v be functions. Then d(uv) is the same thing as the difference between two successive uv' s; let one of these be uv , and the other u+du times v+dv ; then: d ( u ⋅ v ) = ( u + d u ) ⋅ ( v + d v ) − u ⋅ v = u ⋅ d v + v ⋅ d u + d u ⋅ d v . {\displaystyle {\begin{aligned}d(u\cdot v)&{}=(u+du)\cdot (v+dv)-u\cdot v\\&{}=u\cdot dv+v\cdot du+du\cdot dv.\end{aligned}}}
Since the term du · dv is "negligible" (compared to du and dv ), Leibniz concluded that d ( u ⋅ v ) = v ⋅ d u + u ⋅ d v {\displaystyle d(u\cdot v)=v\cdot du+u\cdot dv} and this is indeed the differential form of the product rule. If we divide through by the differential dx , we obtain d d x ( u ⋅ v ) = v ⋅ d u d x + u ⋅ d v d x {\displaystyle {\frac {d}{dx}}(u\cdot v)=v\cdot {\frac {du}{dx}}+u\cdot {\frac {dv}{dx}}} which can also be written in Lagrange's notation as ( u ⋅ v ) ′ = v ⋅ u ′ + u ⋅ v ′ . {\displaystyle (u\cdot v)'=v\cdot u'+u\cdot v'.}
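The differential argument can also be checked numerically: for smooth u and v, a symmetric difference quotient of uv converges to u'v + uv'. A minimal Python sketch (the example functions are arbitrary choices):

```python
import math

u, du = math.sin, math.cos          # u and its derivative
v, dv = math.exp, math.exp          # v and its derivative

def d_product(x, h=1e-6):
    """Symmetric difference quotient of u*v at x."""
    return (u(x + h) * v(x + h) - u(x - h) * v(x - h)) / (2 * h)

x = 0.7
print(d_product(x))                 # numerical derivative of u*v at x
print(du(x) * v(x) + u(x) * dv(x))  # u'v + uv', agrees to about 1e-9
```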
Let h ( x ) = f ( x ) g ( x ) and suppose that f and g are each differentiable at x . We want to prove that h is differentiable at x and that its derivative, h ′ ( x ) , is given by f ′ ( x ) g ( x ) + f ( x ) g ′ ( x ) . To do this, f ( x ) g ( x + Δ x ) − f ( x ) g ( x + Δ x ) {\displaystyle f(x)g(x+\Delta x)-f(x)g(x+\Delta x)} (which is zero, and thus does not change the value) is added to the numerator to permit its factoring, and then properties of limits are used. h ′ ( x ) = lim Δ x → 0 h ( x + Δ x ) − h ( x ) Δ x = lim Δ x → 0 f ( x + Δ x ) g ( x + Δ x ) − f ( x ) g ( x ) Δ x = lim Δ x → 0 f ( x + Δ x ) g ( x + Δ x ) − f ( x ) g ( x + Δ x ) + f ( x ) g ( x + Δ x ) − f ( x ) g ( x ) Δ x = lim Δ x → 0 [ f ( x + Δ x ) − f ( x ) ] ⋅ g ( x + Δ x ) + f ( x ) ⋅ [ g ( x + Δ x ) − g ( x ) ] Δ x = lim Δ x → 0 f ( x + Δ x ) − f ( x ) Δ x ⋅ lim Δ x → 0 g ( x + Δ x ) + lim Δ x → 0 f ( x ) ⋅ lim Δ x → 0 g ( x + Δ x ) − g ( x ) Δ x = f ′ ( x ) g ( x ) + f ( x ) g ′ ( x ) . {\displaystyle {\begin{aligned}h'(x)&=\lim _{\Delta x\to 0}{\frac {h(x+\Delta x)-h(x)}{\Delta x}}\\[5pt]&=\lim _{\Delta x\to 0}{\frac {f(x+\Delta x)g(x+\Delta x)-f(x)g(x)}{\Delta x}}\\[5pt]&=\lim _{\Delta x\to 0}{\frac {f(x+\Delta x)g(x+\Delta x)-f(x)g(x+\Delta x)+f(x)g(x+\Delta x)-f(x)g(x)}{\Delta x}}\\[5pt]&=\lim _{\Delta x\to 0}{\frac {{\big [}f(x+\Delta x)-f(x){\big ]}\cdot g(x+\Delta x)+f(x)\cdot {\big [}g(x+\Delta x)-g(x){\big ]}}{\Delta x}}\\[5pt]&=\lim _{\Delta x\to 0}{\frac {f(x+\Delta x)-f(x)}{\Delta x}}\cdot \lim _{\Delta x\to 0}g(x+\Delta x)+\lim _{\Delta x\to 0}f(x)\cdot \lim _{\Delta x\to 0}{\frac {g(x+\Delta x)-g(x)}{\Delta x}}\\[5pt]&=f'(x)g(x)+f(x)g'(x).\end{aligned}}} The fact that lim Δ x → 0 g ( x + Δ x ) = g ( x ) {\displaystyle \lim _{\Delta x\to 0}g(x+\Delta x)=g(x)} follows from the fact that differentiable functions are continuous.
By definition, if f , g : R → R {\displaystyle f,g:\mathbb {R} \to \mathbb {R} } are differentiable at x {\displaystyle x} , then we can write linear approximations : f ( x + h ) = f ( x ) + f ′ ( x ) h + ε 1 ( h ) {\displaystyle f(x+h)=f(x)+f'(x)h+\varepsilon _{1}(h)} and g ( x + h ) = g ( x ) + g ′ ( x ) h + ε 2 ( h ) , {\displaystyle g(x+h)=g(x)+g'(x)h+\varepsilon _{2}(h),} where the error terms are small with respect to h : that is, lim h → 0 ε 1 ( h ) h = lim h → 0 ε 2 ( h ) h = 0 , {\textstyle \lim _{h\to 0}{\frac {\varepsilon _{1}(h)}{h}}=\lim _{h\to 0}{\frac {\varepsilon _{2}(h)}{h}}=0,} also written ε 1 , ε 2 ∼ o ( h ) {\displaystyle \varepsilon _{1},\varepsilon _{2}\sim o(h)} . Then: f ( x + h ) g ( x + h ) − f ( x ) g ( x ) = ( f ( x ) + f ′ ( x ) h + ε 1 ( h ) ) ( g ( x ) + g ′ ( x ) h + ε 2 ( h ) ) − f ( x ) g ( x ) = f ( x ) g ( x ) + f ′ ( x ) g ( x ) h + f ( x ) g ′ ( x ) h − f ( x ) g ( x ) + error terms = f ′ ( x ) g ( x ) h + f ( x ) g ′ ( x ) h + o ( h ) . {\displaystyle {\begin{aligned}f(x+h)g(x+h)-f(x)g(x)&=(f(x)+f'(x)h+\varepsilon _{1}(h))(g(x)+g'(x)h+\varepsilon _{2}(h))-f(x)g(x)\\[.5em]&=f(x)g(x)+f'(x)g(x)h+f(x)g'(x)h-f(x)g(x)+{\text{error terms}}\\[.5em]&=f'(x)g(x)h+f(x)g'(x)h+o(h).\end{aligned}}} The "error terms" consist of items such as f ( x ) ε 2 ( h ) , f ′ ( x ) g ′ ( x ) h 2 {\displaystyle f(x)\varepsilon _{2}(h),f'(x)g'(x)h^{2}} and h f ′ ( x ) ε 1 ( h ) {\displaystyle hf'(x)\varepsilon _{1}(h)} which are easily seen to have magnitude o ( h ) . {\displaystyle o(h).} Dividing by h {\displaystyle h} and taking the limit h → 0 {\displaystyle h\to 0} gives the result.
This proof uses the chain rule and the quarter square function q ( x ) = 1 4 x 2 {\displaystyle q(x)={\tfrac {1}{4}}x^{2}} with derivative q ′ ( x ) = 1 2 x {\displaystyle q'(x)={\tfrac {1}{2}}x} . We have: u v = q ( u + v ) − q ( u − v ) , {\displaystyle uv=q(u+v)-q(u-v),} and differentiating both sides gives: f ′ = q ′ ( u + v ) ( u ′ + v ′ ) − q ′ ( u − v ) ( u ′ − v ′ ) = ( 1 2 ( u + v ) ( u ′ + v ′ ) ) − ( 1 2 ( u − v ) ( u ′ − v ′ ) ) = 1 2 ( u u ′ + v u ′ + u v ′ + v v ′ ) − 1 2 ( u u ′ − v u ′ − u v ′ + v v ′ ) = v u ′ + u v ′ . {\displaystyle {\begin{aligned}f'&=q'(u+v)(u'+v')-q'(u-v)(u'-v')\\[4pt]&=\left({\tfrac {1}{2}}(u+v)(u'+v')\right)-\left({\tfrac {1}{2}}(u-v)(u'-v')\right)\\[4pt]&={\tfrac {1}{2}}(uu'+vu'+uv'+vv')-{\tfrac {1}{2}}(uu'-vu'-uv'+vv')\\[4pt]&=vu'+uv'.\end{aligned}}}
The product rule can be considered a special case of the chain rule for several variables, applied to the multiplication function m ( u , v ) = u v {\displaystyle m(u,v)=uv} : d ( u v ) d x = ∂ ( u v ) ∂ u d u d x + ∂ ( u v ) ∂ v d v d x = v d u d x + u d v d x . {\displaystyle {d(uv) \over dx}={\frac {\partial (uv)}{\partial u}}{\frac {du}{dx}}+{\frac {\partial (uv)}{\partial v}}{\frac {dv}{dx}}=v{\frac {du}{dx}}+u{\frac {dv}{dx}}.}
Let u and v be continuous functions in x , and let dx , du and dv be infinitesimals within the framework of non-standard analysis , specifically the hyperreal numbers . Using st to denote the standard part function that associates to a finite hyperreal number the real infinitely close to it, this gives d ( u v ) d x = st ( ( u + d u ) ( v + d v ) − u v d x ) = st ( u v + u ⋅ d v + v ⋅ d u + d u ⋅ d v − u v d x ) = st ( u ⋅ d v + v ⋅ d u + d u ⋅ d v d x ) = st ( u d v d x + ( v + d v ) d u d x ) = u d v d x + v d u d x . {\displaystyle {\begin{aligned}{\frac {d(uv)}{dx}}&=\operatorname {st} \left({\frac {(u+du)(v+dv)-uv}{dx}}\right)\\&=\operatorname {st} \left({\frac {uv+u\cdot dv+v\cdot du+du\cdot dv-uv}{dx}}\right)\\&=\operatorname {st} \left({\frac {u\cdot dv+v\cdot du+du\cdot dv}{dx}}\right)\\&=\operatorname {st} \left(u{\frac {dv}{dx}}+(v+dv){\frac {du}{dx}}\right)\\&=u{\frac {dv}{dx}}+v{\frac {du}{dx}}.\end{aligned}}} This was essentially Leibniz 's proof exploiting the transcendental law of homogeneity (in place of the standard part above).
In the context of Lawvere's approach to infinitesimals, let d x {\displaystyle dx} be a nilsquare infinitesimal. Then d u = u ′ d x {\displaystyle du=u'\ dx} and d v = v ′ d x {\displaystyle dv=v'\ dx} , so that d ( u v ) = ( u + d u ) ( v + d v ) − u v = u v + u ⋅ d v + v ⋅ d u + d u ⋅ d v − u v = u ⋅ d v + v ⋅ d u + d u ⋅ d v = u ⋅ d v + v ⋅ d u {\displaystyle {\begin{aligned}d(uv)&=(u+du)(v+dv)-uv\\&=uv+u\cdot dv+v\cdot du+du\cdot dv-uv\\&=u\cdot dv+v\cdot du+du\cdot dv\\&=u\cdot dv+v\cdot du\end{aligned}}} since d u d v = u ′ v ′ ( d x ) 2 = 0. {\displaystyle du\,dv=u'v'(dx)^{2}=0.} Dividing by d x {\displaystyle dx} then gives d ( u v ) d x = u d v d x + v d u d x {\displaystyle {\frac {d(uv)}{dx}}=u{\frac {dv}{dx}}+v{\frac {du}{dx}}} or ( u v ) ′ = u ⋅ v ′ + v ⋅ u ′ {\displaystyle (uv)'=u\cdot v'+v\cdot u'} .
Let h ( x ) = f ( x ) g ( x ) {\displaystyle h(x)=f(x)g(x)} . Taking the absolute value of each function and the natural log of both sides of the equation, ln | h ( x ) | = ln | f ( x ) g ( x ) | {\displaystyle \ln |h(x)|=\ln |f(x)g(x)|} Applying properties of the absolute value and logarithms, ln | h ( x ) | = ln | f ( x ) | + ln | g ( x ) | {\displaystyle \ln |h(x)|=\ln |f(x)|+\ln |g(x)|} Taking the logarithmic derivative of both sides and then solving for h ′ ( x ) {\displaystyle h'(x)} : h ′ ( x ) h ( x ) = f ′ ( x ) f ( x ) + g ′ ( x ) g ( x ) {\displaystyle {\frac {h'(x)}{h(x)}}={\frac {f'(x)}{f(x)}}+{\frac {g'(x)}{g(x)}}} Solving for h ′ ( x ) {\displaystyle h'(x)} and substituting back f ( x ) g ( x ) {\displaystyle f(x)g(x)} for h ( x ) {\displaystyle h(x)} gives: h ′ ( x ) = h ( x ) ( f ′ ( x ) f ( x ) + g ′ ( x ) g ( x ) ) = f ( x ) g ( x ) ( f ′ ( x ) f ( x ) + g ′ ( x ) g ( x ) ) = f ′ ( x ) g ( x ) + f ( x ) g ′ ( x ) . {\displaystyle {\begin{aligned}h'(x)&=h(x)\left({\frac {f'(x)}{f(x)}}+{\frac {g'(x)}{g(x)}}\right)\\&=f(x)g(x)\left({\frac {f'(x)}{f(x)}}+{\frac {g'(x)}{g(x)}}\right)\\&=f'(x)g(x)+f(x)g'(x).\end{aligned}}} Note: Taking the absolute value of the functions is necessary for the logarithmic differentiation of functions that may have negative values, as logarithms are only real-valued for positive arguments. This works because d d x ( ln | u | ) = u ′ u {\displaystyle {\tfrac {d}{dx}}(\ln |u|)={\tfrac {u'}{u}}} , which justifies taking the absolute value of the functions for logarithmic differentiation.
The product rule can be generalized to products of more than two factors. For example, for three factors we have d ( u v w ) d x = d u d x v w + u d v d x w + u v d w d x . {\displaystyle {\frac {d(uvw)}{dx}}={\frac {du}{dx}}vw+u{\frac {dv}{dx}}w+uv{\frac {dw}{dx}}.} For a collection of functions f 1 , … , f k {\displaystyle f_{1},\dots ,f_{k}} , we have d d x [ ∏ i = 1 k f i ( x ) ] = ∑ i = 1 k ( ( d d x f i ( x ) ) ∏ j = 1 , j ≠ i k f j ( x ) ) = ( ∏ i = 1 k f i ( x ) ) ( ∑ i = 1 k f i ′ ( x ) f i ( x ) ) . {\displaystyle {\frac {d}{dx}}\left[\prod _{i=1}^{k}f_{i}(x)\right]=\sum _{i=1}^{k}\left(\left({\frac {d}{dx}}f_{i}(x)\right)\prod _{j=1,j\neq i}^{k}f_{j}(x)\right)=\left(\prod _{i=1}^{k}f_{i}(x)\right)\left(\sum _{i=1}^{k}{\frac {f'_{i}(x)}{f_{i}(x)}}\right).}
The logarithmic derivative provides a simpler expression of the last form, as well as a direct proof that does not involve any recursion . The logarithmic derivative of a function f , denoted here Logder( f ) , is the derivative of the logarithm of the function. It follows that Logder ( f ) = f ′ f . {\displaystyle \operatorname {Logder} (f)={\frac {f'}{f}}.} Using that the logarithm of a product is the sum of the logarithms of the factors, the sum rule for derivatives gives immediately Logder ( f 1 ⋯ f k ) = ∑ i = 1 k Logder ( f i ) . {\displaystyle \operatorname {Logder} (f_{1}\cdots f_{k})=\sum _{i=1}^{k}\operatorname {Logder} (f_{i}).} The last above expression of the derivative of a product is obtained by multiplying both members of this equation by the product of the f i . {\displaystyle f_{i}.}
It can also be generalized to the general Leibniz rule for the n th derivative of a product of two factors, by symbolically expanding according to the binomial theorem : d n ( u v ) = ∑ k = 0 n ( n k ) ⋅ d ( n − k ) ( u ) ⋅ d ( k ) ( v ) . {\displaystyle d^{n}(uv)=\sum _{k=0}^{n}{n \choose k}\cdot d^{(n-k)}(u)\cdot d^{(k)}(v).}
Applied at a specific point x , the above formula gives: ( u v ) ( n ) ( x ) = ∑ k = 0 n ( n k ) ⋅ u ( n − k ) ( x ) ⋅ v ( k ) ( x ) . {\displaystyle (uv)^{(n)}(x)=\sum _{k=0}^{n}{n \choose k}\cdot u^{(n-k)}(x)\cdot v^{(k)}(x).}
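The general Leibniz rule can be verified symbolically for any particular n; the following SymPy sketch compares the n-th derivative of a product with its binomial expansion (the choice of u, v and n is arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
u, v, n = sp.sin(x), x**3 * sp.exp(x), 4

lhs = sp.diff(u * v, x, n)
rhs = sum(sp.binomial(n, k) * sp.diff(u, x, n - k) * sp.diff(v, x, k)
          for k in range(n + 1))

print(sp.simplify(lhs - rhs))   # 0
```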
Furthermore, for the n th derivative of an arbitrary number of factors, one has a similar formula with multinomial coefficients : ( ∏ i = 1 k f i ) ( n ) = ∑ j 1 + j 2 + ⋯ + j k = n ( n j 1 , j 2 , … , j k ) ∏ i = 1 k f i ( j i ) . {\displaystyle \left(\prod _{i=1}^{k}f_{i}\right)^{\!\!(n)}=\sum _{j_{1}+j_{2}+\cdots +j_{k}=n}{n \choose j_{1},j_{2},\ldots ,j_{k}}\prod _{i=1}^{k}f_{i}^{(j_{i})}.}
For partial derivatives , we have [ 5 ] ∂ n ∂ x 1 ⋯ ∂ x n ( u v ) = ∑ S ∂ | S | u ∏ i ∈ S ∂ x i ⋅ ∂ n − | S | v ∏ i ∉ S ∂ x i {\displaystyle {\partial ^{n} \over \partial x_{1}\,\cdots \,\partial x_{n}}(uv)=\sum _{S}{\partial ^{|S|}u \over \prod _{i\in S}\partial x_{i}}\cdot {\partial ^{n-|S|}v \over \prod _{i\not \in S}\partial x_{i}}} where the index S runs through all 2 n subsets of {1, ..., n } , and | S | is the cardinality of S . For example, when n = 3 , ∂ 3 ∂ x 1 ∂ x 2 ∂ x 3 ( u v ) = u ⋅ ∂ 3 v ∂ x 1 ∂ x 2 ∂ x 3 + ∂ u ∂ x 1 ⋅ ∂ 2 v ∂ x 2 ∂ x 3 + ∂ u ∂ x 2 ⋅ ∂ 2 v ∂ x 1 ∂ x 3 + ∂ u ∂ x 3 ⋅ ∂ 2 v ∂ x 1 ∂ x 2 + ∂ 2 u ∂ x 1 ∂ x 2 ⋅ ∂ v ∂ x 3 + ∂ 2 u ∂ x 1 ∂ x 3 ⋅ ∂ v ∂ x 2 + ∂ 2 u ∂ x 2 ∂ x 3 ⋅ ∂ v ∂ x 1 + ∂ 3 u ∂ x 1 ∂ x 2 ∂ x 3 ⋅ v . {\displaystyle {\begin{aligned}&{\partial ^{3} \over \partial x_{1}\,\partial x_{2}\,\partial x_{3}}(uv)\\[1ex]={}&u\cdot {\partial ^{3}v \over \partial x_{1}\,\partial x_{2}\,\partial x_{3}}+{\partial u \over \partial x_{1}}\cdot {\partial ^{2}v \over \partial x_{2}\,\partial x_{3}}+{\partial u \over \partial x_{2}}\cdot {\partial ^{2}v \over \partial x_{1}\,\partial x_{3}}+{\partial u \over \partial x_{3}}\cdot {\partial ^{2}v \over \partial x_{1}\,\partial x_{2}}\\[1ex]&+{\partial ^{2}u \over \partial x_{1}\,\partial x_{2}}\cdot {\partial v \over \partial x_{3}}+{\partial ^{2}u \over \partial x_{1}\,\partial x_{3}}\cdot {\partial v \over \partial x_{2}}+{\partial ^{2}u \over \partial x_{2}\,\partial x_{3}}\cdot {\partial v \over \partial x_{1}}+{\partial ^{3}u \over \partial x_{1}\,\partial x_{2}\,\partial x_{3}}\cdot v.\\[-3ex]&\end{aligned}}}
Suppose X , Y , and Z are Banach spaces (which includes Euclidean space ) and B : X × Y → Z is a continuous bilinear operator . Then B is differentiable, and its derivative at the point ( x , y ) in X × Y is the linear map D ( x , y ) B : X × Y → Z given by ( D ( x , y ) B ) ( u , v ) = B ( u , y ) + B ( x , v ) ∀ ( u , v ) ∈ X × Y . {\displaystyle (D_{\left(x,y\right)}\,B)\left(u,v\right)=B\left(u,y\right)+B\left(x,v\right)\qquad \forall (u,v)\in X\times Y.}
This result can be extended [ 6 ] to more general topological vector spaces.
The product rule extends to various product operations of vector functions on R n {\displaystyle \mathbb {R} ^{n}} : [ 7 ]
There are also analogues for other types of derivative: if f and g are scalar fields then there is a product rule with the gradient : ∇ ( f ⋅ g ) = ∇ f ⋅ g + f ⋅ ∇ g {\displaystyle \nabla (f\cdot g)=\nabla f\cdot g+f\cdot \nabla g}
Such a rule will hold for any continuous bilinear product operation. Let B : X × Y → Z be a continuous bilinear map between vector spaces, and let f and g be differentiable functions into X and Y , respectively. The only properties of multiplication used in the proof using the limit definition of derivative is that multiplication is continuous and bilinear. So for any continuous bilinear operation, H ( f , g ) ′ = H ( f ′ , g ) + H ( f , g ′ ) . {\displaystyle H(f,g)'=H(f',g)+H(f,g').} This is also a special case of the product rule for bilinear maps in Banach space .
In abstract algebra , the product rule is the defining property of a derivation . In this terminology, the product rule states that the derivative operator is a derivation on functions.
In differential geometry , a tangent vector to a manifold M at a point p may be defined abstractly as an operator on real-valued functions which behaves like a directional derivative at p : that is, a linear functional v which is a derivation, v ( f g ) = v ( f ) g ( p ) + f ( p ) v ( g ) . {\displaystyle v(fg)=v(f)\,g(p)+f(p)\,v(g).} Generalizing (and dualizing) the formulas of vector calculus to an n -dimensional manifold M, one may take differential forms of degrees k and l , denoted α ∈ Ω k ( M ) , β ∈ Ω ℓ ( M ) {\displaystyle \alpha \in \Omega ^{k}(M),\beta \in \Omega ^{\ell }(M)} , with the wedge or exterior product operation α ∧ β ∈ Ω k + ℓ ( M ) {\displaystyle \alpha \wedge \beta \in \Omega ^{k+\ell }(M)} , as well as the exterior derivative d : Ω m ( M ) → Ω m + 1 ( M ) {\displaystyle d:\Omega ^{m}(M)\to \Omega ^{m+1}(M)} . Then one has the graded Leibniz rule : d ( α ∧ β ) = d α ∧ β + ( − 1 ) k α ∧ d β . {\displaystyle d(\alpha \wedge \beta )=d\alpha \wedge \beta +(-1)^{k}\alpha \wedge d\beta .}
Among the applications of the product rule is a proof that d d x x n = n x n − 1 {\displaystyle {d \over dx}x^{n}=nx^{n-1}} when n is a positive integer (this rule is true even if n is not positive or is not an integer, but the proof of that must rely on other methods). The proof is by mathematical induction on the exponent n . If n = 0 then x n is constant and nx n − 1 = 0. The rule holds in that case because the derivative of a constant function is 0. If the rule holds for any particular exponent n , then for the next value, n + 1, we have d x n + 1 d x = d d x ( x n ⋅ x ) = x d d x x n + x n d d x x (the product rule is used here) = x ( n x n − 1 ) + x n ⋅ 1 (the induction hypothesis is used here) = ( n + 1 ) x n . {\displaystyle {\begin{aligned}{\frac {dx^{n+1}}{dx}}&{}={\frac {d}{dx}}\left(x^{n}\cdot x\right)\\[1ex]&{}=x{\frac {d}{dx}}x^{n}+x^{n}{\frac {d}{dx}}x&{\text{(the product rule is used here)}}\\[1ex]&{}=x\left(nx^{n-1}\right)+x^{n}\cdot 1&{\text{(the induction hypothesis is used here)}}\\[1ex]&{}=\left(n+1\right)x^{n}.\end{aligned}}} Therefore, if the proposition is true for n , it is true also for n + 1, and therefore for all natural n . | https://en.wikipedia.org/wiki/Product_rule |
Within supply chain management and manufacturing , production control is the activity of monitoring and controlling any particular production or operation. Production control is often run from a specific control room or operations room. With inventory control and quality control , production control is one of the key functions of operations management . [ 1 ]
Production control is the activity of monitoring and controlling a large physical facility or physically dispersed service. It is a "set of actions and decision taken during production to regulate output and obtain reasonable assurance that the specification will be met." [ 2 ] The American Production and Inventory Control Society , nowadays APICS, defined production control in 1959 as:
Production planning and control in larger factories is often run from a production planning department run by production controllers and a production control manager. Production monitoring and control of larger operations is often run from a central space, called a control room or operations room or operations control center (OCC).
The emerging area of Project Production Management (PPM), based on viewing project activities as a production system, adopts the same notion of production control to regulate the behavior of a production system; in this case the production system is a capital project rather than a physical facility or a physically dispersed service.
Production control is to be contrasted with project controls. As explained, [ 4 ] project controls have developed to become centralized functions to track project progress and identify deviations from plan and to forecast future progress, using metrics rooted in accounting principles.
One type of production control is the control of manufacturing operations.
Management of real-time operations in specific fields.
Communist countries had a central production control institute, where the agricultural and industrial production for the whole nation was planned and controlled.
In Customer Care environments production control is known as Workforce Management (WFM). Centralized Workforce Management teams are often called Command Center, Mission Control or WFM Shared Production Centers.
Production control is just one of multiple types of control in organizations. The most common other types are: | https://en.wikipedia.org/wiki/Production_control
In film and television , a production designer is the individual responsible for the overall aesthetic of the story. The production design gives the viewers a sense of the time period, the plot location, and character actions and feelings. Working directly with the director , cinematographer , and producer , production designers have a key creative role in the creation of motion pictures and television. The term production designer was coined by William Cameron Menzies while he was working on the film Gone with the Wind . [ 1 ] Production designers are commonly confused with art directors [ 2 ] as the roles have similar responsibilities. Production designers decide the visual concept and deal with the many and varied logistics of filmmaking, including schedules, budgets, and staffing. Art directors manage the process of making the visuals, which is done by concept artists , graphic designers , set designers , costume designers , lighting designers , etc. [ 3 ] The production designer and the art director lead a team of individuals to assist with the visual component of the film. Depending on the size of the production, the rest of the team can include runners, graphic designers , drafts people, props makers, and set builders. Production designers create a framework for the visual aesthetic of a project and work in partnership and collaboration with the set decorator and the set decorating department to execute the desired look. [ 4 ]
Production design plays an essential role in storytelling, for instance, in the movie Titanic , when the characters Jack and Rose are in the cold water after the ship sank, we know that they are cold because of the setting: it is nighttime and there is ice on their hair. A more specific example is The Wizard of Oz , in which we know the story takes place on a farm because of the bale of hay Dorothy leans on and the animals around, as well as the typical wooden fence. In the scene in which Dorothy's dog is taken away, we know that it happens in her aunt and uncle's house, which adds more tension because her beloved friend, Toto is not killed, lost or kidnapped on the street, but is forced to leave by an outsider, Ms. Gulch, who enters Dorothy's private and safe zone (her home). Jane Barnwell states that the place the characters exist in gives information about them and enhances the fluency of the narrative (175). [ 5 ] Imagine Dorothy's home was dirty and everyone in her house were dressed untidily, the viewer would have supported the outsider instead, perhaps thinking that the outsider in a way, rescued the dog from an unhealthy environment. Additionally, the characters' clothing, especially that of Ms. Gulch, makes the description "own half the county" more reliable in portraying Ms. Gulch, and also supports the reason why Dorothy cannot rebel against Ms. Gulch by making the dog stay. However, this does not mean that the setting or costume should be extremely detailed and cluttered with information. The goal is to not let the viewer notice these elements, which, however, is how production design works. Jon Boorstin states in his book, Making Movies Work Thinking Like a Filmmaker , that the background, the camera motion or even the sound effect is considered well-done if the viewer does not notice their appearance. [ 6 ]
In the United States and British Columbia , production designers are represented by several local unions of the International Alliance of Theatrical Stage Employees (IATSE). Local 800, the Art Directors Guild , represents production designers in the U.S., with the exception of New York City and its vicinity. [ 7 ] Those members are represented by Local 829, the United Scenic Artists . In the rest of Canada, production designers are represented by the Directors Guild of Canada . In the United Kingdom, members of the art department are represented by the non-union British Film Designers Guild .
The production design credit must be requested by a film's producer, prior to completion of photography, and submitted to the Art Directors Guild Board of Directors for the credit approval. | https://en.wikipedia.org/wiki/Production_designer |
Production equipment control involves the production equipment that resides on the shop floor of a manufacturing company; its purpose is to produce goods of a wanted quality when provided with production resources of a required quality. In modern production lines the production equipment is fully automated using industrial control methods and involves limited unskilled labour participation. Modern production equipment consists of mechatronic modules that are integrated according to a control architecture . The most widely known architectures involve hierarchy , polyarchy , heterarchy and hybrid forms. The methods for achieving a technical effect are described by control algorithms , which may or may not utilize formal methods in their design. | https://en.wikipedia.org/wiki/Production_equipment_control
In operations management and industrial engineering , production flow analysis refers to methods which share the following characteristics:
Methods differ on how they group together machines with products. These play an important role in designing manufacturing cells .
Given a binary product-machines n-by-m matrix b i p {\displaystyle b_{ip}} , rank order clustering [ 1 ] is an algorithm characterized by the following steps: read each row as a binary number, with column p carrying weight 2 m − p {\displaystyle 2^{m-p}} , and rank the rows in decreasing order of the resulting values; do the same for the columns, with row i carrying weight 2 n − i {\displaystyle 2^{n-i}} ; repeat the row and column rankings until neither ordering changes.
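A straightforward implementation of these steps is sketched below (Python/NumPy; the example matrix is invented). Rows are re-ranked by the decimal equivalent of their binary pattern, then columns, until the ordering stops changing, which tends to gather related products and machines into blocks along the diagonal.

```python
import numpy as np

def rank_order_clustering(b):
    """Iteratively sort rows and columns of a binary matrix by binary weight."""
    b = np.asarray(b).copy()
    while True:
        changed = False
        # weight and re-rank the rows (column p has weight 2**(m-p))
        m = b.shape[1]
        row_w = b @ (2 ** np.arange(m)[::-1])
        order = np.argsort(-row_w, kind="stable")
        if not np.array_equal(order, np.arange(b.shape[0])):
            b, changed = b[order], True
        # the same for the columns (row i has weight 2**(n-i))
        n = b.shape[0]
        col_w = (2 ** np.arange(n)[::-1]) @ b
        order = np.argsort(-col_w, kind="stable")
        if not np.array_equal(order, np.arange(m)):
            b, changed = b[:, order], True
        # stop when neither ordering changes
        if not changed:
            return b

machines_by_products = np.array([[1, 0, 0, 1],
                                 [0, 1, 1, 0],
                                 [1, 0, 0, 1],
                                 [0, 1, 0, 0]])
print(rank_order_clustering(machines_by_products))
```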
Given a binary product-machines n-by-m matrix, the algorithm proceeds [ 2 ] by the following steps:
Unless this procedure is stopped the algorithm eventually will put all machines in one single group. | https://en.wikipedia.org/wiki/Production_flow_analysis |
Production leveling , also known as production smoothing or – by its Japanese original term – heijunka ( 平準化 ) , [ 1 ] is a technique for reducing the mura (unevenness) which in turn reduces muda (waste). It was vital to the development of production efficiency in the Toyota Production System and lean manufacturing . The goal is to produce intermediate goods at a constant rate so that further processing may also be carried out at a constant and predictable rate.
Where demand is constant, production leveling is easy, but where customer demand fluctuates, two approaches have been adopted: 1) demand leveling and 2) production leveling through flexible production.
To prevent fluctuations in production, even in outside affiliates, it is important to minimize fluctuation in the final assembly line. Toyota's final assembly line never assembles the same automobile model in a batch. Instead, they level production by assembling a mix of models in each batch [ 2 ] and the batches are made as small as possible.
Production leveling can refer to leveling by volume, or leveling by product type or mix, although the two are closely related.
If for a family of products that use the same production process there is a demand that varies between 800 and 1,200 units then it might seem a good idea to produce the amount ordered. Toyota's view is that production systems that vary in the required output suffer from mura and muri with capacity being 'forced' in some periods. So their approach is to manufacture at the long-term average demand and carry an inventory proportional to the variability of demand, stability of the production process and the frequency of shipments. So for our case of 800–1,200 units, if the production process were 100% reliable and the shipments once a week, then the production would be with minimum standard inventory of 200 at the start of the week and 1,200 at the point of shipment. The advantage of carrying this inventory is that it can smooth production throughout the plant and therefore reduce process inventories and simplify operations which reduces costs.
Most value streams produce a mix of products and therefore face a choice of production mix and sequence. It is here that discussions of economic order quantities take place, which have been dominated by changeover times and the inventory these require. Toyota's approach resulted in a different discussion, in which it reduced the time and cost of changeovers so that ever smaller batches were not prohibitive and lost production time and quality costs were not significant. This meant that the demand for components could be leveled for the upstream sub-processes, and therefore lead time and total inventories reduced along the entire value stream. To simplify leveling of products with different demand levels, a related visual scheduling board known as a heijunka box is often used to achieve these heijunka-style efficiencies. Other production leveling techniques based on this thinking have also been developed. Once leveling by product is achieved, there is one more leveling phase, that of "Just in Sequence", where leveling occurs at the lowest level of product production.
The use of production leveling as well as broader lean production techniques helped Toyota massively reduce vehicle production times as well as inventory levels during the 1980s.
Even Toyota has not reached the final stage in this journey, single-piece flow, across all of its processes; indeed, it recommends following its journey rather than trying to jump straight into an intermediate stage. The reason Toyota advocates this is that each production stage is accompanied by adjustments and adaptations to the services that support production; if those services are not given these adaptation steps, major issues can arise.
Demand leveling is the deliberate influencing of demand itself, or of the demand processes, to deliver a more predictable pattern of customer demand. Some of this influencing is by manipulating the product offering, some by influencing the ordering process and some by revealing the variability of ordering patterns induced by demand amplification. Demand leveling does not include influencing activities designed to clear existing stock.
Historically demand leveling evolved as subset of production levelling and has been approached in a variety of ways:
If it is accepted that a large part of demand variability in high volume products can be substantially caused by sales and ordering process artifacts then analysis and leveling can be attempted.
The use of long-delay supply chains to reduce manufacturing costs often means that production orders are placed long before customer demand can be realistically estimated. By the time actual demand volumes are known, demand leveling is irrelevant, since the issue has switched to disposing, at the best possible price, of products that have already been made and possibly paid for. Demand leveling has only proven possible where build times have been made relatively short and production has been made relatively reliable and flexible. Examples are fast airborne supply chains (e.g. the Apple iPod), direct-to-customer selling through web sites allowing late customisation (e.g. NIKEiD custom shoes) and local manufacture (e.g. Timbuk2 custom courier bags).
Where actual build-and-delivery times can be brought within the same scale as customer time horizons, efforts to modify impulse buying and make it somewhat planned can be successful. Reliable, flexible manufacturing then means that low stock levels (if any) do not interfere with customer satisfaction and that incentives to sell whatever has been produced are eliminated.
Where demand follows a predictable pattern, e.g. flat, then regular deliveries of constant amounts can be agreed with variances in actual demand ignored unless it exceeds some agreed trigger level. Where this cannot be agreed then it can be simulated and the benefits gained through frequent deliveries and a market location .
The predictable pattern does not have to be flat and may, for example, be an annual pattern with higher volumes at particular periods. Here again the deliveries can be agreed to follow a simplified but similar pattern, perhaps one delivery volume for six months of the year and another for the other six months. | https://en.wikipedia.org/wiki/Production_leveling |
A production packer [ 1 ] is a standard component of the completion hardware of oil or gas wells used to provide a seal between the outside of the production tubing and the inside of the casing, liner, or wellbore wall.
Based on their primary use, packers can be divided into two main categories: production packers and service packers. Production packers are those that remain in the well during well production. Service packers are used temporarily during well service activities such as cement squeezing, acidizing, fracturing and well testing.
It is usually run in close to the bottom end of the production tubing and set at a point above the top perforations or sand screens. In wells with multiple reservoir zones, packers are used to isolate the perforations for each zone. In these situations, a sliding sleeve would be used to select which zone to produce. Packers may also be used to protect the casing from pressure and produced fluids, isolate sections of corroded casing, casing leaks or squeezed perforations, and isolate or temporarily abandon producing zones. In water-flooding developments in which water is injected into the reservoir, packers are used in injection wells to isolate the zones into which the water must be injected.
There are occasions in which running a packer may not be desirable. High volume wells, for example, that are produced both up the tubing and annulus will not include a packer. Rod pumped wells are not normally run with packers because the associated gas is produced up the annulus. In general, well completions may not incorporate a packer when the annular space is used as a production conduit.
A production packer is designed to grip and seal against the casing ID. Gripping is accomplished with metal wedges called "slips." These components have sharpened, carburized teeth that dig into the metal of the casing. Sealing is accomplished with large, cylindrical rubber elements. In situations where the sealed pressure is very high (above 5,000 psi), metal rings are used on either side of the elements to prevent the rubber from extruding.
A packer is run in the casing on production tubing or wireline . Once the desired depth is reached, the slips and element must be expanded out to contact the casing. Axial loads are applied to push the slips up a ramp and to compress the element, causing it to expand outward. The axial loads are applied either hydraulically, mechanically, or with a slow burning chemical charge.
Most packers are "permanent" and require milling in order to remove them from the casing. The main advantages of permanent packers are lower cost and greater sealing and gripping capabilities.
In situations where a packer must be easily removed from the well, such as secondary recoveries, re-completions, or to change out the production tubing, a retrievable packer must be used. To unset the tool, either a metal ring is sheared or a sleeve is shifted to disengage connecting components. Retrievable packers have a more complicated design and generally lower sealing and gripping capabilities, but after removal and subsequent servicing, they can be reused.
There are three types of packers: mechanical-set, hydraulic-set and permanent. All packers fall into one, or a combination, of these types.
Mechanical-set packers are set by some form of tubing movement, usually rotation or upward/downward motion.
Some are weight-set: the tubing weight is used to compress and expand the sealing element, and a simple upward pull on the string releases the packer. Weight-set packers are best used in shallow, low-pressure wells that are straight, and they are not designed to withstand pressure differences unless a hydraulic hold-down is incorporated.
Tension-set packers are set by pulling tension on the tubing; slacking off releases the packer. They are good for shallow wells with moderate pressure differences, where pressure from below helps to increase the setting force on the packer; they are typically used in stimulation wells.
Rotation-set packers – tubing rotation is used to set the packer and mechanically lock it in; a left-hand turn engages it and a right-hand turn retrieves it.
Hydraulic-set packers use fluid pressure to drive the cone behind the slips. Once set they remain set by the use of either entrapped pressure or a mechanical lock. They are released by picking up the tubing. They are good for use in deviated or crooked holes where tubing movement is restricted or unwanted. The tubing can be hung in neutral tension.
Inflatable packers [ 2 ] - use fluid pressure to inflate a long cylindrical tube of reinforced rubber to set the packer. Frequently used for open hole testing in exploration wells and for cement assurance in production wells. Also used in wells where the packer must pass through a restriction and then set at a much larger diameter in casing or open holes. Many variations for specific applications are available including those capable of withstanding high pressure differentials.
Permanent packers are run and set on electric wireline, drill pipe or tubing. Opposed slips are positioned to lock the packer in compression; once set, it resists motion in either direction. With wireline, an electric current detonates an explosive charge to set the packer, and a release stud then frees the running assembly from the packer. With tubing, setting is done by rotation, a pull, or a combination of both. Permanent packers are good in wells that have high pressure differentials or large tubing load variations; they can be set precisely and can be set the deepest.
Cement packers – In this case the tubing is cemented in place inside the casing or open hole. This type of packer is cheap.
Temperature and pressure can affect how the tubing and the packer behave, since they change the expansion of the tubing and the packer at different rates. If the packer allows free motion, the tubing can elongate or shorten; if not, tensile and compressive forces can develop within the string. [3] | https://en.wikipedia.org/wiki/Production_packer |
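As a rough illustration of the free-motion case described above, steel tubing lengthens with temperature as approximately ΔL = α·L·ΔT; the coefficient and figures below are generic engineering assumptions, not values from the cited reference:

```python
def tubing_thermal_elongation(length_ft, delta_t_degf, alpha_per_degf=6.9e-6):
    """Approximate length change (ft) of a free-hanging steel tubing string
    for a given temperature change (degF), using a typical thermal expansion
    coefficient for steel. If the packer prevents this motion, the same
    strain appears instead as axial tension or compression in the string."""
    return alpha_per_degf * length_ft * delta_t_degf

# e.g. 10,000 ft of tubing warming by 100 degF grows roughly 7 ft if free to move
print(tubing_thermal_elongation(10_000, 100))  # ~6.9
```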
Productive aging refers to the activities which older people engage in on a daily basis. Older adults face both opportunities and constraints related to the productive aging process, and the community and society need to develop more options for older adults to choose how to remain engaged in the community and contribute to others. Policy changes and resource commitments are important for promoting productive aging. One example of productive aging is retirement, which moves older adults from paid forms of productivity to non-paid activities. Many activities shape these opportunities and constraints, including retirement, employment, economic well-being, leisure, religious participation and spirituality, membership in community associations and volunteerism, education, and political action. Older adults will find many opportunities to engage in activities which contribute to society or to pursue personal creative activities. [1] | https://en.wikipedia.org/wiki/Productive_aging |
In linear algebra, a square nonnegative matrix A {\displaystyle A} of order n {\displaystyle n} is said to be productive, or to be a Leontief matrix, if there exists an n × 1 {\displaystyle n\times 1} nonnegative column matrix P {\displaystyle P} such that P − A P {\displaystyle P-AP} is a positive matrix.
The concept of productive matrix was developed by the economist Wassily Leontief ( Nobel Prize in Economics in 1973) in order to model and analyze the relations between the different sectors of an economy. [ 1 ] The interdependency linkages between the latter can be examined by the input-output model with empirical data.
The matrix A ∈ M n , n ( R ) {\displaystyle A\in \mathrm {M} _{n,n}(\mathbb {R} )} is productive if and only if A ⩾ 0 {\displaystyle A\geqslant 0} and ∃ P ∈ M n , 1 ( R ) , P > 0 {\displaystyle \exists P\in \mathrm {M} _{n,1}(\mathbb {R} ),P>0} such that P − A P > 0 {\displaystyle P-AP>0} .
Here M r , c ( R ) {\displaystyle \mathrm {M} _{r,c}(\mathbb {R} )} denotes the set of r × c matrices of real numbers , whereas > 0 {\displaystyle >0} and ⩾ 0 {\displaystyle \geqslant 0} indicates a positive and a nonnegative matrix , respectively.
The following properties are proven e.g. in the textbook (Michel 1984). [ 2 ]
Theorem A nonnegative matrix A ∈ M n , n ( R ) {\displaystyle A\in \mathrm {M} _{n,n}(\mathbb {R} )} is productive if and only if I n − A {\displaystyle I_{n}-A} is invertible with a nonnegative inverse, where I n {\displaystyle I_{n}} denotes the n × n {\displaystyle n\times n} identity matrix .
Proof
"If" :
"Only if" :
Proposition The transpose of a productive matrix is productive.
Proof
In the matrix approach to the input-output model, a nonnegative consumption matrix is productive exactly when the economy it describes is viable, that is, able to meet any nonnegative demand vector. | https://en.wikipedia.org/wiki/Productive_matrix |
In 2007, productive nanosystems were defined as functional nanoscale systems that make atomically -specified structures and devices under programmatic control, i.e., performing atomically precise manufacturing. [ 1 ] As of 2015, such devices were only hypothetical, and productive nanosystems represented a more advanced approach among several to perform Atomically Precise Manufacturing. A workshop on Integrated Nanosystems for Atomically Precise Manufacturing was held by the Department of Energy in 2015. [ 2 ]
Present-day technologies are limited in various ways. Large atomically precise structures (that is, virtually defect-free) do not exist. Complex 3D nanoscale structures exist in the form of folded linear molecules such as DNA origami and proteins . As of 2018, it was also possible to build very small atomically precise structures using scanning probe microscopy to construct molecules such as FeCO [ 3 ] and Triangulene , or to perform hydrogen depassivation lithography. [ 4 ] But it is not yet possible to combine components in a systematic way to build larger, more complex systems.
Principles of physics and examples from nature both suggest that it will be possible to extend atomically precise fabrication to more complex products of larger size, involving a wider range of materials. An example of progress in this direction would be Christian Schafmeister's work on bis-peptides . [ 5 ]
In 2005, Mihail Roco , one of the architects of the USA's National Nanotechnology Initiative, proposed four states of nanotechnology that seem to parallel the technical progress of the Industrial Revolution, of which productive nanosystems is the most advanced. [ 6 ]
1. Passive nanostructures - nanoparticles and nanotubes that provide added strength, electrical and thermal conductivity, toughness, hydrophilic/phobic and/or other properties that emerge from their nanoscale structure.
2. Active nanodevices - nanostructures that change states in order to transform energy, information, and/or to perform useful functions. There is some debate about whether state-of-the-art integrated circuits qualify here, since they operate despite emergent nanoscale properties, not because of them. Therefore, the argument goes, they do not exhibit "novel" nanoscale properties, even though the devices themselves are between one and a hundred nanometers.
3. Complex nanomachines - the assembly of different nanodevices into a nanosystem to accomplish a complex function. Some would argue that Zettl 's machines fit in this category; others argue that modern microprocessors and FPGAs also fit.
4. Systems of nanosystems/Productive nanosystems - these will be complex nanosystems that produce atomically precise parts for other nanosystems, not necessarily using novel nanoscale-emergent properties, but well-understood fundamentals of manufacturing. Because of the discrete (i.e. atomic) nature of matter and the possibility of exponential growth, this stage is seen as the basis of another industrial revolution. There are currently many different approaches to building productive nanosystems: including top-down approaches like Patterned atomic layer epitaxy [ 7 ] and Diamondoid Mechanosynthesis . [ 8 ] There are also bottom-up approaches like DNA Origami and Bis-peptide Synthesis. [ 9 ]
A fifth step, info/bio/nano convergence, was added later by Roco. This is the convergence of the three most revolutionary technologies, since every living thing is made up of atoms and information.
Clanking replicator
Ribosome
Synthetic biology | https://en.wikipedia.org/wiki/Productive_nanosystems |
In ecology, the term productivity refers to the rate of generation of biomass in an ecosystem, usually expressed in units of mass per unit area (or volume) per unit of time, such as grams per square metre per day (g m−2 d−1). The unit of mass can relate to dry matter or to the mass of generated carbon. The productivity of autotrophs, such as plants, is called primary productivity, while the productivity of heterotrophs, such as animals, is called secondary productivity. [1]
The productivity of an ecosystem is influenced by a wide range of factors, including nutrient availability, temperature, and water availability. Understanding ecological productivity is vital because it provides insights into how ecosystems function and the extent to which they can support life. [ 2 ]
Primary production is the synthesis of organic material from inorganic molecules. Primary production in most ecosystems is dominated by the process of photosynthesis, in which organisms synthesize organic molecules from sunlight, H2O, and CO2. [3] Aquatic primary productivity refers to the production of organic matter, such as phytoplankton, aquatic plants, and algae, in aquatic ecosystems, which include oceans, lakes, and rivers. Terrestrial primary productivity refers to the organic matter production that takes place in terrestrial ecosystems such as forests, grasslands, and wetlands.
Primary production is divided into net primary production (NPP) and gross primary production (GPP). Gross primary production measures all carbon assimilated into organic molecules by primary producers. [4] Net primary production also measures the carbon assimilated into organic molecules by primary producers, but excludes the organic molecules that are then broken down again by these organisms for biological processes such as cellular respiration. [5] The formula used to calculate NPP is: net primary production = gross primary production − respiration.
Organisms that rely on light energy to fix carbon , and thus participate in primary production, are referred to as photoautotrophs . [ 6 ]
Photoautotrophs exist across the tree of life. Many bacterial taxa are known to be photoautotrophic, such as cyanobacteria [7] and some Pseudomonadota (formerly proteobacteria). [8] Eukaryotic organisms gained the ability to photosynthesize through the development of plastids derived from endosymbiotic relationships. [9] Archaeplastida, which includes red algae, green algae, and plants, evolved chloroplasts originating from an ancient endosymbiotic relationship with a cyanobacterium. [10] The productivity of plants, as photoautotrophs, also depends on factors such as salinity and abiotic stressors in the surrounding environment. [11] The remaining eukaryotic photoautotrophic organisms are within the SAR clade (comprising Stramenopila, Alveolata, and Rhizaria). Organisms in the SAR clade that developed plastids did so through secondary or tertiary endosymbiotic relationships with green algae and/or red algae. [12] The SAR clade includes many aquatic and marine primary producers, such as kelp, diatoms, and dinoflagellates. [12]
The other process of primary production is lithoautotrophy . Lithoautotrophs use reduced chemical compounds such as hydrogen gas , hydrogen sulfide , methane , or ferrous ion to fix carbon and participate in primary production. Lithoautotrophic organisms are prokaryotic and are represented by members of both the bacterial and archaeal domains. [ 13 ] Lithoautotrophy is the only form of primary production possible in ecosystems without light such as ground-water ecosystems, [ 14 ] hydrothermal vent ecosystems, [ 15 ] soil ecosystems , [ 16 ] and cave ecosystems. [ 17 ]
Secondary production is the generation of biomass of heterotrophic (consumer) organisms in a system. This is driven by the transfer of organic material between trophic levels , and represents the quantity of new tissue created through the use of assimilated food. Secondary production is sometimes defined to only include consumption of primary producers by herbivorous consumers [ 18 ] (with tertiary production referring to carnivorous consumers), [ 19 ] but is more commonly defined to include all biomass generation by heterotrophs. [ 1 ]
Organisms responsible for secondary production include animals, protists , fungi and many bacteria. [ citation needed ]
Secondary production can be estimated through a number of different methods including increment summation, removal summation, the instantaneous growth method and the Allen curve method. [ 20 ] The choice between these methods will depend on the assumptions of each and the ecosystem under study. For instance, whether cohorts should be distinguished, whether linear mortality can be assumed and whether population growth is exponential. [ citation needed ]
Net ecosystem production is defined as the difference between gross primary production (GPP) and ecosystem respiration. [ 21 ] The formula to calculate net ecosystem production is NEP = GPP - respiration (by autotrophs) - respiration (by heterotrophs). [ 22 ] The key difference between NPP and NEP is that NPP focuses primarily on autotrophic production, whereas NEP incorporates the contributions of other aspects of the ecosystem to the total carbon budget. [ 23 ]
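The two budgets differ only in which respiration terms are subtracted; a trivial sketch with invented flux values (g C m−2 yr−1):

```python
def npp(gpp, autotroph_resp):
    """Net primary production: carbon fixed minus the producers' own respiration."""
    return gpp - autotroph_resp

def nep(gpp, autotroph_resp, heterotroph_resp):
    """Net ecosystem production: additionally subtracts heterotroph respiration."""
    return gpp - autotroph_resp - heterotroph_resp

# Illustrative values only: GPP 2000, plants respire 1100, heterotrophs 700
print(npp(2000, 1100))        # 900
print(nep(2000, 1100, 700))   # 200
```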
Following is the list of ecosystems in order of decreasing productivity. [ citation needed ]
The connection between plant productivity and biodiversity is a significant topic in ecology, though it has been controversial for decades. Both productivity and species diversity are constrained by other variables such as climate, ecosystem type, and land use intensity. [24] Some research on the correlation between plant diversity and ecosystem functioning finds that productivity increases as species diversity increases. [25] One reasoning for this is that the likelihood of including a highly productive species increases with the number of species initially present in an ecosystem. [25] [26]
Other researchers believe that the relationship between species diversity and productivity is unimodal within an ecosystem. [ 27 ] A 1999 study on grassland ecosystems in Europe, for example, found that increasing species diversity initially increased productivity but gradually leveled off at intermediate levels of diversity. [ 28 ] More recently, a meta-analysis of 44 studies from various ecosystem types observed that the interaction between diversity and production was unimodal in all but one study. [ 29 ]
Anthropogenic activities (human activities) have impacted the productivity and biomass of several ecosystems. Examples of these activities include habitat modification, freshwater consumption, an increase in nutrients due to fertilizers, and many others. [ 30 ] Increased nutrients can stimulate an algal bloom in waterbodies, increasing primary production but making the ecosystem less stable. [ 31 ] This would raise secondary production and have a trophic cascade effect across the food chain, ultimately increasing overall ecosystem productivity. [ 32 ] In lakes, these human impacts can "mask" the effects of climate change . [ 33 ] Algal biomass is causally related to climate in some lakes, with temporary or long-term shifts in productivity ( regime shifts ). [ 33 ] | https://en.wikipedia.org/wiki/Productivity_(ecology) |
The productivity paradox refers to the slowdown in productivity growth in the United States in the 1970s and 1980s despite rapid development in the field of information technology (IT) over the same period. The term was coined by Erik Brynjolfsson in a 1993 paper ("The Productivity Paradox of IT") [1], inspired by a quip by Nobel laureate Robert Solow: "You can see the computer age everywhere but in the productivity statistics." [2] For this reason, it is sometimes also referred to as the Solow paradox.
The productivity paradox inspired many research efforts at explaining the slowdown, only for the paradox to disappear with renewed productivity growth in the developed countries in the 1990s. However, issues raised by those research efforts remain important in the study of productivity growth in general, and became important again when productivity growth slowed around the world again from the 2000s to the present day. Thus the term "productivity paradox" can also refer to the more general disconnect between powerful computer technologies and weak productivity growth. [ 3 ]
The 1970s to 1980s productivity paradox has been defined as a perceived "discrepancy between measures of investment in information technology and measures of output at the national level." [4] Brynjolfsson documented that productivity growth slowed down at the level of the whole U.S. economy, and often within individual sectors that had invested heavily in IT, despite dramatic advances in computer power and increasing investment in IT. [1] Similar trends were seen in many other nations. [5] While the computing capacity of the U.S. increased a hundredfold in the 1970s and 1980s, [6] labor productivity growth slowed from over 3% in the 1960s to roughly 1% in the 1980s. This perceived paradox was popularized in the media by analysts such as Stephen Roach and later Paul Strassmann.
Many observers disagree that any meaningful "productivity paradox" exists, and others, while acknowledging the disconnect between IT capacity and spending, view it less as a paradox than as a series of unwarranted assumptions about the impact of technology on productivity. In the latter view, the disconnect is emblematic of the need to understand, and to do a better job of deploying, the technology that becomes available to us, rather than an arcane paradox that by its nature is difficult to unravel.
Some point to historical parallels with the steam engine and with electricity , where the dividends of a productivity-enhancing disruptive technology were reaped only slowly, with an initial lag, over the course of decades, due to the time required for the technologies to diffuse into common use, and due to the time required to reorganize around and master efficient use of the new technology. [ 7 ] [ 8 ] As with previous technologies, an extremely large number of initial cutting-edge investments in IT were counterproductive and over-optimistic. [ 9 ] Some modest IT-based gains may have been difficult to detect amid the apparent overall slowing of productivity growth, which is generally attributed to one or more of a variety of non-IT factors, such as oil shocks, increased regulation or other cultural changes, a hypothetical decrease in labor quality, a hypothetical exhaustion or slowdown in non-IT innovation, and/or a coincidence of sector-specific problems. [ 10 ]
This phenomenon inspired a number of hypothesized explanations of the paradox.
The mismeasurement hypotheses of the productivity paradox center on the idea that real output estimates of the period overstate inflation and understate productivity, because they do not take into account quality improvements of IT goods and goods in general. The US government measures productivity by comparing real output measurements from period to period, dividing the nominal output measurements of each period into an inflation component and a real output component. The US government's calculations of real GDP do not measure inflation directly; during the 1970s and 1980s these calculations estimated inflation from the observed change in total spending and change in total units consumed for goods and services over time. This accurately represents inflation if the goods and services in the output measurements remain relatively the same from period to period; but if goods and services improve from period to period, the change in spending will characterize consumer spending on quality improvements as inflation, which overstates inflation and understates productivity growth. Later calculations of GDP partly compensate for this problem using hedonic regression methods, which estimate that the true price of mainframe computers alone may have declined by more than 20% per year from 1950 to the 1980s. These estimated implicit price decreases indicate the scale of productivity growth missing from the output measurements. These measurement issues, as well as measurement issues with new products, continue to affect output and productivity measurement today. [11] [1]
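A stylized illustration of the bias, with invented numbers: when part of measured unit-price inflation is actually payment for quality improvements, an unadjusted deflator understates real growth:

```python
def real_growth(nominal_growth, inflation):
    """Approximate real output growth as nominal growth minus inflation."""
    return nominal_growth - inflation

nominal = 0.07             # growth of total spending: 7% (assumed)
measured_inflation = 0.05  # unit-price inflation, ignoring quality change (assumed)
quality_gain = 0.03        # assumed value of per-unit quality improvements

print(real_growth(nominal, measured_inflation))                 # 0.02 as measured
print(real_growth(nominal, measured_inflation - quality_gain))  # 0.05 after adjustment
```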
The redistribution and dissipation of profits hypotheses rely on the idea that firms might make IT investments that are productive for the firm, by capturing more of the wealth available in their industry, but that do not create more wealth in that industry. Examples of such IT investments might be market research, marketing and advertising investments. These investments help firms compete away market share from firms that have invested less, while not improving the total output of the industry as a whole. [1]
The mismanagement of IT hypotheses suggest that IT investments are really not productive at a firm level, but that decision-makers make the investments nevertheless. These hypotheses suggest that firm-level decision-makers invest in IT regardless of the cost and productivity benefits of the investments, because of the difficulty in quantifying IT productivity gains. [1]
Other economists have made a more controversial charge against the utility of computers: that they pale into insignificance as a source of productivity advantage when compared to the Industrial Revolution, electrification, infrastructure (canals and waterways, railroads, the highway system), Fordist mass production and the replacement of human and animal power with machines. [12] High productivity growth occurred from the last decades of the 19th century until 1973, with a peak from 1929 to 1973, and then declined to levels of the early 19th century. [13] [14]
However, the hypothesis that IT was fundamentally unproductive weakened in the early 1990s, as total factor productivity growth in the United States accelerated. From 2000 through the most recent data in 2022, the information technology industry was among those to see the fastest productivity growth. [ 15 ]
Gordon J. Bjork points out that manufacturing productivity gains continued, although at a lower rate than in decades past; however, the cost reductions in manufacturing shrank the sector's size. The services and government sectors, where productivity growth is very low, gained in share, dragging down the overall productivity number. Because government services are priced at cost with no value added, government productivity growth is near zero as an artifact of the way it is measured. Bjork also points out that manufacturing uses more capital per unit of output than government or services. [16]
The "lags due to learning and adjustment" (lags) hypothesis explains the productivity paradox as the idea that output and productivity gains from investment in IT materializes well after the investment takes place, so any output and productivity observations of the 1970s and 1980s will not observe those gains. Surveys of executives as well as econometric studies indicated that it might take between two and five years for IT investments to have any impact on organizations that made IT investments. The lags in IT benefits might also slow down IT investments, as observations of short-term marginal costs and benefits of IT investments might seem irrational. [ 1 ] IT investments might also require complementary capital investments to be made to be fully productive. [ 7 ] Because these investments are costly, but do not create measurably benefits until later, they can create a " productivity J-curve ". [ 17 ] Subsequent observations of productivity increases in 2000s may be due to lag effects of IT investments in the 1970s-1990s period. [ 18 ]
By the late 1990s there were some signs that productivity in the workplace had been improved by the introduction of IT, especially in the United States. In fact, Erik Brynjolfsson and his colleagues found a significant positive relationship between IT investments and productivity, at least when these investments were made to complement organizational changes. [19] [20] [21] [22] A large share of the productivity gains outside the IT-equipment industry itself came in retail, wholesale and finance. [23] The 1990s IT-related productivity jump arguably resolved the original paradox in favor of the lagged-benefits explanations. [5] [8]
There was an additional slowdown in productivity growth in the United States and other developed countries from the 2000s to the 2020s; the newer slowdown is sometimes referred to as the productivity slowdown, the productivity puzzle, or productivity paradox 2.0. The 2000s to 2020s productivity slowdown has been defined in terms of lower developed-world productivity growth, especially in the US, compared to the period between the 1940s and 1970s and the period between 1994 and 2004. [24] Sometimes this productivity slowdown is analyzed in the context of AI and other modern IT advancements, similarly to the 1970s and 1980s productivity paradox. [25] Many of the hypothesized explanations of the 1970s and 1980s productivity paradox also remain relevant to the discussion of the modern productivity paradox.
New mismeasurement hypotheses are conceptually similar to the 1970s and 1980s mismeasurement hypotheses in that they still center on the idea that real output estimates overstate inflation and understate productivity; however, the new hypotheses look at additional sources of estimation error, such as the output effects of adding new, never-before-seen products. As in the 1970s and 1980s, modern, post-2000s US productivity measures are produced by comparing real output measurements from period to period, dividing the nominal output measurements of each period into an inflation component and a real output component. As before, the US government's calculations of real GDP do not measure inflation directly but estimate it from the change in total spending and the change in total units consumed for goods and services over time. The newer inflation calculation methods do compensate for the previously raised mismeasurement problems using hedonic regression methods, but they still do not take into account the output-inflation effects of introducing new products. If existing goods and services improve from period to period, hedonic regression estimates can approximate what consumers would pay for the quality improvements and lower the inflation estimates by those amounts. However, if new goods and services appear in a sector in one time period, the extra money that consumers would pay for those new goods and services is not captured in the inflation estimate; the observed extra spending in that sector is measured as inflation rather than attributed to the new goods and services. Thus, modern real output calculations still characterize consumer spending on new products and services, as well as any spending on quality improvements not captured by the hedonic regression models, as inflation, which overstates inflation and understates productivity growth. [11] [1]
New lag hypotheses are substantially the same as the older lag hypotheses, but focus on the lag effects of different new technologies and different ways that technology can improve productivity. Productivity benefits from IT investments in the mid-1990s tended to come from their ability to improve supply-chain, back-office and end-to-end operations. Productivity benefits from IT investments post-2000s are expected to come from front-office operations and new product introductions. [26]
Acemoglu, Autor, Dorn, Hanson & Price (2014) studied IT productivity benefits in manufacturing to find that "there is...little evidence of faster productivity growth in IT-intensive industries after the late 1990s. Second and more importantly, to the extent that there is more rapid growth of labor productivity...this is associated with declining output...and even more rapidly declining employment." [ 27 ] In fact, up to half of the growth of U.S. healthcare spending is attributable to technology costs. [ 28 ]
Computers and mobile phones are continually cited as the greatest reducers of workplace productivity by means of distraction. [ 29 ]
Despite high expectations for online retail sales, individual-item and small-quantity handling and transportation costs may offset the savings of not having to maintain bricks-and-mortar stores. [30] Online retail sales have proven successful for specialty items, collectibles and higher-priced goods. Some airline and hotel retailers and aggregators have also seen great success.
Online commerce has been extremely successful in banking, airline, hotel, and rental car reservations, to name a few. | https://en.wikipedia.org/wiki/Productivity_paradox |
Products of conception , abbreviated POC , is a medical term used for the tissue derived from the union of an egg and a sperm . It encompasses anembryonic gestation (blighted ovum) which does not have a viable embryo .
In the context of tissue from a dilation and curettage , the presence of POC essentially excludes an ectopic pregnancy .
Retained products of conception is where products of conception remain in the uterus after childbirth, medical abortion or miscarriage (also known as spontaneous abortion). [1] Miscarriage with retained products of conception is termed delayed when little or no products of conception have been passed, and incomplete when some products have been passed but some still remain in utero. [2] [3]
The diagnosis is based on clinical presentation, quantitative HCG, ultrasound, and pathologic evaluation. A solid, heterogeneous, echogenic mass has a positive predictive value of 80%, but is present in only a minority of cases. A thickened endometrium of more than 10 mm is usually considered abnormal, though no consensus exists on the appropriate cutoff: a cutoff of 8 mm or more has a 34% positive rate, while a cutoff of 14 mm or more has 85% sensitivity and 64% specificity for the diagnosis. Color Doppler flow in the endometrial canal can increase confidence in the diagnosis, though its absence does not exclude it, as 40% of cases of retained products have little or no flow.

The differential in suspected cases includes uterine atony, blood clot, gestational trophoblastic disease, and the normal postpartum appearance of the uterus. Postpartum blood clot is more common, reported in up to 24% of postpartum patients; it tends to be more hypoechoic than retained products, with absent color flow on Doppler, and resolves spontaneously on follow-up scans. The presence of gas raises the possibility of postpartum endometritis, though this can also be seen in up to 21% of normal post-pregnancy states. The normal postpartum uterus is usually less than 2 cm in thickness and continues to involute on follow-up scans to 7 mm or less over time.

Retained products are not uncommon, occurring in approximately 1% of all pregnancies, though they are more common following abortions, either elective or spontaneous. There is significant overlap between the appearance of a normal postpartum uterus and retained products. If there is no endometrial canal mass or fluid, and the endometrial thickness is less than 10 mm without increased flow, retained products are statistically unlikely. [4] [5] [6]
Recent studies indicate that the products of conception may be susceptible to pathogenic infections, [7] including viral infections. Indeed, footprints of JC polyomavirus and Merkel cell polyomavirus have been detected in chorionic villi from women affected by spontaneous abortion as well as from pregnant women. [8] [9] Another virus, BK polyomavirus, has been detected in the same tissues, but to a lesser extent. [8]
According to the 2006 WHO Frequently asked clinical questions about medical abortion , [ 10 ] the presence of remaining products of conception in the uterus (as detected by obstetric ultrasonography ) after a medical abortion is not an indication for surgical intervention (that is, vacuum aspiration or dilation and curettage ). Remaining products of conception will be expelled during subsequent vaginal bleeding. Still, surgical intervention may be carried out on the woman's request, if the bleeding is heavy or prolonged, or causes anemia , or if there is evidence of endometritis .
In delayed miscarriage (also called missed abortion), the Royal Women's Hospital recommendations of management depend on the findings in ultrasonography: [ 11 ]
In incomplete miscarriage, the Royal Women's Hospital recommendations of management depend on the findings in ultrasonography: [ 11 ] | https://en.wikipedia.org/wiki/Products_of_conception |
Produgelaviricota is a phylum of viruses . [ 1 ]
Produgelaviricota has two classes: Ainoaviricetes and Belvinaviricetes . Ainoaviricetes is monotypic down to the rank of species, i.e. the class has just one species. [ 2 ]
| https://en.wikipedia.org/wiki/Produgelaviricota |
Prodynorphin , also known as proenkephalin B , is an opioid polypeptide hormone involved with chemical signal transduction and cell communication. The gene for prodynorphin is expressed in the endometrium and the striatum , and its gene map locus is 20pter-p12. Prodynorphin is a basic building-block of endorphins , the chemical messengers in the brain that appear most heavily involved in the anticipation and experience of pain and the formation of deep emotional bonds, and that are also critical in learning and memory. [ 1 ] [ 2 ]
The gene is thought to influence perception, as well as susceptibility to drug dependence, and is expressed more readily in human beings than in other primates.
Most humans have multiple copies of the regulatory gene sequence for prodynorphin, which is virtually identical among all primates, whereas other primates have only a single copy. In addition, most Asian populations have two copies of the regulatory sequence, whereas East Africans, Middle Easterners, and Europeans tend to have three repetitions. [3]
The extent of the regulatory gene disparities for prodynorphin between humans and other primates has gained the attention of scientists. There are very few genes known to be directly related to mankind's speciation from other great apes. According to computational biologist Matthew W. Hahn of Indiana University, "this is the first documented instance of a neural gene that has had its regulation shaped by natural selection during human origins." [citation needed]
The prodynorphin polypeptide is identical in humans and chimpanzees, but the regulatory promoter sequences have been shown to exhibit marked differences. According to Hahn, "humans have the ability to turn on this gene more easily and more intensely than other primates", a reason why regulation of this gene may have been important in the evolution of modern humans' mental capacity. [ citation needed ] | https://en.wikipedia.org/wiki/Prodynorphin |
In probability theory , Proebsting's paradox is an argument that appears to show that the Kelly criterion can lead to ruin. Although it can be resolved mathematically, it raises some interesting issues about the practical application of Kelly, especially in investing. It was named and first discussed by Edward O. Thorp in 2008. [ 1 ] The paradox was named for Todd Proebsting , its creator.
If a bet is equally likely to win or lose, and pays b times the stake for a win, the Kelly bet is:
f ∗ = b − 1 2 b {\displaystyle f^{*}={\frac {b-1}{2b}}}
times wealth. [2] For example, if a 50/50 bet pays 2 to 1, Kelly says to bet 25% of wealth. If a 50/50 bet pays 5 to 1, Kelly says to bet 40% of wealth.
Now suppose a gambler is offered 2 to 1 payout and bets 25%. What should he do if the payout on new bets changes to 5 to 1? He should choose f ∗ to maximize:
1 2 ln ⁡ ( 1.5 + 5 f ∗ ) + 1 2 ln ⁡ ( 0.75 − f ∗ ) {\displaystyle {\tfrac {1}{2}}\ln(1.5+5f^{*})+{\tfrac {1}{2}}\ln(0.75-f^{*})}
because if he wins he will have 1.5 (the 0.5 from winning the 25% bet at 2 to 1 odds) plus 5 f ∗ ; and if he loses he must pay 0.25 from the first bet, and f ∗ from the second. Taking the derivative with respect to f ∗ and setting it to zero gives:
5 1.5 + 5 f ∗ = 1 0.75 − f ∗ {\displaystyle {\frac {5}{1.5+5f^{*}}}={\frac {1}{0.75-f^{*}}}}
which can be rewritten:
5 ( 0.75 − f ∗ ) = 1.5 + 5 f ∗ {\displaystyle 5(0.75-f^{*})=1.5+5f^{*}} , i.e. 10 f ∗ = 2.25 {\displaystyle 10f^{*}=2.25} .
So f * = 0.225.
The paradox is that the total bet, 0.25 + 0.225 = 0.475, is larger than the 0.4 Kelly bet if the 5 to 1 odds are offered from the beginning. It is counterintuitive that you bet more when some of the bet is at unfavorable odds. Todd Proebsting emailed Ed Thorp asking about this.
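Both numbers are easy to confirm with a grid search over the second-stage bet (pure Python; the step size is arbitrary):

```python
import math

def expected_log(f):
    """Expected log-wealth when adding a bet f at 5 to 1 on top of the
    existing 25% bet at 2 to 1: win -> 1.5 + 5f, lose -> 0.75 - f."""
    return 0.5 * math.log(1.5 + 5 * f) + 0.5 * math.log(0.75 - f)

best = max((i / 100000 for i in range(74999)), key=expected_log)
print(round(best, 3))         # 0.225
print(round(0.25 + best, 3))  # 0.475 total staked, above the one-shot Kelly 0.4
```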
Ed Thorp realized the idea could be extended to give the Kelly bettor a nonzero probability of being ruined. He showed that if a gambler is offered 2 to 1 odds, then 4 to 1, then 8 to 1 and so on (2 n to 1 for n = 1 to infinity) Kelly says to bet:
1 4 ( 3 4 ) n − 1 {\displaystyle {\frac {1}{4}}\left({\frac {3}{4}}\right)^{n-1}}
each time. The sum of all these bets is 1. So a Kelly gambler has a 50% chance of losing his entire wealth.
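Thorp's construction can be checked directly by computing each incremental one-step Kelly optimum, derived the same way as in the example above (an illustrative sketch):

```python
def doubling_odds_bets(rounds=30):
    """Incremental Kelly bets when 50/50 odds of 2, 4, 8, ... to 1 arrive in
    sequence on the same proposition. The stakes sum to 1, so the wealth
    remaining after a loss tends to zero: losing means total ruin."""
    win, lose, total = 1.0, 1.0, 0.0  # wealth if win / if lose, total staked
    for n in range(1, rounds + 1):
        b = 2.0 ** n
        x = (b * lose - win) / (2 * b)  # one-step Kelly optimum at the new odds
        win, lose, total = win + b * x, lose - x, total + x
    return total, lose

print(doubling_odds_bets())  # total staked -> 1.0, losing wealth -> 0.0
```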
In general, if a bettor makes the Kelly bet f 1 on a 50/50 proposition with a payout of b 1 , and then is offered b 2 , he will bet a total of:
f 2 + f 1 b 2 − b 1 2 b 2 {\displaystyle f_{2}+f_{1}{\frac {b_{2}-b_{1}}{2b_{2}}}}
where f 2 is the Kelly bet for payout b 2 .
The first term is what the bettor would bet if offered b 2 initially. The second term is positive if f 2 > f 1 , meaning that if the payout improves, the Kelly bettor will bet more than he would if just offered the second payout, while if the payout gets worse he will bet less than he would if offered only the second payout.
Many bets have the feature that payoffs and probabilities can change before the outcome is determined. In sports betting for example, the line may change several times before the event is held, and news may come out (such as an injury or weather forecast) that changes the probability of an outcome. In investing, a stock originally bought at $20 per share might be available now at $10 or $30 or any other price. Some sports bettors try to make income from anticipating line changes rather than predicting event outcomes. Some traders concentrate on possible short-term price movements of a security rather than its long-term fundamental prospects. [ 3 ]
A classic investing example is a trader who has exposure limits, say he is not allowed to have more than $1 million at risk in any one stock. That doesn't mean he cannot lose more than $1 million. If he buys $1 million of the stock at $20 and it goes to $10, he can buy another $500,000. If it then goes to $5, he can buy another $500,000. If it goes to zero, he can lose an infinite amount of money, despite never having more than $1 million at risk. [ 4 ]
One easy way to dismiss the paradox is to note that Kelly assumes that probabilities do not change. A Kelly bettor who knows odds might change could factor this into a more complex Kelly bet. For example suppose a Kelly bettor is given a one-time opportunity to bet a 50/50 proposition at odds of 2 to 1. He knows there is a 50% chance that a second one-time opportunity will be offered at 5 to 1. Now he should maximize:
1 4 ln ⁡ ( 1 + 2 f 1 + 5 f 2 ) + 1 4 ln ⁡ ( 1 − f 1 − f 2 ) + 1 4 ln ⁡ ( 1 + 2 f 1 ) + 1 4 ln ⁡ ( 1 − f 1 ) {\displaystyle {\tfrac {1}{4}}\ln(1+2f_{1}+5f_{2})+{\tfrac {1}{4}}\ln(1-f_{1}-f_{2})+{\tfrac {1}{4}}\ln(1+2f_{1})+{\tfrac {1}{4}}\ln(1-f_{1})}
with respect to both f 1 and f 2 . The answer turns out to be bet zero at 2 to 1, and wait for the chance of betting at 5 to 1, in which case you bet 40% of wealth. If the probability of being offered 5 to 1 odds is less than 50%, some amount between zero and 25% will be bet at 2 to 1. If the probability of being offered 5 to 1 odds is more than 50%, the Kelly bettor will actually make a negative bet at 2 to 1 odds (that is, bet on the 50/50 outcome with payout of 1/2 if he wins and paying 1 if he loses). In either case, his bet at 5 to 1 odds, if the opportunity is offered, is 40% minus 0.7 times his 2 to 1 bet.
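The zero first-stage bet can likewise be confirmed by a coarse grid search over both fractions (an illustrative sketch; the resolution is arbitrary):

```python
import math

def expected_log(f1, f2):
    """Expected log-wealth betting f1 at 2 to 1 now and, with probability
    50%, f2 more at 5 to 1 later, all on one 50/50 proposition."""
    return 0.25 * (math.log(1 + 2 * f1 + 5 * f2) + math.log(1 - f1 - f2)
                   + math.log(1 + 2 * f1) + math.log(1 - f1))

grid = [i / 1000 for i in range(999)]
best = max(((f1, f2) for f1 in grid for f2 in grid if f1 + f2 < 1),
           key=lambda p: expected_log(*p))
print(best)  # (0.0, 0.4)
```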
What the paradox says, essentially, is that if a Kelly bettor has incorrect beliefs about what future bets may be offered, he can make suboptimal choices, and even go broke. The Kelly criterion is supposed to do better than any essentially different strategy in the long run and have zero chance of ruin, as long as the bettor knows the probabilities and payouts. [ 2 ]
More light was shed on the issues by an independent consideration of the problem by Aaron Brown, also communicated to Ed Thorp by email. In this formulation, the assumption is that the bettor first sells back the initial bet, then makes a new bet at the second payout. In this case his total bet is:
f 2 − f 1 b 2 − b 1 2 b 2 ⋅ b 2 − 1 b 2 + 1 {\displaystyle f_{2}-f_{1}{\frac {b_{2}-b_{1}}{2b_{2}}}\cdot {\frac {b_{2}-1}{b_{2}+1}}}
which looks very similar to the formula above for the Proebsting formulation, except that the sign is reversed on the second term and it is multiplied by an additional term.
For example, given the original example of a 2 to 1 payout followed by a 5 to 1 payout, in this formulation the bettor first bets 25% of wealth at 2 to 1. When the 5 to 1 payout is offered, the bettor can sell back the original bet for a loss of 0.125. His 2 to 1 bet pays 0.5 if he wins and costs 0.25 if he loses. At the new 5 to 1 payout, he could get a bet that pays 0.625 if he wins and costs 0.125 if he loses; this is 0.125 better than his original bet in both states. Therefore his original bet now has a value of −0.125. Given his new wealth level of 0.875, his 40% bet (the Kelly amount for the 5 to 1 payout) is 0.35.
The two formulations are equivalent. In the original formulation, the bettor has 0.25 bet at 2 to 1 and 0.225 bet at 5 to 1. If he wins, he gets 2.625 and if he loses he has 0.525. In the second formulation, the bettor has 0.875 and 0.35 bet at 5 to 1. If he wins, he gets 2.625 and if he loses he has 0.525.
The second formulation makes clear that the change in behavior results from the mark-to-market loss the investor experiences when the new payout is offered. This is a natural way to think in finance, less natural to a gambler. In this interpretation, the infinite series of doubling payouts does not ruin the Kelly bettor by enticing him to overbet, it extracts all his wealth through changes beyond his control. | https://en.wikipedia.org/wiki/Proebsting's_paradox |
Profanity is a text mode instant messaging interface that supports the XMPP protocol. [ 2 ] It supports Linux , macOS , Windows (via Cygwin or WSL ), FreeBSD , and Android (via Termux ).
Packages are available in the Debian , [ 3 ] Ubuntu [ 4 ] and Arch Linux [ 5 ] distributions.
Features include multi-user chat, desktop notifications, Off The Record [ 6 ] [ 7 ] and OMEMO [ 8 ] message encryption.
| https://en.wikipedia.org/wiki/Profanity_(instant_messaging_client) |
The Professional Electrical Apparatus Reconditioning League, or PEARL, is an international professional organization and standards group based in Denver, Colorado. [1] PEARL is focused on developing ethical business practices and technical standards [2] related to inspecting, testing, and reconditioning circuit breakers, transformers, motor controls, switchgear, disconnect switches, protective relays, bus duct, motor starters and other electrical equipment and apparatus used in the electrical distribution systems of commercial, industrial, and utility facilities.
PEARL's standards for inspecting, testing and reconditioning electrical equipment , components and apparatus help ensure the reliable, safe operation of devices such as circuit breakers, transformers, switches, protective relays, and contactors. PEARL also disseminates information on electrical safety news and counterfeit notices relating to electrical equipment utilized at commercial, industrial and utility facilities.
PEARL is an "American National Standards Institute (ANSI) Developer of Reconditioning Standards" for returning electrical equipment to safe and reliable service. PEARL's goal is to develop a single set of consensus technical standards for reconditioning electrical equipment used in industrial and commercial installations and accepted by a pool of industry professionals and stakeholders.
To this end, PEARL is seeking qualified individuals to review and provide input and comment on new standards and any revision to a standard. These individuals will have expertise and knowledge of inspecting, testing and reconditioning electrical equipment used in industrial and commercial facilities.
Counterfeit electrical apparatus poses a growing threat [3] to all sectors of the electrical marketplace, from the OEMs who lose revenue and brand prestige, to distributors and suppliers who risk liability, to electrical contractors and end users who can face financial or physical harm from potentially dangerous counterfeit electrical devices. Independent suppliers of electrical product are particularly susceptible to fraudulent counterfeit goods because most OEMs will not sell their 'new' product at wholesale prices directly to non-licensed distributors, forcing independent electrical supply houses to seek alternate sources.
In September 2007, PEARL held a special board meeting to discuss counterfeit electrical power equipment. [ 4 ] Among other actions, PEARL's Standards and Practices Committee issued a policy directive to all members to pro-actively assist OEMs and other organizations with identifying, reporting, and policing counterfeit electrical product and the companies and individuals that sell it. Since 2007, PEARL members have helped OEMs and other industry associations locate several shipments of counterfeit product.
PEARL sponsors an annual "Electrical Safety, Reliability and Sustainability Conference & Exhibition". Conference topics include but are not limited to:
In 2010 PEARL published a white paper "Reconditioning: The Ultimate Form of Recycling" outlining how reuse, reconditioning, and remanufacturing use a fraction of the energy of new production, keep millions of tons of waste from landfills every year, reduce raw material consumption, and create 3 to 5 times more skilled jobs than automated production lines.
Because the remanufacturing process consumes only about 15% of the energy used to create a new product, remanufacturing in the U.S. saves 400 trillion BTUs annually, the equivalent of 16 million barrels of crude oil, or enough gasoline to run 6 million cars for a year. Based on a weighted average of 140 pounds of CO2 pollution for every 1 million BTUs of energy consumed, remanufacturing reduces CO2 generation by 28 million tons each year, equal to the CO2 output of ten 500-megawatt coal-burning electrical plants. Remanufacturing also saves the U.S. enough raw materials to fill 155,000 railway cars each year. [5] [6]
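The quoted figures are internally consistent, as a quick arithmetic check shows (reading "tons" as US short tons):

```python
energy_saved_btu = 400e12          # 400 trillion BTU saved annually, as stated
co2_lb_per_mmbtu = 140             # 140 lb CO2 per million BTU, as stated
co2_lb = energy_saved_btu / 1e6 * co2_lb_per_mmbtu
print(co2_lb / 2000)               # 28,000,000 short tons -> the quoted 28 million
```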
Boston University's Prof. Robert Lund estimated the U.S. remanufacturing industry at $53 billion in sales in 1996, employing approximately 480,000 people. [7] This figure includes only the portion of the electrical equipment market related to electrical motors, and does not include remanufacturing of other electrical products such as circuit breakers, transformers, etc. Remanufacturing electrical equipment keeps thousands of tons of waste out of U.S. landfills every year, based on the inventory turnarounds shown just by PEARL member companies. Also, unlike recycling, which only reclaims part of the materials within a waste stream, reconditioning reclaims more of the materials, as well as most of the energy and labor used to manufacture the original product, making reconditioning a more environmentally sustainable practice than recycling. In 2003, the OEM Product-Services Institute (OPI) said U.S. electrical generation facilities alone spent $3.1 billion on remanufacturing, overhaul, and rebuilding. [8]
Between 1980 and 1992, the National Institute for Occupational Safety and Health estimated that on average 411 workers died in the U.S. each year from electrocution . [ 9 ] Safety of remanufactured electrical equipment is a prime focus of PEARL.
Although figures on energy savings and pollution reduction due solely to electrical reconditioning do not exist, PEARL has been recognized by the California Integrated Waste Management Board (CIWMB) through its Waste Reduction Award Program (WRAP) [10] as an organization that has helped the state meet its waste reduction goals.
PEARL's original corporate members first came together in Denver, CO, in 1996 to discuss emerging issues surrounding new electronic data interchange (EDI) systems for the ordering and purchase of electric apparatus and equipment for commercial and industrial markets. As a group, these independent suppliers of new, surplus, and used electrical apparatus and equipment for commercial and industrial electrical applications typically were members neither of horizontal electrical industry associations and standards organizations, such as the National Electrical Manufacturers Association (NEMA), which develops general electrical enclosure and interconnect standards for electrical original equipment manufacturers (OEM); nor of vertical trade associations such as the Electrical Apparatus Service Association Inc. (EASA), which develops standards for servicing electrical motors, or the International Electrical Testing Association (NETA), which develops standards for electrical field testing and field equipment maintenance. EASA and NETA would go on to become standards development groups for the American National Standards Institute (ANSI).
Although OEMs of electrical equipment did develop maintenance and repair documents for their individual company products and offer for-fee remanufacturing services, neither OEMs nor their trade associations collected these repair documents, standardized the processes from different companies, validated the processes through third-party engineering review, or offered them as a group of standards to the electrical industry at large. EASA had developed standards for rewinding electric motors, excluding motor control circuits, and NETA had developed the standards to calibrate electrical test equipment used in both EASA and eventually PEARL standards; but in 1996, PEARL's founding members saw that the industry did not have the technical standards necessary to ensure the safety of reconditioned electrical equipment used by industrial and commercial industries, ranging from circuit breakers and transformers to conduit and bus duct.
As a result of these conditions within the electrical industry, in 1997, 20 charter members formed PEARL to collect, create, and disseminate information, policies, procedures, and standards to ensure the proper recycling and reuse of electrical power equipment, as well as to prevent fraudulent electrical apparatus labeling and misrepresentation of electrical equipment. As of May 25, 2009, PEARL's 51 corporate voting members and 30 affiliate members, representing more than $500 million in annual sales revenues from companies in the U.S. and Canada, have contributed to the development of 137 Reconditioning Standards for Electrical Equipment ranging from circuit breakers and transformers to conduit and bus duct.
| https://en.wikipedia.org/wiki/Professional_Electrical_Apparatus_Recyclers_League |
Professional Engineers Day was launched by the National Society of Professional Engineers in 2016 to celebrate and raise public awareness of the contributions of licensed professional engineers in the United States. As of 2015, there were 474,777 licensed professional engineers in the U.S. [ 1 ] The first Professional Engineers Day was celebrated on August 3, 2016.
The idea for Professional Engineers Day came from Tim Austin, PE, a professional engineer from Kansas who served as president of the National Society of Professional Engineers in 2015–16. While promotion of engineering in the US is common, such as the attention given to STEM fields and events such as the USA Science and Engineering Festival and National Engineers Week (U.S.) , which was also founded by NSPE in 1951, Austin believed attention should be paid specifically to the contributions of licensed professional engineers because of NSPE's core principle, which states, "Being a licensed professional engineer means more than just holding a certificate and possessing technical competence. It is a commitment to hold the public health, safety, and welfare above all other considerations" ("NSPE's Statement of Principles").
Led by then Executive Director Mark J. Golden, CAE, Communications Director David Siegel, and Public Relations & Outreach Manager Stacey Ober, the NSPE staff turned that idea into the celebratory event known today as PE Day, which is mostly a virtual event across social media platforms. The success of PE Day and the efforts of NSPE staff were recognized by the Association of Media & Publishing in the category "Promotional Content: Social Media Campaign" in June 2018.
Licensing of professional engineers in the US began in 1907, when Clarence Johnston, the state engineer of Wyoming, presented a bill to the Wyoming legislature that would require registration for those representing themselves to the public as an engineer or land surveyor. The bill was later enacted, making the state the first in the US to register engineers and land surveyors. [ 2 ]
On August 8, 1907, Charles Bellamy of Wyoming received the first professional engineering license. [ 3 ] Professional Engineers Day is held the first Wednesday in August to mark that occasion. Incidentally, Bellamy's wife, Mary Godat Bellamy , is also known for a first: the first woman elected to the Wyoming legislature.
Charles Bellamy founded Bellamy & Sons Engineers in 1913. [ 4 ]
The Bellamy Chapter of the Wyoming Society of Professional Engineers is named in his honor.
Today, for the purpose of protecting the public, all US states and territories license professional engineers. [ 5 ] Professional Engineers make contributions in virtually all areas of society, industry, and economy including, among other things, energy, transportation, manufacturing, communications, environment, mining, building, defense, education, medical, and consumer products.
To mark the inaugural Professional Engineers Day in 2016, governors from Kansas, Ohio, Oklahoma, Louisiana, and Wisconsin signed proclamations recognizing the contributions licensed professional engineers make to their states and society. | https://en.wikipedia.org/wiki/Professional_Engineers_Day_(U.S.) |
The Professional Lighting and Sound Association ( PLASA ) is a trade association headquartered in Eastbourne , United Kingdom . Its membership is made up of companies involved with the events and entertainments technology sector.
PLASA was originally known as the British Association of Discothèque Equipment Manufacturers (BADEM), a name used between 1976 and 1983. [ 3 ]
In 2010 PLASA merged with the Entertainment Services and Technology Association , [ 4 ] [ 5 ] and demerged in 2015. John Simpson, the PLASA Governing Body Chair at the time, said "This has been a difficult period for PLASA but it is also an opportunity for us to refocus. PLASA has a chance to reassess its role in this industry, its relationships and communications with its members, and the future directions of its commercial activities." Also during this time the PLASA Show was relocated to Earls Court and CEO Matthew Griffiths left his post. [ 6 ]
Peter Heath took the role of CEO in April 2016. In the same year the PLASA Show moved back to the West London venue London Olympia. Since then, PLASA Show has steadily regained popularity, with the 2018 edition of the show being the “busiest and most vibrant show in recent history”. [ 7 ]
PLASA's activities include lobbying , organising trade show events (including the PLASA Show ), publishing both technical and industry news products (such as Lighting & Sound International and Lighting & Sound America ), developing industry standards and developing industry certification schemes. [ 8 ]
PLASA performed lobbying of Ofcom and other British Government entities in the late 2000s when users of radio microphones and similar devices complained that their equipment would be rendered unusable as a result of proposed plans to auction the radio spectrum utilised by many of such devices as part of the digital television switchover . [ 9 ] [ 10 ]
After merging with ESTA , PLASA took on the role of maintaining the industry standards for DMX512 and RDM . [ 11 ] PLASA have also been responsible for the development of a UK National Rigging Certificate, which launched in 2007 [ 12 ] for individuals working in the entertainments rigging industry. [ 1 ] [ 13 ]
Each year, PLASA presents the PLASA Awards for Innovation and a Sustainability Award. The Awards for Innovation aim to emphasise a focus on true innovation: the procedure ensures that all nominated products are vetted to show that they offer something new to the industry.
PLASA has been a part of the European Ecodesign Coalition which includes prominent industry bodies from across Europe. The purpose of the coalition has been to campaign against Ecodesign lighting regulations and propose exemptions for stage lighting.
In 2018 PLASA collaborated with Hamish Dumbreck of JESE Ltd, Peter Willis of Howard Eaton Lighting and Wayne Howell of Artistic Licence to present Plugfest, a three-day residential event in Gatwick, UK for lighting technicians and developers to test the interoperability of their products. The event returned in 2019, taking place in Lille, France. | https://en.wikipedia.org/wiki/Professional_Lighting_and_Sound_Association |
Professional audio , abbreviated as pro audio , refers to both an activity and a category of high-quality, studio-grade audio equipment. Typically it encompasses sound recording , sound reinforcement system setup and audio mixing , and studio music production by trained sound engineers , audio engineers , record producers , and audio technicians who work in live event support and recording using mixing consoles , recording equipment and sound reinforcement systems. Professional audio is differentiated from consumer- or home-oriented audio, which are typically geared toward listening in a non-commercial environment.
Professional audio can include, but is not limited to, broadcast radio , audio mastering in a recording studio or television studio , and sound reinforcement , such as at a live concert, DJ performances, audio sampling , public address system setup, sound reinforcement in movie theatres , and the design and setup of piped music in hotels and restaurants. Professional audio equipment is sold at professional audio stores and music stores .
The term professional audio has no precise definition, but it typically encompasses the activities and equipment described above.
A professional audio store is a retail establishment that sells, and in many cases rents, expensive, high-end sound recording equipment ( microphones , [ 3 ] audio mixers , digital audio recorders , speakers and surround sound speakers, [ 4 ] monitor speakers ) and sound reinforcement system gear (e.g., speaker enclosure cabinets, stage monitor speakers , power amplifiers , subwoofer cabinets) and accessories used in both settings, such as microphone stands . Some pro audio stores also sell video equipment, such as video projectors , as this equipment is commonly used in live audio settings (e.g., business presentations and conventions). Some pro audio stores also sell and/or rent DJ gear ( record turntables , DJ mixers ) and the stage lighting equipment used in rock concerts, dance clubs , raves and theater / musical theater shows. | https://en.wikipedia.org/wiki/Professional_audio |
Profibus (usually styled as PROFIBUS , a portmanteau of Process Field Bus ) is a standard for fieldbus communication in automation technology. It was first promoted in 1989 by BMBF (the German department of education and research) and then used by Siemens . [ 1 ] It should not be confused with the Profinet standard for Industrial Ethernet .
Profibus is openly published as type 3 of IEC 61158/61784-1. [ 2 ]
The history of PROFIBUS goes back to a publicly promoted plan from Marco Todaro for an association which started in Germany in 1986 and for which 18 companies and institutes devised a master project plan called " fieldbus ". [ 3 ] The goal was to implement and spread the use of a bit-serial field bus based on the basic requirements of the field device interfaces. For this purpose, member companies agreed to support a common technical concept for production (i.e. discrete or factory automation ) and process automation . First, the complex communication protocol Profibus FMS (Fieldbus Message Specification), which was tailored for demanding communication tasks, was specified. Subsequently, in 1993, the specification for the simpler and thus considerably faster protocol PROFIBUS DP (Decentralised Peripherals) was completed. Profibus FMS is used for (non-deterministic) communication of data between Profibus masters. Profibus DP is a protocol made for (deterministic) communication between Profibus masters and their remote I/O slaves. [ 4 ] [ 5 ]
There are two variations of PROFIBUS in use today: the most commonly used PROFIBUS DP, and the lesser-used, application-specific PROFIBUS PA.
In excess of 30 million PROFIBUS nodes were installed by the end of 2009. 5 million of these are in the process industries. [ 3 ]
To use these functions, various service levels [ 10 ] of the DP protocol [ 11 ] were defined. [ 12 ] [ 13 ]
The data link layer FDL (Fieldbus Data Link) services [ 15 ] and protocols [ 16 ] work with a hybrid access method that combines token passing with a master/slave method. In a PROFIBUS DP network, the controllers or process control systems are the masters and the sensors and actuators are the slaves. [ 12 ] [ 17 ]
Each byte has even parity and is transferred asynchronously with a start and stop bit.
No pause is permitted between a stop bit and the following start bit when the bytes of a telegram are transmitted. The master signals the start of a new telegram with a SYN pause of at least 33 bits (logical "1" = bus idle).
Various telegram types are used. They can be differentiated by their start delimiter (SD), as illustrated in the sketch after this list:
SD1 = 0x10
SD2 = 0x68
SD3 = 0xA2
SD4 = 0xDC
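As a minimal illustration (not a full FDL implementation), a receiver can classify an incoming telegram by its first byte. The SD values are those listed above; the human-readable descriptions reflect the common FDL telegram formats:

```python
# Classify a Profibus telegram by its start delimiter (SD) byte.
# SD values are from the list above; descriptions per common FDL usage.
TELEGRAM_TYPES = {
    0x10: "SD1: fixed length, no data field",
    0x68: "SD2: variable data length",
    0xA2: "SD3: fixed data length",
    0xDC: "SD4: token telegram",
}

def classify(first_byte: int) -> str:
    return TELEGRAM_TYPES.get(first_byte, "unknown start delimiter")

print(classify(0x68))  # -> SD2: variable data length
```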
Three different methods are specified for the bit-transmission layer. [ 9 ]
For data transfer via sliding contacts for mobile devices or optical or radio data transmission in open spaces, products from various manufacturers can be obtained, however they do not conform to any standard.
PROFIBUS DP [ 6 ] uses two-core screened cable with a violet sheath, [ 18 ] and runs at speeds between 9.6 kbit/s and 12 Mbit/s. [ 20 ] A particular speed can be chosen for a network to give enough time for communication with all the devices present in the network. Slowly changing systems can use a lower communication speed, while quickly changing systems need a faster speed to communicate effectively. The RS485 balanced transmission used in PROFIBUS DP only allows 31 devices to be connected at once; however, more devices (up to 126) can be connected or the network expanded with the use of hubs or repeaters (4 hubs or repeaters to reach 126). [ 7 ] A hub or repeater is also counted as a device. [ 21 ]
PROFIBUS PA [ 8 ] runs at a fixed speed of 31.25 kbit/s via blue-sheathed two-core screened cable. This variant is designed for use where the risk of explosion must be minimised or where intrinsically safe equipment is needed. The message formats in PROFIBUS PA are identical to those in PROFIBUS DP.
Note: PROFIBUS DP and PROFIBUS PA should not be confused with PROFINET .
Profiles are pre-defined configurations of the functions and features available from PROFIBUS for use in specific devices or applications. They are specified by PI working groups and published by PI. Profiles are important for openness, interoperability and interchangeability, so that the end user can be sure that similar equipment from different vendors performs in a standardised way. User choice also encourages competition that drives vendors towards enhanced performance and lower costs.
There are PROFIBUS profiles for Encoders, Laboratory instruments, Intelligent pumps , Robots and Numerically Controlled machines, for example. Profiles also exist for applications such as using HART and wireless with PROFIBUS, and process automation devices via PROFIBUS PA. Other profiles have been specified for Motion Control (PROFIdrive) and Functional Safety ( PROFIsafe ).
The PROFIBUS Nutzerorganisation e.V. (PROFIBUS User Organisation, or PNO) was created in 1989. [ 3 ] This group was composed mainly of manufacturers and users from Europe. In 1992, the first regional PROFIBUS organisation was founded (PROFIBUS Schweiz in Switzerland). In the following years, additional Regional PROFIBUS & PROFINET Associations (RPAs) were added.
In 1995, all the RPAs joined together under the international umbrella association Profibus and Profinet International (PI). Today, PROFIBUS is represented by 25 RPAs around the world (including PNO) with over 1400 members, including most if not all major automation vendors and service suppliers, along with many end users. | https://en.wikipedia.org/wiki/Profibus |
In standardization , a profile is a subset internal to a specification . Aspects of a complex technical specification may necessarily have more than one interpretation, and there are often many optional features. These aspects constitute a profile of the standard. Two implementations engineered from the same description may not interoperate due to having a different profile of the standard. Vendors can even ignore features that they view as unimportant, yet their implementations may prevail in the long run.
The use of profiles in these ways can force one interpretation, or create de facto standards from official standards. Engineers can design or procure by using a profile to ensure interoperability. For example, the International Standard Profile, ISP, is used by the ISO in their ISO ISP series of standards; in the context of OSI networking , Britain uses the UK-GOSIP profile and the US uses US- GOSIP ; there are also various mobile profiles adopted by the W3C for web standards. In particular, implementations of standards on mobile devices often have significant limitations compared to their traditional desktop implementations, even if the standard which governs both permits such limitations.
In structural engineering a profile means a hot rolled structural steel shape like an Ɪ-beam .
In civil engineering , a profile consists of a plotted line which indicates grades and distances (and typically depths of cut and/or elevations of fill) for excavation and grading work. Constructors of roadways , railways (and similar works) normally chart the profile along the centerline. A profile can also indicate the vertical slope(s) (changes in elevation) in a pipeline or similar structure. Civil engineers always depict a profile as a side ( cross section ) view (as opposed to an overhead ( plan ) view).
In fabricating , a profile consists of the more-or-less complex outline of a shape to be cut in a sheet of material such as laminated plastic, aluminium alloy or steel plate . In modern practice, a drawing office determines the shape and dimensions required to fit the sheet into a larger work and feeds directions to a computer controlling a profile cutter . This then cuts the shape from a standard-sized sheet. The cutting head may use a rotating cutter like that of a spindle router or (in the case of steel plate) a torch which burns oxy-acetylene or other oxy-gas. | https://en.wikipedia.org/wiki/Profile_(engineering) |
Road surface textures are deviations from a planar and smooth surface, affecting the vehicle/tyre interaction. Pavement texture is divided into: microtexture with wavelengths from 0 mm to 0.5 millimetres (0.020 in), macrotexture with wavelengths from 0.5 millimetres (0.020 in) to 50 millimetres (2.0 in) and megatexture with wavelengths from 50 millimetres (2.0 in) to 500 millimetres (20 in).
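These wavelength bands translate directly into a classification rule. A minimal sketch, with band edges as defined above and boundary handling chosen arbitrarily:

```python
def texture_class(wavelength_mm: float) -> str:
    """Classify a pavement texture wavelength (in mm) into the bands above."""
    if wavelength_mm < 0.5:
        return "microtexture"
    if wavelength_mm < 50:
        return "macrotexture"
    if wavelength_mm < 500:
        return "megatexture"
    return "roughness"  # longer wavelengths are treated as unevenness

print(texture_class(1.0))  # -> macrotexture
```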
Microtexture (MiTx) is the collaborative term for a material's crystallographic parameters and other aspects of microstructure: such as morphology , including size and shape distributions; chemical composition; and crystal orientation and relationships [ 1 ]
While vehicle suspension deflection and dynamic tire loads are affected by longer wavelengths ( roughness ), road texture affects the interaction between the road surface and the tire footprint. Microtexture has wavelengths shorter than 0.5 mm. It relates to the surface of the binder, of the aggregate , and of contaminants such as rubber deposits from tires.
The mix of the road material contributes to dry road surface friction. Typically, road agencies do not monitor mix directly, but indirectly by brake friction tests. However, friction also depends on other surface properties, such as macro-texture.
Macrotexture (MaTx) is partly a desired property and partly an undesired property. Short MaTx waves, about 5 mm, act as acoustical pores and reduce tyre/road noise. On the other hand, long-wave MaTx increases noise. MaTx provides wet road friction, especially at high speeds. Excessive MaTx increases rolling resistance and thus fuel consumption and CO₂ emission, contributing to global warming . Proper roads have MaTx of about 1 mm Mean Profile Depth.
Macrotexture is a family of wave-shaped road surface characteristics. While vehicle suspension deflection and dynamic tyre loads are affected by longer waves (roughness), road texture affects the interaction between the road surface and the tyre footprint. Macrotexture has wavelengths from 0.5 mm up to 50 mm.
Road agencies monitor macrotexture using measurements taken with highway speed laser or inertial profilometers.
Megatexture (MeTx) is a result of pavement wear and distress, causing noise and vibration. Megatexture has wavelengths from 50 mm up to 500 mm. Some examples of road damage with pronounced MeTx are potholes, washboards (common on dirt roads) and uneven frost heaves. MeTx below 0.2 mm Root-Mean-Square is considered normal on proper roads.
MaTx and MeTx are measured with laser/inertial profilographs . Since MiTx has so short waves, it is preferably measured by dry friction brake tests rather than by profiling. Profilographs that record texture in both left and right wheel paths can be used to identify road sections with hazardous split friction .
The profilograph is a device used to measure pavement surface roughness. In the early 20th century, profilographs were low speed rolling devices (for example rolling straight-edges ). Today, many profilographs are advanced high speed systems with a laser based height sensor in combination with an inertial system that creates a large scale reference plane. It is used by construction crews or certified consultants to measure the roughness of in-service road networks, as well as before and after milling off ridges and paving overlays. Modern profilographs are fully computerized instruments.
The data collected by a profilograph is used to calculate the International Roughness Index (IRI), which is expressed in units of inches/mile or mm/m. IRI values range from 0 (equivalent to driving on a plate of glass) upwards to several hundred in/mile (a very rough road). The IRI value is used for road management to monitor road safety and quality issues. [ 2 ]
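Because IRI is an accumulated slope statistic, its two customary units convert by a fixed factor. A minimal sketch of the conversion, using 63,360 inches per mile and 1,000 mm per metre:

```python
# IRI is suspension motion accumulated per distance travelled, i.e. a slope,
# so mm/m and in/mile differ only by a constant unit factor.
IN_PER_MILE = 63_360   # 5,280 ft x 12 in
MM_PER_M = 1_000

def mm_per_m_to_in_per_mile(iri_mm_per_m: float) -> float:
    # 1 mm/m = 63,360 / 1,000 = 63.36 in/mile
    return iri_mm_per_m * IN_PER_MILE / MM_PER_M

print(mm_per_m_to_in_per_mile(2.0))  # -> 126.72 in/mile
```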
Many road profilographs also measure the pavement's cross slope , curvature , longitudinal gradient and rutting . Some profilographs take digital photos or videos while profiling the road. Most profilographs also record the position, using GPS technology. Yet another common measurement option is cracks . Some profilograph systems include a ground penetrating radar , used to record asphalt layer thickness. [ 3 ]
Another type of profilograph system is for measuring the surface texture of a road and how it relates to the coefficient of friction and thus to skid resistance . Pavement texture is divided into three categories: megatexture, macrotexture, and microtexture. Microtexture cannot currently be measured directly, except in a laboratory. Megatexture is measured using a similar profiling method as when obtaining IRI values, while macrotexture is the measurement of the individual variations of the road within a small interval of a few centimeters. For example, a road which has gravel spread on top followed by an asphalt seal coat will have a high macrotexture, and a road built with concrete slabs will have low macrotexture. For this reason, concrete is often grooved or roughed up immediately after it is laid on the road bed to increase the friction between the tire and road.
Equipment to measure macrotexture currently consists of a distance measuring laser with an extremely small spot size (< 1 mm) and data acquisition systems capable of recording elevations spaced at 1 mm or less. The sample rate is generally over 32 kHz. Macrotexture data can be used to calculate the speed-dependent part of friction between typical car tires and the road surface in both dry and wet conditions. Microtexture affects friction as well.
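The quoted sample rate follows from the spatial resolution and the survey speed. A minimal sketch of the relationship; the example speed is an assumption:

```python
def required_sample_rate_hz(speed_m_s: float, spacing_m: float) -> float:
    """Sample rate needed to record one elevation per `spacing_m` at speed."""
    return speed_m_s / spacing_m

# At roughly highway speed (30 m/s, about 108 km/h) with 1 mm spacing:
print(required_sample_rate_hz(30.0, 0.001))  # -> 30000.0 Hz, near the 32 kHz figure
```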
Lateral friction and cross slope are the key reaction forces acting to keep a cornering vehicle in a steady lateral position, while it is subject to exciting forces arising from speed and curvature. Cross slope and curvature can be measured with a road profilograph, and in combination with friction-related measurements can be used to identify improperly banked curves , which can increase the risk of motor vehicle accidents.
Road pavement profilometers (aka profilographs , as used in the famous 1958-1960 AASHO Road Test ) use a distance measuring laser (suspended approximately 30 cm from the pavement) in combination with an odometer and an inertial unit (normally an accelerometer to detect vehicle movement in the vertical plane) that establishes a moving reference plane to which the laser distances are integrated. The inertial compensation makes the profile data more or less independent of what speed the profilometer vehicle had during the measurements, with the assumption that the vehicle does not make large speed variations and the speed is kept above 25 km/h or 15 mph. The profilometer system collects data at normal highway speeds, sampling the surface elevations at intervals of 2–15 cm (1–6 in), and requires a high speed data acquisition system capable of obtaining measurements in the kilohertz range.
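In outline, the inertial compensation double-integrates the vertical acceleration to recover the sensor's height trajectory and subtracts the laser range to obtain the road elevation. The following is a highly simplified sketch; it assumes uniform sampling and omits the drift correction and high-pass filtering that real systems require:

```python
import numpy as np

def road_profile(accel_z, laser_range, dt):
    """Reconstruct relative road elevation from vertical acceleration (m/s^2)
    and laser height-above-road readings (m), sampled every dt seconds.
    Simplified: no drift correction, no resampling from time to distance."""
    accel_z = np.asarray(accel_z, dtype=float)
    laser_range = np.asarray(laser_range, dtype=float)
    velocity_z = np.cumsum(accel_z) * dt        # first integration
    sensor_height = np.cumsum(velocity_z) * dt  # second integration
    # Road elevation = inertial reference height minus laser distance to road.
    return sensor_height - laser_range
```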
The data collected by a profilometer is used to calculate the International Roughness Index (IRI) which is expressed in units of inches/mile or mm/m. IRI values range from 0 (equivalent to driving on a plate of glass) upwards to several hundred in/mi (a very rough road). The IRI value is used for road management to monitor road safety and quality issues.
Many road profilers also measure the pavement's cross slope , curvature , longitudinal gradient and rutting . Some profilers take digital photos or videos while profiling the road. Most profilers also record the position, using GPS technology. Another quite common measurement option is cracks . Some profilometer systems include a ground penetrating radar , used to record asphalt layer thickness.
Another type of profilometer is for measuring the surface texture of a road and how it relates to the coefficient of friction and thus to skid resistance . Pavement texture is divided into three categories: megatexture, macrotexture, and microtexture. Microtexture cannot currently be measured directly, except in a laboratory. Megatexture is measured using a similar profiling method as when obtaining IRI values, while macrotexture is the measurement of the individual variations of the road within a small interval of a few centimeters. For example, a road which has gravel spread on top followed by an asphalt seal coat will have a high macrotexture, and a road built with concrete slabs will have low macrotexture. For this reason, concrete is often grooved or roughed up immediately after it is laid on the road bed to increase the friction between the tire and road.
Equipment to measure macrotexture currently consists of a distance measuring laser with an extremely small spot size (< 1 mm) and data acquisition systems capable of recording elevations spaced 1 mm or less apart. The sample rate is generally over 32 kHz. Macrotexture data can be used to calculate the speed-dependent part of the friction number between typical car tires and the road surface. The macrotexture also gives information on the difference between dry and wet road friction. However, macrotexture alone cannot be used to calculate a relevant friction number, since microtexture also affects the friction.
Lateral friction and cross slope are the key reaction forces acting to keep a cornering vehicle in a steady lateral position, while exposed to exciting forces from speed and curvature. Since friction is strongly dependent on texture, and cross slope as well as curvature can be measured with a road profiler, road profilers are very useful for identifying improperly banked curves that may pose a risk to motor vehicles. | https://en.wikipedia.org/wiki/Profilograph |
Profinet (usually styled as PROFINET , a portmanteau of Process Field Network ) is an industry technical standard for data communication over Industrial Ethernet , designed for collecting data from, and controlling equipment in industrial systems , with a particular strength in delivering data under tight time constraints. The standard is maintained and supported by Profibus and Profinet International , an umbrella organization headquartered in Karlsruhe, Germany .
Profinet implements the interfacing to peripherals . [ 1 ] [ 2 ] It defines the communication with field connected peripheral devices. Its basis is a cascading real-time concept. Profinet defines the entire data exchange between controllers (called "IO-Controllers") and the devices (called "IO-Devices"), as well as parameter setting and diagnosis. IO-Controllers are typically a PLC , DCS , or IPC ; whereas IO-Devices can be varied: I/O blocks, drives, sensors, or actuators. The Profinet protocol is designed for the fast data exchange between Ethernet-based field devices and follows the provider-consumer model. [ 1 ] Field devices in a subordinate Profibus line can be integrated in the Profinet system seamlessly via an IO-Proxy (representative of a subordinate bus system). [ 3 ]
Applications with Profinet can be divided according to the international standard IEC 61784-2 into four conformance classes (CC-A to CC-D); the corresponding cabling and installation requirements are specified in IEC 61784-5-3 and ISO/IEC 24702.
A Profinet system consists of the following devices: IO-Controllers, IO-Devices, and, optionally, IO-Supervisors. [ 1 ] : 3
A minimal Profinet IO-System consists of at least one IO-Controller that controls one or more IO-Devices. In addition, one or more IO-Supervisors can optionally be switched on temporarily for the engineering of the IO-Devices if required.
If two IO-Systems are in the same IP network , the IO-Controllers can also share an input signal as shared input, in which they have read access to the same submodule in an IO-Device. [ 1 ] : 3 [ 2 ] This simplifies the combination of a PLC with a separate safety controller or motion control . Likewise, an entire IO-Device can be shared as a shared device, [ 1 ] : 11 in which individual submodules of an IO-Device are assigned to different IO-Controllers.
Each automation device with an Ethernet interface can simultaneously fulfill the functionality of an IO-Controller and an IO-Device. If a controller for a partner controller acts as an IO-Device and simultaneously controls its periphery as an IO-Controller, the tasks between controllers can be coordinated without additional devices.
An Application Relation (AR) is established between an IO-Controller and an IO-Device. These ARs are used to define Communication Relations (CR) with different characteristics for the transfer of parameters, cyclic exchange of data and handling of alarms. [ 1 ] : 4
The project engineering [ 1 ] : 5 [ 2 ] of an IO system is nearly identical to that of Profibus in terms of "look and feel".
Profinet is also increasingly being used in critical applications. There is always a risk that the required functions cannot be fulfilled. This risk can be reduced by specific measures as identified by a dependability [ 6 ] analysis. Objectives such as safety, availability, and security are in the foreground.
These goals can interfere with or complement each other.
Profisafe [ 7 ] defines how safety-related devices ( emergency stop buttons, light grids, overfill prevention devices, ...) communicate with safety controllers via Profinet in such a safe way that they can be used in safety-related automation tasks up to Safety Integrity Level 3 (SIL) according to IEC 61508 , Performance Level "e" (PL) according to ISO 13849 , or Category 4 according to EN 954-1.
Profisafe implements safe communication via a profile, [ 8 ] i.e. via a special format of the user data and a special protocol. It is designed as a separate layer on top of the fieldbus application layer to reduce the probability of data transmission errors. The Profisafe messages use standard fieldbus cables and messages. They do not depend on error detection mechanisms of underlying transmission channels, and thus support securing of whole communication paths, including backplanes inside controllers or remote I/O . [ 9 ] The error and failure detection mechanisms of the Profisafe protocol are defined in the IEC 61784 -3-3 standard.
High availability [ 10 ] is one of the most important requirements in industrial automation, both in factory and process automation. The availability of an automation system can be increased by adding redundancy for critical elements. A distinction can be made between system and media redundancy.
System redundancy can also be implemented with Profinet to increase availability . In this case, two IO-Controllers that control the same IO-Device are configured. The active IO-Controller marks its output data as primary. Output data that is not marked is ignored by an IO-Device in a redundant IO-System. In the event of an error, the second IO-Controller can therefore take control of all IO-Devices without interruption by marking its output data as primary. How the two IO-Controllers synchronize their tasks is not defined in Profinet and is implemented differently by the various manufacturers offering redundant control systems.
Profinet offers two media redundancy solutions. The Media Redundancy Protocol (MRP) allows the creation of a protocol-independent ring topology with a switching time of less than 50 ms. This is often sufficient for standard real-time communication with Profinet. To switch over the redundancy in the event of an error without time delay, the "Media Redundancy for Planned Duplication" (MRPD) must be used as a seamless media redundancy concept. In the MRPD, the cyclic real-time data is transmitted in both directions in the ring-shaped topology. A time stamp in the data packet allows the receiver to remove the redundant duplicates.
The IT security concept [ 11 ] for Profinet assumes a defense-in-depth [ 12 ] approach. In this approach, the production plant is protected against attacks, particularly from outside, by a multi-level perimeter, including firewalls. Further protection is possible within the plant by dividing it into zones using firewalls . In addition, a security component test ensures that the Profinet components are resistant to overload to a defined extent. [ 13 ] This concept is supported by organizational measures in the production plant within the framework of a security management system according to ISO 27001 .
For a smooth interaction of the devices involved in an automation solution, they must correspond in their basic functions and services. Standardization is achieved by "profiles" [ 14 ] with binding specifications for functions and services. The possible functions of communication with Profinet are restricted and additional specifications regarding the function of the field device are prescribed. These can be cross-device-class properties such as a safety-relevant behavior (Common Application Profiles) or device-class-specific properties (Specific Application Profiles). [ 15 ] A distinction is thus made between Common Application Profiles and Specific Application Profiles.
PROFIdrive [ 16 ] is the modular device profile for drive devices. It was jointly developed by manufacturers and users in the 1990s and since then, in conjunction with Profibus and, from version 4.0, also with Profinet, it has covered the entire range from the simplest to the most demanding drive solutions.
Another profile is PROFIenergy, which includes services for real-time monitoring of energy demand. This was requested in 2009 by the AIDA group of German automotive manufacturers ( Audi , BMW , Mercedes-Benz , Porsche and Volkswagen ), who wished to have a standardised way of actively managing energy usage in their plants. High-energy devices and sub-systems such as robots, lasers and even paint lines are the target for this profile, which will help reduce a plant's energy costs by intelligently switching the devices into 'sleep' modes to take account of production breaks, both foreseen (e.g. weekends and shut-downs) and unforeseen (e.g. breakdowns).
Modern process devices have their own intelligence and can take over part of the information processing or the overall functionality in automation systems. For integration into a Profinet system, [ 17 ] [ 18 ] two-wire Ethernet and increased availability are required.
The profile PA Devices [ 19 ] defines, for different classes of process devices, all functions and parameters typically used in process devices for the signal flow from the sensor signal from the process to the pre-processed process value, which is read out to the control system together with a measured value status. The PA Devices profile contains device data sheets for the various classes of process devices.
Ethernet Advanced Physical Layer (Ethernet-APL) [ 20 ] describes a physical layer for the Ethernet communication technology which is especially developed for the requirements of the process industries. The development of Ethernet-APL was determined by the need for communication at high speeds and over long distances, the supply of power and communications signals via common single, twisted-pair (2-wire) cable as well as protective measures for the safe use within explosion hazardous areas. Ethernet APL opens the possibility for Profinet to be incorporated into process instruments.
Profinet uses the following protocols in the different layers [ 2 ] : 15 of the OSI model :
Layers 1-2: Full-duplex 100 MBit/s electrical ( 100BASE-TX ) or optical ( 100BASE-FX ) connections according to IEEE 802.3 are recommended [ 21 ] for devices. Autocrossover is mandatory for all connections so that the use of crossover cables can be avoided. From IEEE 802.1Q , VLAN with priority tagging is used. All real-time data are thus given the highest possible priority of 6 and are therefore forwarded by a switch with a minimum delay.
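To make the priority handling concrete, the sketch below assembles the Ethernet header of a VLAN-tagged Profinet real-time frame. The TPID 0x8100 and the Profinet RT EtherType 0x8892 are standard values; the MAC addresses and the FrameID are illustrative placeholders:

```python
import struct

def profinet_rt_header(dst_mac: bytes, src_mac: bytes, frame_id: int) -> bytes:
    """Build an Ethernet II header with an 802.1Q tag (priority 6) for a
    Profinet RT frame; the IO data, cycle counter and status would follow."""
    TPID = 0x8100             # 802.1Q VLAN tag protocol identifier
    PCP, DEI, VID = 6, 0, 0   # priority 6, as used for Profinet real-time data
    tci = (PCP << 13) | (DEI << 12) | VID
    ETHERTYPE_PROFINET = 0x8892
    return (dst_mac + src_mac
            + struct.pack("!HH", TPID, tci)
            + struct.pack("!HH", ETHERTYPE_PROFINET, frame_id))

# Placeholder addresses and FrameID, for illustration only.
hdr = profinet_rt_header(bytes(6), bytes.fromhex("020000000001"), frame_id=0x8000)
print(hdr.hex())
```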
The Profinet protocol can be recorded and displayed with any Ethernet analysis tool. Wireshark is capable of decoding Profinet telegrams.
The Link Layer Discovery Protocol (LLDP) has been extended with additional parameters, so that in addition to the detection of neighbors, the propagation time of the signals on the connection lines can be communicated.
Layers 3-6: Either the Remote Service Interface (RSI) protocol or the Remote Procedure Call (RPC) protocol is used for the connection setup and the acyclic services. The RPC protocol is used via User Datagram Protocol (UDP) and Internet Protocol (IP) with the use of IP addresses . The Address Resolution Protocol (ARP) is extended for this purpose with the detection of duplicate IP addresses. The Discovery and basic Configuration Protocol (DCP) is mandatory for the assignment of IP addresses. Optionally, the Dynamic Host Configuration Protocol (DHCP) can also be used for this purpose. No IP addresses are used with the RSI protocol. Thus, IP can be used in the operating system of the field device for other protocols such as OPC Unified Architecture (OPC UA).
Layer 7: Various protocols [ 1 ] are defined to access the services of the Fieldbus Application Layer (FAL). The RT (Real-Time) protocol for class A & B applications with cycle times in the range of 1 - 10 ms. The IRT (Isochronous Real-Time) protocol for application class C allows cycle times below 1 ms for drive technology applications. This can also be achieved with the same services via Time-Sensitive Networking (TSN).
The functionalities of Profinet IO are realized with different technologies and protocols:
The basic function of Profinet is the cyclic data exchange between the IO-Controller, as producer of the output data consumed by the IO-Devices, and the IO-Devices, as producers of the input data consumed by the IO-Controller. [ 1 ] Each communication relationship ( IO data CR ) between the IO-Controller and an IO-Device defines the number of data and the cycle times.
All Profinet IO-Devices must support device diagnostics and the safe transmission of alarms via the communication relation for alarms Alarm CR .
In addition, device parameters can be read and written with each Profinet device via the acyclic communication relation Record Data CR . The data set for the unique identification of an IO-Device, the Identification and Maintenance Data Set 0 (I&M 0), must be installed by all Profinet IO-Devices. Optionally, further information can be stored in a standardized format as I&M 1-4.
For real-time data (cyclic data and alarms), the Profinet Real-Time (RT) telegrams are transmitted directly via Ethernet. UDP/IP is used for the transmission of acyclic data.
The Application Relation (AR) is established between an IO-Controller and every IO-Device to be controlled. Inside the ARs are defined the required CRs. The Profinet AR life-cycle consists of address resolution, connection establishment, parameterization, process IO data exchange / alarm handling, and termination.
In addition to the basic Class A functions, Class B devices must support additional functionalities. [ 1 ] These functionalities primarily support the commissioning, operation and maintenance of a Profinet IO system and are intended to increase the availability of the Profinet IO system.
Support of network diagnostics with the Simple Network Management Protocol (SNMP) is mandatory. Likewise, the Link Layer Discovery Protocol (LLDP) for neighborhood detection including the extensions for Profinet must be supported by all Class B devices. This also includes the collection and provision of Ethernet port-related statistics for network maintenance. With these mechanisms, the topology of a Profinet IO network can be read out at any time and the status of the individual connections can be monitored. If the network topology is known, automatic addressing of the nodes can be activated by their position in the topology. This considerably simplifies device replacement during maintenance, since no more settings need to be made.
High availability of the IO system is particularly important for applications in process automation and process engineering. For this reason, special procedures have been defined for Class B devices with the existing relationships and protocols. This allows system redundancy with two IO-Controllers accessing the same IO-Devices simultaneously. In addition, there is a prescribed procedure Dynamic Reconfiguration (DR), how the configuration of an IO-Device can be changed with the help of these redundant relationships without losing control over the IO-Device.
For the functionalities of Conformance Class C (CC-C) the Isochronous Real-Time [ 1 ] (IRT) protocol is mainly used.
With the bandwidth reservation , a part of the available transmission bandwidth of 100 MBit/s is reserved exclusively for real-time tasks. A procedure similar to a time multiplexing method is used. The bandwidth is divided into fixed cycle times, which in turn are divided into phases. The red phase is reserved exclusively for class C real-time data, in the orange phase the time-critical messages are transmitted and in the green phase the other Ethernet messages are transparently passed through. To ensure that maximum Ethernet telegrams can still be passed through transparently, the green phase must be at least 125 μs long. Thus, cycle times under 250 μs are not possible in combination with unchanged Ethernet.
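The 250 μs limit follows directly from this phase budget. A minimal sketch of the constraint check; the phase names follow the text, and the example red/orange durations are assumptions:

```python
MIN_GREEN_US = 125.0  # minimum green phase so a maximum-size Ethernet
                      # frame can still pass through unfragmented

def feasible_cycle(cycle_us: float, red_us: float, orange_us: float,
                   fragmentation: bool = False) -> bool:
    """Check whether a bandwidth-reservation cycle leaves a legal green phase."""
    green_us = cycle_us - red_us - orange_us
    if fragmentation:
        return green_us > 0  # fragmentation relaxes the 125 us requirement
    return green_us >= MIN_GREEN_US

print(feasible_cycle(250.0, 100.0, 25.0))  # True: green phase is exactly 125 us
print(feasible_cycle(200.0, 60.0, 20.0))   # False without fragmentation
```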
In order to achieve shorter cycle times down to 31.25 μs, the Ethernet telegrams of the green phase are optionally broken down into fragments. These short fragments are now transmitted via the green phase. This fragmentation mechanism is transparent to the other participants on the Ethernet and therefore not recognizable.
In order to implement these bus cycles for bandwidth reservation, precise clock synchronization of all participating devices including the switches is required with a maximum deviation of 1 μs. This clock synchronization is implemented with the Precision Time Protocol (PTP) according to the IEEE 1588-2008 (1588 V2) standard. All devices involved in the bandwidth reservation must therefore be in the same time domain.
For position control applications for several axes or for positioning processes according to the PROFIdrive [ 16 ] drive profile of application classes 4 - 6, not only must communication be timely, but the actions of the various drives on a Profinet must also be coordinated and synchronized . The clock synchronization of the application program to the bus cycle allows control functions to be implemented that are executed synchronously on distributed devices.
If several Profinet devices are connected in a line ( daisy chain ), it is possible to further optimise the cyclic data exchange with Dynamic Frame Packing (DFP). For this purpose, the controller puts the output data for all devices into a single IRT frame. As the IRT frame passes, each device extracts the data intended for it, so the IRT frame becomes shorter and shorter. For the data from the different devices to the controller, the IRT frame is dynamically assembled. The great efficiency of DFP lies in the fact that the IRT frame is always only as long as necessary and that the data from the controller to the devices can be transmitted in full duplex simultaneously with the data from the devices to the controller.
Class D offers the same services to the user as Class C, with the difference that these services are provided using the mechanisms of Time-Sensitive Networking (TSN) defined by IEEE.
The Remote Service Interface (RSI) is used as a replacement for the Internet protocol suite . Thus, this application class D is implemented independently of IP addresses . The protocol stack will be smaller and independent of future Internet versions ( IPv6 ).
The TSN is not a consistent, self-contained protocol definition, but a collection of different protocols with different characteristics that can be combined almost arbitrarily for each application. For use in industrial automation , a subset is compiled in IEC/IEEE standard 60802 "Joint Profile TSN for Industrial Automation". A subset is used in the Profinet specification version 2.4 for implementing class D. [ 22 ]
In this specification, a distinction is made between two applications.
For the isochronous data exchange the clocks of the participants must be synchronized. For this purpose, the specifications of the Precision Time Protocol according to IEC 61588 for time synchronization with TSN [ 23 ] are adapted accordingly.
The telegrams are arranged in queues according to the priorities provided in the VLAN tag. The Time-Aware Shaper (TAS) [ 24 ] now specifies a clock pulse with which the individual queues are processed in a switch. This leads to a time-slot procedure where the isochronous, cyclical data is transmitted with the highest priority, the cyclical data with the second priority before all acyclic data. This reduces the latency time and also the jitter for the cyclic data. If a data telegram with low priority lasts too long, it can be interrupted by a cyclic data telegram with high priority and transmitted further afterwards. This procedure is called Frame Preemption [ 25 ] and is mandatory for CC-D.
For the realization [ 26 ] of a Profinet interface as controller or device in classes CC-A and CC-B, there are no hardware requirements that cannot be met by a common Ethernet interface ( 100BASE-TX or 100BASE-FX ). To enable a simpler line topology, the installation of a switch with 2 ports in a device is recommended.
For the realization of class C (CC-C) devices, an extension of the hardware with time synchronization with the Precision Time Protocol (PTP) and the functionalities of bandwidth reservation is required. For class D (CC-D) devices, the hardware must support the required functionalities of Time-Sensitive Networking (TSN) according to IEEE standards .
The method of implementation [ 27 ] depends on the design and performance of the device and the expected quantities. The alternatives range from a pure software protocol stack on a standard Ethernet interface to implementations with dedicated communication hardware.
At the general meeting of the Profibus user organisation in 2000, the first concrete discussions for a successor to Profibus based on Ethernet took place. Just one year later, the first specification of Component Based Automation (CBA) was published and presented at the Hanover Fair. In 2002, the Profinet CBA became part of the international standard IEC 61158 / IEC 61784 -1.
A Profinet CBA system [ 29 ] consists of different automation components. One component comprises all mechanical, electrical and information technology variables. The component may have been created with the usual programming tools. To describe a component, a Profinet Component Description (PCD) file is created in XML . A planning tool loads these descriptions and allows the logical connections between the individual components to be created to implement a plant.
The basic idea behind Profinet CBA was that in many cases it is possible to divide an entire automation system into autonomously operating - and thus manageable - subsystems. The structure and functionality may well be found in several plants in identical or slightly modified form. Such so-called Profinet components are normally controlled by a manageable number of input signals. Within the component, a control program written by the user executes the required functionality and sends the corresponding output signals to another controller. The communication of a component-based system is planned instead of programmed. Communication with Profinet CBA was suitable for bus cycle times of approx. 50 to 100 ms.
Individual systems showed how these concepts could be successfully implemented in applications. However, Profinet CBA did not find the expected acceptance in the market and has no longer been listed in the IEC 61784-1 standard since the 4th edition of 2014.
In 2003 the first specification of Profinet IO (IO = Input Output) was published. The application interface of Profibus DP (DP = Decentralized Periphery), which was successful on the market, was adopted and supplemented with current protocols from the Internet. In the following year, the extension with isochronous transmission followed, which made Profinet IO suitable for motion control applications. Profisafe was adapted so that it could also be used via Profinet. With the clear commitment of AIDA [ 30 ] to Profinet in 2004, market acceptance was established. In 2006 Profinet IO became part of the international standard IEC 61158 / IEC 61784 -2.
By 2007, according to an independent count, 1 million Profinet devices had been installed; in the following year this number doubled to 2 million. By 2019, a cumulative total of 26 million [ 31 ] devices sold by the various manufacturers was reported.
In 2019, the specification for Profinet was completed with Time-Sensitive Networking (TSN), [ 32 ] thus introducing the CC-D conformance class. | https://en.wikipedia.org/wiki/Profinet |
In mathematics , more precisely in formal language theory , the profinite words are a generalization of the notion of finite words into a complete topological space . This notion allows the use of topology to study languages and finite semigroups . For example, profinite words are used to give an alternative characterization of the algebraic notion of a variety of finite semigroups .
Let A be an alphabet . The set of profinite words over A consists of the completion of a metric space whose domain is the set A ∗ {\displaystyle A^{*}} of words over A . The distance used to define the metric is given using a notion of separation of words. Those notions are now defined.
Let M and N be monoids , and let p and q be elements of the monoid M . Let φ be a morphism of monoids from M to N . It is said that the morphism φ separates p and q if ϕ ( p ) ≠ ϕ ( q ) {\displaystyle \phi (p)\neq \phi (q)} . For example, the morphism ϕ : A ∗ → Z / 2 Z , w ↦ | w | ( mod 2 ) {\displaystyle \phi :A^{*}\to \mathbb {Z} /2\mathbb {Z} ,w\mapsto |w|(\operatorname {mod} 2)} sending a word to the parity of its length separates the words ababa and abaa . Indeed ϕ ( a b a b a ) = 1 ≠ 0 = ϕ ( a b a a ) {\displaystyle \phi (ababa)=1\neq 0=\phi (abaa)} .
It is said that N separates p and q if there exists a morphism of monoids φ from M to N that separates p and q . Using the previous example, Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } separates ababa and abaa . More generally, Z / n Z {\displaystyle \mathbb {Z} /n\mathbb {Z} } separates any words whose sizes are not congruent modulo n . In general, any two distinct words can be separated, using the monoid whose elements are the prefixes of p plus a fresh element 0. The morphism sends prefixes of p to themselves and everything else to 0.
The distance between two distinct words p and q is defined as the inverse of the size of the smallest monoid N separating p and q . Thus, the distance between ababa and abaa is 1 2 {\displaystyle {\frac {1}{2}}} . The distance of p to itself is defined as 0.
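The value just quoted can be verified directly from the definitions; the following short derivation uses only facts stated above:

```latex
% Verification that d(ababa, abaa) = 1/2.
\begin{align*}
&\text{(i) The trivial monoid } \{1\} \text{ separates no pair of words, since the}\\
&\quad\ \text{unique morphism } A^* \to \{1\} \text{ is constant.}\\
&\text{(ii) } \phi : A^* \to \mathbb{Z}/2\mathbb{Z},\ w \mapsto |w| \bmod 2,
 \text{ gives } \phi(ababa) = 1 \neq 0 = \phi(abaa).\\
&\text{Hence the smallest separating monoid has exactly two elements, and }
 d(ababa, abaa) = \tfrac{1}{2}.
\end{align*}
```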
This distance d is an ultrametric , that is, d ( x , z ) ≤ max { d ( x , y ) , d ( y , z ) } {\displaystyle d(x,z)\leq \max \left\{d(x,y),d(y,z)\right\}} . Furthermore it satisfies d ( u w , v w ) ≤ d ( u , v ) {\displaystyle d(uw,vw)\leq d(u,v)} and d ( w u , w v ) ≤ d ( u , v ) {\displaystyle d(wu,wv)\leq d(u,v)} .
Since any word p can be separated from any other word using a monoid with |p|+1 elements, where |p| is the length of p , it follows that the distance between p and any other word is at least 1 | p | + 1 {\displaystyle {\frac {1}{|p|+1}}} . Thus the topology defined by this metric on A ∗ {\displaystyle A^{*}} is discrete .
The profinite completion of A ∗ {\displaystyle A^{*}} , denoted A ∗ ^ {\displaystyle {\widehat {A^{*}}}} , is the completion of the set of finite words under the distance defined above. The completion preserves the monoid structure.
The topology on A ∗ ^ {\displaystyle {\widehat {A^{*}}}} is compact .
Any monoid morphism ϕ : A ∗ → M {\displaystyle \phi :A^{*}\to M} , with M finite can be extended uniquely into a monoid morphism ϕ ^ : A ∗ ^ → M {\displaystyle {\widehat {\phi }}:{\widehat {A^{*}}}\to M} , and this morphism is uniformly continuous (using any metric on M {\displaystyle M} compatible with the discrete topology). Furthermore, A ∗ ^ {\displaystyle {\widehat {A^{*}}}} is the least topological space with this property.
A profinite word is an element of A ∗ ^ {\displaystyle {\widehat {A^{*}}}} . And a profinite language is a set of profinite words. Every finite word is a profinite word. A few examples of profinite words that are not finite are now given.
For any word m , let m ω {\displaystyle m^{\omega }} denote lim i → ∞ m i ! {\displaystyle \lim _{i\to \infty }m^{i!}} , which exists because m i ! {\displaystyle m^{i!}} is a Cauchy sequence . Intuitively, to separate m i ! {\displaystyle m^{i!}} and m i ′ ! {\displaystyle m^{i'!}} , a monoid should count at least up to min ( i , i ′ ) {\displaystyle \min(i,i')} , and hence requires at least min ( i , i ′ ) {\displaystyle \min(i,i')} elements. Since m i ! {\displaystyle m^{i!}} is a Cauchy sequence , m ω {\displaystyle m^{\omega }} is indeed a profinite word.
Furthermore, the word m ω {\displaystyle m^{\omega }} is idempotent . This is due to the fact that, for any morphism ϕ : A ∗ → N {\displaystyle \phi :A^{*}\to N} with N finite, ϕ ( m i ! ) = ϕ ( m ) i ! {\displaystyle \phi (m^{i!})=\phi (m)^{i!}} . Since N is finite, for i large enough, ϕ ( m ) i ! {\displaystyle \phi (m)^{i!}} is idempotent, and the sequence is constant.
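The eventual constancy can be made precise with the standard index–period argument for cyclic subsemigroups of a finite monoid; the following short derivation is a standard fact, not taken from the source:

```latex
% Why \phi(m)^{i!} is eventually a constant idempotent for finite N.
% Write x = \phi(m). Since N is finite, there exist an index k and a
% period p with x^{k+p} = x^k, so x^a = x^b whenever a,b \ge k and
% a \equiv b \pmod{p}.
\begin{align*}
\text{For } i \ge \max(k,p):\quad & i! \ge k \text{ and } p \mid i!,\\
\text{hence}\quad & x^{i!} = x^{(i+1)!} = \cdots = e,
\end{align*}
% where e is the unique idempotent power of x: indeed (x^{i!})^2 = x^{2 i!},
% with 2 i! \ge k and 2 i! \equiv i! \pmod{p}, so x^{2 i!} = x^{i!}.
```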
Similarly, m ω + 1 {\displaystyle m^{\omega +1}} and m ω − 1 {\displaystyle m^{\omega -1}} are defined as lim n → ∞ m n ! + 1 {\displaystyle \lim _{n\to \infty }m^{n!+1}} and lim n → ∞ m n ! − 1 {\displaystyle \lim _{n\to \infty }m^{n!-1}} respectively.
The notion of profinite languages allows one to relate notions of semigroup theory to notions of topology. More precisely, given P a profinite language, the following statements are equivalent:
Similar statements also hold for languages P of finite words. The following conditions are equivalent.
These characterisations follow from the more general fact that taking the closure of a language of finite words and restricting a profinite language to finite words are mutually inverse operations when applied to recognisable languages.
Pin, Jean-Éric (2022-02-18). Mathematical Foundations of Automata Theory (PDF). pp. 130–139.
Almeida, Jorge (1994). Finite Semigroups and Universal Algebra. River Edge, NJ: World Scientific Publishing Co. ISBN 981-02-1895-8. | https://en.wikipedia.org/wiki/Profinite_word
A progenitor cell is a biological cell that can differentiate into a specific cell type. Stem cells and progenitor cells share this ability. However, stem cells are less specialized than progenitor cells: progenitor cells can differentiate only into their "target" cell type. [ 1 ] The most important difference between stem cells and progenitor cells is that stem cells can replicate indefinitely, whereas progenitor cells can divide only a limited number of times. Controversy about the exact definition remains, and the concept is still evolving. [ 2 ]
The terms "progenitor cell" and "stem cell" are sometimes equated. [ 3 ]
Most progenitors are identified as oligopotent . In this respect, they are comparable to adult stem cells, but progenitors are at a later stage of cell differentiation: they are "midway" between stem cells and fully differentiated cells. The kind of potency they have depends on the type of their "parent" stem cell and on their niche. Some research has found that progenitor cells are mobile and can move through the body, migrating towards the tissue where they are needed. [ 4 ] Many properties are shared by adult stem cells and progenitor cells.
Progenitor cells have become a focus of research on several fronts. Current research on progenitor cells concentrates on two applications: regenerative medicine and cancer biology. Research in regenerative medicine has focused on progenitor cells and stem cells because their cellular senescence contributes largely to the process of aging. [ 5 ] Research in cancer biology focuses on the impact of progenitor cells on cancer responses and on the way these cells tie into the immune response. [ 6 ]
The natural aging of cells, called cellular senescence, is one of the main contributors to aging at the organismal level. [ 7 ] There are several hypotheses about why aging happens at the cellular level. Telomere length has been shown to correlate positively with longevity. [ 8 ] [ 9 ] Increased circulation of progenitor cells in the body also correlates positively with longevity and regenerative processes. [ 10 ] Endothelial progenitor cells (EPCs) are one of the main focuses of this field. They are valuable because they directly precede endothelial cells while retaining characteristics of stem cells. These cells can produce differentiated cells to replenish the supply lost in the natural process of aging, which makes them a target for aging-therapy research. [ 11 ] This field of regenerative medicine and aging research is still evolving.
Recent studies have shown that haematopoietic progenitor cells contribute to immune responses in the body. They have been shown to respond to a range of inflammatory cytokines . They also contribute to fighting infections by renewing the resources depleted by the stress an infection places on the immune system. Inflammatory cytokines and other factors released during infections activate haematopoietic progenitor cells to differentiate and replenish the lost resources. [ 12 ]
Progenitor cells are characterized, and distinguished from other cells, on the basis of cell markers rather than morphological appearance. [ 13 ]
Before embryonic day 40 (E40), progenitor cells generate other progenitor cells; after that period, progenitor cells produce only dissimilar mesenchymal stem cell daughters. The cells from a single progenitor cell form a proliferative unit that creates one cortical column; these columns contain a variety of neurons with different shapes. [ 20 ] | https://en.wikipedia.org/wiki/Progenitor_cell |
Progeria is a specific type of progeroid syndrome , also known as Hutchinson–Gilford progeria syndrome (HGPS). [ 8 ] Progeroid syndromes are a group of diseases that cause individuals to age faster than usual, leading them to appear older than they actually are. A single gene mutation is responsible for causing progeria. The affected gene , known as lamin A ( LMNA ), makes a protein necessary for holding the cell nucleus together. When this gene mutates, an abnormal form of the lamin A protein called progerin is produced. People born with progeria typically live until their mid- to late teens or early twenties. [ 9 ] [ 10 ] Severe cardiovascular complications usually develop by puberty , later resulting in death.
Most children with progeria appear normal at birth and during early infancy. [ 11 ] Children with progeria usually develop the first symptoms during their first few months of life. The earliest symptoms may include a failure to thrive and a localized scleroderma -like skin condition. As a child ages past infancy, additional conditions become apparent, usually around 18–24 months. Limited growth, full-body alopecia (hair loss), and a distinctive appearance (a small face with a shallow, recessed jaw and a pinched nose) are all characteristics of progeria. [ 5 ]
Signs and symptoms of this progressive disease tend to become more marked as the child ages. Later, the condition causes wrinkled skin, kidney failure, loss of eyesight, and atherosclerosis and other cardiovascular problems. [ 12 ] Scleroderma, a hardening and tightening of the skin on the trunk and extremities, is prevalent. People diagnosed with this disorder usually have small, fragile bodies, like those of older adults. The head is usually large relative to the body, with a narrow, wrinkled face and a beaked nose. Prominent scalp veins are noticeable (made more obvious by alopecia), as are prominent eyes. Musculoskeletal degeneration causes loss of body fat and muscle, stiff joints, hip dislocations, and other symptoms generally absent in the non-elderly population. Individuals usually retain typical mental and motor function. [ citation needed ]
Hutchinson–Gilford progeria syndrome (HGPS) is an extremely rare autosomal dominant genetic disorder in which symptoms resembling aspects of aging are manifested at an early age. [ 8 ] Its occurrence is usually the result of a sporadic germline mutation ; although HGPS is genetically dominant, affected people rarely live long enough to have children, so the disorder is rarely passed on in a hereditary manner. [ 13 ]
HGPS is caused by de novo mutations in the gene that encodes lamin A ; these weaken the structure of the cell nucleus, making normal cell division difficult, and also alter the histone mark H4K20me3 . Lamin A is made but is not processed properly. This defective processing creates an abnormal nuclear morphology and disorganized heterochromatin . Patients also lack appropriate DNA repair and have increased genomic instability. [ 14 ]
In normal conditions, the LMNA gene codes for a structural protein called prelamin A, which undergoes a series of processing steps before attaining its final form, called lamin A. [ 15 ] Prelamin A contains a "CAAX" motif, where C is a cysteine, A an aliphatic amino acid, and X any amino acid. This motif at the carboxyl terminus of a protein triggers three sequential enzymatic modifications. First, protein farnesyltransferase catalyzes the addition of a farnesyl moiety to the cysteine. Second, an endoprotease that recognizes the farnesylated protein catalyzes cleavage of the peptide bond between the cysteine and the -AAX residues. Third, isoprenylcysteine carboxyl methyltransferase catalyzes methylation of the carboxyl-terminal farnesylated cysteine. The farnesylated and methylated protein is transported through a nuclear pore to the interior of the nucleus . Once in the nucleus, the protein is cleaved by a protease called zinc metallopeptidase STE24 ( ZMPSTE24 ), which removes the last 15 amino acids, including the farnesylated cysteine. After this cleavage, prelamin A is referred to as lamin A. In most mammalian cells, lamin A, along with lamin B1, lamin B2, and lamin C, makes up the nuclear lamina , which provides shape and stability to the inner nuclear envelope. [ 16 ] [ 17 ] [ 18 ]
Before the late 20th century, research on progeria yielded very little information about the syndrome. In 2003, the cause of progeria was discovered to be a point mutation in position 1824 of the LMNA gene, which replaces a cytosine with thymine. [ 19 ] This mutation creates a 5' cryptic splice site within exon 11, resulting in a shorter-than-normal mRNA transcript. When this shorter mRNA is translated into protein, it produces an abnormal variant of the prelamin A protein, referred to as progerin . Progerin's farnesyl group cannot be removed because progerin lacks the ZMPSTE24 cleavage site, so the abnormal protein remains permanently attached to the nuclear rim. One result is that the nuclear lamina does not provide the nuclear envelope with enough structural support, causing it to take on an abnormal shape. [ 20 ] Since the support that the nuclear lamina normally provides is necessary for the organizing of chromatin during mitosis , weakening of the nuclear lamina limits the ability of the cell to divide. [ 21 ] However, defective cell division is unlikely to be the main defect leading to progeria, particularly because children develop normally without any signs of disease until about one year of age. Farnesylated prelamin A variants also lead to defective DNA repair, which may play a role in the development of progeria. [ 22 ] Progerin expression also leads to defects in the establishment of fibroblast cell polarity, which is also seen in physiological aging. [ 23 ]
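The arithmetic behind this mutation can be checked directly. The following sketch is purely illustrative (its two-entry codon table contains only the codons in question): GGC and GGT both encode glycine, so the encoded amino acid at codon 608 is unchanged, while the 150-nucleotide deletion caused by the cryptic splice site is a multiple of three and therefore removes exactly 50 amino acids without shifting the reading frame.

```python
# Illustrative check, not a full codon table: only the two codons involved.
CODON_TO_AA = {"GGC": "Gly", "GGT": "Gly"}

# The c.1824C>T change (GGC -> GGT) is synonymous at codon 608...
assert CODON_TO_AA["GGC"] == CODON_TO_AA["GGT"]

# ...but the cryptic splice site it activates removes 150 nucleotides,
# an in-frame deletion of 150 / 3 = 50 amino acids (yielding progerin).
assert 150 % 3 == 0 and 150 // 3 == 50
```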
To date, over 1,400 SNPs in the LMNA gene are known. [ 24 ] They can manifest as changes in mRNA, splicing, or protein amino acid sequence (e.g. Arg471Cys, [ 25 ] Arg482Gln, [ 26 ] Arg527Leu, [ 27 ] Arg527Cys, [ 28 ] and Ala529Val). [ 29 ] Progerin may also play a role in normal human aging, since its production is activated in typical senescent cells. [ 21 ] Unlike other " accelerated aging diseases ", such as Werner syndrome , Cockayne syndrome , or xeroderma pigmentosum , progeria may not be directly caused by defective DNA repair . These diseases each cause changes in a few specific aspects of aging but never in every aspect at once, so they are often called "segmental progerias". [ 30 ]
A 2003 report in Nature [ 31 ] said that progeria may be a de novo dominant trait. It develops during cell division in a newly conceived zygote or in the gametes of one of the parents. It is caused by mutations in the LMNA (lamin A protein ) gene on chromosome 1 ; the mutated form of lamin A is commonly known as progerin. One of the authors, Leslie Gordon, was a physician who did not know anything about progeria until her own son, Sam , was diagnosed at 22 months. Gordon and her husband, pediatrician Scott Berns, founded the Progeria Research Foundation. [ 32 ]
A subset of progeria patients with heterozygous mutations of LMNA have presented an atypical form of the condition, with initial symptoms not developing until late childhood or early adolescence. These patients have had longer lifespans than those with typical-onset progeria. [ 11 ] This atypical form is extremely rare, with presentations of the condition varying between patients with even the same mutation. [ 33 ] The general phenotype of atypical cases is consistent with typical progeria, but other factors (severity, onset, and lifespan) vary in presentation. [ 34 ]
Lamin A is a major component of a protein scaffold on the inner edge of the nucleus called the nuclear lamina that helps organize nuclear processes such as RNA and DNA synthesis. [ citation needed ]
Prelamin A contains a CAAX box at the C-terminus of the protein (where C is a cysteine and A is any aliphatic amino acid ). This ensures that the cysteine is farnesylated and allows prelamin A to bind membranes , specifically the nuclear membrane. After prelamin A has been localized to the cell nuclear membrane, the C-terminal amino acids, including the farnesylated cysteine, are cleaved off by a specific protease. The resulting protein, now lamin A, is no longer membrane-bound and carries out functions inside the nucleus. [ 35 ] [ 36 ]
In HGPS, the recognition site that the enzyme requires for cleavage of prelamin A to lamin A is mutated. Lamin A cannot be produced, and prelamin A builds up on the nuclear membrane, causing a characteristic nuclear blebbing . [ 37 ] This results in the symptoms of progeria, although the relationship between the misshapen nucleus and the symptoms is not known.
A study that compared HGPS patient cells with skin cells from young and elderly normal human subjects found similar defects in the HGPS and elderly cells, including down-regulation of certain nuclear proteins, increased DNA damage, and histone demethylation leading to reduced heterochromatin . [ 38 ] Over their lifespan, nematodes show progressive lamin changes comparable to HGPS in all cells except neurons and gametes . [ 39 ] These studies suggest that lamin A defects are associated with normal aging . [ 38 ] [ 40 ]
The presence of progerin also leads to the accumulation of dysfunctional mitochondria within the cell. These mitochondria are characterized by a swollen morphology, caused by condensation of mtDNA and TFAM inside the mitochondria and driven by severe mitochondrial dysfunction (low mitochondrial membrane potential, low ATP production, low respiration capacity, and high ROS production), [ 41 ] [ 42 ] [ 43 ] thereby contributing substantially to the senescence phenotype. Although the explanation for this accumulation of defective mitochondria in progeria has yet to be fully elucidated, it has been proposed that low PGC1-α expression [ 41 ] [ 42 ] [ 44 ] (important for mitochondrial biogenesis , maintenance, and function), along with low LAMP2 protein levels and lysosome numbers (both important for mitophagy , the pathway that degrades defective mitochondria), [ 41 ] could be implicated.
Skin changes, abnormal growth, and loss of hair occur. These symptoms normally start appearing by one year of age. A genetic test for LMNA mutations can confirm the diagnosis of progeria. [ 45 ] [ 46 ] Prior to the advent of the genetic test, misdiagnosis was common. [ 46 ]
Other syndromes with similar symptoms (non- laminopathy progeroid syndromes) include: [ 47 ]
In November 2020, the U.S. Food and Drug Administration approved lonafarnib , which helps prevent the buildup of defective progerin and similar proteins. [ 48 ] A 2018 clinical trial reported significantly lower mortality with lonafarnib alone compared with no treatment (3.7% vs. 33.3%) at a median post-trial follow-up of 2.2 years. [ 49 ] The drug, which received orphan drug status and a rare pediatric disease priority review voucher, is taken twice daily in the form of capsules and may cost US$650,000 per year, making it prohibitive for the vast majority of families. It is unclear how it will be covered by health insurance in the United States. Common side effects of the drug include "nausea, vomiting, diarrhea, infections, decreased appetite, and fatigue". [ 13 ]
Other treatment options have focused on reducing complications (such as cardiovascular disease ) with coronary artery bypass surgery and low-dose acetylsalicylic acid . [ 50 ] Growth hormone treatment has been attempted. [ 51 ] The use of Morpholinos has also been attempted in mice and cell cultures in order to reduce progerin production. Antisense Morpholino oligonucleotides specifically directed against the mutated exon 11–exon 12 junction in the mutated pre-mRNAs were used. [ 52 ]
A class of anticancer drugs, the farnesyltransferase inhibitors (FTIs), has been proposed, but their use has been mostly limited to animal models . [ 53 ] A Phase II clinical trial using the FTI lonafarnib began in May 2007. [ 54 ] In cell studies, another anticancer drug, rapamycin , caused removal of progerin from the nuclear membrane through autophagy . [ 20 ] [ 55 ] Pravastatin and zoledronate have also been shown to block farnesyl group production. [ citation needed ]
Farnesyltransferase inhibitors (FTIs) are drugs that inhibit the enzyme needed to form the link between progerin proteins and farnesyl groups; this link is what permanently attaches progerin to the nuclear rim. In progeria, cellular damage occurs because that attachment leaves the nucleus in an abnormal state. Lonafarnib is an FTI, meaning it can prevent this link from forming, so progerin cannot remain attached to the nuclear rim and the nucleus retains a more normal shape. [ citation needed ]
Studies of sirolimus , an mTOR inhibitor , demonstrate that it can minimize the phenotypic effects of progeria in fibroblasts. Other observed consequences of its use include abolition of nuclear blebbing , degradation of progerin in affected cells, and reduced formation of insoluble progerin aggregates. These results have been observed only in vitro and do not come from any clinical trial, although it is believed that the treatment might benefit HGPS patients. [ 20 ]
It has recently been demonstrated that the CRM1 protein (a key component of the nuclear export machinery in mammalian cells) is upregulated in HGPS cells, which drives abnormal relocalization of NES-containing proteins from the nucleus to the cytoplasm. [ 56 ] Moreover, inhibition of CRM1 in HGPS alleviates the associated senescence phenotype [ 56 ] and improves mitochondrial function (an important determinant of senescence ) and lysosome content. [ 41 ] These results are undergoing in vivo validation with selinexor (a CRM1 inhibitor more suitable for human use [ 57 ] ).
There is no known cure; as of 2024, the life expectancy of people with progeria is 15 years. [ 58 ] At least 90 percent of patients die from complications of atherosclerosis, such as heart attack or stroke. [ 59 ]
Mental development is not adversely affected; in fact, intelligence tends to be average to above average. [ 60 ] With respect to the features of aging that progeria appears to manifest, the development of symptoms is comparable to aging at a rate eight to ten times faster than normal. With respect to those that progeria does not exhibit, patients show no neurodegeneration or cancer predisposition. They also do not develop conditions that are commonly associated with accumulation of damage, such as cataracts (caused by UV exposure) and osteoarthritis . [ 45 ]
Although there may not be any successful treatments for progeria itself, there are treatments for the complications it causes, such as arthritic, respiratory, and cardiovascular problems, and recent medicinal breakthroughs enabled one patient to live until age 28. [ 61 ] People with progeria have normal reproductive development, and there are known cases of women with progeria who delivered healthy offspring. [ 62 ]
A study from the Netherlands has shown an incidence of 1 in 20 million births. [ 63 ] According to the Progeria Research Foundation, as of September 2020, there are 179 known cases in the world, in 53 countries; 18 of the cases were identified in the United States. [ 64 ] [ 13 ] Hundreds of cases have been reported in medical history since 1886. [ 65 ] [ 66 ] [ 67 ] However, the Progeria Research Foundation believes there may be as many as 150 undiagnosed cases worldwide. [ 68 ]
There have been only two cases in which a healthy person was known to carry the LMNA mutation that causes progeria. [ 69 ] One family from India had four of six children with progeria. [ 70 ]
A mouse model of progeria exists, though in the mouse the LMNA prelamin A is not mutated. Instead, ZMPSTE24 , the specific protease required to remove the C-terminus of prelamin A, is missing. Both cases result in the buildup of farnesylated prelamin A on the nuclear membrane and in the characteristic nuclear blebbing.
In 2020, base editing was used in a mouse model to target the LMNA gene mutation that produces the progerin protein instead of healthy lamin A, [ 71 ] [ 72 ] while in 2023 a study designed a peptide that prevented progerin from binding to BubR1, [ 73 ] which is known to regulate aging in mice. [ 74 ]
Repair of DNA double-strand breaks can occur by either of two processes, non-homologous end joining (NHEJ) or homologous recombination (HR). A-type lamins promote genetic stability by maintaining levels of proteins that have key roles in NHEJ and HR. [ 75 ] Mouse cells deficient for maturation of prelamin A show increased DNA damage and chromosome aberrations and have increased sensitivity to DNA damaging agents. [ 22 ] In progeria, the inability to adequately repair DNA damages due to defective A-type lamin may cause aspects of premature aging [ 76 ] (also see DNA damage theory of aging ).
Fibroblast samples from children with progeria syndrome exhibit accelerated epigenetic aging effects according to the epigenetic clock for skin and blood samples. [ 77 ]
Progeria was first described in 1886 by Jonathan Hutchinson . [ 78 ] It was also described independently in 1897 by Hastings Gilford . [ 79 ] The condition was later named Hutchinson–Gilford progeria syndrome. Scientists are interested in progeria partly because it might reveal clues about the normal process of aging. [ 80 ] [ 69 ] [ 81 ]
The word progeria comes from the Greek words pro ( πρό ) 'before, premature', and gēras ( γῆρας ), ' old age '. [ 82 ]
In 1987, fifteen-year-old Mickey Hays , who had progeria, appeared along with Jack Elam in the documentary I Am Not a Freak . [ 83 ] Elam and Hays first met during the filming of the 1986 film The Aurora Encounter , [ 84 ] in which Hays was cast as an alien. The friendship that developed lasted until Hays died in 1992, on his 20th birthday. Elam said, "You know I've met a lot of people, but I've never met anybody that got next to me like Mickey." [ This quote needs a citation ]
Harold Kushner , who among other things wrote the book When Bad Things Happen to Good People , had a son, Aaron, who died at the age of 14 in 1977 of progeria. [ 85 ] Margaret Casey, a 29-year-old woman with progeria who was then believed to be the oldest survivor of the premature aging disease, died on Sunday, May 26, 1985. Casey, a freelance artist, was admitted to Yale-New Haven Hospital on the night of May 25 with respiratory problems, which caused her death. [ 86 ] Sam Berns was an American activist with the disease. He was the subject of the HBO documentary Life According to Sam . Berns also gave a TEDx talk titled "My Philosophy for a Happy Life" on December 13, 2013. [ 87 ]
Hayley Okines was an English progeria patient who spread awareness of the condition. [ 88 ] Leon Botha , the South African painter and DJ who was known, among other things, for his work with the hip-hop duo Die Antwoord , lived with progeria. [ 89 ] He died in 2011, aged 26. Tiffany Wedekind of Columbus, Ohio, is believed to be the oldest survivor of progeria at 44 years old as of September 2022. [ 90 ] [ unreliable source ] Alexandra Peraut is a Catalan girl with progeria; she has inspired the book Una nena entre vint milions ('A girl in 20 million'), a children's book to explain progeria to youngsters. [ 91 ] [ 92 ] Adalia Rose Williams, born December 10, 2006, was an American girl with progeria, who was a notable YouTuber and vlogger who shared her everyday life on social media. She died on January 12, 2022, at the age of 15. [ 93 ]
Amy Foose, born September 12, 1969, was an American girl with progeria who died at the age of 16 on December 19, 1985. [ 94 ] She was the sister of American automobile designer, artist, and TV personality Chip Foose , who started a foundation in her name called Amy's Depot . [ 95 ] The Progeria Research Foundation gives out The Amy Award every few years in her honor. [ 96 ]
Sammy Basso , born December 1, 1995, was an Italian biologist, activist, and writer who studied progeria and campaigned to raise awareness of the disease. He died at the age of 28 on October 5, 2024; at the time of his death he was the longest-living survivor of the condition. [ 97 ] [ 98 ] | https://en.wikipedia.org/wiki/Progeria
Progeroid syndromes ( PS ) are a group of rare genetic disorders that mimic physiological aging , making affected individuals appear to be older than they are. [ 1 ] [ 2 ] The term progeroid syndrome does not necessarily imply progeria ( Hutchinson–Gilford progeria syndrome ), which is a specific type of progeroid syndrome.
Progeroid means "resembling premature aging", a definition that can apply to a broad range of diseases. Familial Alzheimer's disease and familial Parkinson's disease are two well-known accelerated-aging diseases that are more frequent in older individuals. They affect only one tissue and can be classified as unimodal progeroid syndromes. Segmental progeria, which is more frequently associated with the term progeroid syndrome, tends to affect multiple or all tissues while causing affected individuals to exhibit only some of the features associated with aging. [ citation needed ]
All disorders within this group are thought to be monogenic , [ 3 ] meaning they arise from mutations of a single gene . Most known PS are due to genetic mutations that lead to either defects in the DNA repair mechanism or defects in lamin A/C .
Examples of PS include Werner syndrome (WS), Bloom syndrome (BS), Rothmund–Thomson syndrome (RTS), Cockayne syndrome (CS), xeroderma pigmentosum (XP), trichothiodystrophy (TTD), combined xeroderma pigmentosum - Cockayne syndrome (XP-CS), restrictive dermopathy (RD), and Hutchinson–Gilford progeria syndrome (HGPS). Individuals with these disorders tend to have a reduced lifespan. [ 3 ] Progeroid syndromes have been widely studied in the fields of aging, regeneration , stem cells , and cancer . The most widely studied of the progeroid syndromes are Werner syndrome and Hutchinson–Gilford progeria, as they are seen to most resemble natural aging . [ 3 ]
One of the main causes of progeroid syndromes is genetic mutation leading to defects in the cellular processes that repair DNA . The DNA damage theory of aging proposes that aging is a consequence of the accumulation of naturally occurring DNA damage . The accumulated damage may arise from reactive oxygen species (ROS), chemical reactions (e.g., with intercalating agents ), radiation , depurination , and deamination . [ citation needed ]
Mutations in three classes of DNA repair proteins, RecQ protein-like helicases (RECQLs), nucleotide excision repair (NER) proteins, and nuclear envelope proteins LMNA (lamins) have been associated with the following progeroid syndromes: [ citation needed ]
RecQ is a family of conserved ATP -dependent helicases required for repairing DNA and preventing deleterious recombination and genomic instability . [ 4 ] DNA helicases are enzymes that bind to double-stranded DNA and temporarily separate the two strands. This unwinding is required during replication of the genome in mitosis , but in the context of PS it is also a required step in repairing damaged DNA. DNA helicases thus maintain the integrity of a cell, and defects in these helicases are linked to an increased predisposition to cancer and aging phenotypes . [ 5 ] Accordingly, individuals with RecQ-associated PS show an increased risk of developing cancer, [ 6 ] which is caused by genomic instability and increased rates of mutation. [ 7 ]
There are five genes encoding RecQ in humans (RECQ1-5), and defects in RECQL2/WRN, RECQL3/BLM and RECQL4 lead to Werner syndrome (WS), Bloom syndrome (BS), and Rothmund–Thomson syndrome (RTS), respectively. [ 4 ] [ 8 ] On the cellular level, cells of affected individuals exhibit chromosomal abnormalities, genomic instability, and sensitivity to mutagens . [ 7 ]
Werner syndrome (WS) is a rare autosomal recessive disorder. [ 9 ] [ 10 ] It has a global incidence rate of less than 1 in 100,000 live births, [ 9 ] although incidences in Japan and Sardinia are higher, where it affects 1 in 20,000-40,000 and 1 in 50,000, respectively. [ 11 ] [ 12 ] As of 2006, there were approximately 1,300 reported cases of WS worldwide. [ 3 ] Affected individuals typically grow and develop normally until puberty , when they do not experience the typical adolescent growth spurt. The mean age of diagnosis is twenty-four. [ 13 ] The median and mean age of death are 47-48 and 54 years, respectively; [ 14 ] the main cause of death is cardiovascular disease or cancer. [ 3 ] [ 13 ]
Affected individuals can exhibit growth retardation, short stature, premature graying of hair, hair loss , wrinkling , prematurely aged faces, beaked noses , skin atrophy (wasting away) with scleroderma -like lesions , loss of fat tissues , abnormal fat deposition leading to thin legs and arms, and severe ulcerations around the Achilles tendon and malleoli . Other signs include a change in voice, making it weak, hoarse, or high-pitched; atrophy of gonads , leading to reduced fertility ; bilateral cataracts (clouding of the lens); premature arteriosclerosis (thickening and loss of elasticity of arteries); calcinosis (calcium deposits in blood vessels); atherosclerosis (blockage of blood vessels); type 2 diabetes ; loss of bone mass ; telangiectasia ; and malignancies . [ 3 ] [ 9 ] In fact, the prevalence of rare cancers, such as meningiomas , is increased in individuals with Werner syndrome. [ 15 ]
Approximately 90% of individuals with Werner Syndrome have any of a range of mutations in the eponymous gene, WRN , the only gene currently connected to Werner syndrome. [ 14 ] WRN encodes the WRNp protein, a 1432 amino acid protein with a central domain resembling members of the RecQ helicases. WRNp is active in unwinding DNA, a step necessary in DNA repair and DNA replication . [ 10 ] [ 11 ] Since WRNp's function depends on DNA, it is only functional when localized to the nucleus. [ citation needed ]
Mutations that cause Werner syndrome occur only in the regions of the gene that encode protein, not in non-coding regions. [ 16 ] These mutations can have a range of effects. They may decrease the stability of the transcribed messenger RNA (mRNA), increasing the rate at which it is degraded; with less mRNA available, less WRNp protein is translated. Mutations may also lead to truncation (shortening) of the WRNp protein and loss of its nuclear localization signal sequence , which would normally transport it to the nucleus where it interacts with DNA. This leads to a reduction in DNA repair. [ 16 ] Furthermore, mutated proteins are more likely to be degraded than normal WRNp. [ 11 ] Apart from causing defects in DNA repair, the aberrant association of WRNp with p53 down-regulates the function of p53, leading to a reduction in p53-dependent apoptosis and increasing the survival of these dysfunctional cells. [ 17 ]
Cells of affected individuals have reduced lifespan in culture , [ 18 ] more chromosome breaks and translocations, [ 19 ] and extensive deletions. [ 20 ] This DNA damage, together with chromosome aberrations and mutations, may in turn cause more RecQ-independent aging phenotypes. [ citation needed ]
Bloom syndrome (BS) is a very rare autosomal recessive disorder. [ 21 ] Incidence rates are unknown, although it is known to be higher in people of Ashkenazi Jewish background, presenting in around 1 in 50,000. Approximately one-third of individuals who have BS are of Ashkenazi Jewish descent. [ citation needed ]
There is no evidence from the Bloom's Syndrome Registry or from the peer-reviewed medical literature that BS is a progeroid condition associated with advanced aging. [ citation needed ] It is, however, associated with early-onset cancer and adult-type diabetes and also with Werner syndrome, [ citation needed ] which is a progeroid syndrome, through mutation in the RecQ helicases. These associations have led to the speculation that BS could be associated with aging. Unfortunately, the average lifespan of persons with Bloom syndrome is 27 years; consequently, there is insufficient information to completely rule out the possibility that BS is associated with some features of aging. [ citation needed ]
People with BS are born with low weight and length. Even as adults, they typically remain under 5 feet tall. [ 22 ] Individuals with BS are characterized by low weight and height and abnormal facial features, particularly a long, narrow face with a small lower jaw, a large nose, and prominent ears. Most also develop photosensitivity , which causes dilation of blood vessels and reddening of the skin, usually presenting as a "butterfly-shaped patch of reddened skin across the nose and cheeks". [ 23 ] Other characteristics of BS include learning disabilities , an increased risk of diabetes , gastroesophageal reflux (GER), and chronic obstructive pulmonary disease (COPD). GER may also lead to recurrent infections of the upper respiratory tract , ears , and lungs during infancy. BS causes infertility in males and reduced fertility and early-onset menopause in females. In line with other RecQ-associated PS, people with BS have an increased risk of developing cancer, often of more than one type. [ citation needed ]
BS is caused by mutations in the BLM gene, which encodes the Bloom syndrome protein , a RecQ helicase. [ 24 ] These mutations may be frameshift , missense , nonsense , or of other kinds and are likely to cause deletions in the gene product. [ 25 ] [ 26 ] Apart from the helicase activity common to all RecQ helicases, the Bloom syndrome protein also acts to prevent inappropriate homologous recombination . During replication of the genome, the two copies of DNA, called sister chromatids , are held together through a structure called the centromere . During this time, the homologous (corresponding) copies are in close physical proximity to each other, allowing them to 'cross' and exchange genetic information, a process called homologous recombination . Defective homologous recombination can cause mutation and genetic instability: [ 27 ] it can introduce gaps and breaks within the genome and disrupt the function of genes, possibly causing growth retardation, aging, and elevated risk of cancer. The Bloom syndrome protein interacts with other proteins, such as topoisomerase IIIα and RMI2, [ 28 ] [ 29 ] [ 30 ] and suppresses illegitimate recombination events between sequences that diverge from strict homology, thus maintaining genome stability. [ 27 ] Individuals with BS have a loss-of-function mutation , which means that illegitimate recombination is no longer suppressed, leading to higher rates of mutation (~10–100 times above normal, depending on cell type). [ 31 ] [ 32 ]
Nucleotide excision repair is a DNA repair mechanism. There are three excision repair pathways: nucleotide excision repair (NER), base excision repair (BER), and DNA mismatch repair (MMR). In NER, the damaged segment of the DNA strand is removed and the undamaged strand is kept as a template for the formation of a complementary sequence by DNA polymerase. DNA ligase then joins the strands together to form dsDNA. There are two subpathways of NER, which differ only in their mechanism of damage recognition: global genomic NER (GG-NER) and transcription-coupled NER (TC-NER). [ citation needed ]
Defects in the NER pathway have been linked to progeroid syndromes. There are 28 genes in this pathway. Individuals with defects in these genes often have developmental defects and exhibit neurodegeneration . They can also develop CS, XP, and TTD, [ 33 ] often in combination with each other, as with combined xeroderma pigmentosa-Cockayne syndrome (XP-CS). [ 34 ] Variants of these diseases, such as DeSanctis–Cacchione syndrome and Cerebro-oculo-facio-skeletal (COFS) syndrome, can also be caused by defects in the NER pathway. However, unlike RecQ-associated PS, not all individuals affected by these diseases have increased risk of cancer. [ 3 ] All these disorders can be caused by mutations in a single gene, XPD, [ 35 ] [ 36 ] [ 37 ] [ 38 ] or in other genes. [ 39 ]
Cockayne syndrome (CS) is a rare autosomal recessive PS. There are three types of CS, distinguished by severity and age of onset. It occurs at a rate of about 1 in 300,000–500,000 in the United States and Europe. [ 40 ] [ 41 ] The mean age of death is about 12 years, [ 42 ] although the different forms differ significantly. Individuals with the type I (or classical) form of the disorder usually first show symptoms between one and three years of age and have lifespans of between 20 and 40 years. Type II Cockayne syndrome (CSB) is more severe: symptoms are present at birth, and individuals live to approximately 6–7 years of age. [ 3 ] Type III has the mildest symptoms and first presents later in childhood; [ 41 ] the cause of death is often severe nervous system deterioration and respiratory tract infections. [ 43 ]
Individuals with CS appear prematurely aged and exhibit severe growth retardation leading to short stature. They have a small head (more than three standard deviations below the mean), fail to gain weight, and fail to thrive . They also have extreme cutaneous photosensitivity (sensitivity to sunlight), neurodevelopmental abnormalities, and deafness, and often exhibit lipoatrophy, atrophic skin, severe tooth decay , sparse hair, calcium deposits in neurons, cataracts, sensorineural hearing loss, pigmentary retinopathy , and bone abnormalities. However, they do not have a higher risk of cancer. [ citation needed ]
Types I and II are each known to be caused by mutation of a specific gene. CSA is caused by mutations in the excision repair cross-complementation group 8 gene ( ERCC8 ), which encodes the CSA protein. These mutations are thought to cause alternative splicing of the pre-mRNA, which leads to an abnormal protein. [ 45 ] CSB is caused by mutations in the ERCC6 gene, which encodes the CSB protein. [ 46 ] CSA and CSB are involved in transcription-coupled NER (TC-NER), which repairs DNA; they ubiquitinate RNA polymerase II , halting its progress and thus allowing the TC-NER mechanism to be carried out. [ 47 ] The ubiquitinated RNAP II then dissociates and is degraded via the proteasome . [ 48 ] Mutations in ERCC8, ERCC6, or both mean DNA is no longer repaired through TC-NER, and the accumulation of mutations leads to cell death, which may contribute to the symptoms of Cockayne syndrome. [ 41 ]
Xeroderma pigmentosum (XP) is a rare autosomal recessive disorder, affecting about one per million people in the United States and in autochthonous European populations, [ 40 ] with a higher incidence in Japan, North Africa, and the Middle East. [ 50 ] There were 830 published cases from 1874 to 1982. [ 51 ] The disorder presents in infancy or early childhood. [ citation needed ]
Xeroderma pigmentosum mostly affects the eyes and skin. Individuals with XP have extreme sensitivity to light in the ultraviolet range starting from one to two years of age; [ 51 ] exposure causes sunburn , freckling of the skin, dry skin, and pigmentation changes. [ 52 ] When the eye is exposed to sunlight, it becomes irritated and bloodshot , and the cornea becomes cloudy. Around 30% of affected individuals also develop neurological abnormalities, including deafness , poor coordination, decreased intellectual abilities, difficulty swallowing and talking, and seizures; these effects tend to become progressively worse over time. All affected individuals have a 1000-fold higher risk of developing skin cancer : [ 53 ] half of the affected population develops skin cancer by age 10, usually in areas most exposed to sunlight (e.g., the face, head, or neck). [ 54 ] The risk of other cancers, such as brain tumors , lung cancer , and eye cancers , also increases. [ 55 ]
There are eight types of XP: seven complementation groups (XP-A through XP-G) plus a variant type (XP-V), all categorized by genetic cause. XP can be caused by mutations in any of these genes: DDB2 , ERCC2 , ERCC3 , ERCC4 , ERCC5 , XPA , XPC . These genes are all involved in the NER repair pathway that repairs damaged DNA. The variant form, XP-V, is caused by mutations in the POLH gene, which, unlike the others, does not code for a component of the NER pathway but produces a DNA polymerase that allows accurate translesion synthesis across DNA damage resulting from UV radiation; its mutation leads to an overall increase in UV-dependent mutation, which ultimately causes the symptoms of XP. [ citation needed ]
Trichothiodystrophy (TTD) is a rare autosomal recessive disease whose symptoms span multiple systems [ 56 ] and can vary greatly in severity. The incidence of TTD is estimated at 1.2 per million in Western Europe. [ 40 ] Milder cases cause sparse and brittle hair, due to a lack of sulfur , [ 57 ] an element that is part of the matrix proteins that give hair its strength. [ 58 ] More severe cases cause delayed development, significant intellectual disability, and recurrent infection; in the most severe cases, death occurs in infancy or early childhood. [ citation needed ]
TTD can also affect the mother of an affected child during pregnancy, when she may experience pregnancy-induced high blood pressure and develop HELLP syndrome . The baby has a high risk of being born prematurely and of low birth weight . After birth, the child's growth is retarded, resulting in short stature.
Other symptoms include scaly skin , abnormalities of the fingernails and toenails, clouding of the lens of the eye from birth ( congenital cataracts ), poor co-ordination, and ocular and skeletal abnormalities. Half of affected individuals also experience photosensitivity to UV light. [ 56 ]
TTD is caused by mutations in one of three genes, ERCC2 , ERCC3 , or GTF2H5 , the first two of which are also linked to xeroderma pigmentosum. However, patients with TTD do not show a higher risk of developing skin cancer, in contrast to patients with XP. [ 57 ] The three genes associated with TTD encode XPB, XPD, and p8/TTDA of the general transcription factor IIH (TFIIH) complex, [ 59 ] which is involved in transcription and DNA damage repair. Mutations in one of these genes cause a reduction in gene transcription, which may be involved in development (including placental development ) [ 60 ] and thus may explain the retardation in intellectual abilities in some cases; [ 57 ] these mutations also lead to a reduction in DNA repair, causing photosensitivity. [ 57 ] [ 61 ]
A form of TTD without photosensitivity also exists, although its mechanism is unclear. The MPLKIP gene has been associated with this form of TTD, although it accounts for only 20% of all known cases of the non-photosensitive form, and the function of its gene product is also unclear. Mutations in the TTDN1 gene explain another 10% of non-photosensitive TTD. [ 62 ] The function of the gene product of TTDN1 is unknown, but the sex organs of individuals with this form of TTD often produce no hormones, a condition known as hypogonadism . [ 62 ]
Hutchinson–Gilford progeria syndrome (HGPS) and restrictive dermopathy (RD) are two PS caused by a defect in lamin A/C, which is encoded by the LMNA gene. [ 63 ] [ 64 ] Lamin A is a major nuclear component that determines the shape and integrity of the nucleus , by acting as a scaffold protein that forms a filamentous meshwork underlying the inner nuclear envelope , the membrane that surrounds the nucleus. [ citation needed ]
Hutchinson–Gilford progeria syndrome is an extremely rare developmental autosomal dominant condition, characterized by premature and accelerated aging (~7 times the normal rate) [ 65 ] beginning at childhood. It affects 1 in ~4 million newborns; over 130 cases have been reported in the literature since the first described case in 1886. [ 66 ] The mean age of diagnosis is ~3 years and the mean age of death is ~13 years. The cause of death is usually myocardial infarction, caused by the severe hardening of the arteries (arteriosclerosis). [ 67 ] There is currently no treatment available. [ 68 ]
Individuals with HGPS typically appear normal at birth, but their growth is severely retarded, resulting in short stature, very low body weight, and delayed tooth eruption. Their facial and cranial proportions and facial features are abnormal, characterized by larger-than-normal eyes; a thin, beaked nose; thin lips; a small chin and jaw ( micrognathia ); protruding ears; loss of scalp hair, eyebrows, and lashes ( hair loss ); a large head ; a large fontanelle ; and a generally aged appearance. Other features include skeletal alterations (osteolysis, osteoporosis), amyotrophy (wasting of muscle), lipodystrophy and skin atrophy (loss of subcutaneous tissue and fat) with sclerodermatous focal lesions, severe atherosclerosis, and prominent scalp veins. [ 69 ] However, cognitive function, motor skills, and the risk of developing cancer are not affected significantly. [ 66 ]
HGPS is caused by sporadic mutations (not inherited from a parent) in the LMNA gene, which encodes lamin A. [ 63 ] [ 64 ] Specifically, most cases of HGPS are caused by a dominant, de novo , point mutation, p.G608G (GGC > GGT). [ 64 ] This mutation activates a cryptic splice site within exon 11 of the pre-mRNA, leading to deletion of the last 150 base pairs of that exon and, consequently, of the 50 amino acids near the C-terminus . [ 64 ] This results in a truncated lamin A precursor (also known as progerin or LaminAΔ50). [ 70 ]
After translation, a farnesyl group is added to prelamin A by protein farnesyltransferase ; this farnesylation is important in targeting the lamin to the nuclear envelope, where it maintains the envelope's integrity. Normally, prelamin A is recognized by ZMPSTE24 (FACE1, a metalloprotease ) and cleaved, removing the farnesyl group and a few other amino acids. [ citation needed ]
In the truncated lamin A precursor, this cleavage is not possible and prelamin A cannot mature. When the truncated prelamin A is localized to the nuclear envelope, it is not processed and accumulates, [ 71 ] leading to "lobulation of the nuclear envelope, thickening of the nuclear lamina, loss of peripheral heterochromatin, and clustering of nuclear pores", causing the nucleus to lose its shape and integrity. [ 72 ] The truncated prelamin A also retains the farnesyl and a methyl moiety on its C-terminal cysteine residue, ensuring its continued localization at the membrane. When this farnesylation is prevented using a farnesyltransferase inhibitor (FTI), the abnormalities in nuclear shape are significantly reduced. [ 71 ] [ 73 ]
HGPS is considered autosomal dominant, which means that only one of the two copies of the LMNA gene needs to be mutated to produce the phenotype. As the phenotype is caused by an accumulation of the truncated prelamin A, mutation in only one of the two genes is sufficient. [ 72 ] At least 16 other mutations in lamin A/C, [ 74 ] [ 75 ] as well as defects in the ZMPSTE24 gene, [ 76 ] have been shown to cause HGPS and other progeria-like symptoms, although these are less studied.
Repair of DNA double-strand breaks can occur by one of two processes, non-homologous end joining (NHEJ) or homologous recombination (HR). A-type lamins promote genetic stability by maintaining levels of proteins which have key roles in NHEJ and HR. [ 77 ] Mouse cells deficient for maturation of prelamin A show increased DNA damage and chromosome aberrations and have increased sensitivity to DNA damaging agents. [ 78 ] In HGPS, the inability to adequately repair DNA damages due to defective A-type lamin may cause aspects of premature aging (see DNA damage theory of aging ). [ citation needed ]
Restrictive dermopathy (RD), also called tight skin contracture syndrome, is a rare, lethal autosomal recessive perinatal genodermatosis . [ 79 ] Two known causes of RD are mutations in the LMNA gene, which lead to the production of truncated prelamin A precursor, and insertions in the ZMPSTE24 , which lead to a premature stop codon. [ 79 ]
Individuals with RD exhibit growth retardation starting in the uterus , tight and rigid skin with erosions, prominent superficial vasculature and epidermal hyperkeratosis , abnormal facial features (small mouth, small pinched nose, and micrognathia), sparse or absent eyelashes and eyebrows, mineralization defects of the skull, thin dysplastic clavicles, pulmonary hypoplasia, and multiple joint contractures. Most affected individuals die in utero or are stillborn, and live-born infants usually die within a week. [ citation needed ]
Patients with Marfan–progeroid–lipodystrophy syndrome typically exhibit congenital lipodystrophy and a neonatal progeroid appearance. [ 80 ] [ 81 ] Although the condition is sometimes identified as neonatal progeroid syndrome , that term is a misnomer, since affected individuals do not exhibit accelerated aging. [ 82 ] The condition is caused by mutations near the 3'-terminus of the FBN1 gene. [ 80 ] [ 81 ] [ 82 ] [ 83 ] [ 84 ] [ 85 ] [ excessive citations ]
Hutchinson–Gilford progeria syndrome, Werner syndrome, and Cockayne syndrome are the three genetic disorders in which patients show features of premature aging. Premature aging also develops in some animal models carrying genetic alterations. [ 86 ] [ 87 ] Although the patients with these syndromes and the animal models with premature-aging symptoms have different genetic backgrounds, they all have abnormal structures of tissues and organs as a result of defective development. The misrepair-accumulation aging theory [ 88 ] [ 89 ] suggests that this abnormality of tissue structure is the common point between premature aging and normal aging: [ 90 ] premature aging is a result of mis-construction during development as a consequence of gene mutations, whereas normal aging is a result of the accumulation of misrepairs needed for the survival of an organism. Thus the process of development and that of aging are coupled by mis-construction and mis-re-construction (misrepair) of the structure of an organism. [ citation needed ]
Wiedemann–Rautenstrauch (WR) syndrome , also known as neonatal progeroid syndrome , [ 91 ] is an autosomal recessive progeroid syndrome. More than 30 cases have been reported. [ 92 ] Most affected individuals die by seven months of age, but some do survive into their teens.
WR is associated with abnormalities in bone maturation and in lipid and hormone metabolism. [ 93 ] Affected individuals exhibit intrauterine and postnatal growth retardation, leading to short stature and an aged appearance from birth. They have physical abnormalities including a large head (macrocephaly), sparse hair, prominent scalp veins, inward-folded eyelids , widened anterior fontanelles, hollow cheeks (malar hypoplasia), general loss of fat tissue under the skin, delayed tooth eruption, an abnormal hair pattern , a beaked nose, mild to severe intellectual disability, and dysmorphism . [ 94 ]
The cause of WR is unknown, although defects in DNA repair have been implicated. [ 92 ]
The condition is classified as an autosomal recessive defect, but its pathology has yet to be well researched. [ citation needed ]
Some segmental progeroid syndromes, such as Werner syndrome (WS), Bloom syndrome (BS), Rothmund-Thomson syndromes (RTS) and combined xeroderma pigmentosa-Cockayne syndrome (XP-CS), are associated with an increased risk of developing cancer in the affected individual; two exceptions are Hutchinson–Gilford progeria (HGPS) and Cockayne syndrome. [ 95 ]
In animal models of progeroid syndromes, early observations have detected abnormalities in overall mitochondrial function, [ 96 ] [ 97 ] in signal transduction between membrane receptors , [ 98 ] and in nuclear regulatory proteins .
Progeroid features also occur in disorders of lipid and carbohydrate metabolism, in a triplet-repeat disorder ( myotonic dystrophy ), and in an idiopathic disorder.
Hayley Okines was an English girl with classic progeria famed for her efforts in spreading awareness of the condition. She was featured in the media. [ 99 ]
Lizzie Velásquez is an American motivational speaker who has a syndrome that resembles progeria, although its exact nature is unclear; it is now thought to be a form of neonatal progeroid syndrome. [ 100 ] Velásquez is an anti-bullying advocate. [ 101 ] [ 102 ]
Jesper Sørensen is widely recognized in Denmark as the only child in Denmark and Scandinavia with progeria (as of 2008). [ 103 ] His fame came about after a 2008 documentary about him on TV 2 . [ 104 ]
F. Scott Fitzgerald 's 1922 short story The Curious Case of Benjamin Button is about a boy who is born with the appearance of a 70-year-old and ages backwards; the story is thought to have been inspired by progeria. [ 105 ] The description of the fictitious Smallweed family in Charles Dickens 's Bleak House suggests the characters had progeria. [ 106 ] Christopher Snow, the main character in Dean Koontz 's Moonlight Bay Trilogy , has xeroderma pigmentosum, as does Luke from the 2002 novel Going Out by Scarlett Thomas . In the visual novel Chaos;Head , the character Shogun eventually dies of a progeroid syndrome; in its sequel Chaos;Child , more characters contract this same fictional progeroid syndrome, by then called Chaos Child Syndrome. In Kimberly Akimbo , a 2000 play by David Lindsay-Abaire , and its Tony Award for Best Musical -winning adaptation of the same name , the main character, Kimberly Levaco, has an unnamed progeria-like condition. [ citation needed ]
Paa , a 2009 Indian comedy-drama film, features a protagonist, Auro ( Amitabh Bachchan ), who has progeria. Jack is a 1996 American comedy-drama film in which the titular character (portrayed by Robin Williams ) has Werner syndrome. Taiyou no Uta , a 2006 Japanese film, features Kaoru Amane (portrayed by Yui ), a 16-year-old girl who has xeroderma pigmentosum. [ citation needed ] | https://en.wikipedia.org/wiki/Progeroid_syndromes
Progesterone ( / p r oʊ ˈ dʒ ɛ s t ər oʊ n / ⓘ ; P4 ) is an endogenous steroid and progestogen sex hormone involved in the menstrual cycle , pregnancy , and embryogenesis of humans and other species. [ 1 ] [ 13 ] It belongs to a group of steroid hormones called the progestogens [ 13 ] and is the major progestogen in the body. Progesterone has a variety of important functions in the body. It is also a crucial metabolic intermediate in the production of other endogenous steroids , including the sex hormones and the corticosteroids , and plays an important role in brain function as a neurosteroid . [ 14 ]
In addition to its role as a natural hormone, progesterone is also used as a medication, such as in combination with estrogen for contraception , to reduce the risk of uterine or cervical cancer , in hormone replacement therapy , and in feminizing hormone therapy . [ 15 ] It was first prescribed in 1934. [ 16 ]
Progesterone is the most important progestogen in the body. As a potent agonist of the nuclear progesterone receptor (nPR) (with an affinity of K D = 1 nM), its effects on gene transcription play a major role in the regulation of female reproduction. [ 13 ] [ 17 ] In addition, progesterone is an agonist of the more recently discovered membrane progesterone receptors (mPRs), [ 18 ] whose expression has regulatory effects on reproductive function ( oocyte maturation , labor, and sperm motility ) and on cancer, although additional research is required to further define these roles. [ 19 ] It also functions as a ligand of PGRMC1 (progesterone receptor membrane component 1), which affects tumor progression , metabolic regulation, and the viability of nerve cells . [ 20 ] [ 21 ] [ 22 ] Moreover, progesterone is known to be an antagonist of the sigma σ 1 receptor , [ 23 ] [ 24 ] a negative allosteric modulator of nicotinic acetylcholine receptors , [ 14 ] and a potent antagonist of the mineralocorticoid receptor (MR). [ 25 ] Progesterone prevents MR activation by binding to this receptor with an affinity exceeding even those of aldosterone and of glucocorticoids such as cortisol and corticosterone , [ 25 ] and it produces antimineralocorticoid effects, such as natriuresis , at physiological concentrations. [ 26 ] In addition, progesterone binds to and behaves as a partial agonist of the glucocorticoid receptor (GR), albeit with very low potency ( EC 50 more than 100-fold lower relative to cortisol ). [ 27 ] [ 28 ]
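To illustrate what an affinity of K D = 1 nM means in practice, the following sketch applies the Hill–Langmuir equation for single-site binding, a textbook pharmacology relation rather than anything stated in this article: at a progesterone concentration equal to K D , half of the nuclear receptors are occupied.

```python
def fractional_occupancy(ligand_nm: float, kd_nm: float = 1.0) -> float:
    """Hill-Langmuir equation for a single binding site:
    occupancy = [L] / ([L] + Kd), with both in the same units (nM here)."""
    return ligand_nm / (ligand_nm + kd_nm)

print(fractional_occupancy(1.0))  # 0.5 -- at [L] = Kd, half the receptors are bound
print(fractional_occupancy(9.0))  # 0.9 -- ninefold Kd gives 90% occupancy
```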
Progesterone, through its neurosteroid active metabolites such as 5α-dihydroprogesterone and allopregnanolone , acts indirectly as a positive allosteric modulator of the GABA A receptor . [ 29 ]
Progesterone and some of its metabolites, such as 5β-dihydroprogesterone , are agonists of the pregnane X receptor (PXR), [ 30 ] albeit weakly so ( EC 50 >10 μM). [ 31 ] Accordingly, progesterone induces several hepatic cytochrome P450 enzymes , [ 32 ] such as CYP3A4 , [ 33 ] [ 34 ] especially during pregnancy when concentrations are much higher than usual. [ 35 ] Perimenopausal women have been found to have greater CYP3A4 activity relative to men and postmenopausal women, and it has been inferred that this may be due to the higher progesterone levels present in perimenopausal women. [ 33 ]
Progesterone modulates the activity of CatSper (cation channels of sperm) voltage-gated Ca 2+ channels. Since eggs release progesterone, sperm may use progesterone as a homing signal to swim toward eggs ( chemotaxis ). As a result, it has been suggested that substances that block the progesterone binding site on CatSper channels could potentially be used in male contraception . [ 36 ] [ 37 ]
Progesterone has a number of physiological effects that are amplified in the presence of estrogens . Estrogens through estrogen receptors (ERs) induce or upregulate the expression of the PR. [ 39 ] One example of this is in breast tissue, where estrogens allow progesterone to mediate lobuloalveolar development. [ 40 ] [ 41 ] [ 42 ]
Elevated levels of progesterone potently reduce the sodium-retaining activity of aldosterone, resulting in natriuresis and a reduction in extracellular fluid volume. Progesterone withdrawal, on the other hand, is associated with a temporary increase in sodium retention (reduced natriuresis, with an increase in extracellular fluid volume) due to the compensatory increase in aldosterone production, which combats the blockade of the mineralocorticoid receptor by the previously elevated level of progesterone. [ 43 ]
Progesterone plays a role in early human sexual differentiation. [ 44 ] Placental progesterone is the feedstock for the 5α-dihydrotestosterone (DHT) produced via the backdoor pathway , which operates in multiple non-gonadal tissues of the fetus ; [ 45 ] deficiencies in this pathway lead to undervirilization of the male fetus, resulting in incomplete development of the male genitalia. [ 46 ] [ 47 ] DHT is a potent androgen that is responsible for the development of male genitalia, including the penis and scrotum . [ citation needed ]
During early fetal development, the undifferentiated gonads can develop into either testes or ovaries. The presence of the Y chromosome leads to the development of testes. The testes then produce testosterone, which is converted to DHT via the enzyme 5α-reductase . DHT is a potent androgen that is responsible for the masculinization of the external genitalia and the development of the prostate gland. Progesterone, produced by the placenta during pregnancy, plays a role in fetal sexual differentiation by serving as a precursor molecule for the synthesis of DHT via the backdoor pathway. In the absence of adequate levels of steroidogenic enzymes during fetal development, the backdoor pathway for DHT synthesis can become deficient, leading to undermasculinization of the male fetus. This can result in the development of ambiguous genitalia or even female genitalia in some cases. Therefore, both DHT and progesterone play crucial roles in early fetal sexual differentiation, with progesterone acting as a precursor molecule for DHT synthesis and DHT promoting the development of male genitalia. [ 44 ]
Progesterone has key effects via non-genomic signalling on human sperm as they migrate through the female reproductive tract before fertilization occurs, though the receptor(s) involved remain as yet unidentified. [ 48 ] Detailed characterisation of the responses of sperm to progesterone has identified intracellular calcium transients and sustained changes, [ 49 ] as well as slow calcium oscillations, [ 50 ] which are now thought to possibly regulate motility. [ 51 ] The progesterone involved is produced by the ovaries. [ 52 ] Progesterone has also been shown to have effects on octopus spermatozoa. [ 53 ]
Progesterone is sometimes called the " hormone of pregnancy ", [ 54 ] and it has many roles relating to the development of the fetus:
The fetus metabolizes placental progesterone in the production of adrenal steroids. [ 45 ]
Progesterone plays an important role in breast development . In conjunction with prolactin , it mediates lobuloalveolar maturation of the mammary glands during pregnancy to allow for milk production and thus lactation and breastfeeding of offspring following parturition (childbirth). [ 58 ] Estrogen induces expression of the PR in breast tissue and hence progesterone is dependent on estrogen to mediate lobuloalveolar development. [ 40 ] [ 41 ] [ 42 ] It has been found that RANKL ( receptor activator of nuclear factor kappa-B ligand ) is a critical downstream mediator of progesterone-induced lobuloalveolar maturation. [ 59 ] RANKL knockout mice show an almost identical mammary phenotype to PR knockout mice, including normal mammary ductal development but complete failure of the development of lobuloalveolar structures. [ 59 ]
Though to a far lesser extent than estrogen, which is the major mediator of mammary ductal development (via the ERα ), [ 60 ] [ 61 ] progesterone may be involved in ductal development of the mammary glands to some extent as well. [ 62 ] PR knockout mice or mice treated with the PR antagonist mifepristone show delayed although otherwise normal mammary ductal development at puberty. [ 62 ] In addition, mice modified to have overexpression of PRA display ductal hyperplasia, [ 59 ] and progesterone induces ductal growth in the mouse mammary gland. [ 62 ] Progesterone mediates ductal development mainly via induction of the expression of amphiregulin , the same growth factor that estrogen primarily induces the expression of to mediate ductal development. [ 62 ] These animal findings suggest that, while not essential for full mammary ductal development, progesterone seems to play a potentiating or accelerating role in estrogen-mediated mammary ductal development. [ 62 ]
Progesterone also appears to be involved in the pathophysiology of breast cancer , though its role, and whether it is a promoter or inhibitor of breast cancer risk, has not been fully elucidated. [ 63 ] [ 64 ] Most progestins , or synthetic progestogens, like medroxyprogesterone acetate , have been found to increase the risk of breast cancer in postmenopausal people in combination with estrogen as a component of menopausal hormone therapy . [ 65 ] [ 64 ] The combination of natural oral progesterone or the atypical progestin dydrogesterone with estrogen has been associated with less risk of breast cancer than progestins plus estrogen. [ 66 ] [ 67 ] [ 68 ] However, this may simply be an artifact of the low progesterone levels produced with oral progesterone. [ 63 ] [ 69 ] More research is needed on the role of progesterone in breast cancer. [ 64 ]
The estrogen receptor , as well as the progesterone receptor , have been detected in the skin , including in keratinocytes and fibroblasts . [ 70 ] [ 71 ] At menopause and thereafter, decreased levels of female sex hormones result in atrophy , thinning, and increased wrinkling of the skin and a reduction in skin elasticity , firmness, and strength. [ 70 ] [ 71 ] These skin changes constitute an acceleration in skin aging and are the result of decreased collagen content, irregularities in the morphology of epidermal skin cells , decreased ground substance between skin fibers , and reduced capillaries and blood flow . [ 70 ] [ 71 ] The skin also becomes more dry during menopause, which is due to reduced skin hydration and surface lipids (sebum production). [ 70 ] Along with chronological aging and photoaging, estrogen deficiency in menopause is one of the three main factors that predominantly influences skin aging. [ 70 ]
Hormone replacement therapy, consisting of systemic treatment with estrogen alone or in combination with a progestogen, has well-documented and considerable beneficial effects on the skin of postmenopausal people. [ 70 ] [ 71 ] These benefits include increased skin collagen content, skin thickness and elasticity, and skin hydration and surface lipids. [ 70 ] [ 71 ] Topical estrogen has been found to have similar beneficial effects on the skin. [ 70 ] In addition, a study has found that topical 2% progesterone cream significantly increases skin elasticity and firmness and observably decreases wrinkles in peri- and postmenopausal people. [ 71 ] Skin hydration and surface lipids, on the other hand, did not significantly change with topical progesterone. [ 71 ]
These findings suggest that progesterone, like estrogen, also has beneficial effects on the skin, and may be independently protective against skin aging. [ 71 ]
Progesterone and its neurosteroid active metabolite allopregnanolone appear to be importantly involved in libido in females. [ 72 ]
Dr. Diana Fleischman , of the University of Portsmouth , and colleagues looked for a relationship between progesterone and sexual attitudes in 92 women. Their research, published in the Archives of Sexual Behavior , found that women who had higher levels of progesterone scored higher on a questionnaire measuring homoerotic motivation. They also found that men who had high levels of progesterone were more likely to have higher homoerotic motivation scores after affiliative priming compared to men with low levels of progesterone. [ 73 ] [ 74 ] [ 75 ] [ 76 ]
Progesterone, like pregnenolone and dehydroepiandrosterone (DHEA), belongs to an important group of endogenous steroids called neurosteroids . It can be metabolized within all parts of the central nervous system . [ 77 ]
Neurosteroids are neuromodulators , and are neuroprotective , neurogenic , and regulate neurotransmission and myelination . [ 78 ] The effects of progesterone as a neurosteroid are mediated predominantly through its interactions with non-nuclear PRs, namely the mPRs and PGRMC1 , as well as certain other receptors, such as the σ 1 and nACh receptors. [ 79 ]
Previous studies have shown that progesterone supports the normal development of neurons in the brain, and that the hormone has a protective effect on damaged brain tissue. It has been observed in animal models that females have reduced susceptibility to traumatic brain injury and this protective effect has been hypothesized to be caused by increased circulating levels of estrogen and progesterone in females. [ 80 ]
The mechanism of progesterone's protective effects may be the reduction of inflammation that follows brain trauma and hemorrhage. [ 81 ] [ 82 ]
Damage incurred by traumatic brain injury is believed to be caused in part by mass depolarization leading to excitotoxicity . One way in which progesterone helps to alleviate some of this excitotoxicity is by blocking the voltage-dependent calcium channels that trigger neurotransmitter release. [ 83 ] It does so by manipulating the signaling pathways of transcription factors involved in this release. Another method for reducing the excitotoxicity is by up-regulating the GABA A receptor, a widespread inhibitory neurotransmitter receptor. [ 84 ]
Progesterone has also been shown to prevent apoptosis in neurons, a common consequence of brain injury. It does so by inhibiting enzymes involved in the apoptosis pathway specifically concerning the mitochondria, such as activated caspase 3 and cytochrome c . [ 85 ]
Not only does progesterone help prevent further damage, it has also been shown to aid in neuroregeneration . [ 86 ] One of the serious effects of traumatic brain injury is edema. Animal studies show that progesterone treatment leads to a decrease in edema levels by increasing the concentration of macrophages and microglia sent to the injured tissue. [ 83 ] [ 87 ] This was observed in the form of reduced leakage from the blood–brain barrier in secondary recovery in progesterone-treated rats. In addition, progesterone was observed to have antioxidant properties, reducing the concentration of oxygen free radicals faster than without treatment. [ 84 ] There is also evidence that progesterone can help remyelinate axons damaged by trauma, restoring some lost neural signal conduction. [ 84 ] Another way progesterone aids in regeneration is by increasing the circulation of endothelial progenitor cells in the brain. This helps new vasculature to grow around scar tissue, which helps repair the area of insult. [ 88 ]
Progesterone enhances the function of serotonin receptors in the brain, so an excess or deficit of progesterone has the potential to result in significant neurochemical issues. This provides an explanation for why some people resort to substances that enhance serotonin activity such as nicotine , alcohol , and cannabis when their progesterone levels fall below optimal levels. [ 89 ]
In a 2012 University of Amsterdam study of 120 women, women's luteal phase (higher levels of progesterone, and increasing levels of estrogen) was correlated with a lower level of competitive behavior in gambling and math contest scenarios, while their premenstrual phase (sharply-decreasing levels of progesterone, and decreasing levels of estrogen) was correlated with a higher level of competitive behavior. [ 92 ]
In mammals, progesterone, like all other steroid hormones , is synthesized from pregnenolone , which itself is derived from cholesterol . [ citation needed ]
Cholesterol undergoes double oxidation to produce 22 R -hydroxycholesterol and then 20α,22 R -dihydroxycholesterol . This vicinal diol is then further oxidized with loss of the side chain starting at position C22 to produce pregnenolone. This reaction is catalyzed by cytochrome P450scc . [ citation needed ]
The conversion of pregnenolone to progesterone takes place in two steps. First, the 3β- hydroxyl group is oxidized to a keto group and second, the double bond is moved from C5 to C4 through a keto/ enol tautomerization reaction. [ 108 ] This reaction is catalyzed by 3β-hydroxysteroid dehydrogenase/δ 5-4 -isomerase . [ citation needed ]
Progesterone in turn is the precursor of the mineralocorticoid aldosterone , and after conversion to 17α-hydroxyprogesterone , of cortisol and androstenedione . Androstenedione can be converted to testosterone , estrone , and estradiol , highlighting the critical role of progesterone in testosterone synthesis. [ citation needed ]
Pregnenolone and progesterone can also be synthesized by yeast . [ 109 ]
Approximately 30 mg of progesterone is secreted from the ovaries per day in reproductive-age women, while the adrenal glands produce about 1 mg of progesterone per day. [ 110 ]
Progesterone binds extensively to plasma proteins , including albumin (50–54%) and transcortin (43–48%). [ 111 ] Its affinity for albumin is similar to its affinity for the PR. [ 17 ]
The metabolism of progesterone is rapid and extensive and occurs mainly in the liver , [ 112 ] [ 113 ] [ 114 ] though enzymes that metabolize progesterone are also expressed widely in the brain , skin , and various other extrahepatic tissues . [ 77 ] [ 115 ] Progesterone has an elimination half-life of only approximately 5 minutes in circulation . [ 112 ] The metabolism of progesterone is complex, and it may form as many as 35 different unconjugated metabolites when it is ingested orally. [ 114 ] [ 116 ] Progesterone is highly susceptible to enzymatic reduction via reductases and hydroxysteroid dehydrogenases due to its double bond (between the C4 and C5 positions) and its two ketones (at the C3 and C20 positions). [ 114 ]
The major metabolic pathway of progesterone is reduction by 5α-reductase [ 77 ] and 5β-reductase into the dihydrogenated 5α-dihydroprogesterone and 5β-dihydroprogesterone , respectively. [ 113 ] [ 114 ] [ 117 ] [ 118 ] This is followed by the further reduction of these metabolites via 3α-hydroxysteroid dehydrogenase and 3β-hydroxysteroid dehydrogenase into the tetrahydrogenated allopregnanolone , pregnanolone , isopregnanolone , and epipregnanolone . [ 119 ] [ 113 ] [ 114 ] [ 117 ] Subsequently, 20α-hydroxysteroid dehydrogenase and 20β-hydroxysteroid dehydrogenase reduce these metabolites to form the corresponding hexahydrogenated pregnanediols (eight different isomers in total), [ 113 ] [ 118 ] which are then conjugated via glucuronidation and/or sulfation , released from the liver into circulation, and excreted by the kidneys into the urine . [ 112 ] [ 114 ] The major metabolite of progesterone in the urine is the 3α,5β,20α isomer of pregnanediol glucuronide , which has been found to constitute 15 to 30% of an injection of progesterone. [ 17 ] [ 120 ] Other metabolites of progesterone formed by the enzymes in this pathway include 3α-dihydroprogesterone , 3β-dihydroprogesterone , 20α-dihydroprogesterone , and 20β-dihydroprogesterone , as well as various combination products of the enzymes aside from those already mentioned. [ 17 ] [ 114 ] [ 120 ] [ 121 ] Progesterone can also first be hydroxylated (see below) and then reduced. [ 114 ] Endogenous progesterone is metabolized approximately 50% into 5α-dihydroprogesterone in the corpus luteum , 35% into 3β-dihydroprogesterone in the liver, and 10% into 20α-dihydroprogesterone. [ 122 ]
Relatively small portions of progesterone are hydroxylated via 17α-hydroxylase (CYP17A1) and 21-hydroxylase (CYP21A2) into 17α-hydroxyprogesterone and 11-deoxycorticosterone (21-hydroxyprogesterone), respectively, [ 116 ] and pregnanetriols are formed secondarily to 17α-hydroxylation. [ 123 ] [ 124 ] Even smaller amounts of progesterone may also be hydroxylated via 11β-hydroxylase (CYP11B1) and to a lesser extent via aldosterone synthase (CYP11B2) into 11β-hydroxyprogesterone . [ 125 ] [ 126 ] [ 44 ] In addition, progesterone can be hydroxylated in the liver by other cytochrome P450 enzymes which are not steroid-specific. [ 127 ] 6β-Hydroxylation, which is catalyzed mainly by CYP3A4 , is the major transformation, and is responsible for approximately 70% of cytochrome P450-mediated progesterone metabolism. [ 127 ] Other routes include 6α-, 16α-, and 16β-hydroxylation. [ 114 ] However, treatment of women with ketoconazole , a strong CYP3A4 inhibitor, had minimal effects on progesterone levels, producing only a slight and non-significant increase, and this suggests that cytochrome P450 enzymes play only a small role in progesterone metabolism. [ 128 ]
Progesterone levels are relatively low during the preovulatory phase of the menstrual cycle , rise after ovulation , and are elevated during the luteal phase . Progesterone levels tend to be less than 2 ng/mL prior to ovulation and greater than 5 ng/mL after ovulation. If pregnancy occurs, human chorionic gonadotropin is released, maintaining the corpus luteum and allowing it to maintain levels of progesterone. Between 7 and 9 weeks, the placenta begins to produce progesterone in place of the corpus luteum in a process called the luteal-placental shift. [ 131 ]
After the luteal-placental shift, progesterone levels start to rise further and may reach 100 to 200 ng/mL at term. Whether a decrease in progesterone levels is critical for the initiation of labor has been debated and may be species-specific. After delivery of the placenta and during lactation, progesterone levels are very low. [ citation needed ]
Progesterone levels are low in children and postmenopausal people. [ 132 ] Adult males have levels similar to those in women during the follicular phase of the menstrual cycle.
Blood test results should always be interpreted using the reference ranges provided by the laboratory that performed the test. Example reference ranges are listed below.
Progesterone is produced in high amounts in the ovaries (by the corpus luteum ) from the onset of puberty to menopause , and is also produced in smaller amounts by the adrenal glands after the onset of adrenarche in both males and females. To a lesser extent, progesterone is produced in nervous tissue , especially in the brain, and in adipose (fat) tissue , as well.
During human pregnancy , progesterone is produced in increasingly high amounts by the ovaries and placenta . At first, the source is the corpus luteum that has been "rescued" by the presence of human chorionic gonadotropin (hCG) from the conceptus. However, after the 8th week, production of progesterone shifts to the placenta. The placenta utilizes maternal cholesterol as the initial substrate, and most of the produced progesterone enters the maternal circulation, but some is picked up by the fetal circulation and used as substrate for fetal corticosteroids. At term the placenta produces about 250 mg progesterone per day.
An additional animal source of progesterone is milk products. After consumption of milk products the level of bioavailable progesterone goes up. [ 144 ]
In at least one plant, Juglans regia , progesterone has been detected. [ 145 ] In addition, progesterone-like steroids are found in Dioscorea mexicana , a plant in the yam family native to Mexico . [ 146 ] It contains diosgenin, a steroid that is extracted from the plant and converted into progesterone. [ 147 ] Diosgenin and progesterone are also found in other Dioscorea species, as well as in other plants that are not closely related, such as fenugreek .
Another plant that contains substances readily convertible to progesterone is Dioscorea pseudojaponica native to Taiwan . Research has shown that the Taiwanese yam contains saponins — steroids that can be converted to diosgenin and thence to progesterone. [ 148 ]
Many other Dioscorea species of the yam family contain steroidal substances from which progesterone can be produced. Among the more notable of these are Dioscorea villosa and Dioscorea polygonoides . One study showed that Dioscorea villosa contains 3.5% diosgenin. [ 149 ] Dioscorea polygonoides has been found to contain 2.64% diosgenin as shown by gas chromatography-mass spectrometry . [ 150 ] Many Dioscorea species of the yam family grow in countries with tropical and subtropical climates. [ 151 ]
Progesterone is used as a medication . It is used in combination with estrogens mainly in hormone therapy for menopausal symptoms and low sex hormone levels . [ 116 ] [ 152 ] It may also be used alone to treat menopausal symptoms. Studies have shown that transdermal progesterone (skin patch) and oral micronized progesterone are effective treatments for certain symptoms of menopause such as hot flashes and night sweats, which are otherwise referred to as vasomotor symptoms or VMS. [ 153 ]
It is also used to support pregnancy and fertility and to treat gynecological disorders . [ 154 ] [ 155 ] [ 156 ] [ 157 ] Progesterone has been shown to prevent miscarriage in those with 1) vaginal bleeding early in their current pregnancy and 2) a previous history of miscarriage. [ 158 ] Progesterone can be taken by mouth , through the vagina , and by injection into muscle or fat , among other routes . [ 116 ]
Progesterone is a naturally occurring pregnane steroid and is also known as pregn-4-ene-3,20-dione. [ 159 ] [ 160 ] It has a double bond ( 4-ene ) between the C4 and C5 positions and two ketone groups (3,20- dione ), one at the C3 position and the other at the C20 position. [ 159 ] [ 160 ]
Progesterone is commercially produced by semisynthesis. Two main routes are used: one from yam diosgenin , first pioneered by Marker in 1940, and one based on soy phytosterols , scaled up in the 1970s. Additional (not necessarily economical) semisyntheses of progesterone have also been reported starting from a variety of steroids. For example, cortisone can be simultaneously deoxygenated at the C-17 and C-21 positions by treatment with iodotrimethylsilane in chloroform to produce 11-keto-progesterone (ketogestin), which in turn can be reduced at position 11 to yield progesterone. [ 161 ]
An economical semisynthesis of progesterone from the plant steroid diosgenin isolated from yams was developed by Russell Marker in 1940 for the Parke-Davis pharmaceutical company. [ 162 ] This synthesis is known as the Marker degradation .
The 16-DPA intermediate is important to the synthesis of many other medically important steroids. A very similar approach can produce 16-DPA from solanine . [ 163 ]
Progesterone can also be made from the stigmasterol found in soybean oil; cf. Percy Julian .
A total synthesis of progesterone was reported in 1971 by W.S. Johnson . [ 169 ] The synthesis begins with reacting the phosphonium salt 7 with phenyl lithium to produce the phosphonium ylide 8 . The ylide 8 is reacted with an aldehyde to produce the alkene 9 . The ketal protecting groups of 9 are hydrolyzed to produce the diketone 10 , which in turn is cyclized to form the cyclopentenone 11 . The ketone of 11 is reacted with methyl lithium to yield the tertiary alcohol 12 , which in turn is treated with acid to produce the tertiary cation 13 . The key step of the synthesis is the π-cation cyclization of 13 in which the B-, C-, and D-rings of the steroid are simultaneously formed to produce 14 . This step resembles the cationic cyclization reaction used in the biosynthesis of steroids and hence is referred to as biomimetic . In the next step the enol orthoester is hydrolyzed to produce the ketone 15 . The cyclopentene A-ring is then opened by oxidizing with ozone to produce 16 . Finally, the diketone 17 undergoes an intramolecular aldol condensation by treating with aqueous potassium hydroxide to produce progesterone. [ 169 ]
George W. Corner and Willard M. Allen discovered the hormonal action of progesterone in 1929. [ 17 ] [ 170 ] [ 171 ] [ 172 ] By 1931–1932, nearly pure crystalline material of high progestational activity had been isolated from the corpus luteum of animals, and by 1934, pure crystalline progesterone had been refined and obtained and the chemical structure of progesterone was determined. [ 17 ] [ 171 ] This was achieved by Adolf Butenandt at the Chemisches Institut of Technical University in Danzig , who extracted this new compound from several thousand liters of urine . [ 173 ]
Chemical synthesis of progesterone from stigmasterol and pregnanediol was accomplished later that year. [ 171 ] [ 174 ] Up to this point, progesterone, known generically as the corpus luteum hormone, had been referred to by several groups under different names, including corporin, lutein, luteosterone, and progestin. [ 17 ] [ 175 ] In 1935, at the time of the Second International Conference on the Standardization of Sex Hormones in London, England , a compromise was made between the groups and the name progesterone (progestational steroidal ketone) was created. [ 17 ] [ 176 ]
The use of progesterone tests in dog breeding to pinpoint ovulation is becoming more widespread. There are several tests available, but the most reliable is a blood test, with blood drawn by a veterinarian and sent to a lab for processing. Results can usually be obtained within 24 to 72 hours. The rationale for using progesterone tests is that the rise in progesterone begins in close proximity to the preovulatory surge in gonadotrophins and continues through ovulation and estrus. When progesterone reaches certain levels, it can signal the stage of estrus the female is in. Prediction of the birth date of the pending litter can be very accurate if the ovulation date is known. Puppies are delivered within a day or two of 9 weeks' gestation in most cases. It is not possible to determine pregnancy using progesterone tests once a breeding has taken place, however. This is because, in dogs, progesterone levels remain elevated throughout the estrus period. [ 177 ]
Pricing for progesterone can vary depending on location, insurance coverage, discount coupons, quantity, shortages, manufacturers, brand or generic versions, different pharmacies, and so on. Currently, 30 capsules of 100 mg of the generic version, progesterone, from CVS Pharmacy cost around $40 without any discounts or insurance applied. The brand version, Prometrium, is around $450 for 30 capsules without any discounts or insurance applied. [ 178 ] In comparison, Walgreens offers 30 capsules of 100 mg of the generic version for $51 without insurance or coupons applied. The brand name costs around $431 for 30 capsules of 100 mg. [ 179 ] | https://en.wikipedia.org/wiki/Progesterone
A program is a set of instructions used to control the behavior of a machine . Examples of such programs include:
The execution of a program is a series of actions following the instructions it contains. Each instruction produces effects that alter the state of the machine according to its predefined meaning.
While some machines are called programmable , for example a programmable thermostat or a musical synthesizer , they are in fact just devices which allow their users to select among a fixed set of options, rather than being controlled by programs written in a language (be it textual, visual or otherwise).
| https://en.wikipedia.org/wiki/Program_(machine)
Program equilibrium is a game-theoretic solution concept for a scenario in which players submit computer programs to play the game on their behalf and the programs can read each other's source code. The term was introduced by Moshe Tennenholtz in 2004. [ 1 ] The same setting had previously been studied by R. Preston McAfee , [ 2 ] J. V. Howard [ 3 ] and Ariel Rubinstein . [ 4 ]
The program equilibrium literature considers the following setting. Consider a normal-form game as a base game. For simplicity, consider a two-player game in which S 1 {\displaystyle S_{1}} and S 2 {\displaystyle S_{2}} are the sets of available strategies and u 1 {\displaystyle u_{1}} and u 2 {\displaystyle u_{2}} are the players' utility functions . Then we construct a new (normal-form) program game in which each player i {\displaystyle i} chooses a computer program p i {\displaystyle p_{i}} . The payoff (utility) for the players is then determined as follows. Each player's program p i {\displaystyle p_{i}} is run with the other program p − i {\displaystyle p_{-i}} as input and outputs a strategy s i {\displaystyle s_{i}} for Player i {\displaystyle i} . For convenience one also often imagines that programs can access their own source code. [ nb 1 ] Finally, the utilities for the players are given by u i ( s 1 , s 2 ) {\displaystyle u_{i}(s_{1},s_{2})} for i = 1 , 2 {\displaystyle i=1,2} , i.e., by applying the utility functions for the base game to the chosen strategies.
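The payoff evaluation just described can be made concrete with a small sketch. The conventions below are assumptions made for illustration only: each submitted "program" is Python source defining a function bot(opponent_source, my_source), and the Prisoner's Dilemma payoff numbers are placeholders rather than anything fixed by the definition above.

```python
# Minimal sketch of a two-player program game. Each submitted "program" is
# assumed to be Python source defining bot(opponent_source, my_source) -> str.

def run_program(program: str, opponent_source: str, my_source: str) -> str:
    """Execute a submitted program and return the strategy it chooses."""
    namespace: dict = {}
    exec(program, namespace)                     # load the submitted program
    return namespace["bot"](opponent_source, my_source)

def play(program_1: str, program_2: str, payoff) -> tuple:
    """Run each program with the other's source as input and score the result."""
    s1 = run_program(program_1, program_2, program_1)   # Player 1 reads p2's source
    s2 = run_program(program_2, program_1, program_2)   # Player 2 reads p1's source
    return payoff(s1, s2)

def pd_payoff(s1: str, s2: str) -> tuple:
    """Illustrative Prisoner's Dilemma payoffs (numbers assumed for the example)."""
    table = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
             ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    return table[(s1, s2)]

# Example usage with a trivial always-defect program.
ALWAYS_DEFECT = "def bot(opponent_source, my_source): return 'D'"
print(play(ALWAYS_DEFECT, ALWAYS_DEFECT, pd_payoff))   # (1, 1)
```

Note that nothing in this sketch prevents a submitted program from failing to halt, which is the issue discussed next.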
One has to further deal with the possibility that one of the programs p i {\displaystyle p_{i}} doesn't halt. One way to deal with this is to restrict both players' sets of available programs to prevent non-halting programs. [ 1 ] [ 5 ]
A program equilibrium is a pair of programs ( p 1 , p 2 ) {\displaystyle (p_{1},p_{2})} that constitute a Nash equilibrium of the program game. In other words, ( p 1 , p 2 ) {\displaystyle (p_{1},p_{2})} is a program equilibrium if neither player i {\displaystyle i} can deviate to an alternative program p i ′ {\displaystyle p_{i}'} such that their utility is higher in ( p i ′ , p − i ) {\displaystyle (p_{i}',p_{-i})} than in ( p 1 , p 2 ) {\displaystyle (p_{1},p_{2})} .
Instead of programs, some authors have the players submit other kinds of objects, such as logical formulas specifying what action to play depending on an encoding of the logical formula submitted by the opponent. [ 6 ] [ 7 ]
Various authors have proposed ways to achieve cooperative program equilibrium in the Prisoner's Dilemma .
Multiple authors have independently proposed the following program for the Prisoner's Dilemma: [ 1 ] [ 3 ] [ 2 ]
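The exact wording of the program differs between authors; a minimal Python sketch consistent with the description that follows, reusing the source-as-string convention assumed earlier, might look like this (the name CliqueBot is taken from the surrounding discussion):

```python
# CliqueBot sketch: cooperate exactly when the opponent's source code is
# character-for-character identical to our own source, otherwise defect.
CLIQUEBOT = '''
def bot(opponent_source, my_source):
    # Cooperate only with an exact copy of ourselves.
    if opponent_source == my_source:
        return "C"
    return "D"
'''

def run(program: str, opponent_source: str, my_source: str) -> str:
    namespace: dict = {}
    exec(program, namespace)
    return namespace["bot"](opponent_source, my_source)

print(run(CLIQUEBOT, CLIQUEBOT, CLIQUEBOT))                    # "C": recognizes itself
print(run(CLIQUEBOT, "def bot(o, m): return 'C'", CLIQUEBOT))  # "D": any other source
```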
If both players submit this program, then the if-clause will resolve to true in the execution of both programs. As a result, both programs will cooperate. Moreover, (CliqueBot,CliqueBot) is an equilibrium. If either player deviates to some other program p i {\displaystyle p_{i}} that is different from CliqueBot, then the opponent will defect. Therefore, deviating to p i {\displaystyle p_{i}} can at best result in the payoff of mutual defection, which is worse than the payoff of mutual cooperation.
This approach has been criticized for being fragile. [ 5 ] If the players fail to coordinate on the exact source code they submit (for example, if one player adds an extra space character), both programs will defect. The development of the techniques below is in part motivated by this fragility issue.
Another approach is based on letting each player's program try to prove something about the opponent's program or about how the two programs relate. [ 6 ] [ 8 ] [ 9 ] [ 10 ] One example of such a program, known as FairBot, is the following: if it can be proved in a fixed formal proof system (such as Peano arithmetic ) that the opponent's program cooperates against this program, then cooperate; otherwise defect.
Using Löb's theorem it can be shown that when both players submit this program, they cooperate against each other. [ 8 ] [ 9 ] [ 10 ] Moreover, if one player were to instead submit a program that defects against the above program, then (assuming the proof system used is consistent) the if-condition would resolve to false and the above program would defect. Therefore, (FairBot,FairBot) is a program equilibrium as well.
Another proposed program is the following: [ 5 ] [ 11 ] [ 12 ]
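The program is not reproduced in the text; the sketch below is one way to write a program with the behaviour analysed in the following paragraphs: cooperate outright with a small probability ϵ, and otherwise run the opponent's program against one's own source and copy its choice. The entry-point name, the constant name, and the value chosen for ϵ are assumptions made for illustration.

```python
import random

EPSILON = 0.05   # the small positive number called ε in the text; value is illustrative

EPSILON_GROUNDED_BOT = f'''
import random

def bot(opponent_source, my_source):
    # With probability ε, cooperate unconditionally; this grounds the recursion.
    if random.random() < {EPSILON}:
        return "C"
    # Otherwise, run the opponent's program on our own source and copy its move.
    namespace = {{}}
    exec(opponent_source, namespace)
    return namespace["bot"](my_source, opponent_source)
'''

def run(program: str, opponent_source: str, my_source: str) -> str:
    namespace: dict = {}
    exec(program, namespace)
    return namespace["bot"](opponent_source, my_source)

# When both players submit this program, the chain of mutual simulations ends as
# soon as one level draws the ε branch, so it terminates almost surely and every
# level copies the "C" returned beneath it.
print(run(EPSILON_GROUNDED_BOT, EPSILON_GROUNDED_BOT, EPSILON_GROUNDED_BOT))
```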
Here ϵ {\displaystyle \epsilon } is a small [ quantify ] positive number.
If both players submit this program, then they terminate almost surely and cooperate. The expected number of steps to termination is given by the geometric series . Moreover, if both players submit this program, neither can profitably deviate, assuming ϵ {\displaystyle \epsilon } is sufficiently small [ quantify ] , because defecting with probability Δ {\displaystyle \Delta } would cause the opponent to defect with probability ( 1 − ϵ ) Δ {\displaystyle (1-\epsilon )\Delta } .
We here give a theorem that characterizes what payoffs can be achieved in program equilibrium.
The theorem uses the following terminology: A pair of payoffs ( v 1 , v 2 ) {\displaystyle (v_{1},v_{2})} is called feasible if there is a pair of (potentially mixed) strategies ( s 1 , s 2 ) {\displaystyle (s_{1},s_{2})} such that u i ( s 1 , s 2 ) = v i {\displaystyle u_{i}(s_{1},s_{2})=v_{i}} for both players i {\displaystyle i} . That is, a pair of payoffs is called feasible if it is achieved in some strategy profile . A payoff v i {\displaystyle v_{i}} is called individually rational if it is at least that player's minimax payoff; that is, if v i ≥ min σ − i max s i u i ( σ − i , s i ) {\displaystyle v_{i}\geq \min _{\sigma _{-i}}\max _{s_{i}}u_{i}(\sigma _{-i},s_{i})} , where the minimum is over all mixed strategies for Player − i {\displaystyle -i} . [ nb 2 ]
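As a concrete illustration of individual rationality, the snippet below approximates Player 1's minimax payoff in the Prisoner's Dilemma payoffs used in the earlier sketches; those payoff numbers are assumptions of this example, not part of the theorem.

```python
# Player 1's Prisoner's Dilemma payoffs (illustrative numbers): u1[row][col],
# rows = Player 1 plays C or D, columns = Player 2 plays C or D.
u1 = [[3.0, 0.0],
      [5.0, 1.0]]

# Player 1's minimax payoff: Player 2 commits to playing C with probability q,
# Player 1 then picks the better pure reply; Player 2 chooses q to minimize that.
def best_reply_value(q: float) -> float:
    value_if_c = u1[0][0] * q + u1[0][1] * (1 - q)
    value_if_d = u1[1][0] * q + u1[1][1] * (1 - q)
    return max(value_if_c, value_if_d)

# A fine grid over q stands in for the exact minimization, which is enough here.
minimax_payoff = min(best_reply_value(i / 1000) for i in range(1001))
print(minimax_payoff)   # 1.0, so any payoff of at least 1 is individually rational for Player 1
```

In particular, the mutual-cooperation payoff of 3 in these example numbers is both feasible and individually rational.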
Theorem (folk theorem for program equilibrium): [ 4 ] [ 1 ] Let G be a base game. Let ( v 1 , v 2 ) {\displaystyle (v_{1},v_{2})} be a pair of real-valued payoffs. Then the following two claims are equivalent: (1) the pair ( v 1 , v 2 ) {\displaystyle (v_{1},v_{2})} is feasible and individually rational for both players; (2) there exists a program equilibrium ( p 1 , p 2 ) {\displaystyle (p_{1},p_{2})} of the program game in which the players receive the payoffs v 1 {\displaystyle v_{1}} and v 2 {\displaystyle v_{2}} , respectively.
The result is referred to as a folk theorem in reference to the so-called folk theorems (game theory) for repeated games , which use the same conditions on equilibrium payoffs ( v 1 , v 2 ) {\displaystyle (v_{1},v_{2})} . | https://en.wikipedia.org/wiki/Program_equilibrium |
The program evaluation and review technique ( PERT ) is a statistical tool used in project management , which was designed to analyze and represent the tasks involved in completing a given project .
PERT was originally developed by Charles E. Clark for the United States Navy in 1958; it is commonly used in conjunction with the Critical Path Method (CPM), which was also introduced in 1958. [ 1 ]
PERT is a method of analyzing the tasks involved in completing a project, especially the time needed to complete each task, and to identify the minimum time needed to complete the total project. It incorporates uncertainty by making it possible to schedule a project while not knowing precisely the details and durations of all the activities. It is more event-oriented than start- and completion-oriented, and is used more for projects where time is the major constraint rather than cost. It is applied to very large-scale, one-time, complex, non-routine infrastructure projects, as well as R&D projects.
PERT offers a management tool, [ 2 ] : 497 which relies "on arrow and node diagrams of activities and events : arrows represent the activities or work necessary to reach the events or nodes that indicate each completed phase of the total project." [ 3 ]
PERT and CPM are complementary tools, because "CPM employs one time estimation and one cost estimation for each activity; PERT may utilize three time estimates (optimistic, expected, and pessimistic) and no costs for each activity. Although these are distinct differences, the term PERT is applied increasingly to all critical path scheduling." [ 3 ]
PERT was developed primarily to simplify the planning and scheduling of large and complex projects. It was developed by the United States Navy Special Projects Office , Lockheed Aircraft , and Booz Allen Hamilton to support the Navy's Polaris missile project. [ 4 ] [ 5 ] It found applications throughout industry. An early example is the 1968 Winter Olympics in Grenoble , which used PERT from 1965 until the opening of the 1968 Games. [ 6 ] This project model was the first of its kind, a revival of the scientific management founded by Frederick Taylor and later refined by Henry Ford ( Fordism ). DuPont 's CPM was invented at roughly the same time as PERT.
Initially PERT stood for Program Evaluation Research Task, but by 1959 it had been renamed. [ 4 ] It had been made public in 1958 in two publications of the U.S. Department of the Navy, entitled Program Evaluation Research Task, Summary Report, Phase 1 [ 7 ] and Phase 2 , [ 8 ] both primarily written by Charles E. Clark. [ 1 ] In a 1959 article in The American Statistician , Willard Fazar , Head of the Program Evaluation Branch, Special Projects Office, U.S. Navy, gave a detailed description of the main concepts of PERT. He explained:
Through an electronic computer, the PERT technique processes data representing the major, finite accomplishments (events) essential to achieve end-objectives; the inter-dependence of those events; and estimates of time and range of time necessary to complete each activity between two successive events. Such time expectations include estimates of "most likely time", "optimistic time", and "pessimistic time" for each activity. The technique is a management control tool that sizes up the outlook for meeting objectives on time; highlights danger signals requiring management decisions; reveals and defines both methodicalness and slack in the flow plan or the network of sequential activities that must be performed to meet objectives; compares current expectations with scheduled completion dates and computes the probability for meeting scheduled dates; and simulates the effects of options for decision— before decision. [ 9 ]
Ten years after the introduction of PERT, the American librarian Maribeth Brennan compiled a selected bibliography with about 150 publications on PERT and CPM, all published between 1958 and 1968. [ 3 ]
For the subdivision of work units in PERT [ 10 ] another tool was developed: the Work Breakdown Structure . The Work Breakdown Structure provides "a framework for complete networking, the Work Breakdown Structure was formally introduced as the first item of analysis in carrying out basic PERT/CPM." [ 11 ]
In a PERT diagram, the main building block is the event , with connections to its known predecessor events and successor events.
Besides events, PERT also tracks activities and sub-activities:
PERT defines four types of time required to accomplish an activity: the optimistic time ( o ), the most likely or normal time ( m ), the pessimistic time ( p ), and the expected time ( te ).
PERT supplies a number of tools for management with determination of concepts, such as:
The first step for scheduling the project is to determine the tasks that the project requires and the order in which they must be completed. The order may be easy to record for some tasks (e.g., when building a house, the land must be graded before the foundation can be laid) while difficult for others (there are two areas that need to be graded, but there are only enough bulldozers to do one). Additionally, the time estimates usually reflect the normal, non-rushed time. Many times, the time required to execute the task can be reduced for an additional cost or a reduction in the quality.
In the following example there are seven tasks, labeled A through G . Some tasks can be done concurrently ( A and B ) while others cannot be done until their predecessor task is complete ( C cannot begin until A is complete). Additionally, each task has three time estimates: the optimistic time estimate ( o ), the most likely or normal time estimate ( m ), and the pessimistic time estimate ( p ). The expected time ( te ) is computed using the formula ( o + 4 m + p ) ÷ 6. [ 2 ] : 512–513
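The task table itself is not reproduced in the text. In the sketch below, the three estimates for each task are reconstructed so that the expected durations are consistent with the path lengths quoted later in this section (19.51, 22, and 15.67 work days); they should be read as illustrative rather than as the original table.

```python
# Expected activity time per the PERT formula te = (o + 4*m + p) / 6.
def expected_time(o: float, m: float, p: float) -> float:
    return (o + 4 * m + p) / 6

# (optimistic, most likely, pessimistic) estimates for tasks a-g; values are
# reconstructed to match the path lengths quoted in the surrounding text.
estimates = {
    "a": (2, 4, 6),
    "b": (3, 5, 9),
    "c": (4, 5, 7),
    "d": (4, 6, 10),
    "e": (4, 5, 7),
    "f": (3, 4, 8),
    "g": (3, 5, 8),
}

for task, (o, m, p) in estimates.items():
    print(f"{task}: te = {expected_time(o, m, p):.2f} work days")
```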
Once this step is complete, one can draw a Gantt chart or a network diagram.
A network diagram can be created by hand or by using diagram software. There are two types of network diagrams, activity on arrow ( AOA ) and activity on node ( AON ). Activity on node diagrams are generally easier to create and interpret. To create an AON diagram, it is recommended (but not required) to start with a node named start . This "activity" has a duration of zero (0). Then you draw each activity that does not have a predecessor activity ( a and b in this example) and connect them with an arrow from start to each node. Next, since both c and d list a as a predecessor activity, their nodes are drawn with arrows coming from a . Activity e is listed with b and c as predecessor activities, so node e is drawn with arrows coming from both b and c , signifying that e cannot begin until both b and c have been completed. Activity f has d as a predecessor activity, so an arrow is drawn connecting the activities. Likewise, an arrow is drawn from e to g . Since there are no activities that come after f or g , it is recommended (but again not required) to connect them to a node labeled finish .
By itself, the network diagram does not give much more information than a Gantt chart; however, it can be expanded to display more information. The most common information shown is: the activity name, the expected duration time, the early start time (ES), the early finish time (EF), the late start time (LS), the late finish time (LF), and the slack.
In order to determine this information it is assumed that the activities and normal duration times are given. The first step is to determine the ES and EF. The ES is defined as the maximum EF of all predecessor activities, unless the activity in question is the first activity, for which the ES is zero (0). The EF is the ES plus the task duration (EF = ES + duration).
Barring any unforeseen events , the project should take 19.51 work days to complete. The next step is to determine the late start (LS) and late finish (LF) of each activity. This will eventually show if there are activities that have slack . The LF is defined as the minimum LS of all successor activities, unless the activity is the last activity, for which the LF equals the EF. The LS is the LF minus the task duration (LS = LF − duration).
The next step is to determine the critical path and if any activities have slack . The critical path is the path that takes the longest to complete. To determine the path times, add the task durations for all available paths. Activities that have slack can be delayed without changing the overall time of the project. Slack is computed in one of two ways, slack = LF − EF or slack = LS − ES. Activities that are on the critical path have a slack of zero (0).
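A compact way to carry out the ES/EF, LS/LF, slack, and critical path computations described above is sketched below. The durations are the rounded expected times from the reconstructed estimates shown earlier (so the totals agree with the 19.51-work-day figure quoted in the text), and the predecessor lists follow the a–g example network described above; treat both as illustrative.

```python
# Forward pass (ES/EF), backward pass (LS/LF), slack and critical path for the
# a-g example network, using rounded expected times (te) as durations.
durations = {"a": 4.00, "b": 5.33, "c": 5.17, "d": 6.33,
             "e": 5.17, "f": 4.50, "g": 5.17}
predecessors = {"a": [], "b": [], "c": ["a"], "d": ["a"],
                "e": ["b", "c"], "f": ["d"], "g": ["e"]}

# Successors are derived from the predecessor lists for the backward pass.
successors = {task: [] for task in durations}
for task, preds in predecessors.items():
    for p in preds:
        successors[p].append(task)

order = ["a", "b", "c", "d", "e", "f", "g"]      # already topologically sorted

# Forward pass: ES is the maximum EF of all predecessors (0 if none); EF = ES + duration.
es, ef = {}, {}
for task in order:
    es[task] = max((ef[p] for p in predecessors[task]), default=0.0)
    ef[task] = es[task] + durations[task]

project_duration = max(ef.values())               # 19.51 work days

# Backward pass: LF is the minimum LS of all successors (project end if none);
# LS = LF - duration.
ls, lf = {}, {}
for task in reversed(order):
    lf[task] = min((ls[s] for s in successors[task]), default=project_duration)
    ls[task] = lf[task] - durations[task]

# Slack = LS - ES (equivalently LF - EF); critical activities have zero slack.
for task in order:
    slack = ls[task] - es[task]
    print(f"{task}: ES={es[task]:5.2f} EF={ef[task]:5.2f} "
          f"LS={ls[task]:5.2f} LF={lf[task]:5.2f} slack={slack:5.2f}")

critical_path = "".join(t for t in order if abs(ls[t] - es[t]) < 1e-9)
print(f"critical path: {critical_path}  duration: {project_duration:.2f} work days")
```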
The critical path is aceg and the critical time is 19.51 work days. It is important to note that there can be more than one critical path (in a project more complex than this example) or that the critical path can change. For example, let's say that activities d and f take their pessimistic ( p ) times to complete instead of their expected ( te ) times. The critical path is now adf and the critical time is 22 work days. On the other hand, if activity c can be reduced to one work day, the path time for aceg is reduced to 15.34 work days, which is slightly less than the time of the new critical path, beg (15.67 work days).
Assuming these scenarios do not happen, the slack for each activity can now be determined.
Therefore, activity b can be delayed almost 4 work days without delaying the project. Likewise, activity d or activity f can be delayed 4.68 work days without delaying the project (alternatively, d and f can be delayed 2.34 work days each).
Depending upon the capabilities of the data input phase of the critical path algorithm, it may be possible to create a loop, such as A -> B -> C -> A. This can cause simple algorithms to loop indefinitely. Although it is possible to "mark" nodes that have been visited, then clear the "marks" upon completion of the process, a far simpler mechanism involves computing the total of all activity durations. If an EF of more than the total is found, the computation should be terminated. It is worth saving the identities of the most recently visited dozen or so nodes to help identify the problem link.
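A sketch of the guard just described, attached to a simple label-correcting forward pass, is shown below; the task names and durations are made up for the demonstration.

```python
# Guard against cycles in the dependency data: if a computed early finish ever
# exceeds the sum of all activity durations, the network must contain a loop.
def forward_pass_with_guard(durations: dict, predecessors: dict) -> dict:
    total = sum(durations.values())
    ef = {task: 0.0 for task in durations}
    pending = list(durations)
    while pending:
        task = pending.pop(0)
        es = max((ef[p] for p in predecessors[task]), default=0.0)
        new_ef = es + durations[task]
        if new_ef > total:
            raise ValueError(f"dependency loop suspected near activity {task!r}")
        if new_ef != ef[task]:
            ef[task] = new_ef
            # Re-examine every activity that depends on the one just updated.
            pending.extend(s for s in durations if task in predecessors[s])
    return ef

try:
    forward_pass_with_guard(
        {"A": 1.0, "B": 2.0, "C": 3.0},
        {"A": ["C"], "B": ["A"], "C": ["B"]},   # deliberately cyclic: A -> B -> C -> A
    )
except ValueError as err:
    print(err)
```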
During project execution a real-life project will never execute exactly as it was planned, due to uncertainty. This can be due to ambiguity resulting from subjective estimates that are prone to human errors or can be the result of variability arising from unexpected events or risks. The main reason that PERT may provide inaccurate information about the project completion time is this schedule uncertainty. The inaccuracy may be large enough to render such estimates unhelpful.
One possible method to maximize solution robustness is to include safety in the baseline schedule in order to absorb disruptions. This is called proactive scheduling ; however, a baseline schedule that allowed for every possible disruption would become impractically long, so not all disruptions can be accommodated in advance. A second approach, termed reactive scheduling , defines a procedure to react to disruptions that cannot be absorbed by the baseline schedule. | https://en.wikipedia.org/wiki/Program_evaluation_and_review_technique
The PUMA ( Programmable Universal Machine for Assembly , or Programmable Universal Manipulation Arm ) is an industrial robotic arm developed by Victor Scheinman at pioneering robot company Unimation . Initially developed by Unimation for General Motors , the PUMA was based on earlier designs Scheinman invented while at Stanford University , with sponsorship and mentoring from robot inventor George Devol . [ 1 ]
Unimation produced PUMAs for years until being purchased by Westinghouse (ca. 1980), and later by Swiss company Stäubli (1988). Nokia Robotics manufactured about 1500 PUMA robots during the 1980s, the Puma-560 being their most popular model with customers. Nokia also designed some robot products of its own, such as the Nokia NS-16 Industrial Robot and the NRS-15. [ 2 ] Nokia sold their Robotics division in 1990.
In 2002, General Motors Controls, Robotics and Welding (CRW) organization donated the original prototype PUMA robot to the Smithsonian Institution's National Museum of American History. It joins a collection of historically important robots that includes an early Unimate and the Odetics Odex 1. [ 3 ]
The essence of the design is represented by three series: the 200, 500, and 700.
The 200 series is a smaller desktop unit. Notably, this model was used for the first robotic stereotactic brain biopsy in 1985.
The 500 series can reach almost 2 meters up. This model is the more popular design and is the most recognizable configuration.
The 700 series is the largest of the group and was intended for assembly line, paint, and welding work.
All designs consist of two main components: the mechanical arm and the control system. These are typically interconnected by one or two large multi-conductor cables. When two cables are used, one carries power to the servo motors and brakes while the second carries the position feedback for each joint back to the control system.
The control computer is based on the LSI-11 architecture, which is very similar to that of PDP-11 computers. The system has a boot program and basic debug tool loaded on ROM chips. The operating system is loaded from external storage through a serial port, usually from a floppy disk.
The control unit also contains the servo power supply, analog and digital feedback processing boards, and servo drive system.
The arm appears in the film Innerspace . An arm was displayed in the "Bird And The Robot" attraction at the World of Motion pavilion of EPCOT .
Variable Assembly Language | https://en.wikipedia.org/wiki/Programmable_Universal_Machine_for_Assembly |
A programmable load is a type of test equipment or instrument which emulates DC or AC resistance loads normally required to perform functional tests of batteries , power supplies or solar cells . By virtue of being programmable, tests like load regulation , battery discharge curve measurement and transient tests can be fully automated, and load changes for these tests can be made without introducing switching transients that might change the measurement or operation of the power source under test.
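As one illustration of the kind of automation mentioned here, the sketch below runs a simulated load-regulation test. The supply model, numbers, and function names are invented for the example; in a real setup the simulated measurement would be replaced by instrument I/O to the programmable load and a voltmeter.

```python
# A minimal sketch of an automated load-regulation test. The supply under test
# is simulated as an ideal voltage source behind an internal resistance.

V_NO_LOAD = 12.00      # volts, assumed open-circuit output of the supply under test
R_INTERNAL = 0.05      # ohms, assumed effective output resistance

def terminal_voltage(load_current_a: float) -> float:
    """Simulated measurement of the supply's output at a programmed load current."""
    return V_NO_LOAD - load_current_a * R_INTERNAL

def load_regulation_percent(i_min_a: float, i_max_a: float) -> float:
    """Step the programmable load between minimum and maximum current and report
    the output-voltage change relative to the full-load value."""
    v_min_load = terminal_voltage(i_min_a)
    v_full_load = terminal_voltage(i_max_a)
    return 100.0 * (v_min_load - v_full_load) / v_full_load

if __name__ == "__main__":
    # Sweep the programmed current and print the resulting measurements.
    for amps in (0.0, 0.5, 1.0, 2.0, 5.0):
        print(f"{amps:4.1f} A -> {terminal_voltage(amps):6.3f} V")
    print(f"load regulation (0 A to 5 A): {load_regulation_percent(0.0, 5.0):.2f} %")
```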
Programmable loads most commonly use one transistor/FET, or an array of parallel-connected transistors/FETs for more current handling, to act as a variable resistor. Internal circuitry in the equipment monitors the actual current through the transistor/FET, compares it to a user-programmed desired current, and through an error amplifier changes the drive voltage to the transistor/FET to dynamically change its resistance. This negative feedback results in the actual current always matching the programmed desired current, regardless of other changes in the supplied voltage or other variables. Of course, if the power source is not able to supply the desired amount of current, the DC load equipment cannot furnish the difference; it can restrict current to a level, but it cannot boost current to a higher level. Most commercial DC loads are equipped with microprocessor front-end circuits that allow the user not only to program a desired current through the load ('constant current' or CC), but alternatively to program the load to present a constant resistance (CR) or constant power dissipation (CP). | https://en.wikipedia.org/wiki/Programmable_load
In the context of installing firmware onto a device, a programmer , device programmer , chip programmer , device burner , [ 1 ] : 364 or PROM writer [ 2 ] is a device that writes, a.k.a. burns, firmware to a target device's non-volatile memory . [ 3 ] : 3
Typically, the target device memory is one of the following types: PROM , EPROM , EEPROM , Flash memory , eMMC , MRAM , FeRAM , NVRAM , PLD , PLA , PAL , GAL , CPLD , FPGA .
Generally, a programmer connects to a device in one of two ways.
In some cases, the target device is inserted into a socket (usually ZIF ) on the programmer. [ 4 ] : 642, pdf15 If the device does not use a standard DIP package, a plug-in adapter board, which converts the footprint to another socket, is used. [ 5 ] : 58
In some cases, a programmer connects to a device via a cable to a connection port on the device. This is sometimes called on-board programming , in-circuit programming , or in-system programming . [ 6 ] [ 7 ] [ 8 ]
Data is transferred from the programmer to the device as signals via connecting pins.
Some devices have a serial interface [ 9 ] : 232, pdf3 for receiving data (including JTAG interface). [ 4 ] : 642, pdf15 Other devices communicate on parallel pins, followed by a programming pulse with a higher voltage for programming the data into the device. [ 10 ] : 125
Usually, a programmer is controlled via a connected personal computer through a parallel port, [ 1 ] : 364 USB port, [ 11 ] or LAN interface. [ citation needed ] A program on the controlling computer interacts with the programmer to perform operations such as configuring install parameters and programming the device. [ 1 ] : 364 [ 12 ] : 430 [ 13 ] [ 14 ]
There are four general types of programmers:
Regarding older PROM programmers, because the many programmable devices have different voltage requirements, every pin driver must be able to apply various voltages in a range of 0–25 volts. [ 19 ] : 651 [ 20 ] : 40 With the progress of memory device technology, however, recent flash memory programmers do not need high voltages. [ 21 ] [ 22 ]
In the early days of computing , the booting mechanism was a mechanical device that usually consisted of switches and LEDs . This means the programmer was not a piece of equipment but a human, who entered machine code word by word by setting the switches to a series of "on" and "off" positions. These switch positions corresponded to the machine codes, similar to today's assembly language . [ 23 ] : 261–262 [ 24 ] [ 25 ] Nowadays, EEPROMs are used for the bootstrapping mechanism, such as the BIOS , and there is no need to operate mechanical switches for programming. [ 26 ] : 45
| https://en.wikipedia.org/wiki/Programmer_(hardware)
In video game development and overall software development , programmer art refers to assets created by programmers .
Programmer art is made when there is an immediate need for an asset that does not yet exist. When this happens, a programmer will often use or create a placeholder, meant to be replaced at a later time before the project is published. [ 1 ] [ 2 ]
The term programmer art can encompass any art created by programmers. These assets can serve various purposes, such as quick testing of features, behind-the-scenes reasons, or even being intended for end-user display. The effort invested in an asset depends on its context and whether it will be replaced or not. Generally, programmer art is a placeholder graphic, meant to be replaced.
It is a recurring trope that programmers, who are often believed to be logical-minded, have little experience with or interest in creating art. The term arose in part from this perceived contrast between programming and artistic skills.
In indie games , programmer art is often the norm as small-time developers rarely have dedicated artists or budgets for professionally made assets. It can also be a deliberate choice as some end-users prefer it for its authenticity.
Common forms of programmer art include stick figure sprites in side-scrolling games, fuchsia textures in games using 3D models , and grid textures for level geometry. Games with a "top-down" perspective tend to use alphanumeric characters and simple 2D graphics to represent characters and landscape elements.
| https://en.wikipedia.org/wiki/Programmer_art
Programming Languages: History and Fundamentals [ 1 ] is a book about programming languages written by Jean E. Sammet . Published in 1969, the book gives an overview of the state of the art of programming in the late 1960s, and records the history of programming languages up to that time. [ 2 ]
The book was considered a standard work on programming languages by professionals in the field. [ 3 ] According to Dag Spicer, senior curator of the Computer History Museum , Programming Languages "was, and remains, a classic." [ 4 ]
Programming Languages provides a history and description of 120 programming languages, with an extensive bibliography of reference works about each language and sample programs for many of them. [ 5 ] The book outlines both the technical definition and usage of each language, as well as the historical, political, and economic context of each language. [ 6 ]
Because Sammet was deeply involved in the history of programming language creation in the United States , she was able to give an insider's perspective. [ 3 ] The author excluded most programming languages used only outside the US, and excluded those she considered not to be high-level programming languages . [ 6 ]
The book covers both well-known and obscure programming languages. Among the 120 languages included in the book are:
Sammet pioneered the COBOL language while working at Sylvania and FORMAC (an extension of FORTRAN ) while at IBM. While managing IBM's Boston Advanced Programming Department, Sammet began researching programming languages more widely and collecting documentation. Starting in 1967 she published annual reports in Computers and Automation , the first computer magazine , on the languages in use across the field of programming. [ 9 ]
Computers were new and rare in the 1960s, and were a subject of fascination that book publishers hoped to profit from. Prentice Hall approached Sammet asking her to write about FORTRAN. Sammet said that she would rather write about every programming language. Prentice Hall and IBM told her to go ahead. [ 9 ]
Sammet used her book to advocate for high-level languages at a time when assembly languages were popular and there was widespread doubt about the value of high-level languages in the field of programming. [ 9 ]
An image of the Tower of Babel was printed on the dust jacket of the book, with the names of various programming languages printed on the bricks making up the tower. A similar image had appeared on the January 1961 issue of the Communications of the ACM . [ 10 ] | https://en.wikipedia.org/wiki/Programming_Languages:_History_and_Fundamentals |
Programming Research Limited (PRQA) was a United Kingdom -based developer of code quality management software for embedded software, which included the static program analysis tools QA·C and QA·C++, now known as Helix QAC . It created the High Integrity C++ software coding standard . In May 2018, the company was acquired by Minneapolis, MN –based Perforce , and its products were renamed. [ 1 ]
These tools were widely adopted in safety-critical industries like automotive, aerospace, and medical devices.
| https://en.wikipedia.org/wiki/Programming_Research_Limited |
Programming Ruby is a book about the Ruby programming language by Dave Thomas and Andrew Hunt , authors of The Pragmatic Programmer . In the Ruby community, it is commonly known as "The PickAxe" because of the pickaxe on the cover. The book has helped Ruby to spread outside Japan. [ 1 ]
The complete first edition of this book is freely available under the Open Publication License v1.0, and was published by Addison-Wesley in 2001. The second edition, covering the features of Ruby 1.8, was published by The Pragmatic Programmers, LLC in 2004. The third edition, covering Ruby 1.9, was published in 2010, and the fourth edition, covering Ruby 1.9 and 2.0, was published in 2013.
A fifth edition, updated for Ruby 3.3, was written by Noel Rappin and published by Pragmatic Programmers, LLC in 2023.
| https://en.wikipedia.org/wiki/Programming_Ruby |
One thing the most visited websites have in common is that they are dynamic websites . Their development typically involves server-side coding , client-side coding and database technology . The programming languages applied to deliver such dynamic web content vary vastly between sites. | https://en.wikipedia.org/wiki/Programming_languages_used_in_most_popular_websites |
Progress in Materials Science is a journal publishing review articles covering most areas of materials science , published by the Pergamon imprint of Elsevier . It was started in 1949 under the title Progress in Metal Physics , with Bruce Chalmers serving as its first editor, and adopted its current title in 1961.
| https://en.wikipedia.org/wiki/Progress_in_Materials_Science |
Progress in Polymer Science is a peer-reviewed scientific journal publishing review articles on topics broadly related to polymer chemistry . The 2022 impact factor of the journal was 27.1, the highest in the subject category "Polymer Science". [ 1 ] The journal has been published since 1967. It is currently edited by Editor-in-Chief Jean-Francois Lutz, with Senior Editors Chuanbing Tang and Ophelia Tsui and Assistant Editor Ilhem Faiza Hakem. [ 2 ] Honorary Editors-in-Chief include Krzysztof "Kris" Matyjaszewski and Guy C. Berry from Carnegie Mellon University . | https://en.wikipedia.org/wiki/Progress_in_Polymer_Science |
Progress of Theoretical and Experimental Physics is a monthly peer-reviewed scientific journal published by Oxford University Press on behalf of the Physical Society of Japan . [ 1 ] It was established as Progress of Theoretical Physics in July 1946 by Hideki Yukawa and obtained its current name in January 2013.
The journal is part of the SCOAP 3 initiative. [ 2 ]
| https://en.wikipedia.org/wiki/Progress_of_Theoretical_and_Experimental_Physics |
The progress zone is a layer of mesodermal cells immediately beneath the apical ectodermal ridge in the developing limb bud . The fate of the mesodermal cells is thought to be patterned by the length of time spent in the progress zone during limb outgrowth. [ Ref 1 ] However, some recent evidence using microinjected embryos suggests that the cells are prespecified early in limb bud development. [ Ref 2 ]
The progress zone provides positional information that tells cells which part of the limb to develop into. If cells spend only a very short time in this area while receiving signals from the apical ectodermal ridge, then more proximal limb structures are not able to develop even if distal ones can. [ 1 ]
| https://en.wikipedia.org/wiki/Progress_zone |
Progressive collapse is the process where a primary structural element fails, resulting in the failure of adjoining structural elements, which in turn causes further structural failure . [ 1 ]
Progressive collapses may be accidental, as the result of design deficiencies, fire, unintentional overload, material failure or natural phenomenon (e.g. erosion , wind or earthquakes ). They can also be induced deliberately as a demolition method, specifically that of building implosion , or caused by acts of terrorism or war.
As the resulting damage in a progressive collapse is disproportionate to the original cause, the term disproportionate collapse is frequently used in engineering to describe this collapse type.
Based on recommendations from the United States Commerce Department 's National Institute of Standards and Technology (NIST), a comprehensive set of building code changes was approved by the International Code Council (ICC). The recommendations were based on the findings of NIST's three-year investigation of the collapses of New York City's World Trade Center (WTC) towers on September 11, 2001 .
The proposals addressed areas such as increased resistance to building collapse from fire and other incidents, use of sprayed fire-resistive materials (commonly known as "fireproofing"), performance and redundancy of fire protection systems (i.e., automatic sprinklers), fuel oil storage/piping, elevators for use by first responders and evacuating occupants, the number and location of stairwells, and exit path markings.
The model code changes consistent with the NIST WTC investigation recommendations that are now required by the IBC include: | https://en.wikipedia.org/wiki/Progressive_collapse |
Progressive contextualization (PC) is a scientific method pioneered and developed by Andrew P. Vayda and his research team between 1979 and 1984. [ 1 ] The method was developed to help understand the causes of forest and land damage and destruction during the New Order regime in Indonesia , and as a form of practical ethnography . Vayda proposed the progressive contextualization method out of dissatisfaction with the ability of several conventional anthropological methods to describe accurately and quickly cases of illegal logging , land destruction and the networks of actors and investors protecting these actions, as well as their various consequences detrimental to the environment and social life.
The essence of this method is to track and assess:
It rejects the assumption of ecological and socio-cultural homogeneity . Instead, it focuses on diversity and it looks at how different individuals and groups operate in and adapt to their total environments through a variety of behaviors, technologies, organizations, structures and beliefs.
Based on such a premise and through the practical interpretation of facts, the approach will lead to 'concrete findings on who is doing what, why they are doing it, and with what effects.' | https://en.wikipedia.org/wiki/Progressive_contextualization |
A progressive download is the transfer of digital media files from a server to a client , typically using the HTTP protocol when initiated from a computer. The consumer may begin playback of the media before the download is complete. The key difference between streaming media and progressive download is in how the digital media data is received and stored by the end user device that is accessing the digital media.
A media player that is capable of progressive download playback relies on the metadata located in the header of the file being intact and on a local buffer of the digital media file as it is downloaded from a web server. Once a specified amount of data becomes available to the local playback device, the media will begin to play. This buffer amount is embedded into the file by the producer of the content in the encoder settings and is reinforced by additional buffer settings imposed by the media player.
The JPEG digital media file type was the first visual medium to render a progressive visual display as the data was downloaded, and this behavior was itself referred to as a progressive download. The distinction between the technical behavior of progressive download and the common or commercial use of the term to describe that behavior was not documented, and there is a good deal of question regarding the origin of the term versus the origin of the technical implementation. In 1997, Apple employed the term Fast Start [ 1 ] in reference to its QuickTime media player to describe what was commercially referred to as progressive download playback of encoded digital media content.
The end user experience is similar to streaming media , but the file is downloaded to a physical drive on the end user's device; the file is typically stored in the temporary directory of the associated web browser if the medium was embedded into a web page, or is diverted to a storage directory set in the preferences of the media player used for playback. Playback will stutter or stop if the rate of playback exceeds the rate at which the file is downloaded, and will resume after further data has been downloaded.
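As a rough illustration of the buffering behavior described above, the following minimal sketch (hypothetical threshold and byte counts, not taken from any particular player) models the two decisions a progressive-download player makes: whether enough data is buffered to begin playback, and whether playback has overtaken the download and must pause.

# Minimal sketch of progressive-download buffering (illustrative values only):
# playback starts once the local buffer reaches a producer/player-defined threshold,
# and pauses ("stutters") whenever the playback position catches up with the download.
def can_start(buffered_bytes, start_threshold_bytes):
    return buffered_bytes >= start_threshold_bytes

def must_pause(playback_position_bytes, downloaded_bytes):
    return playback_position_bytes >= downloaded_bytes

print(can_start(2_000_000, 1_500_000))  # True: enough data buffered to begin playback
print(must_pause(900_000, 850_000))     # True: playback outran the download, so stall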
This fast start playback is the result of moving the metadata from the end of the digital media file to the front; this move gave the media player all the information it required to begin playback while the file was still being downloaded. Prior to that change, the metadata summary was located at the end of a media file, and the entire file had to be downloaded before the metadata could be read and playback could begin. [ 2 ]
HTTP pseudo-streaming (or progressive download), similar to streaming media or HTTP Live Streaming , also supports adaptive bitrate streaming . The disadvantages of HTTP pseudo-streaming compared with streaming media are reduced security, since HTTP is easier to sniff than RTMP , and long loading times when seeking within larger videos. [ 3 ]
MP4 files consist of chunks of data called atoms . These atoms store information such as subtitles. A special atom, called the moov atom, stores the information needed to play the video, such as its dimensions and frames per second , which an HTML video player requires before playback can begin. [ 4 ] Atoms can appear in any order, so web servers like Nginx spend CPU, memory and disk I/O finding the moov atom before HTML video based clients can start playing the video. To optimize for HTTP pseudo-streaming, it is important to move the moov atom to the beginning of the file, so that playback can begin without waiting for the entire file to arrive or be scanned. [ 5 ]
# Remux without re-encoding, writing the moov atom at the start of the output file:
ffmpeg -i sample_input.mp4 -movflags faststart -acodec copy -vcodec copy sample_output.mp4
This optimization effectively prevents web servers, like nginx , from spending extra computation finding the moov atom, thereby improving playback performance in HTML video based clients. [ 5 ] [ 4 ]
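As a minimal sketch (illustrative only, not part of any cited tooling; the file name assumes the output of the ffmpeg command above), the following code lists the top-level MP4 atoms in file order, which makes it easy to confirm that moov now precedes mdat:

# List the top-level MP4 atoms ("boxes") in file order, following the ISO base media
# file format header layout: a 4-byte big-endian size followed by a 4-byte type,
# with size == 1 meaning a 64-bit size follows and size == 0 meaning "to end of file".
import struct

def top_level_atoms(path):
    atoms = []
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, kind = struct.unpack(">I4s", header)
            header_len = 8
            if size == 1:  # 64-bit extended size follows the type field
                size = struct.unpack(">Q", f.read(8))[0]
                header_len = 16
            atoms.append(kind.decode("ascii", "replace"))
            if size == 0:  # atom extends to the end of the file
                break
            f.seek(size - header_len, 1)  # skip the remainder of this atom
    return atoms

print(top_level_atoms("sample_output.mp4"))  # e.g. ['ftyp', 'moov', 'mdat'] after faststart remuxing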
Initially, the file is played from the beginning. A user may wish to jump to a part of the file which has not been downloaded yet. This capability is called seeking [ 7 ] and it makes it possible to download and start playing from any part of the media file. That is often referred to as pseudo-streaming.
For Flash video, seeking requires a list of seek points in the media file metadata. These points are offsets in the video (both in seconds and bytes) at which a new key frame starts. A web server or media server which handles the download must support seek points passed in the query string of requests for downloading data.
For other types of media files such as MP4 or MKV , web servers must be capable of handling a special offset parameter. The offset parameter name differs for various servers so it must be specified in player settings.
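As a hedged illustration of such a request, the sketch below downloads a file starting from a time offset passed as a query-string parameter named start; both the parameter name and the URL are assumptions here since, as noted above, the name differs between servers.

# Hypothetical pseudo-streaming seek: ask the server to serve the file from a time
# offset (in seconds). The "start" parameter name and the URL are illustrative only.
import urllib.request

def fetch_from_offset(base_url, seconds, out_path):
    url = f"{base_url}?start={seconds}"  # assumed parameter name; varies by server
    with urllib.request.urlopen(url) as resp, open(out_path, "wb") as out:
        while True:
            chunk = resp.read(64 * 1024)
            if not chunk:
                break
            out.write(chunk)

# Example (hypothetical host): fetch_from_offset("http://media.example.com/video.mp4", 120, "from_2min.mp4")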
Some servers support seeking only via additional modules; these are specified below. Seeking parameter names are written in italic . | https://en.wikipedia.org/wiki/Progressive_download |
In mathematics , Proizvolov's identity is an identity concerning sums of differences of positive integers . The identity was posed by Vyacheslav Proizvolov as a problem in the 1985 All-Union Soviet Student Olympiads . [ 1 ]
To state the identity, take the first 2 N positive integers, 1 , 2 , … , 2 N , {\displaystyle 1,2,\ldots ,2N,}
and partition them into two subsets of N numbers each. Arrange one subset in increasing order: A 1 < A 2 < ⋯ < A N . {\displaystyle A_{1}<A_{2}<\cdots <A_{N}.}
Arrange the other subset in decreasing order: B 1 > B 2 > ⋯ > B N . {\displaystyle B_{1}>B_{2}>\cdots >B_{N}.}
Then the sum | A 1 − B 1 | + | A 2 − B 2 | + ⋯ + | A N − B N | {\displaystyle |A_{1}-B_{1}|+|A_{2}-B_{2}|+\cdots +|A_{N}-B_{N}|}
is always equal to N 2 {\displaystyle N^{2}} .
Take for example N = 3. The set of numbers is then {1, 2, 3, 4, 5, 6}. Select three numbers of this set, say 2, 3 and 5. Then the sequences A and B are: A = (2, 3, 5) and B = (6, 4, 1).
The sum is | 2 − 6 | + | 3 − 4 | + | 5 − 1 | = 4 + 1 + 4 = 9 , {\displaystyle |2-6|+|3-4|+|5-1|=4+1+4=9,}
which indeed equals 3 2 {\displaystyle 3^{2}} .
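The identity is also easy to check numerically. The following short sketch (an illustration, not part of the article's proof) verifies it by brute force over every partition of {1, ..., 2N} for small N:

# Brute-force check of Proizvolov's identity: for every N-element subset A of
# {1, ..., 2N}, with A sorted increasing and its complement B sorted decreasing,
# the sum of |A_i - B_i| equals N**2.
from itertools import combinations

def holds_for(n):
    full = set(range(1, 2 * n + 1))
    for subset in combinations(sorted(full), n):
        a = sorted(subset)                            # increasing order
        b = sorted(full - set(subset), reverse=True)  # decreasing order
        if sum(abs(x - y) for x, y in zip(a, b)) != n * n:
            return False
    return True

print(all(holds_for(n) for n in range(1, 7)))  # True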
For any a , b {\displaystyle a,b} , we have: | a − b | = max { a , b } − min { a , b } {\displaystyle |a-b|=\max\{a,b\}-\min\{a,b\}} . For this reason, it suffices to establish that the sets { max { a i , b i } : 1 ≤ i ≤ n } {\displaystyle \{\max\{a_{i},b_{i}\}:1\leq i\leq n\}} and { n + 1 , n + 2 , … , 2 n } {\displaystyle \{n+1,n+2,\dots ,2n\}} coincide. Since the numbers a i , b i {\displaystyle a_{i},b_{i}} are all distinct, it suffices to show that for any 1 ≤ k ≤ n {\displaystyle 1\leq k\leq n} , max { a k , b k } > n {\displaystyle \max\{a_{k},b_{k}\}>n} . Assume to the contrary that this is false for some k {\displaystyle k} , and consider the n + 1 {\displaystyle n+1} positive integers a 1 , a 2 , … , a k , b k , b k + 1 , … , b n {\displaystyle a_{1},a_{2},\dots ,a_{k},b_{k},b_{k+1},\dots ,b_{n}} . These numbers are all distinct (due to the construction), yet each of them is at most n (the a i increase up to a k and the b i decrease from b k ), so they cannot provide n + 1 distinct values among the first n positive integers, which is a contradiction. | https://en.wikipedia.org/wiki/Proizvolov's_identity |
Project 112 was a biological and chemical weapon experimentation project conducted by the United States Department of Defense from 1962 to 1973.
The project started under John F. Kennedy 's administration, and was authorized by his Secretary of Defense Robert McNamara , as part of a total review of the US military. The name "Project 112" refers to this project's number in the 150 project review process authorized by McNamara. Funding and staff were contributed by every branch of the U.S. armed services and intelligence agencies—a euphemism for the Office of Technical Services of the Central Intelligence Agency 's Directorate of Science & Technology . Canada and the United Kingdom also participated in some Project 112 activities. [ 1 ]
Project 112 primarily concerned the use of aerosols to disseminate biological and chemical agents that could produce "controlled temporary incapacitation" (CTI). The test program would be conducted on a large scale at "extracontinental test sites" in the Central and South Pacific [ 2 ] and Alaska in conjunction with Britain, Canada and Australia.
At least 50 trials were conducted; of these at least 18 tests involved simulants of biological agents (such as BG ), and at least 14 involved chemical agents including sarin and VX , but also tear gas and other simulants. [ 1 ] Test sites included Porton Down (UK), Ralston (Canada) and at least 13 US warships; the shipborne trials were collectively known as Shipboard Hazard and Defense— SHAD . [ 1 ] The project was coordinated from Deseret Test Center , Utah.
As of 2005, publicly available information on Project 112 remains incomplete. [ 1 ]
In January 1961, Defense Secretary Robert McNamara sent a directive about chemical and biological weapons to the Joint Chiefs of Staff, urging them to: "consider all possible applications, including use as an alternative to nuclear weapons. Prepare a plan for the development of an adequate biological and chemical deterrent capability, to include cost estimates, and appraisal of domestic and international political consequences." The Joint Chiefs established a Joint Task Force that recommended a five-year plan to be conducted in three phases. [ 3 ] [ 4 ]
On April 17, 1963, President Kennedy signed National Security Action Memorandum 235 (NSAM 235) which approved:
Policy guides governing the conduct of large-scale scientific or technological experiments that might have significant or protracted effects on the physical or biological environment. Experiments which by their nature could result in domestic or foreign allegations that they might have such effects will be included in this category even though the sponsoring agency feels confident that such allegations would in fact prove to be unfounded. [ 5 ]
Project 112 was a highly classified military testing program which was aimed at both offensive and defensive human, animal, and plant reaction to biological and chemical warfare in various combinations of climate and terrain. [ 3 ] The U.S. Army Chemical Corps sponsored the United States portion of an agreement between the US, Britain, Canada, and Australia to negotiate, host, conduct, or participate in mutual interest research and development activity and field testing.
The command structure for the Deseret Test Center , which was organized to oversee Project 112, somewhat bypassed standard Defense Department channels and reported directly to the Joint Chiefs of Staff and US Cabinet consisting of Secretary of Defense , Secretary of State , and to a much smaller extent, the Secretary of Agriculture . Experiments were planned and conducted by the Deseret Test Center and Deseret Chemical Depot at Fort Douglas, Utah . The tests were designed to test the effects of biological weapons and chemical weapons on personnel, plants, animals, insects, toxins, vehicles, ships and equipment. Project 112 and Project SHAD experiments involved unknowing test subjects who did not give informed consent, and took place on land and at sea in various climates and terrains. Experiments involved humans, plants, animals, insects, aircraft, ships, submarines and amphibious vehicles. [ 6 ]
There was a large variety of goals for the proposed tests, for example: "selected protective devices in preventing penetration of a naval ship by a biological aerosol," the impact of "meteorological conditions on weapon system performance over the open sea," the penetrability of jungle vegetation by biological agents, "the penetration of an arctic inversion by a biological aerosol cloud," "the feasibility of an offshore release of Aedes aegypti mosquito as a vector for infectious diseases," "the feasibility of a biological attack against an island complex," and the study of the decay rates of biowarfare agents under various conditions. [ 7 ]
Project 112 tests used the following agents and simulants: Francisella tularensis , Serratia marcescens , Escherichia coli , Bacillus globii , staphylococcal enterotoxin Type B, Puccinia graminis var. tritici (stem rust of wheat). [ 7 ] Agents and simulants were usually dispensed as aerosols using spraying devices or bomblets.
In May 1965, vulnerability tests in the U.S. using the anthrax simulant Bacillus globigii were performed in the Washington, D.C. area by SOD covert agents. One test was conducted at the Greyhound bus terminal and the other at the north terminal of the National Airport . In these tests the bacteria were released from spray generators hidden in specially built briefcases. SOD also conducted a series of tests in the New York City Subway system between 7 and 10 June 1966 by dropping light bulbs filled with Bacillus subtilis var. niger . In the latter tests, results indicated that a city-level epidemic would have occurred. Local police and transit authorities were not informed of these tests. [ 7 ]
Project SHAD, an acronym for Shipboard Hazard and Defense (or sometimes Decontamination), was part of the larger program called Project 112, which was conducted during the 1960s. Project SHAD encompassed tests designed to identify U.S. warships' vulnerabilities to attacks with chemical or biological warfare agents and to develop procedures to respond to such attacks while maintaining a war-fighting capability. [ 8 ] The Department of Defense (DoD) states that Project 112 was initiated out of concern for the ability of the United States to protect and defend against potential CB threats. [ 8 ] Project 112 consisted of both land-based and sea-based tests. The sea-based tests, called Project SHAD, were primarily launched from other ships such as the USS Granville S. Hall (YAG-40) and USS George Eastman (YAG-39) , Army tugboats, submarines, or fighter aircraft, and were designed to identify U.S. warships' vulnerabilities to attacks with chemical or biological warfare agents and to develop decontamination and other methods to counter such attacks while maintaining a war-fighting capability. [ 8 ] The classified information related to SHAD was not completely cataloged or located in one facility. Furthermore, the Deseret Test Center was closed in the 1970s, and the search for 40-year-old documents and records kept by different military services in different locations was a challenge to researchers. [ 8 ] A fact sheet was developed for each test that was conducted, and when a test cancellation was not documented, a cancellation analysis was developed outlining the logic used to presume that the test had been cancelled. [ 8 ]
The existence of Project 112 (along with the related Project SHAD) was categorically denied by the military until May 2000, when a CBS Evening News investigative report produced dramatic revelations about the tests. This report caused the Department of Defense and the Department of Veterans Affairs to launch an extensive investigation of the experiments, and reveal to the affected personnel their exposure to toxins.
Revelations concerning Project SHAD were first exposed by independent producer and investigative journalist Eric Longabardi . Longabardi's six-year investigation into the still secret program began in early 1994. It ultimately resulted in a series of investigative reports produced by him, which were broadcast on the CBS Evening News in May 2000. After the broadcast of these exclusive reports, the Pentagon and Veteran's Administration opened their own ongoing investigations into the long classified program. In 2002, Congressional hearings on Project SHAD, in both the Senate and House, further shed media attention on the program. In 2002, a class action federal lawsuit was filed on behalf of the US sailors exposed in the testing. Additional actions, including a multi-year medical study, were conducted by National Academy of Sciences /Institute of Medicine to assess the potential medical harm caused to the thousands of unwitting US Navy sailors, civilians, and others who were exposed in the secret testing. The results of that study were finally released in May 2007.
Because most of the participants that were involved with Project 112 and SHAD were unaware of any tests being done, no effort was made to ensure the informed consent of the military personnel. [ 9 ] The US Department of Defense (DoD) conducted testing of agents in other countries that were considered too unethical to perform within the continental United States. Until 1998, the Department of Defense stated officially that Project SHAD did not exist. Because the DoD refused to acknowledge the program, surviving test subjects have been unable to obtain disability payments for health issues related to the project. US Representative Mike Thompson said of the program and the DoD's effort to conceal it, "They told me – they said, but don't worry about it, we only used simulants. And my first thought was, well, you've lied to these guys for 40 years, you've lied to me for a couple of years. It would be a real leap of faith for me to believe that now you're telling me the truth." [ 10 ]
The Department of Veterans Affairs commenced a three-year study comparing known SHAD-affected veterans to veterans of similar ages who were not involved in any way with SHAD or Project 112. The study cost approximately US$3 million, and results are being compiled for future release. DoD has committed to providing the VA with the relevant information it needs to settle benefits claims as quickly and efficiently as possible and to evaluate and treat veterans who were involved in those tests. This required analyzing historical documents recording the planning and execution of Project 112/SHAD tests. [ 8 ]
The released historical information about Project 112 from DoD consists of summary fact sheets rather than original documents or maintained federal information. As of 2003, 28 fact sheets have been released, focusing on the Deseret Test Center in Dugway, Utah , which was built entirely for Project 112/SHAD and was closed after the project was finished in 1973.
Original records are missing or incomplete. For example, a 91-meter aerosol test tower on Ursula Island in the Philippines was sprayed with "aerosols" by an F-4E; the test appears in released original Project SHAD documentation, but without a fact sheet or any further explanation or disclosure as to the nature of the test or even what it was called. [ 11 ]
Author Sheldon H. Harris researched the history of Japanese biological warfare and the American cover-up extensively. Harris and other scholars found that U.S. intelligence authorities had seized the Japanese researchers' archive after the technical information was provided by Japan. The information was transferred under an arrangement in which it was kept secret and war crimes charges were not pursued.
The arrangement with the United States concerning Japanese WMD research provided extensive Japanese technical information in exchange for not pursuing certain charges and also allowed Japan's government to deny knowledge of the use of these weapons by Japan's military in China during World War II. [ 12 ] German scientists in Europe also skipped war crimes charges and went to work as U.S. employed intelligence agents and technical experts in an arrangement known as Operation Paperclip . [ 13 ]
The U.S. would not cooperate when the Soviet Union attempted to pursue war crimes charges against the Japanese. General Douglas MacArthur denied the U.S. military had any captured records on Japan's military biological program. "The U.S. denial was absolutely misleading but technically correct as the Japanese records on biological warfare were then in the custody of U.S intelligence agencies rather than in possession of the military". [ 3 ] A formerly top secret report by the U.S. War Department at the close of World War II clearly stipulates that the United States obtained Japan's military technical information on biological warfare experimentation against humans, plants, and animals in exchange for war crimes immunity. [ 13 ] The War Department report notes that, "The voluntary imparting of this BW information may serve as a forerunner for obtaining much additional information in other fields of research." [ 14 ] Armed with Nazi and Imperial Japanese biowarfare know-how, the United States government and its intelligence agencies began conducting widespread field testing of potential CBW capabilities on American cities, crops, and livestock. [ 3 ]
It is known that Japanese scientists were working at the direction of Japan's military and intelligence agencies on advanced research projects of the United States, including America's covert biomedical and biowarfare programs, from the end of World War II through at least the 1960s. [ 15 ] [ 3 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ]
The U.S. General Accounting Office (GAO) in September 1994 found that between 1940 and 1974, DOD and other national security agencies studied "hundreds, perhaps thousands" of weapons tests and experiments involving large area coverage of hazardous substances. [ 20 ] The report states:
...Dugway Proving Ground is a military testing facility located approximately 80 miles (130 km) from Salt Lake City. For several decades, Dugway has been the site of testing for various chemical and biological agents. From 1951 through 1969, hundreds, perhaps thousands of open-air tests using bacteria and viruses that cause disease in human, animals, and plants were conducted at Dugway... It is unknown how many people in the surrounding vicinity were also exposed to potentially harmful agents used in open-air tests at Dugway. [ 20 ]
Innocent civilians in cities, on subways and at airports were sprayed with disease-carrying mosquitoes and "aerosols" containing bacteria or viruses, or were exposed to a variety of dangerous chemical, biological and radiological agents, as well as simulant agents that were later found to be more dangerous than first thought. [ 20 ] [ 21 ]
Precise information on the number of tests, experiments, and participants is not available and the exact number of veterans exposed will probably never be known. [ 22 ] [ 23 ]
On December 2, 2002, President George W. Bush signed Public Law 107–314, the Bob Stump National Defense Authorization Act (NDAA) for Fiscal Year 2003 which included Section 709 entitled Disclosure of Information on Project 112 to Department of Veterans Affairs. Section 709 required disclosure of information concerning Project 112 to United States Department of Veterans Affairs (DVA) and the General Accounting Office (GAO).
Public Law 107–314 required the identification and release of not only Project 112 information to VA but also that of any other projects or tests where a service member might have been exposed to a CBW agent and directed The Secretary of Defense to work with veterans and veterans service organizations to identify the other projects or tests conducted by the Department of Defense that may have exposed members of the Armed Forces to chemical or biological agents. [ 24 ] However, the issues surrounding the test program were not resolved by the passage of the law and "the Pentagon was accused of continuing to withhold documents on Cold War chemical and biological weapons tests that used unsuspecting veterans as "human samplers" after reporting to Congress it had released all medically relevant information." [ 25 ]
A 2004 GAO report revealed that of the participants who were identified from Project 112, 94 percent were from ship-based tests of Project SHAD that comprised only about one-third of the total number of tests conducted. [ 23 ] The Department of Defense informed the Veterans Administration that Project 112/SHAD and Mustard Gas programs have been officially closed as of June 2008 while Edgewood Arsenal testing remains open as DoD continues to identify Veterans who were "test participants" in the program. [ 26 ] DoD's current effort to identify Cold War exposures began in 2004 and is endeavoring to identify all non-Project 112/SHAD veterans exposed to chemical and biological substances due to testing and accidents from World War II through 1975. [ 22 ]
"America has a sad legacy of weapons testing in the Pacific...people were removed from their homes and their islands used as targets." While this statement during congressional testimony during the Department of Defense's inquiry into Project 112 was referring a completely different and separate testing program, there are common concerns about potential adverse health impacts and the timely release of information. [ 27 ] Congress was unsatisfied with the DOD's unwillingness to disclose information relating to the scope of America's chemical and biological warfare past and provide the information necessary to assess and deal with the risks to public safety and U.S. service members' health that CBW testing may have posed or continue to pose. [ 27 ]
A Government Accounting Office May 2004 report, Chemical and Biological Defense: DOD Needs to Continue to Collect and Provide Information on Tests and Potentially Exposed Personnel states:
During the 1962–74 time period, the Department of Defense (DOD) conducted a classified chemical and biological warfare test program, called Project 112, that might have exposed U.S. service members and others including DOD civilian personnel, DOD contractors, and foreign nationals to chemical or biological agents employed in these tests... While there is no database that contains information concerning the biological and chemical tests that have been conducted, we determined that hundreds of such classified tests and research projects were conducted outside Project 112 while it was ongoing. In addition, information from various sources shows that personnel from all services were involved in chemical and biological testing.
We learned during this review that hundreds of chemical and biological tests similar to those conducted under Project 112 were conducted during the same time period...
This study listed 31 biological field tests performed at various military installations... The study did not quantify the number of test participants nor did it identify them.
In addition, we reported in 1993 and 1994 that hundreds of radiological, chemical, and biological tests were conducted in which hundreds of thousands of people were used as test subjects.
We also reported that the Army Chemical Corps conducted a classified medical research program for developing incapacitating agents. This program involved testing nerve agents, nerve agent antidotes, psycho chemicals, and irritants... In total, Army documents identified 7,120 Army and Air Force personnel who participated in these tests. Further, GAO concluded that precise information on the scope and the magnitude of tests involving human subjects was not available, and the exact number of human subjects might never be known. [ 23 ]
On appeal in Vietnam Veterans of America v. Central Intelligence Agency , a panel majority held in July 2015 that Army Regulation 70-25 (AR 70–25) created an independent duty to provide ongoing medical care to veterans who many years ago participated in U.S. chemical and biological testing programs. Prior to the finding that the Army is required to provide medical care long after a veteran last participated in a testing program was a 2012 finding that the Army has an ongoing duty to seek out and provide "notice" to former test participants of any new information that could potentially affect their health. [ 26 ] The case was initially brought forward by concerned veterans who participated in the Edgewood Arsenal human experiments .
Corroborating suspicions of Project 112 activities on Okinawa include "An Organizational History of the 267th Chemical company", which was made available by the U.S. Army Heritage and Education Center to Yellow Medicine County, Minnesota , Veteran's Service Officer Michelle Gatz in 2012. [ 28 ] According to the document, the 267th Chemical Company was activated on Okinawa on December 1, 1962, as the 267th Chemical Platoon (SVC) was billeted at Chibana Depot. During this deployment, "Unit personnel were actively engaged in preparing RED HAT area, site 2 for the receipt and storage of first increment items, [shipment] "YBA", DOD Project 112." The company received further shipments, code named YBB and YBF, which according to declassified documents also included sarin, VX, and mustard gas. [ 28 ]
The late author Sheldon H. Harris in his book "Factories of Death: Japanese Biological Warfare, 1932–1945, and the American cover up" wrote about Project 112:
The test program, which began in fall 1962 and which was funded at least through fiscal year 1963, was considered by the Chemical Corps to be "an ambitious one." The tests were designed to cover "not only trials at sea, but Arctic and tropical environmental tests as well." The tests, presumably, were conducted at what research officers designated, but did not name, "satellite sites." These sites were located both in the continental United States and in foreign countries. The tests conducted there were aimed at both human, animal and plant reaction to BW. It is known that tests were undertaken in Cairo, Egypt, Liberia, in South Korea, and in Japan's satellite province of Okinawa in 1961, or earlier. This was at least one year prior to the creation of Project 112. The Okinawa anti-crop research project may lend some insight to the larger projects 112 sponsored. BW experts in Okinawa and "at several sites in the Midwest and south:" conducted in 1961 "field tests" for wheat rust and rice blast disease. These tests met with "partial success" in the gathering of data, and led, therefore, to a significant increase in research dollars in fiscal year 1962 to conduct additional research in these areas. The money was devoted largely to developing "technical advice on the conduct of defoliation and anti-crop activities in Southeast Asia." By the end of fiscal year 1962, the Chemical Corps had let or were negotiating contracts for over one thousand chemical defoliants . The Okinawa tests evidently were fruitful.
The U.S. government has previously disclosed information on chemical and biological warfare tests it held at sea and on land yet new-found documents show that the U.S. Army tested biological weapons in Okinawa in the early 1960s, when the prefecture was still under U.S. rule. During these tests, conducted at least a dozen times between 1961 and 1962, rice blast fungus was released by the Army using "a midget duster to release inoculum alongside fields in Okinawa and Taiwan," in order to measure effective dosages requirements at different distances and the negative effects on crop production. [ 29 ] [ 30 ] Rice blast or Pyricularia oryzae produces a mycotoxin called tenuazonic acid which has been implicated in human and animal disease. [ 31 ]
A number of studies, reports and briefings have been done on chemical and biological warfare exposures. A list of the major documents is provided below. [ 32 ]
This article incorporates public domain material from websites or documents of the United States government . | https://en.wikipedia.org/wiki/Project_112 |
Project 25 ( P25 or APCO-25 ) is a suite of standards for interoperable Land Mobile Radio (LMR) systems designed primarily for public safety users. The standards allow analog conventional, digital conventional, digital trunked , or mixed-mode systems. P25 was originally developed for public safety users in the United States but has gained acceptance for public safety, security, public service, and some commercial applications worldwide. [ 1 ] P25 radios are a direct replacement for analog UHF (typically FM ) radios, adding the ability to transfer data as well as voice for more natural implementations of encryption and text messaging . P25 radios are commonly implemented by dispatch organizations, such as police , fire , ambulance and emergency rescue service, using vehicle-mounted radios combined with repeaters and handheld walkie-talkie use.
Starting around 2012, products became available with the newer phase 2 modulation protocol, the older protocol known as P25 became P25 phase 1. P25 phase 2 products use the more advanced AMBE2+ vocoder, which allows audio to pass through a more compressed bitstream and provides two TDMA voice channels in the same RF bandwidth (12.5 kHz), while phase 1 can provide only one voice channel. The two protocols are not compatible. However, P25 Phase 2 infrastructure can provide a "dynamic transcoder" feature that translates between Phase 1 and Phase 2 as needed. In addition to this, phase 2 radios are backwards compatible with phase 1 modulation and analog FM modulation, per the standard. The European Union has created the Terrestrial Trunked Radio (TETRA) and Digital mobile radio (DMR) protocol standards, which fill a similar role to Project 25.
Public safety radios have been upgraded from analog FM to digital since the 1990s because of an increased use of data on radio systems for such features as GPS location, trunking , text messaging, metering, and encryption with different levels of security.
Various user protocols and different public safety radio spectrum made it difficult for Public Safety agencies to achieve interoperability and widespread acceptance. However, lessons learned during disasters the United States faced in the past decades have forced agencies to assess their requirements during a disaster when basic infrastructure has failed. To meet the growing demands of public safety digital radio communication, the United States Federal Communications Commission (FCC) at the direction of the United States Congress initiated a 1988 inquiry for recommendations from users and manufacturers to improve existing communication systems. [ 2 ] [ 3 ] Based on the recommendations, to find solutions that best serve the needs of public safety management, in October 1989 APCO Project 25 came into existence in a coalition with: [ 2 ] [ 4 ]
A steering committee consisting of representatives from the above-mentioned agencies along with FPIC ( Department of Homeland Security Federal Partnership for Interoperable Communication), Coast Guard and the Department of Commerce 's National Institute of Standards and Technology (NIST), Office of Law Enforcement Standards was established to decide the priorities and scope of technical development of P25. [ 4 ]
Interoperable emergency communication is integral to initial response, public health, community safety, national security and economic stability. Of all the problems experienced during disaster events, one of the most serious is poor communication due to lack of appropriate and efficient means to collect, process, and transmit important information in a timely fashion. In some cases, radio communication systems are incompatible and inoperable not just within a jurisdiction but within departments or agencies in the same community. [ 6 ] Non-operability occurs due to use of outdated equipment, limited availability of radio frequencies, isolated or independent planning, lack of coordination and cooperation between agencies, community priorities competing for resources, and funding, ownership, and control of communications systems. [ 7 ] Recognizing and understanding this need, Project 25 (P25) was initiated collaboratively by public safety agencies and manufacturers to address the issue with emergency communication systems . P25 is a collaborative project to ensure that two-way radios are interoperable. The goal of P25 is to enable public safety responders to communicate with each other and, thus, achieve enhanced coordination, timely response, and efficient and effective use of communications equipment. [ 8 ]
P25 was established to address the need for common digital public safety radio communications standards for first-responders and homeland security/emergency response professionals. The Telecommunications Industry Association 's TR-8 engineering committee facilitates such work through its role as an ANSI-accredited standards development organization (SDO) and has published the P25 suite of standards as the TIA-102 series of documents, which now include 49 separate parts on Land Mobile Radio and TDMA implementations of the technology for public safety. [ 9 ]
Project 25 (P25) is a set of standards produced through the joint efforts of the Association of Public Safety Communications Officials International (APCO), the National Association of State Telecommunications Directors (NASTD), selected federal agencies and the National Communications System (NCS), and standardized under the Telecommunications Industry Association (TIA)... The P25 suite of standards involves digital Land Mobile Radio ( LMR ) services for local, state/provincial and national (federal) public safety organizations and agencies...
P25 is applicable to LMR equipment authorized or licensed, in the U.S., under NTIA or FCC rules and regulations.
Although developed primarily for North American public safety services, P25 technology and products are not limited to public safety alone and have also been selected and deployed in other private system applications worldwide. [ 10 ]
P25-compliant systems are being increasingly adopted and deployed throughout the United States, as well as other countries. Radios can communicate in analog mode with legacy radios, and in either digital or analog mode with other P25 radios. Additionally, the deployment of P25-compliant systems will allow for a high degree of equipment interoperability and compatibility.
P25 standards use the proprietary Improved Multi-Band Excitation (IMBE) and Advanced Multi-Band Excitation (AMBE+2) voice codecs which were designed by Digital Voice Systems, Inc. to encode/decode the analog audio signals. It is rumored that the licensing cost for the voice-codecs that are used in P25 standard devices is the main reason that the cost of P25 compatible devices is so high. [ 11 ]
P25 may be used in "talk around" mode without any intervening equipment between two radios, in conventional mode where two radios communicate through a repeater or base station without trunking or in a trunked mode where traffic is automatically assigned to one or more voice channels by a Repeater or Base Station.
The protocol supports the use of Data Encryption Standard (DES) encryption (56-bit), two-key Triple-DES encryption, three-key Triple-DES encryption, Advanced Encryption Standard (AES) encryption with key lengths up to 256 bits, RC4 ( 40 bits , sold by Motorola as Advanced Digital Privacy ), or no encryption.
The RC4-based Advanced Digital Privacy can withstand only casual attackers. It is supposed to offer 40-bit security, meaning an attacker must test at most 2^40 possible keys to find the right one. This level of encryption offers no real protection, and publicly available software can recover the key. [ 12 ]
The protocol also supports the ACCORDION 1.3, BATON , Firefly , MAYFLY and SAVILLE Type 1 ciphers.
The P25 User Needs Working Group (UNWG), which represents P25 users, identifies user needs for the P25 standards, which are communicated to the P25 Steering Committee. The P25 Steering Committee adds identified P25 user needs to the Statement of P25 User Needs (SPUN) document. The TIA TR-8 Engineering Committee and its subcommittees, which represents manufacturers in the P25 industry, is then expected to develop TIA-102 standards that satisfy identified P25 user needs. [ 13 ]
Once developed, TIA-102 standards may also subsequently be adopted by the P25 Steering Committee as P25 standards, and adopted by ANSI as American National Standards; however, TIA-102 standards do not automatically become P25 standards, and some TIA-102 standards have never been adopted by ANSI. [ 14 ] The TIA-102 standards, P25 standards, and associated ANSI standards have not been adopted by ISO as de jure international standards; however, P25 systems have been deployed in 83 countries, so they nonetheless serve as one set of de facto international standards alongside other international Land Mobile Radio (LMR) standards such as TETRA and DMR . [ 15 ]
P25's Suite of Standards specify eight open interfaces between the various components of a land mobile radio system. These interfaces are:
P25-compliant technology has been deployed over two main phases with future phases yet to be finalized.
Phase 1 radio systems operate in 12.5 kHz digital mode using a single user per channel access method. Phase 1 radios use Continuous 4 level FM (C4FM) modulation—a special type of 4 FSK modulation [ 17 ] —for digital transmissions at 4,800 baud and 2 bits per symbol, yielding 9,600 bits per second total channel throughput. Of this 9,600, 4,400 is voice data generated by the IMBE codec, 2,800 is forward error correction, and 2,400 is signalling and other control functions. Receivers designed for the C4FM standard can also demodulate the "Compatible quadrature phase shift keying " (CQPSK) standard, as the parameters of the CQPSK signal were chosen to yield the same signal deviation at symbol time as C4FM. Phase 1 uses the IMBE voice codec.
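As an illustration of the two-bits-per-symbol signalling, the sketch below maps dibits to C4FM frequency deviations. The particular mapping used (01 to +1800 Hz, 00 to +600 Hz, 10 to −600 Hz, 11 to −1800 Hz) is the commonly cited Phase 1 convention and is treated here as an assumption for illustration, not as a normative statement of the standard:

# Map a bit stream to C4FM symbol deviations at 4,800 symbols per second.
# The dibit-to-deviation table below is an assumed (commonly cited) mapping.
DIBIT_TO_HZ = {(0, 1): +1800, (0, 0): +600, (1, 0): -600, (1, 1): -1800}

def bits_to_deviations(bits):
    assert len(bits) % 2 == 0, "C4FM carries 2 bits per symbol"
    return [DIBIT_TO_HZ[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

print(bits_to_deviations([0, 1, 1, 1, 0, 0, 1, 0]))  # [1800, -1800, 600, -600]
print(4800 * 2, "bits per second")                   # 9600, matching the Phase 1 channel rate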
These systems involve standardized service and facility specifications, ensuring that any manufacturers' compliant subscriber radio has access to the services described in such specifications. Abilities include backward compatibility and interoperability with other systems, across system boundaries, and regardless of system infrastructure. In addition, the P25 suite of standards provides an open interface to the radio frequency (RF) subsystem to facilitate interlinking of different vendors' systems.
To improve spectrum use, P25 Phase 2 was developed for trunking systems using a 2-slot TDMA scheme and is now required for all new trunking systems in the 700 MHz band. [ 18 ] Phase 2 uses the AMBE+2 voice codec to reduce the needed bitrate so that one voice channel will only require 6,000 bits per second (including error correction and signalling). Phase 2 is not backwards compatible with Phase 1 (due to the TDMA operation), although multi-mode TDMA radios and systems are capable of operating in Phase 1 mode when required, if enabled. A subscriber radio cannot use TDMA transmission without a synchronization source; therefore direct radio to radio communication resorts to conventional FDMA digital operation. Multi-band subscriber radios can also operate on narrow-band FM as a lowest common denominator between almost any two way radios. This makes analog narrow-band FM the de facto "interoperability" mode for some time.
Originally, the implementation of Phase 2 was planned to split the 12.5 kHz channel into two 6.25 kHz slots using Frequency-Division Multiple Access (FDMA). However, it proved more advantageous to use existing 12.5 kHz frequency allocations in Time Division Multiple Access (TDMA) mode for a number of reasons: it allows subscriber radios to save battery life by transmitting only half the time, which in turn lets the subscriber radio listen and respond to system requests between transmissions.
Phase 2 is known as 6.25 kHz "bandwidth equivalent", which satisfies an FCC requirement for voice transmissions to occupy less bandwidth. Voice traffic on a Phase 2 system still occupies the full 12.5 kHz per frequency allocation, as on a Phase 1 system, but it is carried at a faster data rate of 12 kbit/s, allowing two simultaneous voice transmissions. Subscriber radios likewise transmit within the full 12.5 kHz, but in an on/off repeating fashion, resulting in half the transmission time and thus an equivalent of 6.25 kHz per radio. This is accomplished using the AMBE voice coder, which operates at half the rate of the Phase 1 IMBE voice coder. [ 19 ]
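A brief arithmetic check of the figures above (the numbers are taken directly from the text; the sketch only restates the calculation behind the "6.25 kHz equivalent" claim):

# Phase 2 "bandwidth equivalence": one 12.5 kHz channel carries two TDMA voice paths,
# and the 12 kbit/s channel rate splits into two 6 kbit/s voice streams.
channel_bandwidth_khz = 12.5
tdma_slots = 2
channel_rate_bps = 12_000
per_voice_bps = 6_000

print(channel_bandwidth_khz / tdma_slots, "kHz equivalent per voice path")  # 6.25
print(channel_rate_bps // tdma_slots == per_voice_bps)                      # True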
From 2000 to 2009, the European Telecommunications Standards Institute (ETSI) and TIA were working collaboratively on the Public Safety Partnership Project or Project MESA (Mobility for Emergency and Safety Applications), [ 20 ] which sought to define a unified set of requirements for a next-generation aeronautical and terrestrial digital wideband/broadband radio standard that could be used to transmit and receive voice, video, and high-speed data in wide-area, multiple-agency networks deployed by public safety agencies. [ 21 ] [ 22 ]
The final functional and technical requirements have been released by ETSI [ 23 ] and were expected to shape the next phases of American Project 25 and European DMR, dPMR , and TETRA, but no interest from the industry followed, since the requirements could not be met by available commercial off-the-shelf technology, and the project was closed in 2010. [ citation needed ]
During the United States 2008 wireless spectrum auction , the FCC allocated 20 MHz of the 700 MHz UHF radio band spectrum freed in the digital TV transition to public safety networks. The FCC expects providers to employ LTE for high-speed data and video applications. [ 24 ]
P25 systems do not have to resort to using in band signaling such as Continuous Tone-Coded Squelch System (CTCSS) tone or Digital-Coded Squelch (DCS) codes for access control. Instead they use what is called a Network Access Code (NAC) which is included outside of the digital voice frame. This is a 12-bit code that prefixes every packet of data sent, including those carrying voice transmissions.
The NAC is a feature similar to CTCSS or DCS for analog radios. That is, radios can be programmed to only pass audio when receiving the correct NAC. NACs are programmed as a three-hexadecimal-digit code that is transmitted along with the digital signal being transmitted.
Since the NAC is a three-hexadecimal-digit number (12 bits), there are 4,096 possible NACs for programming, far more than all analog methods combined.
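As a toy sketch of how a receiver might compare a received NAC with its programmed value (the 0x293 value below is just an example of a three-hexadecimal-digit code, not a recommendation):

# Treat the Network Access Code as a 12-bit value carried with each data unit and
# unsquelch only when it matches the radio's programmed NAC. Values are illustrative.
PROGRAMMED_NAC = 0x293  # example three-hex-digit NAC

def accept(received_nac, programmed_nac=PROGRAMMED_NAC):
    return (received_nac & 0xFFF) == programmed_nac

print(accept(0x293))  # True: NAC matches, audio passes
print(accept(0x294))  # False: NAC mismatch, receiver stays squelched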
Three of the possible NACs have special functions:
Adoption of these standards has been slowed by budget problems in the US; however, funding for communications upgrades from the Department of Homeland Security usually requires migrating to Project 25. It is also being used in other countries worldwide including Australia, New Zealand, Brazil, [ 25 ] Canada, India and Russia. [ 26 ] As of mid-2004 there were 660 networks with P25 deployed in 54 countries. [ 26 ] At the same time, in 2005, the European Terrestrial Trunked Radio (TETRA) was deployed in sixty countries, and it is the preferred choice in Europe, China, and other countries. [ 26 ] This was largely based on TETRA systems being many times cheaper than P25 systems ($900 vs $6,000 for a radio) [ 26 ] at the time. However, P25 radio prices are rapidly approaching parity with TETRA radio prices through increased competition in the P25 market. The majority of P25 networks are based in North America, where P25 has the advantage that a system has the same coverage and frequency bandwidth as the earlier analog systems that were in use, so that channels can be easily upgraded one by one. [ 26 ] Some P25 networks also allow intelligent migration from analog radios to digital radios operating within the same network. Both P25 and TETRA can offer varying degrees of functionality, depending on available radio spectrum, terrain and project budget.
While interoperability is a major goal of P25, many P25 features present interoperability challenges. In theory, all P25 compliant equipment is interoperable. In practice, interoperable communications isn't achievable without effective governance, standardized operating procedures, effective training and exercises, and inter-jurisdictional coordination. The difficulties inherent in developing P25 networks using features such as digital voice, encryption, or trunking sometimes result in feature-backlash and organizational retreat to minimal "feature-free" P25 implementations which fulfill the letter of any Project 25 migration requirement without realizing the benefits thereof. Additionally, while not a technical issue per se, frictions often result from the unwieldy bureaucratic inter-agency processes that tend to develop in order to coordinate interoperability decisions.
The United States DHS 's Project 25 Compliance Assessment Program (P25 CAP) [ 32 ] aims for interoperability among different vendors by testing to P25 Standards. P25 CAP, a voluntary program, allows suppliers to publicly attest to their products' compliance. [ 32 ]
Independent, accredited labs test vendors' P25 radios for compliance with P25 Standards, derived from TIA-102 Standards and following TIA-TR8 testing procedures. Only approved products [ 33 ] may be purchased using US federal grant dollars. [ 34 ] Generally, non-approved products should not be trusted to meet P25 standards for performance, conformance, and interoperability.
P25 product labeling varies. On their own, "P25" and "P25 compliant" carry no guarantee, while high standards apply for a vendor to claim a product is "P25 CAP compliant" or "P25 compliant with the Statement of Requirements (P25 SOR)". [ 35 ]
At the Securecomm 2011 conference in London, security researcher Steve Glass presented a paper, written by himself and co-author Matt Ames, that explained how DES-OFB and Motorola's proprietary ADP (RC4 based) ciphers were vulnerable to brute force key recovery. [ 36 ] This research was the result of the OP25 project [ 37 ] which uses GNU Radio [ 38 ] and the Ettus Universal Software Radio Peripheral (USRP) [ 39 ] to implement an open source P25 packet sniffer and analyzer. The OP25 project was founded by Steve Glass in early 2008 while he was performing research into wireless networks as part of his PhD thesis.
The paper is available for download from the NICTA website. [ 40 ]
In 2011, the Wall Street Journal published an article describing research into security flaws of the system, including a user interface that makes it difficult for users to recognize when transceivers are operating in secure mode. [ 41 ] According to the article, "(R)esearchers from the University of Pennsylvania overheard conversations that included descriptions of undercover agents and confidential informants , plans for forthcoming arrests and information on the technology used in surveillance operations." The researchers found that the messages sent over the radios are sent in segments, and blocking just a portion of these segments can result in the entire message being jammed. "Their research also shows that the radios can be effectively jammed (single radio, short range) using a highly modified pink electronic child's toy and that the standard used by the radios 'provides a convenient means for an attacker' to continuously track the location of a radio's user. With other systems, jammers have to expend a lot of power to block communications, but the P25 radios allow jamming at relatively low power, enabling the researchers to prevent reception using a $30 toy pager designed for pre-teens."
The report was presented at the 20th USENIX Security Symposium in San Francisco in August 2011. [ 42 ] The report noted a number of security flaws in the Project 25 system, some specific to the way it has been implemented and some inherent in the security design.
The report did not find any breaks in the P25 encryption; however, the researchers observed large amounts of sensitive traffic being sent in the clear due to implementation problems. They found the switch markings for secure and clear modes difficult to distinguish (∅ vs. o). This is exacerbated by the fact that P25 radios set to secure mode continue to operate without issuing a warning if another party switches to clear mode. In addition, the report's authors said many P25 systems change keys too often, increasing the risk that an individual radio on a net may not be properly keyed, forcing all users on the net to transmit in the clear to maintain communications with that radio.
One design choice was to use lower levels of error correction for portions of the encoded voice data that are deemed less critical for intelligibility. As a result, bit errors may be expected in typical transmissions. While harmless for voice communication, the presence of such errors forces the use of stream ciphers , which can tolerate bit errors, and prevents the use of a standard technique, message authentication codes (MACs), for protecting message integrity against stream cipher attacks . The varying levels of error correction are implemented by breaking P25 message frames into subframes. This allows an attacker to jam entire messages by transmitting only during certain short subframes that are critical to reception of the entire frame. As a result, an attacker can effectively jam Project 25 signals with average power levels much lower than the power levels used for communication. Such attacks can be targeted at encrypted transmissions only, forcing users to transmit in the clear.
Because Project 25 radios are designed to work in existing two-way radio frequency channels, they cannot use spread spectrum modulation, which is inherently jam-resistant. An optimal spread spectrum system can require an effective jammer to use 1,000 times as much power (30 dB more) as the individual communicators. According to the report, a P25 jammer could operate effectively at 1/25th of the power (14 dB less) of the communicating radios. The authors developed a proof-of-concept jammer using a Texas Instruments CC1110 single-chip radio, found in an inexpensive toy. [ 42 ]
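The power ratios quoted in the report translate directly into the decibel figures given; a quick check of the arithmetic using the standard definition dB = 10·log10(ratio):

```python
import math

def to_db(power_ratio: float) -> float:
    """Convert a power ratio to decibels: dB = 10 * log10(ratio)."""
    return 10 * math.log10(power_ratio)

# Spread spectrum: a jammer needing 1,000x the communicators' power.
print(round(to_db(1000), 1))    # 30.0  -> "30 dB more"

# P25 per the report: a jammer using 1/25th of the power.
print(round(to_db(1 / 25), 1))  # -14.0 -> "14 dB less"
```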
Certain metadata fields in the Project 25 protocol are not encrypted, allowing an attacker to perform traffic analysis to identify users. Because Project 25 radios respond to bad data packets addressed to them with a retransmission request, an attacker can deliberately send bad packets forcing a specific radio to transmit even if the user is attempting to maintain radio silence . Such tracking by authorized users is considered a feature of P25, referred to as "presence". [ 43 ]
The report's authors concluded by saying "It is reasonable to wonder why this protocol, which was developed over many years and is used for sensitive and critical applications, is so difficult to use and so vulnerable to attack." The authors separately issued a set of recommendations for P25 users to mitigate some of the problems found. [ 44 ] These include disabling the secure/clear switch, using Network Access Codes to segregate clear and encrypted traffic, and compensating for the unreliability of P25 over-the-air rekeying by extending key life.
P25 and TETRA are used in more than 53 countries worldwide for both public safety and private sector radio networks. There are some differences in features and capacities: [ 45 ] [ 46 ] [ 47 ] | https://en.wikipedia.org/wiki/Project_25 |
Project 4.1 was the designation for a medical study and experimentation program conducted by the United States on residents of the Marshall Islands exposed to radioactive fallout from the 1 March 1954 Castle Bravo nuclear test at Bikini Atoll , which had an unexpectedly large yield . Government and mainstream historical sources point to the study being organized on 6 or 7 March 1954, less than a week after the Bravo shot.
In the wake of the Castle Bravo detonation, a new research program was added to the Castle weapons-effects studies. Program 4, "Biomedical effects," was to include one project, Project 4.1, titled "Study of Response of Human Beings exposed to Significant Beta and Gamma Radiation due to Fall-out from High-Yield Weapons." Eugene P. Cronkite of the National Naval Medical Center was designated as Project Officer. [ 1 ] Cronkite's instructions stressed the importance of secrecy surrounding the project:
... the project is classified SECRET RESTRICTED DATA . Due to possible adverse public reaction, you will specifically instruct all personnel in this project to be particularly careful not to discuss the purpose of this project and its background or findings with any except those who have specific " need to know ." [ 2 ]
The purpose of the project, as a 1982 Defense Nuclear Agency report explained, was both medical and scientific:
The purposes of [Project 4.1] were to (1) evaluate the severity of radiation injury to the human beings exposed, (2) provide for all necessary medical care, and (3) conduct a scientific study of radiation injuries to human beings. [ 3 ]
As a Department of Energy Committee writing on the human radiation experiments wrote, "It appears to have been almost immediately apparent to the AEC and the Joint Task Force running the Castle series that research on radiation effects could be done in conjunction with the medical treatment of the exposed populations." [ 4 ] The DOE report also concluded that "The dual purpose of what is now a DOE medical program has led to a view by the Marshallese that they were being used as 'guinea pigs' in a 'radiation experiment.'" [ 4 ]
Organizations involved in the project included the Naval Medical Research Institute , the Naval Radiological Defense Laboratory , Patrol Squadron 29, the Naval Air Station, Kwajalein, Los Alamos National Laboratory , the Applied Fisheries Laboratory at the University of Washington , and Hanford Atomic Power Operations . Three U.S. Navy ships were used in the project: USS Nicholas , USS Renshaw , and USS Philip . [ 3 ] The primary study of the Marshallese was terminated around 75 days after the time of exposure. In July 1954 a meeting at the Division of Biology at the U.S. Atomic Energy Commission decided to complete 6- and 12-month follow-up exposure studies, some of which were later written up as addendums to Project 4.1. [ 5 ]
Some Marshallese have alleged that the exposure of the Marshallese was premeditated. In 1972, Micronesian Representative Ataji Balos charged at the Congress of Micronesia that the exposure during Bravo was purposeful so that the AEC could develop medical capabilities for treating those exposed to fallout during nuclear war , and charged that the Marshallese were chosen because of their marginal status in the world at large. According to a U.S. internal transcription of Balos' talk, Balos alleged that "The U.S. chose to make guinea pigs out of our people because they are not white but some brown natives in some remote Pacific islands. Medical treatment that Rongelapese and Utrikese have been receiving is also highly questionable." [ 6 ] The AEC issued a staff comment denying these charges.
In 1994, a 1953 Castle Bravo program prospectus was found which included reference to Project 4.1 apparently written before the Bravo shot had occurred. The U.S. government responded that someone had gone back into the project list after the Bravo test to insert Project 4.1; thus, according to the U.S. government, the acts were not premeditated. All other U.S. documents point to Project 4.1 having been established after the Bravo test—most sources point to its having been organized on 7 March 1954. [ 7 ] The final Project 4.1 report began in its preface with the statement that "Operation CASTLE did not include a biomedical program" (it mentions this in discussing the ad hoc nature by which the project personnel were assembled). [ 8 ] All official and mainstream historical accounts of the Bravo test indicate that its high level of fallout was a result of a miscalculation in relation to its design and was not deliberate (see the Castle Bravo article for more information on the alleged accident).
Barton C. Hacker, the official historian of U.S. nuclear testing exposures (who is, in the end, very critical of the U.S. handling of the Bravo incident), characterized the controversy in the following way:
In March 1954, the AEC had quickly decided that learning how the Marshallese victims of Castle Bravo responded to their accidental exposure could be of immense medical and military value. Immediate action centered on seeing them evacuated and decontaminated, then cared for medically. But studies of their exposures and aftereffects also began. That effort became project 4.1 in the Castle experimental program. This unfortunate choice of terminology may help explain later charges that the AEC had deliberately exposed the Marshallese to observe the effects. Like the American radium dial painters of the 1920s and the Japanese of Hiroshima and Nagasaki in 1945, the Marshallese of 1954 inadvertently were to provide otherwise unobtainable data on the human consequences of high radiation exposures. Findings from project 4.1 soon began to appear in print. [ 9 ]
Controversy continues, however, fed by the legacy of mistrust sown by American nuclear testing in the Marshall Islands , which involved relocating hundreds of people and rendering several atolls uninhabitable. While most sources do not consider the exposure to have been intentional, there is no dispute that the United States carefully studied the exposed Marshallese without ever obtaining informed consent from the study subjects. This study of the Marshallese was in some cases beneficial for their treatment, and in other cases not. In these respects, the study of the exposed Marshallese reflects the same ethical lapses that characterized other secret human radiation experiments conducted by the Atomic Energy Commission in the 1940s and 1950s, which came to light only after the end of the Cold War .
According to the final Project 4.1 report, the Bravo test exposed 239 Marshallese on the Utirik , Rongelap , and Ailinginae Atolls to significant levels of radiation, and 28 Americans stationed on Rongerik Atoll were also exposed. Those on Rongelap Atoll were the most seriously affected, receiving approximately 175 rads of radiation before they were evacuated. Those on Ailinginae received 69 rads, those on Utirik received 14 rads, and the Americans on Rongerik received an average dose of 78 rads. [ 11 ] [ 12 ] [ 13 ] [ 14 ]
The results of the original Project 4.1 were published by the study's authors in professional medical journals in 1955, such as the Journal of the American Medical Association . [ 15 ]
In 2010 it was calculated that by sub-population, the projected proportion of cancers attributable to radiation from fallout from all nuclear tests conducted in the Marshall Islands is 55% (with a 28% to 69% uncertainty range) among 82 persons exposed in 1954 on Rongelap Atoll and Ailinginae Atoll . [ 16 ]
Most of the individuals exposed did not immediately show signs of radiation sickness , though within a few days other effects of significant radiation exposure manifested among the Rongelap and Ailinginae groups: loss of hair and significant skin damage, including "raw, weeping lesions", both consistent with radiation exposure. The lesions healed quickly, however. The report abstract concluded that "estimates of total body burden indicate that there is no long term hazard." [ 8 ]
Additional follow-up checks on the Marshallese studied in Project 4.1 have been conducted every year since 1954. Though the Marshallese experienced far milder immediate effects than the Japanese fishermen exposed to Bravo fallout on the fishing boat Daigo Fukuryū Maru , the long-term effects were more pronounced, as the Marshallese depended largely on subsistence living and were relocated to the sites of the testing at Bikini, Enewetak, and Rongelap, while the Japanese fishermen were returned to Japan. For the first decade after the test, the effects were ambiguous and statistically difficult to correlate with radiation exposure: miscarriages and stillbirths among exposed Rongelap women doubled in the first five years after the accident, [ medical citation needed ] but then returned to normal; some developmental difficulties and impaired growth appeared in children, [ medical citation needed ] but in no clear-cut pattern. In the decades that followed, though, the effects were undeniable. Children began to disproportionately develop thyroid cancer (due to exposure to radioiodines ), [ 17 ] and almost a third of those exposed developed neoplasms by 1974. [ 9 ] [ unreliable medical source? ]
This is a list of reports made under Project 4.1. This list is not exhaustive. | https://en.wikipedia.org/wiki/Project_4.1 |
Project 523 ( Chinese : 523项目 ) [ 1 ] is a code name for a 1967 secret military project of the People's Republic of China to find antimalarial medications . [ 2 ] Named after the date the project launched, 23 May, it addressed malaria , an important threat in the Vietnam War . At the behest of Ho Chi Minh , Prime Minister of North Vietnam , Zhou Enlai , the Premier of the People's Republic of China , convinced Mao Zedong , Chairman of the Chinese Communist Party , to start the mass project "to keep [the] allies' troops combat-ready", as the meeting minutes put it. More than 500 Chinese scientists were recruited. The project was divided into three streams. [ 3 ] The one for investigating traditional Chinese medicine discovered and led to the development of a class of new antimalarial drugs called artemisinins . [ 3 ] [ 4 ] Launched during and lasting throughout the Cultural Revolution , Project 523 was officially terminated in 1981.
For their high efficacy, safety and stability, artemisinins such as artemether and artesunate became the drugs of choice in treating falciparum malaria . The World Health Organization advocates their combination drugs and includes them in its List of Essential Medicines . Among the scientists of the project, Zhou Yiqing and his team at the Institute of Microbiology and Epidemiology of the Chinese Academy of Military Medical Sciences, were awarded the European Inventor Award of 2009 in the category "Non-European countries" for the development of Coartem (artemether-lumefantrine combination drug). [ 5 ] Tu Youyou of the Qinghaosu Research Center, Institute of Chinese Materia Medica, Academy of Traditional Chinese Medicine (now the China Academy of Traditional Chinese Medical Sciences), received both the 2011 Lasker-DeBakey Clinical Medical Research Award and 2015 Nobel Prize in Physiology or Medicine for her role in the discovery of artemisinin. [ 6 ]
The Vietnam War was fought between North Vietnam (with support from Communist countries such as the Soviet Union and China) and South Vietnam (with support from the United States and its allies). The conflict began in 1954 and had grown into large-scale battles by 1961. [ 7 ] Although in a stronger tactical position, the People's Army of Vietnam (North Vietnamese Army) and its allies in the South, the Viet Cong , suffered increasing mortality because of malaria epidemics . In some battlefields the disease would reduce military strength by half and, in severe cases, disable 90% of the troops. [ 8 ] North Vietnamese Prime Minister Ho Chi Minh asked Chinese Premier Zhou Enlai for medical help. The year before, party Chairman Mao Zedong had launched the Cultural Revolution , during which schools and universities were closed and scientists and intellectuals banished. [ 9 ] [ 10 ] Mao took Ho's plea seriously and approved a military project. On 23 May 1967, about six hundred people convened, including military personnel, scientists, and medical practitioners of Western and traditional Chinese medicine . The meeting marked the start of the military research programme, which received the code name Project 523, after the date (23 May) it launched. [ 2 ] The project was divided into three main streams: one for developing synthetic compounds, one for clinical studies (or infection control ), [ 3 ] and another for investigating traditional Chinese medicine. [ 11 ] Classified as a top-secret state mission, the project itself saved many scientists from the atrocities of the Cultural Revolution. [ 8 ]
As the first-line strategy, the troops were given synthetic drugs. Drug combinations using pyrimethamine and dapsone , pyrimethamine and sulfadoxine , and sulfadoxine and piperaquine phosphate were tested in the battlefield. [ 12 ] Because these drugs had serious adverse effects, [ 8 ] the primary focus shifted to examining traditional Chinese medicines and looking for new compounds. The first drug of interest was chángshān ( 常山 ), an extract from the roots of Dichroa febrifuga described in the Shennong Ben Cao Jing . Another early candidate was huanghuahao (sweet wormwood or Artemisia annua ). Both plants ultimately yielded compounds of lasting importance to modern pharmacology. [ 13 ] [ 14 ] [ 15 ]
The first interest was in chángshān, the root extract of Dichroa febrifuga . In the 1940s, Chinese scientists had shown that it was effective against different species of Plasmodium . [ 16 ] American scientists isolated febrifugine as its major active antimalarial compound. [ 17 ] The project scientists confirmed the antimalarial activity but found the compound unsuitable for human use: although more potent than quinine, it was also too toxic. [ 18 ] After the project, the compound remained under investigation, with attempts to discover suitable derivatives, [ 19 ] [ 20 ] [ 21 ] among which halofuginone is an effective drug against malaria, cancer, fibrosis and inflammatory disease. [ 22 ]
The fourth-century Chinese physician Ge Hong 's book Zhouhou Beiji Fang ( Chinese : 《肘後備急方》 ; lit. 'The Handbook of Prescriptions for Emergencies') described Artemisia annua extract, called qinghao , as a treatment of malarial fever. [ 23 ] Tu Youyou and her team were the first to investigate it. In 1971 they found that their extract from dried leaves (collected in Beijing) did not show any antimalarial activity. [ 4 ] On careful reading of Ge's description, they changed their extraction method to use fresh leaves at low temperature. Ge explicitly describes the recipe as: "qinghao, one bunch, take two sheng [2 × 0.2 L] of water for soaking it, wring it out, take the juice, ingest it in its entirety". [ 1 ] Following the findings of scientists at the Yunnan Institute of Pharmacology, they found that only fresh plant specimens collected in Sichuan province would yield the active compound. [ 3 ] They made the purified extract into tablets , which showed very low activity. They soon realized that the compound was poorly soluble and made it into capsules instead. On 4 October 1971 they successfully treated malaria in experimental mice (infected with Plasmodium berghei ) and monkeys (infected with Plasmodium cynomolgi ) using the new extract. [ 4 ]
In August 1972 they reported a clinical trial in which 21 malarial patients were cured. In 1973 the Yunnan scientists and those at the Shandong Institute of Pharmacology independently obtained the antimalarial compound in crystalline form and gave it the name huanghaosu or huanghuahaosu , [ 3 ] eventually renamed qinghaosu (and later popularised as "artemisinin", after the botanical name). [ 12 ] The same year Tu synthesized the compound dihydroartemisinin from the extract. This compound was more soluble and more potent than the parent compound. Other scientists subsequently synthesized other artemisinin derivatives, of which the most important are artemether and artesunate . [ 24 ] All clinical trials by this time confirmed that artemisinins are more effective than conventional antimalarial drugs such as chloroquine and quinine. [ 12 ] A group of scientists in Shanghai, including chemist Wu Yulin , determined artemisinin's chemical structure in 1975 and published it in 1977, when the secrecy rules were lifted. [ 3 ] The artemisinins became the most potent as well as the safest and most rapidly acting antimalarial drugs, [ 25 ] recommended by the World Health Organization for the treatment of different types of malaria. [ 26 ]
Project 523 also resulted in the discovery of synthetic drugs such as pyronaridine in 1973, lumefantrine in 1976 and naphthoquine in 1986. These are all antimalarial drugs and are still used in artemisinin-combination therapy. [ 12 ]
After Saigon fell on 30 April 1975, ending the Vietnam War, the military purpose of Project 523 subsided. Researchers could not publish their findings but could share their work within the working groups. The first publication in English (and thus circulated outside China) was in the December 1979 issue of the Chinese Medical Journal , authored simply by the Qinghaosu Antimalaria Coordinating Research Group. [ 27 ] This attracted collaboration with the Special Programme for Research and Training in Tropical Diseases (TDR), sponsored by the United Nations Children's Fund , the United Nations Development Programme , the World Bank , and WHO, but the research remained closed to non-Chinese scientists. By the early 1980s, research had practically stopped, and the project was officially terminated in 1981. [ 8 ] The TDR took this opportunity to organise the first international conference on artemisinin and its variants in Beijing in 1981. Supported by WHO, the Chinese Ministry of Health established the National Chinese Steering Committee for Development of Qinghaosu and its Derivatives to continue the important achievements of Project 523. [ 8 ]
The first international collaboration was between Keith Arnold at the Roche Far East Research Foundation, Hong Kong, and Chinese researchers Jing-Bo Jiang, Xing-Bo Guo, Guo-Qiao Li, and Yun Cheung Kong. [ 28 ] They made their first international publication in 1982 in The Lancet , in which they reported the comparative efficacy of artemisinin and mefloquine on chloroquine -resistant Plasmodium falciparum . [ 29 ] Arnold was among those who developed mefloquine in 1979 and was planning to test the new drug in China. He and his wife Moui became the most important figures in translating the historical account of Project 523 and bringing it to international recognition. [ 30 ] The Division of Experimental Therapeutics at the Walter Reed Army Institute of Research , under the United States Army , was the first to produce artemisinin and its derivatives outside China. Their production paved the way for commercial success. [ 31 ]
Artemether was more promising as a clinical drug than its parent molecule artemisinin. In 1981, the National Steering Committee for Development of Qinghaosu (artemisinin) and its Derivatives authorised Zhou Yiqing , then working at the Institute of Microbiology and Epidemiology of the Chinese Academy of Military Medical Sciences , to work on artemether. [ 32 ] Zhou showed that artemether combined with another antimalarial, lumefantrine, was the most potent of all antimalarial drugs. He worked alone for four years, until Ning Dianxi and his team joined him in 1985. They found in clinical trials that the combined tablet had a cure rate for severe malaria of more than 95%, including in areas where multi-drug resistance was common. [ 33 ] They applied for a patent in 1991 but received it only in 2002. In 1992, they registered it as a new drug in China. Noticing this, Novartis signed a pact for mass production. In 1999, Novartis obtained the international licensing rights and gave the drug the brand name Coartem. The US Food and Drug Administration approved the drug in 2009. [ 34 ] | https://en.wikipedia.org/wiki/Project_523
Project Andrea ( Spanish : Proyecto Andrea ) is the code name of an effort by the military dictatorship of Augusto Pinochet to manufacture sarin gas [ a ] for use as a weapon against its opponents. [ 1 ] [ 2 ]
At the beginning of the period of the Military Regime, an electronic and chemical warfare laboratory was installed in the house of Michael Townley and Mariana Callejas in Lo Curro [ es ] , located at Via Naranja 4925, Vitacura , Chile. The government of Augusto Pinochet had given them that house – three floors, almost 1,000 square meters of building and 5,000 of land – located in the upper part of Santiago , in return for services rendered to the Dirección de Inteligencia Nacional (DINA). [ 3 ]
A voluminous concrete cube, rather ugly, with something of the air of an orphanage, hospital, or other public building.
Legally it was not theirs, as it had been acquired by then Army Major Raúl Iturriaga and a DINA lawyer who died in strange circumstances in 1976 under a false identity. [ 3 ]
The idea was for it to serve as housing for the married agents and their children, but mainly – because it was not an unconditional gift – for the barracks to operate there, from which subsequent terrorist operations abroad would be prepared. In the DINA this barracks house was called Quetropillán . It had two permanent agents, who served as drivers and assistants, and a secretary, who kept the accounts and assisted the homeowner in administrative tasks. In addition, the team included a gardener, a cook, and two chemists: Francisco Oyarzún and Eugenio Berríos , alias Hermes. [ 4 ] The latter two spent the day locked in a laboratory, experimenting with the lethality of sarin gas on mice and rabbits. [ 3 ]
The sarin gas was first manufactured by the DINA in Santiago, and then began to be made in Colonia Dignidad , with its logistical support. It was exported and used to assassinate opponents of the regime both in Chile and abroad. The victims presented the symptoms of a heart attack.
Sarin manufactured in Chile was used for the first time by Michael Townley against two Peruvian citizens. [ 5 ] Townley revealed to Judge Alejandro Madrid that in Chile not only real estate conservator Renato León Zenteno (1976) and Army corporal and DINA agent Manuel Leyton (1977) were murdered with sarin, but also other people whose deaths were made to appear as suicides or strange deaths. [ 1 ] Some of these people, according to Townley, were involved in the storage and transport of containers of sarin in the 1970s and early 1980s. One of them would be a doctor or assistant who participated in the autopsies of Renato Zenteno and Corporal Leyton. [ 5 ]
Townley told the judge that the Protocol Director of the Chilean Ministry of Foreign Affairs , Carlos Guillermo Osorio, had not committed suicide – as was officially announced in October 1977 – but had been killed. [ 1 ] [ 5 ] Osorio was in charge of granting passports with false identities so that Army officers – among them, Armando Fernández Larios – could travel to the United States to prepare the attack against Orlando Letelier and two others (Rolando Mosqueira and René Riveros), to try to mislead American intelligence about the authorship of the attack and cover for the DINA. The sources maintain that Townley affirmed that Osorio was another victim of sarin, even though he had been shot once in the head. [ 5 ]
Eugenio Berríos played a role in the death of Carmelo Soria, a CEPAL official who was kidnapped by a DINA operative in July 1976 and taken to Townley's house in Lo Curro. [ 5 ] There, in the Berríos laboratory, Soria was administered sarin gas, according to the investigation, and was then tortured until his spine was broken. Later, his body was found in a car in the Canal San Carlos [ es ] . [ 6 ]
This gas was probably used to murder the journalist Eugenio Lira Massi [ es ] , who in June 1975 was found dead in circumstances not entirely clear, in the room he occupied in Paris, where he worked at the newspaper L'Humanité . In 1990, the journalist Edwin Harrington published in the magazine Nueva Voz that Lira was murdered by the DINA as part of a plan called Operation France after the arrival in the French capital of "Bernardo Conrads Salazar, identity card No. 4.152.556-6, official of the security service of the dictatorship." Harrington, who cited an FBI report as one of his main sources, argued that Lira's death may have been triggered by sarin gas, which Townley carried on his travels in a bottle of Chanel perfume. [ 4 ] [ 7 ]
Another suspicious death was that of Alfred Schaak, [ 8 ] a representative of Paul Schäfer in Germany in charge of arms trafficking. In 1985, two couples who fled from Colonia Dignidad publicly denounced Schäfer's pedophilia. It seems that Schaak then wanted to denounce the arms trade. To prevent this, Winfried Schmidtke and Helmut Seelbach reportedly traveled from Dignidad to Germany, where they were received at the airport by Schaak, who was in perfect health. A few days later, in October, Schaak died suddenly. Dr. Hartmut Hopp went immediately to Germany and brought Schaak's body back to Chile. In the assembly of settlers he said that Schaak had died of fever and that in his will he had left his property to the Colony. At the same time they reported – ten months after the fact – the escape of the couples, adding that their complaints in Germany had done them great harm. [ 8 ]
The subsequent condemnation of Schäfer does not include sarin gas production in Colonia Dignidad, the Cerro Gallo Massacre [ es ] , nor the Monte Maravilla forced labor camp that the colony maintained, since Judge Jorge Zepeda Arancibia did not include these charges in the ruling that sentenced him to a minimum term of seven years of imprisonment for infraction of the Law on Arms Control. [ 8 ]
Alexei Jaccard, age 25, was arrested in Buenos Aires on 16 May 1977, along with two other communist militants, by agents of the Argentine dictatorship and the DINA. [ 9 ] At that time all traces of them were lost. Despite the efforts made by his family to learn his whereabouts, both in Argentina and in Chile, they only turned up false leads. This changed when three agents, who did not lose their memory or declare themselves insane as their former boss Augusto Pinochet had, gave the judiciary unequivocal information about what happened with Jaccard and the two militants, Ricardo Ramírez Herrera and Héctor Velásquez Mardones. The testimonies coincide in that the three detainees, from Buenos Aires, were taken to the La Reina barracks by "Don Jaime" (alias of Captain Germán Barriga, who committed suicide in 2005) and his agents of Dolphin Group, an elite squad that operated inside the Lautaro Brigade [ es ] . [ 10 ] The director of the DINA, Manuel Contreras , always stated in private and in public that Jaccard, Herrera, and Velásquez were arrested by Argentine intelligence, which had made them disappear by throwing their bodies into the Río de la Plata . [ 11 ] But the former agents Eduardo Oyarce Riquelme, Héctor Valdebenito Araya, and Guillermo Ferrán Martínez, all prosecuted for the crimes committed in Simón Bolívar, deny that version, and confirm the passage of Jaccard and his companions through that barracks. [ 11 ] Former agent Oyarce recalls another relevant piece of information:
They were eliminated with sarin gas, but I can not say who applied it. [ 9 ]
On 23 July 2007, Judge Alejandro Madrid undertook "two resolutions that mark milestones in the trials for human rights violations" and that are related to the Andrea Project: "He affirmed that the murder of the ex-Army corporal Manuel Leyton was carried out using sarin gas, and prosecuted thirteen former DINA agents for this crime. He proceeded to prosecute the former Army auditor, Fernando Torres Silva, for conspiracy in the case of the murder with the chemical that produced the poisonous element in the security services, Eugenio Berríos." [ 12 ]
The FBI has evidence confirming that Augusto Pinochet accumulated large amounts of poison gas, [ 13 ] according to Saul Landau , the US investigator for the Orlando Letelier murder. The FBI investigated the sarin, and its conclusions were condensed in a report indicating that it was manufactured in an amount sufficient to kill the entire Peruvian Army twice over. [ 13 ]
The explicit orders were to locate Letelier's residence and place of work and contact the Cuban Nationalist Movement (MNC) group so that we could eliminate him with sarin, by running him over or other accident, by any method, but for Letelier, the Chilean government wanted him dead.
In 2012, Javier Rebolledo published La danza de los cuervos ( The Dance of the Crows ), a book describing the atrocities committed in the Simón Bolívar barracks. He then premiered the movie El Mocito , in which the experiences of a boy who worked in the Simón Bolívar House of Extermination are related. It details the torture and suffering of the political abductees, and mention is made of the use of sarin gas to exterminate the prisoners. [ 14 ] [ 15 ] [ 16 ] | https://en.wikipedia.org/wiki/Project_Andrea |
Project Ara was a modular smartphone project under development by Google . The project was originally headed by the Advanced Technology and Projects team within Motorola Mobility while it was a Google subsidiary. Google retained the ATAP group when selling Motorola Mobility to Lenovo , and it was placed under the stewardship of the Android development staff; Ara was later split off as an independent operation. [ 1 ] [ 3 ] Google stated that Project Ara was being designed to be utilized by "6 billion people": 1 billion current smartphone users, and 5 billion feature phone users. [ 4 ] [ 5 ]
Under its original design, as envisioned by NewDealDesign under the leadership of Gadi Amit , Project Ara was intended to consist of hardware modules providing common smartphone parts, such as processors, displays, batteries, and cameras, as well as modules providing more specialized components, and "frames" that these modules were to be attached to. This design would allow a device to be upgraded over time with new capabilities without requiring the purchase of an entire new device, providing a longer lifecycle for the device and potentially reducing electronic waste . [ 6 ] [ 7 ] However, by 2016, the concept had been revised, resulting in a base phone with non-upgradable core components and modules providing supplemental features.
Google planned to launch a new developer version of Ara in the fourth quarter of 2016, with a target bill of materials cost of $50 for a basic phone, leading into a planned consumer launch in 2017. However, on September 2, 2016, Reuters reported that two non-disclosed sources leaked that Alphabet 's manufacture of frames had been canceled, with possible future licensing to third parties. [ 8 ] [ 9 ] Later that day, Google confirmed that Project Ara had been shelved. [ 10 ]
Google intended Project Ara to lower the entry barrier for phone hardware manufacturers so there could be "hundreds of thousands of developers" instead of the existing oligarchy of phone manufacturers. [ 11 ]
The Project Ara concept consisted of modules inserted into metal endoskeletal frames known as "endos." The frame would be the only component manufactured by Google, [ 11 ] and housed the switch for the on-device network linking all the modules together. Google planned two sizes of frames at launch: a "mini" frame about the size of a Nokia 3310 and a "medium" frame about the size of a Nexus 5 . [ 12 ] Google also planned a "large" phablet frame, about the size of a Samsung Galaxy Note 3 , for a future release. [ 12 ] Frames have slots on the front for the display and other modules, and additional module slots on the back. Each frame was expected to cost around US$15. [ 13 ] [ 14 ] [ 15 ] Data from the modules could be transferred at up to 10 Gbit/s per connection; the 2×2 modules have two connections, allowing up to 20 Gbit/s.
Modules would provide common smartphone features, such as cameras and speakers, but could also provide more specialized features, such as medical devices, receipt printers, laser pointers , pico projectors , night vision sensors, or game controller buttons. Each slot on the frame accepted any module of the correct size. The front slots are of various heights and took up the whole width of the frame. [ 12 ] The rear slots had standard sizes of 1×1, 1×2 and 2×2. [ 12 ] Modules were to be hot-swapped without turning the phone off. [ 11 ] The frame also included a small backup battery so the main battery can be hot-swapped. [ 11 ] Modules were originally to be secured with electropermanent magnets , but this was replaced by a different method. The enclosures of the modules were planned to be 3D-printed , but due to the lack of development in the technology, Google opted instead for a customizable molded case. [ 16 ] [ 11 ]
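The slot geometry just described lends itself to a simple grid model. The sketch below is speculative (Ara's actual endo layouts and software interfaces were never published in this form); it encodes only the stated 1×1, 1×2, and 2×2 rear-slot sizes and the rule that a module must fit its slot region without overlapping another:

```python
# Speculative grid model of Ara rear-slot module placement (illustrative only;
# grid dimensions and the placement API are assumptions, not Ara's design).

class Frame:
    def __init__(self, rows: int, cols: int):
        self.rows, self.cols = rows, cols
        self.occupied = set()  # (row, col) cells already taken by modules

    def _cells(self, row, col, h, w):
        return {(r, c) for r in range(row, row + h) for c in range(col, col + w)}

    def attach(self, row: int, col: int, h: int, w: int) -> bool:
        """Attach an h x w module if it fits on the grid without overlap."""
        cells = self._cells(row, col, h, w)
        if row + h > self.rows or col + w > self.cols or cells & self.occupied:
            return False
        self.occupied |= cells
        return True

frame = Frame(rows=2, cols=3)    # hypothetical rear grid for a "medium" endo
print(frame.attach(0, 0, 2, 2))  # 2x2 module (e.g., camera) -> True
print(frame.attach(0, 2, 1, 1))  # 1x1 module (e.g., speaker) -> True
print(frame.attach(1, 1, 1, 2))  # overlaps the 2x2 module -> False
```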
Google intended to sell a starter kit where the bill of materials is US$50 and includes a frame, display, battery, low-end CPU and WiFi. [ 16 ] Google planned to provide an open development process for modules, and would not have required manufacturers to pay a license fee. [ 4 ] Modules were to be available both at an official Google store and at third-party retailers. Similarly to Android apps, an Ara device would be configured by default to only accept modules officially certified by Google, but users would have been able to disable this. [ 13 ]
Project Ara was developed and led by Paul Eremenko , [ 11 ] [ 17 ] who in 2015 became CEO of the Airbus Group Silicon Valley technology and business innovation center. The project fell under Regina Dugan , who ran Google's Advanced Technology and Projects (ATAP) organization. Both Eremenko and Dugan had previously worked at DARPA , where Eremenko originated the fractionated spacecraft concept and ran the Adaptive Vehicle Make program before heading the Tactical Technology office . The core Project Ara team at Google consisted of three people, with most of the work being done by outside contractors, such as NK Labs, a Massachusetts-based engineering firm. [ 11 ] NK Labs subcontracted the firm LeafLabs to do firmware development; LeafLabs later became the primary firmware developer under a direct contract with Google. The main physical concept design of the frame and modules was created by NewDealDesign, a San Francisco-based technology design studio commissioned by ATAP to lead the design of the project; the design was selected from 11 different configurations analyzed by the joint team. [ 18 ] The company 3D Systems was contracted to experiment with 3D printing of electrical components, which could further the goal of mass customization . [ 11 ]
Prior to its acquisition of Motorola Mobility in 2011, Google had previously acquired some patents related to modular mobile phones from Modu . [ 19 ] Initial exploration of this concept began in 2012 and work started on April 1, 2013. Unrelated to work done by the Ara team, [ 11 ] Dutch designer Dave Hakkens announced the Phonebloks modular phone concept independently in September 2013. Motorola Mobility publicly announced Project Ara on October 29, 2013, and said they would be working collaboratively with Phonebloks, although the original team, consisting of internal and external resources, continued working together without any change to its original design and technology. [ 20 ] Motorola Mobility went on a 5-month road trip throughout the United States in 2013 called "MAKEwithMOTO" to gauge consumer interest in customized phones. [ 11 ] Interested developers, testers, or users could sign up to be Ara Scouts. [ 20 ]
The first version of the developers' kit relied on a prototype implementation of the Ara on-device network using the Mobile Industry Processor Interface (MIPI) UniPro protocol implemented on FPGA and running over a Low-voltage differential signaling (LVDS) physical layer with modules connecting via retractable pins. [ 11 ] Subsequent versions were to be built around a much more efficient and higher performance ASIC implementation of UniPro, running over a capacitive M-PHY physical layer. [ 21 ] A near-working prototype of an Ara smartphone was demonstrated at Google I/O 2014, but it froze on the boot screen. [ 22 ]
In January 2015, Google unveiled the "Spiral 2" prototype and announced plans to test-market a later revision of the system in the U.S. territory of Puerto Rico later in the year. Google chose the region because it has a large mobile phone market and because it is subject to U.S. telecommunications laws, allowing for continued correspondence with the FCC. [ 23 ] [ 24 ] [ 25 ] However, in August 2015, Google announced that the Ara pilot in Puerto Rico had been delayed indefinitely, and that the company would instead hold pilots in "a few locations" in the U.S. some time in 2016. [ 26 ] [ 27 ]
At Google I/O 2016, the company unveiled a new development model, the "Developer Edition". The new iteration featured notable changes to the original concept; the device now consisted of a base phone with core components that cannot be upgraded, including the antenna, battery, display, sensors, and system-on-chip, and extensible with modules for adding features such as a secondary display or replacement cameras and speakers. Google announced that it planned to ship the Developer Edition in late 2016, [ 10 ] and perform a consumer launch of Project Ara in 2017. [ 28 ]
On September 2, 2016, Google confirmed that Project Ara had been cancelled. [ 10 ]
Initial reception to an earlier but similar modular phone concept— Phonebloks —was mixed, citing possible infeasibility, lack of a working prototype, as well as other production and development concerns. Project Ara's launch followed shortly after the launch of Phonebloks and better addressed some of the production and development issues since it had OEM backing, but other issues were raised about the Project Ara modular concept. [ citation needed ]
Potential issues with the modular concept included a tradeoff between volumetric efficiency and modularity, as the framework interface holding the device together would increase overall size and weight. Eremenko said modularity would add less than 25% to the size, power consumption, and weight of components, which he considered an acceptable trade-off for the added flexibility. [ 29 ] The prototype at the time was 9.7 mm thick, slightly thicker than conventional smartphones. [ 11 ] Additional issues included regulatory approval: the FCC tests single configurations for approval, not modular configurations. [ 30 ] In 2014, Google said the FCC "[had] been encouraging so far". [ 11 ] | https://en.wikipedia.org/wiki/Project_Ara
Project Athena was a joint project of MIT , Digital Equipment Corporation , and IBM to produce a campus-wide distributed computing environment for educational use. [ 1 ] It was launched in 1983, and research and development ran until June 30, 1991. As of 2023 [update] , Athena is still in production use at MIT. It works as software (currently a set of Debian packages) [ 2 ] that makes a machine a thin client , that will download educational applications from the MIT servers on demand.
Project Athena was important in the early history of desktop and distributed computing. It created the X Window System , Kerberos , and Zephyr Notification Service . [ 1 ] It influenced the development of thin computing , LDAP , Active Directory , and instant messaging .
Leaders of the $50 million, five-year project at MIT included Michael Dertouzos , director of the Laboratory for Computer Science ; Jerry Wilson, dean of the School of Engineering ; and Joel Moses , head of the Electrical Engineering and Computer Science department. DEC agreed to contribute more than 300 terminals, 1600 microcomputers, 63 minicomputers, and five employees. IBM agreed to contribute 500 microcomputers, 500 workstations, software, five employees, and grant funding. [ 3 ] [ 4 ]
In 1979 Dertouzos proposed to university president Jerome Wiesner that the university network mainframe computers for student use. At that time MIT used computers throughout its research, but undergraduates did not use computers except in Course VI (computer science) classes. With no interest from the rest of the university, the School of Engineering in 1982 approached DEC for equipment for itself. President Paul E. Gray and the MIT Corporation wanted the project to benefit the rest of the university, and IBM agreed to donate equipment to MIT except to the engineering school. [ 5 ]
Project Athena began in May 1983. Its initial goals were to develop computer-based learning tools usable in multiple educational environments, establish a base of knowledge for future decisions about educational computing, create a computing environment that could support multiple hardware types, and encourage the sharing of ideas, code, data, and experience across MIT. [ 6 ]
The project intended to extend computer power into fields of study outside computer science and engineering, such as foreign languages, economics, and political science. To implement these goals, MIT decided to build a Unix -based distributed computing system. Unlike those at Carnegie Mellon University , which also received the IBM and DEC grants, students did not have to own their own computer; MIT built computer labs for their users, although the goal was to put networked computers into each dormitory. Students were required to learn FORTRAN and Lisp , [ 4 ] and would have access to 3M computers , capable of 1 million instructions per second and with 1 megabyte of RAM and a 1 megapixel display. [ 6 ] [ 7 ]
Although IBM and DEC computers were hardware-incompatible, Athena's designers intended that software would run similarly on both. MIT did not want to be dependent on one vendor at the end of Athena. Sixty-three DEC VAX-11/750 servers were the first timesharing clusters. "Phase II" began in September 1987, with hundreds of IBM RT PC workstations replacing the VAXes, which became fileservers for the workstations. The DEC-IBM division between departments no longer existed. Upon logging into a workstation, students would have immediate access to a universal set of files and programs via central services. Because the workstation used a thin client model, the user interface would be consistent despite the use of different hardware vendors for different workstations. A small staff could maintain hundreds of clients. [ 5 ] [ 8 ]
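A toy illustration of the thin-client idea described above: session state lives on central services rather than on the workstation, so any client yields the same environment. All names and data here are placeholders, not Athena's actual Kerberos or fileserver interfaces:

```python
# Toy model of thin-client login: everything comes from central services,
# so hundreds of interchangeable workstations can be maintained identically.
# Placeholder data structures, not Athena's real protocols.

CENTRAL_AUTH = {"alice": "secret"}                  # stand-in for Kerberos
CENTRAL_HOMES = {"alice": ["thesis.tex", "mail/"]}  # stand-in for fileservers
CENTRAL_SOFTWARE = ["matlab", "maple", "emacs"]     # delivered on demand

def login(user: str, password: str) -> dict:
    if CENTRAL_AUTH.get(user) != password:
        raise PermissionError("authentication failed")
    # Any workstation produces the same session, because nothing of the
    # user's environment is stored on local disk.
    return {"home": CENTRAL_HOMES[user], "software": CENTRAL_SOFTWARE}

print(login("alice", "secret"))
```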
The project spawned many technologies that are widely used today, such as the X Window System and Kerberos . Among the other technologies developed for Project Athena were the Zephyr Notification Service and the Hesiod name and directory service. [ 1 ]
As of November 1988 [update] MIT had 722 workstations in 33 private and public clusters on and off campus, including student living groups and fraternities . A survey found that 92% of undergraduates had used the Athena workstations at least once, and 25% used them every day. [ 5 ] [ 9 ] The project received an extension of three years in January 1988. Developers who had focused on creating the operating system and courseware for various educational subjects now worked to improve Athena's stability and make it more user friendly . When Project Athena ended in June 1991, MIT's IT department took it over and extended it into the university's research and administrative divisions. [ 8 ]
In 1993, the IBM RT PC workstations were retired, being replaced by Sun SPARCclassic , IBM RS/6000 POWERstation 220, and Personal DECstation 5000 Model 25 systems. [ 10 ] As of April 1999 [update] the MIT campus had more than 1300 Athena workstations, and more than 6000 Athena users logged into the system daily. [ 8 ] Athena is still used by many in the MIT community through the computer labs scattered around the campus. It is also now available for installation on personal computers, including laptops.
Athena continues in use as of 2023 [update] , providing a ubiquitous computing platform for education at MIT; plans are to continue its use indefinitely.
Athena was designed to minimize the use of labor in its operation, in part through the use of (what is now called ) " thin client " architecture and standard desktop configurations. This not only reduces labor content in operations but also minimizes the amount of training for deployment, software upgrade, and trouble-shooting. These features continue to be of considerable benefit today.
In keeping with its original intent, access to the Athena system has been greatly enlarged in the last several years. Whereas in 1991 much of the access was in public "clusters" ( computer labs ) in academic buildings, access has been extended to dormitories , fraternities and sororities , and independent living groups. All dormitories have officially supported Athena clusters. In addition, most dormitories have "quick login" kiosks: standup workstations with timers that limit access to ten minutes. The dormitories have "one port per pillow" Internet access.
Originally, the Athena release used Berkeley Software Distribution (BSD) as the base operating system for all hardware platforms. As of April 1999 [update] public clusters consisted of Sun SPARC and SGI Indy workstations. [ 8 ] SGI hardware was dropped in anticipation of the end of IRIX production in 2006. Linux-Athena was introduced in version 9, with the Red Hat Enterprise Linux operating system running on cheaper x86 or x86-64 hardware. Athena 9 also replaced the internally developed "DASH" menu system and Motif Window Manager (mwm) with a more modern GNOME desktop. Athena 10 is based on Ubuntu Linux (derived from Debian ) only. [ 11 ] [ 12 ] [ 13 ] Support for Solaris is expected to be dropped almost entirely. [ 14 ]
"I felt that, we would know Athena was successful, if we were surprised by some of the applications, it turned out that our surprises were largely in the humanities" — Joel Moses [ 15 ]
The original concept of Project Athena was that there would be course-specific software developed to use in conjunction with teaching. Today, computers are most frequently used for "horizontal" applications such as e-mail, word processing, communications, and graphics.
The big impact of Athena on education has been the integration of third-party applications into courses. Maple and, especially, MATLAB are integrated into large numbers of science and engineering classes. Faculty expect their students to have access to, and know how to use, these applications for projects and homework assignments, and some have used the MATLAB platform to rebuild the courseware that they had originally built using the X Window System .
More specialized third-party software is used on Athena for more discipline-specific work. Rendering software for architecture and computer graphics classes, molecular modeling software for chemistry, chemical engineering, and materials science courses, and professional software used by chemical engineers in industry are important components of a number of MIT classes in various departments.
Athena was not a research project, and the development of new models of computing was not a primary objective of the project. Indeed, quite the opposite was true. MIT wanted a high-quality computing environment for education. The only apparent way to obtain one was to build it internally, using existing components where available, and augmenting those components with software to create the desired distributed system. However, the fact that this was a leading edge development in an area of intense interest to the computing industry worked strongly to the favor of MIT by attracting large amounts of funding from industrial sources.
Long experience has shown that advanced development directed at solving important problems tends to be much more successful than advanced development promoting technology that must look for a problem to solve. [ citation needed ] Athena is an excellent example of advanced development undertaken to meet a need that was both immediate and important. The need to solve a "real" problem kept Athena on track to focus on important issues and solve them, and to avoid getting side-tracked into academically interesting but relatively unimportant problems. Consequently, Athena made very significant contributions to the technology of distributed computing, but as a side-effect to solving an educational problem.
The leading-edge system architecture and design features pioneered by Athena include, in current terminology, the client–server model of distributed computing, stateless "thin client" desktops, a centralized authentication service ( Kerberos ), a naming and directory service ( Hesiod ), a network windowing system (the X Window System ), instant messaging (the Zephyr service), and centralized system management.
Many of the design concepts developed in the "on-line consultant" now appear in popular help desk software packages.
Because the functional and system management benefits provided by the Athena system were not available in any other system, its use extended beyond the MIT campus. In keeping with the established policy of MIT, the software was made available at no cost to all interested parties. Digital Equipment Corporation, having implemented Athena at various beta-test sites, [ 18 ] "productized" the software as DECathena to make it more portable, and offered it along with support services to the market. A number of academic and industrial organizations installed the Athena software. As of early 1992, 20 universities worldwide were using DECathena, with a reported 30 commercial organisations evaluating the product. [ 19 ]
The architecture of the system also found use beyond MIT. The architecture of the Distributed Computing Environment (DCE) software from the Open Software Foundation was based on concepts pioneered by Athena. Subsequently, the Windows NT network operating system from Microsoft incorporates Kerberos and several other basic architecture design features first implemented by Athena. [ 1 ] | https://en.wikipedia.org/wiki/Project_Athena |
Project BAMBI ( BA llistic M issile B oost I ntercept ) was a project undertaken as part of the United States national missile defense effort.
At the end of the Second World War , the United States and the Soviet Union began confiscating German military technology and intellectual property for their own use. Among this material were plans for intercontinental ballistic missiles (ICBMs), which arrived in New York in 1946. The Pentagon spent the next several decades studying and developing both ICBM and anti-ICBM technology. [ 1 ]
In the early 1950s, both the United States and the Soviet Union were capable of waging nuclear war , but not without inviting retaliatory strikes. At the time, nuclear bombs carried by strategic bombers were the only means of delivering a nuclear strike on another country. To defend against nuclear attacks of this nature, the United States Army developed Project Nike . The missiles designed under Project Nike were intended to intercept nuclear-armed enemy aircraft before they could drop their payloads. [ 2 ]
On May 15, 1957, the Soviet Union launched the world's first ICBM, the R-7 . In response, the United States launched its test-model ICBM, the Atlas A , in June of the same year. Although both of these early ICBMs performed poorly, the technology to wage war around the world using nuclear warheads was now on the horizon. [ 1 ]
Two years after the start of the space race, the Soviet Union revolutionized the world of atomic defense with the successful launch of the world's first artificial satellite , Sputnik , on October 4, 1957. [ 1 ] The United States quickly realized that by employing this satellite technology, the Soviet Union could potentially deploy nuclear-armed ICBMs from orbit, where they would be poised to perform highly accurate nuclear strikes. The Advanced Research Projects Agency (ARPA), a United States defense research agency, was established in early 1958 in an effort to minimize this new threat. [ 1 ]
The first project undertaken by ARPA was Project Defender , whose primary goal was finding a defense against these ballistic missiles. Almost immediately, ARPA retrofitted the now-defunct Nike missiles into Nike-Zeus missiles, which were meant to intercept incoming Soviet ICBMs as they reentered the atmosphere and before they could reach their intended targets. As testing of these Nike-Zeus missiles continued, those working on Project Defender sought a simpler solution to the problem of these space-faring ICBMs. [ 2 ]
By 1960, the idea of space-based interceptors (SBIs) seemed a far more practical solution. These SBIs were envisioned to be capable of boost-phase kills and became collectively known as the ballistic missile boost intercept (BAMBI) ABM systems. One of the most notable of the proposed BAMBI systems was the space patrol active defense (SPAD), a network of 500 satellites capable of detecting boost plumes with onboard infrared scanners and then launching several interceptors along a track mapped by an onboard computer. These interceptors were designed to deploy a wire web with a radius ranging from 15 to 50 feet, adorned with 1-gram pellets at each intersection of the net. The nets would collide with the detected ICBM during its climb through the atmosphere, shredding the booster's fuel tanks and causing catastrophic damage to the missile. [ 1 ] BAMBI had a projected annual deployment and operating cost of 50 billion dollars. [ 3 ] Although sound in theory, the high price tag and the lack of the necessary technology in 1960 prevented this BAMBI system from being developed. Project BAMBI continued to explore other SBI options and workarounds for another three years before being cancelled in May 1963 under the Kennedy administration, which wanted to avoid deploying a network of nuclear satellites in space after the Cuban Missile Crisis . [ 4 ]
In August 1963, the United States, the Soviet Union, and more than 100 other countries signed the Limited Test Ban Treaty which prohibited nuclear testing in space, the atmosphere, or underwater. In December of that same year, the UN adopted a resolution that established a set of general rules for the use of space. It required nations to receive approval from international consultants before they could interfere with the peaceful use of space but it did not ban the development and use of military satellites. Using this loophole, the United States and the Soviet Union were able to retain the bulk of their space programs that had been largely built around satellite deployments. [ 1 ] Four years later, in 1967, the Outer Space Treaty was signed by 66 nations and prohibited the passive orbiting of nuclear weapons. [ 3 ]
The United States missile defense program (and Project BAMBI) found new life in 1983 with the announcement of the Strategic Defense Initiative (SDI) by President Ronald Reagan during his “Star Wars” speech. [ 4 ] The SDI office was limited by the ABM Treaty and the 1974 protocol to a single, central missile defense site with only 100 interceptors, and was prevented from deploying space-based missile defense systems. [ 1 ] To get around these restrictions, the SDI considered several options, such as a patrol of crewed space fighters and a resurrection of Project BAMBI. This new iteration of BAMBI (dubbed Smart Rocks) was proposed by Daniel Graham, a military advisor to Ronald Reagan, and would have utilized battle stations in low Earth orbit and air-to-air missiles. Similar to the SBIs of the BAMBI project, these battle stations would detect ICBMs by their infrared plume and intercept them via collision. Another option of the time was the X-ray laser of Project Excalibur . Although the Smart Rocks system was initially ignored, after the failed tests of Project Excalibur in 1986 the United States Secretary of Defense , Caspar Weinberger , requested an updated version of Smart Rocks. [ 5 ]
The new ballistic missile defense system, Brilliant Pebbles , would eventually become the chief weapons system of the Strategic Defense System (SDS). [ 5 ] With the passage of the Missile Defense Act of 1991 and the collapse of the Soviet Union at the end of that same year, it became apparent that SDI would not be able to demonstrate the effectiveness of the Brilliant Pebbles technology because the need for the SDS in general had passed. SDI became the Ballistic Missile Defense Organization (BMDO) in an attempt to salvage its usefulness, but President Bill Clinton cancelled the project in 1993, only for it to be revived by President George W. Bush in 2002 under a new name, the Missile Defense Agency (MDA). [ 1 ] The MDA was later reorganized into the Ballistic Missile Defense System (BMDS) and President Bush withdrew the United States from the ABM Treaty, [ 1 ] but despite this, space-based missile defense programs have yet to be employed by any successive administration. | https://en.wikipedia.org/wiki/Project_BAMBI
Project Bacchus was a covert investigation by the Defense Threat Reduction Agency to determine whether it is possible to construct a bioweapons production facility with off-the-shelf equipment.
Project Bacchus operated from 1999 to 2000 to investigate whether would-be terrorists could build an anthrax production facility and remain undetected. [ 1 ] During the two-year simulation, the facility was constructed, and successfully produced an anthrax-like bacterium . [ 2 ] The participating scientists were able to make about 1 kilogram (2.2 lb) of highly refined bacterial particles. [ 2 ]
The secret Project Bacchus was disclosed in a September 2001 article in The New York Times . [ 1 ] Reporters Judith Miller , Stephen Engelberg and William J. Broad collaborated on the article. [ 1 ] Shortly after it appeared, they published a book containing further details. [ 1 ] The book, Germs: Biological Weapons and America's Secret War , and the article are the only publicly available sources [ citation needed ] concerning Project Bacchus and its sister projects, Clear Vision and Jefferson . [ 1 ] | https://en.wikipedia.org/wiki/Project_Bacchus |
Project Coast was a top-secret chemical and biological weapons (CBW) programme instituted by the apartheid -era government of South Africa in the 1980s. Project Coast was the successor to a limited postwar CBW programme, which mainly produced the lethal agents CX powder and mustard gas , as well as non-lethal tear gas for riot control purposes. [ 1 ] The programme was headed by the cardiologist Wouter Basson , who was also the personal physician of South African Prime Minister P. W. Botha . [ 2 ]
From 1975 onwards, the South African Defence Force (SADF) found itself embroiled in conventional battles in Angola as a result of the South African Border War . The perception that its enemies had access to battlefield chemical and biological weapons (CBW) led South Africa to begin expanding its programme, initially as a defensive measure and by researching vaccines. As the years went on, research shifted to offensive uses. In 1981, President P. W. Botha ordered the SADF to develop CBW technology for use against South Africa's enemies. In response, the head of the South African Medical Service division, which was responsible for defensive CBW capabilities, hired Wouter Basson , a cardiologist, to visit other countries and report back on their respective CBW capabilities. He returned with the recommendation that South Africa's programme be expanded. In 1983, Project Coast was formed, with Basson at its head. To hide the programme and its procurement of CBW-related substances, Project Coast formed four front companies: Delta G Scientific Company , Roodeplaat Research Laboratories , Protechnik and Infladel . [ 3 ] Ben Raubenheimer was appointed as CEO. [ 3 ] : 52
Project Coast created a progressively larger variety of lethal offensive CBW toxins and biotoxins , in addition to the defensive measures. Initially, they were intended for use by the military in combat as a last resort. To that end, they copied Soviet techniques and designed devices that looked like ordinary objects but could poison those targeted for assassination. Examples included umbrellas and walking sticks that fired pellets containing poison, syringes disguised as screwdrivers, and poisoned beer cans and envelopes. In the early 1990s, with the end of apartheid, South Africa's weapons of mass destruction programmes were stopped. Despite efforts to destroy equipment, stocks, and information from those programmes, some still remain, leading to fears that they may find their way into the possession of terrorist networks.
In May 2002, Daan Goosen , the former head of South Africa's biological weapons programmes, contacted the FBI and offered to exchange existing bacterial stocks from the programmes in return for US$5 million, together with immigration permits for him and 19 other associates and their family members. The offer was eventually refused, with the FBI claiming that the strains were obsolete and therefore no longer a threat. [ 4 ] [ 5 ]
The South African chemical weapons programme investigated all the standard CBW agents such as irritant riot control agents , lethal nerve agents and anticholinergic deliriants , which have been researched by virtually all countries that have carried out CBW research. The South African programme differed from the CBW programmes of many countries in its focus on developing nonlethal agents to help suppress internal dissent. [ 3 ] : 77–109 This led to the investigation of unusual nonlethal agents, including illicit recreational drugs such as phencyclidine , MDMA , methaqualone and cocaine , as well as medicinal drugs such as diazepam , midazolam , ketamine , suxamethonium and tubocurarine , as potential incapacitating agents.
According to the testimony given by Wouter Basson to the Truth and Reconciliation Commission , [ 6 ] analogues of the compounds were prepared and studied. Both methaqualone and MDMA (along with the deliriant BZ ) were manufactured in large quantities and successfully weaponized into a fine dust or aerosol form that could be released over a crowd as a potential riot control agent. It was later discovered that Basson was also selling large quantities of MDMA and methaqualone as tablets on the black market. The amount manufactured was far larger than what was sold, but the court accepted that at least some genuine weaponization and testing of the agents had been done.
A black mamba and its extracted venom were also part of the research, as were E. coli O157:H7 bacteria genetically modified to express some of the toxins made by Clostridium perfringens bacteria. [ 3 ] A list of purchases at RRL and other documents include references to such things as the snake, biological agents such as anthrax , brucellosis , cholera and salmonella among others, and chemicals including aluminium phosphide , thallium acetate , sodium azide , sodium cyanide , mercury oxycyanide , cantharides , colchicine , powerful anticoagulants such as brodifacoum , phenylsilatranes , strychnine , paraquat , " knockout drops ", digoxin , acetylcholinesterase inhibitors such as aldicarb and paraoxon and other poisons.
Other plans referenced in the UN report included crowd control with pheromones , and discussion of the development of several novel compounds, including a locally produced variant of BZ, novel derivatives of CR gas including "a compound which had a pyridine moiety in place of one of the benzene rings...[and] caused severe blisters on the skin", a new, more potent analogue of methaqualone and a "dimethylketone-amphetamine" derivative of MDMA. [ 3 ] Another unusual project attempted to develop a method of sterilising crowds using a known male sterilant, pyridine [ citation needed ] . That was to be sprayed onto the crowds from a gas cylinder pressurised with nitrogen gas, since pyridine is highly flammable. A subsequent industrial accident killed a gas company employee when an experimental, contaminated medical oxygen cylinder was returned to the gas supplier, refilled with oxygen, and exploded. [ 7 ]
In 1985, four SWAPO detainees held at Reconnaissance Regiment headquarters were allegedly given a sleeping drug in soft drinks, taken to Lanseria airport outside Johannesburg, and injected with three toxic substances supplied by Basson. Their bodies were thrown into the Atlantic Ocean . [ citation needed ]
The Civil Cooperation Bureau operative Petrus Jacobus Botes, who claimed to have also directed bureau operations in Mozambique and Swaziland , asserted that he was ordered in May 1989 to contaminate the water supply at Dobra, a refugee camp in Namibia , with cholera and yellow fever organisms. A South African Army doctor provided them to him. In late August 1989, he led an attempt to contaminate the water supply, but it failed because of the high chlorine content in the treated water at the camp. [ 8 ]
Research on birth control methods to reduce the black birth rate was another area of the programme's work. Goosen, the managing director of Roodeplaat Research Laboratories between 1983 and 1986, told Tom Mangold of the BBC that Project Coast had supported a project to develop a contraceptive that would have been applied clandestinely to blacks. Goosen reported that the project had developed a 'vaccine' for males and females and that the researchers were still searching for a means by which it could be delivered to make black people sterile without their being aware of it. Schalk van Rensburg stated that “fertility and fertility control studies comprised 18% of all projects”. [ 9 ]
Testimony given at the Truth and Reconciliation Commission suggested that Project Coast researchers were also looking into putting birth control substances in water supplies. [ 8 ] The project officer for Project Coast, Basson, was put on trial for 64 charges, all of which were committed while he held that position. [ 9 ] Goosen testified that when asked what motivated him, Basson had replied that "although we do not have any doubt that black people will take over the country one day, when my daughter asks me what I did to prevent this, at least my conscience will be clean". [ 10 ]
Despite strong links to Israel and Libya , no country has been directly implicated in the project; however, the project could not have developed without some form of international support. [ 9 ] According to Miles Jackson, while the focus on apartheid South Africa's research into fertility is barely part of the ongoing discussion regarding Project Coast, what occurred could constitute conspiracy to commit genocide under international law. [ 11 ] | https://en.wikipedia.org/wiki/Project_Coast
Project Eagle is an interactive art demo of a colony on Mars , developed by Blackbird Interactive in collaboration with NASA 's Jet Propulsion Laboratory . [ 1 ] It was released on Steam on 27 November 2018, in honor of the successful InSight landing.
Project Eagle was built in the Unity game engine and utilizes design elements similar to those of the RTS game Homeworld: Deserts of Kharak , including the sensors manager view and camera systems.
The Martian terrain was generated using imaging data from NASA's HiRISE camera on board the Mars Reconnaissance Orbiter . [ 2 ]
Project Eagle is set in 2117 in a hypothetical future after the first human colonists arrive on Mars in the year 2034. [ 3 ] [ 4 ] The fictional "Eagle Base" is located at the foot of Mount Sharp (Aeolis Mons), in Quad 51 of Aeolis Palus in Gale Crater , near the site of the Curiosity rover landing. The Curiosity landing site is marked with a plinth in Project Eagle . [ 4 ]
Project Eagle was presented on stage at D.I.C.E. 2017 by NASA's Dr. Jeff Norris, and BBI's CEO Rob Cunningham and CCO Aaron Kambeitz. The talk took place directly after the conference keynote speech by Jeffrey Kaplan from Blizzard Entertainment . [ 5 ] There, Jeff Norris said: "We wanted to publicly exhibit a project that shows what this medium could do for inspiring space exploration". [ 6 ]
| https://en.wikipedia.org/wiki/Project_Eagle
Project GABRIEL was an investigation to gauge the impact of nuclear fallout resulting from nuclear warfare . The United States Atomic Energy Commission surmised that the radioactive isotope strontium-90 (Sr-90) presented the greatest hazard to life globally, [ 1 ] which resulted in the commissioning of Project SUNSHINE , which sought to examine the levels of Sr-90 in human tissues and bones (with a special interest in infants) gathered from around the world.
During the Cold War era, there was an escalation of atmospheric testing of nuclear weapons. After the atomic bombings of Hiroshima and Nagasaki in 1945, testing continued and the scale increased with the first hydrogen bomb in 1952. Soon after the United States tested the hydrogen bomb , the Soviet Union followed, in 1953. The mushroom clouds that occurred from the explosions released radioactive isotopes in mass quantities.
The first comprehensive study of the problem of radioactive isotope release began in spring 1949 with a one-man project called GABRIEL, conducted by Nicholas M. Smith Jr. at Oak Ridge National Laboratory . Smith produced his first report in 1949.
Project GABRIEL was revived in mid-1951 because continued bomb testing had raised public concerns about the dangers of strontium-90. The United States Atomic Energy Commission (AEC) was interested in GABRIEL's report, though it said the report lacked hard data and needed independent confirmation of the tests. [ 2 ]
After a review in 1953, Project GABRIEL was given first-priority status. The secret project would define "practical limits" for using atomic weapons. A task team was assembled under the codename "Project HORN." In 1954, the AEC argued that fallout was harmless because there was not enough evidence to prove that fallout would harm humans, animals, or crops. The AEC campaign persuaded the public that the worldwide fallout was harmless. [ 3 ] This claim was later disputed when scientists announced publicly that there was no safe level of radiation, a conclusion confirmed in a confidential report by a geneticist for the AEC. By 1954 and the Castle Bravo incident, it was obvious that radioactive fallout was dangerous to humans. The AEC's Division of Biology and Medicine directed the experimental and field studies and the correlation of data for Project GABRIEL. [ 1 ] The RAND Corporation , laboratories at Columbia University , the AEC 's New York office, the University of Chicago , an exclusive group of scientists, UCLA , and the United States Air Force were all involved in collecting and testing samples from around the world for radioactive fallout.
After the hazards of strontium-90 became evident, the next step was to focus on impact and damage per detonation. Smith's tests focused primarily on how many atomic weapons could potentially be detonated before radioactive contamination of air, water and soil became a long-term hazard to crops, animals and humans worldwide. [ 4 ] In 1949, Smith estimated that it would take 3,000 Hiroshima-sized detonations in a single growing season to have an effect on people who ate crops in affected areas. In 1951, Smith repeated this study with new information from the previous two detonations. With the new information, he calculated that 10,000 Hiroshima-sized detonations would be needed before the long-term hazards became serious. [ 5 ] The testing was done with bones, urine and tissue samples collected worldwide. These samples were all tested for nuclear fallout, yet the collection was presented under the false guise of studies of nutritional importance and naturally occurring radon. It was determined that Sr-90 is a "bone-seeker," depositing in bones and marrow after ingestion. Civilian prisoners were considered for certain radiation testing, mainly Utah State Prison inmates. One document revealed that tests done on the bones of a stillborn baby showed strontium-90 levels 36% higher than the average found in other stillborn infants. [ 6 ]
Project GABRIEL opened a wide range of questions about the formation, transformation, fallout and biological hazards of bomb debris. GABRIEL supported work in research projects that might apply to the side effects of nuclear war. It was the sole support of the major research effort of Project SUNSHINE , which tested biological damage from radioactive fallout of Sr-90. By 1954 Project GABRIEL included about 70 investigations supported by the Division of Biology and Medicine. At a summer conference hosted by the RAND Corporation , the estimate of detonations was revised upward to 25,000 megatons' worth of damage. [ 7 ] Project SUNSHINE was initiated on July 21, 1953, under the leadership of radiation physicist Willard Libby , who had examined carbon-14 and developed radiocarbon dating . Libby realized that GABRIEL lacked data on other aspects of fallout. The Project GABRIEL report by the AEC was issued in 1954, while the RAND Corporation issued its report on Project SUNSHINE in 1953. [ 8 ] Both Project GABRIEL and SUNSHINE played a direct role in the reorganization of the AEC's Division of Biology and Medicine in 1957. | https://en.wikipedia.org/wiki/Project_GABRIEL
Project Galileo is an educational astronomy project based at Clifton College in Bristol in the United Kingdom .
Project Galileo started in December 2000, as a result of a telescope being donated to a Bristol school, with the overall aim of making the system available for other schools in the area to use. The system became operational in July 2005, and achieved full on-line capability by the autumn of 2008, running until the main project co-ordinator left teaching.
Project Galileo aims to provide a curriculum resource for Science and Technology at Key Stages 3 & 4 and AS/A2 levels, including ICT & Design & Technology courses at post-16 level, to inspire teenagers to "learn the art as well as science of research and research teams, how scientific research is practiced, how physical ideas become mainstream, how collaboration between scientific disciplines fosters greater understanding, and to encourage links and collaboration between schools of different age-ranges and traditions in the city of Bristol." (Project FAQ)
The project focuses on the issues surrounding the maintenance and successful operation of a remote-controlled observatory . It is built around a remote-controlled telescope with a 1.3 metre dome and CCD camera located in Bristol, which enables imaging and searches of comets , asteroids , supernovae , deep-sky and planetary objects.
Schools registered via the main Project Galileo website, and then logged in to the telescope to order images, much in the same way as is done with other online school observatories . One important feature of the software developed for the project is that it is released under the GPL , so other schools or institutions that want to develop similar systems are able to modify the code rather than having to spend vital educational budgets on proprietary software. [ citation needed ]
Here is a simplified diagram of how the system works: | https://en.wikipedia.org/wiki/Project_Galileo |
Project Gemini ( IPA : /ˈdʒɛmɪni/ ) was the second United States human spaceflight program to fly. Conducted after the first American crewed space program, Project Mercury , while the Apollo program was still in early development, Gemini was conceived in 1961 and concluded in 1966. The Gemini spacecraft carried a two-astronaut crew. Ten Gemini crews and 16 individual astronauts flew low Earth orbit (LEO) missions during 1965 and 1966.
Gemini's objective was the development of space travel techniques to support the Apollo mission to land astronauts on the Moon . In doing so, it allowed the United States to catch up and overcome the lead in human spaceflight capability the Soviet Union had obtained in the early years of the Space Race , by demonstrating mission endurance up to just under 14 days, longer than the eight days required for a round trip to the Moon ; methods of performing extravehicular activity (EVA) without tiring; and the orbital maneuvers necessary to achieve rendezvous and docking with another spacecraft. This left Apollo free to pursue its prime mission without spending time developing these techniques.
All Gemini flights were launched from Launch Complex 19 (LC-19) at Cape Kennedy Air Force Station in Florida. Their launch vehicle was the Titan II GLV , a modified intercontinental ballistic missile . [ note 1 ] Gemini was the first program to use the newly built Mission Control Center at the Houston Manned Spacecraft Center for flight control . [ note 2 ] The project also used the Agena target vehicle , a modified Atlas-Agena upper stage, used to develop and practice orbital rendezvous and docking techniques.
The astronaut corps that supported Project Gemini included the " Mercury Seven ", " The New Nine ", and " The Fourteen ". During the program, three astronauts died in air crashes during training, including both members of the prime crew for Gemini 9. The backup crew flew this mission.
Gemini was robust enough that the United States Air Force planned to use it for the Manned Orbital Laboratory (MOL) program, which was later canceled. Gemini's chief designer, Jim Chamberlin , also made detailed plans for cislunar and lunar landing missions in late 1961. He believed Gemini spacecraft could fly in lunar operations before Project Apollo, and cost less. NASA's administration did not approve those plans. In 1969, McDonnell Douglas proposed a " Big Gemini " that could have been used to shuttle up to 12 astronauts to the planned space stations in the Apollo Applications Project (AAP). The only AAP project funded was Skylab (the first American space station), which used existing spacecraft and hardware, thereby eliminating the need for Big Gemini.
The constellation for which the project was named is commonly pronounced /ˈdʒɛmɪnaɪ/ , the last syllable rhyming with eye . However, staff of the Manned Spacecraft Center, including the astronauts, tended to pronounce the name /ˈdʒɛmɪni/ , rhyming with knee . NASA's public affairs office then issued a statement in 1965 declaring "Jeh'-mih-nee" the "official" pronunciation. [ 2 ] Gus Grissom , acting as Houston capsule communicator when Ed White performed his spacewalk on Gemini 4 , is heard on flight recordings pronouncing the spacecraft's call sign "Jeh-mih-nee 4", and the NASA pronunciation is used in the 2018 film First Man . [ 2 ]
The Apollo program was conceived in early 1960 as a three-man spacecraft to follow Project Mercury . Jim Chamberlin , the head of engineering at the Space Task Group (STG), was assigned in February 1961 to start working on a bridge program between Mercury and Apollo. [ 3 ] He presented two initial versions of a two-man spacecraft, then designated Mercury Mark II, at a NASA retreat at Wallops Island in March 1961. [ 3 ] Scale models were shown in July 1961 at the McDonnell Aircraft Corporation 's offices in St. Louis. [ 3 ]
After Apollo was chartered to land men on the Moon by President John F. Kennedy on May 25, 1961, it became evident to NASA officials that a follow-on to the Mercury program was required to develop certain spaceflight capabilities in support of Apollo. NASA approved the two-man / two-vehicle program rechristened Project Gemini (Latin for "twins"), in reference to the third constellation of the Zodiac with its twin stars Castor and Pollux , on December 7, 1961. [ 3 ] McDonnell Aircraft was contracted to build it on December 22, 1961. [ 4 ] The program was publicly announced on January 3, 1962, with these major objectives: [ 5 ]
Chamberlin designed the Gemini capsule, which carried a crew of two. He was previously the chief aerodynamicist on Avro Canada 's CF-105 Arrow fighter interceptor program. [ 6 ] Chamberlin joined NASA along with 25 senior Avro engineers after cancellation of the Canadian Arrow program, and became head of the U.S. Space Task Group's engineering division in charge of Gemini. [ 6 ] [ 7 ] The prime contractor was McDonnell Aircraft Corporation, which was also the prime contractor for the Project Mercury capsule. [ 8 ]
Astronaut Gus Grissom was heavily involved in the development and design of the Gemini spacecraft . The capsule, which other Mercury astronauts dubbed the "Gusmobile", was so closely designed around Grissom's 5'6" body that, when NASA discovered in 1963 that 14 of the 16 astronauts would not fit in the spacecraft, the interior had to be redesigned. [ 9 ] Grissom wrote in his posthumous 1968 book Gemini! that the realization of Project Mercury 's end and the unlikelihood of his having another flight in that program prompted him to focus all his efforts on the upcoming Gemini program. [ 10 ]
The Gemini program was managed by the Manned Spacecraft Center , located in Houston, Texas , under direction of the Office of Manned Space Flight, NASA Headquarters, Washington, D.C. Dr. George E. Mueller , Associate Administrator of NASA for Manned Space Flight, served as acting director of the Gemini program. William C. Schneider, Deputy Director of Manned Space Flight for Mission Operations served as mission director on all Gemini flights beginning with Gemini 6A.
Guenter Wendt was a McDonnell engineer who supervised launch preparations for both the Mercury and Gemini programs and would go on to do the same when the Apollo program launched crews. His team was responsible for completion of the complex pad close-out procedures just prior to spacecraft launch, and he was the last person the astronauts would see prior to closing the hatch. The astronauts appreciated his taking absolute authority over, and responsibility for, the condition of the spacecraft and developed a good-humored rapport with him. [ 11 ]
In 1961, NASA selected McDonnell Aircraft , which was the prime contractor for the Project Mercury capsule, to build the Gemini capsule, the first of which was delivered in 1963. The spacecraft was 18 feet 5 inches (5.61 m) long and 10 feet (3.0 m) wide, with a launch weight varying from 7,100 to 8,350 pounds (3,220 to 3,790 kg). [ 12 ]
The Gemini crew capsule (referred to as the Reentry Module) was essentially an enlarged version of the Mercury capsule. Unlike Mercury, the retrorockets , electrical power, propulsion systems, oxygen, and water were located in a detachable Adapter Module behind the Reentry Module which would burn up on reentry. A major design improvement in Gemini was to locate all internal spacecraft systems in modular components, which could be independently tested and replaced when necessary, without removing or disturbing other already tested components.
Many components in the capsule itself were accessible through their respective small access doors. Unlike Mercury, Gemini used completely solid-state electronics, and its modular design made it easy to repair. [ 13 ]
Gemini's emergency launch escape system did not use an escape tower powered by a solid-fuel rocket , but instead used aircraft-style ejection seats . The tower was heavy and complicated, and NASA engineers reasoned that they could do away with it as the Titan II's hypergolic propellants would burn immediately on contact. A Titan II booster explosion had a smaller blast effect and flame than on the cryogenically fueled Atlas and Saturn. Ejection seats were sufficient to separate the astronauts from a malfunctioning launch vehicle. At higher altitudes, where the ejection seats could not be used, the astronauts would return to Earth inside the spacecraft, which would separate from the launch vehicle. [ 14 ]
The main proponent of using ejection seats was Chamberlin, who had never liked the Mercury escape tower and wished to use a simpler alternative that would also reduce weight. He reviewed several films of Atlas and Titan II ICBM failures, which he used to estimate the approximate size of the fireball produced by an exploding launch vehicle, and from this he gauged that the Titan II would produce a much smaller explosion, so the spacecraft could get away with ejection seats.
Maxime Faget , the designer of the Mercury LES, was on the other hand less-than-enthusiastic about this setup. Aside from the possibility of the ejection seats seriously injuring the astronauts, they would also only be usable for about 40 seconds after liftoff, by which point the booster would be attaining Mach 1 speed and ejection would no longer be possible. He was also concerned about the astronauts being launched through the Titan's exhaust plume if they ejected in-flight and later added, "The best thing about Gemini was that they never had to make an escape." [ 15 ]
The Gemini ejection system was never tested with the Gemini cabin pressurized with pure oxygen, as it was prior to launch. In January 1967, the fatal Apollo 1 fire demonstrated that pressurizing a spacecraft with pure oxygen created an extremely dangerous fire hazard. [ 16 ] In a 1997 oral history, astronaut Thomas P. Stafford commented on the Gemini 6 launch abort in December 1965, when he and command pilot Wally Schirra nearly ejected from the spacecraft:
So it turns out what we would have seen, had we had to do that, would have been two Roman candles going out, because we were 15 or 16 psi, pure oxygen, soaking in that for an hour and a half. You remember the tragic fire we had at the Cape. (...) Jesus, with that fire going off and that, it would have burned the suits. Everything was soaked in oxygen. So thank God. That was another thing: NASA never tested it under the conditions that they would have had if they would have had to eject. They did have some tests at China Lake where they had a simulated mock-up of Gemini capsule, but what they did is fill it full of nitrogen. They didn't have it filled full of oxygen in the sled test they had. [ 17 ]
Gemini was the first astronaut-carrying spacecraft to include an onboard computer, the Gemini Guidance Computer , to facilitate management and control of mission maneuvers. This computer, sometimes called the Gemini Spacecraft On-Board Computer (OBC), was very similar to the Saturn Launch Vehicle Digital Computer . The Gemini Guidance Computer weighed 58.98 pounds (26.75 kg). Its core memory had 4096 addresses , each containing a 39-bit word composed of three 13-bit "syllables". All numeric data was 26-bit two's-complement integers (sometimes used as fixed-point numbers ), either stored in the first two syllables of a word or in the accumulator . Instructions (always with a 4-bit opcode and 9 bits of operand) could go in any syllable. [ 18 ] [ 19 ] [ 20 ] [ 21 ]
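The word and instruction sizes quoted above are enough to sketch the bit layout in code. The following Python snippet is only an illustrative reconstruction from those figures, not an emulation of the actual Gemini computer: the syllable ordering, the placement of the opcode in the top bits of a syllable, and the use of the upper two syllables for data words are assumptions made here for the example.

```python
# Illustrative sketch (not an emulator) of the word layout described above:
# 39-bit words of three 13-bit syllables, 26-bit two's-complement data,
# and instructions packed as a 4-bit opcode plus a 9-bit operand.
# Assumed (not from the source): syllable ordering, opcode in the top bits,
# and data held in the upper two syllables of a word.

WORD_BITS = 39
SYLLABLE_BITS = 13
OPERAND_BITS = 9
SYLLABLE_MASK = (1 << SYLLABLE_BITS) - 1


def split_syllables(word: int) -> list[int]:
    """Split a 39-bit word into three 13-bit syllables, most significant first."""
    assert 0 <= word < (1 << WORD_BITS)
    return [(word >> (SYLLABLE_BITS * i)) & SYLLABLE_MASK for i in (2, 1, 0)]


def decode_instruction(syllable: int) -> tuple[int, int]:
    """Unpack a 13-bit instruction syllable into an (opcode, operand) pair."""
    opcode = syllable >> OPERAND_BITS                # top 4 bits
    operand = syllable & ((1 << OPERAND_BITS) - 1)   # low 9 bits
    return opcode, operand


def decode_data(word: int) -> int:
    """Read the upper two syllables of a word as a 26-bit two's-complement integer."""
    raw = word >> SYLLABLE_BITS      # discard the third syllable
    if raw & (1 << 25):              # sign bit of the 26-bit field
        raw -= 1 << 26
    return raw


if __name__ == "__main__":
    # A word holding two hypothetical instructions and an empty third syllable.
    word = (0b0010_000000011 << 26) | (0b0100_000000101 << 13)
    print([decode_instruction(s) for s in split_syllables(word)[:2]])  # [(2, 3), (4, 5)]
    print(decode_data(((1 << 26) - 1) << 13))                          # -1
```

Running the example prints the two decoded (opcode, operand) pairs and a sign-extended data value, matching the 4-bit/9-bit and 26-bit splits quoted above.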
Unlike Mercury, Gemini used in-flight radar and an artificial horizon , similar to those used in the aviation industry. [ 18 ] Like Mercury, Gemini used a joystick to give the astronauts manual control of yaw, pitch, and roll . Gemini added control of the spacecraft's translation (forward, backward, up, down, and sideways) with a pair of T-shaped handles (one for each crew member). Translation control enabled rendezvous and docking , and crew control of the flight path. The same controller types were also used in the Apollo spacecraft . [ 9 ]
The original intention for Gemini was to land on solid ground instead of at sea, using a Rogallo wing rather than a parachute, with the crew seated upright controlling the forward motion of the craft. To facilitate this, the airfoil did not attach just to the nose of the craft, but to an additional attachment point for balance near the heat shield. This cord was covered by a strip of metal which ran between the twin hatches. [ 22 ] This design was ultimately dropped, and parachutes were used to make a sea landing as in Mercury. The capsule was suspended at an angle closer to horizontal, so that a side of the heat shield contacted the water first. This eliminated the need for the landing bag cushion used in the Mercury capsule.
The adapter module in turn was separated into a Retro module and an Equipment module.
The Retro module contained four solid-fuel TE-M-385 Star-13 E retrorockets, each spherical in shape except for its rocket nozzle, which were structurally attached to two beams that reached across the diameter of the retro module, crossing at right angles in the center. [ 23 ] Re-entry began with the retrorockets firing one at a time. Abort procedures at certain periods during lift-off would cause them to fire at the same time, thrusting the Descent module away from the Titan rocket.
Gemini was equipped with an Orbit Attitude and Maneuvering System (OAMS), containing sixteen thrusters for translation control in all three perpendicular axes (forward/backward, left/right, up/down), in addition to attitude control (pitch, yaw, and roll angle orientation) as in Mercury. Translation control allowed changing orbital inclination and altitude, necessary to perform space rendezvous with other craft, and docking with the Agena Target Vehicle (ATV), with its own rocket engine which could be used to perform greater orbit changes.
Early short-duration missions had their electrical power supplied by batteries; later endurance missions used the first fuel cells in crewed spacecraft.
Gemini was in some regards more advanced than Apollo, because the latter program had begun almost a year earlier. It became known as a "pilot's spacecraft" due to its assortment of jet fighter-like features, in no small part due to Gus Grissom's influence over the design, and it was at this point that the US manned space program clearly began showing its superiority over that of the Soviet Union in long-duration flight, rendezvous, and extravehicular capability. [ note 4 ] The Soviet Union during this period was developing the Soyuz spacecraft intended to take cosmonauts to the Moon, but political and technical problems began to get in the way, leading to the ultimate end of their crewed lunar program.
The Titan II debuted in 1962 as the Air Force's second-generation ICBM to replace the Atlas. By using hypergolic fuels, it could be stored longer and be more easily readied for launch, in addition to being a simpler design with fewer components. The only caveat was that the propellant mix ( nitrogen tetroxide and hydrazine ) was extremely toxic compared to the Atlas's liquid oxygen/RP-1. However, the Titan had considerable difficulty being man-rated due to early problems with pogo oscillation . The launch vehicle used a radio guidance system that was unique to launches from Cape Kennedy.
Deke Slayton , as director of flight crew operations, had primary responsibility for assigning crews for the Gemini program. Each flight had a primary crew and backup crew, and the backup crew would rotate to primary crew status three flights later. Slayton intended for first choice of mission commands to be given to the four remaining active astronauts of the Mercury Seven : Alan Shepard , Grissom, Cooper, and Schirra. ( John Glenn had retired from NASA in January 1964 and Scott Carpenter , who was blamed by some in NASA management for the problematic reentry of Aurora 7 , was on leave to participate in the Navy's SEALAB project and was grounded from flight in July 1964 due to an arm injury sustained in a motorbike accident. Slayton himself continued to be grounded due to a heart problem.) As for Shepard, an inner ear deficiency caused by Ménière's disease effectively grounded him during Gemini training and kept him off the flight roster until he underwent corrective surgery; he did not fly on Gemini at all, but returned to flight as commander of Apollo 14 .
Titles used for the left-hand (command) and right-hand (pilot) seat crew positions were taken from the U.S. Air Force pilot ratings , Command Pilot and Pilot . Sixteen astronauts flew on 10 crewed Gemini missions:
In late 1963, Slayton selected Shepard and Stafford for Gemini 3, McDivitt and White for Gemini 4, and Schirra and Young for Gemini 5 (which was to be the first Agena rendezvous mission). The backup crew for Gemini 3 was Grissom and Borman, who were also slated for Gemini 6 , to be the first long-duration mission. Finally Conrad and Lovell were assigned as the backup crew for Gemini 4 .
Delays in the production of the Agena Target Vehicle caused the first rearrangement of the crew rotation. The Schirra and Young mission was bumped to Gemini 6 and they became the backup crew for Shepard and Stafford. Grissom and Borman then had their long-duration mission assigned to Gemini 5.
The second rearrangement occurred when Shepard developed Ménière's disease , an inner ear problem. Grissom was then moved to command Gemini 3. Slayton felt that Young was a better personality match with Grissom and switched Stafford and Young. Finally, Slayton tapped Cooper to command the long-duration Gemini 5. Again for reasons of compatibility, he moved Conrad from backup commander of Gemini 4 to pilot of Gemini 5, and Borman to backup command of Gemini 4. Finally he assigned Armstrong and Elliot See to be the backup crew for Gemini 5.
The third rearrangement of crew assignment occurred when Slayton felt that See wasn't up to the physical demands of EVA on Gemini 8. He reassigned See to be the prime commander of Gemini 9 and put Scott as pilot of Gemini 8 and Charles Bassett as the pilot of Gemini 9.
The fourth and final rearrangement of the Gemini crew assignment occurred after the deaths of See and Bassett when their trainer jet crashed, coincidentally into a McDonnell building which held their Gemini 9 capsule in St. Louis. The backup crew of Stafford and Cernan was then moved up to the new prime crew of Gemini 9A. Lovell and Aldrin were moved from being the backup crew of Gemini 10 to be the backup crew of Gemini 9. This cleared the way through the crew rotation for Lovell and Aldrin to become the prime crew of Gemini 12.
Along with the deaths of Grissom, White, and Roger Chaffee in the fire of Apollo 1 , this final arrangement helped determine the makeup of the first seven Apollo crews, and who would be in position for a chance to be the first to walk on the Moon.
In April 1964 and January 1965, two Gemini missions were flown without crews to test systems and the heat shield. These were followed by 10 flights with crews in 1965 and 1966. All were launched by Titan II launch vehicles. Some highlights from the Gemini program:
Rendezvous in orbit is not a straightforward maneuver. Should a spacecraft increase its speed to catch up with another, the result is that it goes into a higher and slower orbit and the distance thereby increases. The right procedure is to go to a lower orbit first, which increases relative speed, and then approach the target spacecraft from below and decrease orbital speed to meet it. [ 24 ] To practice these maneuvers, special rendezvous and docking simulators were built for the astronauts. [ 25 ]
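A rough numerical comparison makes the point. The Python sketch below uses the circular-orbit form of the vis-viva equation and Kepler's third law to compare the speed and period of two circular orbits; the 250 km and 300 km altitudes are arbitrary example values chosen for illustration, not parameters of any actual Gemini rendezvous.

```python
# Back-of-the-envelope illustration of why dropping to a lower circular
# orbit lets a chaser catch up with a target: the lower orbit is both
# faster and shorter in period. Altitudes are arbitrary example values.
import math

MU_EARTH = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m


def circular_orbit(altitude_m: float) -> tuple[float, float]:
    """Return (orbital speed in m/s, orbital period in s) for a circular orbit."""
    r = R_EARTH + altitude_m
    speed = math.sqrt(MU_EARTH / r)                         # vis-viva, circular case
    period = 2.0 * math.pi * math.sqrt(r ** 3 / MU_EARTH)   # Kepler's third law
    return speed, period


if __name__ == "__main__":
    for label, altitude in (("target at 300 km", 300e3), ("chaser at 250 km", 250e3)):
        speed, period = circular_orbit(altitude)
        print(f"{label}: {speed:7.1f} m/s, period {period / 60:5.1f} min")
    # The chaser's shorter period means it gains on the target every
    # revolution; the final approach then reduces that relative speed to
    # match the target's orbit, as described above.
```

With these example altitudes the lower orbit comes out roughly 30 m/s faster and about a minute shorter in period, which is what lets the chaser close the gap before raising its orbit to meet the target.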
The Gemini-Titan II launch vehicle was adapted by NASA from the U.S. Air Force Titan II ICBM . (Similarly, the Mercury-Atlas launch vehicle had been adapted from the USAF Atlas missile .) The Gemini-Titan II rockets were assigned Air Force serial numbers, which were painted in four places on each Titan II (on opposite sides on each of the first and second stages). USAF crews maintained Launch Complex 19 and prepared and launched all of the Gemini-Titan II launch vehicles. Data and experience operating the Titans was of value to both the U.S. Air Force and NASA.
The USAF serial numbers assigned to the Gemini-Titan launch vehicles are given in the tables above. Fifteen Titan IIs were ordered in 1962, so the serials were of the form "62-12XXX", but only "12XXX" was painted on each Titan II. The order for the last three of the 15 launch vehicles was canceled on July 30, 1964, and they were never built. Serial numbers were, however, assigned to them prospectively: 12568 - GLV-13; 12569 - GLV-14; and 12570 - GLV-15.
From 1962 to 1967, Gemini cost $1.3 billion in 1967 dollars ($9.07 billion in 2023 [ 29 ] ). [ 1 ] In January 1969, a NASA report to the US Congress estimating the costs for Mercury, Gemini, and Apollo (through the first crewed Moon landing) included $1.2834 billion for Gemini: $797.4 million for spacecraft, $409.8 million for launch vehicles, and $76.2 million for support. [ 30 ]
A number of detailed Gemini models and mockups are on display: [ 51 ]
McDonnell Aircraft, the main contractor for Mercury and Gemini, was also one of the original bidders on the prime contract for Apollo, but lost out to North American Aviation . McDonnell later sought to extend the Gemini program by proposing a derivative which could be used to fly a cislunar mission and even achieve a crewed lunar landing earlier and at less cost than Apollo, but these proposals were rejected by NASA.
A range of applications were considered for Advanced Gemini missions, including military flights, space station crew and logistics delivery, and lunar flights. The Lunar proposals ranged from reusing the docking systems developed for the Agena Target Vehicle on more powerful upper stages such as the Centaur, which could propel the spacecraft to the Moon, to complete modifications of the Gemini to enable it to land on the lunar surface. Its applications would have ranged from crewed lunar flybys before Apollo was ready, to providing emergency shelters or rescue for stranded Apollo crews, or even replacing the Apollo program.
Some of the Advanced Gemini proposals used "off-the-shelf" Gemini spacecraft, unmodified from the original program, while others featured modifications to allow the spacecraft to carry more personnel, dock with space stations, visit the Moon, and perform other mission objectives. Other modifications considered included the addition of wings or a parasail to the spacecraft, in order to enable it to make a horizontal landing.
Big Gemini (or "Big G") was another proposal by McDonnell Douglas made in August 1969. It was intended to provide large-capacity, all-purpose access to space, including missions that ultimately used Apollo or the Space Shuttle.
The study was performed to generate a preliminary definition of a logistic spacecraft derived from Gemini that would be used to resupply an orbiting space station. Land-landing at a preselected site and refurbishment and reuse were design requirements. Two baseline spacecraft were defined: a nine-man minimum modification version of the Gemini B called Min-Mod Big G and a 12-man advanced concept, having the same exterior geometry but with new, state-of-the-art subsystems, called Advanced Big G. [ 52 ] Three launch vehicles, the Saturn IB , Titan IIIM , and Saturn INT-20 (S-IC/S-IVB), were investigated for use with the spacecraft.
The Air Force had an interest in the Gemini system, and decided to use its own modification of the spacecraft as the crew vehicle for the Manned Orbital Laboratory . To this end, the Gemini 2 spacecraft was refurbished and flown again atop a mockup of the MOL, sent into space by a Titan III C. This was the first time a spacecraft went into space twice.
The USAF also thought of adapting the Gemini spacecraft for military applications, such as crude observation of the ground (no specialized reconnaissance camera could be carried) and practicing making rendezvous with suspicious satellites. This project was called Blue Gemini . The USAF did not like the fact that Gemini would have to be recovered by the US Navy, so they intended for Blue Gemini eventually to use the airfoil and land on three skids, carried over from the original design of Gemini.
At first some within NASA welcomed sharing of the cost with the USAF, but it was later agreed that NASA was better off operating Gemini by itself. Blue Gemini was canceled in 1963 by Secretary of Defense Robert McNamara , who decided the NASA Gemini flights could conduct necessary military experiments. MOL was canceled by Secretary of Defense Melvin Laird in 1969, when it was determined that uncrewed spy satellites could perform the same functions much more cost-effectively.
This article incorporates public domain material from websites or documents of the National Aeronautics and Space Administration . | https://en.wikipedia.org/wiki/Project_Gemini |
Project Habakkuk or Habbakuk (spelling varies) was a plan by the British during the Second World War to construct an aircraft carrier out of pykrete , a mixture of wood pulp and ice , for use against German U-boats in the mid- Atlantic , which were beyond the flight range of land-based planes at that time. At 600 metres (1,969 ft) long, it would have been the largest ship ever built, far larger than even the USS Enterprise and the USS Gerald R. Ford ; the former, at 342 metres (1,122 ft) long, is the longest naval vessel ever constructed. The idea came from Geoffrey Pyke , who worked for Combined Operations Headquarters . After promising scale tests and the creation of a prototype on Patricia Lake , Jasper National Park , in Alberta , Canada, the project was shelved due to rising costs, added requirements, and the availability of longer-range aircraft and escort carriers which closed the Mid-Atlantic gap that the project was intended to address.
Geoffrey Pyke was an old friend of J. D. Bernal and had been recommended to Lord Louis Mountbatten , Chief of Combined Operations , by the cabinet minister Leopold Amery . Pyke worked at Combined Operations Headquarters (COHQ) alongside Bernal and was regarded as a genius by Mountbatten. [ 1 ]
Pyke conceived the idea of Habakkuk while he was in the United States organising the production of M29 Weasels for Project Plough , a scheme to assemble an elite unit for winter operations in Norway, Romania and the Italian Alps. [ 1 ] He had been considering the problem of how to protect seaborne landings and Atlantic convoys out of reach of aircraft cover. The problem was that steel and aluminium were in short supply, and were required for other purposes. Pyke decided that the answer was ice, which could be manufactured for just 1% of the energy needed to make an equivalent mass of steel. He proposed that an iceberg, natural or artificial, be levelled to provide a runway and hollowed out to shelter aircraft.
From New York, Pyke sent the proposal via diplomatic bag to COHQ, with a label forbidding anyone apart from Mountbatten from opening the package. Mountbatten in turn passed Pyke's proposal on to Churchill , who was enthusiastic about it. [ 2 ]
Pyke was not the first to suggest a floating mid-ocean stopping point for aircraft, nor even the first to suggest that such a floating island could be made of ice. A German scientist, A. Gerke from Waldenburg, had proposed the idea and carried out some preliminary experiments on Lake Zurich in 1930. [ 3 ] [ 4 ] The idea was a recurring one: in 1940 an idea for an ice island was circulated around the Admiralty , but was treated as a joke by officers, including Nevil Shute , who circulated a memorandum that gathered ever more caustic comments. The document was retrieved just before it reached the First Sea Lord 's inbox. [ 5 ]
The project's code name was often incorrectly spelled Habbakuk in official documents. This may have been Pyke's error. At least one early unsigned document (apparently written by him) spells it Habbakuk. However, post-war publications by people concerned with the project, such as Perutz and Goodeve, all restore the proper spelling, with one "b" and three "k"s. [ citation needed ] The name is a reference to the project's ambitious goal:
Behold ye among the heathen, and regard, and wonder marvellously: for I will work a work in your days, which ye will not believe, though it be told you. Habakkuk 1:5
David Lampe, in his book, Pyke, the Unknown Genius , states that the name was derived from Voltaire 's Candide and was misspelled by Pyke's Canadian secretary. However, the word does not actually appear in Candide , [ 6 ] so this is probably inaccurate.
In early 1942 Pyke and Bernal called in Max Perutz to determine whether an icefloe large enough to withstand Atlantic conditions could be built up fast enough. Perutz pointed out that natural icebergs have too small a surface above water for an airstrip, and are prone to suddenly rolling over. The project would have been abandoned if it had not been for the invention of pykrete , a mixture of water and woodpulp that when frozen was stronger than plain ice, was slower-melting and would not sink. Developed by his government group and named after Pyke, it has been suggested that Pyke was inspired by Inuit sleds reinforced with moss. [ 7 ] This is probably apocryphal, as the material was originally described in a paper by Mark and Hohenstein in Brooklyn. [ 2 ]
Pykrete could be machined like wood and cast into shapes like metal, and when immersed in water formed an insulating shell of wet wood pulp on its surface that protected its interior from further melting. However, Perutz found a problem: ice flows slowly, in what is known as plastic flow , and his tests showed that a pykrete ship would slowly sag unless it was cooled to −16 °C (3 °F). To accomplish this the ship's surface would have to be protected by insulation, and it would need a refrigeration plant and a complicated system of ducts. [ 2 ]
Perutz proceeded to conduct experiments on the viability of pykrete and its optimum composition in a secret location underneath Smithfield Meat Market in the City of London . [ 8 ] [ 9 ] The research took place in a refrigerated meat locker behind a protective screen of frozen animal carcasses. [ 10 ]
The decision was made to build a large-scale model at Jasper National Park in Canada to examine insulation and refrigeration techniques, and to see how pykrete would stand up to artillery and explosives. Large ice blocks were constructed at Lake Louise, Alberta , and a small prototype was constructed at Patricia Lake, Alberta , measuring 60 by 30 feet (18 metres by 9 metres), weighing 1,000 tons and kept frozen by a one-horsepower motor. [ 10 ] The work was done by conscientious objectors who did alternative service of various kinds instead of military service. They were never told what they were building. [ 11 ] Bernal informed COHQ that the Canadians were building a 1,000-ton model, and that it was expected to take eight men fourteen days to build it. The Chief of Combined Operations (CCO) responded that Churchill had invited the Chiefs of Staff Committee to arrange for an order to be placed for one complete ship at once, with the highest priority, and that further ships were to be ordered immediately if it appeared that the scheme was certain of success.
The Canadians were confident about constructing a vessel for 1944. The necessary materials were available to them in the form of 300,000 tons of wood pulp, 25,000 tons of fibreboard insulation, 35,000 tons of timber and 10,000 tons of steel. The cost was estimated at £700,000. [ 12 ]
Meanwhile Perutz had determined via his experiments at Smithfield Market that the optimum structural properties were given by a mixture of 14 per cent wood pulp and 86 per cent water. He wrote to Pyke in early April 1943 and pointed out that if certain tests were not completed in May, there would be no chance of delivering a completed ship in 1944.
By May the problem of cold flow had become serious and it was obvious that more steel reinforcement would be needed, as well as a more effective insulating skin around the vessel's hull. This caused the cost estimate to grow to £2.5 million. In addition, the Canadians had decided that it was impractical to attempt the project "this coming season". Bernal and Pyke were forced to conclude that no Habakkuk vessel would be ready in 1944. [ 12 ]
Pyke was excluded from the planning for Habakkuk in an effort to secure American participation, a decision that Bernal supported. Pyke's earlier disagreements with American personnel on Project Plough , which had caused his removal from that project, were the main factor in this decision. [ 13 ]
Naval architects and engineers continued to work on Habakkuk with Bernal and Perutz during the summer of 1943. The requirements for the vessel became more demanding: it had to have a range of 7,000 miles (11,000 km) and be able to withstand the largest waves recorded, and the Admiralty wanted it to be torpedo-proof, which meant that the hull had to be at least 40 ft (12 m) thick. The Fleet Air Arm decided that heavy bombers should be able to take off from it, which meant that the deck had to be 2,000 ft (610 m) long. Steering also raised problems; it was initially projected that the ship would be steered by varying the speed of the motors on either side, but the Royal Navy decided that a rudder was essential. However, the problem of mounting and controlling a rudder over 100 ft (30 m) high was never solved. [ 12 ]
Naval architects produced three alternative versions of Pyke's original concept, which were discussed at a meeting with the Chiefs of Staff in August 1943.
Air Chief Marshal Portal asked about potential bomb damage to Habakkuk III, and Bernal suggested that a certain amount of deck covering might be ripped off, but could be repaired by some kind of flexible matting. It would be more difficult to deal with bomb holes in the centre portion, though the roof over the aircraft hangars would be made blast proof against 1,000 kg bombs. Bernal considered that no one could say whether the larger Habakkuk II was a practical proposition until a large-scale model could be completed and tested in Canada in the spring of 1944. He had no doubts about the suitability of pykrete as a material, but said that constructional and navigational difficulties remained to be overcome. [ 12 ]
The final design of Habakkuk II gave the bergship , as it was called, a displacement of 2.2 million tons. Steam turbogenerators were to supply 33,000 hp (25,000 kW) for 26 electric motors mounted in separate external nacelles (normal, internal ship engines would have generated too much heat for an ice craft). Its armament would have included 40 dual-barrelled 4.5" DP (dual-purpose) turrets and numerous light anti-aircraft guns, and it would have housed an airstrip and up to 150 twin-engined bombers or fighters. [ 2 ]
According to some accounts, at the Quebec Conference in 1943 Lord Louis Mountbatten brought a block of pykrete along to demonstrate its potential to the admirals and generals who accompanied Winston Churchill and Franklin D. Roosevelt . Mountbatten entered the project meeting with two blocks and placed them on the ground. One was a normal ice block and the other was pykrete. He then drew his service pistol and shot at the first block. It shattered and splintered. Next he fired at the pykrete to give an idea of the resistance of that kind of ice to projectiles. The bullet ricocheted off the block, grazing the trouser leg of Admiral Ernest King , and ended up in the wall.
Sir Alan Brooke 's diaries support this account, [ 15 ] telling how Mountbatten brought two blocks, one of ice and one of pykrete. After first shooting at the ice, with a warning to beware of splinters, Mountbatten said "I shall fire at the block on the right to show you the difference". Brooke reported that "the bullet rebounded out of the block and buzzed round our legs like an angry bee".
Max Perutz gave an account of a similar incident in his book I Wish I'd Made You Angry Earlier . A demonstration of pykrete was given at Combined Operations Headquarters (COHQ) by a naval officer, Lieutenant Commander Douglas Adshead-Grant , whom Perutz had supplied with rods of ice and pykrete packed in dry ice in thermos flasks, along with large blocks of both materials. Grant demonstrated the comparative strength of ice and pykrete by firing bullets into both blocks: the ice shattered, but the bullet rebounded from the pykrete and hit the Chief of the Imperial General Staff, Sir Alan Brooke , in the shoulder. [ 16 ]
By the time of the 1943 Quebec Conference the Habakkuk project had won the support of both Churchill and Mountbatten, [ 17 ] and was assigned to the National Research Council of Canada because of the cold Canadian winters and Canadians' prior familiarity with ice physics. The small prototype built in 1944 on Patricia Lake near Jasper, Alberta, confirmed the researchers' forecast that the full-size vessel would cost more money and machinery than a whole fleet of conventional aircraft carriers. (The sunken remains of the metal parts of the boat remain there to this day.) [ 18 ] NRC President C. J. Mackenzie later said British promoters of Habakkuk were so intimidated by Prime Minister Churchill that they kept this information from him until Mackenzie's next visit to Britain. [ 19 ]
Mountbatten later listed several reasons why construction of the special vessel would have been expensive and unnecessary. [ citation needed ]
In addition, Mountbatten himself withdrew from the project.
The final meeting of the Habakkuk board took place in December 1943. It was officially concluded that "The large Habbakuk II made of pykrete has been found to be impractical because of the enormous production resources required and technical difficulties involved."
The use of ice had actually been falling out of favour before that, and other ideas for " floating islands " had been considered, such as welding Liberty Ships or landing craft together ( Project TENTACLE ). [ 20 ] It took three hot summers to completely melt the prototype constructed in Canada.
Perutz wrote that he stayed in Washington D.C. while U.S. Navy engineers evaluated the viability of Habakkuk. He concluded: "The U.S. Navy finally decided that Habakkuk was a false prophet. One reason was [that] the enormous amount of steel needed for the refrigeration plant that was to freeze the pykrete was greater than that needed to build the entire carrier of steel, but the crucial argument was that the rapidly increasing range of land-based aircraft rendered floating islands unnecessary." [ 21 ]
The Habakkuk design received criticism, notably from Sir Charles F. Goodeve , Assistant Controller of Research and Development for the Admiralty during the Second World War. [ 22 ] In an article published after the war, Goodeve pointed out that the amount of wood pulp that would have been required was large enough to affect paper production significantly. He also claimed that each ship would require 40,000 tons of cork insulation, thousands of miles of steel tubing for brine circulation and four power stations, but that for all those resources (some of which could have been used to manufacture conventional ships of more effective fighting power) Habakkuk would be capable of travelling at only 6 knots (11 km/h). His article also contained extensive derisive comments about the properties of ice as used for ship construction.
In the 15 April 2009 episode of the U.S. TV show MythBusters , Jamie Hyneman and Adam Savage built a small flat-bottomed boat dubbed Yesterday's News out of a modified version of pykrete, using whole sheets of wet newspaper instead of wood pulp. They successfully piloted the boat in Alaskan waters at a speed of 25 miles per hour (40 km/h), but within 20 minutes it began to leak as the pykrete melted. After attempting to flash-freeze the leaks with a fire extinguisher and to bail the water with a hand pump, Hyneman determined ten minutes later that the boat was taking on more water than the pump could remove, and they headed back to shore, trailing sloughed portions of newspaper in their wake. They concluded that it is possible to build a boat out of pykrete, and that pykrete lived up to its purported properties of being bullet-proof, stronger than ice and slower to melt. However, they expressed doubt that an aircraft carrier made of pykrete could have survived for long. The conclusion was "Plausible, but ludicrous." [ 23 ]
In September 2010 the BBC programme Bang Goes the Theory also attempted to recreate a pykrete boat. A rough hull using 5,000 kilograms (11,000 lb) of hemp fibre pykrete was frozen in a coldstore, then launched in Portsmouth Harbour for a planned trip across the Solent to Cowes . The hull immediately started to leak because of the holes that had been cut in its rear to mount an outboard motor; the weight of the motor itself caused these holes to drop below the waterline. [ 24 ] [ 25 ] | https://en.wikipedia.org/wiki/Project_Habakkuk |
Project Houdini is a computer program used by the 2008 U.S. presidential campaign of Barack Obama . Although it suffered missteps early on, it has been credited with helping increase Democratic Party turnout over the 2004 election. It has been compared to ORCA , which failed under similar circumstances during Mitt Romney's 2012 presidential campaign . [ 1 ] [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Project_Houdini
Project Jefferson was a covert U.S. Defense Intelligence Agency program designed to determine if the current anthrax vaccine was effective against genetically modified bacteria . The program's legal status under the 1972 Biological Weapons Convention (BWC) is disputed. [ citation needed ]
Project Jefferson began in 1997 [ 1 ] and was designed to reproduce a strain of genetically modified anthrax isolated by Russian scientists during the 1990s. [ 2 ] The goal was to determine whether or not the strain was resistant to the commercially available U.S. anthrax vaccine . [ 2 ]
The project was disclosed in a September 4, 2001 article in The New York Times . [ 3 ] [ 4 ] Reporters Judith Miller , Stephen Engelberg and William J. Broad collaborated to write the article. [ 3 ] It is presumed that the reporters had knowledge of the program for at least several months; shortly after the article appeared they published a book that detailed the story further. [ 3 ] The 2001 book, Germs: Biological Weapons and America's Secret War , and the article are the only publicly available sources [ citation needed ] detailing Project Jefferson and its sister projects, Bacchus and Clear Vision . [ 3 ]
Project Jefferson was operated by the Defense Intelligence Agency and reviewed by lawyers at the Pentagon . [ 4 ] Those lawyers determined that Project Jefferson was in line with the BWC. [ 4 ] Despite assertions from the Clinton and Bush administrations that the project, and its sisters, were legal, several international legal scholars disagreed. [ 2 ]
Notable was the fact that the clandestine program was omitted from BWC confidence-building measure (CBM) declarations. [ 2 ] These measures were introduced to the BWC in 1986 and 1991 to strengthen the treaty; the U.S. had long been a proponent of their value, and some asserted [ who? ] that these tests damaged American credibility. [ 2 ] U.S. desire to keep such programs secret was, according to Bush administration officials, a "significant reason" that Bush rejected a draft agreement signed by 143 nations to strengthen the BWC. [ 4 ] | https://en.wikipedia.org/wiki/Project_Jefferson
Project Kahu was a major upgrade program for the A-4K Skyhawk attack aircraft operated by the Royal New Zealand Air Force (RNZAF) in the mid-1980s. Prior to the implementation of the upgrade, the A-4K Skyhawks, which had served with the RNZAF since 1970, had become dated compared to modern jet fighter aircraft. The project was named after the Māori-language name for the New Zealand swamp harrier .
By 1982 it was increasingly apparent that the Douglas A-4 Skyhawk fleet that equipped the Strike Wing of the Royal New Zealand Air Force (RNZAF) required modernisation, particularly in respect of its navigation and weapons-delivery systems. A request for tender to modernise the Skyhawks was opened in May that year, with a proposal from the American company Lear Siegler Inc. deemed to be the most appropriate. [ 1 ] [ 2 ]
In 1984, the upgrade program was approved in principle, concurrently with the purchase of ten Skyhawks from Australia; these had been serving with the Royal Australian Navy . [ 3 ] Meanwhile, a Defence Review conducted in 1983 saw the Skyhawks take on a new role as a maritime strike force, rather than the close support role they had previously been tasked with. Since this changed the equipment required for the upgrade, a new tendering process commenced, with Lear Siegler again submitting the best proposal, which also covered the Skyhawks newly acquired from Australia. The New Zealand Government approved the tender, which would cost NZ$148 million, on 1 May 1985. However, it was not until March 1986 that the contract with Lear Siegler to upgrade twenty-two aircraft (ten of them the former Australian Skyhawks) was signed. The project was designated Kahu after the New Zealand swamp harrier . [ 4 ] Prior to commencing the actual upgrade, personnel from the RNZAF's No. 75 Squadron , which operated the Skyhawks, went to the United States to liaise with Lear Siegler staff. [ 2 ]
Lear Siegler was not the only company involved in the upgrade. Private companies in New Zealand were also engaged, including Pacific Aerospace in Hamilton , which worked on the wings, while Fisher & Paykel did electrical work. SAFE Air was another local company involved in the work, providing draughting services. The work was completed by RNZAF staff at the RNZAF's No. 1 Repair Depot at Woodbourne . [ 5 ] Although the contract covered all twenty-two of the RNZAF's Skyhawks, only twenty-one were completed as one A-4K (NZ6210) was lost in 1989 before it was upgraded. [ 6 ]
The Kahu upgrade called for the installation of nearly 30 individual avionic and weapons systems. [ 7 ] One major system, and the single most expensive item, was the Westinghouse AN/APG-66 radar , which had a range of 80 nautical miles (150 km). This was the radar used in the General Dynamics F-16 Fighting Falcon , but optimized for maritime tracking. Space constraints limited the size of the antenna that could be fitted, although changes to the software were able to compensate for the resulting loss of antenna gain. [ 7 ] Another addition was the Ferranti 4510 wide-angle head-up display (HUD) in the cockpit; this showed flight and weapon-aiming information on a screen in the pilot's field of vision, allowing pilots to fly without looking down at the instrumentation. The HUD also included a video recording system. [ 8 ] Once the HUD and its related equipment were installed, it was discovered that the centre of gravity of the Skyhawk was affected. This was overcome with the addition of 106 pounds (48 kg) of weight to the tail of the aircraft. [ 9 ]
A General Instrument ALR-66 radar warning receiver was installed to detect radar emissions. [ 8 ] An ALE-39 chaff and flare dispenser, as used by the Royal Australian Air Force (RAAF) on its McDonnell Douglas F/A-18 Hornets , was added. This compatibility was useful during training exercises in Australia, since chaff and flares could be readily sourced from participating RAAF squadrons. A new control stick with hands-on-throttle-and-stick (HOTAS) functionality, consolidating multiple switches, was added. [ 10 ] A glass cockpit with two large cathode-ray tube screens, a MIL-STD-1553B databus and a Litton Industries LN-93 inertial navigation system were among the other items of equipment included in the upgrade. [ 11 ]
Parts of the wings were reskinned and the wiring was replaced. The aircraft also received armament upgrades, including the ability to fire AIM-9L Sidewinders , AGM-65 Mavericks and GBU-16 Paveway II laser-guided bombs . [ 5 ] Using advances in miniaturization, it was possible to incorporate these additional electronics entirely within the fuselage without requiring the use of the dorsal hump. [ 12 ] The Kahu-modified Skyhawk could be recognized by a blade-like instrument landing system antenna on the leading edge of the vertical stabilizer . [ 13 ]
A two-seater TA-4K Skyhawk, with the tail code NZ6254, was the first aircraft to be completed and was unveiled to the public on 2 June 1988. [ 14 ] It subsequently undertook an extensive test programme. This was conducted by Flight Lieutenant Steve Moore, who had recently become only the second RNZAF pilot to graduate from the Empire Test Pilot School in the United Kingdom. [ 15 ] [ 16 ] The program was completed in June 1991 when the final aircraft, NZ6202, was returned to No. 75 Squadron. [ 17 ]
Once completed, the Kahu upgrade was judged largely successful, with noticeable improvements in the accuracy of gunnery and bombing exercises. [ 17 ] As of 1995, staff at the United States Foreign Military Sales support office regarded the A-4K as the "most sophisticated Skyhawk flying...the most complete aircraft right now for ground attack and air-to-air combat missions". [ 18 ] However, the ALR-66 radar warning system never performed satisfactorily. The receivers lacked sensitivity, and in training it was common for the system not to be triggered by the radar-guided weapons employed by opponents. [ 8 ] A drawback of the project was that much of the new electronics was bespoke equipment, which subsequently caused supply and maintenance problems that were exacerbated when the Skyhawks were deployed overseas. [ 19 ]
The updated Skyhawks served in the RNZAF until late 2001, when the Strike Wing was disbanded under a new defence policy of the New Zealand Government of the time. The aircraft were put into storage, and eight Skyhawks were eventually sold to Draken International , which would fly them as adversary aircraft as part of its defence training contracts with the United States armed forces. [ 20 ] [ 21 ] The remaining aircraft were donated to museums, including the Air Force Museum of New Zealand , the Museum of Transport & Technology , and the Fleet Air Arm Museum in Australia. [ 22 ] | https://en.wikipedia.org/wiki/Project_Kahu
A Project Labor Agreement ( PLA ), also known as a Community Workforce Agreement , [ 1 ] is a pre-hire collective bargaining agreement with one or more labor unions that establishes the terms and conditions of employment for a specific construction project. [ 2 ] Before any workers are hired on the project, construction unions have bargaining rights to determine the wage rates and benefits of all employees working on the particular project and to agree to the provisions of the agreement. [ 3 ] [ 4 ] The terms of the agreement apply to all contractors and subcontractors who successfully bid on the project, and supersedes any existing collective bargaining agreements . [ 3 ] PLAs are used on both public and private projects, and their specific provisions may be tailored by the signatory parties to meet the needs of a particular project. [ 4 ] The agreement may include provisions to prevent any strikes, lockouts , or other work stoppages for the length of the project. [ 3 ] PLAs typically require that employees hired for the project are referred through union hiring halls , that nonunion workers pay union dues for the length of the project, and that the contractor follow union rules on pensions, work conditions and dispute resolution. [ 5 ]
PLAs are authorized under the National Labor Relations Act (NLRA), 29 U.S.C. §§ 151–169. Sections 8(e) and (f) of the NLRA, 29 U.S.C. §§ 158(e) and (f) make special exceptions from other requirements of the NLRA in order to permit employers to enter into pre-hire agreements with labor unions in the construction industry. [ 6 ] The agreements have been in use in the United States since the 1930s, and first became the subject of debate in the 1980s, for their use on publicly funded projects. In these instances, government entities made signing PLAs a condition of working on taxpayer funded projects. This type of PLA, known as a government-mandated PLA, is distinct from a PLA voluntarily entered into by contractors on public or private work—as is permitted by the NLRA—as well as a PLA mandated by a private entity on a privately funded construction project.
Presidential executive orders issued since 1992 have affected the use of government-mandated PLAs for federal construction projects. Executive Order 13502, issued by President Barack Obama in February 2009, [ 7 ] encouraged federal agencies to consider mandating PLAs on a case-by-case basis for federal contracts of $25 million or more. President Joe Biden 's Executive Order 14063 , which revoked Obama's executive order, [ 7 ] requires PLAs on federal construction contracts of $35 million or more.
The use of PLAs is opposed by a number of groups, [ 8 ] who argue that the agreements discriminate against non-union contractors and do not improve efficiency or reduce costs of construction projects. Studies of PLAs have mixed results, with some studies concluding that PLAs have a favorable impact, while others find that the agreements can increase costs, and may negatively impact non-union contractors and workers.
The earliest uses of Project Labor Agreements in the U.S. date back to several dam projects in the 1930s, including the Grand Coulee Dam in Washington , the Shasta Dam in California and the Hoover Dam in Nevada . [ 9 ] Modern PLAs particularly developed from those used in construction carried out during World War II , a period when skilled labor was in demand, construction unions controlled 87% of the national market [ 10 ] and government spending on construction had increased significantly over a short period of time. These early PLAs focused on establishing standard rates of pay and preventing work stoppages. [ 11 ] PLA projects that followed included Cape Canaveral in the 1960s, [ 12 ] Disney World from 1967 to 1971 and the Trans-Alaska Pipeline from 1973 to 1977. [ 9 ] [ 13 ] During this period and subsequently, the unionized share of the construction industry precipitously declined as construction users sought more open competition. By the 1980s, nonunion contractors claimed in excess of 80% of the construction work, in a wide variety of trades, with some variation in different parts of the country. [ 10 ]
The Boston Harbor reclamation project that began in the 1980s became the focus of debate over the legality of PLAs. [ 12 ] [ 13 ] When the Massachusetts Water Resources Authority elected to use a PLA for the project that mandated union-only labor, [ 14 ] the Associated Builders and Contractors of Massachusetts/Rhode Island, Inc. challenged its legality, asserting that the use of a PLA was prohibited by the National Labor Relations Act . [ 15 ] In 1990, the First Circuit federal appeals court ruled that the Boston Harbor PLA breached federal labor law because of its union-work requirement. [ 16 ]
On October 23, 1992, while the Boston Harbor case was still in court, President George H. W. Bush signed Executive Order 12818 prohibiting federal agencies from exclusively contracting union labor for construction projects. [ 17 ] Bush's order prohibited the use of PLAs in federal construction projects. [ 18 ] The Clinton administration rescinded this order when President Bill Clinton issued Executive Order 12836 in February 1993, shortly after he took office. [ 19 ] This order allowed federal agencies to fund construction projects where contractors required a PLA. [ 20 ] One month later, in the Boston Harbor cleanup case, the United States Supreme Court unanimously upheld the use of the agreements on public projects. [ 6 ] The Supreme Court ruled that if the government was acting as a regulator, it could not require PLA use under labor law preemption principles; however, it could choose to do so as a market participant without being preempted by the National Labor Relations Act. [ 13 ] The Court did not address the separate question of whether government-mandated PLAs are lawful under federal or state competitive bidding laws. The decision led to increased use of PLAs in public-sector construction projects throughout the U.S. [ 12 ] [ 13 ]
In 1997, Clinton proposed an executive order stating that federal agencies must consider use of PLAs for federally funded projects. [ 21 ] Republicans staunchly opposed the move, believing it would restrict federal projects to union contractors only. Clinton abandoned the proposed executive order, [ 22 ] but issued a memorandum on June 5, 1997, encouraging federal departments to consider the use of PLAs for “large and significant” projects. [ 23 ] The memorandum required that government agencies review each project to decide whether a PLA would allow the agency to increase efficiency and reduce costs. [ 20 ]
On February 17, 2001, President George W. Bush signed Executive Order 13202, “Preservation of Open Competition and Government Neutrality Towards Government Contractors’ Labor Relations on Federal and Federally Funded Construction Projects”, prohibiting the use of PLA mandates for construction projects with federal funding. [ 24 ] This order stated that construction projects receiving federal funding would not be allowed to impose project labor agreements. [ 25 ] Specifically, the order declared that neither the federal government, nor any agency acting with federal assistance, shall require or prohibit construction contractors to sign union agreements as a condition of performing work on federally funded construction projects. [ 24 ] The order allowed any PLAs that had previously been agreed to continue, and did not affect projects that did not receive federal funding. [ 26 ] Bush's order revoked the previous executive order affecting PLAs, Clinton's order 12836, which revoked the executive order issued by President George H.W. Bush in 1992. [ 19 ] President George W. Bush issued an amendment in April 2001 (Executive Order 13208), allowing certain projects to be exempted from this order, if a project-related contract had already been awarded with a PLA requirement or prohibition at the time of the order. [ 27 ]
In August 2001, a U.S. District Court ruled Executive Order 13202 invalid in a case examining the use of a PLA by Maryland for the Woodrow Wilson Bridge replacement project, holding that the order conflicted with the National Labor Relations Act . [ 26 ] The judge issued a permanent injunction to block enforcement of the order on November 7, 2001. [ 28 ] [ 29 ] In July 2002, the U.S. Court of Appeals for the District of Columbia overturned the District Court's decision and ordered the removal of the injunction. [ 25 ] Following this decision, the Department of Defense , NASA and the General Services Administration formally recognized the order in the Federal Register and implemented it in their construction bidding processes. [ 29 ]
Although the Court of Appeals decision in 2002 upheld the executive order prohibiting federal projects from using PLAs, individual states and counties were permitted to use PLAs for some public works where funding was from state and local revenue. These PLAs received opposition by organizations such as the Associated Builders and Contractors, and the Black Contractors Group. [ 30 ] A notable example of pro-PLA legislation was passed in New Jersey, which enacted a law in 2002 allowing use of PLAs for some government funded projects. [ 31 ]
On February 6, 2009, President Barack Obama signed executive order 13502, [ 2 ] which urges federal agencies to consider mandating the use of PLAs on federal construction projects costing $25 million or more on a case-by-case basis. [ 32 ] This act served to revoke the Bush executive orders 13202 and 13208 from eight years earlier that prohibited government-mandated PLAs on federal and federally funded construction projects. [ 33 ] The Obama order states that federal agencies can require a PLA if such an agreement will achieve federal goals in economy and efficiency. According to the terms of the order, non-union contractors may compete for contracts subject to PLAs, but they must agree to the various terms and conditions contained in each PLA in order to win a federal contract and build a project. [ 18 ] A key change from the 2001 order is that by repealing the Bush orders, the Obama order permits recipients of federal funding, such as state, local and private construction owners, to mandate PLAs on public works projects of any size. However, the order does not encourage or mandate recipients of federal assistance to use a government-mandated PLA. [ 18 ]
With the February 2009 stimulus bill allocating approximately $140 billion for federal, state and local construction projects, [ 34 ] [ 35 ] battles over government-mandated PLAs on public works projects from 2009 to 2011 have been widespread at the state and local government level. Government officials and legislators have clashed over using PLA mandates on projects in states including Iowa, [ 36 ] Oregon, [ 37 ] Ohio, [ 38 ] California, [ 39 ] and others. [ 40 ] [ 41 ] Individual communities have voted on whether to prohibit the use of government-mandated PLAs on taxpayer funded construction projects, including ballot initiatives in Chula Vista , Oceanside , [ 42 ] and in San Diego County, California in 2010, which resulted in officials being prohibited from mandating or prohibiting the use of PLAs for government projects. [ 43 ] In 2011, contractors filed bid protests with the Government Accountability Office against government mandated PLAs for construction projects in New Hampshire, New Jersey, Pennsylvania and Washington, D.C. These protests led to federal PLA mandates being removed from project solicitations in each case. [ 44 ]
President Joe Biden 's Executive Order 14063 , which revoked Obama's executive order, [ 7 ] requires PLAs on federal construction contracts of $35 million or more and established a training strategy for Contracting Officers . [ 7 ] Federal Acquisition Regulation 52.222-34 provides for a contract clause for inclusion in a relevant construction contract, requiring maintenance of a PLA for the duration of the project. [ 45 ] According to the Federal Acquisition Regulatory Council's 2022 proposed rule implementing Executive Order 14063, from FY 2009 to FY 2021, federal agencies mandated PLAs on just 12 out of more than 2,000 federal construction contracts covered by the Obama order. [ 7 ]
As of March 2023, through legislation or by executive order issued by the state governor, the following 25 states have active laws banning government-mandated project labor agreements on state, state-assisted and/or local taxpayer-funded construction projects to some degree: Alabama, Arizona, Arkansas, Florida, Georgia, Idaho, Iowa, Kansas, Kentucky, Louisiana, Michigan, Mississippi, Missouri, Montana, North Carolina, North Dakota, Oklahoma, South Carolina, South Dakota, Tennessee, Texas, Utah, West Virginia, Wisconsin and Wyoming. [ 46 ] [ 47 ] [ 48 ] [ 49 ] [ 50 ] [ 51 ] [ 52 ] All legal challenges to these laws have failed. [ 53 ] State policies restricting government-mandated PLAs were repealed in Maine, Minnesota, Nevada and Virginia following Democratic party trifectas in state government. [ 54 ] States with executive orders, or that have enacted legislation authorizing or encouraging the use of PLAs on public projects include California, [ 55 ] Connecticut, [ 56 ] Hawaii, [ 57 ] Illinois, [ 58 ] Maryland, [ 59 ] Maine, [ 60 ] New Jersey, [ 61 ] New York, Virginia, [ 62 ] and Washington State. [ 63 ]
The Biden administration is pushing state and local governments to require PLAs on federally assisted projects through more than $250 billion worth of federal agency grant programs that give applicants favorable treatment if they require PLAs on taxpayer-funded infrastructure projects. These grant programs flow from the Infrastructure Investment and Jobs Act, the American Rescue Plan Act and other legislation passed by Congress that authorizes and funds infrastructure spending but is silent on PLA preferences and requirements. [ 64 ] The Biden administration has also been criticized [ 65 ] for pushing PLA mandates on private construction projects receiving federal grants and tax breaks, such as the pro-PLA policy contained in the U.S. Department of Commerce CHIPS Incentives Program's Commercial Fabrication Facilities [ 66 ] Notice of Funding Opportunity. [ 67 ]
On February 27, 2023, Rep. James Comer, R-Ky., and Sen. Todd Young, R-Ind., introduced the Fair and Open Competition Act (H.R. 1209 / S. 537), [ 68 ] [ 69 ] legislation which promotes fair and open competition on contracts to build taxpayer-funded federal and federally assisted construction projects by restricting the use of government-mandated project labor agreements. [ 70 ] [ 71 ] [ 72 ] The legislation is supported [ 73 ] by a large coalition of construction and employer associations. [ 74 ] [ 75 ]
There has been much debate over government-mandated PLAs, particularly for publicly funded projects. [ 13 ] The use of project labor agreements is supported by construction unions [ 76 ] and some political figures, who state that PLAs are needed to ensure that large, complex projects are completed on schedule. [ 77 ] According to those who support the use of such agreements, PLAs enable project owners to control costs and ensure that there are no disruptions to the construction schedule, for example from strikes. [ 78 ] In particular, proponents of PLAs point to clauses in the agreements that establish labor-management problem-solving committees to deal with scheduling, quality control, health and safety, and productivity problems during the project. [ 79 ] They also state that PLAs ensure that the workforce hired has received training and is of high quality. [ 79 ] The use of PLAs in large private construction projects, such as the building of the New England Patriots ' Gillette Stadium , is given as an example of how PLAs help project owners meet tight deadlines, according to supporters. [ 77 ] In addition to the stated benefits to project owners, supporters of PLAs also say that PLA use has a positive impact on local communities, through set goals for local hiring and provision of education. [ 80 ]
A coalition of U.S. construction and employer groups, [ 81 ] including the Associated General Contractors of America (AGC), [ 82 ] Associated Builders and Contractors (ABC), [ 83 ] the Construction Industry Roundtable (CIRT), the National Federation of Independent Business (NFIB), the National Black Chamber of Commerce , and the U.S. Chamber of Commerce , [ 84 ] has actively opposed the use of PLAs, particularly those mandated by lawmakers on taxpayer-funded construction projects procured by federal, state and local government. These groups have challenged the use of such agreements through litigation, lobbying, and public relations campaigns. [ 83 ] Opponents of PLAs supported the Bush executive order prohibiting government-mandated PLAs, and have argued that between 2001 and 2008, when the executive order was in place, no federal projects suffered significant labor problems, delays or cost overruns attributable to the absence of PLAs. [ 85 ] Surveys of nonunion contractors suggest that government-mandated PLAs increase costs; reduce competition; decrease economy and efficiency in government contracting; undermine investments in workforce development programs; reduce or have no impact on project safety, quality, budget or timeliness; result in worse local hiring outcomes; and decrease hiring of women, veteran and disadvantaged business enterprises and construction workers. [ 86 ] According to those who oppose PLAs, the agreements place restrictions on the hiring and working practices of contractors, and can lead to increased costs for project owners. [ 87 ]
One of their objections to PLAs is that the agreements require contractors to obey inefficient union work rules [ 83 ] and pay into union benefits plans even if they have existing benefits plans, [ 26 ] [ 88 ] which can increase labor costs and expose contractors to additional uncertainty and financial risk in the form of multi-employer pension plan liabilities. [ 89 ] [ 90 ] In addition, they oppose the use of PLAs to restrict hiring on projects to construction workers selected by unions through union hiring halls, stating that this does not increase the quality of worker as all those who are licensed in a craft have at least the same level of education and skill, regardless of whether they belong to a union. [ 77 ] Opponents object to clauses in typical PLAs that force contractors to replace most or all of their existing employees with unfamiliar union labor from union hiring halls because it can reduce workforce safety, productivity and diversity. [ 86 ] Research has found that the limited number of nonunion craft professionals permitted to work on construction projects subject to a government-mandated PLA suffer an estimated 34% reduction in wages and benefits unless they force their existing workforce to accept union representation, pay union dues and/or join a union as a condition of employment and receiving benefits on a PLA jobsite. [ 89 ] Opponents also claim provisions in typical PLAs require contractors to hire apprentices only from apprenticeship programs affiliated with unions, which limits investments in employer and association government-registered apprenticeship programs not affiliated with unions [ 91 ] and may exacerbate skilled labor shortages within the construction industry expected to top half a million workers in 2023. [ 92 ]
Another point of debate is the proportion of construction workers who are unionized. According to opponents, under PLAs contractors must hire their construction workers through union hiring halls, [ 93 ] and unionized workers are the majority of those who work on PLA projects, despite non-union workers making up the majority of the construction workforce. [ 77 ] Estimates of the percentage of construction workers who are non-union, cited by opponents of PLAs, are around 85%, [ 94 ] based on figures from the U.S. Department of Labor Bureau of Labor Statistics, [ 95 ] and more recent data puts this figure at a record high of 88.3%. [ 96 ] This figure has been disputed by supporters of PLAs, who state that the figures given by those in opposition to PLAs are misleading and are based on census data that encompasses too broad a concept of a construction worker. [ 97 ] According to a 2010 Cornell University study cited by Mary Vogel, 60% of the construction trades in Massachusetts are unionized. [ 98 ] Vogel is the executive director of the Construction Institute, a non-profit organization dedicated to the needs of the unionized construction sector in Massachusetts. In 2021, the percentage of a state's construction workforce belonging to a construction union varied from a low of 1.8% (South Carolina) to a high of 34.5% (Illinois), with 25 states having lower unionization rates than the U.S. construction industry's average of 12.6%.
A number of politicians do not agree with the use of government-mandated PLAs for publicly funded construction projects, and have introduced bills or executive orders that prohibit using the agreements for government projects or prevent the use of public funds for projects using PLAs. [ 99 ] [ 100 ] [ 101 ]
A main argument has been the impact of PLAs on project cost. [ 102 ] Those who oppose PLAs state that the agreements impact competition for project bids, reducing the number of potential bidders as non-union contractors are less likely to bid due to the potential restrictions a PLA would pose. [ 93 ] According to opponents of the agreements, the reduced competition leads to higher bids and a higher cost for the project owner. [ 76 ] In addition, opponents argue that the cost may also be increased due to contractors having greater expenses under a PLA. For example, according to Max Lyons of the Employee Policy Foundation, the cost of a project under a PLA is increased by up to 7%, since contractors are required to pay their employees the union wage rather than the government-determined prevailing wage. [ 78 ] Opponents have also argued that there is evidence that PLA mandates add costs by forcing non-union contractors to pay into union benefit plans in addition to their existing benefit plans. [ 103 ] [ 89 ] Supporters of PLA use argue that the end cost of projects is not increased if a PLA is in place, compared to projects without such an agreement, since the agreements prevent cost overruns. [ 104 ] In response, opponents of the agreements cite examples of projects where a PLA was in place and costs overran, including Boston's Big Dig project, Safeco Field in Seattle , and the San Francisco International Airport . [ 94 ] A 2009 study of PLAs on federal construction projects, carried out by Rider Levett Bucknall to determine whether PLAs should be used in the U.S. Department of Veterans Affairs' construction projects, found that costs would increase if PLAs are used for construction projects in locations where union membership is low. According to their analysis, in areas including Denver , Colorado , New Orleans , Louisiana , and Orlando, Florida , where unions do not have a great presence, use of PLAs for projects would lead to cost increases from 5% to 9%. In two cities, San Francisco and New York City , where unions have a great presence, the study predicted mixed results regarding potential cost savings, ranging from small project cost increases to small cost savings. [ 113 ]
Opponents of PLAs state that the agreements impact competition for project bids, which can lead to higher costs. [ 78 ] It is argued by those who oppose PLAs, such as former ABC president Henry Kelly, that PLAs discourage if not prevent non-unionized contractors from competing for construction projects, particularly federal projects. [ 76 ] Competitive bidding statutes discourage public sector PLAs from discrimination between non-union and union contractors, as discrimination between bidders would typically represent a violation of such statutes. [ 114 ] [ 115 ] Non-union contractors have been awarded contracts on public sector PLA projects, for example the Boston Harbor project. [ 6 ] In the United States Supreme Court ruling on the use of a PLA for the Boston Harbor project, it was stated that project owners are within their rights to choose a contractor who is willing to enter into a pre-hire agreement, and that contractors have a choice whether or not they wish to enter such an agreement. [ 6 ] However, in a subsequent case the Supreme Court observed the following limitation on the Boston Harbor holding, "In finding that the state agency had acted as a market participant, we stressed that the challenged action "was specifically tailored to one particular job."" [ 116 ]
PLAs often require all companies to obtain their workers from union hiring halls, though the union controlling this employee referral system may not discriminate on the basis of a worker's union or non-union status. [ 117 ] It is often the case, however, that the hired employees must join a union and pay union dues, usually for the duration of the project. [ 118 ] [ 119 ] PLA opponents argue that union control of hiring prevents a non-union contractor from using its own employees and efficient work practices. [ 120 ] The increased cost to contractors, and the impact on their workers of joining a union, are said by opponents of PLAs to discourage non-union contractors from bidding on projects with a PLA. [ 121 ] For instance, a 2010 project in Ohio to build dormitories for two schools saw an increased number of bids when a PLA was no longer required, and the bid prices were 22% lower than they had been when a PLA was in place. [ 119 ]
According to supporters, PLAs can be used by public project owners like school boards or city councils to set goals for creating local jobs and achieving social welfare goals through the construction projects they apply to. [ 4 ] [ 80 ] [ 115 ] PLAs may include targeted hiring and apprenticeship ratio provisions. According to proponents, by including requirements for a certain proportion of the local workers on the construction project to enter union apprenticeship programs, PLAs can be used to help local workers gain skills. [ 4 ] The term "Community Workforce Agreement" (CWA) may be used to describe PLAs with community-focused provisions. [ 122 ] [ 123 ] Proponents state that Community Workforce Agreements re-inject the tax dollars paying for these infrastructure projects back into the communities. [ 80 ] [ 124 ] [ 125 ] Those who oppose PLAs have pointed to examples such as the construction of Yankee Stadium and the Washington Nationals Ballpark , for both of which community-focused agreements were in place but the goals for local hiring and for resources to be provided to the community were not met. [ 126 ] [ 127 ] [ 128 ] According to a report for the DC Sports & Entertainment Commission, the PLA for the Nationals Ballpark failed to meet its three main goals: local workers performing 50% of journeyman hours, apprenticeships being provided only to city residents, and apprentices carrying out 25% of the work hours on the project. [ 128 ] According to groups such as ABC, since PLAs require that workers are hired through the unions and there are far fewer union workers, meeting local hiring goals can be impossible. [ 126 ]
A number of women and minority contractor groups oppose project labor agreements, [ 84 ] arguing that PLAs disproportionately impact small businesses, particularly those owned by women and minorities. These groups argue that PLAs are anti-free-market and discriminatory. [ 129 ] [ 130 ] In particular, groups including the National Association of Women Business Owners, have voiced their opposition to PLAs, and in 1998, there was a House hearing dedicated to the issue of minority groups' opposition to government-mandated PLAs. [ 131 ] The National Black Chamber of Commerce opposes the use of PLAs due to the low numbers of black union members in the construction industry. According to the NBCC, implementing PLAs leads to discrimination against black workers who are generally non-union workers and also prevents contractors from using casual laborers. [ 132 ] [ 133 ] According to the United States Pan-Asian American Chamber of Commerce, the majority of their membership comprises small businesses that are unfairly impacted by PLAs, particularly due to increased costs and lowered employee benefits. [ 134 ]
A number of studies and reports have been published, aiming at identifying the impact of PLAs. In addition to academic research, reports have been produced by government agencies and individuals on behalf of state or federal government. In 1998 the Government Accountability Office produced a report on PLAs that noted an overall lack of data but reported that both “proponents and opponents of the use of PLAs said it would be difficult to compare contractor performance on federal projects with and without PLAs because it is highly unlikely that two such projects could be found that were sufficiently similar in cost, size, scope, and timing.” The GAO report concluded that it would be difficult to draw "any definitive conclusions" on the impact of PLAs on performance. [ 135 ] More recent reports include a favorable study of PLAs from the Cornell University School of Industrial and Labor Relations , in 2009, [ 4 ] reports produced by the Beacon Hill Institute since 2003, which conclude that PLAs increase costs of projects, [ 136 ] and an analysis published by the National University System Institute for Policy Research, which found that PLAs increased the cost of school construction in California.
In addition to studies examining the use of PLAs and their impact, reports are available detailing the history of PLA use and the arguments for and against their use. Reports examining the history of PLA use, include a 2001 California State Library report, compiled for California State Senate , which recounts the history of PLAs in California and uses case studies to examine the features of public and private PLAs. [ 137 ] In a 2001 University of Pennsylvania Journal of Law article, the author outlines the arguments on either side of PLAs and evaluates the state of the law since the 1993 Boston Harbor case decision. The article finds that while there are benefits to PLA use, they can present risks and should only be allowed on projects where they will further the goals of competitive bidding statutes, namely timely, efficient, high quality, and inexpensive construction. [ 138 ]
Studies have found that PLAs offer benefits to project owners and local communities, and do not disadvantage nonunion contractors and employees. A 2009 study by Fred B. Kotler, J.D., associate director of the Cornell University School of Industrial and Labor Relations found that there is no evidence that PLAs discriminate against employers and workers, limit the pool of bidders, and raise construction costs. [ 139 ] In a 2009 report by Dale Belman, of Michigan State University; Matthew M. Bodah of the University of Rhode Island and Peter Philips of the University of Utah, the authors stated that rather than increase cost, the agreements provide benefits to the community. According to their report, project cost is directly related to the complexity of a project, not the existence of an agreement. They found that PLAs are not suited to all projects, but some projects are good candidates for their use, such as highly complex construction projects. [ 140 ] Studies have also considered how PLAs may benefit communities through hiring locals. In a paper focused on whether PLAs for projects developed by the Los Angeles Community College District (LACCD), the Los Angeles Unified School District (LAUSD), and the City of Los Angeles met local hiring goals, the author found that the goal of 30% local hires set by the PLAs was met. [ 141 ]
Some reports and studies addressing the cost impact of PLAs on construction projects have found that they may not lead to greater costs. A 2002 paper by the Harvard University Joint Center for Housing Studies states that the increased costs cited by opponents of PLAs are based on bids rather than end costs; according to the paper, a project's end costs are usually higher than its bid costs due to expenses that arise during construction. [ 11 ] In addition, a 2004 report by the Director of General Services for Contra Costa County, California reported that bids for five of eight projects subject to a PLA were lower than the architect/engineer cost estimate. [ 142 ] In 2004, a report on the use of PLAs in Iowa stated that PLA use increases the efficiency and cost-effectiveness of construction projects: "Public-sector PLAs on complex projects or projects where timely project completion is important have been shown to provide the performance desired by contractors and project managers, who repeatedly use them." [ 143 ] A 2009 paper concluded that it was difficult to identify the effect of PLAs on the cost of school construction, due to the differences between schools built with PLAs and those built without them. The report stated that there was no statistically significant evidence of an increase in costs for school construction. [ 144 ]
Reports on the legal considerations affecting PLAs make the case that PLAs are an effective tool for labor relations. [ 145 ] In a report in 1999, on the legality of PLAs, the authors stated that PLAs "serve as a productive and stabilizing force in the construction industry.” [ 146 ] This is supported by a UCLA study that challenged findings of the Beacon Hill Institute on PLAs, which found that in the private sector, the usage of PLAs "creates continuity and stability of the work force at the job site". [ 147 ]
An August 2021 study by the Rand Corporation found that government-mandated PLAs on affordable housing projects in Los Angeles increased construction costs by 14.5% and approximately 800 additional units (an increase of 11% more housing units) could have been produced without a government-mandated PLA policy. [ 105 ]
Multiple studies examining the impact of government-mandated PLAs on school construction costs in Connecticut (2020), [ 106 ] New Jersey (2019), [ 107 ] and Ohio (2017) [ 108 ] by the Beacon Hill Institute found that PLA projects experienced increased costs by up to 20% compared to similar non-PLA school projects also subject to state prevailing wage laws, echoing findings from previous research on the cost of PLAs on school construction conducted in Massachusetts (2003), Connecticut (2004) and New York (2006). [ 109 ] [ 110 ] [ 111 ] A report on PLAs by BHI, published in 2009, examined whether claims made in Obama's executive order that PLAs have a positive economic impact are correct. The report considered the findings of the institute's studies, further case studies of PLA and non-PLA projects and addressed criticisms of their previous studies and concluded that the justifications for PLA use in the executive order were not proven. In particular the report concluded that there was no economic benefit to taxpayers in using PLAs. [ 136 ]
An independently reviewed 2011 study by The National University System Institute for Policy Research analyzed the cost impact of PLAs on school construction in California from 1996 to 2008. [ 148 ] The study analyzed 551 school construction projects and is reportedly the largest study of PLAs to have been undertaken to date. [ 149 ] It found that the use of PLAs added between 13% and 15% to construction costs, which would represent a cost increase of between $28.90 and $32.49 per square foot when adjusted for inflation. [ 150 ] However, this study's conclusions were strongly disputed by Dr. Dale Belman of Michigan State University, a long-time proponent of the use of PLAs and whose prior research it referenced repeatedly, and who claimed the study misrepresented his findings. He wrote the authors: "Although your study has several serious statistical issues, at the end of the day, your results are basically consistent with those presented in my article on PLAs and Massachusetts school construction costs. The take-away from your results can be summarized as follows: When appropriate controls are included for differences in the characteristics of schools built including school type and location, building specifications, materials used etc., there is no statistical evidence that PLA schools are more costly compared to non PLA schools." The study authors point out in the report that they employed robust regression methods to account for variances in school construction materials/techniques and location. Robust regression is a statistical technique that is used in conjunction with predictive models when the data set lacks normal distribution, or when there are substantive outliers that may skew the results from a standard regression test. In a robust regression analysis, the influence of outliers is down-weighted, allowing more statistical relationships to appear in the results.
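The robust regression approach described above can be illustrated with a short example. The sketch below is not the NUSIPR study's actual model or data; it uses synthetic per-square-foot cost figures and the statsmodels library purely to show how a Huber-weighted robust regression down-weights outlying projects relative to ordinary least squares.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: cost per square foot as a function of project size and a PLA indicator.
rng = np.random.default_rng(0)
n = 200
sqft = rng.uniform(50_000, 400_000, n)          # hypothetical project sizes (sq ft)
pla = rng.integers(0, 2, n)                      # 1 if the project is assumed to use a PLA
cost = 250 + 0.0002 * sqft + 30 * pla + rng.normal(0, 15, n)
cost[:5] += 300                                  # inject a few extreme outliers

X = sm.add_constant(np.column_stack([sqft, pla]))

# Ordinary least squares: every observation, including outliers, gets equal weight.
ols = sm.OLS(cost, X).fit()

# Robust regression with a Huber norm: outliers receive weights below 1,
# so they influence the fitted coefficients less.
rlm = sm.RLM(cost, X, M=sm.robust.norms.HuberT()).fit()

print("OLS PLA coefficient:    %.2f" % ols.params[2])
print("Robust PLA coefficient: %.2f" % rlm.params[2])
print("Smallest robust weights:", np.round(np.sort(rlm.weights)[:5], 2))
```

In this toy setup the robust fit reports weights well below 1 for the injected outliers, so they contribute less to the estimated PLA coefficient than they would under a standard regression, which is the behaviour the study's authors describe.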
In 2010, the New Jersey Department of Labor studied the impact of government-mandated PLAs on the cost of school construction in New Jersey during 2008, and found that school construction projects where a PLA was used had 30.5% higher costs, per square foot, than those without a PLA. [ 112 ]
Earlier studies also found increased costs when PLAs were used, including a study in 2000 of a Nevada Water Authority project PLA, which found that the project cost an additional $200,000 because the true low bidder refused to sign the PLA. The project then went to a union contractor whose bid was $200,000 higher. [ 151 ] Also in 2000, a study commissioned by the Jefferson County, New York Board of Legislators examining the potential use of a PLA for the Jefferson County Courthouse Complex concluded that a PLA could result in additional costs of more than $955,000. The total estimated increase of costs for the projects, should a PLA be used, would have represented 7% of the total cost of the project. [ 152 ]
In addition to increased project costs, studies have found that PLAs can lead to greater costs for nonunion contractors and can lower their employees' take-home pay. Studies published in 2009 [ 103 ] and 2021 [ 89 ] by John R. McGowan of Saint Louis University found that nonunion workers on government-mandated PLA projects receive reduced wages compared with what they would receive for work on a non-PLA project. In addition, nonunion employers may have to pay for additional benefits for which their employees would be ineligible, and might be liable for pension fund withdrawal costs if the terms of the PLA require them to contribute to a union pension fund for the duration of the project. [ 103 ] [ 89 ]
PLAs also may impact competition by discouraging nonunion bidders, according to surveys of contractors [ 86 ] and studies including a September 2001 study by Ernst & Young, commissioned by Erie County, New York . This study analyzed the impact of PLAs on public construction projects and concluded that the number of bidders was reduced for projects with a PLA, as "the use of PLAs strongly inhibits participation in public bidding by non-union contractors." [ 153 ] The Worcester Municipal Research Bureau produced a report in 2001, based on a number of studies of PLA use. The report stated that PLAs reduced the number of bidders on construction projects, and led to lower savings than would be possible where contractors are able to work under their usual arrangements for employees. [ 154 ] In March 1995, an ABC study of the taxpayer costs for Roswell Park Comprehensive Cancer Center in Buffalo, New York, assessed bids for the same project both before and after a PLA was temporarily imposed in 1995. It revealed that there were 30% fewer bidders to perform the work and that costs increased by more than 26%. [ 155 ]
In terms of wider economic impact, a November 2000 Price Waterhouse Coopers study requested by the Los Angeles Unified School District was unable to confirm whether the project stabilization/labor agreement for the district's Proposition BB construction had produced either a positive or negative economic impact. [ 156 ] In March 2006, the Public Interest Institute released a study concluding that the PLA agreed for the construction of the Iowa Events Center project in downtown Des Moines placed an "unnecessary burden" on local workers, businesses and taxpayers. [ 157 ] | https://en.wikipedia.org/wiki/Project_Labor_Agreement |
The Khalije Fars ( Persian Gulf ) class is a destroyer class that was previously designated as a training ship class.
In terms of size and armament the vessel is the first true destroyer built by Iran.
Many experts believe that it was described as a training vessel in order to conceal its real purpose.
The class is five times larger than the Jamaran , implying a displacement of at least 5,500 tons standard and 7,000–7,500 tons at full load. A model that appeared in November 2014 shows a radar mounted atop the destroyer.
Construction began on November 11, 2019.
The Loghman Project (Persian Gulf Training Ship) is a training ship that displaces 5,000 tons, is 135 meters long and 16 meters wide, has a draft of 4.7 meters, and is capable of sailing 8,000 nautical miles. [ 3 ]
Khalij-e Fars is the lead ship of Project Loghman and an upcoming training ship / destroyer of the Islamic Republic of Iran Navy currently under construction. [ 2 ] | https://en.wikipedia.org/wiki/Project_Loghman |
Project MinE is an independent large-scale whole genome research project that was initiated by two patients with amyotrophic lateral sclerosis and launched on World ALS Day, June 21, 2013. [ 1 ]
The symptoms of amyotrophic lateral sclerosis are caused by degeneration of motor nerve cells ( motor neurons ) in the spinal cord , brainstem , and motor cortex . The exact cause of this degeneration is unknown but it is thought that environmental exposures and genetic factors play a role in susceptibility to the disease. In 5-10% of patients the family history is positive for ALS. However, it is not always possible to establish the mode of inheritance in each pedigree and not all familial cases may suffer from a genuine Mendelian or monogenic disorder . Autosomal-dominant mutations in the C9orf72 and the SOD1 gene are found in a substantial number of familial ALS cases. Mutations in other genes (such as VAPB [2], ANG, TARDBP and FUS ) have been reported, but are found at a much lower frequency and with variable penetrance, suggesting the involvement of other genes.
Project MinE is a research project to systematically interrogate the human genome for both common and rare genetic variation in ALS (genetic "data mining" explains the project name). The project consists of two phases, combining a genome-wide association study (GWAS) with whole genome sequencing .
The long-term benefit of the approach taken for Project MinE is the priceless catalogue of many non-ALS whole genomes that can be used to investigate other human diseases, including Diabetes Mellitus, [ 2 ] some types of cancer, and other neurological disorders . [ 3 ] [ 4 ] Project MinE is the largest genetic study of amyotrophic lateral sclerosis worldwide. The work started in the second quarter of 2013 and is a unique international collaboration between scientists, industry, social foundations and patients. On July 25, 2016, the first results were published in two publications in Nature Genetics , leading to the discovery of NEK1 and C21orf2 as new ALS risk genes. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Project_MinE |
Project Narwhal is the name of a computer program used by the 2012 campaign by Barack Obama . [ 1 ] It was contrasted in the Mitt Romney presidential campaign by Project Orca , so named because the orca is one of the few predators of the narwhal . [ 2 ]
Project Narwhal was developed over six to seven days a week, 14 hours a day, by a staff of highly experienced workers from companies such as Twitter , Google , Facebook , Craigslist , Quora , Orbitz , and Threadless . [ 3 ] The intent of the program was to link previously separate repositories of information, so that all the data gathered about each individual voter would be available to all arms of the campaign. In testing Narwhal, the team, in campaign CTO Harper Reed 's words, role-played "every possible disaster situation," including three role-plays in which all the systems would go down very quickly on election day. [ 3 ] These "game day" practices prepared them for actual disasters when Amazon Web Services went down on October 21, 2012, and Hurricane Sandy threatened the technology infrastructure in the Eastern United States. [ 3 ]
| https://en.wikipedia.org/wiki/Project_Narwhal |
Project National Glory ( Chinese : 國光計劃 ) or Project Guoguang was a planned attempt by the Republic of China (ROC), whose government had retreated to Taiwan after losing the Chinese Civil War , to reconquer mainland China from the People's Republic of China (PRC) by large-scale amphibious invasion . It was the most elaborate of the ROC's plans or studies to invade the mainland after 1949. Guoguang was initiated in 1961 in response to events involving the PRC, particularly the Great Leap Forward and the Sino-Soviet split . Guoguang was never executed; it required more troops and matériel than the ROC could muster, and it lacked support from the United States , especially after the Communists developed nuclear weapons and the disastrous naval skirmishes at Dongshan and East Chongwu . The use of a large-scale invasion as the initial stage of reunification was effectively abandoned after 1966, although the Guoguang planning organization was not abolished until 1972. The ROC did not abandon the policy of using force for reunification until 1990.
In 1949, the ROC retreated from the mainland to Taiwan. [ 1 ] Despite the defeat, Chiang Kai-shek remained committed to recovering the mainland. [ 2 ] [ 3 ] [ 1 ] The ROC took steps to stabilize its position and prepare for a future war. [ 4 ] The armed forces (ROCAF) undertook reforms. The conscription system was modified to produce a reserve. [ 5 ] [ 4 ] Former Japanese soldiers, the White Group, contributed to planning and personnel training. The provision of military aid from the US was formalized with the Sino-American Mutual Defense Treaty . By the end of the 1950s, the ROCAF was an effective defensive force. [ 6 ] [ 5 ] General indoctrination and anti-PRC propaganda were widespread. [ 6 ] Particular attention was paid to the sinicization of the native Taiwanese population in order to support conscription and mobilization. [ 7 ] In March 1956, plans envisioned the mobilization of 730,000 men between the ages of 21 and 35. [ 8 ]
The ROC also planned and sought opportunities to attack the PRC. [ 9 ] [ 6 ] From 1951 to 1954, the ROC's irregular Anti-Communist Salvation Army — trained by the US Central Intelligence Agency — raided the PRC coast from ROC-controlled islands near the mainland. [ 6 ] The ROC's offer to attack the PRC during the Korean War — while the PRC's attention was diverted — was declined by the US. [ 5 ] Other options that were considered were a regional campaign on the PRC- Myanmar border, [ 9 ] and guerrilla involvement in the Vietnam War as a supporting diversion for a ROC invasion across the Taiwan Strait . [ 5 ]
By 1961, the ROC assessed that conditions were becoming favorable. Project Guoguang later identified that the ideal time to attack was when the PRC was embroiled in political strife, or at war with rebels or neighbouring countries. [ 10 ] The PRC was suffering internal unrest from the Great Leap Forward [ 11 ] [ 12 ] and the ongoing Sino-Soviet split was also considered to be advantageous. [ 5 ]
Project Guoguang was established on 1 April 1961. [ 11 ] [ 12 ] The ROCAF created a staff, the Guoguang Operation Office, [ 9 ] that was the primary supervisory body for invasion planning and preparations; [ 13 ] it was led by a lieutenant general and reported directly to Chiang Kai-shek. [ 9 ] The government also prepared in the first half of 1962; it created organizations for wartime mobilization and administration, the Special Defence Budget, and a new tax — the Special Defence Levy — that would be collected until June 1963. These government developments were noticed by foreign observers. [ 10 ] ROC agents and paramilitary forces along the coast shifted from gathering intelligence to probing attacks in 1962 and 1963. [ 14 ] In April 1964, Chiang Kai-shek ordered [ 15 ] the construction of a headquarters, including air raid shelters, behind his residence at Cihu . [ 16 ]
The war plan was divided into multiple phases, with Phase I being the initial surprise amphibious assault of Xiamen in Fujian province. The ROC island of Jinmen would be the forward operating base . PRC reinforcements were expected to arrive five days before the landing, so any landing would meet an immediate counterattack. It was estimated the landing would require 270,000 troops — about a third of mobilized strength — and suffer 50,000 casualties. [ 10 ] [ 17 ] After the landing, the ROC would advance by covertly fomenting, or taking advantage of unrest in the PRC. However, this was too vague for planning purposes, and so detailed planning did not proceed beyond Phase I. [ 10 ] Planners recognized that even Phase I was a difficult proposition; it stretched available manpower and exceeded available sea and airlift for troops and logistics . [ 17 ]
The ROC also sought US support as a necessary precondition for war, [ 10 ] and which could make up for the transport [ 12 ] and logistics shortfalls. [ 17 ] However, the US opposed the resumption of warfare in China; it communicated this through diplomatic channels, [ 10 ] [ 12 ] and by overtly surveilling the ROC's preparations through the Military Assistance Advisory Group . [ 10 ] This prompted the ROC to put the invasion on hold. [ 17 ]
Chiang Kai-shek decided to proceed without US approval following the PRC's first successful nuclear weapon test in October 1964; on 17 June 1965, he notified officers at a meeting at the ROC Military Academy that the invasion was imminent. [ 14 ] A final decision was to be made on 20 July. Mobilized officers, and personnel deployed to Kinmen, were required to have a will and testament as part of preparations. [ 8 ] Ultimately, no invasion was launched [ 8 ] but the year was marred by accidents and defeats.
On 24 June, an amphibious landing exercise [ 14 ] in southern Taiwan [ 15 ] caused the deaths of over ten soldiers when strong waves overturned five amphibious assault vehicles. [ 14 ]
On 6 August at the Battle of Dongshan , two ROC Navy (ROCN) warships carrying troops to conduct a reconnaissance of the mainland were intercepted and sunk by People's Liberation Army Navy torpedo boats near Dongshan Island , Fujian; 200 ROC personnel were killed. [ 15 ] [ 14 ] The warships lacked air support; the ROC Air Force had been unaware of the mission due to a communication error. [ 14 ]
In November at the Battle of East Chongwu , the ROCN warships Shan Hai and Lin Huai were intercepted by the PLAN while en route between the islands of Magong and Wuqiu to pick up wounded troops. Lin Huai was sunk by two torpedoes, and 90 ROC personnel were killed. [ 15 ]
In September 1965, the ROC offered to aid the US in the Vietnam War by invading Guangdong province. The offer was rejected as the US was then attempting to limit the war and did not want to expand the conflict (at this time intervention in Cambodia and Laos was not US policy). [ 12 ]
The start of the PRC's Cultural Revolution in 1966 prompted the ROC to review its plans in anticipation of exploiting unrest on the mainland. In 1967, Chiang Kai-shek was confident that the instability overtaking the PRC — including in the government and military — would not be short-lived. The ROC again — and for the last time — sought US aid for an invasion; the request was rejected. [ 18 ]
The Guoguang Operation Office was renamed as the Operation Planning Office in 1966. [ 13 ]
The acceptance of US non-involvement and the replacement of US military aid by Foreign Military Sales forced the ROC to reassess its strategy. Economic development — upon which military preparedness would depend — now had to be considered. [ 19 ] Chiang Ching-kuo believed that success required a popular and armed anti-Communist revolution (" Hungary-style ") in the PRC — which an ROC invasion could then support — and significant changes to the international environment. [ 18 ] [ 20 ] The new strategy was to build an economy to support offensive operations while encouraging revolution in the PRC with psychological warfare and propaganda. [ 19 ] Initially, the ROCAF studied the "Wang-shih" plan which used special forces to infiltrate the PRC and incite rebellion. [ 18 ] [ 20 ] In response to the new strategy, the ROCAF adopted the defensive "Ku-an" plan while offensive preparations continued. [ 19 ]
Chiang Ching-kuo's control over policy began to increase in 1969. He was appointed as Vice Premier in July 1969, [ 19 ] [ 13 ] and then Premier on 1 June 1972. Crucially, Chiang Kai-shek suffered a road accident in September 1969 [ 19 ] after which he gradually receded from politics as his health declined. [ 13 ] Between 1969 and 1972, the international position of the ROC changed radically due to the normalization of relations between the US and the PRC. The Nixon Doctrine [ 19 ] [ 21 ] and announcement of the withdrawal of US troops from Taiwan in the Shanghai Communiqué [ 13 ] demanded that the ROC pay more attention to defense. [ 22 ] Chiang Ching-kuo appreciated this, although he continued to support — at least in principle — an eventual offensive. [ 21 ] [ 23 ]
For a time, the ROC may have abandoned the expectation of mounting a large scale attack on the PRC; the Operation Planning Office was abolished on 20 July 1972. Attention shifted to the "Wang-shih" plan, [ 13 ] which was revised in 1976 on the unrealized possibility of exploiting the death of Mao Zedong . [ 23 ] However, in 1987 the nominally defensive "Ku-an" plan gained an — ultimately incomplete — section pertaining to attacking the mainland based on the strategic concept that "the principal battlefield is the mainland, and the secondary battlefield is Taiwan." [ 24 ] The ROCAF remained organized as an offensive force, [ 25 ] and paratroops were trained to support an offensive. [ 24 ]
Lee Teng-hui served as acting President after Chiang Ching-kuo's death in 1988, [ 24 ] then formally assumed that office in 1990. He immediately abandoned the policy of pursuing the reunification of China through force, [ 25 ] which allowed the ROCAF to adopt a fully defensive posture starting in 1991. [ 26 ]
In April 2009 it was announced that secret documentation for Project Guoguang would be declassified and displayed at the Cihu Mausoleum starting in May 2009. [ 15 ] [ 16 ]
| https://en.wikipedia.org/wiki/Project_National_Glory |
Negin ( Persian : نگین , lit. 'gemstone') is the tentative title of an upcoming class of warship designed by Iran , unveiled in November 2019. [ 1 ] [ 2 ] The design of the vessel resembles a littoral combat ship (LCS) in American terminology, [ 3 ] though Iran has identified it as a 'heavy destroyer'. [ 4 ]
The vessels in the class are to displace between 5,000 and 7,000 tons, according to what Iranian officials told the press in November 2019. [ 1 ] In April 2020, Iran announced that the design phase had been concluded and that construction of the lead ship would begin shortly. [ 4 ]
The project is described as an attempt to improve blue-water capabilities of the Islamic Republic of Iran Navy , by Farzin Nadimi of The Washington Institute for Near East Policy . [ 2 ] Military journalist David Axe is skeptical that Iran can build such a warship. [ 5 ] | https://en.wikipedia.org/wiki/Project_Negin |
Project Orion was a study conducted in the 1950s and 1960s by the United States Air Force , DARPA , [ 1 ] and NASA into the viability of a nuclear pulse spaceship that would be directly propelled by a series of atomic explosions behind the craft. [ 2 ] [ 3 ] Following preliminary ideas in the 1940s, [ 4 ] and a classified paper co-authored by physicist Stanisław Ulam in 1955, [ 5 ] ARPA agreed to sponsor and fund the program in July 1958. [ 6 ] [ 7 ]
Early versions of the vehicle were designed for ground launch , but later versions were intended for use only in space. The design effort took place at General Atomics in San Diego, [ 5 ] and supporters included Wernher von Braun , [ 8 ] who issued a white paper advocating the idea. [ 2 ] [ 9 ] NASA also created a Mars mission profile based on the design, proposing a 125 day round trip carrying eight astronauts with a predicted development cost of $1.5 billion. [ 8 ] Non-nuclear tests were conducted with models, with the most successful test occurring in late 1959, [ 7 ] but the project was ultimately abandoned for reasons including the 1963 Partial Test Ban Treaty , [ 10 ] which prohibited nuclear explosions in space amid concerns over radioactive fallout . [ 2 ]
Physicists Ted Taylor and Freeman Dyson led the project, and Taylor has been described as the "driving force behind Orion". [ 6 ] In 1979, General Dynamics donated a 26-inch tall (66 cm) wooden model of the craft to the Smithsonian , which displays it at the Steven F. Udvar-Hazy Center in Fairfax County , Virginia . [ 10 ]
Physicist Stanislaw Ulam proposed the general idea of nuclear pulse propulsion in 1946, [ 4 ] and preliminary calculations were made by Frederick Reines and Ulam in a Los Alamos memorandum dated 1947. [ 4 ] [ 3 ] [ 11 ] In August 1955, Ulam co-authored a classified paper proposing the use of nuclear fission bombs, "ejected and detonated at a considerable distance", for propelling a vehicle in outer space. [ 5 ] [ 4 ] The project was led by Ted Taylor at General Atomics and physicist Freeman Dyson who, at Taylor's request, took a year away from the Institute for Advanced Study in Princeton to work on the project. [ 6 ]
In July 1958, ARPA agreed to sponsor Orion at an initial level of $1 million per year, at which point the project received its name and formally began. [ 6 ] [ 7 ] The agency granted a study of the concept to the General Dynamics Corporation , [ 10 ] but decided to withdraw support in late 1959. [ 6 ] The U.S. Air Force agreed to support Orion if a military use was found for the project, and the NASA Office of Manned Spaceflight also contributed funding. [ 5 ] The concept investigated by the government used a blast shield and shock absorber to protect the crew and convert the detonations into a continuous propulsion force. [ 12 ] [ 13 ] The most successful model test, in November 1959, reached roughly 100 meters in altitude with six sequenced chemical explosions. [ 7 ] NASA also produced a Mars mission profile for a 125 day round trip with eight astronauts, at a predicted development cost of $1.5 billion. [ 8 ] Orion was canceled in 1964, after the United States signed the Partial Test Ban Treaty the prior year; the treaty greatly reduced political support for the project. [ 10 ] [ 8 ] NASA had also decided, in 1959, that the civilian space program would be non-nuclear in the near-term. [ 6 ]
The Orion concept offered both high thrust and high specific impulse , or propellant efficiency: an I sp of about 2,000 seconds under the original design and perhaps 4,000 to 6,000 seconds under the Air Force plan, with a later 1968 fusion bomb proposal by Dyson potentially increasing this to more than 75,000 seconds, enabling velocities of 10,000 km/s. [ 8 ] A moderate-sized nuclear device was estimated, at the time, to produce about 5 or 10 billion horsepower. [ 6 ] [ 14 ]
The extreme power of the nuclear explosions, relative to the vehicle's mass, would be managed by using external detonations, although an earlier version of the pulse concept did propose containing the blasts in an internal pressure structure, with one such design prepared by The Martin Company . [ 8 ] [ 6 ] As a qualitative power comparison, traditional chemical rockets , such as the Saturn V that took the Apollo program to the Moon, produce high thrust with low specific impulse, whereas electric ion engines produce a small amount of thrust very efficiently. Orion, by contrast, would have offered performance greater than the most advanced conventional or nuclear rocket engines then under consideration. Supporters of Project Orion felt that it had potential for cheap interplanetary travel . [ 15 ]
From Project Longshot to Project Daedalus , Mini-Mag Orion , and other proposals which analyze thermal power dissipation, the principle of external nuclear pulse propulsion to maximize survivable power has remained common among serious concepts for interstellar flight without external power beaming and for very high-performance interplanetary flight. [ 8 ] Such later proposals have tended to modify the basic principle by envisioning equipment driving detonation of much smaller fission or fusion pellets, in contrast to Project Orion's larger nuclear pulse units (full nuclear bombs).
The Orion nuclear pulse drive combines a very high exhaust velocity, from 19 to 31 km/s (12 to 19 mi/s) in typical interplanetary designs, with meganewtons of thrust. [ 17 ] Many spacecraft propulsion drives can achieve one of these or the other, but nuclear pulse rockets are the only proposed technology that could potentially meet the extreme power requirements to deliver both at once (see spacecraft propulsion for more speculative systems).
Specific impulse ( I sp ) measures how much thrust can be derived from a given mass of fuel, and is a standard figure of merit for rocketry. For any rocket propulsion, the kinetic energy of the exhaust rises with the square of its velocity ( kinetic energy = 1 / 2 mv 2 ), whereas momentum and thrust rise only linearly with velocity ( momentum = mv ), so obtaining a particular level of thrust (such as a given number of g of acceleration) requires far more power as the design exhaust velocity and I sp are increased. (For instance, the most fundamental reason that electric propulsion systems of high I sp tend to be low thrust is their limited available power: thrust is inversely proportional to I sp if the power going into the exhaust is held constant or at its limit from heat dissipation or other engineering constraints.) [ 18 ] The Orion concept detonates nuclear explosions externally at a rate of power release which is beyond what nuclear reactors could survive internally with known materials and design.
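A rough numerical sketch of that power argument, using the ideal-exhaust relation P = ½·F·v_e; the 1 GW power figure and the three I sp values below are arbitrary examples, not figures from the project:

```python
# Idealized tradeoff: at fixed jet power P = 0.5 * F * v_e, thrust falls as I_sp rises.
G0 = 9.81  # standard gravity, m/s^2

def thrust_at_fixed_power(power_w, isp_s):
    v_exhaust = isp_s * G0            # exhaust velocity implied by the specific impulse
    return 2.0 * power_w / v_exhaust  # newtons

power = 1.0e9  # assume 1 GW of jet power for comparison purposes
for isp in (450, 2_000, 10_000):      # chemical-like, Orion-like, ion-drive-like I_sp
    kn = thrust_at_fixed_power(power, isp) / 1_000.0
    print(f"I_sp = {isp:6,d} s -> thrust ≈ {kn:9.1f} kN")
```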
Since weight is no limitation, an Orion craft can be extremely robust. An uncrewed craft could tolerate very large accelerations, perhaps 100 g . A human-crewed Orion, however, must use some sort of damping system behind the pusher plate to smooth the near instantaneous acceleration to a level that humans can comfortably withstand – typically about 2 to 4 g .
The high performance depends on the high exhaust velocity, in order to maximize the rocket's force for a given mass of propellant. The velocity of the plasma debris is proportional to the square root of the change in the temperature ( T c ) of the nuclear fireball. Since such fireballs typically achieve ten million degrees Celsius or more in less than a millisecond, they create very high velocities. However, a practical design must also limit the destructive radius of the fireball. The diameter of the nuclear fireball is proportional to the square root of the bomb's explosive yield.
The shape of the bomb's reaction mass is critical to efficiency. The original project designed bombs with a reaction mass made of tungsten . The bomb's geometry and materials focused the X-rays and plasma from the core of nuclear explosive to hit the reaction mass. In effect each bomb would be a nuclear shaped charge .
A bomb with a cylinder of reaction mass expands into a flat, disk-shaped wave of plasma when it explodes. A bomb with a disk-shaped reaction mass expands into a far more efficient cigar-shaped wave of plasma debris. The cigar shape focuses much of the plasma to impinge onto the pusher-plate. [ 19 ] For greatest mission efficiency the rocket equation demands that the greatest fraction of the bomb's explosive force be directed at the spacecraft, rather than being spent isotropically .
The maximum effective specific impulse, I sp , of an Orion nuclear pulse drive generally is equal to:
I s p = C 0 ⋅ V e g n {\displaystyle I_{sp}={\frac {C_{0}\cdot V_{e}}{g_{n}}}}
where C 0 is the collimation factor (what fraction of the explosion plasma debris will actually hit the impulse absorber plate when a pulse unit explodes), V e is the nuclear pulse unit plasma debris velocity, and g n is the standard acceleration of gravity (9.81 m/s 2 ; this factor is not necessary if I sp is measured in N·s/kg or m/s). A collimation factor of nearly 0.5 can be achieved by matching the diameter of the pusher plate to the diameter of the nuclear fireball created by the explosion of a nuclear pulse unit.
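A minimal numeric sketch of the expression above; the collimation factor of 0.5 follows the plate-matching argument just given, while the plasma debris velocity is an assumed illustrative value rather than a project figure:

```python
# Effective specific impulse of a pulse unit: I_sp = C0 * Ve / g_n.
G_N = 9.81  # standard gravity, m/s^2

def pulse_unit_isp(collimation_factor, debris_velocity_ms):
    return collimation_factor * debris_velocity_ms / G_N  # seconds

# Example: half of the debris hits the plate, debris velocity assumed to be 150 km/s.
print(f"I_sp ≈ {pulse_unit_isp(0.5, 150_000.0):,.0f} s")
```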
The smaller the bomb, the smaller each impulse will be, so the higher the rate of impulses and the greater the number needed to achieve orbit. Smaller impulses also mean less g -shock on the pusher plate and less need for damping to smooth out the acceleration.
The optimal Orion drive bomblet yield (for the human crewed 4,000 ton reference design) was calculated to be in the region of 0.15 kt, with approx 800 bombs needed to orbit and a bomb rate of approx 1 per second. [ 20 ]
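Those launch figures can be sanity-checked with simple arithmetic, treating the roughly 9–10 km/s usually quoted for reaching low Earth orbit (including gravity and drag losses) as a given; the exact delta-v value used here is an assumption, and the rocket equation's changing mass is ignored:

```python
# Back-of-envelope check on the 800-pulse launch figure quoted above.
delta_v_to_orbit = 9_500.0   # m/s, assumed total requirement including losses
pulses_to_orbit = 800        # pulse units quoted for the 4,000 ton reference design
pulse_rate_hz = 1.0          # roughly one explosion per second

dv_per_pulse = delta_v_to_orbit / pulses_to_orbit
boost_minutes = pulses_to_orbit / pulse_rate_hz / 60.0
print(f"≈ {dv_per_pulse:.0f} m/s gained per 0.15 kt pulse unit")
print(f"≈ {boost_minutes:.0f} minutes of powered ascent at one pulse per second")
```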
The following can be found in George Dyson 's book. [ 19 ] The figures for the comparison with Saturn V are taken from this section and converted from metric (kg) to US short tons (abbreviated "t" here).
In late 1958 to early 1959, it was realized that the smallest practical vehicle would be determined by the smallest achievable bomb yield. The use of 0.03 kt (sea-level yield) bombs would give vehicle mass of 880 tons. However, this was regarded as too small for anything other than an orbital test vehicle and the team soon focused on a 4,000 ton "base design".
At that time, the details of small bomb designs were shrouded in secrecy. Many Orion design reports had all details of bombs removed before release. Contrast the above details with the 1959 report by General Atomics, [ 24 ] which explored the parameters of three different sizes of hypothetical Orion spacecraft.
The biggest design above is the "super" Orion design; at 8 million tons, it could easily be a city. [ 25 ] In interviews, the designers contemplated the large ship as a possible interstellar ark . This extreme design could be built with materials and techniques that could be obtained in 1958 or were anticipated to be available shortly after.
Most of the three thousand tons of each of the "super" Orion's propulsion units would be inert material such as polyethylene , or boron salts, used to transmit the force of the propulsion units detonation to the Orion's pusher plate, and absorb neutrons to minimize fallout. One design proposed by Freeman Dyson for the "Super Orion" called for the pusher plate to be composed primarily of uranium or a transuranic element so that upon reaching a nearby star system the plate could be converted to nuclear fuel.
The Orion nuclear pulse rocket design has extremely high performance. Orion nuclear pulse rockets using nuclear fission type pulse units were originally intended for use on interplanetary space flights.
Missions that were designed for an Orion vehicle in the original project included single stage (i.e., directly from Earth's surface) to Mars and back, and a trip to one of the moons of Saturn. [ 25 ]
Freeman Dyson performed the first analysis of what kinds of Orion missions were possible to reach Alpha Centauri , the nearest star system to the Sun . [ 26 ] His 1968 paper "Interstellar Transport" (published in Physics Today ) [ 27 ] retained the concept of large nuclear explosions but Dyson moved away from the use of fission bombs and considered the use of one megaton deuterium fusion explosions instead. His conclusions were simple: the debris velocity of fusion explosions was probably in the 3000–30,000 km/s range and the reflecting geometry of Orion's hemispherical pusher plate would reduce that range to 750–15,000 km/s. [ 28 ]
To estimate the upper and lower limits of what could be done using 1968 technology, Dyson considered two starship designs. The more conservative energy limited pusher plate design simply had to absorb all the thermal energy of each impinging explosion (4×10 15 joules, half of which would be absorbed by the pusher plate) without melting. Dyson estimated that if the exposed surface consisted of copper with a thickness of 1 mm, then the diameter and mass of the hemispherical pusher plate would have to be 20 kilometers and 5 million tonnes, respectively. 100 seconds would be required to allow the copper to radiatively cool before the next explosion. It would then take on the order of 1000 years for the energy-limited heat sink Orion design to reach Alpha Centauri.
In order to improve on this performance while reducing size and cost, Dyson considered an alternative momentum limited pusher plate design where an ablation coating of the exposed surface is substituted to get rid of the excess heat. The limitation is then set by the capacity of shock absorbers to transfer momentum from the impulsively accelerated pusher plate to the smoothly accelerated vehicle. Dyson calculated that the properties of available materials limited the velocity transferred by each explosion to about 30 meters per second independent of the size and nature of the explosion. If the vehicle is to be accelerated at 1 Earth gravity (9.81 m/s 2 ) with this velocity transfer, then the pulse rate is one explosion every three seconds. [ 29 ] The dimensions and performance figures Dyson derived for the two vehicle designs are given in his paper.
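The acceleration quoted for the momentum-limited design follows directly from those two numbers. The sketch below also extends the arithmetic to the 10,000 km/s velocity mentioned earlier for the fusion-pulse proposal; the resulting burn time and pulse count are back-of-envelope estimates, not figures from Dyson's paper:

```python
# Momentum-limited Orion: average acceleration from the per-pulse velocity increment.
dv_per_pulse = 30.0     # m/s transferred per explosion (figure quoted above)
pulse_period_s = 3.0    # one explosion every three seconds
g0 = 9.81

avg_accel = dv_per_pulse / pulse_period_s
print(f"average acceleration ≈ {avg_accel:.1f} m/s^2 ≈ {avg_accel / g0:.2f} g")

# Rough extension: time and pulse count to reach an assumed 10,000 km/s cruise velocity.
v_cruise = 1.0e7        # m/s
burn_time_s = v_cruise / avg_accel
pulses_needed = burn_time_s / pulse_period_s
print(f"burn time ≈ {burn_time_s / 86_400:.0f} days, using ≈ {pulses_needed:,.0f} pulse units")
```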
Later studies indicate that the top cruise velocity that can theoretically be achieved is a few percent of the speed of light (0.08–0.1 c ). [ 15 ] [ verification needed ] An atomic (fission) Orion can achieve perhaps 9–11% of the speed of light. A nuclear pulse drive starship powered by fusion- antimatter catalyzed nuclear pulse propulsion units would be similarly in the 10% range, and pure matter-antimatter annihilation rockets would be theoretically capable of obtaining a velocity between 50% and 80% of the speed of light . In each case, saving fuel for slowing down halves the maximum speed. The concept of using a magnetic sail to decelerate the spacecraft as it approaches its destination has been discussed as an alternative to using propellant; this would allow the ship to travel near the maximum theoretical velocity. [ 30 ]
At 0.1 c , Orion thermonuclear starships would require a flight time of at least 44 years to reach Alpha Centauri, not counting time needed to reach that speed (about 36 days at constant acceleration of 1 g or 9.8 m/s 2 ). At 0.1 c , an Orion starship would require 100 years to travel 10 light years. The astronomer Carl Sagan suggested that this would be an excellent use for stockpiles of nuclear weapons. [ 31 ] [ full citation needed ]
As part of the development of Project Orion, to garner funding from the military, a derived "space battleship" space-based nuclear-blast-hardened nuclear-missile weapons platform was mooted in the 1960s by the United States Air Force. It would comprise the USAF "Deep Space Bombardment Force". [ 32 ] [ 33 ] [ 34 ]
A concept similar to Orion was designed by the British Interplanetary Society (B.I.S.) in the years 1973–1974. Project Daedalus was to be a robotic interstellar probe to Barnard's Star that would travel at 12% of the speed of light. In 1989, a similar concept was studied by the U.S. Navy and NASA in Project Longshot . Both of these concepts require significant advances in fusion technology, and therefore cannot be built at present, unlike Orion.
From 1998 to the present, the nuclear engineering department at Pennsylvania State University has been developing two improved versions of project Orion known as Project ICAN and Project AIMStar using compact antimatter catalyzed nuclear pulse propulsion units, [ 35 ] rather than the large inertial confinement fusion ignition systems proposed in Project Daedalus and Longshot. [ 36 ]
The expense of the fissionable materials required was thought to be high, until the physicist Ted Taylor showed that with the right designs for explosives, the amount of fissionables used on launch was close to constant for every size of Orion from 2,000 tons to 8,000,000 tons. The larger bombs used more explosives to super-compress the fissionables, increasing efficiency. The extra debris from the explosives also serves as additional propulsion mass.
The bulk of costs for historical nuclear defense programs have been for delivery and support systems, rather than for production cost of the bombs directly (with warheads being 7% of the U.S. 1946–1996 expense total according to one study). [ 37 ] After initial infrastructure development and investment, the marginal cost of additional nuclear bombs in mass production can be relatively low. In the 1980s, some U.S. thermonuclear warheads had $1.1 million estimated cost each ($630 million for 560). [ 38 ] For the perhaps simpler fission pulse units to be used by one Orion design, a 1964 source estimated a cost of $40,000 or less each in mass production (equivalent to $300,000 in 2023). [ 38 ] [ 39 ]
Project Daedalus later proposed fusion explosives ( deuterium or tritium pellets) detonated by electron beam inertial confinement. This is the same principle behind inertial confinement fusion . Theoretically, it could be scaled down to far smaller explosions, and require small shock absorbers.
From 1957 to 1964 this information was used to design a spacecraft propulsion system called Orion, in which nuclear explosives would be thrown behind a pusher-plate mounted on the bottom of a spacecraft and exploded. The shock wave and radiation from the detonation would impact against the underside of the pusher plate, giving it a powerful push. The pusher plate would be mounted on large two-stage shock absorbers that would smoothly transmit acceleration to the rest of the spacecraft.
During take-off, there were concerns of danger from fluidic shrapnel [ clarification needed ] being reflected from the ground. One proposed solution was to use a flat plate of conventional explosives spread over the pusher plate, and detonate this to lift the ship from the ground before going nuclear. This would lift the ship far enough into the air that the first focused nuclear blast would not create debris capable of harming the ship.
A preliminary design for a nuclear pulse unit was produced. It proposed the use of a shaped-charge fusion-boosted fission explosive. The explosive was wrapped in a beryllium oxide channel filler, which was surrounded by a uranium radiation mirror. The mirror and channel filler were open ended, and in this open end a flat plate of tungsten propellant was placed. The whole unit was built into a can with a diameter no larger than 6 inches (150 mm) and weighed just over 300 pounds (140 kg) so it could be handled by machinery scaled up from a soft-drink vending machine; Coca-Cola was consulted on the design. [ 40 ]
At 1 microsecond after ignition the gamma bomb plasma and neutrons would heat the channel filler and be somewhat contained by the uranium shell. At 2–3 microseconds the channel filler would transmit some of the energy to the propellant, which vaporized. The flat plate of propellant formed a cigar-shaped explosion aimed at the pusher plate.
The plasma would cool to 25,200 °F (14,000 °C) as it traversed the 82 feet (25 m) distance to the pusher plate and then reheat to 120,600 °F (67,000 °C) as, at about 300 microseconds, it hits the pusher plate and is recompressed. This temperature emits ultraviolet light, which is poorly transmitted through most plasmas. This helps keep the pusher plate cool. The cigar shaped distribution profile and low density of the plasma reduces the instantaneous shock to the pusher plate.
Because the momentum transferred by the plasma is greatest in the center, the pusher plate's thickness would decrease by approximately a factor of 6 from the center to the edge. This ensures the change in velocity is the same for the inner and outer parts of the plate.
At low altitudes where the surrounding air is dense, gamma scattering could potentially harm the crew without a radiation shield; a radiation refuge would also be necessary on long missions to survive solar flares . Radiation shielding effectiveness increases exponentially with shield thickness; see gamma ray for a discussion of shielding. On ships with a mass greater than 2,200,000 pounds (1,000,000 kg), the structural bulk of the ship and its stores, along with the mass of the bombs and propellant, would provide more than adequate shielding for the crew. Stability was initially thought to be a problem due to inaccuracies in the placement of the bombs, but it was later shown that the effects would cancel out. [ 41 ] [ 42 ]
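The exponential dependence mentioned above is just the standard attenuation law I = I₀·e^(−μx); a tiny sketch with an assumed attenuation coefficient (an illustrative value only, not a figure from the project) shows how quickly transmission drops with thickness:

```python
# Exponential gamma attenuation: transmitted fraction = exp(-mu * thickness).
import math

mu_per_cm = 0.5  # assumed linear attenuation coefficient (illustrative value only)
for thickness_cm in (5, 10, 20, 40):
    transmitted = math.exp(-mu_per_cm * thickness_cm)
    print(f"{thickness_cm:3d} cm of shielding -> {transmitted:.2e} of incident gamma flux")
```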
Numerous model flight tests, using conventional explosives, were conducted at Point Loma, San Diego in 1959. On November 14, 1959 the one-meter model, also known as "Hot Rod" and "putt-putt", first flew using RDX (chemical explosives) in a controlled flight for 23 seconds to a height of 184 feet (56 m). Film of the tests has been transcribed to video [ 43 ] and was featured on the BBC TV program To Mars by A-Bomb in 2003 with comments by Freeman Dyson and Arthur C. Clarke . The model landed by parachute undamaged and is in the collection of the Smithsonian National Air and Space Museum.
The first proposed shock absorber was a ring-shaped airbag. It was soon realized that, should an explosion fail, the 1,100,000–2,200,000-pound (500,000–1,000,000 kg) pusher plate would tear away the airbag on the rebound. So a two-stage detuned spring and piston shock absorber design was developed. On the reference design the first stage mechanical absorber was tuned to 4.5 times the pulse frequency whilst the second stage gas piston was tuned to 0.5 times the pulse frequency. This permitted timing tolerances of 10 ms in each explosion.
The final design coped with bomb failure by overshooting and rebounding into a center position. Thus following a failure and on initial ground launch it would be necessary to start or restart the sequence with a lower yield device. In the 1950s, methods of adjusting bomb yield were in their infancy and considerable thought was given to providing a means of swapping out a standard yield bomb for a smaller yield one in a 2 or 3 second time frame or to provide an alternative means of firing low yield bombs. Modern variable yield devices would allow a single standardized explosive to be tuned down (configured to a lower yield) automatically.
The bombs had to be launched behind the pusher plate with enough velocity to explode 66–98 feet (20–30 m) beyond it every 1.1 seconds. Numerous proposals were investigated, from multiple guns poking over the edge of the pusher plate to rocket propelled bombs launched from roller coaster tracks; however, the final reference design used a simple gas gun to shoot the devices through a hole in the center of the pusher plate.
Exposure to repeated nuclear blasts raises the problem of ablation (erosion) of the pusher plate. Calculations and experiments indicated that a steel pusher plate would ablate less than 1 mm, if unprotected. If sprayed with an oil it would not ablate at all (this was discovered by accident: a test plate had oily fingerprints on it and the fingerprints suffered no ablation). The absorption spectra of carbon and hydrogen minimize heating. The design temperature of the shockwave, 120,600 °F (67,000 °C), emits ultraviolet light. Most materials and elements are opaque to ultraviolet, especially at the 49,000 psi (340 MPa) pressures the plate experiences. This prevents the plate from melting or ablating.
One issue that remained unresolved at the conclusion of the project was whether or not the turbulence created by the combination of the propellant and ablated pusher plate would dramatically increase the total ablation of the pusher plate. According to Freeman Dyson, in the 1960s they would have had to actually perform a test with a real nuclear explosive to determine this; with modern simulation technology this could be determined fairly accurately without such empirical investigation.
Another potential problem with the pusher plate is that of spalling —shards of metal—potentially flying off the top of the plate. The shockwave from the impacting plasma on the bottom of the plate passes through the plate and reaches the top surface. At that point, spalling may occur, damaging the pusher plate. For that reason, alternative substances—plywood and fiberglass—were investigated for the surface layer of the pusher plate and thought to be acceptable.
If the conventional explosives in the nuclear bomb detonate but a nuclear explosion does not ignite, shrapnel could strike and potentially critically damage the pusher plate.
True engineering tests of the vehicle systems were thought to be impossible because several thousand nuclear explosions could not be performed in any one place. Experiments were designed to test pusher plates in nuclear fireballs and long-term tests of pusher plates could occur in space. The shock-absorber designs could be tested at full-scale on Earth using chemical explosives.
However, the main unsolved problem for a launch from the surface of the Earth was thought to be nuclear fallout . Freeman Dyson, group leader on the project, estimated back in the 1960s that with conventional nuclear weapons , each launch would statistically cause on average between 0.1 and 1 fatal cancers from the fallout. [ 44 ] That estimate is based on no-threshold model assumptions, a method often used in estimates of statistical deaths from other industrial activities. Each few million dollars of efficiency indirectly gained or lost in the world economy may statistically average lives saved or lost, in terms of opportunity gains versus costs. [ 45 ] Indirect effects could matter for whether the overall influence of an Orion-based space program on future human global mortality would be a net increase or a net decrease, including if change in launch costs and capabilities affected space exploration , space colonization , the odds of long-term human species survival , space-based solar power , or other hypotheticals.
Danger to human life was not a reason given for shelving the project. The reasons included lack of a mission requirement, the fact that no one in the U.S. government could think of any reason to put thousands of tons of payload into orbit, the decision to focus on rockets for the Moon mission, and ultimately the signing of the Partial Test Ban Treaty in 1963. The danger to electronic systems on the ground from an electromagnetic pulse was not considered to be significant from the sub-kiloton blasts proposed since solid-state integrated circuits were not in general use at the time.
From many smaller detonations combined, the fallout for the entire launch of a 12,000,000-pound (5,400,000 kg) Orion is equal to the detonation of a typical 10 megaton (40 petajoule ) nuclear weapon as an air burst , therefore most of its fallout would be the comparatively dilute delayed fallout . Assuming the use of nuclear explosives with a high portion of total yield from fission, it would produce a combined fallout total similar to the surface burst yield of the Mike shot of Operation Ivy , a 10.4 megaton device detonated in 1952. The comparison is not quite perfect as, due to its surface burst location, Ivy Mike created a large amount of early fallout contamination. Historical above-ground nuclear weapon tests included 189 megatons of fission yield and caused average global radiation exposure per person peaking at 0.11 mSv/a in 1963, with a residual of 0.007 mSv/a in modern times , superimposed upon other sources of exposure, primarily natural background radiation, which averages 2.4 mSv/a globally but varies greatly, such as 6 mSv/a in some high-altitude cities. [ 46 ] [ 47 ] Any comparison would be influenced by how population dosage is affected by detonation locations, with very remote sites preferred.
With special designs of the nuclear explosive, Ted Taylor estimated that fission product fallout could be reduced tenfold, or even to zero, if a pure fusion explosive could be constructed instead. A 100% pure fusion explosive has yet to be successfully developed, according to declassified US government documents, although relatively clean PNEs ( peaceful nuclear explosions ) were tested for canal excavation by the Soviet Union in the 1970s with 98% fusion yield in the Taiga test's 15 kiloton devices, 0.3 kilotons fission, [ 44 ] [ 48 ] which excavated part of the proposed Pechora–Kama Canal .
The vehicle's propulsion system and its test program would violate the Partial Test Ban Treaty of 1963, as currently written, which prohibits all nuclear detonations except those conducted underground as an attempt to slow the arms race and to limit the amount of radiation in the atmosphere caused by nuclear detonations. There was an effort by the US government to put an exception into the 1963 treaty to allow for the use of nuclear propulsion for spaceflight but Soviet fears about military applications kept the exception out of the treaty. This limitation would affect only the US, Russia, and the United Kingdom. It would also violate the Comprehensive Nuclear-Test-Ban Treaty which has been signed by the United States and China as well as the de facto moratorium on nuclear testing that the declared nuclear powers have imposed since the 1990s.
The launch of such an Orion nuclear bomb rocket from the ground or low Earth orbit would generate an electromagnetic pulse that could cause significant damage to computers and satellites as well as flooding the van Allen belts with high-energy radiation. Since the EMP footprint would be a few hundred miles wide, this problem might be solved by launching from very remote areas. A few relatively small space-based electrodynamic tethers could be deployed to quickly eject the energetic particles from the capture angles of the van Allen belts.
An Orion spacecraft could be boosted by non-nuclear means to a safer distance only activating its drive well away from Earth and its satellites. The Lofstrom launch loop , space elevator , or other alternative launch systems hypothetically provide excellent solutions; in the case of the space elevator, existing carbon nanotube composites, with the possible exception of colossal carbon tubes , do not yet have sufficient tensile strength . All chemical rocket designs are extremely inefficient and expensive when launching large mass into orbit but could be employed if the result were cost effective.
A test that was similar to the test of a pusher plate occurred as an accidental side effect of a nuclear containment test called " Pascal-B " conducted on 27 August 1957. [ 49 ] The test's experimental designer Dr. Robert Brownlee performed a highly approximate calculation that suggested that the low-yield nuclear explosive would accelerate the massive (900 kg) steel capping plate to six times escape velocity . [ 50 ] The plate was never found, but Dr. Brownlee believes that it never left the atmosphere; for example, it could have been vaporized by compression heating of the atmosphere due to its high speed. The calculated velocity was interesting enough that the crew trained a high-speed camera on the plate; unfortunately, the plate appeared in only one frame, indicating a very high lower bound for its speed.
The first appearance of the idea in print appears to be Robert A. Heinlein 's 1940 short story, " Blowups Happen ".
As discussed by Arthur C. Clarke in his recollections of the making of the film 2001: A Space Odyssey in The Lost Worlds of 2001 , a nuclear-pulse version of the U.S. interplanetary spacecraft Discovery One was considered. However the Discovery in the movie did not use this idea, as Stanley Kubrick thought it might be considered parody after making Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb . [ 51 ]
An Orion spaceship features prominently in the science fiction novel Footfall by Larry Niven and Jerry Pournelle . In the face of an alien siege/invasion of Earth, the humans must resort to drastic measures to get a fighting ship into orbit to face the alien fleet.
The opening premise of the show Ascension is that in 1963 President John F. Kennedy and the U.S. government, fearing the Cold War will escalate and lead to the destruction of Earth, launched the Ascension , an Orion-class spaceship, to colonize a planet orbiting Proxima Centauri, assuring the survival of the human race.
Author Stephen Baxter's science fiction novel Ark employs an Orion-class generation ship to escape ecological disaster on Earth.
Towards the conclusion of his Empire Games trilogy, Charles Stross includes a spacecraft modeled after Project Orion. The craft's designers, constrained by a 1960s level of industrial capacity, intend it to be used to explore parallel worlds and to act as a nuclear deterrent, leapfrogging their foes' more contemporary capabilities.
In the horror novel Torment by Jeremy Robinson (written under the pseudonym Jeremy Bishop), the main characters escape from a global nuclear war in a nuclear pulse propulsion craft. The craft is among 3 others; part of the "Orion Protocol", an escape mechanism for members of the federal government. The craft are housed in a subterranean chamber below The Ellipse in Washington, D.C.
In the science fiction novel The Three-Body Problem and its associated television show, a probe is launched towards an approaching alien fleet using a variation of the Orion method. [ 52 ] | https://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion) |
Project Oxygen is a research project at the Massachusetts Institute of Technology 's Computer Science and Artificial Intelligence Laboratory to develop pervasive, human-centered computing. The Oxygen architecture is to consist of handheld terminals, computers embedded in the environment, and dynamically configured networks which connect these devices. [ 1 ] [ 2 ] A Project Oxygen device, the H21, exhibits similarities to the iPhone. [ 3 ] As of 2021, Project Oxygen devices have never been officially used in wider society. [ 3 ]
| https://en.wikipedia.org/wiki/Project_Oxygen |
Project Piaba ( / p iː ˈ ɑː b ə / , pee- AH -bə ) is a fishery initiative located on the Rio Negro tributary of the Amazon River . The program both promotes and researches sustainable aquarium pet fish collection and its impact on the environment . The name of the project comes from the Brazilian Portuguese word, piaba ( [ˈpjabɐ] ), which means "little fish", [ 1 ] referring specifically to the cardinal tetra ( Paracheirodon axelrodi ). [ 2 ] Project Piaba is an ongoing project with annual research expeditions to the Rio Negro region. [ 2 ] Because of the sustainable nature of the project, its slogan is "Buy a Fish, Save a Tree!" [ 3 ]
Many ornamental freshwater aquarium fish, including the cardinal tetra and the discus ( Symphysodon ssp.), are sourced from the Amazon River Basin area. [ 1 ] The Rio Negro region is the home of more than 100 different species of fish that are important to the pet fish trade. [ 4 ] In fact, several species, including cardinal tetras, show the adaptive trait of iridescence which may provide lower visibility in a blackwater environment. [ 4 ]
Project Piaba started with an ecological baseline study of the region which was conducted in 1989 by a group of researchers and students from the Universidade do Amazonas (UA) and the National Institute of Amazon Research (INPA). [ 5 ] This initial survey discovered and documented the importance of the fish trade to the local economy , and it led the researchers to wonder about the impact the fishing had on the environment. [ 5 ]
The ornamental fish trade in the Rio Negro region is considered "substantial by local standards, representing approximately US$ 3 million per year with over 30 million live fish exported annually." [ 6 ] About 40,000 people in the region, many of them caboclos (river-dwelling families) are dependent on the income from their fisheries. [ 7 ]
In the 1950s, Herbert R. Axelrod and Willi Schwarz had begun shipping aquarium fish out of Barcelos in Brazil. [ 8 ] In 1991, Ning Labbish Chao and Gregory Prang founded Project Piaba in order to support the local fisheries and, in concert with them, help protect the habitat of collected fishes. [ 5 ] Because the fish are caught in a "gentle" way, and because most of the fish caught for the aquarium trade are short-lived and would naturally die during the dry season, the ecological impact of catching them is considered minimal. [ 1 ] The fish are also not caught during their breeding season. [ 9 ] The cardinal tetra, especially, is considered a renewable resource . [ 10 ] Project Piaba assesses the sustainability of the species fished in the Rio Negro area by using the " F value ", which estimates the portion of the catch relative to the total biomass. [ 11 ]
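As a minimal illustration of the exploitation-rate idea behind the "F value" described above (the catch and biomass numbers are invented, and real fishery assessments estimate fishing mortality with far more elaborate stock models):

```python
# Toy exploitation-rate calculation: fraction of estimated biomass removed per year.
annual_catch_tonnes = 12.0         # hypothetical annual cardinal tetra catch
estimated_biomass_tonnes = 400.0   # hypothetical standing-stock estimate

f_value = annual_catch_tonnes / estimated_biomass_tonnes
print(f"exploitation rate ≈ {f_value:.1%} of estimated biomass per year")
```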
The center of the Rio Negro aquarium trade, Barcelos, now celebrates ornamental fish in a festival held every January in conjunction with the annual research expedition of Project Piaba . [ 1 ] A stadium, known as Piabodrome, was even built for the festival. [ 1 ] The first festival took place in 1994, and a permanent exhibit highlighting the fish was installed in Barcelos by Project Piaba that same year. [ 12 ] Money donated by ichthyologist Herbert Axelrod helped support a lab and then later, the Centre for Aquatic Conservation, which has helped educate, support research and awareness of the project. [ 13 ] The Centre was first opened in 1997. [ 5 ] Other funds have come from the Association of Ornamental Fish Breeders and Exporters of Amazonas (ACEPOAM) for research on both the fish and the welfare of the fish farmers. [ 5 ]
Scott Dowd is, as of 2015, the director of Project Piaba . [ 2 ] He leads the yearly expeditions with experts from around the world, volunteers, and even families visiting the Amazon region. [ 14 ]
The project has acted as a case study for other, similar projects. [ 2 ] Areas such as the Western Ghats in India and parts of Bali are beginning to use similar practices to make money from the fish trade while sustaining the fishes' environment. [ 1 ] Project Piaba is often used to show how groups can support the environment while providing economic stimulus to a poor region of the world. [ 15 ] [ 16 ] In addition, the sustainable sourcing of fish is also a stimulus to the idea of "beneficial home fishkeeping", which emphasizes proper fish care and, in turn, supports those who catch the fish in the wild. [ 17 ] When no incentive exists to fish, individuals in the Rio Negro area turn to less environmentally friendly means of support, such as logging or cattle ranching . [ 18 ] Project Piaba aims to actively discourage domestic farming of fish that are also sustainable wild resources, like the cardinal tetra, because such farming takes away the financial incentive to protect the rain forest of the Rio Negro area. [ 19 ] [ 20 ]
The project has the support of aquaria and zoos around the world [ 21 ] [ 22 ] and also from the International Union for Conservation of Nature . [ 23 ] | https://en.wikipedia.org/wiki/Project_Piaba |
Project ROSE (" Retrofit of Strike Element " [ 1 ] ) was a program by the Pakistan Air Force to upgrade the avionics of its aging Dassault Mirage IIIEP and Mirage 5PA fighter jets. [ 2 ] These had originally been built either by Dassault Aviation in France, or by the Government Aircraft Factories (GAF) in Australia. The program, based at the Pakistan Aeronautical Complex , focused on upgrading the military avionics and onboard computer systems, with equipment supplied by Pakistani Margella Electronics , French SAGEM and Italian SELEX consortia. [ 3 ] [ 4 ]
The program began when the Pakistan Air Force's aging Mirage fleet came under serious consideration for retirement or phase-out from active service. [ 1 ] The Pakistan Air Force, which was already operating Dassault Mirage IIIs and Mirage 5s, began its procurement of second-hand Mirage fighters from Australia , Lebanon , Libya , and Spain at prices reported to be within the Ministry of Defence 's budget. [ 5 ] More than 90% of the aircraft were retrofitted at the Pakistan Aeronautical Complex in Kamra , with the remainder being upgraded in France . [ 5 ] Between 1996 and 2000, several Mirage IIIs and Mirage 5s were bought from other countries and upgraded under this program at the Pakistan Aeronautical Complex. [ 5 ]
The upgrade package included the installation of a Grifo radar with a detection range of approximately 75 km, in-flight refueling probes, and airframe overhauls to extend service life. After the ROSE-III upgrade, locally manufactured weapons like the H-2 and H-4 SOW , the Takbir glide bomb , and stealth nuclear cruise missiles such as the Ra'ad Mk-1 and Ra'ad Mk-2 were integrated into the aircraft. Further considerations for upgrades were recommended but the program was eventually terminated due to the increasing cost of spare parts and the poor condition of the second-hand airframes . [ 5 ]
It is currently expected that the upgraded fighter jets will remain in service with the Pakistan Air Force in specialized tactical attack roles beyond 2020. The jets are expected to be replaced by either the JF–17 Thunder (Block 3, Block 4 and Block 5), additional F-16s , or the 5th generation stealth fighter in development under Project Azm , though no official timelines for these replacements have been announced. [ 1 ] [ 6 ]
In the 1990s, the United States placed an economic and military embargo on Pakistan due to its atomic bomb program . During this time, the Indian Air Force began to modernize its fleet of fighter aircraft, thus putting stress on the Pakistan Air Force . Furthermore, the United States indefinitely delayed the procurement of F-16 fighter jets, which were already paid for by Pakistan. Restrictions on the Pakistan Air Force, which relied heavily on American-built infrastructure, prompted the development of solutions to maintain combat readiness. [ 3 ]
In 1992, the Pakistan Air Force devised a strategy for increasing its self-reliance and immediately launched the ROSE program, as well as Project Sabre II, which resulted in the development of the JF-17 aircraft. It was not until 1995 that Prime Minister Benazir Bhutto released funds to the MoD for both programs. [ 3 ] Although the United States raised objections to the program, the PAF procured Mirage fighters from various countries including Australia , Belgium , Lebanon , Libya , and Spain from 1992 until 2003. [ 3 ]
SAGEM from France and SELEX from Italy were contracted in 1996 to provide crucial consultation on military electronics and avionics. Special overhauling facilities and engineering divisions were established at the Pakistan Aeronautical Complex (PAC) in Kamra . [ 7 ] Over 90% of the aircraft were retrofitted locally at the Pakistan Aeronautical Complex, and a few aircraft were upgraded in France. Under the first phase of the program, designated ROSE–I , around 33 Mirage III fighter jets were upgraded to perform multiple mission types, including air superiority and strike missions . The ROSE upgrade was also applied to 34 Mirage 5 fighter jets for conducting night operations . [ 7 ]
In 1998, Margalla Electronics, DESTO , GIDS and NIE joined the program after the departure of SAGEM and SELEX . In the second stage of the project, named ROSE–II , around 20 Mirage fighter jets were upgraded, and 14 aircraft were upgraded under ROSE–III . [ 3 ] Newer Mirages bought from Australia and Belgium were in good condition with low flight hours, and supplemented the PAF's own fleet of 34 Mirage IIIs and 32 Mirage 5s acquired directly from France between 1967 and 1982. [ 8 ] The ROSE project was set up to buy as many second-hand aircraft as possible and to upgrade them with the latest avionics and other modern systems. During 1998, the Pakistan Air Force bought the entire fleet of grounded Mirage IIIs from Lebanon and upgraded them indigenously at the Pakistan Aeronautical Complex . [ 5 ]
A project team was formed to manage the program and held review meetings frequently in both Pakistan and France where problems were discussed. The Pakistan Aeronautical Complex and its technical personnel were involved with parts manufacturing and quality control. PAF test pilots validated performance of the new equipment during test flights. [ 7 ] In 2003, the PAF bought a total of about 50 grounded Mirage 5 fighter jets from Libya along with 150 engines still in sealed packaging and a huge quantity of spare parts. [ 9 ] Most of these aircraft were to be broken up for spare parts required by the Mirage fleet already in PAF service. [ 9 ] With this purchase, the PAF became the largest operator of Dassault Mirage III/5 fighters in the world. [ 10 ]
In 1990, the PAF bought 43 Mirage IIIOs and seven Mirage IIIDs, which had been retired from the Royal Australian Air Force between 1987 and 1989. Out of the 50 Dassault Mirage III fighters received from Australia, 40 were found to be suitable for service with the PAF; [ 11 ] 12 of them were overhauled at PAC and made operational. After inspection, the remaining 28 were selected for upgrade under Project ROSE and modified to ROSE I standard. [ citation needed ] The cockpit was modernized with a new head-up display , new multi-function displays , and a new radar altimeter . [ 12 ] New navigation systems, including an inertial navigation system and a GPS , were also installed, along with a new radar warning receiver . [ 12 ]
The FIAR Grifo M3 multi-mode radar was installed in the second phase of the upgrade project. [ 13 ] It was stated that ROSE I fighters could easily be in service beyond 2010. In early 1999 it was stated that problems in "certain parameters - and errors in certain modes" had surfaced during flight trials of the Grifo M3 radar in the Mirage III, but these were later solved. [ 8 ]
A new Italian fire-control radar, the FIAR (now SELEX Galileo) Grifo M3 , was installed. The PAF's standard short-range air-to-air missile at the time, the AIM-9L Sidewinder , was integrated with the Grifo M3 radar. [ 12 ]
The Grifo M3 was developed specifically to fit the Mirage III and has been in full operation on the Mirage III since 2001. It has a power consumption of 200 W, operates in the X-band and is compatible with infrared-guided , semi-active and active radar guided missiles. The circular antenna has a diameter of 47 cm. The radar has over 30 different operational air-to-air/air-to-surface mission and navigation modes. Air to air modes include Single/Dual Target Track and Track While Scan. Air to surface modes include Real Beam Map, Doppler Beam Sharpening, Sea Low/High, Ground Moving Target Indicator, Ground/Sea Moving Target Track. [ 14 ] [ 15 ]
Other optional modes include Raid Assessment, Non-Cooperative Target Identification, SAR (synthetic aperture radar) and Precision Velocity Update. Low, medium and high pulse repetition frequencies reduce the effects of ground clutter. The radar features digital adaptive pulse-compression technology, a dual-channel receiver, scanning coverage of +/-60 degrees in both azimuth and elevation, and air cooling; it weighs less than 91 kg and has a flight-guaranteed MTBF of over 220 hours. It also carries extensive ECCM (electronic counter-countermeasures) provisions and built-in test equipment (BITE), and IFF interrogators can be integrated. [ 14 ] [ 15 ]
In-flight refueling probes of South African origin were also installed on the upgraded Mirage III ROSE I aircraft, [ 16 ] as a pilot program for the induction of aerial refueling capability into the PAF. [ needs update ]
In 1996, SAGEM sold 44 surplus French Air Force Mirages (35 single-seat Mirage 5Fs and nine dual-seat Mirage IIIBEs) to the PAF. Only 34 Mirage 5Fs and six Mirage IIIBEs were intended to fly again, the others serving as spare parts sources. 20 Mirage 5Fs were overhauled and upgraded in France to ROSE II standards. In total, 29 Mirage 5Fs and six Mirage IIIBEs (respectively designated Mirage 5EF and Mirage IIIDF with the PAF) were delivered to Pakistan by air between 1999 and 2001, with five other aircraft delivered by boat to be overhauled by PAC (one single-seater crashed during an acceptance flight in France). [ 11 ] [ 12 ]
ROSE II Mirages are similar to ROSE I examples, but they are fitted with a navigation FLIR in place of the Grifo M3 radar. It is mounted in a pod under the nose. Moreover, a new inertial navigation system was installed, together with an encrypted radio. [ 12 ]
14 ex-French Air Force aircraft that had not been upgraded to ROSE II standards were upgraded to ROSE III standards in Pakistan. In addition to the upgrades embodied in the ROSE II standard, the ROSE III modernization includes a new head-up display , a new multi-function display , and a Chinese-made radar warning receiver . [ 12 ] A new PAF squadron was raised on 19 April 2007, No. 27 Tactical Attack "Zarrar" Squadron , to operate the Mirage 5 ROSE III fighters and specialize in night-time surface strike missions. [ 17 ]
A ROSE IV upgrade was also offered, but not taken up. It was based on the ROSE III upgrade standard, but also included the installation of the Grifo 3 radar, AIM-9L/M capability, and the Dart targeting pod , derived from the Litening . A chaff / flare dispenser and a radar warning receiver were also planned to be added. [ 12 ]
The ROSE program was successful and saved the Pakistan Air Force a large amount of foreign exchange that would otherwise have been spent abroad. [ 7 ] Under this program, further upgrades were considered, and recommendations were made for the procurement of Mirage 2000s from Qatar . [ 18 ]
The acquisition of Mirage 2000s from Qatar was passed over by the JS HQ when the Indian Air Force inducted the type into its fleet. [ 18 ] In 2003, the PAF bought 13 more Mirage IIIEs from Spain for spares cannibalization, and unlike the Australian or Lebanese purchases, that is just what they are being used for; their condition made a return to service highly unlikely. [ 18 ] Problems were encountered in upgrading the naval variant of the Mirage 5 for the Navy ; however, these were eventually solved with the procurement of spare parts. Because of the program, the PAF gained an international reputation for expertise in maintaining and upgrading the Mirage in both air and naval versions. [ 5 ]
The ROSE program provided the PAF with experience in aircraft technology and the capability to undertake similar projects in the future. The Mirage received new capabilities, thus improving its performance dramatically. [ 7 ] The program contributed to maintaining the operational relevance of Mirage aircraft in Pakistan’s air defense strategy. [ 5 ] The program was meant to be continued for some time after 2003, but the Pakistan Air Force had to terminate it due to a combination of high costs and aging Mirage III/5 airframes . [ 1 ] | https://en.wikipedia.org/wiki/Project_ROSE |
Project SHAD , an acronym for Shipboard Hazard and Defense , was part of a larger effort called Project 112 , which was conducted during the 1960s. Project SHAD encompassed tests designed to identify U.S. warships' vulnerabilities to attacks with chemical agents or biological warfare agents and to develop procedures to respond to such attacks while maintaining a war-fighting capability.
Project SHAD was part of a larger effort by the Department of Defense called Project 112. Project 112 was a chemical and biological weapons research, development, and testing project conducted by the United States Department of Defense and the CIA , and handled by the Deseret Test Center and United States Army Chemical Materials Agency from 1962 to 1973. [ 1 ] The project started under John F. Kennedy 's administration, and was authorized by his Secretary of Defense Robert McNamara , as part of a total review of the US military. The name of the project refers to its number in the 150-project review process. [ 2 ]
The Shipboard Hazard and Defense Project (SHAD) was a series of tests conducted by the U.S. Department of Defense during the 1960s to determine how well service members aboard military ships could detect and respond to chemical and biological attacks. Dee Dodson Morris of the Army Chemical Corps , who coordinated the ongoing investigation, says, "The SHAD tests were intended to show how vulnerable Navy ships were to chemical or biological warfare agents. The objective was to learn how chemical or biological warfare agents would disperse throughout a ship, and to use that information to develop procedures to protect crew members and decontaminate ships." [ 3 ] DoD investigators note that over a hundred tests were planned, but the lack of test results may indicate that many were never actually executed. [ 3 ] 134 tests were planned initially, but reportedly only 46 were completed.
Public Law 107–314 required the identification and release of not only Project 112 information to the United States Veterans Administration , but also that of any other projects or tests where a veteran might have been exposed to a chemical or biological warfare agent, and directed the Secretary of Defense to work with veterans and veterans service organizations to identify the other projects or tests conducted by the Department of Defense that may have exposed members of the Armed Forces to chemical or biological agents. [ 4 ] In 2000, the Department of Defense began the process of declassifying records about the project. According to the U.S. Department of Veteran Affairs, approximately 6,000 U.S. Service members were believed to be involved in conducting the tests. [ 5 ] In 2002, the Department of Defense began publishing a list of fact sheets for each of the tests.
Although many of the roughly 5,500 veterans who took part were aware of the tests, some were involved without their knowledge. Certain issues surrounding the test program were not resolved by the passage of the law and the Department of Defense was accused of continuing to withhold documents on Cold War chemical and biological weapons tests that used unsuspecting veterans as "human samplers" after reporting to Congress it had released all medically relevant information. [ 6 ] A Government Accountability Office May 2004 report, Chemical and Biological Defense: DOD Needs to Continue to Collect and Provide Information on Tests and Potentially Exposed Personnel indicates that almost all participants who were identified from Project 112 — 94 percent — were from ship-based tests of Project SHAD that comprised only about one-third of the total number of tests conducted. [ 7 ]
Jack Alderson, a retired Navy officer who commanded the Army tugboats, told CBS News that he believes the Pentagon used him and other service members to test weapons, and that those tests included agents, vaccines, and decontamination products which have led to serious medical problems, including cancer. [ 8 ] Secrecy agreements can now be ignored by veterans in order to pursue healthcare concerns within the Department of Veterans Affairs. The V.A. has offered screening programs for veterans who believe they were involved in DoD sponsored tests during their service. The Institute of Medicine of the National Academies has commissioned studies of Project SHAD participants. The first, Long-Term Health Effects of Participation in Project SHAD (Shipboard Hazard and Defense) , was released in 2007, and found "no clear evidence that specific long-term health effects are associated with participation in Project SHAD." [ 9 ] The second, Shipboard Hazard and Defense II (SHAD II) , by the IoM's Medical Follow-up Agency (MFUA), began in 2012, and, as of April 2014, was ongoing. [ 10 ]
| https://en.wikipedia.org/wiki/Project_SHAD |
Project SUNSHINE was a series of research studies that began in 1953 to ascertain the impact of radioactive fallout on the world's population. [ 1 ] The project was initially kept secret, and only became known publicly in 1956. [ 1 ] Commissioned jointly by the United States Atomic Energy Commission and USAF Project Rand , SUNSHINE sought to examine the long-term effects of nuclear radiation on the biosphere due to repeated nuclear detonations of increasing yield. [ 2 ] With the conclusion from Project GABRIEL that the radioactive isotope Strontium-90 (Sr-90) represented the most serious threat to human health from nuclear fallout, Project SUNSHINE sought to measure the global dispersion of Sr-90 by measuring its concentration in the tissues and bones of the dead. Of particular interest was tissue from the young, whose developing bones have the highest propensity to accumulate Sr-90 and thus the highest susceptibility to radiation damage. [ 2 ] SUNSHINE elicited a great deal of controversy because many of the remains sampled were utilized without prior permission from relatives of the dead, a fact that did not become known until many years later. [ 3 ]
On January 18, 1955, then-AEC commissioner Dr. Willard Libby said that there was insufficient data regarding the effects of fallout because of a lack of human samples – especially samples taken from children – to analyze. Libby was quoted as saying, "I don't know how to get them, but I do say that it is a matter of prime importance to get them, and particularly in the young age group. So, human samples are often of prime importance, and if anybody knows how to do a good job of body snatching, they will really be serving their country." [ 4 ] This led to over 1,500 samples being gathered, of which only 500 were analyzed. [ 4 ] Many of the 1,500 sampled cadavers were babies and young children, taken from countries from Australia to Europe, often without their parents' consent or knowledge. [ 5 ] According to an investigation launched after a British newspaper reported that British scientists had obtained children's bodies from various hospitals and shipped their body parts to the United States, a British mother said that her stillborn baby's legs had been removed by British doctors and that, to prevent her from finding out what had happened, she was not allowed to dress the baby for the funeral. [ 5 ]
In 1958, research for project SUNSHINE was brought to Belgium. Scientists started doing tests that were slightly different than those done previously in the United States and Europe by analyzing soils in agricultural regions instead of human bones. They headed in two main directions: environmental surveys and experimental research in natural and in controlled conditions. Their goal was to see the effect of Strontium-90 in the soils as well as to see how it transferred to the grass and grazing animals such as cows and sheep, the animals from which humans consume milk and meat. Researchers also looked for direct influences of strontium-90 by observing how well the contaminated grass and crops grew. [ 6 ]
In a 1957 article, Dr. Whitlock, director of Health Education in the National Dairy Council , Chicago, Illinois , discussed the impact of strontium-90 in the cow milk consumed by humans, concluding that the effects of Sr-90 would not be detectably harmful to the general populace of the US: "From the foregoing information, it would seem we have a long way to go before the presence of Strontium-90 in milk and other foods can catch up with the amounts of radioactivity to which we have long been exposed through natural resources." He was referring specifically to the natural radioactivity one is exposed to from potassium-40 . [ 7 ] | https://en.wikipedia.org/wiki/Project_SUNSHINE |
Project Sherwood was the codename for a United States program in controlled nuclear fusion during the period it was classified . After 1958, when fusion research was declassified around the world, the project was reorganized as a separate division within the United States Atomic Energy Commission (AEC) and lost its codename.
Sherwood developed out of a number of ad hoc efforts dating back to about 1951. Primary among these was the stellarator program at Princeton University , itself code-named Project Matterhorn . The weapons labs soon clamored to join the club: Los Alamos with its z-pinch efforts, Livermore with its magnetic mirror program, and later Oak Ridge with its fuel injector efforts. By 1953 the combined budgets were increasing into the million-dollar range, demanding some sort of oversight at the AEC level.
The name "Sherwood" was suggested by Paul McDaniel, Deputy Director of the AEC. He noted that funding for the wartime Hood Building was being dropped and moved to the new program, so they were "robbing Hood to pay Friar Tuck", punning on the name of the British physicist and fusion researcher James L. Tuck and the popular phrase " to rob Peter to pay Paul ". The connection to Robin Hood and Friar Tuck gave the project its name. [ 1 ]
Lewis Strauss strongly supported keeping the program secret until pressure from the United Kingdom led to a declassification effort at the 2nd Atoms for Peace meeting in the fall of 1958. After this time a number of purely civilian organizations also formed to organize meetings on the topic, with the American Physical Society organizing meetings under their Division of Plasma Physics. These meetings have continued to this day, renamed the International Sherwood Fusion Theory Conference. [ 2 ] The original Project Sherwood became simply the Controlled Thermonuclear Research program within the AEC and its follow-on organizations.
Research centered on three plasma confinement designs: the stellarator headed by Lyman Spitzer at the Princeton Plasma Physics Laboratory , the toroidal pinch or Perhapsatron led by James Tuck at the Los Alamos National Laboratory , and the magnetic mirror devices at the Livermore National Laboratory led by Richard F. Post . By June 1954, a preliminary study had been completed for a full-scale "Model D" stellarator that would be over 500 feet (150 m) long and produce 5,000 MW of electricity at a capital cost of $209 per kilowatt. [ 3 ] However, each concept encountered unanticipated problems, in the form of plasma instabilities that prevented the requisite temperatures and pressures from being achieved, and it eventually became clear that sustained hydrogen fusion would not be developed quickly. Strauss left the AEC in 1958 and his successor did not share Strauss' enthusiasm for fusion research. Consequently, Project Sherwood was relegated from a crash program to one that concentrated on basic research.
The funding for Project Sherwood began with the closure of another program called Project Lincoln at the Hood Laboratory. [ 4 ] As the number of people working on the projects grew, so did the budget. Under Strauss the program was reorganized, and its funding and staffing increased dramatically. From early 1954 to 1955, the number of people working on Project Sherwood grew from 45 to 110. [ 4 ] By the next year, that number had doubled. The original budget from the shutdown of Project Lincoln was $1 million. [ 4 ] At its peak, Project Sherwood had a budget of $23 million per year and retained more than 500 scientists. [ 5 ]
The declassification of the program was a large topic of discussion between scientists at all of the laboratories involved with the project and at the Sherwood conferences . The reasoning for an initial high classification status was that if the research into controlled fusion were to be successful, it would confer a significant military advantage. In particular, fusion produces high-energy neutrons , which could be used to breed plutonium from uranium for nuclear bomb production. If a small fusion machine was possible, this represented a significant proliferation risk. [ 6 ]
However, as the difficulty in making a working fusion reactor became increasingly clear, fears of hidden reactors faded. Additionally, while some of the required industrial work could be conducted without access to the classified information, there were instances where the classified information of the program was a necessity for those working on projects such as the large-scale stellarator , the ultra-high vacuum, and the problem of energy storage. [ 7 ] In these instances, there was a contract with the Commission that the information being used would only be shared with the personnel directly working on the project. It soon became apparent that industrial companies were expected to become heavily invested in the area of fission, and because of this it became clear that these companies should have full access to the research information obtained by Project Sherwood. In June 1956, permits for the research information from Project Sherwood became available through the Commission for qualified companies. [ 8 ]
Between 1955 and 1958, information became more and more available to the public with its gradual declassification, beginning with the sharing of information with the United Kingdom . Strong supporters of declassification of the program included the director of the Division of Research, Thomas Johnson, and a member of his staff, Amasa Bishop . Some of their reasoning for wanting declassification was that the secrecy of the project could negatively impact their ability to enlist and employ experienced personnel to the program. [ 9 ] They also argued that it would change the way their conferences could be held: the scientists working on the project would be able to freely discuss their findings with others in the scientific community rather than only with scientists working on the same project. [ 9 ]
In 1956, Soviet physicist Igor Kurchatov gave a talk in the UK where he revealed the entire Soviet fusion program and detailed the problems they were having. Now that the very group of people the classification was intended to keep in the dark were at roughly the same stage of development, there was no obvious reason to continue classification. While the UK had been among the first to classify their program in the aftermath of the Klaus Fuchs affair in 1950, in the summer of 1957 they appeared to have successfully created fusion in their new ZETA and were clamoring to tell the press of their advances. Their agreement to share information with the US required them to classify their work, and now they also began pressing the US to agree to declassification.
By May 1958, basic information about the various projects within Project Sherwood including the stellarator , magnetic mirrors , and molecular ion beams had been released to the public. [ 10 ]
In the early 1950s, Oak Ridge National Laboratory had a small group of scientists who were mostly experienced in ion-source technology research. Research of the kind pursued under Project Sherwood was a growing area of interest, however, and the researchers at Oak Ridge National Laboratory wanted to participate in the discovery of controlled fusion. They studied areas of controlled fusion such as the rate of plasma diffusion in a magnetic field and the charge-exchange process. Nevertheless, ion-source work remained a large part of their research. [ 11 ]
Although there was already a main project ( magnetic mirror ) at the University of California, scientist W. R. Baker began research into the pinch effect at UCRL, Berkeley in 1952. Two years later, Stirling Colgate began research on shock-heating at UCRL, Livermore. [ 12 ]
There was another small group of scientists at Tufts College in Medford, Massachusetts that had become involved in research of the pinch effect. Although their work was not officially part of the Atomic Energy Commission , some of their personnel attended the Sherwood conferences . [ 13 ]
In 1954, there was a program started at New York University called the Division of Research. It was a small program that included personnel from the Institute of Mathematical Sciences at New York University. [ 14 ] | https://en.wikipedia.org/wiki/Project_Sherwood |
Project Simoom was the name of a business deal involving the Swedish Defence Research Agency (FOI) and Saudi Arabia , which aimed to build a propellant and explosives factory in Saudi Arabia to modify anti-tank weapon systems. [ 1 ] [ 2 ] Details about the project were revealed to the public on 7 March 2012 by investigative journalists Daniel Öhman and Bo-Göran Bodin at the Swedish public radio broadcaster Sveriges Radio . [ 3 ] The project was criticized for constituting a possible breach of Swedish arms trade laws, and for its secretive nature. It resulted in the resignation of Defence Minister Sten Tolgfors on 29 March 2012 and in FOI ending its participation in the project. [ 4 ] [ 5 ]
| https://en.wikipedia.org/wiki/Project_Simoom |
Project Weber / RENEW is a harm reduction organization in Providence, Rhode Island established in 2016 by the merger of Project RENEW and Project Weber . [ 1 ] The organization is staffed entirely by people who have directly experienced mental health issues, substance abuse and/or sex work. [ 2 ]
In 2006, Colleen Daley Ndoye started Project Revitalizing & Engaging Neighborhoods by Empowering Women (RENEW), which connects women sex workers with social services and substance abuse treatment. [ 3 ] Project RENEW has been credited with reducing arrests in Pawtucket. [ 4 ]
In 2008, Project Weber was founded by Rich Holcomb and James Waterman, in Providence, as the first supportive services in America to exclusively serve male sex workers. The project was named in honor of Roy Weber, a sex worker who was found murdered in Providence in 2003. Project Weber opened its first drop-in center in 2013. After two years of running the drop-in center and nearly seven years of complete abstinence from drugs and alcohol , Holcomb relapsed and resigned as director of Project Weber. The merger into Project Weber/RENEW occurred, in part, to sustain the work of Project Weber, after Holcomb's departure as director. Holcomb continues to be involved in the organization. [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ]
In 2016, Project Weber, which served male sex workers, and Project RENEW, which served female sex workers, merged to become Project Weber/RENEW in the hope of gathering more funds and helping more people. Project Weber/RENEW is funded by the Rhode Island Department of Health . Weber/RENEW's interventions include education, distribution of harm reduction supplies, peer-led street outreach, addressing basic needs, HIV prevention testing , support groups , and case management . [ 10 ] [ 11 ]
In 2021, Weber/RENEW began handing out harm reduction supplies in Kennedy Plaza. [ 12 ] The organization runs two drop-in centers staffed by workers in recovery, one in Providence and another in Pawtucket. [ 13 ] [ 14 ] [ 15 ] Additionally, the organization runs a mobile outreach van in Providence, Central Falls, and Pawtucket. [ 16 ] [ 9 ]
In partnership with the organization CODAC Behavioral Health, it intends to open an overdose prevention center at 349 Huntington Ave in Providence, a location currently occupied by CODAC. [ 17 ] [ 18 ] After the controversial announcement of the supervised consumption center's opening, some private donors terminated donations to CODAC. [ 19 ]
It is one of the largest distributors of Narcan in the state. [ 20 ] [ 21 ] [ 9 ]
In 2020 and 2021, Weber/RENEW was one of the only organizations in Rhode Island to continue in person harm reduction and outreach work, despite the risk of transmission at the beginning of the COVID-19 pandemic . In response to the pandemic, the organization expanded services to meet clients' basic needs. Weber/RENEW also started distributing COVID masks and cleaning supplies, hosting vaccination clinics, and sharing educational information about COVID and vaccines . [ 22 ]
In January 2022, Project Weber/ RENEW taught a Community Leadership in Nonviolence and Substance Use Prevention class for students at Blackstone Academy Charter School , in partnership with U.S. Attorney Zachary A. Cunha , Local Initiatives Support Corporation Rhode Island, and the Nonviolence Institute. [ 23 ]
In July 2022, Rhode Island became the first state in America to legalize supervised drug consumption sites. [ 24 ] [ 25 ]
In February 2024 the Providence City Council approved the establishment of the state's first supervised injection site, to be operated by Project Weber/RENEW and VICTA, a privately owned behavioral health organization. The site is to be located next to the campus of Rhode Island Hospital . [ 26 ]
Project Weber/RENEW focuses much of their outreach on the Rhode Island Public Transit Authority (RIPTA) bus terminal, Kennedy Plaza . Kennedy Plaza has one of the highest rates of overdoses in Providence. [ 27 ] [ 28 ]
In 2018, Miriam Hospital received a $2.5 million federal grant to partner with Project Weber/RENEW and the Rhode Island Public Health Institute to create Rhode Island's first substance use treatment program for gay and bisexual, Black and Latino men. In 2018, Project Weber/RENEW was awarded $10,000 from the Rhode Island Foundation for advocacy and training, as well as to connect high-risk transgender men and women with health and prevention services. [ 29 ] [ 30 ]
In June 2022, Project Weber/RENEW was named Grand Marshals for the return of PrideFest and the Illuminated Night Parade in Providence. [ 31 ] | https://en.wikipedia.org/wiki/Project_Weber/RENEW |
Project West Ford (also known as Westford Needles and Project Needles ) was a test carried out by Massachusetts Institute of Technology 's Lincoln Laboratory on behalf of the United States military in 1961 and 1963 to create an artificial ionosphere above the Earth. [ 1 ] This was done to solve a major weakness that had been identified in military communications. [ 2 ]
At the height of the Cold War , all international communications were either sent through submarine communications cables or bounced off the natural ionosphere . The United States military was concerned that the Soviets might cut those cables, forcing the unpredictable ionosphere to be the only means of communication with overseas forces. [ 1 ]
To mitigate the potential threat, Walter E. Morrow started Project Needles at the MIT Lincoln Laboratory in 1958. The goal of the project was to place a ring of 480,000,000 [ 3 ] [ 4 ] copper dipole antennas in orbit to facilitate global radio communication. The dipoles collectively provided passive support to Project West Ford's parabolic dish (located at the Haystack Observatory in the town of Westford ) to communicate with distant sites.
The needles used in the experiment were 1.78 centimetres (0.70 in) long and 25.4 micrometres (1.00 thou ) [1961] or 17.8 micrometres (0.70 thou) [1963] in diameter. [ 5 ] [ 6 ] The length was chosen because it was half the wavelength of the 8 GHz signal used in the study. [ 1 ] The needles were placed in medium Earth orbit at an altitude of between 3,500 and 3,800 kilometres (2,200–2,400 mi) at inclinations of 96 and 87 degrees.
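As a rough sanity check on those dimensions, the half-wavelength of an 8 GHz signal can be computed directly. The short calculation below is illustrative only (the rounded speed of light is a convenience, not a figure from the source):

```python
# Illustrative check: half-wavelength of the 8 GHz West Ford signal.
c = 3.0e8   # speed of light in m/s (rounded)
f = 8.0e9   # signal frequency in Hz

wavelength_m = c / f                       # 0.0375 m
half_wavelength_cm = wavelength_m / 2 * 100

print(f"{half_wavelength_cm:.2f} cm")      # -> 1.88 cm
```

The quoted needle length of 1.78 cm is a few percent shorter than this ideal lambda/2 figure, consistent with the common practice of cutting physical dipoles slightly short of the theoretical half-wavelength.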
A first attempt was launched on 21 October 1961, [ 6 ] during which the needles failed to disperse. [ 7 ] [ 8 ] The project was eventually successful with the 9 May 1963 [ 6 ] launch, with radio transmissions carried by the manufactured ring. [ 9 ] [ 8 ] However, the technology was ultimately shelved, partially due to the development of the modern communications satellite and partially due to protests from other scientists. [ 1 ] [ 2 ]
British radio astronomers, optical astronomers, and the Royal Astronomical Society protested the experiment. [ 10 ] [ 11 ] [ 12 ] The Soviet newspaper Pravda also joined the protests under the headline "U.S.A. Dirties Space". [ 13 ] The International Academy of Astronautics regards the experiment as the worst deliberate release of space debris . [ 14 ]
At the time, the issue was raised in the United Nations, where the then United States Ambassador to the United Nations Adlai Stevenson defended the project. [ 15 ] Stevenson studied the published journal articles on Project West Ford. Using what he learned and citing the articles he had read, he successfully allayed the fears of most UN ambassadors from other countries. He explained, as the articles did, that sunlight pressure would cause the dipoles to remain in orbit for only a short period of approximately three years. The international protest ultimately resulted in a consultation provision included in the 1967 Outer Space Treaty . [ 1 ] [ 10 ]
Although the dispersed needles in the second experiment removed themselves from orbit within a few years, [ 4 ] some of the dipoles that had not deployed correctly remained in clumps, contributing a small amount of the orbital debris tracked by NASA's Orbital Debris Program Office . [ 16 ] [ 17 ] Their numbers have been diminishing over time as they occasionally re-enter. As of April 2023 [update] , 44 clumps of needles larger than 10 cm were still known to be in orbit. [ 18 ] [ 1 ] [ 19 ] | https://en.wikipedia.org/wiki/Project_West_Ford |
Project Xanadu ( / ˈ z æ n ə d uː / ZAN -ə-doo ) [ 1 ] was the first hypertext project, founded in 1960 by Ted Nelson . Administrators of Project Xanadu have declared it superior to the World Wide Web , with the mission statement: "Today's popular software simulates paper. The World Wide Web (another imitation of paper) trivialises our original hypertext model with one-way ever-breaking links and no management of version or contents." [ 2 ]
Wired magazine published an article entitled "The Curse of Xanadu", calling Project Xanadu "the longest-running vaporware story in the history of the computer industry". [ 3 ] The first attempt at implementation began in 1960, but it was not until 1998 that an incomplete implementation was released. A version described as "a working deliverable ", OpenXanadu , was made available in 2014.
Nelson's vision was for a "digital repository scheme for world-wide electronic publishing". Nelson states that the idea began in 1960, when he was a student at Harvard University . He proposed a machine-language program which would store and display documents, together with the ability to perform edits. This was different from a word processor (which had not been invented yet) in that the functionality would have included visual comparisons of different versions of the document, a concept Nelson would later call "intercomparison". [ 4 ]
On top of this basic idea, Nelson wanted to facilitate nonsequential writing, in which the reader could choose their own path through an electronic document. He built upon this idea in a paper to the Association for Computing Machinery (ACM) in 1965, calling the new idea "zippered lists". These zippered lists would allow compound documents to be formed from pieces of other documents, a concept named transclusion . [ 5 ] [ 4 ] In 1967, while working for Harcourt, Brace , he named his project Xanadu, in honor of the poem " Kubla Khan " by Samuel Taylor Coleridge . [ 4 ]
Nelson's talk at the ACM predicted many of the features of today's hypertext systems, but at the time, his ideas had little impact. Though researchers were intrigued by his ideas, Nelson lacked the technical knowledge to demonstrate that the ideas could be implemented. [ 3 ]
Ted Nelson published his ideas in his 1974 book Computer Lib/Dream Machines and the 1981 Literary Machines .
Computer Lib/Dream Machines is written in a non-sequential fashion: it is a compilation of Nelson's thoughts about computing, among other topics, in no particular order. It contains two books, printed back to back, to be flipped between. Computer Lib contains Nelson's thoughts on topics that angered him, while Dream Machines discusses his hopes for the potential of computers to assist the arts.
In 1972, Cal Daniels completed the first demonstration version of the Xanadu software on a computer Nelson had rented for the purpose, though Nelson soon ran out of money. In 1974, with the advent of computer networking, Nelson refined his thoughts about Xanadu into a centralized source of information, calling it a " docuverse ".
In the summer of 1979, Nelson led the latest group of his followers, Roger Gregory , Mark S. Miller and Stuart Greene , to Swarthmore, Pennsylvania . In a house rented by Greene, they hashed out their ideas for Xanadu; but at the end of the summer the group went their separate ways. Miller and Gregory created an addressing system based on transfinite numbers that they called tumblers , which allowed any part of a file to be referenced.
The group continued their work, almost to the point of bankruptcy. In 1983, however, Nelson met John Walker , founder of Autodesk , at The Hackers Conference , a conference originally for the people mentioned in Steven Levy 's Hackers , and the group started working on Xanadu with Autodesk's financial backing.
According to economist Robin Hanson , in 1990 the first known corporate prediction market was used at Xanadu. Employees and consultants used it, for example, to bet on the cold fusion controversy of the time.
While at Autodesk, the group, led by Gregory, completed a version of the software, written in the C programming language , though the software did not work the way they wanted. However, this version of Xanadu was successfully demonstrated at The Hackers Conference and generated considerable interest. Then a newer group of programmers, hired from Xerox PARC , used the problems with this software as justification to rewrite the software in Smalltalk . This effectively split the group into two factions, and the decision to rewrite put a deadline imposed by Autodesk out of the team's reach. In August 1992, Autodesk divested the Xanadu group, which became the Xanadu Operating Company and struggled due to internal conflicts and lack of investment.
Charles S. Smith, the founder of a company called Memex (named after a hypertext system proposed by Vannevar Bush [ 6 ] ), hired many of the Xanadu programmers (including lead architects Mark S. Miller , Dean Tribble and Ravi Pandya) [ 3 ] and licensed the Xanadu technology, though Memex soon faced financial difficulties, and the then-unpaid programmers left, taking the computers with them (the programmers were eventually paid). At around this time, Tim Berners-Lee was developing the World Wide Web . When the Web began to see large growth that Xanadu did not, Nelson's team grew defensive in the face of the emerging rivalry, which they were losing. The 1995 Wired Magazine article "The Curse of Xanadu" provoked a harsh rebuttal from Nelson, but the contention largely faded as the Web came to dominate. [ 7 ]
In 1998, Nelson released the source code to Xanadu as Project Udanax, [ 8 ] in the hope that the techniques and algorithms used could help to overturn some software patents . [ 9 ]
In 2007, Project Xanadu released XanaduSpace 1.0. [ 10 ]
A version described as "a working deliverable", OpenXanadu, was made available on the World Wide Web in 2014. It is called open because "you can see all the parts", but as of June 2014 [update] the site stated that it was "not yet open source". On the site, the creators claim that Tim Berners-Lee stole their idea, that the World Wide Web is a "bizarre structure created by arbitrary initiatives of varied people and it has a terrible programming language", and that Web security is a "complex maze". They go on to say that the Web merely simulates paper, allowing nothing more than dead links to other dead pages. [ 11 ]
In 2016, Ted Nelson was interviewed by Werner Herzog in his documentary, Lo and Behold, Reveries of the Connected World . "By some, he was labeled insane for clinging on; to us, you appear to be the only one who is clinically sane", Herzog said. [ 12 ] Nelson was delighted by the praise. "No one has ever said that before!" said Nelson. "Usually I hear the opposite."
In the design of the Xanadu computer system, a tumbler is an address of any range of content or link or a set of ranges or links. According to Gary Wolf in Wired , the idea of tumblers was that "the address would not only point the reader to the correct machine, it would also indicate the author of the document, the version of the document, the correct span of bytes, and the links associated with these bytes." Tumblers were created by Roger Gregory and Mark Miller . [ 3 ] [ 14 ]
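To make that description concrete, the sketch below models an address that simultaneously carries machine, author, version, and byte span. It is purely illustrative: the flat fields and the dotted rendering are assumptions for exposition, not the actual tumbler encoding.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tumbler:
    """Illustrative stand-in for a Xanadu-style address (hypothetical
    fields; real tumblers used a richer hierarchical numbering)."""
    machine: int      # which server holds the content
    author: int       # who wrote it
    version: int      # which revision of the document
    start_byte: int   # first byte of the addressed span
    end_byte: int     # last byte of the addressed span

    def render(self) -> str:
        # Hypothetical dotted notation, for display purposes only.
        return (f"{self.machine}.{self.author}.{self.version}"
                f".{self.start_byte}~{self.end_byte}")

addr = Tumbler(machine=1, author=42, version=3, start_byte=100, end_byte=160)
print(addr.render())  # -> 1.42.3.100~160
```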
The idea behind tumblers comes from transfinite numbers . [ 3 ] | https://en.wikipedia.org/wiki/Project_Xanadu |
A project agreement is an agreement between the owner/developer of a construction project and the construction trade unions that will perform that construction work. A project agreement modifies the terms of otherwise applicable construction collective agreements for purposes of a specific construction project or a defined set of construction projects. Without exception, Project Agreements provide that there will be no strikes or lockouts on the covered construction project or projects, thereby removing a significant source of risk to the owner/developers of these projects. Project agreements typically replicate the principal economic terms of the otherwise applicable construction collective agreements, although there may be specific modifications to those terms.
Labour relations statutes in most Canadian jurisdictions contain provisions that specifically allow for the negotiation of project agreements. This is in contrast with the United States (see Project Labor Agreements), where there are no specific provisions pertaining to project labor agreements in the National Labor Relations Act. In Ontario , the Conservative Government amended the Labour Relations Act (Bill 139) to facilitate the adoption of Project Agreements that cover multiple projects as well as projects initiated subsequent to the commencement of a Project Agreement.
The Canadian statutory tradition of supporting and facilitating project agreements has led to their adoption in a wide range of circumstances in both the public and private sector . Major construction projects that were completed under the terms of project agreements include: various private sector industrial projects (e.g., Hudson Bay Mining Improvement Project in Flin Flon , Tembec Paper Mill Expansion in Pine Falls , and Co-op Oil Refinery in Regina ), major public sector projects (Highway 407 Construction in Ontario, Confederation Bridge project in Prince Edward Island , and multiple projects undertaken by various provincial hydro-electric authorities.) Had the City of Toronto won its bid to host the Olympics, construction related to the Olympics would have been carried out under the terms of a Project Agreement.
Governments, in their capacity as owner/developers of construction projects, have used project agreements to secure training and employment opportunities for groups that might otherwise not have access to skilled construction work. For example, the Project Agreement governing the construction of the Vancouver Island Highway provided for explicit employment equity hiring focused on women and members of First Nations . [ 1 ] | https://en.wikipedia.org/wiki/Project_agreement_(Canada) |
A project anatomy (also integration anatomy or organic integration plan ) is a tool for integration planning that visualizes dependencies between work items in development projects. It is mainly used in incremental development and Integration Driven Development projects.
The project anatomy has evolved from the system anatomy, and in its purest form the work items (called work packages ) reflect the development of system capabilities. Often a more pragmatic approach is taken, though, where work packages may contain other items with important dependencies as well, e.g. hardware deliveries for embedded systems.
Project anatomies evolved from system anatomies at Ericsson beginning in the late 1990s. Both the terminology and the methodology have differed between organizations, and the difference between "system anatomy", "project anatomy", "delta anatomy" and "integration anatomy" is sometimes diffuse or non-existent. In 2004 FindOut Technologies presented a software tool (Paipe) for managing anatomies with additional properties. The company has since worked to establish the term Project Anatomy.
As an example, consider a project anatomy for the work packages needed to develop a simple issue management system, sketched below.
Work packages with many dependencies are called spiders and indicate a risk. The risk may be managed by splitting the work package or by moving its dependants to later shipments (increments).
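A minimal sketch of the spider idea, reusing the issue-management example above: model the anatomy as a mapping from each work package to its direct dependants, and flag any package whose dependant count reaches a threshold. The package names and the threshold of four are illustrative assumptions.

```python
# Anatomy as a dependency graph: work package -> packages that depend on it.
# (Hypothetical work packages for the issue management system example.)
anatomy = {
    "persistence layer": ["create issue", "search issues", "user accounts"],
    "create issue": ["edit issue", "close issue", "assign issue",
                     "comment on issue"],
    "user accounts": ["assign issue"],
    "search issues": [],
    "edit issue": [],
    "close issue": [],
    "assign issue": [],
    "comment on issue": [],
}

def find_spiders(graph: dict, threshold: int = 4) -> list:
    """Return work packages with at least `threshold` direct dependants."""
    return [wp for wp, deps in graph.items() if len(deps) >= threshold]

print(find_spiders(anatomy))  # -> ['create issue']
```

Splitting "create issue" or deferring some of its dependants to a later increment would remove it from the spider list.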
In such a visualization, colors indicate the current status of work packages: green means "on track", yellow means "at risk", and red means "off track"; blue work packages are done. | https://en.wikipedia.org/wiki/Project_anatomy |
Project commissioning is the process of ensuring that all systems and components of a building or industrial plant are designed, installed, tested, operated, and maintained according to the owner's or final client's operational requirements. A commissioning process may be applied not only to new projects but also to existing units and systems subject to expansion, renovation or revamping . [ 1 ] [ 2 ]
In practice, the commissioning process is the integrated application of a set of engineering techniques and procedures to check, inspect and test every operational component of the project: from individual functions (such as instruments and equipment) up to complex amalgamations (such as modules , subsystems and systems ).
Commissioning activities, in the broader sense, are applicable to all phases of the project, from the basic and detailed design , procurement , construction and assembly until the final handover of the unit to the owner, sometimes including an assisted operation phase.
Similarly, refinery commissioning is defined as "The sequential, planned, and documented process of verifying, testing, and validating the performance of each refinery unit, system, and equipment to ensure they operate safely, efficiently, and within design specifications, culminating in the successful startup and steady-state operation of the entire refinery".
The main objective of commissioning is to effect the safe and orderly handover of the unit from the constructor to the owner, guaranteeing its operability in terms of performance , reliability , safety and information traceability . Additionally, when executed in a planned and effective way, commissioning normally represents an essential factor for the fulfillment of schedule, costs, safety and quality requirements of the project. [ 3 ]
For complex projects, the large volume and complexity of commissioning data, together with the need to guarantee adequate information traceability, normally leads to the use of powerful IT tools, known as commissioning management systems , to allow effective planning and monitoring of the commissioning activities.
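As a sketch of the kind of record such a system keeps, the snippet below rolls component test counts up into a subsystem readiness figure. The schema is a deliberately simplified assumption; real commissioning management systems carry far richer data for traceability.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    tag: str              # instrument or equipment tag (hypothetical)
    tests_passed: int = 0
    tests_total: int = 0

@dataclass
class Subsystem:
    name: str
    components: list = field(default_factory=list)

    def readiness(self) -> float:
        """Fraction of all component tests passed, for progress monitoring."""
        total = sum(c.tests_total for c in self.components)
        passed = sum(c.tests_passed for c in self.components)
        return passed / total if total else 0.0

cooling = Subsystem("cooling water", [
    Component("P-101", tests_passed=12, tests_total=12),
    Component("FT-204", tests_passed=5, tests_total=8),
])
print(f"{cooling.readiness():.0%} ready")  # -> 85% ready
```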
There is currently no formal education or university degree which addresses the training or certification of a Project Commissioning Engineer. Various short and online training courses are available, but they are designed for qualified engineers.
Large civil and industrial projects for which Commissioning as an independent discipline is as important as traditional engineering disciplines, i.e. civil , naval , chemical , mechanical , electrical , electronic , instrumentation , automation , or telecom engineering, include chemical and petrochemical plants , oil and gas platforms and pipelines , metallurgical plants, paper and cellulose plants, coal handling plants, thermoelectric and hydroelectric plants, buildings, bridges, highways, and railroads. | https://en.wikipedia.org/wiki/Project_commissioning |
Project Cost Management ( PCM ) is the dimension of project management which aims to ensure that a project is completed within its approved budget. [ 1 ] [ 2 ] It encompasses several specific project management activities including estimating , job controls, field data collection , scheduling , accounting and design, and uses technology to measure cost and productivity through the full life-cycle of enterprise level projects. [ citation needed ]
According to the Project Management Body of Knowledge (PMBOK), PCM's primary concern is the cost of the resources needed to complete the project. However, PMBOK also notes that PCM should also consider the impact of project management decisions on customers' wider or life-cycle costs such as the use of the building or IT system generated by the project. [ 1 ] : 73
Beginning with estimating, a vital tool in PCM, actual historical data is used to accurately plan all aspects of the project. As the project continues, job control combines data from the estimate with information reported from the field to measure cost and production in the project. From project initiation to completion, project cost management aims to simplify the project and reduce its cost. [ 3 ]
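The estimate-versus-field comparison described above is often formalized as earned-value-style variance tracking. The snippet below illustrates one such check; the figures are invented, and the use of cost variance and CPI here is one common approach rather than a prescription from the source.

```python
# Job-control check: compare the budgeted cost of the work actually
# performed against the cost reported from the field (figures invented).
budget_at_completion = 500_000.0  # total cost from the estimate
percent_complete = 0.40           # progress measured from field data
actual_cost_to_date = 230_000.0   # cost reported from the field

earned_value = budget_at_completion * percent_complete  # 200,000
cost_variance = earned_value - actual_cost_to_date      # -30,000 (overrun)
cpi = earned_value / actual_cost_to_date                # ~0.87

print(f"CV = {cost_variance:+,.0f}  CPI = {cpi:.2f}")
# A CPI below 1 means less value is produced per dollar spent than the
# estimate assumed -- a cue to investigate productivity in the field.
```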
This technology-driven approach has been a significant challenger to the mainstream estimating software and project management industries. [ 4 ] [ 5 ] | https://en.wikipedia.org/wiki/Project_cost_management |
Project delivery methods define the characteristics of how a construction project is designed and built, and the responsibilities of the parties involved in the construction (owner, designer and contractor). [ 1 ] They are used by a construction manager working as an agent to the owner, or by the owner itself, to carry out a construction project while mitigating the risks to the scope of work, time , budget , quality and safety of the project. These risks range from cost overruns and time delays to conflict among the various parties. [ 2 ]
Though design–bid–build (DBB) is now used for most private projects and the majority of public projects, it has not historically been the predominant delivery method of choice. The master builders of centuries past acted both as designers and constructors for both public and private clients. In the United States , Zane's Post Road in Ohio and the IRT in New York City were both originally developed under more integrated delivery methods, as were most infrastructure projects until 1933. Integrated Project Delivery offers a new delivery method that removes considerable waste from the construction process and improves quality, marking a return to the more collaborative methods of the past.
In an effort to assist industry professionals with the selection of appropriate project delivery systems, construction management researchers have prepared a Procurement Method and Contract Selection Model, which can be used for high level decision making for construction projects on a case-by-case basis. [ 3 ]
Common project delivery methods are listed at the end of this section. Two key variables account for the bulk of the variation between delivery methods: the degree of integration among the various service providers, and whether the project is financed directly by the owner or indirectly from the facility's revenue.
When the various service providers are segmented, the owner has the most control, but this control is costly and does not give each provider an incentive to optimize its contribution for the next service. When there is tight integration amongst providers, each step of the delivery is undertaken with future activities in mind, resulting in cost savings, but limiting the owner's influence throughout the project.
The owner's direct financing of a project simply means that the owner directly pays the providers for their services. In the case of a facility with a consistent revenue stream, indirect financing becomes possible: rather than be paid by the owner, the providers are paid with the revenue collected from the facility's operation.
Indirect financing risks being mistaken for privatization . Though the providers do have a concession to operate and collect revenue from a facility that they built and financed, the structure itself remains the property of the owner (usually a government agency in the case of public infrastructure).
These common delivery methods include build–finance (BF), build–lease–transfer (BLT), build–operate–transfer (BOT), build–own–operate (BOO), build–own–operate–transfer (BOOT), design–build (DB), design–bid–build (DBB), design–build–finance (DBF), design–build–finance–maintain (DBFM), design–build–finance–operate (DBFO), and design–build–finance–maintain–operate (DBFMO). | https://en.wikipedia.org/wiki/Project_delivery_method |
Project engineering includes all parts of the design of manufacturing or processing facilities, either new or modifications to and expansions of existing facilities. A "project" consists of a coordinated series of activities or tasks performed by engineers, designers, drafters and others from one or more engineering disciplines or departments. Project tasks consist of such things as performing calculations, writing specifications, preparing bids, reviewing equipment proposals and evaluating or selecting equipment and preparing various lists, such as equipment and materials lists, and creating drawings such as electrical, piping and instrumentation diagrams , physical layouts and other drawings used in design and construction. A small project may be under the direction of a project engineer. Large projects are typically under the direction of a project manager or management team. Some facilities have in house staff to handle small projects, while some major companies have a department that does internal project engineering. Large projects are typically contracted out to engineering companies. Staffing at engineering companies varies according to the work load and duration of employment may only last until an individual's tasks are completed.
The role of the project engineer can often be described as that of a liaison between the project manager and the technical disciplines involved in a project. The distribution of "liaising" and performing tasks within the technical disciplines can vary wildly from project to project; this often depends on the type of product, its maturity, and the size of the company, to name a few. It is important for a project engineer to understand that balance. The project engineer should be knowledgeable enough to be able to speak intelligently within the various disciplines, and not purely be a liaison. The project engineer is also often the primary technical point of contact for the consumer.
A project engineer's responsibilities include schedule preparation, pre-planning and resource forecasting for engineering and other technical activities relating to the project, and project delivery management. They may also be in charge of performance management of vendors . They assure the accuracy of financial forecasts, which tie in to project schedules. They ensure projects are completed according to project plans. Project engineers manage project team resources and training and develop extensive project management experience and expertise.
When used, an engineering company is generally contracted to conduct a study (capital cost estimate or technical assessment) or to design a project. Projects are designed to achieve some specific objective, ranging in scope from simple modifications to new factories or expansions costing hundreds of millions or even billions of dollars. The client usually provides the engineering company with a scoping document listing the details of the objective in terms of such things as production rate and product specifications, general to specific information about processes and equipment to be used, and the expected deliverables, such as calculations, drawings, lists, specifications, schedules, etc. The client is typically involved in the entire design process and makes decisions throughout, including the technology, type of equipment to use, bid evaluation and supplier selection, the layout of equipment and operational considerations.

Depending on the project, the engineering company may perform material and energy balances to size equipment and to quantify inputs of materials and energy (steam, electric power, fuel). This information is used to write specifications for the equipment. The equipment specifications are sent out for bids. The client, the engineering company or both select the equipment. The equipment suppliers provide drawings of the equipment, which are used by the engineering company's mechanical engineers and drafters to make general arrangement drawings, which show how the pieces of equipment are located in relation to other equipment. Layout drawings show specific information about the equipment, electric motors powering the equipment and such things as auxiliary equipment (pumps, fans, air compressors), piping and buildings. The engineering company maintains an equipment list with major equipment, auxiliary equipment, motors, etc.

Electrical engineers are involved with power supply to motors and equipment. Process engineers perform material and energy balances and design the piping and instrumentation diagrams to show how equipment is supplied with process fluids, water, air, gases, etc. and the type of control loops used. The instrumentation and controls engineers specify the instrumentation and controls and handle any computer controls and control rooms. Civil and structural engineers deal with site layout and engineering, building design and structural concerns like foundations, pads, structures, supports and bracing for equipment. Environmental engineers deal with any air emissions and treatment of liquid effluent.
The various fields and topics that project engineers are involved with include:
Project engineers are often project managers with qualifications in engineering or construction management . Other titles include field engineer, construction engineer , or construction project engineer. In smaller projects, this person may also be responsible for contracts and will be called an assistant project manager. A similar role is undertaken by a client's engineer or owner's engineer, but by inference, these often act more in the interests of the commissioning company.
Project engineers do not necessarily do design work, but instead represent the contractor or client out in the field, help tradespeople interpret the job's designs, ensure the job is constructed according to the project plans, and assist project controls , including budgeting, scheduling, and planning. In some cases a project engineer is responsible for assisting the assigned project manager with regard to design and a project and with the execution of one or more simultaneous projects in accordance with a valid, executed contract, per company policies and procedures and work instructions for customized and standardized plants.
Typical responsibilities may include: daily operations of field work activities and organization of subcontractors ; coordination of the implementation of a project, ensuring it is being built correctly; project schedules and forecasts; interpretation of drawings for tradesmen ; review of engineering deliverables; redlining drawings; regular project status reports; budget monitoring and trend tracking; bill of materials creation and maintenance; effective communications between engineering, technical, construction, and project controls groups; and assistance to the project manager. | https://en.wikipedia.org/wiki/Project_engineering |
A project manager is a professional in the field of project management . Project managers have the responsibility of the planning , procurement and execution of a project , in any undertaking that has a defined scope, a defined start and a defined finish, regardless of industry. As the project representative, the project manager is the first point of contact for any issues or discrepancies arising from the heads of various departments in an organization, before the problem escalates to higher authorities.
Project management is the responsibility of a project manager. This individual seldom participates directly in the activities that produce the result, but rather strives to maintain the progress, mutual interaction and tasks of various parties in such a way that reduces the risk of overall failure, maximizes benefits, and minimizes costs.
A project manager is the person responsible for accomplishing the project objectives. Key project management responsibilities include
A project manager is a client representative and has to determine and implement the exact needs of the client, based on knowledge of the organization they are representing. Expertise is required in the domain in which the project manager is working, in order to handle all aspects of the project efficiently. The ability to adapt to the various internal procedures of the client, and to form close links with the nominated representatives, is essential in ensuring that the key issues of cost, time, quality and, above all, client satisfaction can be realized.
Important areas of project management may include: [ 1 ]
George Roth and Hilary Bradbury identify a desire for more non-authoritarian leadership in project work. [ 2 ]
Some tools, knowledge and techniques for managing projects may be unique to project management - for example: work-breakdown structures , critical-path analysis and earned-value management . Understanding and applying the tools and techniques which are generally recognized [ citation needed ] as good practices is not sufficient on its own for effective project management. Effective project management requires that the project manager understand and use the knowledge and skills from at least four areas of expertise. [ citation needed ] Examples are the PMBOK ; application-area knowledge, such as the standards and regulations set forth by ISO for project management; general management skills; and project environment management. [ 3 ] There are many options for project-management software to assist project managers and any associated teams in executing projects.
When recruiting and building an effective team, the manager must consider not only the technical skills of each team member, but also the critical roles of, and the chemistry between, workers. A project team has mainly three separate components: the project manager, the core team and the contracted team.
Most of the project-management issues that influence a project arise from risk , which in turn arises from uncertainty. [ citation needed ] Successful project managers focus on this as their main concern and attempt to reduce risk significantly, [ citation needed ] often by adhering to a policy of open communication , ensuring that project participants can voice their opinions and concerns. [ citation needed ]
The project manager is accountable for ensuring that everyone on the team knows and executes his or her role, feels empowered and supported in the role, knows the roles of the other team members and acts upon the belief that those roles will be performed. [ 4 ] The specific responsibilities of the project manager may vary depending on the industry, the company size, the company maturity, and the company culture . However, there are some responsibilities that are common to all project managers, noting: [ 5 ]
Architectural project managers are project managers in the field of architecture . They have many of the same skills as their counterparts in the construction industry , and will often work closely with the construction project manager in the office of the general contractor (GC), while at the same time coordinating the work of the design team and the numerous consultants who contribute to a construction project and managing communication with the client. The issues of budget, scheduling, and quality control are the responsibility of the project manager in an architect's office.
Construction managers are primarily involved in the areas of design, bidding, contract management and construction of a project, as well as the in-between phases and post-construction.
Until recently, the American construction industry lacked any level of standardization, with individual States determining the eligibility requirements within their jurisdiction. However, several trade associations based in the United States have made strides in creating a commonly accepted set of qualifications and tests to determine a project manager's competency.
The profession has recently grown to accommodate several dozen construction management Bachelor of Science programs.
Many universities have also begun offering a master's degree in project management. These programs generally are tailored to working professionals who have project management experience or project-related experience; they provide a more intense and in-depth education surrounding the knowledge areas within the project management body of knowledge.
The United States Navy construction battalions, nicknamed the SeaBees , put their command through strenuous training and certifications at every level. Becoming a chief petty officer in the SeaBees is considered equivalent to a BS in construction management, with the added benefit of several years of experience to their credit. See ACE accreditation.
In engineering, project management involves seeing a product or device through the developing and manufacturing stages, working with various professionals in different fields of engineering and manufacturing to go from concept to finished product. Optionally, this can include different versions and standards as required by different countries, requiring knowledge of laws, requirements and infrastructure.
In the insurance industry project managers often oversee and manage the restoration of a client's home/office after a fire, flood, or other disaster, covering the fields from electronics through to the demolition and construction contractors.
IT project management generally falls into two categories, namely software (development) project manager and infrastructure project manager.
A software project manager has many of the same skills as their counterparts in other industries. Beyond the skills normally associated with traditional project management in industries such as construction and manufacturing, a software project manager will typically have an extensive background in software development . Many software project managers hold a degree in computer science , information technology , management of information systems or another related field.
In traditional project management a heavyweight, predictive methodology such as the waterfall model is often employed, but software project managers must also be skilled in more lightweight, adaptive methodologies such as DSDM , Scrum and XP . These project management methodologies are based on the uncertainty of developing a new software system and advocate smaller, incremental development cycles. These incremental or iterative cycles are time-boxed (constrained to a known period of time, typically from one to four weeks) and produce a working subset of the entire system deliverable at the end of each iteration. The increasing adoption of lightweight approaches is due largely to the fact that software requirements are very susceptible to change, and it is extremely difficult to identify all the potential requirements in a single project phase before software development commences.
The software project manager is also expected to be familiar with the software development life cycle (SDLC). This may require in-depth knowledge of requirements solicitation, application development, logical and physical database design and networking. This knowledge is typically the result of the aforementioned education and experience. There is not a widely accepted certification for software project managers, but many will hold the Project Management Professional (PMP) designation offered by the Project Management Institute , PRINCE2 or an advanced degree in project management, such as an MSPM or other graduate degree in technology management.
An infrastructure IT PM is concerned with the nuts and bolts of the IT department, including computers, servers, storage, networking, and such aspects of them as backup, business continuity, upgrades, replacement, and growth. Often, a secondary data center will be constructed in a remote location to help protect the business from outages caused by natural disasters or weather. Recently, cyber security has become a significant growth area within IT infrastructure management.
The infrastructure PM usually has an undergraduate degree in engineering or computer science, while a master's degree in project management is required for senior-level positions. Along with the formal education, most senior-level PMs are certified, by the Project Management Institute , as Project Management professionals . PMI also has several additional certification options, but PMP is by far the most popular.
Infrastructure PMs are responsible for managing projects that have budgets from a few thousand dollars up to many millions of dollars. They must understand the business and the business goals of the sponsor and the capabilities of the technology in order to reach the desired goals of the project. The most difficult part of the infrastructure PM's job may be this translation of business needs and wants into technical specifications. Oftentimes, business analysts are engaged to help with this requirement. The team size of a large infrastructure project may run into several hundred engineers and technicians, many of whom have strong personalities and require strong leadership if the project goals are to be met.
Due to the high operational expense of maintaining a large staff of highly skilled IT engineering talent, many organizations outsource their infrastructure implementations and upgrades to third-party companies. Many of these companies have strong project management organizations with the ability not only to manage their clients' projects, but also to generate high-quality revenue at the same time.
Project managers in the field of social science have many of the same skills as their counterparts in the IT industry. For example, project managers for the 2020 United States Census followed program and project management policies, framework, and control processes for all projects established within the program. They managed projects designed as part of the program to produce official statistics , such as projects in systems engineering , questionnaire design , sampling , data collection , and public communications. [ 6 ] Project managers of qualitative research studies must also manage scope, schedule, and cost related to research design, participant recruitment, interviewing, reporting, as well as stakeholder engagement. [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Project_manager |
Projected area is the two-dimensional area measurement of a three-dimensional object obtained by projecting its shape onto an arbitrary plane. This is often used in mechanical engineering and architectural engineering related fields, especially for hardness testing, axial stress , wind pressures, and terminal velocity .
The geometrical definition of a projected area is: "the rectilinear parallel projection of a surface of any shape onto a plane". This translates into the equation: A projected = ∫ A cos β d A {\displaystyle A_{\text{projected}}=\int _{A}\cos {\beta }\,dA} where A is the original area, and β {\displaystyle \beta } is the angle between the normal to the local plane and the line of sight to the surface A . For basic shapes, this integral yields simple closed-form results. [ 1 ]
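To make the integral concrete, the short Python sketch below (an illustration added here, not drawn from the cited reference; the function name and grid size are arbitrary choices) numerically evaluates ∫ cos β dA over the visible hemisphere of a sphere and recovers the familiar projected area πr².

```python
import numpy as np

def projected_area_sphere(r=1.0, n=2000):
    """Riemann-sum evaluation of the projected-area integral for a sphere.

    Viewing the sphere along the z-axis, the angle beta between the outward
    surface normal and the line of sight equals the polar angle theta, and
    the surface element is dA = r**2 * sin(theta) * dtheta * dphi.  Only the
    visible hemisphere (0 <= theta <= pi/2) contributes to the projection.
    """
    dtheta = (np.pi / 2.0) / n
    theta = (np.arange(n) + 0.5) * dtheta          # midpoints of the theta grid
    # Integrate cos(beta) dA; the phi integral contributes a factor of 2*pi.
    return 2.0 * np.pi * r**2 * np.sum(np.cos(theta) * np.sin(theta)) * dtheta

r = 2.0
print(projected_area_sphere(r))   # approximately 12.566
print(np.pi * r**2)               # exact projected area of a sphere: pi * r^2
```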
| https://en.wikipedia.org/wiki/Projected_area |
Projected dynamical systems is a mathematical theory investigating the behaviour of dynamical systems where solutions are restricted to a constraint set. The discipline shares connections to and applications with both the static world of optimization and equilibrium problems and the dynamical world of ordinary differential equations . A projected dynamical system is given by the flow to the projected differential equation
d x ( t ) d t = Π K ( x ( t ) , − F ( x ( t ) ) ) {\displaystyle {\frac {dx(t)}{dt}}=\Pi _{K}(x(t),-F(x(t)))}
where K is our constraint set. Differential equations of this form are notable for having a discontinuous vector field.
Projected dynamical systems have evolved out of the desire to dynamically model the behaviour of nonstatic solutions in equilibrium problems over some parameter, typically taken to be time. These dynamics differ from those of ordinary differential equations in that solutions are still restricted to whatever constraint set the underlying equilibrium problem was working on, e.g. nonnegativity of investments in financial modeling, convex polyhedral sets in operations research , etc. One particularly important class of equilibrium problems which has aided in the rise of projected dynamical systems has been that of variational inequalities .
The formalization of projected dynamical systems began in the 1990s in Section 5.3 of the paper of Dupuis and Ishii. However, similar concepts can be found in the mathematical literature which predate this, especially in connection with variational inequalities and differential inclusions.
Any solution to our projected differential equation must remain inside of our constraint set K for all time. This desired result is achieved through the use of projection operators and two particular important classes of convex cones . Here we take K to be a closed , convex subset of some Hilbert space X .
The normal cone to the set K at the point x in K is given by
N K ( x ) = { p ∈ X : ⟨ p , x − x ∗ ⟩ ≥ 0 for all x ∗ ∈ K } . {\displaystyle N_{K}(x)=\{p\in X:\langle p,x-x^{*}\rangle \geq 0{\text{ for all }}x^{*}\in K\}.}
The tangent cone (or contingent cone ) to the set K at the point x is given by
T K ( x ) = ⋃ h > 0 K − x h ¯ . {\displaystyle T_{K}(x)={\overline {\bigcup _{h>0}{\frac {K-x}{h}}}}.}
The projection operator (or closest element mapping ) of a point x in X to K is given by the point P K ( x ) {\displaystyle P_{K}(x)} in K such that
‖ x − P K ( x ) ‖ ≤ ‖ x − y ‖ {\displaystyle \|x-P_{K}(x)\|\leq \|x-y\|}
for every y in K .
The vector projection operator of a vector v in X at a point x in K is given by
Π K ( x , v ) = lim δ → 0 + P K ( x + δ v ) − x δ , {\displaystyle \Pi _{K}(x,v)=\lim _{\delta \to 0^{+}}{\frac {P_{K}(x+\delta v)-x}{\delta }},}
which is just the one-sided Gateaux derivative of the projection operator P K {\displaystyle P_{K}} at x , computed in the direction of the vector field.
Given a closed, convex subset K of a Hilbert space X and a vector field -F which takes elements from K into X , the projected differential equation associated with K and -F is defined to be
d x ( t ) d t = Π K ( x ( t ) , − F ( x ( t ) ) ) . {\displaystyle {\frac {dx(t)}{dt}}=\Pi _{K}(x(t),-F(x(t))).}
On the interior of K solutions behave as they would if the system were an unconstrained ordinary differential equation. However, since the vector field is discontinuous along the boundary of the set, projected differential equations belong to the class of discontinuous ordinary differential equations. While this makes much of ordinary differential equation theory inapplicable, it is known that when -F is a Lipschitz continuous vector field, a unique absolutely continuous solution exists through each initial point x(0)=x 0 in K on the interval [ 0 , ∞ ) {\displaystyle [0,\infty )} .
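In practice such constrained trajectories are often approximated numerically. The following Python sketch (a minimal illustration; the box constraint set, the linear vector field, the step size and the function names are assumptions made for the example, not part of the theory above) applies the projected Euler iteration x_{k+1} = P_K(x_k − h F(x_k)).

```python
import numpy as np

def project_onto_box(x, lower, upper):
    """Closest-point projection P_K onto the box K = [lower, upper]^n."""
    return np.clip(x, lower, upper)

def projected_euler(F, x0, lower, upper, h=0.01, steps=5000):
    """Approximate the flow of dx/dt = Pi_K(x, -F(x)) with the projected
    Euler scheme x_{k+1} = P_K(x_k - h * F(x_k)); iterates never leave K."""
    x = project_onto_box(np.asarray(x0, dtype=float), lower, upper)
    for _ in range(steps):
        x = project_onto_box(x - h * F(x), lower, upper)
    return x

# Example: F(x) = A x - b with a nonnegativity constraint, K = [0, inf)^2.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -1.0])
F = lambda x: A @ x - b

print(projected_euler(F, x0=[1.0, 1.0], lower=0.0, upper=np.inf))
# The iterate settles at an equilibrium of the projected system, i.e. a
# solution of the variational inequality <F(x*), y - x*> >= 0 for all y in K.
```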
This differential equation can be alternately characterized by
or
The convention of denoting the vector field -F with a negative sign arises from a particular connection projected dynamical systems shares with variational inequalities. The convention in the literature is to refer to the vector field as positive in the variational inequality, and negative in the corresponding projected dynamical system. | https://en.wikipedia.org/wiki/Projected_dynamical_system |
In mathematics , particularly in functional analysis , a projection-valued measure , or spectral measure , is a function defined on certain subsets of a fixed set and whose values are self-adjoint projections on a fixed Hilbert space . [ 1 ] A projection-valued measure (PVM) is formally similar to a real-valued measure , except that its values are self-adjoint projections rather than real numbers. As in the case of ordinary measures, it is possible to integrate complex-valued functions with respect to a PVM; the result of such an integration is a linear operator on the given Hilbert space.
Projection-valued measures are used to express results in spectral theory , such as the important spectral theorem for self-adjoint operators , in which case the PVM is sometimes referred to as the spectral measure . The Borel functional calculus for self-adjoint operators is constructed using integrals with respect to PVMs. In quantum mechanics , PVMs are the mathematical description of projective measurements . [ clarification needed ] They are generalized by positive operator valued measures (POVMs) in the same sense that a mixed state or density matrix generalizes the notion of a pure state .
Let H {\displaystyle H} denote a separable complex Hilbert space and ( X , M ) {\displaystyle (X,M)} a measurable space consisting of a set X {\displaystyle X} and a Borel σ-algebra M {\displaystyle M} on X {\displaystyle X} . A projection-valued measure π {\displaystyle \pi } is a map from M {\displaystyle M} to the set of bounded self-adjoint operators on H {\displaystyle H} satisfying the following properties: [ 2 ] [ 3 ]
The second and fourth property show that if E 1 {\displaystyle E_{1}} and E 2 {\displaystyle E_{2}} are disjoint, i.e., E 1 ∩ E 2 = ∅ {\displaystyle E_{1}\cap E_{2}=\emptyset } , the images π ( E 1 ) {\displaystyle \pi (E_{1})} and π ( E 2 ) {\displaystyle \pi (E_{2})} are orthogonal to each other.
Let V E = im ( π ( E ) ) {\displaystyle V_{E}=\operatorname {im} (\pi (E))} and its orthogonal complement V E ⊥ = ker ( π ( E ) ) {\displaystyle V_{E}^{\perp }=\ker(\pi (E))} denote the image and kernel , respectively, of π ( E ) {\displaystyle \pi (E)} . If V E {\displaystyle V_{E}} is a closed subspace of H {\displaystyle H} then H {\displaystyle H} can be written as the orthogonal decomposition H = V E ⊕ V E ⊥ {\displaystyle H=V_{E}\oplus V_{E}^{\perp }} and π ( E ) = I E {\displaystyle \pi (E)=I_{E}} is the unique identity operator on V E {\displaystyle V_{E}} satisfying all four properties. [ 4 ] [ 5 ]
For every ξ , η ∈ H {\displaystyle \xi ,\eta \in H} and E ∈ M {\displaystyle E\in M} , the projection-valued measure defines a complex-valued measure on ( X , M ) {\displaystyle (X,M)} given by
μ ξ , η ( E ) := ⟨ π ( E ) ξ , η ⟩ {\displaystyle \mu _{\xi ,\eta }(E):=\langle \pi (E)\xi ,\eta \rangle }
with total variation at most ‖ ξ ‖ ‖ η ‖ {\displaystyle \|\xi \|\|\eta \|} . [ 6 ] It reduces to a real-valued measure when ξ = η {\displaystyle \xi =\eta } , and to a probability measure when, in addition, ξ {\displaystyle \xi } is a unit vector .
Example Let ( X , M , μ ) {\displaystyle (X,M,\mu )} be a σ -finite measure space and, for all E ∈ M {\displaystyle E\in M} , let the operator π ( E ) : L 2 ( X ) → L 2 ( X ) {\displaystyle \pi (E):L^{2}(X)\to L^{2}(X)}
be defined as ψ ↦ 1 E ψ , {\displaystyle \psi \mapsto 1_{E}\psi ,}
i.e., as multiplication by the indicator function 1 E {\displaystyle 1_{E}} on L 2 ( X ) . Then π ( E ) = 1 E {\displaystyle \pi (E)=1_{E}} defines a projection-valued measure. [ 6 ] For example, if X = R {\displaystyle X=\mathbb {R} } , E = ( 0 , 1 ) {\displaystyle E=(0,1)} , and φ , ψ ∈ L 2 ( R ) {\displaystyle \varphi ,\psi \in L^{2}(\mathbb {R} )} there is then the associated complex measure μ φ , ψ {\displaystyle \mu _{\varphi ,\psi }} which takes a measurable function f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } and gives the integral ∫ f d μ φ , ψ = ∫ ( 0 , 1 ) f ( x ) φ ( x ) ψ ( x ) ¯ d x . {\displaystyle \int f\,d\mu _{\varphi ,\psi }=\int _{(0,1)}f(x)\varphi (x){\overline {\psi (x)}}\,dx.}
If π is a projection-valued measure on a measurable space ( X , M ), then the map
1 E ↦ π ( E ) {\displaystyle 1_{E}\mapsto \pi (E)}
extends to a linear map on the vector space of step functions on X . In fact, it is easy to check that this map is a ring homomorphism . This map extends in a canonical way to all bounded complex-valued measurable functions on X , and we have the following.
Theorem — For any bounded Borel function f {\displaystyle f} on X {\displaystyle X} , there exists a unique bounded operator T : H → H {\displaystyle T:H\to H} such that [ 7 ] [ 8 ]
⟨ T ξ , ξ ⟩ = ∫ X f d μ ξ for all ξ ∈ H , {\displaystyle \langle T\xi ,\xi \rangle =\int _{X}f\,d\mu _{\xi }\quad {\text{for all }}\xi \in H,}
where μ ξ {\displaystyle \mu _{\xi }} is a finite Borel measure given by
μ ξ ( E ) := ⟨ π ( E ) ξ , ξ ⟩ . {\displaystyle \mu _{\xi }(E):=\langle \pi (E)\xi ,\xi \rangle .}
Hence, ( X , M , μ ξ ) {\displaystyle (X,M,\mu _{\xi })} is a finite measure space .
The theorem is also correct for unbounded measurable functions f {\displaystyle f} but then T {\displaystyle T} will be an unbounded linear operator on the Hilbert space H {\displaystyle H} .
This allows one to define the Borel functional calculus for such operators and then pass to measurable functions via the Riesz–Markov–Kakutani representation theorem . That is, if g : R → C {\displaystyle g:\mathbb {R} \to \mathbb {C} } is a measurable function, then a unique measure exists such that
Let H {\displaystyle H} be a separable complex Hilbert space , A : H → H {\displaystyle A:H\to H} be a bounded self-adjoint operator and σ ( A ) {\displaystyle \sigma (A)} the spectrum of A {\displaystyle A} . Then the spectral theorem says that there exists a unique projection-valued measure π A {\displaystyle \pi ^{A}} , defined on the Borel subsets E ⊂ σ ( A ) {\displaystyle E\subset \sigma (A)} , such that [ 9 ]
A = ∫ σ ( A ) λ d π A ( λ ) , {\displaystyle A=\int _{\sigma (A)}\lambda \,d\pi ^{A}(\lambda ),}
where the integral extends to an unbounded function λ {\displaystyle \lambda } when the spectrum of A {\displaystyle A} is unbounded. [ 10 ]
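In finite dimensions the spectral theorem can be checked directly. The sketch below (an illustration using NumPy, not drawn from the cited references; the matrix is an arbitrary example) builds the spectral projectors of a Hermitian matrix, which play the role of π^A({λ}) for each eigenvalue λ, and verifies that they are orthogonal projections summing to the identity and that A is recovered as Σ λ π^A({λ}).

```python
import numpy as np

# A small Hermitian matrix standing in for a bounded self-adjoint operator.
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])

eigvals, eigvecs = np.linalg.eigh(A)          # orthonormal eigenbasis

# Spectral projector onto each eigenspace: pi({lambda}) = sum of v v* over
# the eigenvectors v belonging to that eigenvalue.
projectors = {}
for lam, v in zip(eigvals, eigvecs.T):
    projectors[lam] = projectors.get(lam, 0) + np.outer(v, v.conj())

# The projectors are idempotent, sum to the identity, and reconstruct A.
identity = sum(projectors.values())
reconstructed = sum(lam * P for lam, P in projectors.items())
print(np.allclose(identity, np.eye(2)))                           # True
print(np.allclose(reconstructed, A))                              # True
print(all(np.allclose(P @ P, P) for P in projectors.values()))    # True
```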
First we provide a general example of projection-valued measure based on direct integrals . Suppose ( X , M , μ) is a measure space and let { H x } x ∈ X be a μ-measurable family of separable Hilbert spaces. For every E ∈ M , let π ( E ) be the operator of multiplication by 1 E on the Hilbert space
∫ X ⊕ H x d μ ( x ) . {\displaystyle \int _{X}^{\oplus }H_{x}\,d\mu (x).}
Then π is a projection-valued measure on ( X , M ).
Suppose π , ρ are projection-valued measures on ( X , M ) with values in the projections of H , K . π , ρ are unitarily equivalent if and only if there is a unitary operator U : H → K such that
U π ( E ) U ∗ = ρ ( E ) {\displaystyle U\,\pi (E)\,U^{*}=\rho (E)}
for every E ∈ M .
Theorem . If ( X , M ) is a standard Borel space , then for every projection-valued measure π on ( X , M ) taking values in the projections of a separable Hilbert space, there is a Borel measure μ and a μ-measurable family of Hilbert spaces { H x } x ∈ X , such that π is unitarily equivalent to multiplication by 1 E on the Hilbert space
∫ X ⊕ H x d μ ( x ) . {\displaystyle \int _{X}^{\oplus }H_{x}\,d\mu (x).}
The measure class [ clarification needed ] of μ and the measure equivalence class of the multiplicity function x → dim H x completely characterize the projection-valued measure up to unitary equivalence.
A projection-valued measure π is homogeneous of multiplicity n if and only if the multiplicity function has constant value n . Clearly,
Theorem . Any projection-valued measure π taking values in the projections of a separable Hilbert space is an orthogonal direct sum of homogeneous projection-valued measures:
where
and
In quantum mechanics, given a projection-valued measure of a measurable space X {\displaystyle X} to the space of continuous endomorphisms upon a Hilbert space H {\displaystyle H} ,
A common choice for X {\displaystyle X} is the real line, but it may also be
Let E {\displaystyle E} be a measurable subset of X {\displaystyle X} and φ {\displaystyle \varphi } a normalized vector quantum state in H {\displaystyle H} , so that its Hilbert norm is one, ‖ φ ‖ = 1 {\displaystyle \|\varphi \|=1} . The probability that the observable takes its value in E {\displaystyle E} , given the system in state φ {\displaystyle \varphi } , is
⟨ φ , π ( E ) φ ⟩ = ‖ π ( E ) φ ‖ 2 . {\displaystyle \langle \varphi ,\pi (E)\varphi \rangle =\|\pi (E)\varphi \|^{2}.}
We can parse this in two ways. First, for each fixed E {\displaystyle E} , the projection π ( E ) {\displaystyle \pi (E)} is a self-adjoint operator on H {\displaystyle H} whose 1-eigenspace consists of the states φ {\displaystyle \varphi } for which the value of the observable always lies in E {\displaystyle E} , and whose 0-eigenspace consists of the states φ {\displaystyle \varphi } for which the value of the observable never lies in E {\displaystyle E} .
Second, for each fixed normalized vector state φ {\displaystyle \varphi } , the association
E ↦ ⟨ φ , π ( E ) φ ⟩ {\displaystyle E\mapsto \langle \varphi ,\pi (E)\varphi \rangle }
is a probability measure on X {\displaystyle X} making the values of the observable into a random variable.
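The following brief sketch (again illustrative NumPy, with a made-up observable and state) computes the probability measure E ↦ ⟨φ, π(E)φ⟩ for a two-dimensional example, showing how a normalized state turns a projection-valued measure into outcome probabilities.

```python
import numpy as np

# A 2x2 observable with eigenvalues -1 and +1 (an arbitrary example).
sigma_x = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
eigvals, eigvecs = np.linalg.eigh(sigma_x)

# Normalized state vector phi.
phi = np.array([1.0, 0.5])
phi = phi / np.linalg.norm(phi)

# Probability of each outcome: <phi, pi({lambda}) phi> = |<v, phi>|^2.
for lam, v in zip(eigvals, eigvecs.T):
    P = np.outer(v, v.conj())
    prob = np.real(phi.conj() @ P @ phi)
    print(f"P(value = {lam:+.0f}) = {prob:.3f}")

# The probabilities sum to 1 because the projectors sum to the identity.
```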
A measurement that can be performed by a projection-valued measure π {\displaystyle \pi } is called a projective measurement .
If X {\displaystyle X} is the real number line, there exists, associated to π {\displaystyle \pi } , a self-adjoint operator A {\displaystyle A} defined on H {\displaystyle H} by
A = ∫ R λ d π ( λ ) , {\displaystyle A=\int _{\mathbb {R} }\lambda \,d\pi (\lambda ),}
which reduces to
A = ∑ i λ i π ( { λ i } ) {\displaystyle A=\sum _{i}\lambda _{i}\pi (\{\lambda _{i}\})}
if the support of π {\displaystyle \pi } is a discrete subset of X {\displaystyle X} .
The above operator A {\displaystyle A} is called the observable associated with the spectral measure.
The idea of a projection-valued measure is generalized by the positive operator-valued measure (POVM), where the need for the orthogonality implied by projection operators is replaced by the idea of a set of operators that are a non-orthogonal "partition of unity", i.e. a set of positive semi-definite Hermitian operators that sum to the identity. This generalization is motivated by applications to quantum information theory . | https://en.wikipedia.org/wiki/Projection-valued_measure |
Projection was the ultimate goal of Western alchemy . Once the philosopher's stone or powder of projection had been created, the process of projection would be used to transmute a lesser substance into a higher form; often lead into gold .
Typically, the process is described as casting a small portion of the Stone into a molten base metal.
The seventeenth century saw an increase in tales of physical transmutation and projection. These are variously explained as examples of charlatanism, fiction, pseudo-scientific error, or missed metaphor. The following is a typical account of the projection process described by Jan Baptista van Helmont in his De Natura Vitae Eternae . [ 1 ]
I have seen and I have touched the Philosopher’s Stone more than once. The color of it was like saffron in powder, but heavy and shining like pounded glass. I had once given me the fourth of a grain - I call a grain that which takes 600 to make an ounce. I made projection with this fourth part of a grain wrapped in paper upon eight ounces of quicksilver heated in a crucible. The result of the projection was eight ounces, lacking eleven grains, of the most pure gold.
Other reports include:
While it may not account for all claims of metallic transmutation, some alchemists of this time period give accounts of fraudulent projection demonstrations, distinguishing themselves from the projectors. Maier's Examen Fucorum Pseudo-chymicorum and Khunrath's Treuhertzige Warnungs-Vermahnung list tricks used by pseudo-alchemists. Accounts are given of double-bottomed crucibles used to conceal hidden gold during projection demonstrations. [ 3 ]
The concept of projection appears in various fictional works related to alchemy. It is a notable theme in Ben Jonson's The Alchemist , where the following dialogue can be found, commenting on fraudulent applications of projection: [ 4 ]
When do you make projection? Son, be not hasty, I exalt our med'cine, By hanging him in balneo vaporoso, And giving him solution; then congeal him; And then dissolve him; then again congeal him; For look, how oft I iterate the work, So many times I add unto his virtue. As, if at first one ounce convert a hundred, After his second loose, he'll turn a thousand; His third solution, ten; his fourth, a hundred: After his fifth, a thousand thousand ounces Of any imperfect metal, into pure Silver or gold, in all examinations, As good as any of the natural mine. Get you your stuff here against afternoon, Your brass, your pewter, and your andirons. | https://en.wikipedia.org/wiki/Projection_(alchemy) |
In linear algebra and functional analysis , a projection is a linear transformation P {\displaystyle P} from a vector space to itself (an endomorphism ) such that P ∘ P = P {\displaystyle P\circ P=P} . That is, whenever P {\displaystyle P} is applied twice to any vector, it gives the same result as if it were applied once (i.e. P {\displaystyle P} is idempotent ). It leaves its image unchanged. [ 1 ] This definition of "projection" formalizes and generalizes the idea of graphical projection . One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object.
A projection on a vector space V {\displaystyle V} is a linear operator P : V → V {\displaystyle P\colon V\to V} such that P 2 = P {\displaystyle P^{2}=P} .
When V {\displaystyle V} has an inner product and is complete , i.e. when V {\displaystyle V} is a Hilbert space , the concept of orthogonality can be used. A projection P {\displaystyle P} on a Hilbert space V {\displaystyle V} is called an orthogonal projection if it satisfies ⟨ P x , y ⟩ = ⟨ x , P y ⟩ {\displaystyle \langle P\mathbf {x} ,\mathbf {y} \rangle =\langle \mathbf {x} ,P\mathbf {y} \rangle } for all x , y ∈ V {\displaystyle \mathbf {x} ,\mathbf {y} \in V} . A projection on a Hilbert space that is not orthogonal is called an oblique projection .
The eigenvalues of a projection matrix must be 0 or 1.
For example, the function which maps the point ( x , y , z ) {\displaystyle (x,y,z)} in three-dimensional space R 3 {\displaystyle \mathbb {R} ^{3}} to the point ( x , y , 0 ) {\displaystyle (x,y,0)} is an orthogonal projection onto the xy -plane. This function is represented by the matrix P = [ 1 0 0 0 1 0 0 0 0 ] . {\displaystyle P={\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}}.}
The action of this matrix on an arbitrary vector is P [ x y z ] = [ x y 0 ] . {\displaystyle P{\begin{bmatrix}x\\y\\z\end{bmatrix}}={\begin{bmatrix}x\\y\\0\end{bmatrix}}.}
To see that P {\displaystyle P} is indeed a projection, i.e., P = P 2 {\displaystyle P=P^{2}} , we compute P 2 [ x y z ] = P [ x y 0 ] = [ x y 0 ] = P [ x y z ] . {\displaystyle P^{2}{\begin{bmatrix}x\\y\\z\end{bmatrix}}=P{\begin{bmatrix}x\\y\\0\end{bmatrix}}={\begin{bmatrix}x\\y\\0\end{bmatrix}}=P{\begin{bmatrix}x\\y\\z\end{bmatrix}}.}
Observing that P T = P {\displaystyle P^{\mathrm {T} }=P} shows that the projection is an orthogonal projection.
A simple example of a non-orthogonal (oblique) projection is P = [ 0 0 α 1 ] . {\displaystyle P={\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}.}
Via matrix multiplication , one sees that P 2 = [ 0 0 α 1 ] [ 0 0 α 1 ] = [ 0 0 α 1 ] = P . {\displaystyle P^{2}={\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}{\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}={\begin{bmatrix}0&0\\\alpha &1\end{bmatrix}}=P.} showing that P {\displaystyle P} is indeed a projection.
The projection P {\displaystyle P} is orthogonal if and only if α = 0 {\displaystyle \alpha =0} because only then P T = P . {\displaystyle P^{\mathrm {T} }=P.}
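As a quick numerical check of the two examples above, the short NumPy sketch below (illustrative only; the value chosen for α is arbitrary) verifies idempotence for both matrices and shows that only the first one is symmetric, hence orthogonal.

```python
import numpy as np

# Orthogonal projection onto the xy-plane.
P_orth = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0]])

# Oblique projection with alpha != 0.
alpha = 0.7
P_obl = np.array([[0.0, 0.0],
                  [alpha, 1.0]])

for name, P in [("orthogonal", P_orth), ("oblique", P_obl)]:
    idempotent = np.allclose(P @ P, P)   # P^2 = P holds for any projection
    symmetric = np.allclose(P, P.T)      # P = P^T only for the orthogonal one
    print(f"{name}: idempotent={idempotent}, symmetric={symmetric}")
```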
By definition, a projection P {\displaystyle P} is idempotent (i.e. P 2 = P {\displaystyle P^{2}=P} ).
Every projection is an open map onto its image, meaning that it maps each open set in the domain to an open set in the subspace topology of the image . [ citation needed ] That is, for any vector x {\displaystyle \mathbf {x} } and any ball B x {\displaystyle B_{\mathbf {x} }} (with positive radius) centered on x {\displaystyle \mathbf {x} } , there exists a ball B P x {\displaystyle B_{P\mathbf {x} }} (with positive radius) centered on P x {\displaystyle P\mathbf {x} } that is wholly contained in the image P ( B x ) {\displaystyle P(B_{\mathbf {x} })} .
Let W {\displaystyle W} be a finite-dimensional vector space and P {\displaystyle P} be a projection on W {\displaystyle W} . Suppose the subspaces U {\displaystyle U} and V {\displaystyle V} are the image and kernel of P {\displaystyle P} respectively. Then P {\displaystyle P} has the following properties:
The image and kernel of a projection are complementary , as are P {\displaystyle P} and Q = I − P {\displaystyle Q=I-P} . The operator Q {\displaystyle Q} is also a projection as the image and kernel of P {\displaystyle P} become the kernel and image of Q {\displaystyle Q} and vice versa. We say P {\displaystyle P} is a projection along V {\displaystyle V} onto U {\displaystyle U} (kernel/image) and Q {\displaystyle Q} is a projection along U {\displaystyle U} onto V {\displaystyle V} .
In infinite-dimensional vector spaces, the spectrum of a projection is contained in { 0 , 1 } {\displaystyle \{0,1\}} as ( λ I − P ) − 1 = 1 λ I + 1 λ ( λ − 1 ) P . {\displaystyle (\lambda I-P)^{-1}={\frac {1}{\lambda }}I+{\frac {1}{\lambda (\lambda -1)}}P.} Only 0 or 1 can be an eigenvalue of a projection. This implies that an orthogonal projection P {\displaystyle P} is always a positive semi-definite matrix . In general, the corresponding eigenspaces are (respectively) the kernel and range of the projection. Decomposition of a vector space into direct sums is not unique. Therefore, given a subspace V {\displaystyle V} , there may be many projections whose range (or kernel) is V {\displaystyle V} .
If a projection is nontrivial it has minimal polynomial x 2 − x = x ( x − 1 ) {\displaystyle x^{2}-x=x(x-1)} , which factors into distinct linear factors, and thus P {\displaystyle P} is diagonalizable .
The product of projections is not in general a projection, even if they are orthogonal. If two projections commute then their product is a projection, but the converse is false: the product of two non-commuting projections may be a projection.
If two orthogonal projections commute then their product is an orthogonal projection. If the product of two orthogonal projections is an orthogonal projection, then the two orthogonal projections commute (more generally: two self-adjoint endomorphisms commute if and only if their product is self-adjoint).
When the vector space W {\displaystyle W} has an inner product and is complete (is a Hilbert space ) the concept of orthogonality can be used. An orthogonal projection is a projection for which the range U {\displaystyle U} and the kernel V {\displaystyle V} are orthogonal subspaces . Thus, for every x {\displaystyle \mathbf {x} } and y {\displaystyle \mathbf {y} } in W {\displaystyle W} , ⟨ P x , ( y − P y ) ⟩ = ⟨ ( x − P x ) , P y ⟩ = 0 {\displaystyle \langle P\mathbf {x} ,(\mathbf {y} -P\mathbf {y} )\rangle =\langle (\mathbf {x} -P\mathbf {x} ),P\mathbf {y} \rangle =0} . Equivalently: ⟨ x , P y ⟩ = ⟨ P x , P y ⟩ = ⟨ P x , y ⟩ . {\displaystyle \langle \mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,\mathbf {y} \rangle .}
A projection is orthogonal if and only if it is self-adjoint . Using the self-adjoint and idempotent properties of P {\displaystyle P} , for any x {\displaystyle \mathbf {x} } and y {\displaystyle \mathbf {y} } in W {\displaystyle W} we have P x ∈ U {\displaystyle P\mathbf {x} \in U} , y − P y ∈ V {\displaystyle \mathbf {y} -P\mathbf {y} \in V} , and ⟨ P x , y − P y ⟩ = ⟨ x , ( P − P 2 ) y ⟩ = 0 {\displaystyle \langle P\mathbf {x} ,\mathbf {y} -P\mathbf {y} \rangle =\langle \mathbf {x} ,\left(P-P^{2}\right)\mathbf {y} \rangle =0} where ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } is the inner product associated with W {\displaystyle W} . Therefore, P {\displaystyle P} and I − P {\displaystyle I-P} are orthogonal projections. [ 3 ] The other direction, namely that if P {\displaystyle P} is orthogonal then it is self-adjoint, follows from the implication from ⟨ ( x − P x ) , P y ⟩ = ⟨ P x , ( y − P y ) ⟩ = 0 {\displaystyle \langle (\mathbf {x} -P\mathbf {x} ),P\mathbf {y} \rangle =\langle P\mathbf {x} ,(\mathbf {y} -P\mathbf {y} )\rangle =0} to ⟨ x , P y ⟩ = ⟨ P x , P y ⟩ = ⟨ P x , y ⟩ = ⟨ x , P ∗ y ⟩ {\displaystyle \langle \mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,P\mathbf {y} \rangle =\langle P\mathbf {x} ,\mathbf {y} \rangle =\langle \mathbf {x} ,P^{*}\mathbf {y} \rangle } for every x {\displaystyle x} and y {\displaystyle y} in W {\displaystyle W} ; thus P = P ∗ {\displaystyle P=P^{*}} .
The existence of an orthogonal projection onto a closed subspace follows from the Hilbert projection theorem .
An orthogonal projection is a bounded operator . This is because for every v {\displaystyle \mathbf {v} } in the vector space we have, by the Cauchy–Schwarz inequality : ‖ P v ‖ 2 = ⟨ P v , P v ⟩ = ⟨ P v , v ⟩ ≤ ‖ P v ‖ ⋅ ‖ v ‖ {\displaystyle \left\|P\mathbf {v} \right\|^{2}=\langle P\mathbf {v} ,P\mathbf {v} \rangle =\langle P\mathbf {v} ,\mathbf {v} \rangle \leq \left\|P\mathbf {v} \right\|\cdot \left\|\mathbf {v} \right\|} Thus ‖ P v ‖ ≤ ‖ v ‖ {\displaystyle \left\|P\mathbf {v} \right\|\leq \left\|\mathbf {v} \right\|} .
For finite-dimensional complex or real vector spaces, the standard inner product can be substituted for ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } .
A simple case occurs when the orthogonal projection is onto a line. If u {\displaystyle \mathbf {u} } is a unit vector on the line, then the projection is given by the outer product P u = u u T . {\displaystyle P_{\mathbf {u} }=\mathbf {u} \mathbf {u} ^{\mathsf {T}}.} (If u {\displaystyle \mathbf {u} } is complex-valued, the transpose in the above equation is replaced by a Hermitian transpose). This operator leaves u invariant, and it annihilates all vectors orthogonal to u {\displaystyle \mathbf {u} } , proving that it is indeed the orthogonal projection onto the line containing u . [ 4 ] A simple way to see this is to consider an arbitrary vector x {\displaystyle \mathbf {x} } as the sum of a component on the line (i.e. the projected vector we seek) and another perpendicular to it, x = x ∥ + x ⊥ {\displaystyle \mathbf {x} =\mathbf {x} _{\parallel }+\mathbf {x} _{\perp }} . Applying projection, we get P u x = u u T x ∥ + u u T x ⊥ = u ( sgn ( u T x ∥ ) ‖ x ∥ ‖ ) + u ⋅ 0 = x ∥ {\displaystyle P_{\mathbf {u} }\mathbf {x} =\mathbf {u} \mathbf {u} ^{\mathsf {T}}\mathbf {x} _{\parallel }+\mathbf {u} \mathbf {u} ^{\mathsf {T}}\mathbf {x} _{\perp }=\mathbf {u} \left(\operatorname {sgn} \left(\mathbf {u} ^{\mathsf {T}}\mathbf {x} _{\parallel }\right)\left\|\mathbf {x} _{\parallel }\right\|\right)+\mathbf {u} \cdot \mathbf {0} =\mathbf {x} _{\parallel }} by the properties of the dot product of parallel and perpendicular vectors.
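For the rank-one case just described, a few lines of NumPy (purely illustrative, with arbitrary choices of u and x) confirm that P_u = u uᵀ fixes the component along u and annihilates the perpendicular component.

```python
import numpy as np

u = np.array([3.0, 4.0])
u = u / np.linalg.norm(u)          # unit vector along the line

P_u = np.outer(u, u)               # projection onto the line spanned by u

x = np.array([2.0, 1.0])
x_par = P_u @ x                    # component of x along u
x_perp = x - x_par                 # remaining perpendicular component

print(np.allclose(P_u @ P_u, P_u))     # idempotent
print(np.allclose(u @ x_perp, 0.0))    # the residual is orthogonal to u
print(np.allclose(P_u @ x_perp, 0.0))  # vectors orthogonal to u are annihilated
```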
This formula can be generalized to orthogonal projections on a subspace of arbitrary dimension . Let u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} be an orthonormal basis of the subspace U {\displaystyle U} , with the assumption that the integer k ≥ 1 {\displaystyle k\geq 1} , and let A {\displaystyle A} denote the n × k {\displaystyle n\times k} matrix whose columns are u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} , i.e., A = [ u 1 ⋯ u k ] {\displaystyle A={\begin{bmatrix}\mathbf {u} _{1}&\cdots &\mathbf {u} _{k}\end{bmatrix}}} . Then the projection is given by: [ 5 ] P A = A A T {\displaystyle P_{A}=AA^{\mathsf {T}}} which can be rewritten as P A = ∑ i ⟨ u i , ⋅ ⟩ u i . {\displaystyle P_{A}=\sum _{i}\langle \mathbf {u} _{i},\cdot \rangle \mathbf {u} _{i}.}
The matrix A T {\displaystyle A^{\mathsf {T}}} is the partial isometry that vanishes on the orthogonal complement of U {\displaystyle U} , and A {\displaystyle A} is the isometry that embeds U {\displaystyle U} into the underlying vector space. The range of P A {\displaystyle P_{A}} is therefore the final space of A {\displaystyle A} . It is also clear that A A T {\displaystyle AA^{\mathsf {T}}} is the identity operator on U {\displaystyle U} .
The orthonormality condition can also be dropped. If u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} is a (not necessarily orthonormal) basis with k ≥ 1 {\displaystyle k\geq 1} , and A {\displaystyle A} is the matrix with these vectors as columns, then the projection is: [ 6 ] [ 7 ] P A = A ( A T A ) − 1 A T . {\displaystyle P_{A}=A\left(A^{\mathsf {T}}A\right)^{-1}A^{\mathsf {T}}.}
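The formula P_A = A(AᵀA)⁻¹Aᵀ can be exercised numerically. The sketch below (an illustration with arbitrary random data, not prescribed by the text) projects a vector onto the column space of a matrix and confirms that the result agrees with the least-squares fitted value and that the residual is orthogonal to the subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))        # basis vectors as columns, not orthonormal
x = rng.standard_normal(5)

# Orthogonal projection onto range(A) via P = A (A^T A)^{-1} A^T.
P = A @ np.linalg.inv(A.T @ A) @ A.T
x_proj = P @ x

# Same result from least squares: the fitted value of the regression of x on A.
w, *_ = np.linalg.lstsq(A, x, rcond=None)
print(np.allclose(x_proj, A @ w))                   # True
print(np.allclose(P @ P, P), np.allclose(P, P.T))   # idempotent and symmetric
print(np.allclose(A.T @ (x - x_proj), 0))           # residual orthogonal to range(A)
```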
The matrix A {\displaystyle A} still embeds U {\displaystyle U} into the underlying vector space but is no longer an isometry in general. The matrix ( A T A ) − 1 {\displaystyle \left(A^{\mathsf {T}}A\right)^{-1}} is a "normalizing factor" that recovers the norm. For example, the rank -1 operator u u T {\displaystyle \mathbf {u} \mathbf {u} ^{\mathsf {T}}} is not a projection if ‖ u ‖ ≠ 1. {\displaystyle \left\|\mathbf {u} \right\|\neq 1.} After dividing by u T u = ‖ u ‖ 2 , {\displaystyle \mathbf {u} ^{\mathsf {T}}\mathbf {u} =\left\|\mathbf {u} \right\|^{2},} we obtain the projection u ( u T u ) − 1 u T {\displaystyle \mathbf {u} \left(\mathbf {u} ^{\mathsf {T}}\mathbf {u} \right)^{-1}\mathbf {u} ^{\mathsf {T}}} onto the subspace spanned by u {\displaystyle u} .
In the general case, we can have an arbitrary positive definite matrix D {\displaystyle D} defining an inner product ⟨ x , y ⟩ D = y † D x {\displaystyle \langle x,y\rangle _{D}=y^{\dagger }Dx} , and the projection P A {\displaystyle P_{A}} is given by P A x = argmin y ∈ range ( A ) ‖ x − y ‖ D 2 {\textstyle P_{A}x=\operatorname {argmin} _{y\in \operatorname {range} (A)}\left\|x-y\right\|_{D}^{2}} . Then P A = A ( A T D A ) − 1 A T D . {\displaystyle P_{A}=A\left(A^{\mathsf {T}}DA\right)^{-1}A^{\mathsf {T}}D.}
When the range space of the projection is generated by a frame (i.e. the number of generators is greater than its dimension), the formula for the projection takes the form: P A = A A + {\displaystyle P_{A}=AA^{+}} . Here A + {\displaystyle A^{+}} stands for the Moore–Penrose pseudoinverse . This is just one of many ways to construct the projection operator.
If [ A B ] {\displaystyle {\begin{bmatrix}A&B\end{bmatrix}}} is a non-singular matrix and A T B = 0 {\displaystyle A^{\mathsf {T}}B=0} (i.e., B {\displaystyle B} is the null space matrix of A {\displaystyle A} ), [ 8 ] the following holds: I = [ A B ] [ A B ] − 1 [ A T B T ] − 1 [ A T B T ] = [ A B ] ( [ A T B T ] [ A B ] ) − 1 [ A T B T ] = [ A B ] [ A T A O O B T B ] − 1 [ A T B T ] = A ( A T A ) − 1 A T + B ( B T B ) − 1 B T {\displaystyle {\begin{aligned}I&={\begin{bmatrix}A&B\end{bmatrix}}{\begin{bmatrix}A&B\end{bmatrix}}^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}\\&={\begin{bmatrix}A&B\end{bmatrix}}\left({\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}{\begin{bmatrix}A&B\end{bmatrix}}\right)^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}\\&={\begin{bmatrix}A&B\end{bmatrix}}{\begin{bmatrix}A^{\mathsf {T}}A&O\\O&B^{\mathsf {T}}B\end{bmatrix}}^{-1}{\begin{bmatrix}A^{\mathsf {T}}\\B^{\mathsf {T}}\end{bmatrix}}\\[4pt]&=A\left(A^{\mathsf {T}}A\right)^{-1}A^{\mathsf {T}}+B\left(B^{\mathsf {T}}B\right)^{-1}B^{\mathsf {T}}\end{aligned}}}
If the orthogonal condition is enhanced to A T W B = A T W T B = 0 {\displaystyle A^{\mathsf {T}}WB=A^{\mathsf {T}}W^{\mathsf {T}}B=0} with W {\displaystyle W} non-singular, the following holds: I = [ A B ] [ ( A T W A ) − 1 A T ( B T W B ) − 1 B T ] W . {\displaystyle I={\begin{bmatrix}A&B\end{bmatrix}}{\begin{bmatrix}\left(A^{\mathsf {T}}WA\right)^{-1}A^{\mathsf {T}}\\\left(B^{\mathsf {T}}WB\right)^{-1}B^{\mathsf {T}}\end{bmatrix}}W.}
All these formulas also hold for complex inner product spaces, provided that the conjugate transpose is used instead of the transpose. Further details on sums of projectors can be found in Banerjee and Roy (2014). [ 9 ] Also see Banerjee (2004) [ 10 ] for application of sums of projectors in basic spherical trigonometry .
The term oblique projections is sometimes used to refer to non-orthogonal projections. These projections are also used to represent spatial figures in two-dimensional drawings (see oblique projection ), though not as frequently as orthogonal projections. Whereas calculating the fitted value of an ordinary least squares regression requires an orthogonal projection, calculating the fitted value of an instrumental variables regression requires an oblique projection.
A projection is defined by its kernel and the basis vectors used to characterize its range (which is a complement of the kernel). When these basis vectors are orthogonal to the kernel, then the projection is an orthogonal projection. When these basis vectors are not orthogonal to the kernel, the projection is an oblique projection, or just a projection.
Let P : V → V {\displaystyle P\colon V\to V} be a linear operator such that P 2 = P {\displaystyle P^{2}=P} and assume that P {\displaystyle P} is not the zero operator. Let the vectors u 1 , … , u k {\displaystyle \mathbf {u} _{1},\ldots ,\mathbf {u} _{k}} form a basis for the range of P {\displaystyle P} , and assemble these vectors in the n × k {\displaystyle n\times k} matrix A {\displaystyle A} . Then k ≥ 1 {\displaystyle k\geq 1} , otherwise k = 0 {\displaystyle k=0} and P {\displaystyle P} is the zero operator. The range and the kernel are complementary spaces, so the kernel has dimension n − k {\displaystyle n-k} . It follows that the orthogonal complement of the kernel has dimension k {\displaystyle k} . Let v 1 , … , v k {\displaystyle \mathbf {v} _{1},\ldots ,\mathbf {v} _{k}} form a basis for the orthogonal complement of the kernel of the projection, and assemble these vectors in the matrix B {\displaystyle B} . Then the projection P {\displaystyle P} (with the condition k ≥ 1 {\displaystyle k\geq 1} ) is given by P = A ( B T A ) − 1 B T . {\displaystyle P=A\left(B^{\mathsf {T}}A\right)^{-1}B^{\mathsf {T}}.}
This expression generalizes the formula for orthogonal projections given above. [ 11 ] [ 12 ] A standard proof of this expression is the following. For any vector x {\displaystyle \mathbf {x} } in the vector space V {\displaystyle V} , we can decompose x = x 1 + x 2 {\displaystyle \mathbf {x} =\mathbf {x} _{1}+\mathbf {x} _{2}} , where vector x 1 = P ( x ) {\displaystyle \mathbf {x} _{1}=P(\mathbf {x} )} is in the image of P {\displaystyle P} , and vector x 2 = x − P ( x ) . {\displaystyle \mathbf {x} _{2}=\mathbf {x} -P(\mathbf {x} ).} So P ( x 2 ) = P ( x ) − P 2 ( x ) = 0 {\displaystyle P(\mathbf {x} _{2})=P(\mathbf {x} )-P^{2}(\mathbf {x} )=\mathbf {0} } , and then x 2 {\displaystyle \mathbf {x} _{2}} is in the kernel of P {\displaystyle P} , which is the null space of A . {\displaystyle A.} In other words, the vector x 1 {\displaystyle \mathbf {x} _{1}} is in the column space of A , {\displaystyle A,} so x 1 = A w {\displaystyle \mathbf {x} _{1}=A\mathbf {w} } for some k {\displaystyle k} dimension vector w {\displaystyle \mathbf {w} } and the vector x 2 {\displaystyle \mathbf {x} _{2}} satisfies B T x 2 = 0 {\displaystyle B^{\mathsf {T}}\mathbf {x} _{2}=\mathbf {0} } by the construction of B {\displaystyle B} . Put these conditions together, and we find a vector w {\displaystyle \mathbf {w} } so that B T ( x − A w ) = 0 {\displaystyle B^{\mathsf {T}}(\mathbf {x} -A\mathbf {w} )=\mathbf {0} } . Since matrices A {\displaystyle A} and B {\displaystyle B} are of full rank k {\displaystyle k} by their construction, the k × k {\displaystyle k\times k} -matrix B T A {\displaystyle B^{\mathsf {T}}A} is invertible. So the equation B T ( x − A w ) = 0 {\displaystyle B^{\mathsf {T}}(\mathbf {x} -A\mathbf {w} )=\mathbf {0} } gives the vector w = ( B T A ) − 1 B T x . {\displaystyle \mathbf {w} =(B^{\mathsf {T}}A)^{-1}B^{\mathsf {T}}\mathbf {x} .} In this way, P x = x 1 = A w = A ( B T A ) − 1 B T x {\displaystyle P\mathbf {x} =\mathbf {x} _{1}=A\mathbf {w} =A(B^{\mathsf {T}}A)^{-1}B^{\mathsf {T}}\mathbf {x} } for any vector x ∈ V {\displaystyle \mathbf {x} \in V} and hence P = A ( B T A ) − 1 B T {\displaystyle P=A(B^{\mathsf {T}}A)^{-1}B^{\mathsf {T}}} .
In the case that P {\displaystyle P} is an orthogonal projection, we can take A = B {\displaystyle A=B} , and it follows that P = A ( A T A ) − 1 A T {\displaystyle P=A\left(A^{\mathsf {T}}A\right)^{-1}A^{\mathsf {T}}} . By using this formula, one can easily check that P = P T {\displaystyle P=P^{\mathsf {T}}} . In general, if the vector space is over complex number field, one then uses the Hermitian transpose A ∗ {\displaystyle A^{*}} and has the formula P = A ( A ∗ A ) − 1 A ∗ {\displaystyle P=A\left(A^{*}A\right)^{-1}A^{*}} . Recall that one can express the Moore–Penrose inverse of the matrix A {\displaystyle A} by A + = ( A ∗ A ) − 1 A ∗ {\displaystyle A^{+}=(A^{*}A)^{-1}A^{*}} since A {\displaystyle A} has full column rank, so P = A A + {\displaystyle P=AA^{+}} .
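A corresponding numerical sketch for the oblique case (again with arbitrary illustrative matrices) builds P = A(BᵀA)⁻¹Bᵀ from a basis A of the range and a basis B of the orthogonal complement of the kernel, and checks that P is idempotent, fixes the range of A, and is generally not symmetric.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 2
A = rng.standard_normal((n, k))   # columns span the range of the projection
B = rng.standard_normal((n, k))   # columns span the orthogonal complement of the kernel

# Oblique projection onto range(A) along ker(P) = null(B^T).
P = A @ np.linalg.inv(B.T @ A) @ B.T

print(np.allclose(P @ P, P))      # idempotent: it is a projection
print(np.allclose(P @ A, A))      # vectors in range(A) are left unchanged
print(np.allclose(P, P.T))        # generally False: the projection is oblique
```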
I − P {\displaystyle I-P} is also an oblique projection. The singular values of P {\displaystyle P} and I − P {\displaystyle I-P} can be computed by an orthonormal basis of A {\displaystyle A} . Let Q A {\displaystyle Q_{A}} be an orthonormal basis of A {\displaystyle A} and let Q A ⊥ {\displaystyle Q_{A}^{\perp }} be the orthogonal complement of Q A {\displaystyle Q_{A}} . Denote the singular values of the matrix Q A T A ( B T A ) − 1 B T Q A ⊥ {\displaystyle Q_{A}^{T}A(B^{T}A)^{-1}B^{T}Q_{A}^{\perp }} by the positive values γ 1 ≥ γ 2 ≥ … ≥ γ k {\displaystyle \gamma _{1}\geq \gamma _{2}\geq \ldots \geq \gamma _{k}} . With this, the singular values for P {\displaystyle P} are: [ 13 ] σ i = { 1 + γ i 2 1 ≤ i ≤ k 0 otherwise {\displaystyle \sigma _{i}={\begin{cases}{\sqrt {1+\gamma _{i}^{2}}}&1\leq i\leq k\\0&{\text{otherwise}}\end{cases}}} and the singular values for I − P {\displaystyle I-P} are σ i = { 1 + γ i 2 1 ≤ i ≤ k 1 k + 1 ≤ i ≤ n − k 0 otherwise {\displaystyle \sigma _{i}={\begin{cases}{\sqrt {1+\gamma _{i}^{2}}}&1\leq i\leq k\\1&k+1\leq i\leq n-k\\0&{\text{otherwise}}\end{cases}}} This implies that the largest singular values of P {\displaystyle P} and I − P {\displaystyle I-P} are equal, and thus that the matrix norm of the oblique projections are the same. However, the condition number satisfies the relation κ ( I − P ) = σ 1 1 ≥ σ 1 σ k = κ ( P ) {\displaystyle \kappa (I-P)={\frac {\sigma _{1}}{1}}\geq {\frac {\sigma _{1}}{\sigma _{k}}}=\kappa (P)} , and is therefore not necessarily equal.
Let V {\displaystyle V} be a subspace (for example, a plane) spanned by mutually orthogonal vectors u 1 , u 2 , … , u p {\displaystyle \mathbf {u} _{1},\mathbf {u} _{2},\dots ,\mathbf {u} _{p}} , and let y {\displaystyle \mathbf {y} } be a vector. One can define the projection of y {\displaystyle \mathbf {y} } onto V {\displaystyle V} as proj V y = y ⋅ u i u i ⋅ u i u i {\displaystyle \operatorname {proj} _{V}\mathbf {y} ={\frac {\mathbf {y} \cdot \mathbf {u} _{i}}{\mathbf {u} _{i}\cdot \mathbf {u} _{i}}}\mathbf {u} _{i}} where the repeated index i {\displaystyle i} is summed over ( Einstein summation notation ). The vector y {\displaystyle \mathbf {y} } can then be written as an orthogonal sum y = proj V y + z {\displaystyle \mathbf {y} =\operatorname {proj} _{V}\mathbf {y} +\mathbf {z} } . The projection proj V y {\displaystyle \operatorname {proj} _{V}\mathbf {y} } is sometimes denoted y ^ {\displaystyle {\hat {\mathbf {y} }}} . A theorem in linear algebra states that the vector z {\displaystyle \mathbf {z} } is orthogonal to V {\displaystyle V} and that its norm is the smallest distance (the orthogonal distance ) from y {\displaystyle \mathbf {y} } to V {\displaystyle V} ; this construction is commonly used in areas such as machine learning .
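A minimal sketch of this construction (assuming NumPy; the vectors below are arbitrary illustrative choices in R³ with a two-dimensional V):

```python
import numpy as np

u1 = np.array([1., 1., 0.])     # mutually orthogonal spanning vectors of V
u2 = np.array([1., -1., 0.])
y  = np.array([2., 3., 4.])

# proj_V y = sum_i (y . u_i) / (u_i . u_i) * u_i
proj = sum((y @ u) / (u @ u) * u for u in (u1, u2))
z = y - proj                    # the orthogonal residual

print(proj)                                            # [2. 3. 0.]
print(np.isclose(z @ u1, 0), np.isclose(z @ u2, 0))    # True True: z is orthogonal to V
```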
Any projection P = P 2 {\displaystyle P=P^{2}} on a vector space of dimension d {\displaystyle d} over a field is a diagonalizable matrix , since its minimal polynomial divides x 2 − x {\displaystyle x^{2}-x} , which splits into distinct linear factors. Thus there exists a basis in which P {\displaystyle P} has the form P = I r ⊕ 0 d − r = [ I r 0 0 0 d − r ] {\displaystyle P=I_{r}\oplus 0_{d-r}=\left[{\begin{array}{cc}I_{r}&0\\0&0_{d-r}\end{array}}\right]}
where r {\displaystyle r} is the rank of P {\displaystyle P} . Here I r {\displaystyle I_{r}} is the identity matrix of size r {\displaystyle r} , 0 d − r {\displaystyle 0_{d-r}} is the zero matrix of size d − r {\displaystyle d-r} , and ⊕ {\displaystyle \oplus } is the direct sum operator. If the vector space is complex and equipped with an inner product , then there is an orthonormal basis in which the matrix of P is [ 14 ] P = [ 1 σ 1 0 0 ] ⊕ ⋯ ⊕ [ 1 σ k 0 0 ] ⊕ I m ⊕ 0 s {\displaystyle P=\left[{\begin{array}{cc}1&\sigma _{1}\\0&0\end{array}}\right]\oplus \cdots \oplus \left[{\begin{array}{cc}1&\sigma _{k}\\0&0\end{array}}\right]\oplus I_{m}\oplus 0_{s}}
where σ 1 ≥ σ 2 ≥ ⋯ ≥ σ k > 0 {\displaystyle \sigma _{1}\geq \sigma _{2}\geq \dots \geq \sigma _{k}>0} . The integers k , s , m {\displaystyle k,s,m} and the real numbers σ i {\displaystyle \sigma _{i}} are uniquely determined, and they satisfy 2 k + s + m = d {\displaystyle 2k+s+m=d} . The factor I m ⊕ 0 s {\displaystyle I_{m}\oplus 0_{s}} corresponds to the maximal invariant subspace on which P {\displaystyle P} acts as an orthogonal projection (so that P itself is orthogonal if and only if k = 0 {\displaystyle k=0} ), and the σ i {\displaystyle \sigma _{i}} -blocks correspond to the oblique components.
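The sketch below (NumPy, with an arbitrary 2×2 oblique projection) illustrates both statements: the eigenvalues of a projection are 0 and 1, so it is diagonalizable, and in the standard orthonormal basis this particular P is already in the block form [[1, σ₁], [0, 0]] with σ₁ = 2 (here k = 1 and m = s = 0).

```python
import numpy as np

P = np.array([[1., 2.], [0., 0.]])       # an oblique projection: P @ P == P
assert np.allclose(P @ P, P)

eigvals, eigvecs = np.linalg.eig(P)
print(np.sort(eigvals))                                   # [0. 1.]
D = np.linalg.inv(eigvecs) @ P @ eigvecs
print(np.allclose(D, np.diag(eigvals)))                   # True: P is diagonalizable

# In the orthonormal basis {e1, e2}, P already has the canonical block form
# [[1, sigma_1], [0, 0]] with sigma_1 = 2.
```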
When the underlying vector space X {\displaystyle X} is a (not necessarily finite-dimensional) normed vector space , analytic questions, irrelevant in the finite-dimensional case, need to be considered. Assume now X {\displaystyle X} is a Banach space .
Many of the algebraic results discussed above survive the passage to this context. A given direct sum decomposition of X {\displaystyle X} into complementary subspaces still specifies a projection, and vice versa. If X {\displaystyle X} is the direct sum X = U ⊕ V {\displaystyle X=U\oplus V} , then the operator defined by P ( u + v ) = u {\displaystyle P(u+v)=u} is still a projection with range U {\displaystyle U} and kernel V {\displaystyle V} , and it is clear that P 2 = P {\displaystyle P^{2}=P} . Conversely, if P {\displaystyle P} is a projection on X {\displaystyle X} , i.e. P 2 = P {\displaystyle P^{2}=P} , then it is easily verified that ( 1 − P ) 2 = ( 1 − P ) {\displaystyle (1-P)^{2}=(1-P)} . In other words, 1 − P {\displaystyle 1-P} is also a projection. Writing 1 = P + ( 1 − P ) {\displaystyle 1=P+(1-P)} , every element of X {\displaystyle X} decomposes as P x + ( 1 − P ) x {\displaystyle Px+(1-P)x} , and since P ( 1 − P ) = 0 {\displaystyle P(1-P)=0} this sum is direct, so X {\displaystyle X} is the direct sum rg ( P ) ⊕ rg ( 1 − P ) {\displaystyle \operatorname {rg} (P)\oplus \operatorname {rg} (1-P)} .
However, in contrast to the finite-dimensional case, projections need not be continuous in general. If a subspace U {\displaystyle U} of X {\displaystyle X} is not closed in the norm topology, then the projection onto U {\displaystyle U} is not continuous. In other words, the range of a continuous projection P {\displaystyle P} must be a closed subspace. Furthermore, the kernel of a continuous projection (in fact, a continuous linear operator in general) is closed. Thus a continuous projection P {\displaystyle P} gives a decomposition of X {\displaystyle X} into two complementary closed subspaces: X = rg ( P ) ⊕ ker ( P ) = ker ( 1 − P ) ⊕ ker ( P ) {\displaystyle X=\operatorname {rg} (P)\oplus \ker(P)=\ker(1-P)\oplus \ker(P)} .
The converse holds also, with an additional assumption. Suppose U {\displaystyle U} is a closed subspace of X {\displaystyle X} . If there exists a closed subspace V {\displaystyle V} such that X = U ⊕ V , then the projection P {\displaystyle P} with range U {\displaystyle U} and kernel V {\displaystyle V} is continuous. This follows from the closed graph theorem . Suppose x n → x and Px n → y . One needs to show that P x = y {\displaystyle Px=y} . Since U {\displaystyle U} is closed and { Px n } ⊂ U , y lies in U {\displaystyle U} , i.e. Py = y . Also, x n − Px n = ( I − P ) x n → x − y . Because V {\displaystyle V} is closed and {( I − P ) x n } ⊂ V , we have x − y ∈ V {\displaystyle x-y\in V} , i.e. P ( x − y ) = P x − P y = P x − y = 0 {\displaystyle P(x-y)=Px-Py=Px-y=0} , which proves the claim.
The above argument makes use of the assumption that both U {\displaystyle U} and V {\displaystyle V} are closed. In general, given a closed subspace U {\displaystyle U} , there need not exist a complementary closed subspace V {\displaystyle V} , although for Hilbert spaces this can always be done by taking the orthogonal complement . For Banach spaces, a one-dimensional subspace always has a closed complementary subspace. This is an immediate consequence of the Hahn–Banach theorem . Let U {\displaystyle U} be the linear span of a nonzero vector u {\displaystyle u} . By Hahn–Banach, there exists a bounded linear functional φ {\displaystyle \varphi } such that φ ( u ) = 1 {\displaystyle \varphi (u)=1} . The operator P ( x ) = φ ( x ) u {\displaystyle P(x)=\varphi (x)u} satisfies P 2 = P {\displaystyle P^{2}=P} , i.e. it is a projection. Boundedness of φ {\displaystyle \varphi } implies continuity of P {\displaystyle P} , and therefore ker ( P ) = rg ( I − P ) {\displaystyle \ker(P)=\operatorname {rg} (I-P)} is a closed complementary subspace of U {\displaystyle U} .
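A finite-dimensional illustration of the operator P(x) = φ(x)u (a sketch only; the vector u and the functional φ below are arbitrary choices satisfying φ(u) = 1):

```python
import numpy as np

u = np.array([1., 2., 2.])      # spans the one-dimensional subspace U
f = np.array([1., 0., 0.])      # represents the functional phi(x) = f . x, with phi(u) = 1
P = np.outer(u, f)              # matrix of P(x) = phi(x) * u

print(np.allclose(P @ P, P))    # True: P is a projection onto span{u}
print(np.allclose(P @ u, u))    # True: P fixes u
x = np.array([3., 1., 4.])
print(np.allclose(P @ x, (f @ x) * u))   # True: P x = phi(x) u
```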
Projections (orthogonal and otherwise) play a major role in algorithms for certain linear algebra problems, such as computing the QR decomposition (for example via Gram–Schmidt orthogonalization or Householder reflections) and solving linear least-squares problems.
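As one illustration (a sketch, not an excerpt from any particular library), classical Gram–Schmidt orthogonalization builds an orthonormal basis by repeatedly subtracting orthogonal projections onto the directions already found:

```python
import numpy as np

def gram_schmidt(A):
    """Orthonormalize the columns of A by subtracting, from each column,
    its orthogonal projections onto the previously computed unit vectors."""
    Q = np.zeros_like(A, dtype=float)
    for j in range(A.shape[1]):
        v = A[:, j].astype(float)
        for i in range(j):
            v = v - (Q[:, i] @ A[:, j]) * Q[:, i]   # subtract projection onto q_i
        Q[:, j] = v / np.linalg.norm(v)
    return Q

A = np.array([[1., 1.], [1., 0.], [0., 1.]])        # arbitrary full-rank example
Q = gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(2)))              # True: orthonormal columns
```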
As stated above, projections are a special case of idempotents. Analytically, orthogonal projections are non-commutative generalizations of characteristic functions . Idempotents are used in classifying, for instance, semisimple algebras , while measure theory begins with considering characteristic functions of measurable sets . Therefore, as one can imagine, projections are very often encountered in the context of operator algebras . In particular, a von Neumann algebra is generated by its complete lattice of projections.
More generally, given a map between normed vector spaces T : V → W , {\displaystyle T\colon V\to W,} one can analogously ask for this map to be an isometry on the orthogonal complement of the kernel: that ( ker T ) ⊥ → W {\displaystyle (\ker T)^{\perp }\to W} be an isometry (compare Partial isometry ); in particular it must be onto . The case of an orthogonal projection is when W is a subspace of V. In Riemannian geometry , this is used in the definition of a Riemannian submersion . | https://en.wikipedia.org/wiki/Projection_(linear_algebra) |
In mathematics , a projection is an idempotent mapping of a set (or other mathematical structure ) into a subset (or sub-structure). In this case, idempotent means that projecting twice is the same as projecting once. The restriction of a projection to a subspace of its domain is also called a projection , even if the idempotence property is lost.
An everyday example of a projection is the casting of shadows onto a plane (sheet of paper): the projection of a point is its shadow on the sheet of paper, and the projection (shadow) of a point already on the sheet of paper is that point itself (idempotency). The shadow of a three-dimensional sphere is a disk. Originally, the notion of projection was introduced in Euclidean geometry to denote the projection of the three-dimensional Euclidean space onto a plane in it, as in the shadow example. The two main projections of this kind are the projection from a point onto a plane (central projection) and the projection parallel to a direction onto a plane (parallel projection).
The concept of projection in mathematics is a very old one, and most likely has its roots in the phenomenon of the shadows cast by real-world objects on the ground. This rudimentary idea was refined and abstracted, first in a geometric context and later in other branches of mathematics. Over time different versions of the concept developed, but today, in a sufficiently abstract setting, we can unify these variations. [ citation needed ]
In cartography , a map projection is a map of a part of the surface of the Earth onto a plane, which, in some cases, but not always, is the restriction of a projection in the above meaning. 3D projections are also the basis of the theory of perspective . [ citation needed ]
The need to unify the two kinds of projections and to define the image under a central projection of any point different from the center of projection is at the origin of projective geometry .
Generally, a mapping where the domain and codomain are the same set (or mathematical structure ) is a projection if the mapping is idempotent , which means that a projection is equal to its composition with itself. A projection may also refer to a mapping which has a right inverse . Both notions are strongly related, as follows. Let p be an idempotent mapping from a set A into itself (thus p ∘ p = p ) and B = p ( A ) be the image of p . If we denote by π the map p viewed as a map from A onto B and by i the injection of B into A (so that p = i ∘ π ), then we have π ∘ i = Id B (so that π has a right inverse). Conversely, if π has a right inverse i , then π ∘ i = Id B implies that i ∘ π ∘ i ∘ π = i ∘ Id B ∘ π = i ∘ π ; that is, p = i ∘ π is idempotent. [ citation needed ]
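The decomposition p = i ∘ π can be made concrete with a small sketch in Python (the names pi_, incl and p below are illustrative, not standard notation); here A is a set of pairs, B is identified with the set of first coordinates, π is p viewed as a map onto B, and i is the injection of B back into A.

```python
# pi_ : A -> B   projects a pair onto its first coordinate
def pi_(pair):
    return pair[0]

# incl : B -> A  injects b back into A as the pair (b, 0)
def incl(b):
    return (b, 0)

# p = incl ∘ pi_ is an idempotent map from A to itself
def p(pair):
    return incl(pi_(pair))

x = (3, 7)
assert p(p(x)) == p(x)       # idempotence: p ∘ p = p
assert pi_(incl(5)) == 5     # pi_ ∘ incl = Id_B, i.e. incl is a right inverse of pi_
```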
The original notion of projection has been extended or generalized to various mathematical situations, frequently, but not always, related to geometry, for example: | https://en.wikipedia.org/wiki/Projection_(mathematics) |