Modelling of a transmission line is done to analyse its performance and characteristics. The information gathered by simulating the model can be used to reduce losses or to compensate for them. Moreover, it gives more insight into the working of transmission lines and helps to find ways to improve overall transmission efficiency at minimum cost.
Performance modelling is the abstraction of a real system into a simplified representation to enable the prediction of performance. [ 1 ] The creation of a model can provide insight into how a proposed or actual system will or does work. What counts as "performance", however, differs between fields of work.
Performance modelling has many benefits, including:
A model will often be created specifically so that it can be interpreted by a software tool that simulates the system's behaviour, based on the information contained in the performance model. Such tools provide further insight into the system's behaviour and can be used to identify bottlenecks or hot spots where the design is inadequate. Solutions to the problems identified might involve the provision of more physical resources or change in the structure of the design.
Performance modelling is particularly helpful in cases of:
Electric power transmission is the bulk movement of electrical energy from a generating site, such as a power plant , to an electrical substation , and is distinct from the local wiring between high-voltage substations and customers, which is typically referred to as electric power distribution . The interconnected network which facilitates this movement is built from transmission lines. A transmission line is a set of electrical conductors carrying an electrical signal from one place to another; coaxial cable and twisted pair cable are examples. A transmission line is capable of transmitting electrical power from one place to another. In many electric circuits, the length of the wires connecting the components can, for the most part, be ignored; that is, the voltage on the wire at a given time can be assumed to be the same at all points. However, when the voltage changes in a time interval comparable to the time it takes for the signal to travel down the wire, the length becomes important and the wire must be treated as a transmission line. Stated another way, the length of the wire is important when the signal includes frequency components with corresponding wavelengths comparable to or less than the length of the wire. Transmission lines have been categorized and defined in many ways, and several modelling approaches have been developed; most are mathematical, circuit-based models.
Transmission can be of two types :
High-voltage direct current (HVDC) is used to transmit large amounts of power over long distances or for interconnections between asynchronous grids. When electrical energy is to be transmitted over very long distances, the power lost in AC transmission becomes appreciable and it is less expensive to use direct current instead of alternating current . [ 2 ] For a very long transmission line, these lower losses (and the reduced construction cost of a DC line) can offset the additional cost of the required converter stations at each end. In a DC transmission line, a rectifier (historically, a mercury arc rectifier) converts the alternating current into DC. [ 3 ] The DC transmission line transmits the bulk power over the long distance, and at the consumer end an inverter (historically, a thyratron) converts the DC back into AC. [ 4 ]
The AC transmission line is used for transmitting bulk power from the generating end to the consumer end. [ 5 ] The power is generated in the generating station, and the transmission line carries it from generation to the consumer end. High-voltage power transmission allows for lower resistive losses over long distances in the wiring. [ 5 ] This efficiency of high-voltage transmission allows a larger proportion of the generated power to reach the substations and, in turn, the loads, translating to operational cost savings. The power is transmitted from one end to the other with the help of step-up and step-down transformers. Most transmission lines are high-voltage three-phase alternating current (AC), although single-phase AC is sometimes used in railway electrification systems . Electricity is transmitted at high voltages (115 kV or above) to reduce the energy loss which occurs in long-distance transmission.
Power is usually transmitted through overhead power lines . [ 6 ] Underground power transmission has a significantly higher installation cost and greater operational limitations, [ 6 ] but reduced maintenance costs. [ 7 ] Underground transmission is sometimes used in urban areas or environmentally sensitive locations. [ 7 ]
The lossless line approximation is the least accurate model; it is often used on short lines when the inductance of the line is much greater than its resistance. For this approximation, the voltage and current are identical at the sending and receiving ends.
The characteristic impedance of a lossless line is purely real, i.e. resistive, and is often called the surge impedance. When a lossless line is terminated by its surge impedance, there is no voltage drop. Though the phase angles of voltage and current are rotated, the magnitudes of voltage and current remain constant along the length of the line. For a load greater than the surge impedance loading (SIL), the voltage will drop from the sending end and the line will "consume" VARs; for a load less than SIL, the voltage will rise from the sending end and the line will generate VARs.
In electrical engineering , the power factor of an AC electrical power system is defined as the ratio of the real power absorbed by the load to the apparent power flowing in the circuit, and is a dimensionless number in the closed interval of −1 to 1.
A power factor of less than one indicates the voltage and current are not in phase, reducing the instantaneous product of the two. A negative power factor occurs when the device (which is normally the load) generates power, which then flows back towards the source.
In an electric power system, a load with a low power factor draws more current than a load with a high power factor for the same amount of useful power transferred. The higher currents increase energy loss in the distribution system and require larger wires and other equipment. Because of the costs of larger equipment and wasted energy, electrical utilities will usually charge a higher cost to industrial or commercial customers where there is a low power factor.
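The extra current drawn by a low-power-factor load can be sketched numerically; the supply voltage and load figures below are illustrative assumptions, not from the source:

```python
def power_factor(p_real_w, s_apparent_va):
    """Power factor = real power / apparent power (dimensionless, between -1 and 1)."""
    return p_real_w / s_apparent_va

def line_current(p_real_w, v_rms, pf):
    """RMS current a single-phase load draws for a given useful (real) power."""
    return p_real_w / (v_rms * pf)

# Same 10 kW of useful power at 230 V: the low-power-factor load draws more current
i_low = line_current(10e3, 230, 0.70)
i_high = line_current(10e3, 230, 0.95)
print(round(i_low, 1), round(i_high, 1))  # 62.1 45.8
```

Since resistive loss in the line grows with the square of the current, the difference matters even more for the utility than the roughly 36% current increase suggests.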
The characteristic impedance or surge impedance (usually written Z 0 ) of a uniform transmission line is the ratio of the amplitudes of voltage and current of a single wave propagating along the line; that is, a wave travelling in one direction in the absence of reflections in the other direction. Alternatively and equivalently it can be defined as the input impedance of a transmission line when its length is infinite. Characteristic impedance is determined by the geometry and materials of the transmission line and, for a uniform line, is not dependent on its length. The SI unit of characteristic impedance is the ohm (Ω).
Surge impedance determines the loading capability of the line and the reflection coefficient of current or voltage waves propagating along it:

Z_0 = \sqrt{\frac{L}{C}}
Where,
Z 0 = Characteristic Impedance of the Line
L = Inductance per unit length of the Line
C = Capacitance per unit length of the Line
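A minimal numeric sketch of this formula, using assumed per-kilometre values typical of an overhead line (the numbers are illustrative, not from the source):

```python
import math

L = 1.2e-3   # assumed series inductance, H/km
C = 9.5e-9   # assumed shunt capacitance, F/km

Z0 = math.sqrt(L / C)   # surge (characteristic) impedance; the per-km units cancel
print(round(Z0, 1))     # 355.4 ohms
```

Overhead lines typically come out in the 250 to 400 ohm range with this formula, which is why surge impedance loading figures cluster around a few hundred MW for high-voltage lines.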
The transmission line has mainly four parameters: resistance, inductance, capacitance and shunt conductance. [ 8 ] These parameters are uniformly distributed along the line; hence, they are also called the distributed parameters of the transmission line.
In electrical engineering , the Ferranti effect is the increase in voltage occurring at the receiving end of a very long (> 200 km) AC electric power transmission line, relative to the voltage at the sending end, when the load is very small, or no load is connected. It can be stated as a factor, or as a percent increase. [ 9 ]
The capacitive line charging current produces a voltage drop across the line inductance that is in-phase with the sending-end voltage, assuming negligible line resistance. Therefore, both line inductance and capacitance are responsible for this phenomenon. This can be analysed by considering the line as a transmission line where the source impedance is lower than the load impedance (unterminated). The effect is similar to an electrically short version of the quarter-wave impedance transformer , but with smaller voltage transformation.
The Ferranti effect is more pronounced the longer the line and the higher the voltage applied. [ 10 ] The relative voltage rise is proportional to the square of the line length and the square of frequency. [ 11 ]
The Ferranti effect is much more pronounced in underground cables, even in short lengths, because of their high capacitance per unit length, and lower electrical impedance .
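For a lossless line, the no-load receiving-end voltage rises by the factor 1/cos(βl), which for short electrical lengths grows roughly with the square of line length, consistent with the proportionality stated above. A sketch under that lossless assumption:

```python
import math

f = 50.0                     # supply frequency, Hz
c = 3e5                      # propagation speed, km/s (approx. speed of light)
beta = 2 * math.pi * f / c   # phase constant, rad/km

def ferranti_rise(length_km):
    """No-load receiving/sending voltage ratio of a lossless line: 1/cos(beta*l)."""
    return 1.0 / math.cos(beta * length_km)

print(round(ferranti_rise(300), 4))  # 1.0515, i.e. about a 5% rise at 300 km
```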
A corona discharge is an electrical discharge brought on by the ionization of a fluid such as air surrounding a conductor that is electrically charged . Spontaneous corona discharges occur naturally in high-voltage systems unless care is taken to limit the electric field strength. A corona will occur when the strength of the electric field ( potential gradient ) around a conductor is high enough to form a conductive region, but not high enough to cause electrical breakdown or arcing to nearby objects. It is often seen as a bluish (or another colour) glow in the air adjacent to pointed metal conductors carrying high voltages and emits light by the same property as a gas discharge lamp .
In many high voltage applications, the corona is an unwanted side effect. Corona discharge from high voltage electric power transmission lines constitutes an economically significant waste of energy. Corona discharges are suppressed by improved insulation, corona rings , and making high voltage electrodes in smooth rounded shapes.
A, B, C and D are constants known as the transmission parameters or chain parameters. These parameters are used for the analysis of an electrical network, and for determining the input and output voltages and currents of the transmission network.
The propagation constant of the sinusoidal electromagnetic wave is a measure of the change undergone by the amplitude and phase of the wave as it propagates in a given direction. The quantity being measured can be the voltage, the current in a circuit, or a field vector such as electric field strength or flux density. The propagation constant itself measures the change per unit length, but it is otherwise dimensionless. In the context of two-port networks and their cascades, propagation constant measures the change undergone by the source quantity as it propagates from one port to the next.
The real part of the propagation constant is the attenuation constant and is denoted by Greek lowercase letter α (alpha). It causes signal amplitude to decrease along a transmission line.
The imaginary part of the propagation constant is the phase constant and is denoted by Greek lowercase letter β (beta). It causes the signal phase to shift along a transmission line. Generally denoted in radians per meter (rad/m).
The propagation constant is denoted by Greek lowercase letter γ (gamma), and γ = α + jβ
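For a line with per-unit-length series impedance z and shunt admittance y, the propagation constant is γ = √(zy); a sketch with assumed 50 Hz per-km values:

```python
import cmath

z = 0.05 + 0.4j   # assumed series impedance, ohm/km
y = 3e-6j         # assumed shunt admittance, S/km

gamma = cmath.sqrt(z * y)            # propagation constant, per km
alpha, beta = gamma.real, gamma.imag
print(alpha, beta)                   # attenuation (Np/km) and phase constant (rad/km)
```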
Voltage regulation is a measure of the change in the voltage magnitude between the sending and receiving end of a component, such as a transmission or distribution line. It is given in percentage for different lines.
Mathematically, voltage regulation is given by,
V.R. = \frac{V_{no\,load} - V_{full\,load}}{V_{no\,load}}
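A one-line numeric sketch of the definition (the voltage figures are hypothetical):

```python
def voltage_regulation_pct(v_no_load, v_full_load):
    """Voltage regulation as a percentage of the no-load voltage."""
    return 100.0 * (v_no_load - v_full_load) / v_no_load

print(round(voltage_regulation_pct(11_000, 10_450), 2))  # 5.0 (percent)
```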
The AC transmission line has four parameters: series resistance and inductance , and shunt capacitance and conductance. These parameters are responsible for the distinct behaviour of voltage and current waveforms along the transmission line . Line parameters are generally expressed per km of line length. These parameters depend upon the geometric arrangement of the transmission line (number of conductors used, shape of conductors, physical spacing between conductors, height above the ground, etc.), and are independent of the current and voltage at either the sending or receiving end.
The electrical resistance of an object is the property by which it opposes the flow of electric current resulting from a potential difference across its two ends. [ 12 ] The inverse quantity is electrical conductance , the ease with which an electric current passes. Electrical resistance shares some conceptual parallels with the notion of mechanical friction . The SI unit of electrical resistance is the ohm ( Ω ), while electrical conductance is measured in siemens (S).
The resistance of an object depends in large part on the material it is made of—objects made of electrical insulators like rubber tend to have very high resistance and low conductivity, while objects made of electrical conductors like metals tend to have very low resistance and high conductivity. This material dependence is quantified by resistivity or conductivity . However, resistance and conductance are extensive rather than bulk properties , meaning that they also depend on the size and shape of an object. For example, a wire's resistance is higher if it is long and thin, and lower if it is short and thick. All objects show some resistance, except for superconductors , which have a resistance of zero.
The resistance ( R ) of an object is defined as the ratio of voltage across it ( V ) to current through it ( I ), while the conductance ( G ) is the inverse:
For a wide variety of materials and conditions, V and I are directly proportional to each other, and therefore R and G are constants (although they will depend on the size and shape of the object, the material it is made of, and other factors like temperature or strain). This proportionality is called Ohm's law , and materials that satisfy it are called ohmic materials. In other cases, such as a transformer , diode or battery , V and I are not directly proportional. The ratio V / I is sometimes still useful, and is referred to as a "chordal resistance" or "static resistance", [ 13 ] [ 14 ] since it corresponds to the inverse slope of a chord between the origin and an I–V curve . In other situations, the derivative d V d I {\displaystyle {\frac {dV}{dI}}\,\!} may be most useful; this is called the "differential resistance".
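The distinction between chordal (static) and differential resistance can be illustrated with a hypothetical diode-like I-V curve; the Shockley-form parameters below are assumptions for illustration:

```python
import math

I_s, V_T = 1e-12, 0.025   # assumed saturation current (A) and thermal voltage (V)

def current(v):
    """Hypothetical diode-like I-V relation (Shockley form)."""
    return I_s * (math.exp(v / V_T) - 1.0)

v = 0.6
i = current(v)
static_r = v / i                                     # chordal (static) resistance V/I
h = 1e-6
diff_r = 2 * h / (current(v + h) - current(v - h))   # differential resistance dV/dI
print(static_r > diff_r)   # True: for this curve the chordal resistance is larger
```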
Transmission lines, consisting of conducting wires of very great length, have an electrical resistance that cannot be neglected.
When current flows within a conductor, magnetic flux is set up. With the variation of current in the conductor, the number of lines of flux also changes, and an emf is induced in it ( Faraday's Law ). This induced emf is represented by the parameter known as inductance (L).
In the SI system, the unit of inductance is the henry ( H ), which is the amount of inductance which causes a voltage of 1 volt when the current is changing at a rate of one ampere per second. [ 15 ]
The flux linking with the conductor consists of two parts, namely, the internal flux and the external flux :
The total inductance of the conductor is determined by the calculation of the internal and external flux.
The transmission line wiring is also inductive in nature, and the inductance of a single-circuit line is given mathematically by:

L = \frac{\mu_0}{2\pi} \ln\frac{D}{r'} \;\; \mathrm{H/m}
Where,
For transposed lines with two or more phases, the inductance between any two lines can be calculated using:

L = \frac{\mu_0}{2\pi} \ln\frac{D_{GMD}}{r'} \;\; \mathrm{H/m}

Where D_{GMD} is the geometric mean distance between the conductors.
If the lines are not properly transposed, the inductances become unequal and contain imaginary terms due to mutual inductances. With proper transposition, each conductor occupies every available position for an equal distance, so the imaginary terms cancel out and all the line inductances become equal.
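A numeric sketch of the single-circuit inductance formula above; the spacing and radius are assumed values, and r' = r·e^(-1/4) is the standard geometric mean radius of a solid round conductor accounting for internal flux:

```python
import math

mu0 = 4 * math.pi * 1e-7   # permeability of free space, H/m

def inductance_per_m(D, r):
    """Per-phase inductance L = (mu0 / (2*pi)) * ln(D / r'), in H/m."""
    r_prime = r * math.exp(-0.25)   # GMR of a solid round conductor
    return mu0 / (2 * math.pi) * math.log(D / r_prime)

L = inductance_per_m(1.5, 0.01)   # assumed 1.5 m spacing, 1 cm conductor radius
print(L * 1e3)                    # H/km: on the order of 1 mH/km
```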
Capacitance is the ratio of the change in an electric charge in a system to the corresponding change in its electric potential . The capacitance is a function only of the geometry of the design (e.g. area of the plates and the distance between them) and the permittivity of the dielectric material between the plates of the capacitor. For many dielectric materials, the permittivity and thus the capacitance is independent of the potential difference between the conductors and the total charge on them.
The SI unit of capacitance is the farad (F). A 1 farad capacitor, when charged with 1 coulomb of electrical charge, has a potential difference of 1 volt between its plates. [ 17 ] The reciprocal of capacitance is called elastance .
There are two closely related notions of capacitance: self-capacitance and mutual capacitance :
Transmission line conductors constitute a capacitor between them, exhibiting mutual capacitance. The conductors of the transmission line act as the parallel plates of the capacitor and the air is the dielectric medium between them. The capacitance of a line gives rise to a leading current between the conductors, and is proportional to the length of the transmission line. Its effect is negligible on the performance of short, low-voltage lines, but significant for long, high-voltage lines. The shunt capacitance of the line is responsible for the Ferranti effect . [ 19 ]
The capacitance of a single-phase transmission line is given mathematically by:

C_{ab} = \frac{\pi \epsilon_0}{\ln\frac{D}{r}} \;\; \mathrm{F/m}
Where,
For lines with two or more phases, the capacitance between any two lines can be calculated using:

C = \frac{\pi \epsilon_0}{\ln\frac{D_{GMD}}{r}} \;\; \mathrm{F/m}

Where D_{GMD} is the geometric mean distance between the conductors.
The effect of self-capacitance on a transmission line is generally neglected because the conductors are not isolated and thus no detectable self-capacitance exists.
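A matching sketch for the single-phase capacitance formula, with the same assumed geometry as in the inductance example:

```python
import math

eps0 = 8.854e-12   # permittivity of free space, F/m

def capacitance_per_m(D, r):
    """Line-to-line capacitance C_ab = pi*eps0 / ln(D/r), in F/m."""
    return math.pi * eps0 / math.log(D / r)

C = capacitance_per_m(1.5, 0.01)   # assumed 1.5 m spacing, 1 cm conductor radius
print(C * 1e3)                     # F/km: on the order of a few nF/km
```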
In electrical engineering , admittance is a measure of how easily a circuit or device will allow a current to flow. It is defined as the reciprocal of impedance . The SI unit of admittance is the siemens (S); the older, synonymous unit is mho (℧) . [ 20 ]
Admittance is defined as
where
Resistance is a measure of the opposition of a circuit to the flow of a steady current, while impedance takes into account not only the resistance but also dynamic effects (known as reactance ). Likewise, admittance is not only a measure of the ease with which a steady current can flow but also the dynamic effects of the material's susceptance to polarization:
where
The dynamic effects of the material's susceptance relate to the universal dielectric response , the power-law scaling of a system's admittance with frequency under alternating current conditions.
In the context of electrical modelling of transmission lines, shunt components (paths between conductors or to ground) are generally specified in terms of their admittance. Transmission lines can span hundreds of kilometres, over which the line's capacitance can affect voltage levels. For short transmission line analysis, this capacitance can be ignored and shunt components are not necessary for the model. Longer lines have a shunt admittance governed by [ 21 ]
Y = yl = j\omega C l
where
Y – total shunt admittance
y – shunt admittance per unit length
l – length of the line
C – capacitance of the line
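A sketch of the lumped shunt admittance Y = jωCl for an assumed 200 km, 50 Hz line (the per-km capacitance is an illustrative value):

```python
import math

def total_shunt_admittance(f_hz, c_f_per_km, length_km):
    """Lumped shunt admittance of the whole line: Y = j*omega*C*l (siemens)."""
    omega = 2 * math.pi * f_hz
    return 1j * omega * c_f_per_km * length_km

Y = total_shunt_admittance(50, 9.5e-9, 200)   # assumed 9.5 nF/km, 200 km
print(abs(Y))   # total susceptance, about 6e-4 S
```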
A two-port network (a kind of four-terminal network or quadripole ) is an electrical network ( circuit ) or device with two pairs of terminals to connect to external circuits. Two terminals constitute a port if the currents applied to them satisfy the essential requirement known as the port condition: the electric current entering one terminal must equal the current emerging from the other terminal on the same port. [ 22 ] [ 23 ] The ports constitute interfaces where the network connects to other networks, the points where signals are applied or outputs are taken. In a two-port network, often port 1 is considered the input port and port 2 is considered the output port.
The two-port network model is used in mathematical circuit analysis techniques to isolate portions of larger circuits. A two-port network is regarded as a " black box " with its properties specified by a matrix of numbers. This allows the response of the network to signals applied to the ports to be calculated easily, without solving for all the internal voltages and currents in the network. It also allows similar circuits or devices to be compared easily. For example, transistors are often regarded as two-ports, characterized by their h-parameters (see below) which are listed by the manufacturer. Any linear circuit with four terminals can be regarded as a two-port network provided that it does not contain an independent source and satisfies the port conditions.
Often, only the terminal characteristics of the transmission line (the voltage and current at the sending and receiving ends) are of interest for performance analysis of the line. The transmission line itself is then modelled as a "black box" and a 2-by-2 transmission matrix is used to model its behaviour, as follows [ 24 ] [ 25 ]
This equation in matrix form, consists of two individual equations as stated below: [ 26 ]
V_S = A V_R + B I_R

I_S = C V_R + D I_R
Where,
V_S is the sending-end voltage

V_R is the receiving-end voltage

I_S is the sending-end current

I_R is the receiving-end current
With the receiving end open-circuited ( I_R = 0 ):

1. A = \frac{V_S}{V_R}

So, the parameter A is the ratio of sending-end voltage to receiving-end voltage, thus called the voltage ratio. Being the ratio of two like quantities, the parameter A is unitless.

2. C = \frac{I_S}{V_R}

So, the parameter C is the ratio of sending-end current to receiving-end voltage, thus called the transfer admittance; the unit of C is the mho ( ℧ ).

With the receiving end short-circuited ( V_R = 0 ):

1. B = \frac{V_S}{I_R}

So, the parameter B is the ratio of sending-end voltage to receiving-end current, thus called the transfer impedance; the unit of B is the ohm ( Ω ).

2. D = \frac{I_S}{I_R}

So, the parameter D is the ratio of sending-end current to receiving-end current, thus called the current ratio. Being the ratio of two like quantities, the parameter D is unitless.
To summarize, ABCD Parameters for a two port(four terminal) passive, linear and bilateral network is given as :
The line is assumed to be a reciprocal, symmetrical network, meaning that the receiving and sending labels can be switched with no consequence. The transmission matrix T also has the following properties:
The parameters A , B , C , and D differ depending on how the desired model handles the line's resistance ( R ), inductance ( L ), capacitance ( C ), and shunt (parallel, leak) conductance G . The four main models are the short line approximation, the medium line approximation, the long line approximation (with distributed parameters), and the lossless line. In all models described, a capital letter such as R refers to the total quantity summed over the line and a lowercase letter such as r refers to the per-unit-length quantity.
The AC transmission line has resistance R, inductance L, capacitance C and the shunt or leakage conductance G. These parameters, along with the load, determine the performance of the line. The term "performance" means the sending-end voltage, sending-end current, sending-end power factor, power loss in the line, efficiency of the transmission line, and the regulation and limits of power flow during steady-state and transient conditions. AC transmission lines are generally categorized into three classes [ 28 ]
The classification of a transmission line depends on its length relative to the wavelength at the power frequency, and is an assumption made for ease of calculation of line performance parameters and losses. [ 29 ] Because of this, the length ranges used to categorize transmission lines are not rigid; the ranges may vary a little, and all of them are valid in their areas of approximation.
Current and voltage propagate along a transmission line at a speed close to the speed of light (c), i.e. approximately 3 \times 10^8 \;\mathrm{m/s} = 3 \times 10^5 \;\mathrm{km/s}, and the frequency (f) of the voltage or current is 50 Hz (although in the Americas and parts of Asia it is typically 60 Hz). [ 30 ]
Therefore, wavelength (λ) can be calculated as below :
f\lambda = c

or, \lambda = \frac{c}{f}

or, \lambda = \frac{3 \times 10^5}{50} = 6000 \;\mathrm{km}
A transmission line of 60 km is very small (1/100) compared with the 6000 km wavelength. Up to about 240 km (1/25 of the wavelength; 250 km is taken for easy remembering), the portion of the current or voltage waveform spanned by the line is so small that it can be approximated by a straight line for all practical purposes. For line lengths up to about 240 km the parameters are assumed to be lumped (though in practice they are always distributed). Therefore, the response of a transmission line up to 250 km long can be considered linear, and the equivalent circuit of the line can be approximated by a lumped linear circuit.
But if the length of the line is more than 250 km, say 400 km (1/15 of the wavelength), then the current or voltage waveform along the line can no longer be treated as a straight line, and integration over the line length is needed for the analysis of such lines.
Transmission lines with a length of less than 60 km are generally referred to as short transmission lines. For such short lengths, parameters like resistance and inductance are assumed to be lumped. The shunt capacitance of a short line is almost negligible and thus is not taken into account (it is assumed to be zero).
Now, if the impedance per km of an l km line is z_0 = r + jx, and the sending-end and receiving-end voltages make angles \Phi_s and \Phi_r respectively with the receiving-end current, then the total impedance of the line is Z = lr + jlx.
The sending end voltage and current for this approximation are given by :
Here, the sending- and receiving-end voltages are denoted by V_S and V_R respectively, while the currents I_S and I_R enter and leave the network respectively.
So, by considering the equivalent circuit model for the short transmission line, the transmission matrix can be obtained as follows:
Therefore, the ABCD parameters are given by :
A = D = 1, B = Z Ω and C = 0
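A sketch of the short-line model in use; the per-km impedance, line length, and receiving-end operating point below are assumed for illustration:

```python
def short_line_abcd(z_per_km, length_km):
    """Short-line ABCD parameters: A = D = 1, B = total series impedance Z, C = 0."""
    Z = z_per_km * length_km
    return 1.0, Z, 0.0, 1.0   # A, B, C, D

A, B, C, D = short_line_abcd(0.1 + 0.4j, 40)   # assumed 40 km line, z = 0.1+0.4j ohm/km
VR, IR = 63.5e3, 200.0                         # assumed receiving-end voltage (V), current (A)
VS = A * VR + B * IR                           # V_S = A*V_R + B*I_R
IS = C * VR + D * IR                           # I_S = C*V_R + D*I_R
print(round(abs(VS) / 1e3, 2))                 # 64.38 kV at the sending end
```

Because C = 0, the sending- and receiving-end currents are identical, as stated for the short-line approximation.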
The transmission line having its effective length more than 80 km but less than 250 km is generally referred to as a medium transmission line. Due to the line length being considerably high, shunt capacitance along with admittance Y of the network does play a role in calculating the effective circuit parameters, unlike in the case of short transmission lines. For this reason, the modelling of a medium length transmission line is done using lumped shunt admittance along with the lumped impedance in series to the circuit.
Counterintuitive behaviours of medium-length transmission lines:
These lumped parameters of a medium length transmission line can be represented using two different models, namely :
In the case of the nominal Π representation, the total lumped shunt admittance is divided into two equal halves, and each half with value Y ⁄ 2 is placed at the sending and receiving ends, while the entire circuit impedance is lumped in between the two halves. The circuit so formed resembles the symbol of pi (Π), and hence it is known as the nominal Π (or Π network) representation of a medium transmission line. It is mainly used for determining the general circuit parameters and performing load flow analysis.
Applying KCL at the two shunt ends, we get
I_S = I_1 + I_2 = \frac{Y}{2} V_S + \frac{Y}{2} V_R + I_R
In this,
The sending- and receiving-end voltages are denoted by V_S and V_R respectively, and the currents I_S and I_R enter and leave the network respectively.

I_1 and I_3 are the currents through the shunt admittances at the sending and receiving ends respectively, whereas I_2 is the current through the series impedance.
Again,
V_S = Z I_2 + V_R = Z\left(V_R \frac{Y}{2} + I_R\right) + V_R
or, V_S = \left(1 + \frac{YZ}{2}\right) V_R + Z I_R \quad (4)
So, by substituting we get :
I_S = \frac{Y}{2}\left[\left(1 + \frac{YZ}{2}\right)V_R + Z I_R\right] + \frac{Y}{2} V_R + I_R
or, I_S = Y\left(1 + \frac{YZ}{4}\right) V_R + \left(1 + \frac{YZ}{2}\right) I_R \quad (5)
The equation obtained thus, eq( 4 ) & ( 5 ) can be written into matrix form as follows :
so, the ABCD parameters are :
A = D = 1 + \frac{YZ}{2} per unit

B = Z \;\Omega

C = Y\left(1 + \frac{YZ}{4}\right) \;℧
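The Π parameters can be checked numerically against the reciprocity condition AD - BC = 1; the total Z and Y below are assumed values for a medium-length line:

```python
def nominal_pi_abcd(Z, Y):
    """Nominal-Pi ABCD: A = D = 1 + YZ/2, B = Z, C = Y(1 + YZ/4)."""
    A = D = 1 + Y * Z / 2
    B = Z
    C = Y * (1 + Y * Z / 4)
    return A, B, C, D

Z, Y = 15 + 60j, 4.5e-4j   # assumed total series impedance and shunt admittance
A, B, C, D = nominal_pi_abcd(Z, Y)
print(round(abs(A * D - B * C), 6))   # 1.0: the reciprocity condition AD - BC = 1 holds
```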
In the nominal T model of a medium transmission line, the net series impedance is divided into two halves placed on either side of the lumped shunt admittance, which sits in the middle. The circuit so formed resembles the symbol of a capital T or star (Y), and hence it is known as the nominal T network of a medium-length transmission line.
The application of KCL at the junction (the neutral point of the Y connection) gives:

\frac{V_S - V_J}{Z/2} = Y V_J + \frac{V_J - V_R}{Z/2}
The above equation can be rearranged as,
V_J = \frac{2}{YZ + 4}\left(V_S + V_R\right)
Here, the sending- and receiving-end voltages are denoted by V_S and V_R respectively, the currents I_S and I_R enter and leave the network respectively, and V_J is the voltage at the junction of the two series half-impedances and the shunt admittance.
Now, for the receiving-end current, we can write:

I_R = \frac{V_J - V_R}{Z/2}

By rearranging this and replacing V_J with its derived value, we get:

V_S = \left(1 + \frac{YZ}{2}\right) V_R + Z\left(1 + \frac{YZ}{4}\right) I_R \quad (8)
Now, the sending end current can be written as:
I_S = Y V_J + I_R
Replacing the value of V_J in the above equation:

I_S = Y V_R + \left(1 + \frac{YZ}{2}\right) I_R \quad (9)
The equation obtained thus, eq.( 8 ) & eq.( 9 ) can be written into matrix form as follows :
So, the ABCD parameters are :
A = D = 1 + \frac{YZ}{2} per unit

B = Z\left(1 + \frac{YZ}{4}\right) \;\Omega

C = Y \;℧
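The same reciprocity check applies to the nominal T parameters; the totals below are assumed values:

```python
def nominal_t_abcd(Z, Y):
    """Nominal-T ABCD: A = D = 1 + YZ/2, B = Z(1 + YZ/4), C = Y."""
    A = D = 1 + Y * Z / 2
    B = Z * (1 + Y * Z / 4)
    C = Y
    return A, B, C, D

Z, Y = 15 + 60j, 4.5e-4j   # assumed total series impedance and shunt admittance
A, B, C, D = nominal_t_abcd(Z, Y)
print(round(abs(A * D - B * C), 6))   # 1.0: AD - BC = 1, as for any reciprocal network
```

Note that the T and Π models share the same A and D but swap which of B and C picks up the (1 + YZ/4) correction factor.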
A transmission line longer than 250 km is considered a long transmission line. Unlike short and medium lines, the parameters of a long line are assumed to be uniformly distributed at every point along the line. Modelling a long line is therefore somewhat more difficult, but a few approaches can be made based on the length and the values of the line parameters. For a long transmission line, the line is considered to be divided into various sections, each consisting of inductance, capacitance, resistance and conductance, as shown in the RLC (resistance and inductance in series, with shunt capacitance) cascade model.
Consider a small element of a long transmission line of length dx, situated at a distance x from the receiving end. The series impedance of the element is represented by z dx and its shunt admittance by y dx. Due to charging current and corona loss, the current is not uniform along the line. The voltage also differs from point to point because of the inductive reactance.
Where,
z – series impedance per unit length, per phase
y – shunt admittance per unit length, per phase to neutral
\[ \Delta V = I z\,\Delta x \Rightarrow \frac{\Delta V}{\Delta x} = I z \]
Again, as \( \Delta x \to 0 \),
\[ \frac{dV}{dx} = I z \]
Now for the current through the strip, applying KCL we get,
The second term of the above equation is the product of two small quantities and therefore can be neglected.
For \( \Delta x \to 0 \) we have \( \frac{dI}{dx} = V y \).
Taking the derivative with respect to x of both sides, we get
Substituting into the above equation results in
The roots of the above equation are located at \( \pm\sqrt{yz} \).
Hence the solution is of the form,
Taking derivative with respect to x we get,
Combining these two we have,
The following two quantities are defined as,
\( Z_c = \sqrt{\dfrac{z}{y}} \), which is called the characteristic impedance
\( \gamma = \sqrt{yz} \), which is called the propagation constant
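A minimal sketch of these two quantities, using hypothetical per-kilometre, per-phase constants at 50 Hz (all values below are illustrative, not taken from the text):

```python
import cmath
import math

# Hypothetical per-kilometre, per-phase line constants at 50 Hz.
r = 0.1       # series resistance, ohm/km
L = 1.2e-3    # series inductance, H/km
g = 0.0       # shunt conductance, S/km
C = 9.5e-9    # shunt capacitance, F/km
w = 2 * math.pi * 50.0   # angular frequency, rad/s

z = complex(r, w * L)    # series impedance per unit length
y = complex(g, w * C)    # shunt admittance per unit length

Zc = cmath.sqrt(z / y)       # characteristic impedance, ohms
gamma = cmath.sqrt(y * z)    # propagation constant, per km
```

By construction \( Z_c \gamma = z \) and \( \gamma / Z_c = y \), which gives a simple consistency check on the computed values.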
Then the previous equations can be written in terms of the characteristic impedance and propagation constant as,
Now, at \( x = 0 \) we have \( V = V_r \) and \( I = I_r \).
Therefore, by putting \( x = 0 \) in eq.( 17 ) & eq.( 18 ) we get,
Solving eq.( 19 ) & eq.( 20 ) we get the following values for \(A_1\) and \(A_2\) :
Also, at \( x = l \), we have \( V = V_S \) and \( I = I_S \).
Therefore, by replacing x by l we get,
Where,
\( \dfrac{V_r + Z_c I_r}{2} e^{\gamma l} \) is called the incident voltage wave
\( \dfrac{V_r - Z_c I_r}{2} e^{-\gamma l} \) is called the reflected voltage wave
We can rewrite eq.( 22 ) & eq.( 23 ) as,
By the corresponding analogy for the long transmission line, the obtained equations, eq.( 24 ) & eq.( 25 ), can be written in matrix form as follows:
The ABCD parameters are given by :
A = D = \( \cosh\gamma l \) per unit
B = \( Z_c \sinh\gamma l \) Ω
C = \( \dfrac{\sinh\gamma l}{Z_c} \) ℧
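These hyperbolic ABCD constants are straightforward to evaluate with complex arithmetic. The sketch below uses hypothetical line constants (illustrative only) and checks the reciprocity identity \( \cosh^2\gamma l - \sinh^2\gamma l = 1 \):

```python
import cmath

def long_line_abcd(z, y, l):
    """Exact ABCD constants of a uniformly distributed line.

    z, y -- per-km series impedance / shunt admittance
    l    -- line length in km
    """
    Zc = cmath.sqrt(z / y)            # characteristic impedance
    gl = cmath.sqrt(y * z) * l        # gamma * l
    A = cmath.cosh(gl)
    B = Zc * cmath.sinh(gl)
    C = cmath.sinh(gl) / Zc
    D = A                             # symmetric line, D = A
    return A, B, C, D

# Hypothetical 300 km line (values chosen for illustration).
z = complex(0.1, 0.38)     # ohm/km
y = complex(0.0, 3.0e-6)   # S/km
A, B, C, D = long_line_abcd(z, y, 300.0)
```

Because B·C = sinh²γl regardless of Zc, AD − BC reduces exactly to cosh²γl − sinh²γl = 1.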
Like the medium transmission line, the long line can also be approximated into an equivalent Π representation. In the Π-equivalent of a long transmission line, the series impedance is denoted by Z′ while the shunt admittance is denoted by Y′.
So, the ABCD parameters of this long line can be defined like medium transmission line as :
A = D = \( 1 + \dfrac{Y'Z'}{2} \) per unit
B = Z′ Ω
C = \( Y'\left(1 + \dfrac{Y'Z'}{4}\right) \) ℧
Comparing it with the ABCD parameters of cascaded long transmission model, we can write :
or, \( Z' = Z\dfrac{\sinh\gamma l}{\gamma l} \) Ω
Where Z(= zl), is the total impedance of the line.
By rearranging the above equation,
\[ \frac{Y'}{2} = \frac{1}{Z_C}\,\frac{\cosh\gamma l - 1}{\sinh\gamma l} \]
or, \[ \frac{Y'}{2} = \frac{1}{Z_C}\tanh\!\left(\frac{\gamma l}{2}\right) = \sqrt{\frac{y}{z}}\,\tanh\!\left(\frac{\gamma l}{2}\right) \]
This can be further reduced to,
\[ \frac{Y'}{2} = \frac{yl}{2}\,\frac{\tanh(\gamma l/2)}{(l/2)\sqrt{yz}} = \frac{Y}{2}\,\frac{\tanh(\gamma l/2)}{\gamma l/2} \]
where Y(= yl) is called the total admittance of the line.
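The two correction factors can be evaluated numerically. The sketch below uses a hypothetical 300 km line (illustrative constants) and checks that the corrected π network reproduces the exact A parameter, \( 1 + \frac{Z'Y'}{2} = \cosh\gamma l \):

```python
import cmath

# Hypothetical 300 km line (values chosen for illustration).
z = complex(0.1, 0.38)      # series impedance, ohm/km
y = complex(0.0, 3.0e-6)    # shunt admittance, S/km
l = 300.0                   # line length, km

Z = z * l                   # total series impedance
Y = y * l                   # total shunt admittance
gl = cmath.sqrt(y * z) * l  # gamma * l

# Equivalent-pi corrections to the lumped (nominal-pi) values:
Z_prime = Z * cmath.sinh(gl) / gl
Y_prime = Y * cmath.tanh(gl / 2) / (gl / 2)

# The corrected pi network must reproduce A = cosh(gamma * l).
A_pi = 1 + Z_prime * Y_prime / 2
```

The check works because \( \sinh\gamma l \cdot \tanh(\gamma l/2) = \cosh\gamma l - 1 \), so the correction factors are mutually consistent by construction.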
Now, if the line length (l) is small, \( \sinh\gamma l \approx \gamma l \) and \( \tanh\!\left(\frac{\gamma l}{2}\right) \approx \frac{\gamma l}{2} \).
It then follows that Z ≈ Z′ and Y ≈ Y′.
This means that if the line length (l) is small, the nominal-π representation with its assumption of lumped parameters is adequate. But if the length of the line exceeds a certain boundary (about 240 to 250 km), the nominal-π representation becomes erroneous and cannot be used for performance analysis. [ 31 ]
Travelling waves are the current and voltage waves that create a disturbance and move along the transmission line from the sending end to the receiving end at a constant speed. Travelling waves play a major role in knowing the voltages and currents at all points in the power system. These waves also help in designing the insulators, the protective equipment, the insulation of the terminal equipment, and the overall insulation coordination.
When the switch is closed at the transmission line's sending end, the voltage does not appear instantaneously at the other end. This is caused by the transient behaviour of the inductances and capacitances present in the transmission line. The line may not contain physical inductor and capacitor elements, but the effects of inductance and capacitance exist in a line. Therefore, when the switch is closed the voltage builds up gradually over the line conductors. This phenomenon is usually described as the voltage wave travelling from the sending end to the other end, and similarly the gradual charging of the capacitances happens due to the associated current wave.
If the switch is closed at any instant of time, the voltage does not appear at the load instantly. The first section charges first and then charges the next section; until a section has charged, the successive section cannot charge, so the process is a gradual one. It can be visualized as several water tanks connected in series, with water flowing from the first tank to the last.
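For a lossless line, the constant speed of this travelling wave, and the surge impedance that relates the voltage and current waves, follow directly from the per-unit-length inductance and capacitance. A small sketch with hypothetical values:

```python
import math

# Hypothetical lossless-line constants (illustrative values only).
L = 1.2e-3    # series inductance, H/km
C = 9.5e-9    # shunt capacitance, F/km

surge_impedance = math.sqrt(L / C)        # ohms
wave_velocity = 1.0 / math.sqrt(L * C)    # km/s, close to the speed of light

# Time for the wavefront to first reach the far end of a 300 km line:
transit_time = 300.0 / wave_velocity      # seconds
```

With typical overhead-line constants the velocity comes out near 3×10⁵ km/s, which is why the far-end voltage appears only a millisecond-scale delay after the switch closes on a long line.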
A performance gap is a disparity between the energy use and carbon emissions predicted at the design stage of buildings and the actual energy use of those buildings in operation. Research in the UK suggests that actual carbon emissions from new homes can be, on average, 2.5 times the design estimates. [ 1 ] For non-domestic buildings the gap is even higher: actual carbon emissions can be as much as 3.8 times the design estimates, on average. [ 2 ]
There are established tools for reducing the performance gap, by reviewing project objectives, outline and detailed design drawings, design calculations, implementation of designs on site, and post-occupancy evaluation. NEF's Assured Performance Process (APP) is one such tool, which is being used extensively on different sites that form part of East Hampshire's Whitehill and Bordon new town development, one of the largest regeneration projects anywhere in the UK, with high ambitions for both environmental performance and health.
The performance gap is produced mainly by uncertainties. Uncertainties are found in any “real-world” system, and buildings are no exception. As early as 1978, Gero and Dudnik wrote a paper presenting a methodology to solve the problem of designing subsystems ( HVAC ) subjected to uncertain demands. Since then, other authors have shown an interest in the uncertainties present in building design; Ramallo-González classified uncertainties in building design/construction into three different groups: [ 3 ]
Type 1 in this grouping has been divided here into two main groups: one concerning the uncertainty due to climate change, and the other concerning uncertainties due to the use of synthetic weather data files. Concerning the uncertainties due to climate change: buildings have long life spans; for example, in England and Wales, around 40% of the office blocks existing in 2004 were built before 1940 (30% if considered by floor area), [ 5 ] and 38.9% of English dwellings in 2007 were built before 1944. [ 6 ] This long life span makes buildings likely to operate under climates that may change due to global warming. De Wilde and Coley showed how important it is to design buildings that take climate change into consideration and are able to perform well in future weather. [ 7 ] Concerning the uncertainties due to the use of synthetic weather data files: Wang et al. showed the impact that uncertainties in weather data (among others) may have on energy demand calculations. [ 8 ] The deviation in calculated energy use due to variability in the weather data was found to differ between locations, from a range of (−0.5% to 3%) in San Francisco to a range of (−4% to 6%) in Washington D.C. The ranges were calculated using TMY as the reference. These deviations in demand were smaller than those due to operational parameters, for which the ranges were (−29% to 79%) for San Francisco and (−28% to 57%) for Washington D.C. The operational parameters were those linked with occupants’ behaviour. The conclusion of this paper is that occupants have a larger impact on energy calculations than the variability between synthetically generated weather data files.
The spatial resolution of weather data files was the concern covered by Eames et al. [ 9 ] Eames showed how a low spatial resolution of weather data files can be the cause of disparities of up to 40% in the heating demand.
In the work of Pettersen, uncertainties of group 2 (workmanship and quality of elements) and group 3 (behaviour) of the previous grouping were considered (Pettersen, 1994). This work shows how important occupants’ behaviour is on the calculation of the energy demand of a building. Pettersen showed that the total energy use follows a normal distribution with a standard deviation of around 7.6% when the uncertainties due to occupants are considered, and of around 4.0% when considering those generated by the properties of the building elements.
A large study was carried out by Leeds Metropolitan at Stamford Brook. This project saw 700 dwellings built to high efficiency standards. [ 10 ] The results of this project show a significant gap between the energy used expected before construction and the actual energy use once the house is occupied. The workmanship is analysed in this work. The authors emphasise the importance of thermal bridges that were not considered for the calculations, and how those originated by the internal partitions that separate dwellings have the largest impact on the final energy use. The dwellings that were monitored in use in this study show a large difference between the real energy use and that estimated using SAP, with one of them giving +176% of the expected value when in use.
Hopfe has published several papers concerning uncertainties in building design that cover workmanship. A more recent publication at the time of writing [ 11 ] looks into uncertainties of groups 2 and 3. In this work the uncertainties are defined as normal distributions. The random parameters are sampled to generate 200 tests that are sent to the simulator (VA114), the results of which are analysed to identify the uncertainties with the largest impact on the energy calculations. This work showed that the uncertainty in the value used for infiltration is the factor that is likely to have the largest influence on cooling and heating demands.
Another study performed by de Wilde and Wei Tian, [ 12 ] compared the impact of most of the uncertainties affecting building energy calculations taking into account climate change. De Wilde and Tian used a two dimensional Monte Carlo Analysis to generate a database obtained with 7280 runs of a building simulator. A sensitivity analysis was applied to this database to obtain the most significant factors on the variability of the energy demand calculations. Standardised Regression Coefficients and Standardised Rank Regression Coefficients were used to compare the impacts of the uncertainties.
De Wilde and Tian agreed with Hopfe on the impact of uncertainties in the infiltration over energy calculations, but also introduced other factors, including uncertainties in: weather, U-Value of windows, and other variables related with occupants’ behaviour (equipment and lighting). Their paper compares many of the uncertainties with a good sized database providing a realistic comparison for the scope of the sampling of the uncertainties.
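As an illustration of the sensitivity-analysis step described above (not De Wilde and Tian's actual model or data), standardised regression coefficients can be estimated from a synthetic Monte Carlo sample in which infiltration is, by construction, the dominant input:

```python
import random
import statistics

random.seed(1)

# Synthetic Monte Carlo sample: a made-up linear energy-demand model.
# All coefficients and distributions are illustrative assumptions.
n = 2000
infiltration = [random.gauss(0.7, 0.2) for _ in range(n)]
u_value = [random.gauss(1.8, 0.3) for _ in range(n)]
gains = [random.gauss(5.0, 1.0) for _ in range(n)]
demand = [40 + 25 * a + 8 * b + 3 * c + random.gauss(0, 2)
          for a, b, c in zip(infiltration, u_value, gains)]

def src(xs, ys):
    """Standardised regression coefficient of one independent input:
    the regression slope rescaled by std(x)/std(y)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(xs) - 1)
    return (cov / (sx * sx)) * (sx / sy)

# Rank the inputs by the magnitude of their standardised coefficients.
ranking = {name: abs(src(xs, demand))
           for name, xs in [("infiltration", infiltration),
                            ("u_value", u_value),
                            ("gains", gains)]}
```

Because the inputs are sampled independently, the single-input SRC estimate is valid here; with correlated inputs a full multiple regression on the standardised variables would be needed instead.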
The work of Schnieders and Hermelink [ 14 ] showed a substantial variability in the energy demands of low-energy buildings designed under the same specification (Passivhaus). Although the passivhaus standard has a very controlled, high quality workmanship, large differences have been seen in energy demand in different houses.
Blight and Coley [ 15 ] showed that this variability can arise from variance in occupant behaviour (the use of windows and doors was included in this work). The work of Blight and Coley proves two things: (1) occupants have a substantial influence on energy use; and (2) the model they used to generate occupants’ behaviour is accurate for the creation of behavioural patterns of inhabitants.
The method used in the previous paper [ 16 ] to generate accurate profiles of occupants’ behaviour was the one developed by Richardson et al. [ 17 ] The method was developed using the Time-Use Survey (TUS) of the United Kingdom as a reference for real occupant behaviour; this database was compiled by recording the activity of more than 6,000 occupants in 24-hour diaries at a 10-minute resolution. Richardson’s paper shows how the tool is able to generate behavioural patterns that correlate with the real data obtained from the TUS.
The availability of this tool allows scientists to model the uncertainty of occupants’ behaviour as a set of behavioural patterns that have been proven to correlate with real occupants’ behaviour.
There have been works published that take occupancy into account in optimisation using so-called robust optimisation. [ 18 ]
Performance rating is the step in the work measurement in which the analyst observes the worker's performance and records a value representing that performance relative to the analyst's concept of standard performance. [ 1 ]
Performance rating helps people do their jobs better, identifies training and education needs, assigns people to work they can excel in, and maintains fairness in salaries, benefits, promotion, hiring, and firing. Most workers want to know how they are doing on the job, and workers need performance feedback to work effectively. Providing an employee with timely, accurate, constructive feedback is key to effective performance. [ 2 ] Motivational strategies such as goal setting depend upon regular performance updates. There are many sources of error with performance ratings; error can be reduced through rater training and through the use of behaviorally anchored rating scales . In industrial and organizational psychology such scales are used to clearly define the behaviors that constitute poor, average, and superior performance.
There are several methods of performance rating. The simplest and most common method is based on speed or pace. Dexterity and effectiveness are also important considerations when assessing performance. Standard performance is denoted as 100. [ 3 ] A performance rating greater than 100 means the worker's performance is more than standard, and less than 100 means the worker's performance is less than standard. Standard performance is not necessarily the performance level expected of workers. For example, a standard performance rating of a worker walking is 4.5 miles/hour. The rating is used in conjunction with a timing study to level out the actual time (observed time) taken by the worker under observation. This leads to a basic minute value (observed time × rating / 100). This balels out fast and slow workers to arrive at a standard/average time. The standard value of 100 is not a percentage; it simply makes the calculations easier. Most companies that set targets using work study methods will set it at a level of around 85, not 100.
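The levelling calculation described above can be sketched as follows (the worker times and ratings are made-up values for illustration):

```python
def basic_time(observed_time, rating):
    """Basic (levelled) minute value = observed time * rating / 100."""
    return observed_time * rating / 100.0

# A fast worker rated 120 who took 0.50 min and a slow worker rated 80
# who took 0.75 min level out to the same basic time of 0.60 min.
fast = basic_time(0.50, 120)
slow = basic_time(0.75, 80)
```

This shows how the rating compensates for pace: the short observed time of the fast worker is scaled up, and the long observed time of the slow worker is scaled down, toward the same standard time.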
Performance rating has become a continuous process by which an employer and employees attempt to understand company goals and how each employee's progress toward contributing to them is measured. Performance measurement is an ongoing activity for all managers and their subordinates . [ 4 ] A performance measurement uses the following indicators:
The purpose of performance rating is to provide a systematic evaluation of the employees’ contribution to the organization. [ 6 ] Globally, the combination of indicators and performance management , combined with intensifying work, transforms the work of employees and of managers. On the managerial level, the will of the hierarchy to fulfill performance indicators depends on task prioritizing, which is not shared by everyone.
Performance rating intensifies [ clarification needed ] the environment of the organization but provides structure for production. [ 7 ] Performance satisfaction [ clarification needed ] is found to be directly related to both the affective commitment and the intentions of the employee; employees who are motivated are more likely to meet goals.
Performativity is the concept that language can function as a form of social action and have the effect of change. [ 1 ] The concept has multiple applications in diverse fields such as anthropology , social and cultural geography , economics , gender studies ( social construction of gender ), law , linguistics , performance studies , history , management studies and philosophy .
The concept is first described by philosopher of language John L. Austin when he referred to a specific capacity: the capacity of speech and communication to act or to consummate an action. Austin differentiated this from constative language, which he defined as descriptive language that can be "evaluated as true or false". Common examples of performative language are making promises, betting, performing a wedding ceremony, an umpire calling a foul, or a judge pronouncing a verdict. [ 1 ]
The concept of performance has been developed by such scholars as Richard Schechner , Victor Turner , Clifford Geertz , Erving Goffman , John Austin , John Searle , Pierre Bourdieu , Stern and Henderson, and Judith Butler .
Performance is a bodily practice that produces meaning. It is the presentation or 're-actualization' of symbolic systems through living bodies as well as lifeless mediating objects, such as architecture . [ 2 ] In the academic field, as opposed to the domain of the performing arts , the concept of performance is generally used to highlight dynamic interactions between social actors or between a social actor and their immediate environment.
Performance is an equivocal concept and for the purpose of analysis it is useful to distinguish between two senses of 'performance'.
In the more formal sense, performance refers to a framed event. Performance in this sense is an enactment out of convention and tradition. Founder of the discipline of performance studies Richard Schechner dubs this category 'is-performance'. [ 3 ] In a weaker sense, performance refers to the informal scenarios of daily life, suggesting that everyday practices are 'performed'. Schechner called this the 'as-performance'. [ 3 ] Generally the performative turn is concerned with the latter, although the two senses of performance should be seen as ends of a spectrum rather than distinct categories. [ 3 ]
The performative turn is a paradigmatic shift in the humanities and social sciences that affected such disciplines as anthropology , archaeology , linguistics , ethnography , history and the relatively young discipline of performance studies . Previously used as a metaphor for theatricality , performance is now often employed as a heuristic principle to understand human behaviour . The assumption is that all human practices are 'performed', so that any action at whatever moment or location can be seen as a public presentation of the self. This methodological approach entered the social sciences and humanities in the 1990s but is rooted in the 1940s and 1950s.
Underlying the performative turn was the need to conceptualize how human practices relate to their contexts in a way that went beyond the traditional sociological methods that did not problematize representation. Instead of focusing solely on given symbolic structures and texts, scholars stress the active, social construction of reality as well as the way that individual behaviour is determined by the context in which it occurs. Performance functions both as a metaphor and an analytical tool and thus provides a perspective for framing and analysing social and cultural phenomena.
The origins of the performative turn can be traced back to two strands of theorizing about performance as a social category that surfaced in the 1940s and 1950s.
The first strand is anthropological in origin and may be labelled the dramaturgical model. Kenneth Burke (1945) expounded a 'dramatistic approach' to analyse the motives underlying such phenomena as communicative actions and the history of philosophy. Anthropologist Victor Turner focussed on cultural expression in staged theatre and ritual. In his highly influential The Presentation of Self in Everyday Life (1959), Erving Goffman emphasized the link between social life and performance by stating that 'the theatre of performances is in public acts'. Within the performative turn, the dramaturgical model evolved from the classical concept of 'society as theatre' into a broader category that considers all culture as performance.
The second strand of theory concerns a development in the philosophy of language launched by John Austin in the 1950s. In How to do things with words [ 4 ] he introduced the concept of the ' performative utterance ', opposing the prevalent principle that declarative sentences are always statements that can be either true or false. Instead he argued that 'to say something is to do something'. [ 5 ] In the 1960s John Searle extended this concept to the broader field of speech act theory, where due attention is paid to the use and function of language. In the 1970s Searle engaged in polemics with postmodern philosopher Jacques Derrida , about the determinability of context and the nature of authorial intentions in a performative text.
The performative turn is anchored in the broader cultural development of postmodernism . An influential current in modern thought, postmodernism is a radical reappraisal of the assumed certainty and objectivity of scientific efforts to represent and explain reality. Postmodern scholars argue that society itself both defines and constructs reality through experience, representation and performance. From the 1970s onwards, the concept of performance was integrated into a variety of theories in the humanities and social sciences, such as phenomenology , critical theory (the Frankfurt school ), semiotics , Lacanian psychoanalysis , deconstructionism and feminism . [ 2 ] The conceptual shift became manifest in a methodology oriented towards culture as a dynamic phenomenon as well as in the focus on subjects of study that were neglected before, such as everyday life. For scholars, the concept of performance is a means to come to grips with human agency and to better understand the way social life is constructed.
The term derives from the founding work in speech act theory by ordinary language philosopher J. L. Austin . In the 1950s, Austin gave the name performative utterances to situations where saying something was doing something, rather than simply reporting on or describing reality. The paradigmatic case here is speaking the words "I do". [ 6 ] Austin did not use the word performativity .
Breaking with analytic philosophy , Austin argued in How to Do Things With Words that a "performative utterance" cannot be said to be either true or false as a constative utterance might be: it can only be judged either "happy" or "infelicitous" depending upon whether the conditions required for its success have been met. In this sense, performativity is a function of the pragmatics of language. Having shown that all utterances perform actions, even apparently constative ones, Austin famously discarded the distinction between "performative" and "constative" utterances halfway through the lecture series that became the book and replaced it with a three-level framework:
For example, if a speech act is an attempt to distract someone, the illocutionary force is the attempt to distract and the perlocutionary effect is the actual distraction caused by the speech act in the interlocutor.
Austin's account of performativity has been subject to extensive discussion in philosophy, literature, and beyond. Jacques Derrida , Shoshana Felman , Judith Butler , and Eve Kosofsky Sedgwick are among the scholars who have elaborated upon and contested aspects of Austin's account from the vantage point of deconstruction , psychoanalysis , feminism , and queer theory . Particularly in the work of feminists and queer theorists, performativity has played an important role in discussions of social change (Oliver 2003).
The concept of performativity has also been used in science and technology studies and in economic sociology . Andrew Pickering has proposed to shift from a "representational idiom" to a "performative idiom" in the study of science. Michel Callon has proposed to study the performative aspects of economics , i.e. the extent to which economic science plays an important role not only in describing markets and economies, but also in framing them. Karen Barad has argued that science and technology studies deemphasize the performativity of language in order to explore the performativity of matter (Barad 2003).
Other uses of the notion of performativity in the social sciences include the daily behavior (or performance) of individuals based on social norms or habits. Philosopher and feminist theorist Judith Butler has used the concept of performativity in their analysis of gender development, as well as in analysis of political speech. Eve Kosofsky Sedgwick describes queer performativity as an ongoing project for transforming the way we may define—and break—boundaries to identity. Through her suggestion that shame is a potentially performative and transformational emotion, Sedgwick has also linked queer performativity to affect theory . Also innovative in Sedgwick's discussion of the performative is what she calls periperformativity (2003: 67–91), which is effectively the group contribution to the success or failure of a speech act .
In A Taxonomy of Illocutionary Acts , John Searle takes up and reformulates the ideas of his colleague J. L. Austin . [ 7 ] Though Searle largely supports and agrees with Austin's theory of speech acts, he has a number of critiques, which he outlines: "In sum, there are (at least) six related difficulties with Austin's taxonomy; in ascending order of importance: there is a persistent confusion between verbs and acts, not all the verbs are illocutionary verbs, there is too much overlap of the categories, too much heterogeneity within the categories, many of the verbs listed in the categories don't satisfy the definition given for the category and, most important, there is no consistent principle of classification." [ 8 ]
His last key departure from Austin lies in Searle's claim that four of his universal 'acts' do not need 'extra-linguistic' contexts to succeed. [ 9 ] As opposed to Austin who thinks all illocutionary acts need extra-linguistic institutions, Searle disregards the necessity of context and replaces it with the "rules of language". [ 9 ]
In The Postmodern Condition: A Report on Knowledge (1979, English translation 1986), philosopher and cultural theorist Jean-François Lyotard defined performativity as the defining mode of legitimation of postmodern knowledge and social bonds, that is, power. [ 10 ] In contrast to the legitimation of modern knowledge through such grand narratives as Progress, Revolution, and Liberation, performativity operates by system optimization or the calculation of input and outputs. In a footnote, Lyotard aligns performativity with Austin's concept of performative speech act. Postmodern knowledge must not only report: it must do something and do it efficiently by maximizing input/output ratios.
Lyotard uses Wittgenstein's notion of language games to theorize how performativity governs the articulation, funding, and conduct of contemporary research and education, arguing that at bottom it involves the threat of terror: "be operational (that is commensurable) or disappear" (xxiv). While Lyotard is highly critical of performativity, he notes that it calls on researchers to explain not only the worth of their work but also the worth of that worth.
Lyotard associated performativity with the rise of digital computers in the post-World War II period. In Postwar: A History of Europe Since 1945, historian Tony Judt cites Lyotard to argue that the Left has largely abandoned revolutionary politics for human rights advocacy. The widespread adoption of performance reviews, organizational assessments, and learning outcomes by different social institutions worldwide has led social researchers to theorize "audit culture" and "global performativity".
Against performativity and Jürgen Habermas ' call for consensus, Lyotard argued for legitimation by paralogy , or the destabilizing, often paradoxical, introduction of difference into language games.
Philosopher Jacques Derrida drew on Austin's theory of performative speech act while deconstructing its logocentric and phonocentric premises and reinscribing it within the operations of generalized writing. In contrast to structuralism's focus on linguistic form, Austin had introduced the force of speech acts, which Derrida aligns with Nietzsche's insights on language.
In "Signature, Event, Context," Derrida focused on Austin's privileging of speech and the accompanying presumptions of the presence of a speaker ("signature") and the bounding of a performative's force by an act or a context. In a passage that would become a touchstone of poststructuralist thought, Derrida stresses the citationality or iterability of any and all signs.
Every sign, linguistic or nonlinguistic, spoken or written (in the current sense of this opposition), in a small or large unit, can be cited , put between quotation marks; in doing so it can break with every given context, engendering an infinity of new contexts in a manner which is absolutely illimitable. This does not imply that the mark is valid outside of a context, but on the contrary that there are only contexts without any center or absolute anchorage [ ancrage ]. This citationality, this duplication or duplicity, this iterability of the mark is neither an accident nor an anomaly, it is that (normal/abnormal) without which a mark could not even have a function called "normal." What would a mark be that could not be cited? Or one whose origins would not get lost along the way? [ 11 ]
Derrida's stress on the citational dimension of performativity would be taken up by Judith Butler and other theorists. While he addressed the performativity of individual subject formation, Derrida also raised such questions as whether we can mark when the event of the Russian revolution went awry, thus scaling up the field of performativity to historical dimensions.
Philosopher and feminist theorist Judith Butler offered a new, more Continental (specifically, Foucauldian ) reading of the notion of performativity, which has its roots in linguistics and philosophy of language . They describe performativity as "that reiterative power of discourse to produce the phenomena that it regulates and constrains." [ 12 ] It is an anti-essentialist theory of subjectivity in which a performance of the self is repeated and dependent upon a social audience. In this way, these unfixed and precarious performances come to have the appearance of substance and continuity.
A key theoretical point that was most radical in regards to theories of subjectivity and performance is that there is no performer behind the performance. Butler derived this idea from Nietzsche's concept of "no doer behind the deed." This is to say that there is no self before the performance of the self, but rather that the performance has constitutive powers. This is how categories of the self for Judith Butler, such as gender, are seen as something that one "does," rather than something one "is." They have largely used this concept in their analysis of gender development. [ 13 ]
Influenced by Austin, Butler argued that gender is socially constructed through commonplace speech acts and nonverbal communication that are performative, in that they serve to define and maintain identities . [ 14 ] This view of performativity reverses the idea that a person's identity is the source of their secondary actions (speech, gestures). Instead, it views actions, behaviors, and gestures as both the result of an individual's identity as well as a source that contributes to the formation of one's identity which is continuously being redefined through speech acts and symbolic communication. [ 1 ] This view was also influenced by philosophers such as Michel Foucault and Louis Althusser . [ 15 ]
The concept places emphasis on the manners by which identity is passed or brought to life through discourse. Performative acts are types of authoritative speech. This can only happen and be enforced through the law or norms of the society. These statements, just by speaking them, carry out a certain action and exhibit a certain level of power. Examples of these types of statements are declarations of ownership, baptisms, inaugurations, and legal sentences. Something that is key to performativity is repetition. [ 16 ] The statements are not singular in nature or use and must be used consistently in order to exert power. [ 17 ]
Several criticisms have been raised regarding Butler's reading of performativity. The first is that the theory is individual in nature and does not take into consideration such factors as the space within which the performance occurs, the others involved, and how others might see or interpret what they witness. It has also been argued that Butler overlooks the unplanned effects of the performance act and the contingencies surrounding it. [ 18 ]
Another criticism is that Butler is not clear about the concept of subject. It has been said that in Butler's writings, the subject sometimes only exists tentatively, sometimes possesses a "real" existence, and other times is socially active. Also, some observe that the theory might be better suited to literary analysis as opposed to social theory. [ 19 ]
Others criticize Butler for taking ethnomethodological and symbolic interactionist sociological analyses of gender and merely reinventing them in the concept of performativity. [ 20 ] [ 21 ] For example, A. I. Green [ 21 ] argues that the work of Kessler and McKenna (1978) and West and Zimmerman (1987) builds directly from Garfinkel (1967) and Goffman (1959) to deconstruct gender into moments of attribution and iteration in a continual social process of "doing" masculinity and femininity in the performative interval . These latter works are premised on the notion that gender does not precede but, rather, follows from practice, instantiated in micro-interaction.
Performance studies emerged through the work of, among others, theatre director and scholar Richard Schechner , who applied the notion of performance to human behaviour beyond the performing arts . His interpretation of performance as non-artistic yet expressive social behaviour and his collaboration in 1985 with anthropologist Victor Turner led to the beginning of performance studies as a separate discipline. Schechner defines performance as 'restored behaviour', to emphasize the symbolic and coded aspects of culture. [ 22 ] Schechner understands performance as a continuum. Not everything is meant to be a performance, but everything, from performing arts to politics and economics, can be studied as performance. [ 3 ]
In the 1970s, Pierre Bourdieu introduced the concept of ' habitus ' or regulated improvisation, in a reaction against the structuralist notion of culture as a system of rules (Bourdieu 1972). Culture in his perspective undergoes a shift from 'a productive to a reproductive social order in which simulations and models constitute the world so that the distinction between real and appearance becomes erased'. [ 23 ] Though Bourdieu himself does not often employ the term 'performance', the notion of the bodily habitus as a formative site has been a source of inspiration for performance theorists.
The cultural historian Peter Burke suggested using the term ' occasionalism ' to stress the implication of the idea of performance that '[...] on different occasions or in different situations the same person behaves in different ways'. [ 24 ]
Within the social sciences and humanities, an interdisciplinary strand that has contributed to the performative turn is non-representational theory . It is a 'theory of practices' that focuses on repetitive ways of expression, such as speech and gestures. As opposed to representational theory, it argues that human conduct is a result of linguistic interplay rather than of codes and symbols that are consciously planned. Non-representational theory interprets actions and events, such as dance or theatre, as actualisations of knowledge. It also intends to shift the focus away from the technical aspects of representation, to the practice itself. [ 25 ]
Performance offers a tremendous interdisciplinary archive of social practices. It offers methods to study such phenomena as body art, ecological theatre, multimedia performance and other kinds of performance arts. [ 26 ]
Performance also provides a new registry of kinaesthetic effects, enabling a more conscientious observation of the moving body. The changing experience of movement, for example as a result of new technologies, has become an important subject of research. [ 27 ]
Moreover, the performative turn has helped scholars to develop an awareness of the relations between everyday life and stage performances. For example, at conferences and lectures, on the street and in other places where people speak in public, performers tend to use techniques derived from the world of theatre and dance. [ 28 ]
Performance allows us to study nature and other apparently 'immovable' and 'objectified' elements of the human environment (e.g. architecture) as active agents, rather than only as passive objects. Thus, in recent decades environmental scholars have acknowledged the existence of a fluid interaction between man and nature.
The performative turn has provided additional tools to study everyday life. A household for example may be considered as a performance, in which the relation between wife and husband is a role play between two actors.
In economics, the "performativity thesis" is the claim that the assumptions and models used by professionals and popularizers affect the phenomena they purport to describe; bringing the world more into line with theory. [ 29 ] [ 30 ] It also refers, more largely, to the idea of economic reality as a ceaselessly provoked reality and of things such as performance indicators, valuation formulas, consumer tests, stock prices or financial contracts constituting what they refer to. [ 31 ] This theory was developed by Michel Callon in The Laws of the Markets , before being further developed in Do Economists Make Markets edited by Donald Angus MacKenzie , Fabian Muniesa and Lucia Siu, and in Enacting Dismal Science edited by Ivan Boldyrev and Ekaterina Svetlova. [ 32 ] [ 33 ] The most important work in the field is that of Donald MacKenzie and Yuval Millo [ 34 ] [ 35 ] on the social construction of financial markets. In a seminal article, they showed that the option pricing theory called BSM (Black-Scholes-Merton) has been successful empirically not because of the discovery of preexisting price regularities, but because participants used it to set option prices, so that it made itself true.
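For orientation, the model at issue in MacKenzie and Millo's study can be stated compactly. The sketch below is an illustration, not drawn from the article; the function names and the parameter values in the example are assumed. It computes the Black-Scholes-Merton price of a European call option, the quantity traders came to use when quoting option prices:

```python
from math import log, sqrt, exp, erf

def norm_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def bsm_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes-Merton price of a European call.

    S: spot price, K: strike, r: risk-free rate,
    sigma: volatility, T: time to maturity in years.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Example: an at-the-money one-year call (all inputs assumed)
price = bsm_call(S=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0)
```

On the performativity account, the formula's empirical success reflected its adoption by market participants in setting prices, rather than the discovery of a preexisting price regularity.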
The thesis of performativity of economics has been extensively criticized by Nicolas Brisset in Economics and Performativity . [ 36 ] Brisset defends the idea that the notion of performativity used by Callonian and Latourian sociologists leads to an overly relativistic view of the social world. Drawing on the work of John Austin and David Lewis , Brisset theorizes the idea of limits to performativity. To do this, Brisset considers that a theory, in order to be "performative", must become a convention. This requires conditions to be met. To take a convention status, a theory will have to:
Based on this framework, Brisset criticized the seminal work of MacKenzie and Millo on the performativity of the Black-Scholes-Merton financial model. [ 38 ] Drawing on the work of Pierre Bourdieu , Brisset also uses the notion of Speech Act to study economic models and their use in political power relations. [ 39 ]
MacKenzie's approach was also criticized by Uskali Maki for not using the concept of performativity in accordance with Austin's formulation. [ 40 ] This point gave rise to a debate in economic philosophy . [ 41 ] [ 42 ]
Judith Butler theorized gender as constructed by repeated acts: acts that people come to perform in the mode of belief and that cite existing norms, analogous to a script. Butler sees gender not as an expression of what one is but as something that one does. The appearance of a gendered essence is merely a "performative accomplishment". [ 43 ] Furthermore, they do not see it as socially imposed on a self that is prior to gender, as the self is not distinct from the categories which constitute it. According to Butler's theory, homosexuality and heterosexuality are not fixed categories. For Butler, a person is merely in a condition of "doing straightness" or "doing queerness," where these categories are not natural but historically and socially constituted. [ 18 ]
"For Butler, the distinction between the personal and the political or between private and public is itself a fiction designed to support an oppressive status quo: our most personal acts are, in fact, continually being scripted by hegemonic social conventions and ideologies ". [ 44 ]
In management, the concept of performativity has also been mobilized, relying on its diverse conceptualizations (Austin, Barad, Barnes, Butler, Callon, Derrida, Lyotard, etc.). [ 45 ]
In the study of management theories, performativity shows how actors use theories, how they produce effects on organizational practices and how these effects shape these practices. [ 46 ] [ 47 ]
For instance, by building on Michel Callon's perspective, the concept of performativity has been mobilized to show how the concept of Blue Ocean Strategy transformed organizational practices. [ 48 ]
The German news anchorman Hanns Joachim Friedrichs once argued that a good journalist should never act in collusion with anything, not even with a good thing. On the evening of November 9, 1989, the evening of the fall of the Berlin Wall , however, Friedrichs reportedly broke his own rule when he announced: "The gates of the wall are wide open." („Die Tore in der Mauer stehen weit offen.") In reality, the gates were still closed. According to a historian, it was this announcement that encouraged thousands of East Berliners to march towards the wall, finally forcing the border guards to open the gates. In the sense of performativity, Friedrichs's words became a reality. [ 49 ] [ 50 ]
Theories of performativity have extended across multiple disciplines and discussions. Notably, interdisciplinary theorist José Esteban Muñoz has related video to theories of performativity. Specifically, Muñoz looks at the 1996 documentary by Susana Aiken and Carlos Aparicio, "The Transformation." [ 51 ]
Although historically and theoretically related to performance art, video art is not an immediate performance; it is mediated, iterative and citational. In this way, video art raises questions of performativity. Additionally, video art frequently puts bodies on display, complicating borders, surfaces, embodiment, and boundaries, and so indexing performativity.
Despite cogent attempts at definition, the concept of performance continues to be plagued by ambiguities. Most pressing seems to be the paradox between performance as the consequence of following a script (cf. Schechner's restored behaviour) and performance as a fluid activity with ample room for improvisation. Another problem involves the discrepancy between performance as a human activity that constructs culture (e.g. Butler and Derrida) on the one hand and performance as a representation of culture on the other (e.g. Bourdieu and Schechner). Another issue, important to pioneers such as Austin but now deemed irrelevant by postmodernism, concerns the sincerity of the actor. Can performance be authentic, or is it a product of pretence? | https://en.wikipedia.org/wiki/Performativity
Pericentriolar material (PCM, sometimes also called pericentriolar matrix) is a highly structured, [ 1 ] dense mass of protein which makes up the part of the animal centrosome that surrounds the two centrioles . The PCM contains proteins responsible for microtubule nucleation and anchoring, [ 2 ] including γ-tubulin , pericentrin and ninein .
Although the PCM appears amorphous by electron microscopy , super-resolution microscopy finds that it is highly organized. The PCM has a 9-fold symmetry that mimics the symmetry of the centriole . [ citation needed ] Some PCM proteins are organized such that one end of the protein is found near the centriole and the other end is farther away from the centriole.
The PCM size is dynamic during the cell cycle . After cell division, the PCM size is reduced in a process named centrosome reduction . [ 3 ] During the G2 phase of the cell cycle, the PCM grows in size in a process named centrosome maturation .
According to the Gene Ontology , the following human proteins are associated with the PCM [1] :
| https://en.wikipedia.org/wiki/Pericentriolar_material
Pericline is a form of albite exhibiting elongate prismatic crystals. [ 1 ]
Pericline twinning is a type of crystal twinning which show fine parallel twin laminae typically found in the alkali feldspars microcline . [ 2 ] The twinning results from a structural transformation between high temperature and low temperature forms. [ 3 ]
| https://en.wikipedia.org/wiki/Pericline
In organic chemistry , a pericyclic reaction is a type of organic reaction in which the transition state of the molecule has a cyclic geometry, the reaction progresses in a concerted fashion, and the bond orbitals involved in the reaction overlap in a continuous cycle at the transition state. Pericyclic reactions stand in contrast, on the one hand, to linear reactions , which encompass most organic transformations and proceed through an acyclic transition state, and, on the other hand, to coarctate reactions , which proceed through a doubly cyclic, concerted transition state. Pericyclic reactions are usually rearrangement or addition reactions. The major classes of pericyclic reactions are given in the table below (the three most important classes are shown in bold). Ene reactions and cheletropic reactions are often classed as group transfer reactions and cycloadditions/cycloeliminations, respectively, while dyotropic reactions and group transfer reactions (if ene reactions are excluded) are rarely encountered.
In general, these are considered to be equilibrium processes , although it is possible to push a reaction in one direction by designing it so that the product lies at a significantly lower energy level; this is due to a unimolecular interpretation of Le Chatelier's principle . There is thus a set of "retro" pericyclic reactions.
By definition, pericyclic reactions proceed through a concerted mechanism involving a single, cyclic transition state. Because of this, prior to a systematic understanding of pericyclic processes through the principle of orbital symmetry conservation , they were facetiously referred to as 'no-mechanism reactions'. However, reactions for which pericyclic mechanisms can be drawn often have related stepwise mechanisms proceeding through radical or dipolar intermediates that are also viable. Some classes of pericyclic reactions, such as the [2+2] ketene cycloaddition reactions , can be 'controversial' because their mechanism is sometimes not definitively known to be concerted (or may depend on the reactive system). Moreover, pericyclic reactions also often have metal-catalyzed analogs, although usually these are also not technically pericyclic, since they proceed via metal-stabilized intermediates, and therefore are not concerted.
Despite these caveats, the theoretical understanding of pericyclic reactions is probably among the most sophisticated and well-developed in all of organic chemistry. The understanding of how orbitals interact in the course of a pericyclic process has led to the Woodward–Hoffmann rules , a simple set of criteria to predict whether a pericyclic mechanism for a reaction is likely or favorable. For instance, these rules predict that the [4+2] cycloaddition of butadiene and ethylene under thermal conditions is likely a pericyclic process, while the [2+2] cycloaddition of two ethylene molecules is not. These are consistent with experimental data, supporting an ordered, concerted transition state for the former and a multistep radical process for the latter. Several equivalent approaches, outlined below, lead to the same predictions.
The aromatic transition state theory assumes that the minimum energy transition state for a pericyclic process is aromatic , with the choice of reaction topology determined by the number of electrons involved. For reactions involving (4 n + 2)-electron systems (2, 6, 10, ... electrons; odd number of electron pairs), Hückel topology transition states are proposed, in which the reactive portion of the reacting molecule or molecules have orbitals interacting in a continuous cycle with an even number of nodes. In 4 n -electron systems (4, 8, 12, ... electrons; even number of electron pairs) Möbius topology transition states are proposed, in which the reacting molecules have orbitals interacting in a twisted continuous cycle with an odd number of nodes. The corresponding (4 n + 2)-electron Möbius and 4 n -electron Hückel transition states are antiaromatic and are thus strongly disfavored. Aromatic transition state theory results in a particularly simply statement of the generalized Woodward–Hoffmann rules: A pericyclic reaction involving an odd number of electron pairs will proceed through a Hückel transition state (even number of antarafacial components in Woodward–Hoffmann terminology), [ 1 ] while a pericyclic reaction involving an even number of electron pairs will proceed through a Möbius transition state (odd number of antarafacial components).
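The electron-count rule above reduces to a parity check on the number of electron pairs. A minimal sketch follows; the function name and interface are hypothetical, not taken from any chemistry library:

```python
def favored_topology(n_electrons: int) -> str:
    """Favored (aromatic) transition-state topology for a thermal
    pericyclic reaction, per aromatic transition state theory.

    4n + 2 electrons (odd number of electron pairs)  -> Hückel topology
    4n electrons     (even number of electron pairs) -> Möbius topology
    """
    if n_electrons <= 0 or n_electrons % 2 != 0:
        raise ValueError("expected a positive even electron count")
    pairs = n_electrons // 2
    return "Hückel" if pairs % 2 == 1 else "Möbius"

# Thermal [4+2] Diels-Alder: 6 electrons, 3 pairs -> Hückel topology
print(favored_topology(6))
# Thermal [2+2] cycloaddition: 4 electrons, 2 pairs -> Möbius topology
print(favored_topology(4))
```

The 6-electron thermal Diels-Alder case thus proceeds through a Hückel (aromatic) transition state, while the 4-electron [2+2] case would require a geometrically difficult Möbius topology, consistent with the Woodward–Hoffmann predictions discussed above.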
Equivalently, pericyclic reactions have been analyzed with correlation diagrams , which track the evolution of the molecular orbitals (known as 'correlating' the molecular orbitals) of the reacting molecules as they progress from reactants to products via a transition state, based on their symmetry properties. Reactions are favorable ('allowed') if the ground state of the reactants correlates with the ground state of the products, while they are unfavorable ('forbidden') if the ground state of the reactants correlates with an excited state of the products. This idea is known as the conservation of orbital symmetry . Consideration of the interactions of the highest occupied and lowest unoccupied molecular orbitals ( frontier orbital analysis ) is another approach to analyzing the transition state of a pericyclic reaction.
The arrow-pushing convention for pericyclic reactions has a somewhat different meaning compared to polar (and radical) reactions. For pericyclic reactions, there is often no obvious movement of electrons from an electron rich source to an electron poor sink. Rather, electrons are redistributed around a cyclic transition state. Thus, electrons can be pushed in either of two directions for a pericyclic reaction. For some pericyclic reactions, however, there is a definite polarization of charge at the transition state due to asynchronicity (bond formation and breaking do not occur to a uniform extent at the transition state). Thus, one direction may be preferred over another, although arguably, both depictions are still formally correct. In the case of the Diels-Alder reaction shown below, resonance arguments make clear the direction of polarization. In more complex situations, however, detailed computations may be needed to determine the direction and extent of polarization.
Closely related to pericyclic processes are reactions that are pseudopericyclic . Although a pseudopericyclic reaction proceeds through a cyclic transition state, two of the orbitals involved are constrained to be orthogonal and cannot interact. Perhaps the most famous example is the hydroboration of an olefin . Although this appears to be a 4-electron Hückel topology forbidden group transfer process, the empty p orbital and sp 2 hybridized B–H bond are orthogonal and do not interact. Hence, the Woodward-Hoffmann rules do not apply. (The fact that hydroboration is believed to proceed through initial π complexation may also be relevant.)
Pericyclic reactions also occur in several biological processes: | https://en.wikipedia.org/wiki/Pericyclic_reaction |
The peridinin-chlorophyll-protein complex ( PCP or PerCP ) is a soluble molecular complex consisting of the peridinin-chlorophyll a-protein bound to peridinin , chlorophyll , and lipids . The peridinin molecules absorb light in the blue-green wavelengths (470 to 550 nm) and transfer energy to the chlorophyll molecules with extremely high efficiency. [ 1 ] [ 2 ] PCP complexes are found in many photosynthetic dinoflagellates , in which they may be the primary light-harvesting complexes . [ 3 ]
The PCP protein has been identified in dinoflagellate genomes in at least two forms, a homodimeric form composed of two 15-kD monomers, and a monomeric form of around 32 kD believed to have evolved from the homodimeric form via gene duplication . The monomeric form consists of two pseudosymmetrical eight-helix domains in which the helices are packed in a complex topology resembling that of the beta sheets in a jelly roll fold . [ 1 ] The three-dimensional arrangement of helices forms a boat-shaped molecule with a large central cavity in which the pigments and lipids are bound. Each eight-helix segment typically binds four peridinin molecules, one chlorophyll a molecule, and one lipid molecule such as digalactosyl diacyl glycerol ; however, this stoichiometry varies among species and among PCP isoforms . [ 4 ] [ 3 ] The most common 4:1 peridinin:chlorophyll ratio was predicted by spectroscopy in the 1970s, [ 5 ] but was unconfirmed until the crystal structure of the Amphidinium carterae PCP complex was solved in the 1990s. [ 1 ] Whether formed from a protein monomer or dimer, the assembled protein-pigment complex is sometimes known as bPCP (for "building block") and is the minimal stable unit. [ 4 ] In at least some PCP forms, including that from A. carterae , these building blocks assemble into a trimer thought to be the biologically functional state. [ 1 ]
When the X-ray crystallography structure of PCP was solved in 1997, it represented a novel protein fold , and its topology remains unique among known proteins. The structure is referred to by the CATH database , which systematically classifies protein structures, as an "alpha solenoid" fold; however, elsewhere in the literature the term alpha solenoid is used for open and less compact helical protein structures. [ 6 ]
Photosynthetic dinoflagellates contain membrane -bound light-harvesting complexes similar to those found in green plants . They additionally contain water-soluble protein-pigment complexes that exploit carotenoids such as peridinin to extend their photosynthetic capacity. Peridinin absorbs light in the blue-green wavelengths (470 to 550 nm) which are inaccessible to chlorophyll by itself; instead the PCP complex uses the geometry of the relative pigment orientations to effect extremely high-efficiency energy transfer from the peridinin molecules to their neighboring chlorophyll molecule. [ 4 ] [ 3 ] [ 7 ] PCP has served as a common model system for spectroscopy and for theoretical calculations relating to the protein's photophysics. [ 8 ]
PCP complexes are thought to occupy the thylakoid lumen . After energy transfer from the peridinin to the chlorophyll pigment, PCP complexes are believed to then transfer energy from the excited chlorophyll to membrane-bound light harvesting complexes . [ 4 ] | https://en.wikipedia.org/wiki/Peridinin-chlorophyll-protein_complex |
Peridynamics is a non-local formulation of continuum mechanics that is oriented toward deformations with discontinuities, especially fractures . Originally, bond-based peridynamics was introduced, [ 1 ] wherein the internal interaction forces between a material point and all the other points with which it can interact are modeled as a central force field . [ 2 ] This type of force field can be imagined as a mesh of bonds connecting each point of the body with every other interacting point within a certain distance, which depends on a material property called the peridynamic horizon . Later, to overcome the bond-based framework's limitations on the material Poisson's ratio [ 3 ] [ 4 ] (1/3 for plane stress and 1/4 for plane strain in two-dimensional configurations; 1/4 for three-dimensional ones), state-based peridynamics was formulated. [ 5 ] Its characteristic feature is that the force exchanged between a point and another one is influenced by the deformation state of all the other bonds relative to its interaction zone. [ 1 ]
The characteristic feature of peridynamics, which makes it different from classical local mechanics, is the presence of finite-range bonds between any two points of the material body: a feature that brings the formulation close to discrete meso-scale theories of matter. [ 1 ]
The term peridynamic , as an adjective, was proposed in the year 2000 and comes from the prefix peri- , which means all around , near , or surrounding ; and the root dyna , which means force or power . The term peridynamics , as a noun, is a shortened form of the phrase peridynamic model of solid mechanics. [ 1 ]
A fracture is a mathematical singularity to which the classical equations of continuum mechanics cannot be applied directly. The peridynamic theory has been proposed with the purpose of mathematically modeling the formation and dynamics of fractures in elastic materials. [ 1 ] It is founded on integral equations , in contrast with classical continuum mechanics, which is based on partial differential equations . Since partial derivatives do not exist on crack surfaces [ 1 ] and other geometric singularities , the classical equations of continuum mechanics cannot be applied directly when such features are present in a deformation . The integral equations of the peridynamic theory hold true also on singularities and can be applied directly, because they do not require partial derivatives. The ability to apply the same equations directly at all points in a mathematical model of a deforming structure helps the peridynamic approach avoid the need for the special techniques of fracture mechanics such as xFEM . [ 6 ] For example, in peridynamics, there is no need for a separate crack growth law based on a stress intensity factor . [ 7 ]
In the context of peridynamic theory, physical bodies are treated as constituted by a continuous mesh of points which can exchange long-range mutual interaction forces, within a maximum and well-established distance δ > 0: the peridynamic horizon radius. This perspective is much closer to molecular dynamics than to macroscopic descriptions of bodies and, as a consequence, is not based on the concept of the stress tensor (which is a local concept) but on the notion of the pairwise force that a material point x exchanges within its peridynamic horizon. From a Lagrangian point of view, suited for small displacements, the peridynamic horizon is considered fixed in the reference configuration and then deforms with the body. [ 3 ] Consider a material body represented by Ω ⊂ ℝⁿ, where n can be either 1, 2 or 3. The body has a positive density ρ. Its reference configuration at the initial time is denoted by Ω₀ ⊂ ℝⁿ. It is important to note that the reference configuration can be either the stress -free configuration or a specific configuration of the body chosen as a reference. In the context of peridynamics, every point x in Ω interacts with all the points x′ within a certain neighborhood defined by d(x, x′) ≤ δ, where δ > 0 and d(·, ·) represents a suitable distance function on Ω₀. This neighborhood is often referred to as B_δ(x) in the literature. It is commonly known as the horizon [ 7 ] [ 8 ] or the family of x. [ 3 ] [ 9 ]
The kinematics of x is described in terms of its displacement from the reference position, denoted as u(x, t) : Ω₀ × ℝ⁺ → ℝⁿ. Consequently, the position of x at a specific time t is determined by y(x, t) := x + u(x, t). Furthermore, for each pair of interacting points, the change in the length of the bond relative to the initial configuration is tracked over time through the relative strain s(x, x′, t), which can be expressed as:

s(x, x′, t) = |u(x′, t) − u(x, t)| / |x′ − x|,

where |·| denotes the Euclidean norm [ 3 ] and x′ ∈ B_δ(x) ∩ Ω₀.
The interaction between any x and x′ is referred to as a bond . These pairwise bonds have varying lengths over time in response to the force per unit volume squared, denoted as [ 3 ]

f ≡ f(x′, x, u(x′), u(x), t).

This force is commonly known as the pairwise force function or peridynamic kernel , and it encompasses all the constitutive (material-dependent) properties. It describes how the internal forces depend on the deformation. It's worth noting that the dependence of u on t has been omitted here for the sake of simplicity in notation. Additionally, an external forcing term, b(x, t), is introduced, which results in the following equation of motion, representing the fundamental equation of peridynamics: [ 3 ]
ρ u t t ( x , t ) = F ( x , t ) . {\displaystyle {\rho {\bf {u}}_{tt}({\bf {x}},t)={\bf {F}}({\bf {x}},t)}\,.}
where the integral term F ( x , t ) {\displaystyle {\bf {F}}({\bf {x}},t)} is the sum of all of the internal and external per-unit-volume forces acting on x {\displaystyle {\bf {x}}} :
F ( x , t ) := ∫ Ω 0 ∩ B δ ( x ) f ( x ′ , x , u ( x ′ ) , u ( x ) ) d V x ′ + b ( x , t ) . {\displaystyle {{\bf {F}}({\bf {x}},t):=\int _{\Omega _{0}\cap B_{\delta }({\bf {x}})}{\bf {f}}\left({\bf {x}}',{\bf {x}},{\bf {u}}\left({\bf {x}}'\right),{\bf {u}}({\bf {x}})\right)dV_{{\bf {x}}'}+{\bf {b}}({\bf {x}},t)}\,.}
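In practice the integral F ( x , t ) is approximated by a quadrature over nodes with finite cell volumes. The sketch below is a minimal 1D midpoint-rule discretization; the linear kernel and all parameter values are illustrative assumptions, not part of the theory above.

```python
import numpy as np

# Minimal 1D midpoint-quadrature sketch of
#   F(x, t) = ∫_{Ω0 ∩ Bδ(x)} f(x', x, u(x'), u(x)) dV_{x'} + b(x, t),
# with nodes of equal cell volume dV; `pairwise_force` stands in for the
# material-dependent peridynamic kernel f.

def total_force(nodes, u, delta, dV, pairwise_force, b):
    F = np.zeros_like(nodes)
    for i, xi in enumerate(nodes):
        for j, xj in enumerate(nodes):
            if i != j and abs(xj - xi) <= delta:
                F[i] += pairwise_force(xj, xi, u[j], u[i]) * dV
        F[i] += b(xi)
    return F

# Toy linear kernel: force proportional to the relative displacement
k = 1.0
f_lin = lambda xp, x, up, u: k * (up - u)

nodes = np.linspace(0.0, 1.0, 11)
u = 0.1 * nodes            # homogeneous stretch of the bar
F = total_force(nodes, u, delta=0.25, dV=0.1, pairwise_force=f_lin, b=lambda x: 0.0)
print(F)                   # interior forces vanish by symmetry
```

For a homogeneous stretch the contributions from left and right neighbours cancel at interior nodes, while the truncated horizon at the bar ends produces the familiar peridynamic surface effect.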
The vector valued function f {\displaystyle {\bf {f}}} is the force density that x ′ {\displaystyle {\bf {x'}}} exerts on x {\displaystyle {\bf {x}}} . This force density depends on the relative displacement and relative position vectors between x ′ {\displaystyle {\bf {x'}}} and x {\displaystyle {\bf {x}}} . The dimension of f {\displaystyle {\bf {f}}} is [ N / m 6 ] {\displaystyle [N/m^{6}]} . [ 3 ]
In this formulation of peridynamics, the kernel is determined by the nature of the internal forces and the physical constraints that govern the interaction between two material points. For brevity, the following quantities are defined: ξ := x ′ − x {\displaystyle {\bf {\bf {\xi }}}:={\bf {x}}'-{\bf {x}}} and η := u ( x ′ ) − u ( x ) {\displaystyle {\bf {\eta }}:={\bf {u}}({\bf {x}}')-{\bf {u}}({\bf {x}})} so that [ 1 ]
f ( x ′ − x , u ( x ′ ) − u ( x ) ) ≡ f ( ξ , η ) {\displaystyle {\bf {f}}({\bf {x}}'-{\bf {x}},{\bf {u}}({\bf {x}}')-{\bf {u}}({\bf {x}}))\equiv {\bf {{f}({\bf {\xi }},{\bf {\eta }})}}}
For any x {\displaystyle {\bf {x}}} and x ′ {\displaystyle {\bf {x'}}} belonging to the neighborhood B δ ( x ) {\displaystyle B_{\delta }({\bf {x}})} , the following relationship holds: f ( − η , − ξ ) = − f ( η , ξ ) {\displaystyle {\bf {f}}(-\eta ,-\xi )=-{\bf {f}}(\eta ,\xi )} . This expression reflects the principle of action and reaction, commonly known as Newton's third law. It guarantees the conservation of linear momentum in a system composed of mutually interacting particles. [ 1 ]
For any x {\displaystyle {\bf {x}}} and x ′ {\displaystyle {\bf {{x}'}}} belonging to the neighborhood B δ ( x ) {\displaystyle B_{\delta }({\bf {x}})} , the following condition holds: ( ξ + η ) × f ( ξ , η ) = 0 {\displaystyle (\xi +\eta )\times {\bf {f}}(\xi ,\eta )=0} . This condition arises from considering the relative deformed ray- vector connecting x {\displaystyle {\bf {x}}} and x ′ {\displaystyle {\bf {{x}'}}} as ξ + η {\displaystyle \xi +\eta } . The condition is satisfied if and only if the pairwise force density vector has the same direction as the relative deformed ray-vector. In other words, f ( ξ , η ) = f ( ξ , η ) ( ξ + η ) {\displaystyle {\bf {f}}(\xi ,\eta )=f(\xi ,\eta )(\xi +\eta )} for all ξ {\displaystyle \xi } and η {\displaystyle \eta } , where f ( ξ , η ) {\displaystyle f(\xi ,\eta )} is a scalar-valued function. [ 1 ]
A hyperelastic material is a material whose constitutive relation is such that: [ 1 ]
∫ Γ f ( ξ , η ) ⋅ d η = 0 , ∀ closed curve Γ , ∀ ξ ≠ 0 , {\displaystyle \int _{\Gamma }{\bf {f}}({\bf {\xi }},{\bf {\eta }})\cdot d{\bf {\eta }}=0\,,\quad \forall {\text{ closed curve }}\Gamma ,\ \ \ \ \forall {\bf {\xi }}\neq {\bf {{0},}}}
or, equivalently, by Stokes' theorem
∇ η × f ( ξ , η ) = 0 {\displaystyle \nabla _{\bf {\eta }}\times {\bf {f}}({\bf {\xi }},{\bf {\eta }})={\bf {{0}\,}}} , ∀ ξ , η {\displaystyle \forall \,{\bf {\xi }},\,{\bf {\eta }}}
and, thus,
f ( ξ , η ) = ∇ η Φ ( ξ , η ) ∀ ξ , η . {\displaystyle {\bf {f}}({\bf {\xi }},{\bf {\eta }})=\nabla _{\bf {\eta }}\Phi ({\bf {\xi }},\,{\bf {\eta }})\,\forall {\bf {\xi }},\,{\bf {\eta }}\,.}
In the equation above Φ ( ξ , η ) {\displaystyle \Phi ({\bf {\xi }},{\bf {\eta }})} is the scalar valued potential function in C 2 ( R n ∖ { 0 } × R n ) {\displaystyle C^{2}(\mathbb {R} ^{n}\setminus {\bf {{\{0\}}\times \mathbb {R} ^{n})}}} . [ 1 ] Due to the necessity of satisfying angular momentum conservation, the condition below on the scalar valued function f ( ξ , η ) {\displaystyle f({\bf {\xi }},{\bf {\eta }})} follows [ 1 ]
∂ f ( ξ , η ) ∂ η = g ( ξ , η ) ( ξ + η ) . {\displaystyle {\frac {\partial f({\bf {\xi }},{\bf {\eta }})}{\partial {\bf {\eta }}}}=g({\bf {\xi }},{\bf {\eta }})({\bf {\xi }}+{\bf {\eta }}).}
where g ( ξ , η ) {\displaystyle g({\bf {\xi }},{\bf {\eta }})} is a scalar-valued function. Integrating both sides of the equation, the following form of f ( ξ , η ) {\displaystyle {\bf {f}}({\bf {\xi }},{\bf {\eta }})} is obtained [ 1 ]
f ( ξ , η ) = h ( | ξ + η | , ξ ) ( ξ + η ) {\displaystyle {\bf {f}}({\bf {\xi }},{\bf {\eta }})=h(|{\bf {\xi }}+{\bf {\eta }}|,{\bf {\xi }})({\bf {\xi }}+{\bf {\eta }})} ,
for h ( | ξ + η | , ξ ) {\displaystyle h(|{\bf {\xi }}+{\bf {\eta }}|,{\bf {\xi }})} a scalar valued function. The elastic nature of f {\displaystyle {\bf {f}}} is evident: the interaction force depends only on the initial relative position between points x {\displaystyle {\bf {x}}} and x ′ {\displaystyle {\bf {x}}'} and the modulus of their relative position, | ξ + η | {\displaystyle |{\bf {\xi }}+{\bf {\eta }}|} , in the deformed configuration Ω t {\displaystyle \Omega _{t}} at time t {\displaystyle t} . Applying the isotropy hypothesis, the dependence on vector ξ {\displaystyle {\bf {\xi }}} can be substituted with a dependence on its modulus | ξ | {\displaystyle |{\bf {\xi }}|} , [ 1 ]
f ( ξ , η ) = h ( | ξ + η | , | ξ | ) ( ξ + η ) . {\displaystyle {\bf {f}}({\bf {\xi }},{\bf {\eta }})=h(|{\bf {\xi }}+{\bf {\eta }}|,|{\bf {\xi }}|)({\bf {\xi }}+{\bf {\eta }}).}
Bond forces can, thus, be considered as modeling a spring net that connects each point x ∈ Ω 0 {\displaystyle {\bf {x}}\in \Omega _{0}} pairwise with x ′ ∈ B δ ( x ) ∩ Ω 0 {\displaystyle {\bf {x}}'\in B_{\delta }({\bf {x}})\cap \Omega _{0}} .
If | η | ≪ 1 {\displaystyle |{\bf {\eta }}|\ll 1} , the peridynamic kernel can be linearised around η = 0 {\displaystyle {\bf {\eta }}={\bf {0}}} : [ 1 ]
f ( ξ , η ) ≈ f ( ξ , 0 ) + ∂ f ( ξ , η ) ∂ η | η = 0 η ; {\displaystyle {\bf {f}}({\bf {\xi }},{\bf {\eta }})\approx {\bf {f}}({\bf {\xi }},{\bf {{0})+\left.{\frac {\partial {\bf {f}}({\bf {\xi }},{\bf {\eta }})}{\partial {\bf {\eta }}}}\right|_{{\bf {\eta }}={\bf {0}}}{\bf {\eta }};}}}
then, a second-order micro-modulus tensor can be defined as
C ( ξ ) = ∂ f ( ξ , η ) ∂ η | η = 0 = ξ ⊗ ∂ f ( ξ , η ) ∂ η | η = 0 + f 0 I {\displaystyle {\bf {C}}({\bf {\xi }})=\left.{\frac {\partial {\bf {f}}({\bf {\xi }},{\bf {\eta }})}{\partial {\bf {\eta }}}}\right|_{{\bf {\eta }}={\bf {0}}}={\bf {\xi }}\otimes \left.{\frac {\partial f({\bf {\xi }},{\bf {\eta }})}{\partial {\bf {\eta }}}}\right|_{{\bf {\eta }}={\bf {0}}}+f_{0}I}
where f 0 := f ( ξ , 0 ) {\displaystyle f_{0}:=f({\bf {\xi }},{\bf {0}})} and I {\displaystyle I} is the identity tensor. Following the application of the linear momentum balance, elasticity, and isotropy conditions, the micro-modulus tensor can be expressed in the form [ 1 ]
C ( ξ ) = λ ( | ξ | ) ξ ⊗ ξ + f 0 I . {\displaystyle {\bf {C}}({\bf {\xi }})=\lambda (|{\bf {\xi }}|){\bf {\xi }}\otimes {\bf {\xi }}+f_{0}I.}
Therefore, for a linearised hyperelastic material, its peridynamic kernel holds the following structure [ 1 ]
f ( ξ , η ) ≈ f ( ξ , 0 ) + ( λ ( | ξ | ) ξ ⊗ ξ + f 0 I ) η . {\displaystyle {\bf {f}}({\bf {\xi }},{\bf {\eta }})\approx {\bf {f}}({\bf {\xi }},{\bf {0}})+\left(\lambda (|{\bf {\xi }}|){\bf {\xi }}\otimes {\bf {\xi }}+f_{0}I\right){\bf {\eta }}.}
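The quality of this linearisation can be checked numerically. The sketch below uses a hypothetical kernel f(ξ, η) = (|ξ+η| − |ξ|)(ξ+η), i.e. h equal to the bond elongation, builds the micro-modulus tensor C(ξ) by finite differences, and verifies that the error of the linearised kernel is second order in |η|.

```python
import numpy as np

# Hypothetical isotropic elastic kernel: f(ξ, η) = (|ξ+η| - |ξ|)(ξ+η)
def f_exact(xi, eta):
    d = xi + eta
    return (np.linalg.norm(d) - np.linalg.norm(xi)) * d

def f_linearised(xi, eta):
    # Finite-difference micro-modulus tensor C(ξ) = ∂f/∂η evaluated at η = 0
    n, eps = len(xi), 1e-7
    C = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        C[:, j] = (f_exact(xi, e) - f_exact(xi, -e)) / (2 * eps)
    return f_exact(xi, np.zeros(n)) + C @ eta

xi = np.array([1.0, 0.5, 0.0])
eta = np.array([1e-3, 2e-3, 5e-4])          # |η| ≪ 1
err = np.linalg.norm(f_exact(xi, eta) - f_linearised(xi, eta))
print(err)                                   # O(|η|^2), far smaller than |f|
```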
The peridynamic kernel is a versatile function that characterizes the constitutive behavior of materials within the framework of peridynamic theory. One commonly employed formulation of the kernel is used to describe a class of materials known as prototype micro-elastic brittle (PMB) materials. In the case of isotropic PMB materials, the pairwise force is assumed to be linearly proportional to the finite stretch [ 7 ] experienced by the material, defined as
s := ( | ξ + η | − | ξ | ) / | ξ | {\displaystyle s:=(|{\bf {\xi }}+{\bf {\eta }}|-|{\bf {\xi }}|)/|{\bf {\xi }}|} ,
so that
f ( η , ξ ) = f ( | ξ + η | , | ξ | ) n , {\displaystyle \mathbf {f} ({\bf {\eta }},{\bf {\xi }})=f(|{\bf {\xi }}+{\bf {\eta }}|,|{\bf {\xi }}|){\bf {{n},}}}
where
n := ( ξ + η ) / | ξ + η | {\displaystyle {\bf {{n}:=({\bf {\xi }}+{\bf {\eta }})/|{\bf {\xi }}+{\bf {\eta }}|}}}
and where the scalar function f {\displaystyle f} is defined as follows [ 7 ]
f = c s μ ( s , t ) = c | ξ + η | − | ξ | | ξ | μ ( s , t ) , {\displaystyle f=cs\mu (s,t)=c\;{\frac {|{\bf {\xi }}+{\bf {\eta }}|-|{\bf {\xi }}|}{|{\bf {\xi }}|}}\mu (s,t),} with
μ ( s , t ) = { 1 , if s ( t ′ , ξ ) < s 0 , 0 , otherwise, for all 0 ≤ t ′ ≤ t ; {\displaystyle \mu (s,t)=\left\{{\begin{array}{ll}1\,,&{\text{ if }}s\left(t^{\prime },{\bf {\xi }}\right)<s_{0}\,,\\0\,,&{\text{ otherwise, }}\end{array}}\ \ \ \ {\text{ for all }}0\leq t^{\prime }\leq t\right.;}
The constant c {\displaystyle c} is referred to as the micro-modulus constant , and the function μ ( s , t ) {\displaystyle \mu (s,t)} serves to indicate whether, at a given time t ′ ≤ t {\displaystyle t'\leq t} , the bond stretch s {\displaystyle s} associated with the pair ( x , x ′ ) {\displaystyle ({\bf {x,\,x'}})} has surpassed the critical value s 0 {\displaystyle s_{0}} . If the critical value is exceeded, the bond is considered broken , and a pairwise force of zero is assigned for all t ≥ t ′ {\displaystyle t\geq t'} . [ 1 ]
By comparing the strain energy density obtained under isotropic extension in the peridynamic framework with that of classical continuum theory, a physically consistent value of the micro-modulus c {\displaystyle c} can be found [ 7 ]
c = 18 k π δ 4 , {\displaystyle c={\frac {18k}{\pi \delta ^{4}}},}
where k {\displaystyle k} is the material bulk modulus .
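Putting the PMB ingredients together, the stretch s, the micro-modulus c = 18k/(πδ⁴) and the bond-breaking flag μ, a pairwise force routine can be sketched as follows; the numeric values (bulk modulus, horizon, critical stretch) are hypothetical.

```python
import numpy as np

# Pairwise force for a prototype micro-elastic brittle (PMB) material:
#   f(ξ, η) = c · s · μ(s, t) · n,   c = 18k / (π δ^4),
# with s the bond stretch and μ the history-dependent bond-breaking flag.
# Bond history is tracked here with a simple per-bond `broken` flag.

def pmb_force(xi, eta, bulk_modulus, delta, s0, broken=False):
    xi, eta = np.asarray(xi, float), np.asarray(eta, float)
    c = 18.0 * bulk_modulus / (np.pi * delta**4)
    deformed = xi + eta
    s = (np.linalg.norm(deformed) - np.linalg.norm(xi)) / np.linalg.norm(xi)
    if broken or s >= s0:          # μ = 0: the bond no longer carries force
        return np.zeros_like(xi), True
    n = deformed / np.linalg.norm(deformed)
    return c * s * n, False        # μ = 1: intact bond

xi = np.array([1.0, 0.0, 0.0])
f_intact, broken = pmb_force(xi, eta=[0.01, 0.0, 0.0],
                             bulk_modulus=1.0, delta=0.5, s0=0.05)
f_snapped, broken2 = pmb_force(xi, eta=[0.10, 0.0, 0.0],
                               bulk_modulus=1.0, delta=0.5, s0=0.05)
print(f_intact, broken)    # small stretch: finite force, bond intact
print(f_snapped, broken2)  # stretch beyond s0: zero force, bond broken
```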
Following the same approach [ 10 ] the micro-modulus constant c {\displaystyle c} can be extended to c ( ξ , δ ) {\displaystyle c({\bf {\xi }},\delta )} , where c {\displaystyle c} is now a micro-modulus function . This function provides a more detailed description of how the intensity of pairwise forces is distributed over the peridynamic horizon B δ ( x ) {\displaystyle B_{\delta }({\bf {x}})} . Intuitively, the intensity of forces decreases as the distance between x {\displaystyle {\bf {x}}} and x ′ ∈ B δ ( x ) {\displaystyle {\bf {x}}'\in B_{\delta }({\bf {x}})} increases, but the specific manner in which this decrease occurs can vary.
The micro-modulus function is expressed as [ 11 ]
c ( ξ , δ ) := c ( 0 , δ ) k ( ξ , δ ) , {\displaystyle c({\bf {\xi }},\delta ):=c({\bf {{0},\delta )k({\bf {\xi }},\delta )\,,}}}
where the constant c ( 0 , δ ) {\displaystyle c({\bf {{0},\delta )}}} is obtained by comparing peridynamic strain density with the classical mechanical theories; [ 12 ] k ( ξ , δ ) {\displaystyle k({\bf {\xi }},\delta )} is a function defined on Ω 0 {\displaystyle \Omega _{0}} with the following properties (given the restrictions of momentum conservation and isotropy) [ 11 ]
{ k ( ξ , δ ) = k ( − ξ , δ ) , lim ξ → 0 k ( ξ , δ ) = max ξ ∈ R n { k ( ξ , δ ) } , lim ξ → δ k ( ξ , δ ) = 0 , ∫ R n lim δ → 0 k ( ξ , δ ) d x = ∫ R n Δ ( ξ ) d x = 1 , {\displaystyle \left\{{\begin{array}{l}k({\bf {\xi }},\delta )=k(-{\bf {\xi }},\delta )\,,\\\lim _{{\bf {\xi }}\rightarrow {\bf {0}}}k({\bf {\xi }},\delta )=\max _{{\bf {\xi }}\ \in \mathbb {R} ^{n}}\{k({\bf {\xi }},\delta )\}\,,\\\lim _{{\bf {\xi }}\rightarrow \delta }k({\bf {\xi }},\delta )=0\,,\\\int _{\mathbb {R} ^{n}}\lim _{\delta \rightarrow 0}k({\bf {\xi }},\delta )d{\bf {x}}=\int _{\mathbb {R} ^{n}}\Delta ({\bf {\xi }})d{\bf {x}}=1\,,\end{array}}\right.}
where Δ ( ξ ) {\displaystyle \Delta ({\bf {\xi }})} is the Dirac delta function .
The simplest expression for the micro-modulus function is [ 11 ]
c ( 0 , δ ) k ( ξ , δ ) = c 1 B δ ( x ′ ) {\displaystyle c({\bf {{0},\delta )k({\bf {\xi }},\delta )=c{\bf {{1}_{B_{\delta }({\bf {x}}')}}}}}} ,
where 1 A {\displaystyle {\bf {{1}_{A}}}} : X → R {\displaystyle X\rightarrow \mathbb {R} } is the indicator function of the subset A ⊂ X {\displaystyle A\subset X} , defined as
1 A ( x ) := { 1 , x ∈ A , 0 , x ∉ A , ; {\displaystyle \mathbf {1} _{A}(x):={\begin{cases}1,&x\in A\,,\\0,&x\notin A\,,\end{cases}}\;\;;}
A further option is characterized by k ( ξ , δ ) {\displaystyle k({\bf {\xi }},\delta )} being a linear function [ 13 ]
k ( ξ , δ ) = ( 1 − | ξ | δ ) 1 B δ ( x ′ ) . {\displaystyle k({\bf {\xi }},\delta )=\left(1-{\frac {|{\bf {\xi }}|}{\delta }}\right){\bf {{1}_{B_{\delta }({\bf {x}}')}.}}}
To reflect the fact that most common discrete physical systems are characterized by a Maxwell-Boltzmann distribution , and to include this behavior in peridynamics, the following expression for k ( ξ , δ ) {\displaystyle k({\bf {\xi }},\delta )} can be utilized [ 14 ]
k ( ξ , δ ) = e − ( | ξ | / δ ) 2 1 B δ ( x ′ ) ; {\displaystyle k({\bf {\xi }},\delta )=e^{-(|{\bf {\xi }}|/\delta )^{2}}{\bf {{1}_{B_{\delta }({\bf {x}}')};}}}
In the literature one can also find the following expression for the k ( ξ , δ ) {\displaystyle k({\bf {\xi }},\delta )} function [ 11 ]
k ( ξ , δ ) = ( 1 − ( ξ δ ) 2 ) 2 1 B δ ( x ′ ) . {\displaystyle k({\bf {\xi }},\delta )=\left(1-\left({\frac {\xi }{\delta }}\right)^{2}\right)^{2}{\bf {{1}_{B_{\delta }({\bf {x}}')}.}}}
Overall, depending on the specific material property to be modeled, there exists a wide range of expressions for the micro-modulus and, in general, for the peridynamic kernel. The above list is, thus, not exhaustive. [ 11 ]
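For comparison, the constant, linear, Gaussian (Maxwell-Boltzmann-like) and quartic micro-modulus shape functions quoted above can be tabulated over the horizon; the sketch below treats them as radial profiles in |ξ|, with the indicator of B_δ expressed as clipping to zero outside the horizon.

```python
import numpy as np

# The micro-modulus shape functions k(ξ, δ) quoted above, as radial
# profiles on 0 ≤ |ξ| ≤ δ; the indicator 1_{B_δ} is the np.where clipping.

def k_constant(r, delta):
    return np.where(r <= delta, 1.0, 0.0)

def k_linear(r, delta):
    return np.where(r <= delta, 1.0 - r / delta, 0.0)

def k_gaussian(r, delta):           # Maxwell–Boltzmann-like profile
    return np.where(r <= delta, np.exp(-(r / delta)**2), 0.0)

def k_quartic(r, delta):
    return np.where(r <= delta, (1.0 - (r / delta)**2)**2, 0.0)

delta = 1.0
r = np.linspace(0.0, delta, 5)
for k in (k_constant, k_linear, k_gaussian, k_quartic):
    print(k.__name__, k(r, delta))
```

Note that the linear and quartic profiles vanish at the horizon edge, matching the limit property listed above, while the constant and Gaussian profiles are simply truncated there.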
Damage is incorporated in the pairwise force function by allowing bonds to break when their elongation exceeds some prescribed value. After a bond breaks, it no longer sustains any force, and the endpoints are effectively disconnected from each other. When a bond breaks, the force it was carrying is redistributed to other bonds that have not yet broken. This increased load makes it more likely that these other bonds will break. The process of bond breakage and load redistribution, leading to further breakage, is how cracks grow in the peridynamic model. [ 7 ]
Analytically, the bond breaking is specified inside the expression of the peridynamic kernel, by the function [ 7 ]
μ ( s , t ) = { 1 , if s ( t ′ , ξ ) < s 0 , 0 , otherwise, for all 0 ≤ t ′ ≤ t ; {\displaystyle \mu (s,t)=\left\{{\begin{array}{ll}1\,,&{\text{ if }}s\left(t^{\prime },{\bf {\xi }}\right)<s_{0}\,,\\0\,,&{\text{ otherwise, }}\end{array}}\ \ \ \ {\text{ for all }}0\leq t^{\prime }\leq t\right.;}
If the graph of f ( s , t ) {\displaystyle {\bf {f}}(s,t)} versus bond stretch s {\displaystyle s} is plotted, the action of the bond-breaking function μ {\displaystyle \mu } in fracture formation becomes clear. However, abrupt fracture is not the only behavior that can be modeled in the peridynamic framework, and more general expressions for μ {\displaystyle \mu } can be employed. [ 7 ]
The theory described above assumes that each peridynamic bond responds independently of all the others. This is an oversimplification for most materials and leads to restrictions on the types of materials that can be modeled. In particular, this assumption implies that any isotropic linear elastic solid is restricted to a Poisson ratio of 1/4. [ 3 ]
To address this lack of generality, the idea of peridynamic states was introduced. This framework allows the force density in each bond to depend on the stretches in all the bonds connected to its endpoints, in addition to its own stretch. For example, the force in a bond could depend on the net volume changes at the endpoints. The effect of this volume change, relative to the effect of the bond stretch, determines the Poisson ratio . With peridynamic states, any material that can be modeled within the standard theory of continuum mechanics can be modeled as a peridynamic material, while retaining the advantages of the peridynamic theory for fracture. [ 5 ]
Mathematically, the equation for the internal and external force term
F ( x , t ) := ∫ Ω 0 ∩ B δ ( x ) f ( x ′ , x , u ( x ′ ) , u ( x ) ) d V x ′ + b ( x , t ) . {\displaystyle {{\bf {F}}({\bf {x}},t):=\int _{\Omega _{0}\cap B_{\delta }({\bf {x}})}{\bf {f}}\left({\bf {x}}',{\bf {x}},{\bf {u}}\left({\bf {x}}'\right),{\bf {u}}({\bf {x}})\right)dV_{{\bf {x}}'}+{\bf {b}}({\bf {x}},t)}\,.}
used in the bond-based formulations is substituted by [ 5 ] F ( x , t ) := ∫ B δ ( x ) { T _ [ x , t ] ⟨ x ′ − x ⟩ − T _ [ x ′ , t ] ⟨ x − x ′ ⟩ } d V x ′ + b ( x , t ) , {\displaystyle {\bf {F}}({\bf {x}},t):=\int _{B_{\delta }({\bf {x}})}\left\{{\underline {\mathbf {T} }}[\mathbf {x} ,t]\left\langle \mathbf {x} ^{\prime }-\mathbf {x} \right\rangle -{\underline {\mathbf {T} }}\left[\mathbf {x} ^{\prime },t\right]\left\langle \mathbf {x} -\mathbf {x} ^{\prime }\right\rangle \right\}dV_{\mathbf {x} ^{\prime }}+\mathbf {b} (\mathbf {x} ,t),}
where T _ {\displaystyle {\underline {\mathbf {T} }}} is the force vector state field.
A general m-order state A _ ⟨ ⋅ ⟩ : B δ ( x ) → L m {\displaystyle {\underline {\mathbf {A} }}\langle \cdot \rangle :B_{\delta }({\bf {x}})\rightarrow {\mathcal {L}}_{m}} is a mathematical object similar to a tensor , with the exception that it is, in general, neither a linear nor a continuous function of ξ {\displaystyle {\bf {\xi }}} . [ 5 ]
Vector states are states of order equal to 2. For a so-called simple material , T _ {\displaystyle {\underline {\mathbf {T} }}} is defined as
T _ := T ^ _ ( Y _ ) {\displaystyle {\underline {\mathbf {T} }}:={\underline {\mathbf {\hat {T}} }}({\underline {\mathbf {Y} }})}
where T ^ _ : V → V {\displaystyle {\underline {\mathbf {\hat {T}} }}:{\mathcal {V}}\rightarrow {\mathcal {V}}} is a Riemann-integrable function on B δ ( x ) {\displaystyle B_{\delta }({\bf {x}})} , and Y _ {\displaystyle {\underline {\mathbf {Y} }}} is called deformation vector state field and is defined by the following relation [ 5 ]
Y _ [ x , t ] ⟨ ξ ⟩ = y ( x + ξ , t ) − y ( x , t ) ∀ x ∈ Ω 0 , ξ ∈ B δ ( x ) , t ≥ 0 {\displaystyle {\underline {\mathbf {Y} }}[\mathbf {x} ,t]\langle {\boldsymbol {\xi }}\rangle =\mathbf {y} (\mathbf {x} +{\boldsymbol {\xi }},t)-\mathbf {y} (\mathbf {x} ,t)\quad \forall \mathbf {x} \in \Omega _{0},\xi \in B_{\delta }({\bf {x}}),t\geq 0}
thus Y _ ⟨ x ′ − x ⟩ {\displaystyle {\underline {\mathbf {Y} }}\left\langle \mathbf {x} ^{\prime }-\mathbf {x} \right\rangle } is the image of the bond x ′ − x {\displaystyle \mathbf {x} ^{\prime }-\mathbf {x} } under the deformation
such that
Y _ ⟨ ξ ⟩ = 0 if and only if ξ = 0 , {\displaystyle {\underline {\mathbf {Y} }}\langle {\boldsymbol {\xi }}\rangle =\mathbf {0} {\text{ if and only if }}{\boldsymbol {\xi }}=\mathbf {0} ,}
which means that two distinct particles never occupy the same point as the deformation progresses. [ 5 ]
It can be proved [ 5 ] that the balance of linear momentum follows from the definition of F ( x , t ) {\displaystyle {\bf {F}}({\bf {x,\,t}})} , while, if the constitutive relation is such that
∫ B δ ( x ) Y _ ⟨ ξ ⟩ × T _ ⟨ ξ ⟩ d V ξ = 0 ∀ Y _ ∈ V {\displaystyle \int _{B_{\delta }({\bf {x}})}{\underline {\mathbf {Y} }}\langle {\boldsymbol {\xi }}\rangle \times {\underline {\mathbf {T} }}\langle {\boldsymbol {\xi }}\rangle dV_{\boldsymbol {\xi }}=0\quad \forall {\underline {\mathbf {Y} }}\in {\mathcal {V}}}
the force vector state field satisfies the balance of angular momentum. [ 5 ]
The growing interest in peridynamics [ 6 ] comes from its capability to fill the gap between atomistic theories of matter and classical local continuum mechanics. It is applied effectively to micro-scale phenomena, such as crack formation and propagation, [ 15 ] [ 16 ] [ 17 ] wave dispersion , [ 18 ] [ 19 ] and intra-granular fracture. [ 20 ] These phenomena can be described by appropriate adjustment of the peridynamic horizon radius, which is directly linked to the extent of non-local interactions between points within the material. [ 21 ]
In addition to the aforementioned research fields, peridynamics' non-local approach to discontinuities has found applications in various other areas. In geo-mechanics , it has been employed to study water-induced soil cracks, [ 22 ] [ 23 ] geo-material failure , [ 24 ] rock fragmentation, [ 25 ] [ 26 ] and so on. In biology , peridynamics has been used to model long-range interactions in living tissues , [ 27 ] cellular ruptures, cracking of bio-membranes , [ 28 ] and more. [ 6 ] Furthermore, peridynamics has been extended to thermal diffusion theory, [ 29 ] [ 30 ] enabling the modeling of heat conduction in materials with discontinuities, defects, inhomogeneities, and cracks. It has also been applied to study advection-diffusion phenomena in multi-phase fluids [ 31 ] and to construct models for transient advection-diffusion problems. [ 32 ] With its versatility, peridynamics has been used in various multi-physics analyses , including micro-structural analysis, [ 33 ] fatigue and heat conduction in composite materials, [ 34 ] [ 35 ] galvanic corrosion in metals, [ 36 ] electricity-induced cracks in dielectric materials, and more. [ 6 ]
The perifocal coordinate ( PQW ) system is a frame of reference for an orbit . The frame is centered at the focus of the orbit, i.e. the celestial body about which the orbit is centered. The unit vectors p ^ {\displaystyle \mathbf {\hat {p}} } and q ^ {\displaystyle \mathbf {\hat {q}} } lie in the plane of the orbit: p ^ {\displaystyle \mathbf {\hat {p}} } is directed towards the periapsis of the orbit, and q ^ {\displaystyle \mathbf {\hat {q}} } has a true anomaly ( θ {\displaystyle \theta } ) of 90 degrees past the periapsis. The third unit vector w ^ {\displaystyle \mathbf {\hat {w}} } points along the angular momentum vector and is directed orthogonal to the orbital plane such that: [ 1 ] [ 2 ] w ^ = p ^ × q ^ {\displaystyle \mathbf {\hat {w}} =\mathbf {\hat {p}} \times \mathbf {\hat {q}} }
And, since w ^ {\displaystyle \mathbf {\hat {w}} } is the unit vector in the direction of the angular momentum vector, it may also be expressed as: w ^ = h ‖ h ‖ {\displaystyle \mathbf {\hat {w}} ={\frac {\mathbf {h} }{\|\mathbf {h} \|}}} where h is the specific relative angular momentum.
The position and velocity vectors can be determined for any location of the orbit. The position vector, r , can be expressed as: r = r cos θ p ^ + r sin θ q ^ {\displaystyle \mathbf {r} =r\cos \theta \mathbf {\hat {p}} +r\sin \theta \mathbf {\hat {q}} } where θ {\displaystyle \theta } is the true anomaly and the radius r = ‖ r ‖ {\displaystyle r=\|\mathbf {r} \|} may be calculated from the orbit equation .
The velocity vector, v , is found by taking the time derivative of the position vector: v = r ˙ = ( r ˙ cos θ − r θ ˙ sin θ ) p ^ + ( r ˙ sin θ + r θ ˙ cos θ ) q ^ {\displaystyle \mathbf {v} =\mathbf {\dot {r}} =({\dot {r}}\cos \theta -r{\dot {\theta }}\sin \theta )\mathbf {\hat {p}} +({\dot {r}}\sin \theta +r{\dot {\theta }}\cos \theta )\mathbf {\hat {q}} }
A derivation from the orbit equation can be made to show that: r ˙ = μ h e sin θ {\displaystyle {\dot {r}}={\frac {\mu }{h}}e\sin \theta } where μ {\displaystyle \mu } is the gravitational parameter of the focus, h is the specific relative angular momentum of the orbital body, e is the eccentricity of the orbit, and θ {\displaystyle \theta } is the true anomaly. r ˙ {\displaystyle {\dot {r}}} is the radial component of the velocity vector (pointing inward toward the focus) and r θ ˙ {\displaystyle r{\dot {\theta }}} is the tangential component of the velocity vector. By substituting the equations for r ˙ {\displaystyle {\dot {r}}} and r θ ˙ {\displaystyle r{\dot {\theta }}} into the velocity vector equation and simplifying, the final form of the velocity vector equation is obtained as: [ 3 ] v = μ h [ − sin θ p ^ + ( e + cos θ ) q ^ ] {\displaystyle \mathbf {v} ={\frac {\mu }{h}}\left[-\sin \theta \mathbf {\hat {p}} +(e+\cos \theta )\mathbf {\hat {q}} \right]}
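The position and velocity formulas above translate directly into code. A minimal sketch, with illustrative orbital values in km and s:

```python
import numpy as np

def perifocal_state(h, e, theta, mu):
    """Position and velocity vectors in the perifocal (PQW) frame.

    h     : specific relative angular momentum
    e     : eccentricity
    theta : true anomaly (rad)
    mu    : gravitational parameter of the focus
    Components are ordered [p, q, w]; the w-component is zero for the
    (planar) orbital motion itself.
    """
    r = (h**2 / mu) / (1.0 + e * np.cos(theta))                  # orbit equation
    r_vec = r * np.array([np.cos(theta), np.sin(theta), 0.0])
    v_vec = (mu / h) * np.array([-np.sin(theta), e + np.cos(theta), 0.0])
    return r_vec, v_vec

# Hypothetical Earth orbit: mu in km^3/s^2, h in km^2/s
mu, h, e = 398600.0, 80000.0, 0.5
r_vec, v_vec = perifocal_state(h, e, np.radians(30.0), mu)
print(r_vec, v_vec)
print(np.cross(r_vec, v_vec))   # ≈ [0, 0, h], consistent with w = h/|h|
```

As a consistency check, r × v recovers the angular momentum vector along the w-axis, in line with the definition of the frame.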
The perifocal coordinate system can also be defined using the orbital parameters inclination ( i ), right ascension of the ascending node ( Ω {\displaystyle \Omega } ) and the argument of periapsis ( ω {\displaystyle \omega } ). The following equations convert from perifocal coordinates to equatorial coordinates and vice versa. [ 4 ]
[ x equatorial y equatorial z equatorial ] = [ cos Ω cos ω − sin Ω cos i sin ω − cos Ω sin ω − sin Ω cos i cos ω sin Ω sin i sin Ω cos ω + cos Ω cos i sin ω − sin Ω sin ω + cos Ω cos i cos ω − cos Ω sin i sin i sin ω sin i cos ω cos i ] [ x perifocal y perifocal z perifocal ] {\displaystyle {\begin{bmatrix}x_{\text{equatorial}}\\y_{\text{equatorial}}\\z_{\text{equatorial}}\\\end{bmatrix}}={\begin{bmatrix}\cos \Omega \cos \omega -\sin \Omega \cos i\sin \omega &-\cos \Omega \sin \omega -\sin \Omega \cos i\cos \omega &\sin \Omega \sin i\\\sin \Omega \cos \omega +\cos \Omega \cos i\sin \omega &-\sin \Omega \sin \omega +\cos \Omega \cos i\cos \omega &-\cos \Omega \sin i\\\sin i\sin \omega &\sin i\cos \omega &\cos i\\\end{bmatrix}}{\begin{bmatrix}x_{\text{perifocal}}\\y_{\text{perifocal}}\\z_{\text{perifocal}}\\\end{bmatrix}}}
In most cases, z perifocal = 0 {\displaystyle z_{\text{perifocal}}=0} .
[ x perifocal y perifocal z perifocal ] = [ cos Ω cos ω − sin Ω cos i sin ω sin Ω cos ω + cos Ω cos i sin ω sin i sin ω − cos Ω sin ω − sin Ω cos i cos ω − sin Ω sin ω + cos Ω cos i cos ω sin i cos ω sin Ω sin i − cos Ω sin i cos i ] [ x equatorial y equatorial z equatorial ] {\displaystyle {\begin{bmatrix}x_{\text{perifocal}}\\y_{\text{perifocal}}\\z_{\text{perifocal}}\\\end{bmatrix}}={\begin{bmatrix}\cos \Omega \cos \omega -\sin \Omega \cos i\sin \omega &\sin \Omega \cos \omega +\cos \Omega \cos i\sin \omega &\sin i\sin \omega \\-\cos \Omega \sin \omega -\sin \Omega \cos i\cos \omega &-\sin \Omega \sin \omega +\cos \Omega \cos i\cos \omega &\sin i\cos \omega \\\sin \Omega \sin i&-\cos \Omega \sin i&\cos i\\\end{bmatrix}}{\begin{bmatrix}x_{\text{equatorial}}\\y_{\text{equatorial}}\\z_{\text{equatorial}}\\\end{bmatrix}}}
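Both transformations above are rotations, so the perifocal-to-equatorial matrix is orthogonal and the inverse transform is simply its transpose. A sketch with hypothetical orbital elements:

```python
import numpy as np

def pqw_to_equatorial_matrix(i, Omega, omega):
    """Rotation matrix from perifocal to equatorial coordinates (the
    matrix given above), i.e. the classical 3-1-3 Euler-angle rotation."""
    cO, sO = np.cos(Omega), np.sin(Omega)
    ci, si = np.cos(i), np.sin(i)
    cw, sw = np.cos(omega), np.sin(omega)
    return np.array([
        [cO*cw - sO*ci*sw, -cO*sw - sO*ci*cw,  sO*si],
        [sO*cw + cO*ci*sw, -sO*sw + cO*ci*cw, -cO*si],
        [si*sw,             si*cw,             ci   ],
    ])

# Hypothetical elements: inclination, RAAN, argument of periapsis
i, Om, w = np.radians([30.0, 40.0, 60.0])
Q = pqw_to_equatorial_matrix(i, Om, w)

# The equatorial-to-perifocal matrix is the transpose (rotations are orthogonal)
print(np.allclose(Q.T @ Q, np.eye(3)))  # True
```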
Perifocal reference frames are most commonly used with elliptical orbits because the p ^ {\displaystyle \mathbf {\hat {p}} } axis must be aligned with the eccentricity vector . Circular orbits , having no eccentricity, give no means by which to orient the coordinate system about the focus. [ 5 ]
The perifocal coordinate system may also be used as an inertial frame of reference because its axes do not rotate relative to the fixed stars. This allows the inertia of any orbital bodies within this frame of reference to be calculated. This is useful when attempting to solve problems like the two-body problem . [ 6 ]
Perilipin , also known as lipid droplet-associated protein , perilipin 1 , or PLIN , is a protein that, in humans, is encoded by the PLIN gene . [ 5 ] The perilipins are a family of proteins that associate with the surface of lipid droplets . Phosphorylation of perilipin is essential for the mobilization of fats in adipose tissue. [ 6 ]
Perilipin is part of a gene family with six currently-known members. In vertebrates , closely related genes include adipophilin (also known as adipose differentiation-related protein or Perilipin 2 ), TIP47 ( Perilipin 3 ), Perilipin 4 and Perilipin 5 (also called MLDP, LSDP5, or OXPAT). Insects express related proteins, LSD1 and LSD2 , in fat bodies. [ 7 ] The yeast Saccharomyces cerevisiae expresses PLN1 (formerly PET10), that stabilizes lipid droplets and aids in their assembly. [ 8 ]
The perilipins are considered to have their origins in a common ancestral gene which, during the first and second vertebrate genome duplication, gave rise to six types of PLIN genes. [ 9 ]
Human perilipin-1 is composed of 522 amino acids , corresponding to a molecular mass of 55.990 kDa. It presents an estimated 15 phosphorylation sites (residues 81, 85, 126, 130, 132, 137, 174, 299, 301, 382, 384, 408, 436, 497, 499 and 522), [ 11 ] three of which have been suggested to be relevant for stimulated lipolysis through PKA phosphorylation; they correspond to PKA phosphorylation sites 1, 5 and 6, respectively. [ 12 ] A compositional bias of glutamic acid can be found between residues 307 and 316. [ 13 ] Its secondary structure has been suggested to consist exclusively of partially hydrophobic α-helices , [ 10 ] together with the corresponding coils and bends.
Whereas perilipin-1 is coded by a single gene, alternative mRNA splicing can lead to three protein isoforms (Perilipin A, B and C). Perilipin A and B share a common N-terminal region but differ in their C-terminal ones. [ 14 ] Concretely, beginning from the N-terminus of perilipin-1, a PAT domain (characteristic of its protein family) can be found, followed by an equally characteristic repeated sequence of 13 residues that forms amphipathic helices with an active role in membrane binding, [ 15 ] and a 4-helix bundle near the C-terminus. [ 16 ] In Perilipin A, lipophilic character is conferred by the slightly hydrophobic amino acids concentrated in the central 25% of the sequence , a region that anchors the protein to the core of the lipid droplet. [ 17 ]
Unlike its human ortholog, murine perilipin is composed of 517 amino acids, in the primary structure of which several regions can be identified. Three moderately hydrophobic sequences (H1, H2, H3) of 18 residues (aa 243-260), 23 residues (aa 320-332) and 16 residues (aa 349-364) can be identified in the centre of the protein, as well as an acidic region of 28 residues in which glutamic and aspartic acids together account for 19. Five sequences of 18 residues that could form amphipathic β-pleated sheets, according to a prediction made with the LOCATE program, are found between aa 111 and 182. [ original research? ] Serines at positions 81, 222, 276, 433, 492 and 517 act as phosphorylation sites (numbered 1 to 6) for PKA, [ 18 ] along with several other threonines and serines, for a total of 27 phosphorylation sites. [ 19 ]
Perilipin is a protein that coats lipid droplets (LDs) in adipocytes , [ 20 ] the fat -storing cells in adipose tissue . In fact, PLIN1 is greatly expressed in white adipocytes. [ 21 ]
It controls adipocyte lipid metabolism , [ 22 ] playing essential roles in the regulation of basal and hormonally stimulated lipolysis , [ 23 ] and also promotes the formation of large LDs, which implies an increase in the synthesis of triglycerides . [ 21 ]
In humans, Perilipin A is the most abundant protein associated with the adipocyte LDs [ 7 ] and lower PLIN1 expression is related with higher rates of lipolysis. [ 24 ]
Under basal conditions, Perilipin acts as a protective coating of LDs from the body's natural lipases , such as hormone-sensitive lipase (HSL) and adipose triglyceride lipase (ATGL), [ 25 ] [ 24 ] which break triglycerides into glycerol and free fatty acids for use in lipid metabolism. [ 6 ]
In times of energy deficit, Perilipin is hyperphosphorylated by PKA following β-adrenergic receptor activation. [ 6 ] Phosphorylated perilipin changes conformation, exposing the stored lipids to hormone-sensitive lipase-mediated lipolysis.
Specifically, in the basal state Perilipin A allows a low level of basal lipolysis [ 26 ] by reducing the access of cytosolic lipases to stored triacylglycerol in LDs. [ 23 ] It is found at their surface in a complex with CGI-58, the co-activator of ATGL. ATGL might also be in this complex but it is quiescent. [ 27 ]
Under lipolytically stimulated conditions, PKA is activated and phosphorylates up to six serine residues on Perilipin A (Ser81, 222, 276, 433, 492, and 517) and two on HSL (Ser659 and Ser660). [ 27 ] Although PKA also phosphorylates HSL, which can increase its activity, the more than 50-fold increase in fat mobilization (triggered by epinephrine ) is primarily due to Perilipin phosphorylation [ citation needed ] .
Phosphorylated HSL then translocates to the LD surface and associates with Perilipin A and adipocyte fatty acid-binding protein (AFABP). [ 27 ] Consequently, HSL gains access to its substrates in LDs, triacylglycerol (TAG) and diacylglycerol (DAG). In addition, CGI-58 separates from the LD outer layer, which leads to a redistribution of ATGL. [ 23 ] In particular, ATGL interacts with Perilipin A through phosphorylated Ser517. [ 27 ]
As a result, PKA phosphorylation brings about an enriched colocalization of HSL and ATGL, which facilitates maximal lipolysis by the two lipases. [ 23 ]
Perilipin is an important regulator of lipid storage. [ 6 ] Either overexpression or deficiency of the protein, caused by a mutation, can lead to severe health issues.
Perilipin expression is elevated in obese animals and humans. Polymorphisms in the human perilipin (PLIN) gene have been associated with variance in body-weight regulation and may be a genetic influence on obesity risk in humans. [ 28 ]
This protein can be modified by O-linked N-acetylglucosamine ( O-GlcNAc ) moieties, which are added by the enzyme O-GlcNAc transferase (OGT). An abundance of OGT obstructs lipolysis and promotes diet-induced obesity and whole-body insulin resistance. Studies also suggest that excessive adipose O-GlcNAc signaling is a molecular hallmark of obesity and diabetes in humans. [ 29 ]
Perilipin-null mice eat more food than wild-type mice, but gain one-third less fat on the same diet; perilipin-null mice are thinner, with more lean muscle mass. [ 30 ] Perilipin-null mice also exhibit enhanced leptin production and a greater tendency to develop insulin resistance than wild-type mice. Even though perilipin-null mice have less fat mass and higher insulin resistance, they do not show a full lipodystrophic phenotype . [ 31 ]
In humans, studies suggest that PLIN1 deficiency causes lipodystrophic syndromes, [ 32 ] which impair the normal accumulation of triglycerides in adipocytes and result in abnormal deposition of lipids in tissues such as skeletal muscle and liver. The storage of lipids in the liver leads to insulin resistance and hypertriglyceridemia . Affected patients are characterized by subcutaneous fat with smaller-than-normal adipocytes, macrophage infiltration and fibrosis .
These findings support a new primary form of inherited lipodystrophy and underscore the severe metabolic consequences of a defect in the formation of lipid droplets in adipose tissue.
In particular, variants 13041A>G and 14995A>T have been associated with increased risk of obesity in women and 11482G>A has been associated with decreased perilipin expression and increased lipolysis in women. [ 33 ] [ 34 ] | https://en.wikipedia.org/wiki/Perilipin-1 |
Perindopril is a medication used to treat high blood pressure , heart failure , or stable coronary artery disease . [ 3 ] [ 4 ] [ 5 ]
As a long-acting ACE inhibitor , perindopril works by inhibiting production of the vasoconstriction hormone , angiotensin , thereby relaxing blood vessels , increasing urine output, and decreasing blood volume , leading to a reduction of blood pressure. It also increases blood renin activity and decreases aldosterone secretion, causing increased urine production and excretion of sodium . [ 3 ]
As a prodrug , perindopril is hydrolyzed in the liver to its active metabolite , perindoprilat. It was patented in 1980 and approved for medical use in 1988. [ 3 ] [ 6 ]
Perindopril is taken in the form of perindopril arginine/amlodipine or perindopril erbumine. [ 1 ] Both forms are therapeutically equivalent and interchangeable, [ 1 ] [ 3 ] but the dose prescribed to achieve the same effect may differ between the two forms. [ 1 ]
Perindopril should not be used during pregnancy, as it may harm the fetus. [ 1 ] Some people may have allergic reactions to perindopril, while common side-effects may include cough, headache, dizziness, diarrhea , or upset stomach. [ 1 ]
In Australia during 2023-24, it was the fourth-most prescribed drug. [ 7 ]
Perindopril shares the indications of ACE inhibitors as a class, including essential hypertension , stable coronary artery disease (reduction of risk of cardiac events in patients with a history of myocardial infarction or revascularization ), treatment of symptomatic coronary artery disease or heart failure , and diabetic nephropathy . [ 1 ] [ 3 ] [ 8 ]
In combination with indapamide , perindopril has been shown to significantly reduce the progression of chronic kidney disease and renal complications in patients with type 2 diabetes. [ 9 ] [ 10 ] In addition, the Perindopril pROtection aGainst REcurrent Stroke Study ( PROGRESS ) found that whilst perindopril monotherapy demonstrated no significant benefit in reducing recurrent strokes when compared to placebo, the addition of low-dose indapamide to perindopril therapy was associated with larger reductions in both blood pressure and recurrent stroke risk in patients with pre-existing cerebrovascular disease , irrespective of their blood pressure. [ 11 ] [ 12 ] There is evidence to support the use of the perindopril and indapamide combination over perindopril monotherapy to prevent strokes and improve mortality in patients with a history of stroke, transient ischaemic attack or other cardiovascular disease. [ 11 ] [ 13 ]
The Anglo-Scandinavian Cardiac Outcomes Trial-Blood Pressure Lowering Arm ( ASCOT-BPLA ) was a 2005 landmark trial that compared the established combination of atenolol and bendroflumethiazide with the newer combination of amlodipine and perindopril (trade names Viacoram , AceryCal ). [ 14 ] The study of more than 19,000 patients worldwide was terminated earlier than anticipated because it clearly demonstrated a statistically significant improvement in mortality and cardiovascular outcomes with the newer treatment. The combination of amlodipine and perindopril remains in the current treatment guidelines for hypertension. [ 1 ] [ 15 ]
Perindopril may cause death or birth defects in a fetus if taken by the mother during pregnancy. [ 1 ] [ 3 ] It can be toxic to children and is not prescribed for them. [ 4 ]
It may cause allergic reactions, and may have adverse effects in people with heart, liver or kidney problems. [ 1 ]
Usually mild at the start of treatment, side effects may include cough, fatigue, headache, nausea, or upset stomach, among other minor effects. [ 1 ] [ 4 ]
Each tablet contains 2, 4, or 8 mg of the tert-butylamine salt of perindopril. [ 1 ] Perindopril is also available under the trade name Coversyl Plus , containing 4 mg of perindopril combined with 1.25 mg indapamide , a thiazide-like diuretic . [ 1 ]
In Australia, each tablet contains 2.5, 5, or 10 mg of perindopril arginine. Perindopril is also available under the trade name Coversyl Plus, containing 5 mg of perindopril arginine combined with 1.25 mg indapamide and Coversyl Plus LD , containing 2.5 mg of perindopril arginine combined with 0.625 mg indapamide.
A fixed-dose combination of 4 mg perindopril and 5 mg amlodipine , a calcium channel antagonist, has demonstrated efficacy and tolerability and is also in use. [ 1 ] [ 4 ]
Perindopril is available under the following brand names among others: [ 1 ] [ 4 ]
In July 2014, the European Commission imposed fines of €427,700,000 on Laboratoires Servier and five companies which produce generics due to Servier's abuse of their dominant market position, in breach of European Union Competition law. Servier's strategy included acquiring the principal source of generic production of perindopril and entering into several pay-for-delay agreements with potential generic competitors. [ 21 ]
Media related to Perindopril at Wikimedia Commons | https://en.wikipedia.org/wiki/Perindopril |
In pathology , perineural invasion , abbreviated PNI , refers to the invasion of cancer to the space surrounding a nerve . It is common in head and neck cancer, prostate cancer and colorectal cancer.
Unlike perineural spread (PNS), which is defined as gross tumor spread along a larger, typically named nerve that is at least partially distinct from the main tumor mass and can be seen on imaging studies, PNI is defined as tumor cells infiltrating small, unnamed nerves that can only be seen microscopically but not radiologically and are often confined to the main tumor mass. The transition from PNI to PNS is not precisely defined, but PNS is detectable on MRI and may have clinical manifestations that correlate with the affected nerve. [ 1 ]
Cancers with PNI usually have a poorer prognosis, [ 2 ] as PNI is thought to be indicative of perineural spread, which can make resection of malignant lesions more difficult. Cancer cells use nerves as routes of metastasis , which could explain why PNI is associated with poorer outcomes. [ 3 ]
In prostate cancer, PNI in needle biopsies is a poor prognostic sign; [ 2 ] however, in prostatectomy specimens it is unclear whether it carries a worse prognosis. [ 4 ]
In one study, PNI was found in approximately 90% of radical prostatectomy specimens, and PNI outside of the prostate in particular was associated with a poorer prognosis. [ 5 ] However, there is controversy over whether PNI has prognostic significance for cancer malignancy.
In perineural invasion, cancer cells proliferate around peripheral nerves and eventually invade them. Cancer cells migrate in response to different mediators released by autonomic and sensory fibers. Tumor cells secrete CCL2 and CSF-1 to accumulate endoneurial macrophages and, at the same time, release factors that stimulate perineural invasion. Schwann cells release TGFβ, increasing the aggressiveness of cancer cells through TGFβ-RI. Schwann cells drive perineural invasion: cancer cells interact directly with Schwann cells via NCAM1 to invade and migrate along nerves. [ 6 ]
PNI and PNS are considered unfavorable prognostic factors in head and neck cancer of virtually all sites, being associated with poor local and regional disease control, a higher probability of locoregional and distant metastases, disease recurrence, and lower survival rates. Although adenoid cystic carcinoma accounts for only 1-3% of head and neck tumors, it has the highest relative incidence of PNI. PNI is also commonly found in patients with salivary duct carcinomas, polymorphous low-grade adenocarcinomas, cutaneous malignancies, desmoplastic melanomas, myelomas, lymphomas, and leukemias. If PNS is not detected, there is a high risk of cancer progression. Recurrent PNS is rarely resectable, and repeated irradiation has greater toxic side effects and fewer benefits compared with initial early treatment. The trigeminal nerve and facial nerve are the most commonly affected nerves, primarily because of their extensive innervation area, although virtually any cranial nerve and its branches can provide a route for PNS. [ 1 ]
In mathematics, specifically algebraic geometry , a period or algebraic period [ 1 ] is a complex number that can be expressed as an integral of an algebraic function over an algebraic domain . The periods are a class of numbers which includes, alongside the algebraic numbers, many well known mathematical constants such as the number π . Sums and products of periods remain periods, such that the periods P {\displaystyle {\mathcal {P}}} form a ring .
Maxim Kontsevich and Don Zagier gave a survey of periods and introduced some conjectures about them.
Periods play an important role in the theory of differential equations and transcendental numbers as well as in open problems of modern arithmetical algebraic geometry. [ 2 ] They also appear when computing the integrals that arise from Feynman diagrams , and there has been intensive work trying to understand the connections. [ 3 ]
A number α {\displaystyle \alpha } is a period if it can be expressed as an integral of the form

α = ∫ P ( x 1 , … , x n ) ≥ 0 Q ( x 1 , … , x n ) d x 1 ⋯ d x n {\displaystyle \alpha =\int _{P(x_{1},\dots ,x_{n})\geq 0}Q(x_{1},\dots ,x_{n})\,dx_{1}\cdots dx_{n}}
where P {\displaystyle P} is a polynomial and Q {\displaystyle Q} a rational function on R n {\displaystyle \mathbb {R} ^{n}} with rational coefficients . [ 1 ] A complex number is a period if its real and imaginary parts are periods.
An alternative definition allows P {\displaystyle P} and Q {\displaystyle Q} to be algebraic functions ; this looks more general, but is equivalent. The coefficients of the rational functions and polynomials can also be generalised to algebraic numbers because irrational algebraic numbers are expressible in terms of areas of suitable domains.
In the other direction, Q {\displaystyle Q} can be restricted to be the constant function 1 {\displaystyle 1} or − 1 {\displaystyle -1} , by replacing the integrand with an integral of ± 1 {\displaystyle \pm 1} over a region defined by a polynomial in additional variables.
In other words, a (nonnegative) period is the volume of a region in R n {\displaystyle \mathbb {R} ^{n}} defined by polynomial inequalities with rational coefficients. [ 2 ] [ 4 ]
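Two classical illustrations of this definition: π is the area of the unit disk, a region cut out by a single polynomial inequality with rational coefficients, and log 2 is the integral of a rational function over a rational domain:

```latex
\pi = \iint_{x^2 + y^2 \le 1} \mathrm{d}x \, \mathrm{d}y ,
\qquad
\log 2 = \int_{1}^{2} \frac{\mathrm{d}x}{x} .
```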
The periods are intended to bridge the gap between the well-behaved algebraic numbers , which form a class too narrow to include many common mathematical constants, and the transcendental numbers , which are uncountable and, apart from very few specific examples, hard to describe. The latter are also not generally computable .
The ring of periods P {\displaystyle {\mathcal {P}}} lies in between the fields of algebraic numbers Q ¯ {\displaystyle \mathbb {\overline {Q}} } and complex numbers C {\displaystyle \mathbb {C} } and is countable . [ 5 ] The periods themselves are all computable, [ 6 ] and in particular definable . That is: Q ¯ ⊂ P ⊂ C {\displaystyle \mathbb {\overline {Q}} \subset {\mathcal {P}}\subset \mathbb {C} } .
Periods include those transcendental numbers that can be described in an algorithmic way and contain only a finite amount of information. [ 2 ]
The following numbers are among the ones known to be periods: [ 1 ] [ 2 ] [ 4 ] [ 7 ]
In particular: Even powers π 2 n {\displaystyle \pi ^{2n}} and Apéry's constant ζ ( 3 ) {\displaystyle \zeta (3)} .
In particular: Odd powers π 2 n + 1 {\displaystyle \pi ^{2n+1}} and Catalan's constant G {\displaystyle G} .
In particular: The Gieseking constant Cl 2 ( 1 3 π ) {\displaystyle {\text{Cl}}_{2}({\tfrac {1}{3}}\pi )} .
In particular: The perimeter P {\displaystyle P} of an ellipse with algebraic radii a {\displaystyle a} and b {\displaystyle b} .
Many of the constants known to be periods are also given by integrals of transcendental functions . Kontsevich and Zagier note that there "seems to be no universal rule explaining why certain infinite sums or integrals of transcendental functions are periods".
Kontsevich and Zagier conjectured that, if a period is given by two different integrals, then each integral can be transformed into the other using only the linearity of integrals (in both the integrand and the domain), changes of variables , and the Newton–Leibniz formula

∫ a b f ′ ( x ) d x = f ( b ) − f ( a ) {\displaystyle \int _{a}^{b}f'(x)\,dx=f(b)-f(a)}
(or, more generally, the Stokes formula ).
A useful property of algebraic numbers is that equality between two algebraic expressions can be determined algorithmically. The conjecture of Kontsevich and Zagier would imply that equality of periods is also decidable: inequality of computable reals is known to be recursively enumerable ; and conversely, if two integrals agree, then an algorithm could confirm this by trying all possible ways to transform one of them into the other.
Further open questions consist of proving which known mathematical constants do not belong to the ring of periods. An example of a real number that is not a period is given by Chaitin's constant Ω . Any other non- computable number also gives an example of a real number that is not a period. It is also possible to construct artificial examples of computable numbers which are not periods. [ 8 ] However, no computable number has been proven not to be a period other than examples artificially constructed for that purpose.
It is conjectured that 1/ π, Euler's number e and the Euler–Mascheroni constant γ are not periods. [ 2 ]
Kontsevich and Zagier expect these problems to be very hard and to remain open for a long time.
The ring of periods can be widened to the ring of extended periods P ^ {\displaystyle {\hat {\mathcal {P}}}} by adjoining the element 1/ π. [ 2 ]
Permitting the integrand Q {\displaystyle Q} to be the product of an algebraic function and the exponential of an algebraic function results in another extension: the exponential periods E P {\displaystyle {\mathcal {E}}{\mathcal {P}}} . [ 2 ] [ 4 ] [ 9 ] They also form a ring and are countable. Thus Q ¯ ⊂ P ⊆ E P ⊂ C {\displaystyle {\overline {\mathbb {Q} }}\subset {\mathcal {P}}\subseteq {\mathcal {EP}}\subset \mathbb {C} } .
The following numbers are among the ones known to be exponential periods: [ 2 ] [ 4 ] [ 10 ]
In particular: The number e {\displaystyle e} .
In particular: π {\displaystyle {\sqrt {\pi }}} . | https://en.wikipedia.org/wiki/Period_(algebraic_geometry) |
A period on the periodic table is a row of chemical elements . All elements in a row have the same number of electron shells . Each successive element in a period has one more proton than its predecessor and is less metallic. Arranged this way, elements in the same group (column) have similar chemical and physical properties , reflecting the periodic law . For example, the halogens lie in the second-to-last group ( group 17 ) and share similar properties, such as high reactivity and the tendency to gain one electron to arrive at a noble-gas electronic configuration. As of 2022 [update] , a total of 118 elements have been discovered and confirmed.
Modern quantum mechanics explains these periodic trends in properties in terms of electron shells . As atomic number increases, shells fill with electrons in approximately the order shown in the ordering rule diagram. The filling of each shell corresponds to a row in the table.
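The approximate filling order follows the Madelung rule: subshells fill in order of increasing n + ℓ, with ties broken by smaller n. A short sketch (the function name is illustrative, not from the source) that generates this order for the subshells occupied by the known elements:

```c
#include <stdio.h>
#include <string.h>

/* Write the Madelung (n + l) subshell filling order for n <= 7 into buf,
 * e.g. "1s 2s 2p 3s 3p 4s 3d ...".  Returns the number of subshells. */
int madelung_order(char *buf, size_t cap)
{
    const char labels[] = "spdf";       /* l = 0, 1, 2, 3 */
    int count = 0;
    size_t used = 0;
    for (int sum = 1; sum <= 8; sum++)          /* sum = n + l */
        for (int n = 1; n <= 7; n++) {          /* ties: smaller n first */
            int l = sum - n;
            if (l < 0 || l > 3 || l > n - 1)
                continue;                       /* not a valid subshell */
            used += snprintf(buf + used, cap - used, "%d%c ", n, labels[l]);
            count++;
        }
    return count;
}
```

Running this yields the familiar sequence 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s 5f 6d 7p, matching the row-by-row filling described above.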
In the s-block and p-block of the periodic table, elements within the same period generally do not exhibit trends and similarities in properties (vertical trends down groups are more significant). However, in the d-block , trends across periods become significant, and in the f-block elements show a high degree of similarity across periods.
There are currently seven complete periods in the periodic table, comprising the 118 known elements. Any new elements will be placed into an eighth period; see extended periodic table . The elements are colour-coded below by their block : red for the s-block, yellow for the p-block, blue for the d-block, and green for the f-block.
The first period contains fewer elements than any other, with only two, hydrogen and helium . They therefore do not follow the octet rule , but rather a duplet rule . Chemically, helium behaves like a noble gas , and thus is taken to be part of the group 18 elements . However, in terms of its electronic structure it belongs to the s-block , and is therefore sometimes classified as a group 2 element , or simultaneously both 2 and 18. Hydrogen readily loses and gains an electron, and so behaves chemically as both a group 1 and a group 17 element .
Period 2 elements involve the 2s and 2p orbitals . They include the biologically most essential elements besides hydrogen: carbon, nitrogen, and oxygen.
All period three elements occur in nature and have at least one stable isotope . All but the noble gas argon are essential to basic geology and biology.
Period 4 includes the biologically essential elements potassium and calcium , and is the first period in the d-block with the lighter transition metals . These include iron , the heaviest element forged in main-sequence stars and a principal component of the Earth, as well as other important metals such as cobalt , nickel , and copper . Almost all have biological roles.
Completing the fourth period are six p-block elements: gallium , germanium , arsenic , selenium , bromine , and krypton .
Period 5 has the same number of elements as period 4 and follows the same general structure, but with one more post-transition metal and one fewer nonmetal. Of the three heaviest elements with biological roles, two ( molybdenum and iodine ) are in this period; tungsten , in period 6, is heavier, along with several of the early lanthanides . Period 5 also includes technetium , the lightest exclusively radioactive element.
Period 6 is the first period to include the f-block , with the lanthanides (also known as the rare earth elements ), and includes the heaviest stable elements. Many of these heavy metals are toxic and some are radioactive, but platinum and gold are largely inert.
All elements of period 7 are radioactive . This period contains the heaviest element which occurs naturally on Earth, plutonium . All of the subsequent elements in the period have been synthesized artificially. Whilst five of these (from americium to einsteinium ) are now available in macroscopic quantities, most are extremely rare, having only been prepared in microgram amounts or less. Some of the later elements have only ever been identified in laboratories in quantities of a few atoms at a time.
Although the rarity of many of these elements means that experimental results are not very extensive, periodic and group trends in behaviour appear to be less well defined for period 7 than for other periods. Whilst francium and radium do show typical properties of groups 1 and 2, respectively, the actinides display a much greater variety of behaviour and oxidation states than the lanthanides . These peculiarities of period 7 may be due to a variety of factors, including a large degree of spin–orbit coupling and relativistic effects, ultimately caused by the very high positive electrical charge from their massive atomic nuclei .
No element of the eighth period has yet been synthesized. A g-block is predicted. It is not clear if all elements predicted for the eighth period are in fact physically possible. Therefore, there may not be a ninth period. | https://en.wikipedia.org/wiki/Period_(periodic_table) |
The periodatonickelates are a series of anions and salts of nickel complexed to the periodate anion. The most important of these salts are the diperiodatonickelates , in which nickel exhibits the +4 oxidation state : these are powerful oxidising agents , capable of oxidising bromate to perbromate .
The first periodatonickelates discovered were sodium nickel periodate (NaNiIO 6 · 0.5H 2 O ) and potassium nickel periodate (KNiIO 6 ·0.5H 2 O). P. Ray and B. Sarma obtained these dark purple double salts in 1949 by mixing nickel sulfate with potassium or sodium periodate in a boiling aqueous solution of an alkali persulfate salt, which served as the oxidant. [ 1 ] It is now known that ozone can replace the persulfate salt, and that similar solids exist for other alkali and ammonium cations ( Rb NiIO 6 ·0.5H 2 O, Cs NiIO 6 ·0.5H 2 O, and NH 4 NiIO 6 ·0.5H 2 O), as well as for certain other tetravalent metals (including manganese , germanium , tin and lead ). [ 2 ]
The crystalline salts are insoluble in water, acid, or base. [ 2 ] The colour is due to absorbance of visible light shorter than 800 nm, with a peak at 540 nm . [ 2 ] The crystal structure of each has space group P 312. The structure is built from hexagonal oxygen layers; in every other layer, alkali atoms fill one-third of the hexagon centers and iodine and nickel fill the remainder; in the other layers, the centers are vacant. [ 2 ]
The diperiodatonickelates, also known as dihydroxydiperiodatonickelates , [ 3 ] contain nickel in the +4 oxidation state along with two periodate anions. A solid monoperiodatonickelate salt KNiIO 6 ·0.5H 2 O will dissolve in a solution of potassium hydroxide and potassium periodate to yield a diperiodatonickelate solution. [ 4 ]
The [Ni(OH) 2 [IO 5 OH] 2 ] 6− ion can form a brown salt with sodium (Na 4 H 2 [Ni(OH) 2 [IO 5 OH] 2 ] ·6H 2 O), another acidic sodium salt (Na 5 [Ni(OH) 2 [IO 5 OH] 2 ] ·H 2 O) and an orange salt with cobalt ([Co( en ) 3 ] 2 Ni(OH) 2 [IO 5 OH] 2 ). [ 5 ] [ 6 ] Diperiodatometalates with the same formula also exist for palladium and nickel, and similar diperiodatometalates can be made for Cu , Ag , Au , Ru and Os . [ 5 ]
A diperiodatonickelate will dissolve in alkaline water. Depending on pH and concentration , the resulting solution is an equilibrium between [Ni(OH) 2 [IO 3 (OH) 3 ] 2 ] 2− and [Ni(OH) 2 [IO 3 (OH) 3 ][IO 4 (OH) 2 ]] 3− . [ 7 ] It is also a strong oxidiser, one of very few reagents able to oxidise bromate to perbromate. In the reaction, Ni IV reduces to Ni III with the release of a hydroxyl radical . The radical then oxidises bromate to a BrO 4 2− radical, which Ni III then converts to perbromate BrO 4 − . [ 7 ]
Periodic acid–Schiff ( PAS ) is a staining method used to detect polysaccharides (such as glycogen ) and mucosubstances (such as glycoproteins , glycolipids and mucins ) in tissues. The periodic acid oxidizes vicinal diols in these sugars , usually breaking the bond between two adjacent carbons not involved in the glycosidic linkage or ring closure of the monosaccharide units that make up the long polysaccharides, and creating a pair of aldehydes at the two free tips of each broken monosaccharide ring. The oxidation conditions have to be sufficiently well regulated so as not to further oxidize the aldehydes. These aldehydes then react with the Schiff reagent to give a purple-magenta color. A suitable basic stain is often used as a counterstain .
• PAS diastase stain (PAS-D) is PAS stain used in combination with diastase , an enzyme that breaks down glycogen.
• Alcian blue/periodic acid–Schiff (AB/PAS or AB-PAS) uses alcian blue before the PAS step.
PAS staining is mainly used for staining structures containing a high proportion of carbohydrate macromolecules ( glycogen , glycoprotein , proteoglycans ), typically found in e.g. connective tissues , mucus , the glycocalyx , and basal laminae .
PAS staining can be used to assist in the diagnosis of several medical conditions:
Presence of glycogen can be confirmed on a section of tissue by using diastase to digest the glycogen from a section, then comparing a diastase-digested PAS section with a normal PAS section. The diastase-negative slide will show magenta staining where glycogen is present within a section of tissue; the slide that has been treated with diastase will lack any positive PAS staining in those locations.
PAS staining is also used for staining cellulose . One example would be looking for implanted medical devices composed of nonoxidized cellulose.
If the PAS stain will be performed on tissue, the recommended fixative is 10% neutral-buffered formalin or Bouin solution . For blood smears , the recommended fixative is methanol . Glutaraldehyde is not recommended because free aldehyde groups may be available to react with the Schiff reagent , which may result in false positive staining. [ 4 ] | https://en.wikipedia.org/wiki/Periodic_acid–Schiff_stain |
Periodic boundary conditions ( PBCs ) are a set of boundary conditions which are often chosen for approximating a large (infinite) system by using a small part called a unit cell . PBCs are often used in computer simulations and mathematical models . The topology of two-dimensional PBC is equal to that of a world map of some video games; the geometry of the unit cell satisfies perfect two-dimensional tiling, and when an object passes through one side of the unit cell, it re-appears on the opposite side with the same velocity. In topological terms, the space made by two-dimensional PBCs can be thought of as being mapped onto a torus ( compactification ). The large systems approximated by PBCs consist of an infinite number of unit cells. In computer simulations, one of these is the original simulation box, and others are copies called images . During the simulation, only the properties of the original simulation box need to be recorded and propagated. The minimum-image convention is a common form of PBC particle bookkeeping in which each individual particle in the simulation interacts with the closest image of the remaining particles in the system.
One example of periodic boundary conditions can be defined according to smooth real functions ϕ : R n → R {\displaystyle \phi :\mathbb {R} ^{n}\to \mathbb {R} } by

∂ m ϕ ∂ x i m ( x 1 , … , a i , … , x n ) = ∂ m ϕ ∂ x i m ( x 1 , … , b i , … , x n ) , i = 1 , … , n {\displaystyle {\frac {\partial ^{m}\phi }{\partial x_{i}^{m}}}(x_{1},\ldots ,a_{i},\ldots ,x_{n})={\frac {\partial ^{m}\phi }{\partial x_{i}^{m}}}(x_{1},\ldots ,b_{i},\ldots ,x_{n}),\qquad i=1,\ldots ,n}
for all m = 0, 1, 2, ... and for constants a i {\displaystyle a_{i}} and b i {\displaystyle b_{i}} .
In molecular dynamics simulations and Monte Carlo molecular modeling , PBCs are usually applied to calculate properties of bulk gasses, liquids, crystals or mixtures. [ 1 ] A common application uses PBC to simulate solvated macromolecules in a bath of explicit solvent . Born–von Karman boundary conditions are periodic boundary conditions for a special system.
In electromagnetics, PBC can be applied for different mesh types to analyze the electromagnetic properties of periodical structures. [ 2 ]
Three-dimensional PBCs are useful for approximating the behavior of macro-scale systems of gases, liquids, and solids. Three-dimensional PBCs can also be used to simulate planar surfaces, in which case two-dimensional PBCs are often more suitable. Two-dimensional PBCs for planar surfaces are also called slab boundary conditions ; in this case, PBCs are used for two Cartesian coordinates (e.g., x and y), and the third coordinate (z) extends to infinity.
PBCs can be used in conjunction with Ewald summation methods (e.g., the particle mesh Ewald method) to calculate electrostatic forces in the system. However, PBCs also introduce correlational artifacts that do not respect the translational invariance of the system, [ 3 ] and require constraints on the composition and size of the simulation box.
In simulations of solid systems, the strain field arising from any inhomogeneity in the system will be artificially truncated and modified by the periodic boundary. Similarly, the wavelength of sound or shock waves and phonons in the system is limited by the box size.
In simulations containing ionic (Coulomb) interactions, the net electrostatic charge of the system must be zero to avoid summing to an infinite charge when PBCs are applied. In some applications it is appropriate to obtain neutrality by adding ions such as sodium or chloride (as counterions ) in appropriate numbers if the molecules of interest are charged. Sometimes ions are even added to a system in which the molecules of interest are neutral, to approximate the ionic strength of the solution in which the molecules naturally appear. Maintenance of the minimum-image convention also generally requires that a spherical cutoff radius for nonbonded forces be at most half the length of one side of a cubic box.

Even in electrostatically neutral systems, a net dipole moment of the unit cell can introduce a spurious bulk-surface energy, equivalent to pyroelectricity in polar crystals .

Another consequence of applying PBCs to a simulated system such as a liquid or a solid is that this hypothetical system has no contact with its “surroundings”, since it is infinite in all directions. Therefore, long-range energy contributions such as the electrostatic potential , and by extension the energies of charged particles like electrons, are not automatically aligned to experimental energy scales. Mathematically, this energy-level ambiguity corresponds to the sum of the electrostatic energy being dependent on a surface term that needs to be set by the user of the method. [ 4 ]
The size of the simulation box must also be large enough to prevent periodic artifacts from occurring due to the unphysical topology of the simulation. In a box that is too small, a macromolecule may interact with its own image in a neighboring box, which is functionally equivalent to a molecule's "head" interacting with its own "tail". This produces highly unphysical dynamics in most macromolecules, although the magnitude of the consequences and thus the appropriate box size relative to the size of the macromolecules depends on the intended length of the simulation, the desired accuracy, and the anticipated dynamics. For example, simulations of protein folding that begin from the native state may undergo smaller fluctuations, and therefore may not require as large a box, as simulations that begin from a random coil conformation. However, the effects of solvation shells on the observed dynamics – in simulation or in experiment – are not well understood. A common recommendation based on simulations of DNA is to require at least 1 nm of solvent around the molecules of interest in every dimension. [ 5 ]
An object which has passed through one face of the simulation box should re-enter through the opposite face (or, equivalently, an image of it should enter there). Evidently, a strategic decision must be made: do we (A) "fold back" particles into the simulation box when they leave it, or do we (B) let them continue (while computing interactions with the nearest images)? The decision has no effect on the course of the simulation, but if the user is interested in mean displacements, diffusion lengths, etc., the second option is preferable.
To implement a PBC algorithm, at least two steps are needed.
Restricting the coordinates is a simple operation which can be described with the following code, where x_size is the length of the box in one direction (assuming an orthogonal unit cell centered on the origin) and x is the position of the particle in the same direction:
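A minimal sketch of this operation in C++ (the function name, and the assumption that a particle drifts less than one box length per time step, are ours, not from the source):

```cpp
// Fold a coordinate back into a box of length x_size centered on the origin,
// i.e. into the half-open interval [-x_size/2, x_size/2).  A single
// conditional per side suffices only if a particle moves less than one box
// length per time step; otherwise the fold must be applied repeatedly.
double wrap_coordinate(double x, double x_size) {
    if (x <  -x_size * 0.5) x += x_size;
    if (x >=  x_size * 0.5) x -= x_size;
    return x;
}
```

The same fold is applied independently to y and z in a three-dimensional box.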
Distance and vector between objects should obey the minimum image criterion.
This can be implemented according to the following code (in the case of a one-dimensional system where dx is the distance direction vector from object i to object j):
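A sketch of the one-dimensional minimum-image correction (the function name is an assumption):

```cpp
// Minimum-image convention in one dimension: map dx = x[j] - x[i] to the
// separation between i and the nearest periodic image of j.  Valid when
// both coordinates already lie inside the box, so that |dx| < x_size.
double minimum_image(double dx, double x_size) {
    if (dx >   x_size * 0.5) dx -= x_size;
    if (dx <= -x_size * 0.5) dx += x_size;
    return dx;
}
```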
For three-dimensional PBCs, both operations should be repeated in all 3 dimensions.
These operations can be written in a much more compact form for orthorhombic cells if the origin is shifted to a corner of the box. Then we have, in one dimension, for positions and distances respectively:
Assuming an orthorhombic simulation box with the origin at the lower left forward corner, the minimum image convention for effective particle distances can be implemented with the "nearest integer" function, here as C/C++ code:
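A sketch of both corner-origin operations (function names are assumptions; std::nearbyint rounds to the nearest integer under the current rounding mode, which is round-to-nearest by default):

```cpp
#include <cmath>

// Corner-origin convention: positions are restricted to [0, x_size) with
// fmod, and the minimum image follows in a single step from the
// "nearest integer" function.
double fold_position(double x, double x_size) {
    x = std::fmod(x, x_size);
    if (x < 0.0) x += x_size;   // std::fmod keeps the sign of x
    return x;
}

double min_image_nearbyint(double dx, double x_size) {
    return dx - x_size * std::nearbyint(dx / x_size);
}
```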
The fastest way of carrying out this operation depends on the processor architecture. If the sign of dx is not relevant, a method that takes the absolute value first and truncates to an integer in place of the library rounding call was found to be fastest on x86-64 processors in 2013. [ 6 ]
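The cited benchmark's exact code is not reproduced here; the branch-free sketch below illustrates the kind of fabs-plus-truncation method meant (the function name and the precomputed reciprocal argument are assumptions):

```cpp
#include <cmath>

// Branch-free minimum-image magnitude, for when only |dx| matters (e.g. in
// a squared distance).  r_size is the precomputed reciprocal 1.0 / x_size;
// adding 0.5 before truncation turns the cast into round-to-nearest for the
// non-negative value fabs(dx).
double min_image_abs(double dx, double x_size, double r_size) {
    dx = std::fabs(dx);
    dx -= static_cast<int>(dx * r_size + 0.5) * x_size;
    return std::fabs(dx);
}
```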
For non-orthorhombic cells the situation is more complicated. [ 7 ]
In simulations of ionic systems, more complicated operations may be needed to handle the long-range Coulomb interactions spanning several box images, for instance Ewald summation .
PBC requires the unit cell to be a shape that will tile perfectly into a three-dimensional crystal. Thus, a spherical or elliptical droplet cannot be used. A cube or rectangular prism is the most intuitive and common choice, but can be computationally expensive due to unnecessary amounts of solvent molecules in the corners, distant from the central macromolecules. A common alternative that requires less volume is the truncated octahedron .
For simulations in 2D and 3D space, the cubic periodic boundary condition is most commonly used, since it is the simplest to code. In computer simulations of high-dimensional systems, however, the hypercubic periodic boundary condition can be less efficient because the corners occupy most of the space. In general dimension, the unit cell can be viewed as the Wigner-Seitz cell of a certain lattice packing . [ 8 ] For example, the hypercubic periodic boundary condition corresponds to the hypercubic lattice packing. It is then preferable to choose a unit cell which corresponds to the densest packing in that dimension. In 4D this is the D4 lattice ; in 8 dimensions, the E8 lattice . The implementation of these high-dimensional periodic boundary conditions is equivalent to error correction code approaches in information theory . [ 9 ]
Under periodic boundary conditions, the linear momentum of the system is conserved, but angular momentum is not. The conventional explanation of this fact is based on Noether's theorem , which states that conservation of angular momentum follows from rotational invariance of the Lagrangian . However, this approach was shown not to be consistent: it fails to explain the absence of conservation of angular momentum of a single particle moving in a periodic cell. [ 10 ] The Lagrangian of the particle is constant and therefore rotationally invariant, while the angular momentum of the particle is not conserved. This contradiction is caused by the fact that Noether's theorem is usually formulated for closed systems. The periodic cell exchanges mass, momentum, angular momentum, and energy with the neighboring cells.
When applied to the microcanonical ensemble (constant particle number, volume, and energy, abbreviated NVE), using PBC rather than reflecting walls slightly alters the sampling of the simulation due to the conservation of total linear momentum and the position of the center of mass; this ensemble has been termed the " molecular dynamics ensemble" [ 11 ] or the NVEPG ensemble. [ 12 ] These additional conserved quantities introduce minor artifacts related to the statistical mechanical definition of temperature , the departure of the velocity distributions from a Boltzmann distribution , and violations of equipartition for systems containing particles with heterogeneous masses . The simplest of these effects is that a system of N particles will behave, in the molecular dynamics ensemble, as a system of N-1 particles. These artifacts have quantifiable consequences for small toy systems containing only perfectly hard particles; they have not been studied in depth for standard biomolecular simulations, but given the size of such systems, the effects will be largely negligible. [ 12 ]
In mathematics , an infinite periodic continued fraction is a simple continued fraction that can be placed in the form
where the initial block [a0; a1, ..., ak] of k+1 partial denominators is followed by a block [ak+1, ak+2, ..., ak+m] of m partial denominators that repeats ad infinitum. For example, √2 can be expanded to the periodic continued fraction [1; 2, 2, 2, ...].
This article considers only the case of periodic regular continued fractions . In other words, the remainder of this article assumes that all the partial denominators a i ( i ≥ 1) are positive integers. The general case, where the partial denominators a i are arbitrary real or complex numbers, is treated in the article convergence problem .
Since all the partial numerators in a regular continued fraction are equal to unity we can adopt a shorthand notation in which the continued fraction shown above is written as
where, in the second line, a vinculum marks the repeating block. [ 1 ] Some textbooks use the notation
where the repeating block is indicated by dots over its first and last terms. [ 2 ]
If the initial non-repeating block is not present – that is, if k = -1, a 0 = a m and
the regular continued fraction x is said to be purely periodic . For example, the regular continued fraction [1; 1, 1, 1, ...] of the golden ratio φ is purely periodic, while the regular continued fraction [1; 2, 2, 2, ...] of √2 is periodic, but not purely periodic. However, the regular continued fraction [2; 2, 2, 2, ...] of the silver ratio σ = √2 + 1 is purely periodic.
Periodic continued fractions are in one-to-one correspondence with the real quadratic irrationals . The correspondence is explicitly provided by Minkowski's question-mark function . That article also reviews tools that make it easy to work with such continued fractions. Consider first the purely periodic part
This can, in fact, be written as
with the α , β , γ , δ {\displaystyle \alpha ,\beta ,\gamma ,\delta } being integers, and satisfying α δ − β γ = 1. {\displaystyle \alpha \delta -\beta \gamma =1.} Explicit values can be obtained by writing
which is termed a "shift", so that
and similarly a reflection, given by
so that T 2 = I {\displaystyle T^{2}=I} . Both of these matrices are unimodular , arbitrary products remain unimodular. Then, given x {\displaystyle x} as above, the corresponding matrix is of the form [ 3 ]
and one has
as the explicit form. As all of the matrix entries are integers, this matrix belongs to the modular group S L ( 2 , Z ) . {\displaystyle SL(2,\mathbb {Z} ).}
A quadratic irrational number is an irrational real root of the quadratic equation
where the coefficients a , b , and c are integers, and the discriminant , b 2 − 4 a c {\displaystyle b^{2}-4ac} , is greater than zero. By the quadratic formula , every quadratic irrational can be written in the form
where P , D , and Q are integers, D > 0 is not a perfect square (but not necessarily square-free), and Q divides the quantity P² − D (for example (6 + √8)/4). Such a quadratic irrational may also be written in another form with a square root of a square-free number (for example (3 + √2)/2), as explained for quadratic irrationals .
By considering the complete quotients of periodic continued fractions, Euler was able to prove that if x is a regular periodic continued fraction, then x is a quadratic irrational number. The proof is straightforward. From the fraction itself, one can construct the quadratic equation with integral coefficients that x must satisfy.
Lagrange proved the converse of Euler's theorem: if x is a quadratic irrational, then the regular continued fraction expansion of x is periodic. [ 4 ] Given a quadratic irrational x one can construct m different quadratic equations, each with the same discriminant, that relate the successive complete quotients of the regular continued fraction expansion of x to one another. Since there are only finitely many of these equations (the coefficients are bounded), the complete quotients (and also the partial denominators) in the regular continued fraction that represents x must eventually repeat.
The quadratic surd ζ = (P + √D)/Q is said to be reduced if ζ > 1 and its conjugate η = (P − √D)/Q satisfies the inequalities −1 < η < 0. For instance, the golden ratio φ = (1 + √5)/2 = 1.618033... is a reduced surd because it is greater than one and its conjugate (1 − √5)/2 = −0.618033... is greater than −1 and less than zero. On the other hand, the square root of two, √2 = (0 + √8)/2, is greater than one but is not a reduced surd because its conjugate −√2 = (0 − √8)/2 is less than −1.
Galois proved that the regular continued fraction which represents a quadratic surd ζ is purely periodic if and only if ζ is a reduced surd. In fact, Galois showed more than this. He also proved that if ζ is a reduced quadratic surd and η is its conjugate, then the continued fractions for ζ and for (−1/η) are both purely periodic, and the repeating block in one of those continued fractions is the mirror image of the repeating block in the other. In symbols we have
where ζ is any reduced quadratic surd, and η is its conjugate.
From these two theorems of Galois a result already known to Lagrange can be deduced. If r > 1 is a rational number that is not a perfect square, then
In particular, if n is any non-square positive integer, the regular continued fraction expansion of √ n contains a repeating block of length m , in which the first m − 1 partial denominators form a palindromic string.
By analyzing the sequence of combinations
that can possibly arise when ζ = (P + √D)/Q is expanded as a regular continued fraction, Lagrange showed that the largest partial denominator a i in the expansion is less than 2√D, and that the length of the repeating block is less than 2 D .
More recently, sharper arguments [ 5 ] [ 6 ] [ 7 ] based on the divisor function have shown that the length of the repeating block for a quadratic surd of discriminant D is on the order of O ( D ln D ) . {\displaystyle {\mathcal {O}}({\sqrt {D}}\ln {D}).}
The following iterative algorithm [ 8 ] can be used to obtain the continued fraction expansion in canonical form ( S is any natural number that is not a perfect square ):
Notice that m n , d n , and a n are always integers.
The algorithm terminates when this triplet is the same as one encountered before.
The algorithm can also terminate on a i when a i = 2 a 0 , [ 9 ] which is easier to implement.
The expansion will repeat from then on. The sequence [ a 0 ; a 1 , a 2 , a 3 , … ] {\displaystyle [a_{0};a_{1},a_{2},a_{3},\dots ]} is the continued fraction expansion:
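The steps above can be sketched directly (the function name is an assumption; termination uses the a i = 2 a 0 test mentioned above):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Canonical continued fraction of sqrt(S) for non-square S: returns a0 and
// the repeating block [a1, ..., am], using the recurrences
//   m_{n+1} = d_n * a_n - m_n
//   d_{n+1} = (S - m_{n+1}^2) / d_n
//   a_{n+1} = floor((a0 + m_{n+1}) / d_{n+1})
// and stopping when a_n == 2 * a0 (the end of the period).
std::pair<long, std::vector<long>> sqrt_cfrac(long S) {
    long a0 = static_cast<long>(std::sqrt(static_cast<double>(S)));
    while (a0 * a0 > S) --a0;               // guard against sqrt rounding
    while ((a0 + 1) * (a0 + 1) <= S) ++a0;
    long m = 0, d = 1, a = a0;
    std::vector<long> period;
    do {
        m = d * a - m;
        d = (S - m * m) / d;
        a = (a0 + m) / d;
        period.push_back(a);
    } while (a != 2 * a0);
    return {a0, period};
}
```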
To obtain √ 114 as a continued fraction, begin with m 0 = 0; d 0 = 1; and a 0 = 10 (10 2 = 100 and 11 2 = 121 > 114 so 10 chosen).
So, m 1 = 10; d 1 = 14; and a 1 = 1.
Next, m 2 = 4; d 2 = 7; and a 2 = 2.
Now, loop back to the second equation above.
Consequently, the simple continued fraction for the square root of 114 is [10; 1, 2, 10, 2, 1, 20, 1, 2, 10, 2, 1, 20, ...], in which the block 1, 2, 10, 2, 1, 20 repeats.
√ 114 is approximately 10.67707 82520. After one expansion of the repetend, the continued fraction yields the rational fraction 21194/1985, whose decimal value is approximately 10.67707 80856, a relative error of 0.0000016%, or 1.6 parts in 100,000,000.
A more rapid method is to evaluate its generalized continued fraction . From the formula derived there :
and the fact that 114 is 2/3 of the way between 10 2 =100 and 11 2 =121 results in
which is simply the aforementioned [10; 1, 2, 10, 2, 1, 20, 1, 2] evaluated at every third term. Combining pairs of fractions produces
which is now [10; 1, 2] followed by the repeating block 10, 2, 1, 20, 1, 2, evaluated at the third term and every six terms thereafter.
Periodic counter-current chromatography ( PCC ) is a method for running affinity chromatography in a quasi-continuous manner. Today, the process is mainly employed for the purification of antibodies in the biopharmaceutical industry [ 1 ] as well as in research and development. When purifying antibodies, protein A is used as affinity matrix. However, periodic counter-current processes can be applied to any affinity type chromatography. [ 2 ]
In conventional affinity chromatography , a single chromatography column is loaded with feed material until just before the point at which the target material (product) can no longer be retained by the affinity material.
The resin with the adsorbed product on it is then washed to remove impurities. Finally, the pure product is eluted with a different buffer. Notably, if too much feed material is loaded onto the column, the product can break through and product is consequently lost. Therefore, it is very important to only partially load the column to maximize the yield.
Periodic counter-current chromatography puts this problem aside by utilizing more than one column. PCC processes can be run with any number of columns, starting from two. [ 3 ] The following paragraph will explain a two-column version of PCC, but other protocols with more columns rely on the same principles (see below).
A diagram depicting the individual process steps is shown on the right. In Step 1, the so-called sequential loading phase, columns 1 and 2 are interconnected. Column 1 is fully loaded with sample (red) while its breakthrough is captured on column 2. In Step 2, column 1 is washed, eluted, cleaned and re-equilibrated while loading separately continues on column 2. In Step 3, after regeneration of column 1, the columns are again inter-connected and column 2 is fully loaded while its breakthrough is captured on column 1. Finally, in Step 4 column 2 is washed, eluted, cleaned and re-equilibrated while loading continues independently on column 1. This cyclic process is repeated in a continuous way.
Several variations of periodic counter-current chromatography with more than two columns exist. In these cases, additional columns are either placed within the feed stream during loading, having the same effect as using longer columns. Alternatively, additional columns can be kept in an unoccupied stand-by mode during loading. This mode offers additional assurance that the main process is not influenced by washing and cleaning protocols, albeit in practice this is rarely required. On the other hand, the underutilized columns reduce the theoretical maximum productivity for such processes. Generally, the advantages and disadvantages of different multi-column protocols are the subject of debate. [ 4 ] However, without a doubt, compared to single column batch processes, periodic counter-current processes provide significantly increased productivity.
On the time scale of continuous chromatography runs, it is fairly common to observe changes in important process parameters, such as column health, buffer quality, feed titer (concentration) or feed composition. Such changes result in an altered maximum column capacity, relative to the amount of loaded feed material. In order to achieve a steady quality and yield for each process cycle, the timing of the individual process steps therefore has to be adjusted. Manual changes are in principle conceivable, but rather impractical. More commonly, dynamic process control algorithms monitor the process parameters and apply changes as needed automatically.
There are two different operating modes for dynamic process controllers in use today (see Figure on the right).
The first one, called DeltaUV, monitors the difference between two signals from detectors situated before and after the first column. During initial loading, there is a large difference between the two signals, but it is diminishing as the impurities make their way through the column. Once the column is fully saturated with impurities and only additional product is being held back, the difference between the signals reaches a constant value. As long as the product is completely being captured on the column, the difference between the signals will remain constant. As soon as some of the product breaks through the column (compare above), the difference diminishes. Thus, the timing and amount of product breakthrough can be determined.
The second possibility, called AutomAb, requires only the signal of a single detector situated behind the first column. During initial loading, the signal increases, as more and more impurities make their way through the column. When the column is saturated with impurities and as long as the product is completely being captured on the column, the signal then remains constant. As soon as some of the product breaks through the column (compare above), the signal increases again. Thus, the timing and amount of product breakthrough can again be determined.
Both approaches work equally well in theory. In practice, the requirement for two synchronized signals, and the exposure of one detector to unpurified feed material, make the DeltaUV approach less reliable than AutomAb.
As of 2017, Cytiva holds patents around three-column periodic counter-current chromatography; this technology is used in their Äkta PCC instrument. Likewise, ChromaCon holds patents for an optimized two-column version (CaptureSMB). CaptureSMB is used in ChromaCon 's Contichrom CUBE and, under license, in YMC's Ecoprime Twin systems. Additional manufacturers of systems capable of periodic counter-current chromatography include Novasep and Pall .
A periodic function , also called a periodic waveform (or simply periodic wave ), is a function that repeats its values at regular intervals or periods . The repeatable part of the function or waveform is called a cycle . [ 1 ] For example, the trigonometric functions , which repeat at intervals of 2 π {\displaystyle 2\pi } radians , are periodic functions. Periodic functions are used throughout science to describe oscillations , waves , and other phenomena that exhibit periodicity . Any function that is not periodic is called aperiodic .
A function f is said to be periodic if, for some nonzero constant P , it is the case that
for all values of x in the domain. A nonzero constant P for which this is the case is called a period of the function. If there exists a least positive [ 2 ] constant P with this property, it is called the fundamental period (also primitive period , basic period , or prime period .) Often, "the" period of a function is used to mean its fundamental period. A function with period P will repeat on intervals of length P , and these intervals are sometimes also referred to as periods of the function.
Geometrically, a periodic function can be defined as a function whose graph exhibits translational symmetry , i.e. a function f is periodic with period P if the graph of f is invariant under translation in the x -direction by a distance of P . This definition of periodicity can be extended to other geometric shapes and patterns, as well as be generalized to higher dimensions, such as periodic tessellations of the plane. A sequence can also be viewed as a function defined on the natural numbers , and for a periodic sequence these notions are defined accordingly.
The sine function is periodic with period 2 π {\displaystyle 2\pi } , since
for all values of x {\displaystyle x} . This function repeats on intervals of length 2 π {\displaystyle 2\pi } (see the graph to the right).
Everyday examples are seen when the variable is time ; for instance the hands of a clock or the phases of the moon show periodic behaviour. Periodic motion is motion in which the position(s) of the system are expressible as periodic functions, all with the same period.
For a function on the real numbers or on the integers , that means that the entire graph can be formed from copies of one particular portion, repeated at regular intervals.
A simple example of a periodic function is the function f {\displaystyle f} that gives the " fractional part " of its argument. Its period is 1. In particular,
The graph of the function f {\displaystyle f} is the sawtooth wave .
The trigonometric functions sine and cosine are common periodic functions, with period 2 π {\displaystyle 2\pi } (see the figure on the right). The subject of Fourier series investigates the idea that an 'arbitrary' periodic function is a sum of trigonometric functions with matching periods.
According to the definition above, some exotic functions, for example the Dirichlet function , are also periodic; in the case of Dirichlet function, any nonzero rational number is a period.
Using complex variables we have the common period function:
Since the cosine and sine functions are both periodic with period 2 π {\displaystyle 2\pi } , the complex exponential is made up of cosine and sine waves. This means that Euler's formula (above) has the property such that if L {\displaystyle L} is the period of the function, then
A function whose domain is the complex numbers can have two incommensurate periods without being constant. The elliptic functions are such functions. ("Incommensurate" in this context means not real multiples of each other.)
Periodic functions can take on values many times. More specifically, if a function f {\displaystyle f} is periodic with period P {\displaystyle P} , then for all x {\displaystyle x} in the domain of f {\displaystyle f} and all positive integers n {\displaystyle n} ,
If f(x) is a function with period P , then f(ax), where a is a non-zero real number such that ax is within the domain of f , is periodic with period P/a. For example, f(x) = sin(x) has period 2π and, therefore, sin(5x) will have period 2π/5.
Some periodic functions can be described by Fourier series . For instance, for L 2 functions , Carleson's theorem states that they have a pointwise ( Lebesgue ) almost everywhere convergent Fourier series . Fourier series can only be used for periodic functions, or for functions on a bounded (compact) interval. If f {\displaystyle f} is a periodic function with period P {\displaystyle P} that can be described by a Fourier series, the coefficients of the series can be described by an integral over an interval of length P {\displaystyle P} .
Any function that consists only of periodic functions with the same period is also periodic (with period equal or smaller), including:
One subset of periodic functions is that of antiperiodic functions . This is a function f such that f(x + P) = −f(x) for all x . For example, the sine and cosine functions are π-antiperiodic and 2π-periodic. While a P -antiperiodic function is a 2 P -periodic function, the converse is not necessarily true. [ 3 ]
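These claims are easy to check numerically; the helper below is an assumption, not part of the source:

```cpp
#include <cmath>
#include <functional>

// f is P-antiperiodic at x if f(x + P) = -f(x).  Applying the antiperiod
// twice exhibits 2P-periodicity: f(x + 2P) = -f(x + P) = f(x).
bool antiperiodic_at(const std::function<double(double)>& f, double P, double x) {
    return std::fabs(f(x + P) + f(x)) < 1e-12;
}
```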
A further generalization appears in the context of Bloch's theorems and Floquet theory , which govern the solution of various periodic differential equations. In this context, the solution (in one dimension) is typically a function of the form
where k {\displaystyle k} is a real or complex number (the Bloch wavevector or Floquet exponent ). Functions of this form are sometimes called Bloch-periodic in this context. A periodic function is the special case k = 0 {\displaystyle k=0} , and an antiperiodic function is the special case k = π / P {\displaystyle k=\pi /P} . Whenever k P / π {\displaystyle kP/\pi } is rational, the function is also periodic.
In signal processing one encounters the problem that Fourier series represent periodic functions and that Fourier series satisfy convolution theorems (i.e. convolution of Fourier series corresponds to multiplication of the represented periodic functions, and vice versa), but periodic functions cannot be convolved with the usual definition, since the integrals involved diverge. A possible way out is to define a periodic function on a bounded but periodic domain. To this end one can use the notion of a quotient space :
That is, each element in R / Z {\displaystyle {\mathbb {R} /\mathbb {Z} }} is an equivalence class of real numbers that share the same fractional part . Thus a function like f : R / Z → R {\displaystyle f:{\mathbb {R} /\mathbb {Z} }\to \mathbb {R} } is a representation of a 1-periodic function.
Consider a real waveform consisting of superimposed frequencies, expressed as a set of ratios to a fundamental frequency f: F = 1⁄f [f 1 f 2 f 3 ... f N ], where all non-zero elements are ≥ 1 and at least one element of the set is 1. To find the period T, first find the least common denominator (LCD) of all the elements in the set. The period is then T = LCD⁄f. Note that for a simple sinusoid, T = 1⁄f; the LCD can therefore be seen as a periodicity multiplier.
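The procedure above can be sketched as follows (the function name is an assumption; each frequency ratio is passed as an exact integer fraction p/q, since floating-point values cannot be reliably tested for rationality):

```cpp
#include <numeric>
#include <utility>
#include <vector>

// Least common denominator of a set of frequency ratios p/q: the smallest
// k such that k * p / q is an integer for every ratio.  After reducing each
// fraction to lowest terms, that k is the lcm of the denominators, and the
// period of the superposed waveform is T = k / f.
long long least_common_denominator(std::vector<std::pair<long long, long long>> ratios) {
    long long k = 1;
    for (auto& [p, q] : ratios) {
        long long g = std::gcd(p, q);  // reduce p/q to lowest terms
        k = std::lcm(k, q / g);
    }
    return k;
}
```

For example, F = [1, 3/2] gives an LCD of 2, so the combined waveform repeats every two fundamental periods.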
If no least common denominator exists, for instance if one of the above elements were irrational, then the wave would not be periodic. [ 4 ]
In crystallography , a periodic graph or crystal net is a three-dimensional periodic graph , i.e., a three-dimensional Euclidean graph whose vertices or nodes are points in three-dimensional Euclidean space , and whose edges (or bonds or spacers) are line segments connecting pairs of vertices, periodic in three linearly independent axial directions. There is usually an implicit assumption that the set of vertices are uniformly discrete , i.e., that there is a fixed minimum distance between any two vertices. The vertices may represent positions of atoms or complexes or clusters of atoms such as single-metal ions , molecular building blocks, or secondary building units , while each edge represents a chemical bond or a polymeric ligand .
Although the notion of a periodic graph or crystal net is ultimately mathematical (actually a crystal net is nothing but a periodic realization of an abelian covering graph over a finite graph [ 1 ] ), and is closely related to that of a Tessellation of space (or honeycomb) in the theory of polytopes and similar areas, much of the contemporary effort in the area is motivated by crystal engineering and prediction (design) , including metal-organic frameworks (MOFs) and zeolites .
A crystal net is an infinite molecular model of a crystal. Similar models existed in Antiquity , notably the atomic theory associated with Democritus , which was criticized by Aristotle because such a theory entails a vacuum, which Aristotle believed nature abhors . Modern atomic theory traces back to Johannes Kepler and his work on geometric packing problems . Until the twentieth century, graph-like models of crystals focused on the positions of the (atomic) components, and these pre-20th century models were the focus of two controversies in chemistry and materials science.
The two controversies were (1) the controversy over Robert Boyle ’s corpuscular theory of matter, which held that all material substances were composed of particles, and (2) the controversy over whether crystals were minerals or some kind of vegetative phenomenon. [ 2 ] During the eighteenth century, Kepler, Nicolas Steno , René Just Haüy , and others gradually associated the packing of Boyle-type corpuscular units into arrays with the apparent emergence of polyhedral structures resembling crystals as a result. During the nineteenth century, there was considerably more work done on polyhedra and also of crystal structure , notably in the derivation of the Crystallographic groups based on the assumption that a crystal could be regarded as a regular array of unit cells . During the early twentieth century, the physics and chemistry community largely accepted Boyle's corpuscular theory of matter—by now called the atomic theory—and X-ray crystallography was used to determine the position of the atomic or molecular components within the unit cells (by the early twentieth century, unit cells were regarded as physically meaningful).
However, despite the growing use of stick-and-ball molecular models , the use of graphical edges or line segments to represent chemical bonds in specific crystals has become popular more recently, and the publication of [ 3 ] encouraged efforts to determine graphical structures of known crystals, to generate crystal nets of as yet unknown crystals, and to synthesize crystals of these novel crystal nets. The coincident expansion of interest in tilings and tessellations , especially those modeling quasicrystals , and the development of modern nanotechnology , all facilitated by the dramatic increase in computational power, enabled the development of algorithms from computational geometry for the construction and analysis of crystal nets. Meanwhile, the ancient association between models of crystals and tessellations has expanded with algebraic topology . There is also a thread of interest in the very-large-scale integration (VLSI) community for using these crystal nets as circuit designs. [ 4 ]
A Euclidean graph in three-dimensional space is a pair ( V , E ), where V is a set of vertices (sometimes called points or nodes) and E is a set of edges (sometimes called bonds or spacers) where each edge joins two vertices. There is a tendency in the polyhedral and chemical literature to refer to geometric graphs as nets (contrast with polyhedral nets ), and the nomenclature in the chemical literature differs from that of graph theory. [ 5 ]
A symmetry of a Euclidean graph is an isometry of the underlying Euclidean space whose restriction to the graph is an automorphism ; the symmetry group of the Euclidean graph is the group of its symmetries. A Euclidean graph in three-dimensional Euclidean space is periodic if there exist three linearly independent translations whose restrictions to the net are symmetries of the net. Often (and always, if one is dealing with a crystal net), the periodic net has finitely many orbits, and is thus uniformly discrete in that there exists a minimum distance between any two vertices.
The result is a three-dimensional periodic graph as a geometric object.
The resulting crystal net will induce a lattice of vectors so that given three vectors that generate the lattice, those three vectors will bound a unit cell , i.e. a parallelepiped which, placed anywhere in space, will enclose a fragment of the net that repeats in the directions of the three axes.
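The unit-cell construction can be made concrete with a short sketch. The lattice vectors and one-vertex motif below are illustrative (a primitive cubic net), not taken from any particular crystal: translating the motif by integer combinations of the three generating vectors enumerates the vertices of the repeating net.

```python
# Sketch: enumerating vertices of a periodic net from one unit cell.
# The lattice vectors and motif are illustrative (a primitive cubic net
# with a single vertex per cell), not any specific crystal.

def net_vertices(lattice, motif, reps):
    """List vertex positions for reps[i] copies of the cell along each axis."""
    points = []
    for i in range(reps[0]):
        for j in range(reps[1]):
            for k in range(reps[2]):
                for (mx, my, mz) in motif:
                    points.append((
                        mx + i * lattice[0][0] + j * lattice[1][0] + k * lattice[2][0],
                        my + i * lattice[0][1] + j * lattice[1][1] + k * lattice[2][1],
                        mz + i * lattice[0][2] + j * lattice[1][2] + k * lattice[2][2],
                    ))
    return points

cubic = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # three generating vectors
motif = [(0.0, 0.0, 0.0)]                   # one vertex per cell
print(len(net_vertices(cubic, motif, (3, 3, 3))))  # 27 vertices in a 3x3x3 block
```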
Two vertices (or edges) of a periodic graph are symmetric if they are in the same orbit of the symmetry group of the graph; in other words, two vertices (or edges) are symmetric if there is a symmetry of the net that moves one onto the other. In chemistry, there is a tendency to refer to orbits of vertices or edges as “kinds” of vertices or edges, with the recognition that from any two vertices or any two edges (similarly oriented) of the same orbit, the geometric graph “looks the same”. Finite colorings of vertices and edges (where symmetries are to preserve colorings) may be employed.
The symmetry group of a crystal net will be a (group of restrictions of a) crystallographic space group , and many of the most common crystals are of very high symmetry, i.e. very few orbits. A crystal net is uninodal if it has one orbit of vertices (if the vertices were colored and the symmetries preserve colorings, this would require that a corresponding crystal have atoms of one element or molecular building blocks of one compound – but not vice versa, for it is possible to have a crystal of one element but with several orbits of vertices). Crystals with uninodal crystal nets include cubic diamond and some representations of quartz crystals. Uninodality corresponds with isogonality in geometry and vertex-transitivity in graph theory, and produces examples of objective structures. [ 6 ] A crystal net is binodal if it has two orbits of vertices; crystals with binodal crystal nets include boracite and anatase . It is edge-transitive or isotoxal if it has one orbit of edges; crystals with edge-transitive crystal nets include boracite but not anatase – which has two orbits of edges. [ 7 ]
In the geometry of crystal nets, one can treat edges as line segments. For example, in a crystal net, it is presumed that edges do not “collide” in the sense that when treating them as line segments, they do not intersect. Several polyhedral constructions can be derived from crystal nets. For example, a vertex figure can be obtained by subdividing each edge (treated as a line segment) by the insertion of subdividing points, and then the vertex figure of a given vertex is the convex hull of the adjacent subdividing points (i.e., the convex polyhedron whose vertices are the adjacent subdividing points).
Another polyhedral construction is to determine the neighborhood of a vertex in the crystal net. One application is to define an energy function as a (possibly weighted) sum of squares of distances from vertices to their neighbors; with respect to this energy function, the net is in equilibrium if each vertex is positioned at the centroid of its neighborhood. [ 8 ] This is the basis of the crystal net identification program SYSTRE. [ 9 ] (Mathematicians [ 10 ] use the term "harmonic realizations" instead of "crystal nets in equilibrium positions" because the positions are characterized by the discrete Laplace equation; they also introduced the notion of standard realizations, which are special harmonic realizations characterized by a certain minimal principle as well; see [ 11 ].) Some crystal nets are isomorphic to crystal nets in equilibrium positions, and since an equilibrium position is a normal form , the crystal net isomorphism problem (i.e., the query whether two given crystal nets are isomorphic as graphs; not to be confused with crystal isomorphism ) is readily computed even though, as a subsumption of the graph isomorphism problem , it is apparently computationally difficult in general.
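The equilibrium condition (each vertex at the centroid of its neighborhood) can be illustrated on a toy finite graph. This is only a sketch of the idea behind programs such as SYSTRE, not the program itself; the chain, its pinned endpoints, and the starting positions are all invented for illustration.

```python
# Toy illustration of an equilibrium (harmonic) placement: each free
# vertex is repeatedly moved to the centroid of its neighbours until
# positions converge.  A finite chain with pinned endpoints stands in
# for the periodic case; the graph and coordinates are illustrative.

def relax(pos, edges, pinned, iters=2000):
    nbrs = {v: [] for v in pos}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(iters):
        for v in pos:
            if v not in pinned:
                pos[v] = sum(pos[u] for u in nbrs[v]) / len(nbrs[v])
    return pos

# Path 0-1-2-3-4 with the end vertices pinned at x=0 and x=4.
pos = {0: 0.0, 1: 0.3, 2: 3.1, 3: 1.7, 4: 4.0}
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
relax(pos, edges, pinned={0, 4})
print(round(pos[2], 6))  # the middle vertex settles at 2.0
```

In the equilibrium position each interior vertex ends up evenly spaced, the discrete analogue of a harmonic (Laplace-equation) solution.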
It is conjectured [ 12 ] that crystal nets may minimize entropy in the following sense. Suppose one is given an ensemble of uniformly discrete Euclidean graphs that fill space, with vertices representing atoms or molecular building blocks and with edges representing bonds or ligands, extending through all space to represent a solid. For some restrictions, there may be a unique Euclidean graph that minimizes a reasonably defined energy function, and the conjecture is that that Euclidean graph may necessarily be periodic. This question is still open, but some researchers observe crystal nets of high symmetry tending to predominate observed Euclidean graphs derived from some classes of materials. [ 13 ] [ 14 ]
Historically, crystals were developed by experimentation, currently formalized as combinatorial chemistry , but one contemporary desideratum is the synthesis of materials designed in advance, and one proposal is to design crystals (the designs being crystal nets, perhaps represented as one unit cell of a crystal net) and then synthesize them from the design. [ 15 ] This effort, in what Omar Yaghi described as reticular chemistry , is proceeding on several fronts, from the theoretical [ 16 ] to the synthesis of highly porous crystals. [ 17 ]
One of the primary issues in annealing crystals is controlling the constituents, which can be difficult if the constituents are individual atoms, e.g., in zeolites , which are typically porous crystals primarily of silicon and oxygen and occasional impurities. Synthesis of a specific zeolite de novo from a novel crystal net design remains one of the major goals of contemporary research. There are similar efforts in sulfides and phosphates . [ citation needed ]
Control is more tractable if the constituents are molecular building blocks, i.e., stable molecules that can be readily induced to assemble in accordance with geometric restrictions. [ citation needed ] Typically, while there may be many species of constituents, there are two main classes: somewhat compact and often polyhedral secondary building units (SBUs), and linking or bridging building units. A popular class of examples are the Metal-Organic Frameworks (MOFs), in which (classically) the secondary building units are metal ions or clusters of ions and the linking building units are organic ligands . These SBUs and ligands are relatively controllable, and some new crystals have been synthesized using designs of novel nets. [ 18 ] An organic variant are the Covalent Organic Frameworks (COFs), in which the SBUs might (but not necessarily) be themselves organic. [ citation needed ] The greater control over the SBUs and ligands can be seen in the fact that while no novel zeolites have been synthesized per design, several MOFs have been synthesized from crystal nets designed for zeolite synthesis, such as Zeolite-like Metal-Organic Frameworks (Z-MOFs) [ citation needed ] and zeolitic imidazolate framework (ZIFs). | https://en.wikipedia.org/wiki/Periodic_graph_(crystallography) |
Periodic lateralized epileptiform discharges are a type of EEG abnormality. They are one of the most frequent paroxystic complexes. [ 1 ] They are basically triphasic, with a sharply contoured wave followed by a slow wave, mostly occur unilaterally, have a duration of 100–300 msec and an amplitude of 100–300, and are often accompanied by a fast rhythm between discharges. In recent literature they are referred to as lateralized periodic discharges . [ 2 ]
This article about a medical condition affecting the nervous system is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Periodic_lateralized_epileptiform_discharges |
Periodic short-interval diffuse discharges are a type of EEG abnormality with periodicity less than 4.0 seconds. [ 1 ] They can consist of sharp waves or spikes, spike and wave, polyspikes or triphasics with background attenuation in between transients. [ 1 ]
This article about a medical condition affecting the nervous system is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Periodic_short-interval_diffuse_discharges |
Periodic systems of molecules are charts of molecules similar to the periodic table of the elements. Construction of such charts was initiated in the early 20th century and is still ongoing.
It is commonly believed that the periodic law , represented by the periodic chart, is echoed in the behavior of molecules , at least small molecules. For instance, if one replaces any one of the atoms in a triatomic molecule with a rare gas atom, there will be a drastic change in the molecule’s properties. Several goals could be accomplished by constructing an explicit representation of this periodic law as manifested in molecules: (1) a classification scheme for the vast number of molecules that exist, starting with small ones having just a few atoms, for use as a teaching aid and tool for archiving data, (2) forecasting data for molecular properties based on the classification scheme, and (3) a sort of unity with the periodic chart and the periodic system of fundamental particles . [ 1 ]
Periodic systems (or charts or tables) of molecules are the subjects of two reviews. [ 2 ] [ 3 ] The systems of diatomic molecules include those of (1) H. D. W. Clark, [ 4 ] [ 5 ] and (2) F.-A. Kong, [ 6 ] [ 7 ] which somewhat resemble the atomic chart. The system of R. Hefferlin et al. [ 8 ] [ 9 ] was developed from (3) a three-dimensional system to (4) a four-dimensional system, a Kronecker product of the element chart with itself.
A totally different kind of periodic system is (5) that of G. V. Zhuvikin, [ 11 ] [ 12 ] which is based on group dynamics . In all but the first of these cases, other researchers provided invaluable contributions and some of them are co-authors. The architectures of these systems have been adjusted by Kong [ 7 ] and Hefferlin [ 13 ] to include ionized species, and expanded by Kong, [ 7 ] Hefferlin, [ 9 ] and Zhuvikin and Hefferlin [ 12 ] to the space of triatomic molecules. These architectures are mathematically related to the chart of the elements. They were first called “physical” periodic systems. [ 2 ]
Other investigators have focused on building structures that address specific kinds of molecules such as alkanes (Morozov); [ 14 ] benzenoids (Dias); [ 15 ] [ 16 ] functional groups containing fluorine , oxygen , nitrogen and sulfur (Haas); [ 17 ] [ 18 ] or a combination of core charge , number of shells, redox potentials, and acid-base tendencies (Gorski). [ 19 ] [ 20 ] These structures are not restricted to molecules with a given number of atoms and they bear little resemblance to the element chart; they are called “chemical” systems. Chemical systems do not start with the element chart, but instead start with, for example, formula enumerations (Dias), Grimm's hydride displacement law (Haas), reduced potential curves (Jenz), [ 21 ] a set of molecular descriptors (Gorski), and similar strategies.
E. V. Babaev [ 22 ] has erected a hyperperiodic system which in principle includes all of the systems described above except those of Dias, Gorski, and Jenz.
The periodic chart of the elements, like a small stool, is supported by three legs: (a) the Bohr – Sommerfeld “ solar system ” atomic model (with electron spin and the Madelung principle ), which provides the magic-number elements that end each row of the table and gives the number of elements in each row, (b) solutions to the Schrödinger equation , which provide the same information, and (c) data provided by experiment, by the solar system model, and by solutions to the Schrödinger equation. The Bohr–Sommerfeld model should not be ignored: it gave explanations for the wealth of spectroscopic data that were already in existence before the advent of wave mechanics.
Each of the molecular systems listed above, and those not cited, is also supported by three legs: (a) physical and chemical data arranged in graphical or tabular patterns (which, for physical periodic systems at least, echo the appearance of the element chart), (b) group dynamic, valence-bond, molecular-orbital, and other fundamental theories, and (c) summing of atomic period and group numbers (Kong), the Kronecker product and exploitation of higher dimensions (Hefferlin), formula enumerations (Dias), the hydrogen-displacement principle (Haas), reduced potential curves (Jenz), and similar strategies.
A chronological list of the contributions to this field [ 3 ] contains almost thirty entries dated 1862, 1907, 1929, 1935, and 1936; then, after a pause, a higher level of activity beginning with the 100th anniversary of Mendeleev’s publication of his element chart, 1969. Many publications on periodic systems of molecules include some predictions of molecular properties, but starting at the turn of the century there have been serious attempts to use periodic systems for the prediction of progressively more precise data for various numbers of molecules. Among these attempts are those of Kong [ 7 ] and Hefferlin. [ 23 ] [ 24 ]
The collapsed-coordinate system has three independent variables instead of the six demanded by the Kronecker-product system. The reduction of independent variables makes use of three properties of gas-phase, ground-state, triatomic molecules. (1) In general, whatever the total number of constituent atomic valence electrons, data for isoelectronic molecules tend to be more similar than for adjacent molecules that have more or fewer valence electrons; for triatomic molecules, the electron count is the sum of the atomic group numbers (the sum of the column numbers 1 to 8 in the p-block of the periodic chart of the elements, C1+C2+C3). (2) Linear/bent triatomic molecules appear to be slightly more stable, other parameters being equal, if carbon is the central atom. (3) Most physical properties of diatomic molecules (especially spectroscopic constants) are closely monotonic with respect to the product of the two atomic period (or row) numbers , R1 and R2; for triatomic molecules, the monotonicity is close with respect to R1R2+R2R3 (which reduces to R1R2 for diatomic molecules). Therefore, the coordinates x, y, and z of the collapsed-coordinate system are C1+C2+C3, C2, and R1R2+R2R3. Multiple-regression predictions of four property values for molecules with tabulated data agree very well with the tabulated data (the error measures of the predictions include the tabulated data in all but a few cases). [ 25 ] | https://en.wikipedia.org/wiki/Periodic_systems_of_small_molecules |
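The collapsed coordinates defined above can be computed directly. The sketch below implements x = C1+C2+C3, y = C2, z = R1R2+R2R3 as stated in the text; the CO2 group and period numbers used as a check are illustrative assignments of my own (group numbers 1 to 8 in the p-block convention of the text).

```python
# Collapsed coordinates (x, y, z) of a gas-phase triatomic molecule, as
# defined in the text: x = C1+C2+C3, y = C2, z = R1*R2 + R2*R3, where
# Ci and Ri are the group (column) and period (row) numbers of the three
# atoms in order.  The CO2 values below are an illustrative check.

def collapsed_coords(atoms):
    """atoms = [(C1, R1), (C2, R2), (C3, R3)] with atom 2 the central atom."""
    (c1, r1), (c2, r2), (c3, r3) = atoms
    return (c1 + c2 + c3, c2, r1 * r2 + r2 * r3)

# O=C=O: oxygen in group 6, period 2; carbon in group 4, period 2
# (group numbers counted 1-8 as in the text).
print(collapsed_coords([(6, 2), (4, 2), (6, 2)]))  # (16, 4, 8)
```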
This article gives the crystalline structures of the elements of the periodic table which have been produced in bulk at STP and at their melting point (while still solid ), and predictions of the crystalline structures of the rest of the elements.
The following table gives the crystalline structure of the most thermodynamically stable form(s) for elements that are solid at standard temperature and pressure . Each element is shaded by a color representing its respective Bravais lattice , except that all orthorhombic lattices are grouped together.
The following table gives the most stable crystalline structure of each element at its melting point at atmospheric pressure (H, He, N, O, F, Ne, Cl, Ar, Kr, Xe, and Rn are gases at STP; Br and Hg are liquids at STP.) Note that helium does not have a melting point at atmospheric pressure, but it adopts a magnesium-type hexagonal close-packed structure under high pressure.
The following table gives predictions for the crystalline structure of elements 85–87, 100–113 and 118; of these, only radon [ 2 ] has been produced in bulk. Most probably Cn and Fl would be liquids at STP (ignoring radioactive self-heating concerns). Calculations have difficulty replicating the experimentally known structures of the stable alkali metals, and the same problem affects Fr ; [ 3 ] nonetheless, it is probably isostructural to its lighter congeners. [ 4 ] The latest predictions for Fl could not distinguish between FCC and HCP structures, which were predicted to be close in energy. [ 5 ] No predictions are available for elements 115–117.
The following is a list of structure types which appear in the tables above. Regarding the number of atoms in the unit cell, structures in the rhombohedral lattice system have a rhombohedral primitive cell and have trigonal point symmetry, but are often also described in terms of an equivalent but nonprimitive hexagonal unit cell with three times the volume and three times the number of atoms.
The observed crystal structures of many metals can be described as a nearly mathematical close-packing of equal spheres . A simple model for these structures is to assume that the metal atoms are spherical and are packed together as closely as possible. In closest packing, every atom has 12 equidistant nearest neighbours, and therefore a coordination number of 12. If the close-packed structures are considered as being built of layers of spheres, then the difference between hexagonal close packing and face-centred cubic is how each layer is positioned relative to the others. The following types can be viewed as a regular buildup of close-packed layers:
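The coordination number of 12 can be verified numerically. The sketch below generates face-centred cubic sites as the integer points with even coordinate sum (one standard description of FCC, in units where the nearest-neighbour distance is the square root of 2) and counts the equidistant nearest neighbours of the origin.

```python
# Counting nearest neighbours in a face-centred cubic packing.  FCC
# sites are taken as the integer points (i, j, k) with i+j+k even; in
# these units the nearest-neighbour distance is sqrt(2), and every atom
# should have the coordination number 12 mentioned in the text.

from math import isclose, sqrt

n = 3
sites = [(i, j, k)
         for i in range(-n, n + 1)
         for j in range(-n, n + 1)
         for k in range(-n, n + 1)
         if (i + j + k) % 2 == 0]

origin = (0, 0, 0)
dists = sorted(sqrt(x * x + y * y + z * z)
               for (x, y, z) in sites if (x, y, z) != origin)
nearest = dists[0]
coordination = sum(1 for d in dists if isclose(d, nearest))
print(coordination)  # 12
```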
Precisely speaking, the structures of many of the elements in the groups above are slightly distorted from the ideal closest packing. While they retain the lattice symmetry of the ideal structure, they often have nonideal c/a ratios for their unit cell. Less precisely speaking, there are also other elements that are nearly close-packed but have distortions that break at least one symmetry with respect to the close-packed structure: | https://en.wikipedia.org/wiki/Periodic_table_(crystal_structure)
See list of sources at Electron configurations of the elements (data page) . | https://en.wikipedia.org/wiki/Periodic_table_(electron_configurations) |
The periodic table of topological insulators and topological superconductors , also called tenfold classification of topological insulators and superconductors , is an application of topology to condensed matter physics . It indicates the mathematical group for the topological invariant of the topological insulators and topological superconductors , given a dimension and discrete symmetry class. [ 1 ] The ten possible discrete symmetry families are classified according to three main symmetries: particle-hole symmetry , time-reversal symmetry and chiral symmetry . The table was developed between 2008–2010 [ 1 ] by the collaboration of Andreas P. Schnyder, Shinsei Ryu , Akira Furusaki and Andreas W. W. Ludwig; [ 2 ] [ 3 ] and independently by Alexei Kitaev . [ 4 ]
This table applies to topological insulators and topological superconductors with an energy gap, when particle-particle interactions are excluded. The table is no longer valid when interactions are included. [ 1 ]
The topological insulators and superconductors are classified here in ten symmetry classes (A, AIII, AI, BDI, D, DIII, AII, CII, C, CI) named after the Altland–Zirnbauer classification, defined here by the properties of the system with respect to three operators: the time-reversal operator T {\displaystyle T} , charge conjugation C {\displaystyle C} and chiral symmetry S {\displaystyle S} . The symmetry classes are ordered according to the Bott clock (see below) so that the same values repeat in the diagonals. [ 5 ]
An X in the table of "Symmetries" indicates that the corresponding symmetry is broken for Hamiltonians in that class. A value of ±1 indicates the value of the operator squared for that system. [ 5 ]
The dimension indicates the dimensionality of the system: 1D (chain), 2D (plane) and 3D lattices. It can be extended to any positive integer dimension. Below, four possible group values can be tabulated for a given class and dimension: [ 5 ]
The non-chiral Su–Schrieffer–Heeger model ( d = 1 {\displaystyle d=1} ), can be associated with symmetry class BDI with an integer Z {\displaystyle \mathbb {Z} } topological invariant due to gauge invariance . [ 6 ] [ 7 ] The problem is similar to the integer quantum Hall effect and the quantum anomalous Hall effect (both in d = 2 {\displaystyle d=2} ) which are A class, with integer Z {\displaystyle \mathbb {Z} } Chern number . [ 8 ]
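The integer invariant of the SSH chain can be checked numerically. The sketch below is a standard textbook computation, not taken from the cited references: it accumulates the phase of the off-diagonal Bloch component h(k) = v + w·e^{ik} around the Brillouin zone, where v and w are the intra- and inter-cell hoppings (values chosen arbitrarily for illustration).

```python
# Numerical winding number for the SSH chain: with intra-cell hopping v
# and inter-cell hopping w, the off-diagonal Bloch component
# h(k) = v + w*exp(ik) winds once around the origin when w > v
# (topological) and zero times when w < v (trivial).  The hopping
# values below are arbitrary illustrations.

import cmath
from math import pi

def winding(v, w, steps=2000):
    total = 0.0
    for n in range(steps):
        k0 = 2 * pi * n / steps
        k1 = 2 * pi * (n + 1) / steps
        h0 = v + w * cmath.exp(1j * k0)
        h1 = v + w * cmath.exp(1j * k1)
        total += cmath.phase(h1 / h0)   # phase increment along the loop
    return round(total / (2 * pi))

print(winding(1.0, 2.0))  # 1  (w > v: topological)
print(winding(2.0, 1.0))  # 0  (w < v: trivial)
```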
Contrarily, the Kitaev chain ( d = 1 {\displaystyle d=1} ), is an example of symmetry class D, with a Z 2 {\displaystyle \mathbb {Z} _{2}} binary topological invariant. [ 7 ] Similarly, the p x + i p y {\displaystyle p_{x}+ip_{y}} superconductors ( d = 2 {\displaystyle d=2} ) are also in class D, but with a Z {\displaystyle \mathbb {Z} } topological invariant. [ 7 ]
The quantum spin Hall effect ( d = 2 {\displaystyle d=2} ) described by Kane–Mele model is an example of AII class, with a Z 2 {\displaystyle \mathbb {Z} _{2}} topological invariant. [ 9 ]
There are ten discrete symmetry classes of topological insulators and superconductors, corresponding to the ten Altland–Zirnbauer classes of random matrices . They are defined by three symmetries of the Hamiltonian H ^ = ∑ i , j H i j c i † c j {\displaystyle {\hat {H}}=\sum _{i,j}H_{ij}c_{i}^{\dagger }c_{j}} , (where c i {\displaystyle c_{i}} , and c i † {\displaystyle c_{i}^{\dagger }} , are the annihilation and creation operators of mode i {\displaystyle i} , in some arbitrary spatial basis) : time-reversal symmetry , particle-hole (or charge conjugation ) symmetry, and chiral (or sublattice) symmetry.
In the Bloch Hamiltonian formalism for crystal structures , where the Hamiltonian H ( k ) {\displaystyle H(k)} acts on modes of crystal momentum k {\displaystyle k} , the chiral symmetry, TRS, and PHS conditions become
It is evident that if two of these three symmetries are present, then the third is also present, due to the relation S = T C {\displaystyle S=TC} .
The aforementioned discrete symmetries label 10 distinct discrete symmetry classes, which coincide with the Altland–Zirnbauer classes of random matrices.
A bulk Hamiltonian in a particular symmetry group is restricted to be a Hermitian matrix with no zero-energy eigenvalues (i.e. so that the spectrum is "gapped" and the system is a bulk insulator) satisfying the symmetry constraints of the group. In the case of d > 0 {\displaystyle d>0} dimensions, this Hamiltonian is a continuous function H ( k ) {\displaystyle H(k)} of the d {\displaystyle d} parameters in the Bloch momentum vector k → {\displaystyle {\vec {k}}} in the Brillouin zone ; then the symmetry constraints must hold for all k → {\displaystyle {\vec {k}}} .
Given two Hamiltonians H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} , it may be possible to continuously deform H 1 {\displaystyle H_{1}} into H 2 {\displaystyle H_{2}} while maintaining the symmetry constraint and gap (that is, there exists continuous function H ( t , k → ) {\displaystyle H(t,{\vec {k}})} such that for all 0 ≤ t ≤ 1 {\displaystyle 0\leq t\leq 1} the Hamiltonian has no zero eigenvalue and symmetry condition is maintained, and H ( 0 , k → ) = H 1 ( k → ) {\displaystyle H(0,{\vec {k}})=H_{1}({\vec {k}})} and H ( 1 , k → ) = H 2 ( k → ) {\displaystyle H(1,{\vec {k}})=H_{2}({\vec {k}})} ). Then we say that H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} are equivalent.
However, it may also turn out that there is no such continuous deformation. In this case, physically, if two materials with bulk Hamiltonians H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} , respectively, neighbor each other with an edge between them, then when one continuously moves across the edge one must encounter a zero eigenvalue (as there is no continuous transformation that avoids this). This may manifest as a gapless zero-energy edge mode or an electric current that only flows along the edge.
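A minimal sketch of this obstruction uses two-band Hamiltonians of the assumed form H = d·σ (eigenvalues ±|d|): the code checks whether the gap survives along one particular straight-line deformation. A closed gap on a single path does not by itself prove inequivalence, so this is only an illustration of the phenomenon; the d-vectors are invented for the example.

```python
# Checking whether two gapped 2x2 Hamiltonians H = d·σ can be joined by
# one straight-line path without closing the gap.  The d-vectors are
# illustrative; a gap closing on this path suggests (but does not by
# itself prove) that the endpoints are topologically inequivalent.

from math import sqrt

def gap(d):
    # H = d·σ has eigenvalues ±|d|, so the spectral gap is 2|d|.
    return 2 * sqrt(d[0]**2 + d[1]**2 + d[2]**2)

def min_gap_on_path(d1, d2, steps=1000):
    best = float("inf")
    for n in range(steps + 1):
        t = n / steps
        d = tuple((1 - t) * a + t * b for a, b in zip(d1, d2))
        best = min(best, gap(d))
    return best

print(min_gap_on_path((0, 0, 1), (0, 0, -1)))      # 0.0: gap closes at t = 1/2
print(min_gap_on_path((0, 0, 1), (1, 0, 1)) > 0)   # True: gap stays open
```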
An interesting question to ask is, given a symmetry class and a dimension of the Brillouin zone, what are all the equivalence classes of Hamiltonians. Each equivalence class can be labeled by a topological invariant; two Hamiltonians whose topological invariants are different cannot be deformed into each other and belong to different equivalence classes.
For each of the symmetry classes, the question can be simplified by deforming the Hamiltonian into a "projective" Hamiltonian, and considering the symmetric space in which such Hamiltonians live. These classifying spaces are shown for each symmetry class: [ 4 ]
For example, a (real symmetric) Hamiltonian in symmetry class AI can have its n {\displaystyle n} positive eigenvalues deformed to +1 and its N − n {\displaystyle N-n} negative eigenvalues deformed to -1; the resulting such matrices are described by the union of real Grassmannians ⋃ n = 0 ∞ G r ( n , N ) = ⋃ n = 0 ∞ O ( N ) / O ( n ) × O ( N − n ) {\displaystyle \bigcup _{n=0}^{\infty }\mathrm {Gr} (n,N)=\bigcup _{n=0}^{\infty }\mathrm {O} (N)/\mathrm {\mathrm {O} } (n)\times \mathrm {\mathrm {O} } (N-n)}
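The flattening of eigenvalues to ±1 can be shown concretely for a gapped two-band Hamiltonian H = d·σ, where flattening amounts to replacing d by the unit vector d/|d|. This 2×2 example is my own illustration, simpler than the class-AI Grassmannian discussion above; the d-vector is arbitrary.

```python
# Spectral flattening of a gapped 2x2 Hamiltonian H = d·σ: deforming all
# positive eigenvalues to +1 and all negative ones to -1 replaces d by
# the unit vector d/|d|, so the flattened Q = (d/|d|)·σ satisfies Q² = I.
# The d-vector below is an arbitrary illustration.

from math import sqrt

def flatten(d):
    norm = sqrt(sum(x * x for x in d))
    return tuple(x / norm for x in d)

def matrix(d):
    # d·σ as a 2x2 complex matrix (σx, σy, σz the Pauli matrices).
    dx, dy, dz = d
    return [[dz, dx - 1j * dy], [dx + 1j * dy, -dz]]

q = matrix(flatten((3.0, 0.0, 4.0)))
# Q² should be the identity:
q2 = [[sum(q[i][k] * q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print([[round(x.real, 9) for x in row] for row in q2])  # [[1.0, 0.0], [0.0, 1.0]]
```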
The strong topological invariants of a many-band system in d {\displaystyle d} dimensions can be labeled by the elements of the d {\displaystyle d} -th homotopy group of the symmetric space. These groups are displayed in this table, called the periodic table of topological insulators:
There may also exist weak topological invariants (associated to the fact that the suspension of the Brillouin zone is in fact equivalent to a d + 1 {\displaystyle d+1} sphere wedged with lower-dimensional spheres), which are not included in this table. Furthermore, the table assumes the limit of an infinite number of bands, i.e. involves N × N {\displaystyle N\times N} Hamiltonians for N → ∞ {\displaystyle N\to \infty } .
The table also is periodic in the sense that the group of invariants in d {\displaystyle d} dimensions is the same as the group of invariants in d + 8 {\displaystyle d+8} dimensions. In the case of no anti-unitary symmetries, the invariant groups are periodic in dimension by 2.
For nontrivial symmetry classes, the actual invariant can be defined by one of the following integrals over all or part of the Brillouin zone: the Chern number , the Wess-Zumino winding number , the Chern–Simons invariant , the Fu–Kane invariant.
The periodic table also displays a peculiar property: the invariant groups in d {\displaystyle d} dimensions are identical to those in d − 1 {\displaystyle d-1} dimensions but in a different symmetry class. Among the complex symmetry classes, the invariant group for A in d {\displaystyle d} dimensions is the same as that for AIII in d − 1 {\displaystyle d-1} dimensions, and vice versa. One can also imagine arranging each of the eight real symmetry classes on the Cartesian plane such that the x {\displaystyle x} coordinate is T 2 {\displaystyle T^{2}} if time reversal symmetry is present and 0 {\displaystyle 0} if it is absent, and the y {\displaystyle y} coordinate is C 2 {\displaystyle C^{2}} if particle hole symmetry is present and 0 {\displaystyle 0} if it is absent. Then the invariant group in d {\displaystyle d} dimensions for a certain real symmetry class is the same as the invariant group in d − 1 {\displaystyle d-1} dimensions for the symmetry class directly one space clockwise. This phenomenon was termed the Bott clock by Alexei Kitaev , in reference to the Bott periodicity theorem . [ 1 ] [ 10 ]
The Bott clock can be understood by considering the problem of Clifford algebra extensions. [ 1 ] Near an interface between two inequivalent bulk materials, the Hamiltonian approaches a gap closing. To lowest order expansion in momentum slightly away from the gap closing, the Hamiltonian takes the form of a Dirac Hamiltonian H Dirac ( k → ) = ∑ j = 1 d Γ j v j k j + m Γ 0 {\displaystyle H_{\text{Dirac}}({\vec {k}})=\sum _{j=1}^{d}\Gamma _{j}v_{j}k_{j}+m\Gamma _{0}} . Here, Γ 1 , Γ 2 , … , Γ d {\displaystyle \Gamma _{1},\Gamma _{2},\ldots ,\Gamma _{d}} are a representation of the Clifford Algebra { Γ i , Γ j } = 2 δ i j {\displaystyle \lbrace \Gamma _{i},\Gamma _{j}\rbrace =2\delta _{ij}} , while m Γ 0 {\displaystyle m\Gamma _{0}} is an added "mass term" that anticommutes with the rest of the Hamiltonian and vanishes at the interface (thus giving the interface a gapless edge mode at k = 0 {\displaystyle k=0} ). The m Γ 0 {\displaystyle m\Gamma _{0}} term for the Hamiltonian on one side of the interface cannot be continuously deformed into the m Γ 0 {\displaystyle m\Gamma _{0}} term for the Hamiltonian on the other side of the interface. Thus (letting m {\displaystyle m} be an arbitrary positive scalar) the problem of classifying topological invariants reduces to the problem of classifying all possible inequivalent choices of Γ 0 {\displaystyle \Gamma _{0}} to extend the Clifford algebra to one higher dimension, while maintaining the symmetry constraints. | https://en.wikipedia.org/wiki/Periodic_table_of_topological_insulators_and_topological_superconductors
In chemistry , periodic trends are specific patterns present in the periodic table that illustrate different aspects of certain elements when grouped by period and/or group . They were discovered by the Russian chemist Dmitri Mendeleev in 1869. Major periodic trends include atomic radius , ionization energy , electron affinity , electronegativity , nucleophilicity , electrophilicity , valency , nuclear charge , and metallic character . [ 1 ] Mendeleev built the foundation of the periodic table. [ 2 ] Mendeleev organized the elements based on atomic weight, leaving empty spaces where he believed undiscovered elements would take their places. [ 3 ] Mendeleev's discovery of this trend allowed him to predict the existence and properties of three unknown elements, which were later discovered by other chemists and named gallium , scandium , and germanium . [ 4 ] English physicist Henry Moseley discovered that organizing the elements by atomic number instead of atomic weight would naturally group elements with similar properties. [ 3 ]
The atomic radius is the distance from the atomic nucleus to the outermost electron orbital in an atom . In general, the atomic radius decreases as we move from left-to-right in a period , and it increases when we go down a group . This is because in periods, the valence electrons are in the same outermost shell . The atomic number increases within the same period while moving from left to right, which in turn increases the effective nuclear charge . The increase in attractive forces reduces the atomic radius of elements . When we move down the group, the atomic radius increases due to the addition of a new shell. [ 5 ] [ 6 ] [ 7 ]
Nuclear charge is defined as the number of protons in the nucleus of an element . Thus, from left-to-right of a period and top-to-bottom of a group , as the number of protons in the nucleus increases, the nuclear charge will also increase. [ 8 ] However, electrons of multi-electron atoms do not experience the entire nuclear charge due to shielding effects from the other electrons. In this case, the nuclear charge of atoms that experience this shielding is referred to as effective nuclear charge . Shielding increases as the number of an atom's inner shells increases. So from left-to-right of a period, the effective nuclear charge will still increase. But, from top-to-bottom of a group, as the number of shells increases, the effective nuclear charge will decrease. [ 9 ]
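One common way to estimate effective nuclear charge is Slater's empirical screening rules, which are not described in the text but make the shielding idea concrete. The simplified version below handles only a valence s electron, and the sodium configuration is a standard worked example.

```python
# Effective nuclear charge via a simplified form of Slater's screening
# rules (an empirical scheme, used here only to illustrate shielding).
# For a valence s electron: electrons in the same shell screen 0.35
# each, the (n-1) shell screens 0.85 each, and deeper shells 1.00 each.

def z_eff_valence_s(z, shells):
    """shells: electrons per shell, innermost first; last entry holds the valence electron."""
    *inner, valence = shells
    screen = 0.35 * (valence - 1)
    if inner:
        screen += 0.85 * inner[-1]          # (n-1) shell
        screen += 1.00 * sum(inner[:-1])    # deeper shells
    return z - screen

# Sodium: Z = 11, shells 1s² | 2s²2p⁶ | 3s¹.
print(round(z_eff_valence_s(11, [2, 8, 1]), 2))  # 2.2
```

The valence electron of sodium thus feels an effective charge of about 2.2 rather than the full nuclear charge of 11, which is the quantitative content of the shielding effect described above.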
The ionization energy is the minimum amount of energy that an electron in a gaseous atom or ion has to absorb to come out of the influence of the attracting force of the nucleus . It is also referred to as ionization potential. The first ionization energy is the amount of energy that is required to remove the first electron from a neutral atom . The energy needed to remove the second electron from the neutral atom is called the second ionization energy and so on. [ 10 ] [ 11 ]
As one moves from left-to-right across a period in the modern periodic table , the ionization energy increases as the nuclear charge increases and the atomic size decreases. The decrease in the atomic size results in a more potent force of attraction between the electrons and the nucleus. However, suppose one moves down in a group . In that case, the ionization energy decreases as atomic size increases due to adding a valence shell , thereby diminishing the nucleus's attraction to electrons. [ 12 ] [ 13 ]
The energy released when an electron is added to a neutral gaseous atom to form an anion is known as electron affinity. [ 14 ] Trend-wise, as one progresses from left to right across a period , the electron affinity increases as the nuclear charge increases and the atomic size decreases, resulting in a stronger force of attraction between the nucleus and the added electron. However, as one moves down a group , electron affinity decreases because atomic size increases due to the addition of a valence shell , thereby weakening the nucleus's attraction to electrons. Although it may seem that fluorine should have the greatest electron affinity, its small size generates enough repulsion among the electrons that chlorine has the highest electron affinity in the halogen family . [ 15 ]
The tendency of an atom in a molecule to attract the shared pair of electrons towards itself is known as electronegativity. It is a dimensionless quantity because it is only a tendency. [ 16 ] The most commonly used scale to measure electronegativity was designed by Linus Pauling . The scale has been named the Pauling scale in his honour. According to this scale, fluorine is the most electronegative element, while cesium is the least electronegative element . [ 17 ]
Trend-wise, as one moves from left to right across a period in the modern periodic table , the electronegativity increases as the nuclear charge increases and the atomic size decreases. However, if one moves down in a group , the electronegativity decreases as atomic size increases due to the addition of a valence shell , thereby decreasing the atom's attraction to electrons. [ 18 ]
However, in group XIII ( boron family ), the electronegativity first decreases from boron to aluminium and then increases down the group. This is because the atomic size increases down the group, but at the same time the effective nuclear charge increases due to poor shielding by the inner d and f electrons. As a result, the force of attraction of the nucleus for the electrons increases, and hence the electronegativity increases from aluminium to thallium . [ 19 ] [ 20 ]
The valency of an element is the number of electrons that must be lost or gained by an atom to obtain a stable electron configuration . In simple terms, it is the measure of the combining capacity of an element to form chemical compounds . Electrons found in the outermost shell are generally known as valence electrons ; the number of valence electrons determines the valency of an atom. [ 21 ] [ 22 ]
Trend-wise, while moving from left to right across a period , the number of valence electrons of elements increases and varies between one and eight. But the valency of elements first increases from 1 to 4, and then it decreases to 0 as we reach the noble gases . However, as we move down in a group , the number of valence electrons generally does not change. Hence, in many cases the elements of a particular group have the same valency. However, this periodic trend is not always followed for heavier elements, especially for the f-block and the transition metals . These elements show variable valency as these elements have a d-orbital as the penultimate orbital and an s-orbital as the outermost orbital. The energies of these (n-1)d and ns orbitals (e.g., 4d and 5s) are relatively close. [ 23 ] [ 24 ] [ 25 ]
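The "rises from 1 to 4, then falls to 0" pattern for main-group valency can be written compactly as min(v, 8 − v), where v is the number of valence electrons (a simplification that, as noted above, ignores the variable valency of transition and f-block metals):

```python
# Common valency of a main-group element from its number of valence
# electrons: valency climbs 1..4 across a period, then falls back to 0
# at the noble gases. (Ignores variable-valency transition/f-block
# metals, as discussed in the text.)

def common_valency(valence_electrons):
    return min(valence_electrons, 8 - valence_electrons)

# Period 3, Na through Ar (1..8 valence electrons):
print([common_valency(v) for v in range(1, 9)])  # [1, 2, 3, 4, 3, 2, 1, 0]
```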
Metallic properties generally increase down the groups , as decreasing attraction between the nuclei and outermost electrons causes these electrons to be more loosely bound and thus able to conduct heat and electricity . Across each period , from left to right, the increasing attraction between the nuclei and the outermost electrons causes the metallic character to decrease. In contrast, the nonmetallic character decreases down the groups and increases across the periods. [ 26 ] [ 27 ]
Electrophilicity refers to the tendency of an electron -deficient species, called an electrophile, to accept electrons. [ 28 ] Similarly, nucleophilicity is defined as the affinity of an electron-rich species, known as a nucleophile, to donate electrons to another species. [ 29 ] Trends in the periodic table are useful for predicting an element's nucleophilicity and electrophilicity. In general, nucleophilicity decreases as electronegativity increases, meaning that nucleophilicity decreases from left to right across the periodic table. On the other hand, electrophilicity generally increases as electronegativity increases, meaning that electrophilicity follows an increasing trend from left to right on the periodic table. [ 28 ] However, the specific molecular or chemical environment of the electrophile also influences electrophilicity. Therefore, electrophilicity cannot be accurately predicted based solely on periodic trends.
Eclipses may occur repeatedly, separated by certain intervals of time: these intervals are called eclipse cycles . [ 1 ] The series of eclipses separated by a repeat of one of these intervals is called an eclipse series .
Eclipses may occur when Earth and the Moon are aligned with the Sun , and the shadow of one body projected by the Sun falls on the other. So at new moon , when the Moon is in conjunction with the Sun, the Moon may pass in front of the Sun as viewed from a narrow region on the surface of Earth and cause a solar eclipse . At full moon , when the Moon is in opposition to the Sun, the Moon may pass through the shadow of Earth, and a lunar eclipse is visible from the night half of Earth. The conjunction and opposition of the Moon together have a special name: syzygy ( Greek for "junction"), because of the importance of these lunar phases .
An eclipse does not occur at every new or full moon, because the plane of the Moon's orbit around Earth is tilted with respect to the plane of Earth's orbit around the Sun (the ecliptic ): so as viewed from Earth, when the Moon appears nearest the Sun (at new moon) or furthest from it (at full moon), the three bodies are usually not exactly on the same line.
This inclination is on average about 5° 9′, much larger than the apparent mean diameter of the Sun (32′ 2″), the Moon as viewed from Earth's surface directly below the Moon (31′ 37″), and Earth's shadow at the mean lunar distance (1° 23′).
Therefore, at most new moons, Earth passes too far north or south of the lunar shadow, and at most full moons, the Moon misses Earth's shadow. Also, at most solar eclipses, the apparent angular diameter of the Moon is insufficient to fully occlude the solar disc, unless the Moon is around its perigee , i.e. nearer Earth and apparently larger than average. In any case, the alignment must be almost perfect to cause an eclipse.
An eclipse can occur only when the Moon is on or near the plane of Earth's orbit, i.e. when its ecliptic latitude is low. This happens when the Moon is around either of the two orbital nodes on the ecliptic at the time of the syzygy . Of course, to produce an eclipse, the Sun must also be around a node at that time – the same node for a solar eclipse or the opposite node for a lunar eclipse.
Up to three eclipses may occur during an eclipse season , a one- or two-month period that happens twice a year, around the time when the Sun is near the nodes of the Moon's orbit.
An eclipse does not occur every month, because one month after an eclipse the relative geometry of the Sun, Moon, and Earth has changed.
As seen from the Earth, the time it takes for the Moon to return to a node, the draconic month , is less than the time it takes for the Moon to return to the same ecliptic longitude as the Sun: the synodic month . The main reason is that while the Moon completes one orbit around the Earth, the Earth (and Moon) complete about 1⁄13 of their orbit around the Sun: the Moon has to make up for this in order to come again into conjunction or opposition with the Sun. Secondly, the orbital nodes of the Moon precess westward in ecliptic longitude, completing a full circle in about 18.60 years, so a draconic month is shorter than a sidereal month . In all, the difference in period between the synodic and draconic month is nearly 2 1⁄3 days. Likewise, as seen from the Earth, the Sun passes both nodes as it moves along its ecliptic path. The period for the Sun to return to a node is called the eclipse or draconic year : about 346.6201 days, which is about 1⁄20 year shorter than a sidereal year because of the precession of the nodes.
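The two figures quoted above are easy to check numerically (the synodic and draconic month lengths are the ones used later in the article; the sidereal year value is an assumed standard figure):

```python
# Numerical check of the "2 1/3 days" and "1/20 year" figures above.
synodic  = 29.530588853   # synodic month, days (new moon to new moon)
draconic = 27.212220817   # draconic month, days (node to node)
sidereal_year = 365.256363   # assumed standard value, days
eclipse_year  = 346.6201     # draconic year as quoted in the text

diff = synodic - draconic
print(diff)                          # ~2.318 days, i.e. nearly 2 1/3 days
print(sidereal_year - eclipse_year)  # ~18.64 days, about 1/20 of a year
```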
If a solar eclipse occurs at one new moon, which must be close to a node, then at the next full moon the Moon is already more than a day past its opposite node, and may or may not miss the Earth's shadow. By the next new moon it is even further ahead of the node, so it is less likely that there will be a solar eclipse somewhere on Earth. By the next month, there will certainly be no event.
However, about 5 or 6 lunations later the new moon will fall close to the opposite node. In that time (half an eclipse year) the Sun will have moved to the opposite node too, so the circumstances will again be suitable for one or more eclipses.
The periodicity of solar eclipses is the interval between any two successive solar eclipses, which is either 1, 5, or 6 synodic months . [ 2 ] It is calculated that the Earth will experience a total of 11,898 solar eclipses between 2000 BCE and 3000 CE. A particular solar eclipse repeats approximately every 18 years, 11 days and 8 hours (6,585.32 days), but not in the same geographical region. [ 3 ] A particular geographical region experiences the same solar eclipse about every 54 years and 34 days. [ 2 ] Total solar eclipses are rare events, although they occur somewhere on Earth every 18 months on average. [ 4 ]
For two solar eclipses to be almost identical, the geometric alignment of the Earth, Moon and Sun, as well as some parameters of the lunar orbit should be the same. The following parameters and criteria must be repeated for the repetition of a solar eclipse:
These conditions are related to the three periods of the Moon's orbital motion, viz. the synodic month , anomalistic month and draconic month , and to the anomalistic year . In other words, a particular eclipse will be repeated only if the Moon will complete roughly an integer number of synodic, draconic, and anomalistic periods and the Earth-Sun-Moon geometry will be nearly identical. The Moon will be at the same node and the same distance from the Earth. This happens after the period called the saros . Gamma (how far the Moon is north or south of the ecliptic during an eclipse) changes monotonically throughout any single saros series. The change in gamma is larger when Earth is near its aphelion (June to July) than when it is near perihelion (December to January). When the Earth is near its average distance (March to April or September to October), the change in gamma is average.
For the repetition of a lunar eclipse, the geometric alignment of the Moon, Earth and Sun, as well as some parameters of the lunar orbit should be repeated. The following parameters and criteria must be repeated for the repetition of a lunar eclipse:
These conditions are related to the three periods of the Moon's orbital motion, viz. the synodic month , anomalistic month and draconic month . In other words, a particular eclipse will be repeated only if the Moon completes roughly an integer number of synodic, draconic, and anomalistic periods (223, 242, and 239) and the Earth-Sun-Moon geometry is nearly identical to that eclipse. The Moon will be at the same node and the same distance from the Earth. Gamma changes monotonically throughout any single saros series. The change in gamma is larger when Earth is near its aphelion (June to July) than when it is near perihelion (December to January). When the Earth is near its average distance (March to April or September to October), the change in gamma is average.
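The near-coincidence behind the saros can be verified directly: 223 synodic, 242 draconic, and 239 anomalistic months are all almost exactly the same length (the anomalistic-month value below is an assumed standard figure, since the article's table of month lengths is not reproduced here):

```python
# The saros works because three different whole numbers of months span
# nearly the same interval. Month lengths in days; the anomalistic
# value (~27.5545 d) is an assumed standard figure.
synodic, draconic, anomalistic = 29.530588853, 27.212220817, 27.554549878

saros = [223 * synodic, 242 * draconic, 239 * anomalistic]
print(saros)  # all within ~0.25 days of 6585.32 days (18 yr 11 d 8 h)
```

Because all three counts nearly agree, after one saros the Moon returns to the same phase, the same node, and nearly the same distance from the Earth, which is exactly the repetition condition described above.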
Another thing to consider is that the motion of the Moon is not a perfect circle. Its orbit is distinctly elliptic, so the lunar distance from Earth varies throughout the lunar cycle. This varying distance changes the apparent diameter of the Moon, and therefore influences the chances, duration, and type (partial, annular, total, mixed) of an eclipse. This orbital period is called the anomalistic month , and together with the synodic month causes the so-called " full moon cycle " of about 14 lunations in the timings and appearances of full (and new) Moons. The Moon moves faster when it is closer to the Earth (near perigee) and slower when it is near apogee (furthest distance), thus periodically changing the timing of syzygies by up to 14 hours either side (relative to their mean timing), and causing the apparent lunar angular diameter to increase or decrease by about 6%. An eclipse cycle must comprise close to an integer number of anomalistic months in order to perform well in predicting eclipses.
If the Earth had a perfectly circular orbit centered on the Sun, and the Moon's orbit were also perfectly circular and centered on the Earth, and both orbits were coplanar (on the same plane), then two eclipses would happen every lunar month (29.53 days). A lunar eclipse would occur at every full moon, a solar eclipse at every new moon, and all solar eclipses would be of the same type. In fact the distances between the Earth and Moon and between the Earth and the Sun vary because both the Earth and the Moon have elliptic orbits. Also, the two orbits are not in the same plane: the Moon's orbit is inclined about 5.14° to Earth's orbit around the Sun. So the Moon's orbit crosses the ecliptic at two points, or nodes. If a new moon takes place within about 17° of a node, then a solar eclipse will be visible from some location on Earth. [ 5 ] [ 6 ] [ 7 ]
At an average angular velocity of 0.99° per day, the Sun takes 34.5 days to cross the 34° wide eclipse zone centered on each node. Because the Moon's orbit with respect to the Sun has a mean duration of 29.53 days, there will always be one and possibly two solar eclipses during each 34.5-day interval when the Sun passes through the nodal eclipse zones. These time periods are called eclipse seasons. [ 2 ] Either two or three eclipses happen each eclipse season. During the eclipse season, the inclination of the Moon's orbit is low, hence the Sun , Moon, and Earth become aligned straight enough (in syzygy ) for an eclipse to occur.
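The arithmetic behind the guarantee of at least one solar eclipse per eclipse season is simple: the Sun takes longer to cross the nodal eclipse zone than one synodic month lasts (the mean solar motion below is taken as 360° per ~365.24 days, consistent with the ~0.99°/day figure in the text):

```python
# Why every eclipse season contains at least one solar eclipse: the Sun
# needs longer to cross the ~34-degree zone around a node than one
# synodic month lasts, so at least one new moon must fall inside it.
zone_width = 34.0              # degrees, eclipse zone centred on a node
sun_speed = 360 / 365.2422     # mean apparent solar motion, deg/day (~0.99)
synodic = 29.530588853         # synodic month, days

season = zone_width / sun_speed
print(season)  # ~34.5 days, longer than the 29.53-day synodic month
```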
These are the lengths of the various types of months as discussed above (according to the lunar ephemeris ELP2000-85, valid for the epoch J2000.0; taken from ( e.g. ) Meeus (1991) ):
Note that there are three main moving points: the Sun, the Moon, and the (ascending) node; and that there are three main periods, when each of the three possible pairs of moving points meet one another: the synodic month when the Moon returns to the Sun, the draconic month when the Moon returns to the node, and the eclipse year when the Sun returns to the node. These three 2-way relations are not independent (i.e. both the synodic month and eclipse year are dependent on the apparent motion of the Sun, both the draconic month and eclipse year are dependent on the motion of the nodes), and indeed the eclipse year can be described as the beat period of the synodic and draconic months (i.e. the period of the difference between the synodic and draconic months); in formula:
as can be checked by filling in the numerical values listed above.
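In formula form, the beat relation described above is 1/EY = 1/DM − 1/SM, i.e. EY = SM·DM/(SM − DM). Filling in the numerical values confirms the quoted eclipse-year length:

```python
# The eclipse year as the beat period of the draconic and synodic
# months: 1/EY = 1/DM - 1/SM, equivalently EY = SM*DM / (SM - DM).
sm = 29.530588853   # synodic month, days
dm = 27.212220817   # draconic month, days

ey = sm * dm / (sm - dm)
print(ey)  # ~346.62 days: the eclipse (draconic) year
```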
Eclipse cycles have a period in which a certain number of synodic months closely equals an integer or half-integer number of draconic months: one such period after an eclipse, a syzygy ( new moon or full moon ) takes place again near a node of the Moon's orbit on the ecliptic , and an eclipse can occur again. However, the synodic and draconic months are incommensurate: their ratio is not an integer number. We need to approximate this ratio by common fractions : the numerators and denominators then give the multiples of the two periods – draconic and synodic months – that (approximately) span the same amount of time, representing an eclipse cycle.
These fractions can be found by the method of continued fractions : this arithmetical technique provides a series of progressively better approximations of any real numeric value by proper fractions.
Since there may be an eclipse every half draconic month, we need to find approximations for the number of half draconic months per synodic month: so the target ratio to approximate is: SM / (DM/2) = 29.530588853 / (27.212220817/2) = 2.170391682
The continued fractions expansion for this ratio is:
The ratio of synodic months per half eclipse year yields the same series:
Each of these is an eclipse cycle. Less accurate cycles may be constructed by combinations of these.
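The convergents mentioned above can be generated with exact rational arithmetic. The sketch below expands the target ratio 2.170391682 (half draconic months per synodic month) and recovers, among others, 484/223 (the saros: 223 synodic ≈ 242 draconic months) and 777/358 (the inex):

```python
from fractions import Fraction

# Continued-fraction convergents of the target ratio: each convergent
# p/q means "q synodic months ~ p half draconic months", i.e. an
# eclipse cycle of q synodic months.
def convergents(x, terms=10):
    result, (h0, k0, h1, k1) = [], (1, 0, int(x), 1)
    result.append((h1, k1))
    frac = x - int(x)
    for _ in range(terms - 1):
        if frac == 0:
            break
        x = 1 / frac
        a = int(x)
        frac = x - a
        h0, k0, h1, k1 = h1, k1, a * h1 + h0, a * k1 + k0
        result.append((h1, k1))
    return result

# Exact rational input avoids floating-point drift in the expansion.
cs = convergents(Fraction(2170391682, 10**9))
print(cs)  # includes (11, 5), (484, 223) = saros, (777, 358) = inex
```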
This table summarizes the characteristics of various eclipse cycles, and can be computed from the numerical results of the preceding paragraphs; cf. Meeus (1997) Ch.9. More details are given in the comments below, and several notable cycles have their own pages. Many other cycles have been noted, some of which have been named. [ 3 ]
The number of days given is the average. The actual number of days and fractions of days between two eclipses varies because of the variation in the speed of the Moon and of the Sun in the sky. The variation is less if the number of anomalistic months is near a whole number, and if the number of anomalistic years is near a whole number. (See graphs lower down of semester and Hipparchic cycle.)
Any eclipse cycle, and indeed the interval between any two eclipses, can be expressed as a combination of saros ( s ) and inex ( i ) intervals. These are listed in the column "formula".
The next nine cycles, Cartouche through Accuratissima, are all similar, being equal to 52 inex periods plus up to two triads and various numbers of saros periods. This means they all have a near-whole number of anomalistic months. They range from 1505 to 1841 years, and each series lasts for many thousands of years.
Any eclipse can be assigned to a given saros series and inex series. The year of a solar eclipse (in the Gregorian calendar ) is then given approximately by: [ 19 ]
When this is greater than 1, the integer part gives the year AD, but when it is negative the year BC is obtained by taking the integer part and adding 2. For instance, the eclipse in saros series 0 and inex series 0 was in the middle of 2884 BC.
A "panorama" of solar eclipses arranged by saros and inex has been produced by Luca Quaglia and John Tilley showing 61775 solar eclipses from 11001 BC to AD 15000 (see below). [ 20 ] Each column of the graph is a complete Saros series which progresses smoothly from partial eclipses into total or annular eclipses and back into partials. Each graph row represents an inex series. Since a saros, of 223 synodic months, is slightly less than a whole number of draconic months, the early eclipses in a saros series (in the upper part of the diagram) occur after the Moon goes through its node (the beginning and end of a draconic month), while the later eclipses (in the lower part) occur before the Moon goes through its node. Every 18 years, the eclipse occurs on average about half a degree further west with respect to the node, but the progression is not uniform.
Saros and inex number can be calculated for an eclipse near a given date. One can also find the approximate date of solar eclipses at distant dates by first determining one in an inex series such as series 50. This can be done by adding or subtracting some multiple of 28.9450 Gregorian years from the solar eclipse of 10 May, 2013, or 28.9444 Julian years from the Julian date of 27 April, 2013. Once such an eclipse has been found, others around the same time can be found using the short cycles. For lunar eclipses, the anchor dates May 4, 2004 or Julian April 21 may be used.
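The date-stepping procedure just described can be sketched with ordinary calendar arithmetic (the 28.9450-year step and the 10 May 2013 anchor are taken from the text; the resulting dates are only approximate, as the text notes):

```python
from datetime import date, timedelta

# Stepping along an inex series as described above: each step is one
# inex interval, about 28.9450 Gregorian years. The anchor eclipse of
# 10 May 2013 is the one quoted in the text; output dates are rough.
INEX_YEARS = 28.9450
ANCHOR = date(2013, 5, 10)

def inex_eclipse(n):
    """Approximate date n inex periods after the 2013 anchor eclipse."""
    return ANCHOR + timedelta(days=round(n * INEX_YEARS * 365.2425))

print(inex_eclipse(1))  # about 20 days earlier in the year, 29 years on
```

Consistent with the inex behaviour described further below, one step lands about 20 days earlier in the calendar, 29 years later.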
Saros and inex numbers are also defined for lunar eclipses. A solar eclipse of given saros and inex series will be preceded a fortnight earlier by a lunar eclipse whose saros number is 26 lower and whose inex number is 18 higher, or it will be followed a fortnight later by a lunar eclipse whose saros number is 12 higher and whose inex number is 43 lower. As with solar eclipses, the Gregorian year of a lunar eclipse can be calculated as:
Lunar eclipses can also be plotted in a similar diagram, this diagram covering 1000 AD to 2500 AD. The yellow diagonal band represents all the eclipses from 1900 to 2100. This graph immediately illuminates that this 1900–2100 period contains an above average number of total lunar eclipses compared to other adjacent centuries.
This is related to the fact that tetrads (see above) are more common at present than at other periods. Tetrads occur when four lunar eclipses occur at four lunar inex numbers, decreasing by 8 (that is, a semester apart), which are in the range giving fairly central eclipses (small gamma ), and furthermore the eclipses take place around halfway between the Earth's perihelion and aphelion. For example, in the tetrad of 2014-2015 (the so-called Four Blood Moons ), the inex numbers were 52, 44, 36, and 28, and the eclipses occurred in April and late September-early October. Normally the absolute value of gamma decreases and then increases, but because in April the Sun is further east than its mean longitude , and in September/October further west than its mean longitude, the absolute values of gamma in the first and fourth eclipse are decreased, while the absolute values in the second and third are increased. The result is that all four gamma values are small enough to lead to total lunar eclipses. The phenomenon of the Moon "catching up" with the Sun (or the point opposite the Sun), which is usually not at its mean longitude, has been called a "stern chase". [ 21 ]
Inex series move slowly through the year, each eclipse occurring about 20 days earlier in the year, 29 years later. This means that over a period of 18.2 inex cycles (526 years) the date moves around the whole year. But because the perihelion of Earth's orbit is slowly moving as well, the inex series that are now producing tetrads will again be halfway between Earth's perihelion and aphelion in about 586 years. [ 14 ]
One can skew the graph of inex versus saros for solar or lunar eclipses so that the x axis shows the time of year. (An eclipse which is two saros series and one inex series later than another will be only 1.8 days later in the year in the Gregorian calendar.) This shows the 586-year oscillations as oscillations that go up around perihelion and down around aphelion (see graph).
The properties of eclipses, such as the timing, the distance or size of the Moon and Sun, or the distance the Moon passes north or south of the line between the Sun and the Earth, depend on the details of the orbits of the Moon and the Earth. There exist formulae for calculating the longitude, latitude, and distance of the Moon and of the Sun using sine and cosine series. The arguments of the sine and cosine functions depend on only four values, the Delaunay arguments:
- D, the mean elongation of the Moon from the Sun
- F, the mean argument of latitude of the Moon (its angular distance from the ascending node)
- l, the mean anomaly of the Moon (its angular distance from perigee)
- l′, the mean anomaly of the Sun (the Earth's angular distance from perihelion)
These four arguments are basically linear functions of time but with slowly varying higher-order terms. A diagram of inex and saros indices such as the "Panorama" shown above is like a map, and we can consider the values of the Delaunay arguments on it. The mean elongation, D, goes through 360° 223 times when the inex value goes up by 1, and 358 times when the saros value goes up by 1. It is thus equivalent to 0°, by definition, at each combination of solar saros index and inex index, because solar eclipses occur when the elongation is zero. From D one can find the actual elapsed time from some reference time such as J2000 , which is like a linear function of inex and saros but with a deviation that grows quadratically with distance from the reference time, amounting to about 19 minutes at a distance of 1000 years.

The mean argument of latitude, F, is equivalent to 0° or 180° (depending on whether the saros index is even or odd) along the smooth curve going through the centre of the band of eclipses, where gamma is near zero (around inex series 50 at present). F decreases as we go away from this curve towards higher inex series, and increases on the other side, by about 0.5° per inex series. When the inex value is too far from the centre, the eclipses disappear because the Moon is too far north or south of the Sun.

The mean anomaly of the Sun is a smooth function, increasing by about 10° when increasing inex by 1 in a saros series and decreasing by about 20° when increasing saros index by 1 in an inex series. This means it is almost constant when increasing inex by 1 and saros index by 2 (the "Unidos" interval of 65 years). The above graph showing the time of year of eclipses basically shows the solar anomaly, since the perihelion moves by only one day per century in the Julian calendar, or 1.7 days per century in the Gregorian calendar.

The mean anomaly of the Moon is more complicated.
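The quoted shifts in the Sun's mean anomaly (about +10° per saros interval, about −20° per inex interval, and near-zero for the 65-year "Unidos" combination) can be checked directly (the anomalistic-year length below is an assumed standard value):

```python
# Checking the solar-anomaly shifts quoted above. Interval lengths in
# days; the anomalistic year (~365.2596 d) is an assumed standard value.
synodic = 29.530588853
anom_year = 365.259636

saros = 223 * synodic   # interval between eclipses of one saros series
inex = 358 * synodic    # interval between eclipses of one inex series

def anomaly_shift(days):
    """Change in the Sun's mean anomaly (degrees, folded to -180..180)."""
    shift = (days / anom_year * 360) % 360
    return shift - 360 if shift > 180 else shift

print(anomaly_shift(saros))             # ~ +10.5 deg per saros interval
print(anomaly_shift(inex))              # ~ -20.3 deg per inex interval
print(anomaly_shift(inex + 2 * saros))  # ~ +0.7 deg: the 65-yr "Unidos"
```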
If we look at the eclipses whose saros index is divisible by 3, then the mean anomaly is a smooth function of inex and saros values. Contours run at an angle, so that mean anomaly is fairly constant when inex and saros values increase together at a ratio of around 21:24. The function varies slowly, changing by only 7.4° when changing the saros index by 3 at a constant inex value. A similar smooth function obtains for eclipses with saros modulo 3 equal to 1, but shifted by about 120°, and for saros modulo 3 equal to 2, shifted by 120° the other way. [ 22 ] [ 23 ]
The upshot is that the properties vary slowly over the diagram in any of the three sets of saros series. The accompanying graph shows just the saros series that have saros index modulo 3 equal to zero. The blue areas are where the mean anomaly of the Moon is near 0°, meaning that the Moon is near perigee at the time of the eclipse, and therefore relatively large, favoring total eclipses. In the red area, the Moon is generally further from the Earth, and the eclipses are annular. We can also see the effect of the Sun's anomaly. Eclipses in July, when the Sun is further from the Earth, are more likely to be total, so the blue area extends over a greater range of inex index than for eclipses in January.
The waviness seen in the graph is also due to the Sun's anomaly. In April the Sun is further east than if its longitude progressed evenly, and in October it is further west, and this means that in April the Moon catches up with the Sun relatively late, and in October relatively early. This in turn means that the argument of latitude at the actual time of the eclipse will be raised higher in April and lowered in October. Eclipses (either partial or not) with low inex index (near the upper edge in the "Panorama" graph) fail to occur in April because syzygy occurs too far to the east of the node, but more eclipses occur at high inex values in April because syzygy is not so far west of the node. The opposite applies to October. It also means that in April ascending-node solar eclipses will cast their shadow further north (such as the solar eclipse of April 8, 2024 ), and descending-node eclipses further south. The opposite is the case in October.
Eclipses that occur when the Earth is near perihelion (solar anomaly near zero) belong to saros series in which the gamma value changes little every 18.03 years. The reason is that from one eclipse to the next in the saros series, the day in the year advances by about 11 days, but near perihelion the Sun's apparent motion is faster than average, so it moves further eastward over those 11 days than it would at other times of the year. This means the Sun's position relative to the node does not change as much as for saros series giving eclipses at other times of the year. In the first half of the 21st century, solar saros series showing this slow rate of change of gamma include 122 (giving an eclipse on January 6, 2019), 132 (January 5, 2038), 141 (January 15, 2010), and 151 (January 4, 2011). Sometimes this phenomenon leads to a saros series giving a large number of central eclipses; for example, solar saros 128 gave 20 eclipses with |γ|<0.75 between 1615 and 1958, whereas series 135 gave only nine, between 1872 and 2016. [ 14 ]
The time interval between two eclipses in an eclipse cycle is variable. The time of an eclipse can be advanced or delayed by up to ten hours due to the eccentricity of the Moon's orbit – the eclipse will be early when the Moon is going from perigee to apogee, and late when it is going from apogee toward perigee. The time is also delayed because of the eccentricity of the Earth's orbit. Eclipses occur about four hours later in April and four hours earlier in October. This means that the delay varies from eclipse to eclipse in a series. The delay is the sum of two sine-like functions, one based on the time in the anomalistic year and one on the time in the anomalistic month. The periods of these two waves depends on how close the nominal interval between two eclipses in the series is to a whole number of anomalistic years and anomalistic months. In series like the "Immobilis" or the "Accuratissima", which are near whole numbers of both, the delay varies very slowly, so the interval is quite constant. In series like the octon, the Moon's anomaly changes considerably at least twice every three intervals, so the intervals vary considerably.
The "Panorama" can also be related to where on the Earth the shadow of the Moon falls at the central time of the eclipse. If this "maximum eclipse" for a given eclipse is at a particular location, eclipses three saros later will be at a similar latitude (because the saros is close to a whole number of draconic months) and longitude (because a period of three saros is always within a couple hours of being 19755.96 days long, which would change the longitude by about 13° eastward). If instead we increase the saros index at a constant inex index, the intervals are quite variable because the number of anomalistic months or years is not very close to a whole number. This means that although the latitude will be similar (but changing sign), the longitude change can vary by more than 180°. Moving by six inex (a de la Hire cycle) preserves the latitude fairly well but the longitude change is very variable because of the variation of the solar anomaly.
Both the angular size of the Moon in the sky at eclipses at the ascending node and the size of the Sun at those eclipses vary in a sort of sine wave. The sizes at the descending node vary in the same way, but 180° out of phase. The Moon is large at an ascending-node eclipse when its perigee is near the ascending node, so the period for the size of the Moon is the time it takes for the angle between the node and the perigee to go through 360°, or
(Note that a plus sign is used because the perigee moves eastward whereas the node moves westward.) A maximum of this is in 2024 (September), explaining why the ascending-node solar eclipse of April 8, 2024 , is near perigee and total and the descending-node solar eclipse of October 2, 2024 , is near apogee and annular. Although this cycle is about a day less than six years, super-moon eclipses actually occur every three years on average, because there are also the ones at the descending node that occur in between the ones at the ascending node.
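The period referred to above is the beat of the Moon's apsidal precession (perigee circling eastward in about 8.85 years) and nodal regression (about 18.60 years westward); because the motions are in opposite directions, the rates add. The precession periods below are assumed standard values:

```python
# Beat period governing the Moon's apparent size at ascending-node
# eclipses: perigee precesses eastward (~8.85 yr), nodes regress
# westward (~18.60 yr), so the rates add (the "plus sign" in the text).
apsidal = 8.8504   # years for perigee to circle the ecliptic (assumed)
nodal = 18.5996    # years for the nodes (quoted as ~18.60 in the text)

period = 1 / (1 / apsidal + 1 / nodal)
print(period)  # ~5.997 years: "about a day less than six years"
```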
At lunar eclipses the size of the Moon is 180° out of phase with its size at solar eclipses.
The Sun is large at an ascending-node eclipse when its perigee (the direction toward the Sun when it is closest to the Earth) is near the ascending node, so the period for the size of the Sun is
In terms of Delaunay arguments, the Sun is biggest at ascending-node solar eclipses and smallest at descending-node solar eclipses around when l'+D=F (modulo 360°), such as June, 2010. It is smallest at descending-node solar eclipses and biggest at ascending-node solar eclipses 9.3 years later, such as September, 2019.
The lengths of the synodic, draconic, and anomalistic months, the length of the day, and the length of the anomalistic year are all slowly changing. The synodic and draconic months, the day, and the anomalistic year (at least at present) are getting longer, whereas the anomalistic month is getting shorter. The eccentricity of the Earth's orbit is presently decreasing at about one percent per 300 years, thus decreasing the effect of the sun's anomaly. Formulae for the Delaunay arguments show that the lengthening of the synodic month means that eclipses tend to occur later than they would otherwise proportionally to the square of the time separation from now, by about 0.32 hours per millennium squared. The other Delaunay arguments (mean anomaly of the Moon and of the sun and the argument of latitude) will all be increased because of this, but on the other hand the Delaunay arguments are also affected by the fact that the lengths of the draconic month and anomalistic month and year are changing. The net results are:
As an example, from the solar eclipse of April, 1688 BC, to that of April, AD 1623, is 110 inex plus 7 saros (equivalent to a "Palaea-Horologia" plus a "tritrix", 3310.09 Julian years). According to the table above, the Delaunay arguments should change by:
But because of the changing lengths of these, they actually changed by: [ 22 ]
Note that in this example, in terms of anomaly (position with respect to perigee) the moon returns to within 1% of an orbit (about 3.4°), rather than 3.2% as predicted using today's values of month lengths.
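The 3310.09-year interval in this example can be reproduced from the standard approximate lengths of the saros and the inex; those two day counts are assumptions of this sketch, not stated in the text:

```python
# Length of 110 inex plus 7 saros, the interval used in the example above.
SAROS = 6585.3211   # days (standard approximate value)
INEX = 10571.9509   # days (standard approximate value)

interval_days = 110 * INEX + 7 * SAROS
interval_years = interval_days / 365.25  # Julian years
print(round(interval_years, 2))          # about 3310.09, as in the text
```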
The fact that the day is getting longer means there are more revolutions of the Earth since some point in the past than what one might calculate from the time and date, and fewer from now to some future time. This effect means eclipses occur earlier in the day or calendar, going in the opposite direction relative to the effect of the lengthening synodic month already mentioned. This effect is known as ΔT . It cannot be calculated exactly but amounts to around 50 minutes per millennium squared. [ 24 ] In our example above, this means that although the eclipse in 1688 BC was centred on March 16 at 00:15:31 in Dynamic time , it actually occurred before midnight and therefore on March 15 (using time based on the location of present-day Greenwich, and using the proleptic Julian calendar ). [ 25 ]
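The size of ΔT for the 1688 BC eclipse can be estimated with the quadratic rule of thumb just quoted (about 50 minutes per millennium squared). The reference epoch of roughly AD 1800 is an assumption of this sketch, since the text does not specify one:

```python
# Rough Delta-T estimate using the ~50 minutes per millennium squared rule.
# The AD 1800 reference epoch is an assumption of this sketch.
RATE_MIN_PER_KA2 = 50.0
REF_EPOCH = 1800

def delta_t_minutes(year):
    ka = (year - REF_EPOCH) / 1000.0  # millennia from the reference epoch
    return RATE_MIN_PER_KA2 * ka * ka

dt_hours = delta_t_minutes(-1687) / 60.0  # astronomical year -1687 = 1688 BC
print(round(dt_hours, 1))                 # roughly ten hours
```

An offset on the order of ten hours, subtracted from 00:15:31 Dynamical time, indeed pushes the event back onto the previous calendar day.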
The fact that the argument of latitude is decreased explains why one sees a curvature in the "Panorama" above. Central eclipses in the past and in the future are higher in the graph (lower inex number) than what one would expect from a linear extrapolation. This is because the ratio of the length of a synodic month to the length of a draconic month is getting smaller. Although both are getting longer, the draconic month is doing so more quickly because the rate at which the node moves west is decreasing. [ 22 ] | https://en.wikipedia.org/wiki/Periodicity_of_solar_eclipses |
Periodinanes , also known as λ 5 - iodanes , are organoiodine compounds with iodine in the +5 oxidation state. These compounds are described as hypervalent because the iodine center has more than 8 valence electrons .
The λ 5 -iodanes such as the Dess-Martin periodinane have square pyramidal geometry with 4 heteroatoms in basal positions and one apical phenyl group. [ 1 ]
Iodoxybenzene or iodylbenzene , C 6 H 5 IO 2 , is a known oxidizing agent.
Dess-Martin periodinane (1983) is another powerful oxidant, developed as an improvement on the IBX acid already in existence in 1983. IBX acid is prepared from 2-iodobenzoic acid, potassium bromate and sulfuric acid [ 2 ] and is insoluble in most solvents, whereas the Dess-Martin reagent, prepared by reaction of IBX acid with acetic anhydride, is very soluble. The oxidation mechanism ordinarily consists of a ligand exchange reaction followed by a reductive elimination .
The predominant use of periodinanes is as oxidizing reagents replacing toxic reagents based on heavy metals. [ 3 ] | https://en.wikipedia.org/wiki/Periodinane |
A Peripheral Interface Adapter (PIA) is a peripheral integrated circuit providing parallel I/O interfacing for microprocessor systems.
Common PIAs include the Motorola MC6820 and MC6821, and the MOS Technology MCS6520, all of which are functionally identical but have slightly different electrical characteristics. The PIA is most commonly packaged in a 40 pin DIP package .
The PIA is designed for glueless connection to the Motorola 6800 style bus , and provides 20 I/O lines, which are organised into two 8-bit bidirectional ports (or 16 general-purpose I/O lines) and 4 control lines (for handshaking and interrupt generation). The directions for all 16 general lines (PA0-7, PB0-7) can be programmed independently. The control lines can be programmed to generate interrupts, automatically generate handshaking signals for devices on the I/O ports, or output a plain high or low signal.
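The programming model this implies can be illustrated with the MC6821's register-select scheme, in which bit 2 of each control register determines whether the port's data address reaches the data direction register or the peripheral (data) register. The following Python model is a sketch of that select logic for a single port, based on the standard MC6821 register map rather than on anything stated here:

```python
class PiaPort:
    """One 8-bit port of an MC6821-style PIA (register-select logic only)."""
    def __init__(self):
        self.ddr = 0x00       # data direction register: 1 bits are outputs
        self.output = 0x00    # output (peripheral) register
        self.control = 0x00   # control register; bit 2 selects DDR vs data

    def write(self, offset, value):
        if offset == 1:               # control register address
            self.control = value & 0x3F   # bits 6-7 are read-only IRQ flags
        elif self.control & 0x04:     # control bit 2 set: data register
            self.output = value & 0xFF
        else:                         # control bit 2 clear: DDR selected
            self.ddr = value & 0xFF

# Typical initialisation: clear bit 2 to reach the DDR, set the direction,
# then set bit 2 to talk to the data register itself.
port_a = PiaPort()
port_a.write(1, 0x00)   # select DDR
port_a.write(0, 0xFF)   # all 8 lines become outputs
port_a.write(1, 0x04)   # select the data register
port_a.write(0, 0xAA)   # drive the lines
print(hex(port_a.ddr), hex(port_a.output))  # 0xff 0xaa
```

This select scheme is why 6800-family initialisation code characteristically writes the control register twice: once with bit 2 clear to set directions, then with bit 2 set for normal data transfers.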
In 1976 Motorola switched the MC6800 family to a depletion-mode technology to improve the manufacturing yield and to operate at a faster speed. The Peripheral Interface Adapter had a slight change in the electrical characteristics of the I/O pins so the MC6820 became the MC6821. [ 1 ]
The MC6820 was used in the Apple I to interface the ASCII keyboard and the display. [ 2 ] It was also deployed in the 6800-powered first generation of Bally electronic pinball machines (1977-1985), such as Flash Gordon [ 3 ] and Kiss . [ 4 ] The MCS6520 was used in the Atari 400 and 800 [ 5 ] and Commodore PET [ 6 ] family of computers (for example, to provide four joystick ports to the machine).
The Tandy Color Computer uses two MC6821s to provide I/O access to the video, audio and peripherals. [ 7 ]
| https://en.wikipedia.org/wiki/Peripheral_Interface_Adapter
In computing, a peripheral bus is a computer bus designed to support computer peripherals like printers and hard drives . The term is generally used to refer to systems that offer support for a wide variety of devices, like Universal Serial Bus , as opposed to those that are dedicated to specific types of hardware; Serial AT Attachment (SATA), for instance, is designed and optimized for communication with mass storage devices .
This usage is not universal; some definitions of peripheral bus include any bus that is not a system bus , including examples like PCI . [ 1 ] Others treat PCI and similar systems as a third category, the expansion bus .
| https://en.wikipedia.org/wiki/Peripheral_bus
The peripheral nervous system ( PNS ) is one of two components that make up the nervous system of bilateral animals , with the other part being the central nervous system (CNS). The PNS consists of nerves and ganglia , which lie outside the brain and the spinal cord . [ 1 ] The main function of the PNS is to connect the CNS to the limbs and organs , essentially serving as a relay between the brain and spinal cord and the rest of the body. [ 2 ] Unlike the CNS, the PNS is not protected by the vertebral column and skull , or by the blood–brain barrier , which leaves it exposed to toxins . [ 3 ]
The peripheral nervous system can be divided into a somatic division and an autonomic division . Each of these can further be differentiated into a sensory and a motor sector. [ 4 ] In the somatic nervous system, the cranial nerves are part of the PNS, with the exceptions of the olfactory nerve and epithelia, and the optic nerve (cranial nerve II) along with the retina , which are considered parts of the central nervous system based on developmental origin. The second cranial nerve is not a true peripheral nerve but a tract of the diencephalon . [ 5 ] Cranial nerve ganglia , as with all ganglia , are part of the PNS. [ 6 ] The autonomic nervous system exerts involuntary control over smooth muscle and glands . [ 7 ]
The peripheral nervous system can be divided into a somatic and an autonomic division, which are part of the somatic nervous system and the autonomic nervous system , respectively. The somatic nervous system is under voluntary control, and transmits signals from the brain to end organs such as muscles . The sensory nervous system is part of the somatic nervous system and transmits signals from senses such as taste and touch (including fine touch and gross touch) to the spinal cord and brain. The autonomic nervous system is a "self-regulating" system which influences the function of organs outside voluntary control, such as the heart rate , or the functions of the digestive system .
The somatic nervous system includes the sensory nervous system (e.g., the somatosensory system ) and consists of sensory nerves and somatic nerves, as well as many nerves that serve both functions.
In the head and neck , cranial nerves carry somatosensory data. There are twelve cranial nerves, ten of which originate from the brainstem , and mainly control the functions of the anatomic structures of the head with some exceptions. One unique cranial nerve is the vagus nerve , which receives sensory information from organs in the thorax and abdomen . The other unique cranial nerve is the accessory nerve which is responsible for innervating the sternocleidomastoid and trapezius muscles , neither of which are located exclusively in the head.
For the rest of the body, spinal nerves are responsible for somatosensory information. These arise from the spinal cord , usually as a web ("plexus") of interconnected nerve roots that arrange to form single nerves. These nerves control the functions of the rest of the body. In humans, there are 31 pairs of spinal nerves: 8 cervical, 12 thoracic, 5 lumbar, 5 sacral, and 1 coccygeal. These nerve roots are named according to the spinal vertebrae they are adjacent to. In the cervical region, the spinal nerve roots emerge above the corresponding vertebrae (i.e., the nerve root between the skull and the 1st cervical vertebra is called spinal nerve C1). From the thoracic region to the coccygeal region, the spinal nerve roots emerge below the corresponding vertebrae. This method creates a problem when naming the spinal nerve root between C7 and T1, so it is called spinal nerve root C8. In the lumbar and sacral regions, the spinal nerve roots travel within the dural sac, and below the level of L2 they travel as the cauda equina.
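The counting rule described above can be made concrete with a short sketch (the nerve labels are the conventional ones, assumed here):

```python
# Sketch of the spinal-nerve naming scheme: cervical roots exit above the
# correspondingly numbered vertebra, which yields eight cervical nerves for
# seven cervical vertebrae (C8 sits between C7 and T1); from T1 downward
# each root exits below its vertebra.
def spinal_nerves():
    names = [f"C{i}" for i in range(1, 9)]    # C1-C8
    names += [f"T{i}" for i in range(1, 13)]  # T1-T12
    names += [f"L{i}" for i in range(1, 6)]   # L1-L5
    names += [f"S{i}" for i in range(1, 6)]   # S1-S5
    names += ["Co1"]                          # single coccygeal nerve
    return names

pairs = spinal_nerves()
print(len(pairs))   # 31 pairs, as in the text
print(pairs[7])     # C8, the root between vertebrae C7 and T1
```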
The first 4 cervical spinal nerves, C1 through C4, split and recombine to produce a variety of nerves that serve the neck and back of head.
Spinal nerve C1 is called the suboccipital nerve , which provides motor innervation to muscles at the base of the skull .
C2 and C3 form many of the nerves of the neck, providing both sensory and motor control. These include the greater occipital nerve , which provides sensation to the back of the head, the lesser occipital nerve , which provides sensation to the area behind the ears , the greater auricular nerve and the lesser auricular nerve .
The phrenic nerve , which arises from nerve roots C3, C4 and C5, is essential for survival. It supplies the thoracic diaphragm , enabling breathing . If the spinal cord is transected above C3, spontaneous breathing is not possible.
The last four cervical spinal nerves, C5 through C8, and the first thoracic spinal nerve, T1, combine to form the brachial plexus , or plexus brachialis , a tangled array of nerves, splitting, combining and recombining, to form the nerves that subserve the upper-limb and upper back. Although the brachial plexus may appear tangled, it is highly organized and predictable, with little variation between people. See brachial plexus injuries .
The anterior divisions of the lumbar nerves , sacral nerves , and coccygeal nerve form the lumbosacral plexus , the first lumbar nerve being frequently joined by a branch from the twelfth thoracic . For descriptive purposes this plexus is usually divided into three parts:
The autonomic nervous system (ANS) controls involuntary responses to regulate physiological functions. [ 8 ] The brain and spinal cord of the central nervous system are connected by ganglionic neurons with organs that have smooth muscle or cardiac muscle, such as the heart, bladder, and other cardiac, exocrine, and endocrine related organs. [ 8 ] The most notable physiological effects of autonomic activity are pupil constriction and dilation, and salivation. [ 8 ] The autonomic nervous system is always activated, but is in either a sympathetic or a parasympathetic state. [ 8 ] Depending on the situation, one state can overshadow the other, resulting in the release of different kinds of neurotransmitters . [ 8 ]
The sympathetic system is activated during a "fight or flight" situation in which mental stress or physical danger is encountered. [ 8 ] Neurotransmitters such as norepinephrine and epinephrine are released, [ 8 ] which increases heart rate and blood flow in certain areas like muscle, while simultaneously decreasing the activity of functions that are not critical for survival, like digestion. [ 9 ] The systems are independent of each other, which allows activation of certain parts of the body while others remain rested. [ 9 ]
Primarily using the neurotransmitter acetylcholine (ACh) as a mediator, the parasympathetic system allows the body to function in a "rest and digest" state. [ 9 ] Consequently, when the parasympathetic system dominates the body, there are increases in salivation and activities in digestion, while heart rate and other sympathetic response decrease. [ 9 ] Unlike the sympathetic system, humans have some voluntary controls in the parasympathetic system. The most prominent examples of this control are urination and defecation. [ 9 ]
There is a lesser-known division of the autonomic nervous system known as the enteric nervous system . [ 9 ] Located only around the digestive tract, this system allows for local control without input from the sympathetic or the parasympathetic branches, though it can still receive and respond to signals from the rest of the body. [ 9 ] The enteric system is responsible for various functions related to the gastrointestinal system. [ 9 ]
Diseases of the peripheral nervous system can be specific to one or more nerves, or affect the system as a whole.
Damage to any single peripheral nerve or nerve root is called a mononeuropathy . Such injuries can result from trauma or from compression . Compression of nerves can occur because of a tumour mass or injury. Alternatively, if a nerve lies in an area of fixed size it may become trapped if the other components increase in size, as in carpal tunnel syndrome and tarsal tunnel syndrome . Common symptoms of carpal tunnel syndrome include pain and numbness in the thumb, index and middle finger. In peripheral neuropathy, the function of one or more nerves is damaged through a variety of means. Toxic damage may occur because of diabetes ( diabetic neuropathy ), alcohol, heavy metals or other toxins; some infections; and autoimmune and inflammatory conditions such as amyloidosis and sarcoidosis . [ 8 ] Peripheral neuropathy is associated with sensory loss in a "glove and stocking" distribution that begins at the periphery and slowly progresses upwards, and may also be associated with acute and chronic pain. Peripheral neuropathy is not limited to the somatosensory nerves, but affects the autonomic nervous system too ( autonomic neuropathy ). [ 8 ] | https://en.wikipedia.org/wiki/Peripheral_nervous_system
Peripheral node addressin , often referred to as PNAd , refers to a group of glycoprotein ligands . [ 1 ] More formally, the term includes "lymph" to specify the node: peripheral lymph node addressin. [ 2 ]
PNAd is a critical component of the immune system, enabling the targeted migration of lymphocytes to the lymph nodes and facilitating an effective immune response. PNAd's role in lymphocyte homing is essential for the proper functioning of the immune system, as it ensures that lymphocytes can efficiently enter the lymph nodes to encounter and respond to foreign antigens, such as viruses and bacteria.
PNAd is a type of cell adhesion molecule found on the surface of high endothelial venules (HEVs) in lymph nodes. It plays a crucial role in the immune system by facilitating the migration of lymphocytes, a type of white blood cell, from the bloodstream to the lymph nodes where they participate in immune responses.
The process of lymphocyte migration from the bloodstream to the lymph nodes is called lymphocyte homing. PNAd plays a key role in this process by interacting with L-selectin , which is present on the surface of lymphocytes. The adhesion molecule L-selectin binds to sulfated carbohydrate ligands on high endothelial venules (HEV). [ 1 ] The binding between PNAd and L-selectin allows lymphocytes to slow down and roll along the inner surface of HEVs. This rolling action enables lymphocytes to come into close contact with other molecules called chemokines, which trigger the firm adhesion and subsequent transmigration of lymphocytes across the endothelial cells and into the lymph node.
| https://en.wikipedia.org/wiki/Peripheral_node_addressin
In immunology , peripheral tolerance is the second branch of immunological tolerance , after central tolerance . It takes place in the immune periphery (after T and B cells egress from primary lymphoid organs ). Its main purpose is to ensure that self-reactive T and B cells which escaped central tolerance do not cause autoimmune disease . [ 1 ] Peripheral tolerance can also serve a purpose in preventing an immune response to harmless food antigens and allergens . [ 2 ]
Self-reactive cells are subject to clonal deletion or clonal diversion. Both processes of peripheral tolerance control the presence and production of self-reactive immune cells. [ 3 ] Deletion of self-reactive T cells in the thymus is only 60-70% efficient, and the naive T cell repertoire contains a significant portion of low-avidity self-reactive T cells. These cells can trigger an autoimmune response, and there are several mechanisms of peripheral tolerance to prevent their activation. [ 4 ] Antigen-specific mechanisms of peripheral tolerance include persistence of T cells in quiescence, ignorance of antigen and direct inactivation of effector T cells by clonal deletion, conversion to regulatory T cells (Tregs) or induction of anergy . [ 5 ] [ 4 ] Tregs, which are also generated during thymic T cell development, further suppress the effector functions of conventional lymphocytes in the periphery. [ 6 ] Dendritic cells (DCs) participate in the negative selection of autoreactive T cells in the thymus, but they also mediate peripheral immune tolerance through several mechanisms. [ 7 ]
Whether a particular antigen is handled by central or peripheral tolerance is determined by its abundance in the organism. [ 8 ] B cells have a lower probability of expressing cell surface markers that pose the threat of causing an autoimmune attack. [ 9 ] Peripheral tolerance of B cells is largely mediated by B cell dependence on T cell help; it is, however, much less studied.
Tregs are the central mediators of immune suppression and they play a key role in maintaining peripheral tolerance. The master regulator of Treg phenotype and function is Foxp3. Natural Tregs (nTregs) are generated in the thymus during negative selection ; the TCR of nTregs shows a high affinity for self-peptides. Induced Tregs (iTregs) develop from conventional naive helper T cells after antigen recognition in the presence of TGF-β and IL-2. iTregs are enriched in the gut to establish tolerance to commensal microbiota and harmless food antigens. [ 10 ] Regardless of their origin, once present Tregs use several different mechanisms to suppress autoimmune reactions. These include depletion of IL-2 from the environment, secretion of the anti-inflammatory cytokines IL-10, TGF-β and IL-35 [ 11 ] and induction of apoptosis of effector cells. CTLA-4 is a surface molecule present on Tregs which can prevent CD28 -mediated costimulation of T cells after TCR antigen recognition. [ 6 ]
DCs are a major cell population responsible for the initiation of the adaptive immune response. They present short peptides on MHCII , which are recognized by specific TCR. After encountering an antigen bearing danger- or pathogen-associated molecular patterns , DCs start the secretion of proinflammatory cytokines, express the costimulatory molecules CD80 and CD86 and migrate to the lymph nodes to activate naive T cells. [ 1 ] However, immature DCs (iDCs) are able to induce both CD4 and CD8 tolerance. The immunogenic potential of iDCs is weak because of their low expression of costimulatory molecules and modest level of MHCII. iDCs perform endocytosis and phagocytosis of foreign antigens and apoptotic cells, which occurs physiologically in peripheral tissues. Antigen-loaded iDCs migrate to the lymph nodes, secrete IL-10 and TGF-β, and present antigen to naive T cells without costimulation. If the T cell recognizes the antigen, it is turned into an anergic state, deleted or converted to a Treg. [ 12 ] iDCs are more potent Treg inducers than lymph node resident DCs. [ 7 ] BTLA is a crucial molecule for DC-mediated Treg conversion. [ 13 ] Tolerogenic DCs express FasL and TRAIL to directly induce apoptosis of responding T cells. They also produce indoleamine 2,3-dioxygenase (IDO) to prevent T cell proliferation, and secrete retinoic acid to support iTreg differentiation. [ 14 ] Nonetheless, upon maturation (for example during an infection) DCs largely lose their tolerogenic capabilities. [ 12 ]
Aside from dendritic cells, additional cell populations were identified that are able to induce antigen-specific T cell tolerance. These are mainly the members of lymph node stromal cells (LNSCs). LNSCs are generally divided into several subpopulations based on the expression of gp38 ( PDPN ) and CD31 surface markers. [ 15 ] Among those, only fibroblastic reticular cells and lymphatic endothelial cells (LECs) were shown to play a role in peripheral tolerance. Both of those populations are able to induce CD8 T cell tolerance by the presentation of the endogenous antigens on MHCI molecules. [ 16 ] [ 17 ] LNSCs lack expression of the autoimmune regulator , and the production of autoantigens depends on transcription factor Deaf1 . LECs express PD-L1 to engage PD-1 on CD8 T cells to restrict self-reactivity. [ 18 ] LNSCs can drive the CD4 T cell tolerance by the presentation of the peptide-MHCII complexes, which they acquired from the DCs. [ 19 ] On the other hand, LECs can serve as a self-antigen reservoir and can transport self-antigens to DCs to direct self-peptide-MHCII presentation to CD4 T cells. In mesenteric lymph nodes (mLN), LNSCs can induce Tregs directly by secretion of TGF-β or indirectly by imprinting mLN-resident DCs. [ 18 ]
Although the majority of self-reactive T cell clones are deleted in the thymus by the mechanisms of central tolerance , low-affinity self-reactive T cells continuously escape to the immune periphery. [ 8 ] Therefore, additional mechanisms exist to prevent unrestrained self-reactive T cell responses.
When naive T cells exit the thymus , they are in a quiescent state. That means they are in the non-proliferative, G0 stage of the cell cycle and have low metabolic, transcriptional and translational activities, but still retain the capacity to enter the cell cycle. [ 20 ] Quiescence prevents naive T cells from being activated by tonic signaling, the low-level constitutive signaling that occurs even in the absence of a ligand. [ 21 ] After antigen exposure and costimulation, naive T cells start the process called quiescence exit, which results in proliferation and effector differentiation. [ 22 ]
Naive cells must enter and exit the quiescent state at the proper time in their life cycle. If T cells exit quiescence prematurely, tolerance of potentially self-reactive cells is lost. T cells rely on negative regulators to keep them in the quiescent state until they are ready for exit; down-regulation of these negative regulators increases T cell activation. Premature activation and overactivation of T cells can lead to harmful downstream responses and possibly trigger an autoimmune response. [ 23 ]
As cells exit the quiescent state they up-regulate enzymes responsible for the biosynthesis of essential macromolecules (nucleic acids, proteins, carbohydrates, etc.). [ 23 ] At this stage the T cell enters the cell cycle and remains metabolically active.
When self-reactive T cells escape thymic deletion they may enter an ignorant state. [ 24 ] Self-reactive T cells can fail to initiate an immune response after recognition of self-antigen. These T cells are not classified as dysfunctional members of the immune response; rather, they are antigen-inexperienced naive cells that remain in circulation. [ 25 ] These cells retain the ability to become activated in the presence of the correct stimuli.
Ignorance can be seen in situations where the concentration of antigen is not high enough to trigger activation. The intrinsic mechanism of ignorance is an affinity of the TCR for its antigen that is too low to elicit T cell activation. There is also an extrinsic mechanism: antigens that are present in generally low numbers cannot stimulate T cells sufficiently. [ 1 ] Additionally, there are anatomical barriers that prevent the activation of these T cells. Such specialized mechanisms ensuring ignorance by the immune system have developed in so-called immune privileged organs . [ 5 ]
T cells can overcome ignorance through a sufficient signal from signaling molecules (cytokines, infection, inflammatory stimuli, etc.) and induce an autoimmune response. [ 25 ] In the inflammatory context, T cells can override ignorance and induce autoimmune disease. [ 4 ]
Anergy is a state of functional unresponsiveness induced upon self antigen recognition. [ 26 ] T cells can be made non-responsive to antigens presented if the T cell engages an MHC molecule on an antigen presenting cell (signal 1) without engagement of costimulatory molecules (signal 2). Co-stimulatory molecules are upregulated by cytokines (signal 3) in the context of acute inflammation. Without pro-inflammatory cytokines, co-stimulatory molecules will not be expressed on the surface of the antigen presenting cell, and so anergy will result if there is an MHC-TCR interaction between the T cell and the APC. [ 5 ] TCR stimulation leads to translocation of NFAT into the nucleus. In the absence of costimulation, there is no MAPK signaling in T cells and translocation of the transcription factor AP-1 into the nucleus is impaired. This imbalance of transcription factors in T cells results in the expression of several genes involved in forming an anergic state. [ 27 ] Anergic T cells show long-lasting epigenetic programming that silences effector cytokine production. Anergy is reversible and T cells can recover their functional responsiveness in the absence of the antigen. [ 4 ]
Before release into the periphery T cells are subjected to thymic deletion if they prove to have the capacity to react with self. Peripheral deletion is the disposal of potential self reactive T cells that escaped thymic deletion. [ 28 ]
After a T cell response to co-stimulation-deficient antigen, a minor population of T cells develops anergy and a large proportion of T cells is rapidly lost by apoptosis. This cell death can be mediated by the intrinsic pro-apoptotic family member BIM . The balance between proapoptotic BIM and the antiapoptotic mediator BCL-2 determines the eventual fate of the tolerized T cell. [ 4 ] There are also extrinsic mechanisms of deletion mediated by the cytotoxic activity of Fas /FasL or TRAIL/ TRAILR interactions. [ 14 ] Cell death can thus be mediated by intrinsic or extrinsic mechanisms, as mentioned. In most instances there is an up-regulation of death markers or the presence of Bcl-2 family proteins, which are essential in facilitating programmed cell death. [ 28 ]
Immunoprivileged organs have evolved mechanisms by which specialized tissue cells and immune cells can mount an appropriate response without disturbing the specialized tissue. [ 29 ] Immunopathogenic disturbances are absent in a variety of specialized organs such as the eyes, the reproductive organs and the central nervous system. These areas are protected by several mechanisms:
- expression of Fas ligand, which binds Fas on lymphocytes and induces their apoptosis
- anti-inflammatory cytokines (including TGF-beta and interleukin 10 )
- a blood-tissue barrier with tight junctions between endothelial cells
Split tolerance describes how some antigens can trigger an immune response in one aspect of the immune system and the same antigen could not trigger a response in another set of immune cells. Since many pathways of immunity are interdependent, they do not all need to be tolerized. For example, tolerized T cells will not activate auto-reactive B cells. Without this help from CD4 T cells , the B cells will not be activated. [ 1 ] | https://en.wikipedia.org/wiki/Peripheral_tolerance |
Peripheral Ulcerative Keratitis (PUK) is a group of destructive inflammatory diseases involving the peripheral cornea in human eyes. [ 1 ] The symptoms of PUK include pain , redness of the eyeball, photophobia , and decreased vision accompanied by distinctive signs of crescent-shaped damage of the cornea. [ 2 ] [ 3 ] The causes of this disease are broad, ranging from injuries and contamination of contact lenses to association with other systemic conditions . [ 4 ] PUK is associated with different ocular and systemic diseases . [ 5 ] Mooren's ulcer is a common form of PUK. [ 5 ] The majority of PUK is mediated by local or systemic immunological processes, which can lead to inflammation and eventually tissue damage. Standard PUK diagnosis involves reviewing the medical history and completing a physical examination . [ 6 ] Two major treatments are the use of medications such as corticosteroids or other immunosuppressive agents and surgical resection of the conjunctiva . [ 7 ] The prognosis of PUK is unclear, with one study describing potential complications. [ 8 ] PUK is a rare condition with an estimated incidence of 3 per million annually. [ 9 ] [ 10 ]
The most easily identifiable sign is a visible lesion of the cornea, usually crescent-shaped. [ 2 ] [ 3 ] [ 11 ] Destruction commonly occurs through stromal degradation and epithelial defects caused by inflammatory cells. [ 2 ] Depending on the severity of corneal thinning, the conformation of the peripheral cornea may change. [ 11 ] This process carries a risk of concealed perforation. [ 12 ] The formation of an oval-shaped ulcer at the margin of the cornea is also a sign. [ 2 ]
Symptoms of PUK include pain, redness, tearing, increased sensitivity to bright light, impaired or blurred vision , and the feeling of foreign objects trapped in the eyes. [ 5 ] [ 13 ]
There are several associations of PUK with ocular and systemic diseases . [ 5 ] [ 7 ] [ 14 ] Rheumatoid arthritis (RA), [ 9 ] Wegener's granulomatosis (WG), and polyarteritis nodosa (PAN) are the most common systemic conditions. [ 5 ]
There are three major causes of PUK. One possible cause is injury due to scratches by sharp or hard objects on the surface of the cornea. [ 7 ] The scratched area forms an opening in the cornea, allowing microorganisms to access the cornea and cause infection. [ 7 ] Contamination of contact lenses is another cause, as fungi , bacteria and parasites (in particular the microscopic parasite acanthamoeba ) can inhabit the surface of the contact lens carrying case. [ 4 ] When the contact lens is placed in the eye, invisible microorganisms may contaminate the cornea, resulting in PUK. [ 4 ] An extended period of wearing contact lenses can also damage the cornea surface, allowing the entry of microorganisms into the cornea. [ 4 ] Besides contamination of contact lenses, contamination occurring in water can also cause PUK: in places like the ocean, rivers, lakes and hot tubs, massive amounts of bacteria , fungi and parasites exist. [ 16 ] When there is an injury on the cornea surface, contact with contaminated water can transfer unwanted microorganisms into the cornea, resulting in PUK. [ 17 ] Viruses and bacteria are also sources of corneal infection; the herpes virus and the bacteria that cause gonorrhea are examples. [ 17 ]
The cornea consists of five to six layers of cells with a total central thickness of around 0.52 mm, [ 1 ] thickening to 0.65 mm towards the periphery. [ 1 ] The stroma , which accounts for 90% of the corneal thickness, is the middle layer between the epithelium and endothelium . [ 1 ] The limbus, at the edge of the peripheral cornea, acts as a transitional zone between the sclera and cornea . [ 1 ] Limbal vasculature , deriving from capillaries that surround the peripheral cornea, supplies the stroma. [ 1 ] Various molecules normally diffuse from these capillaries at the periphery to the central cornea. [ 1 ] Because this diffusion is limited, the peripheral cornea has a higher concentration of IgM , factor C1 of the complement cascade , and Langerhans cells . [ 1 ] [ 7 ]
Any inflammatory stimulus in the peripheral cornea results in recruitment of neutrophils and activation of both classical and alternative pathways of the immune response , namely the humoral and cell-mediated autoimmune responses . [ 16 ] These responses lead to the formation of antigen-specific antibodies to combat foreign antigens . [ 16 ] However, the antigen-antibody complexes formed may deposit in the vascular endothelium and activate complement, leading to severe local inflammation. [ 16 ] Under these circumstances, inflammatory cells such as macrophages and neutrophils enter the peripheral cornea. [ 16 ] These inflammatory cells release proteases and collagenases , potentially disrupting the corneal stroma . [ 16 ] The additional release of cytokines , for example interleukin-1 , from these cells further accelerates stromal destruction. [ 7 ]
Mooren's ulcer is a common form of PUK. [ 5 ] [ 11 ] One classification of Mooren's ulcer, based on clinical presentation, distinguishes bilateral indolent Mooren's ulcer, bilateral aggressive Mooren's ulcer and unilateral Mooren's ulcer. [ 5 ] Unilateral Mooren's ulcer, affecting one eye, mainly occurs in the elderly above 60 years old. Rapid onset with redness and severe pain of the affected eye, and either slow or extremely quick progression, are typical characteristics of unilateral Mooren's ulcer. [ 5 ] Bilateral aggressive Mooren's ulcer is prevalent in Indians between the ages of 14 and 40. [ 5 ] The common presentation is the appearance of lesions in one eye, followed by the development of lesions in the other eye. [ 5 ] Finally, bilateral indolent Mooren's ulcer is common in patients at least 50 years old. [ 5 ] It usually progresses slowly and causes little or no pain. [ 5 ]
Other classification methods also exist. The first classifies Mooren's ulcers into two categories based on clinical presentation and prognosis. [ 14 ] Type I is usually unilateral, with symptoms ranging from mild to moderate, and responds more effectively to treatment. [ 14 ] In contrast, type II appears in a bilateral manner, with severe symptoms and a poor treatment outcome. [ 14 ] The second classification is based on severity: [ 14 ] grade I refers to corneal thinning, grade II to impending corneal perforation , and grade III to corneal perforation with a diameter greater than 2 mm. [ 14 ]
There are many investigative modalities available for diagnosing PUK, including history review and physical examination. [ 6 ] [ 7 ] A thorough history of ocular infections, contact lens usage, other medication, or surgery is necessary to identify possible presence of associated diseases. [ 7 ] An ophthalmic examination helps identify whether it is due to local pathogenesis. [ 7 ] Physical examinations allow more understanding of the underlying systemic process. [ 7 ]
A standard testing procedure includes hematological investigations, immunological testing, followed by chest X-ray . Hematological investigations are blood tests estimating hemoglobin, platelet counts, total white blood cell counts, erythrocyte sedimentation rate and viscosity . [ 6 ] Other common body checks include urinalysis and liver and renal function tests. [ 6 ] The selection of immunological testing for various markers is based on numerous additional medical examinations and clinical history of the patient. [ 6 ] Possible markers are antinuclear antibodies , anti-rheumatoid antibodies, and antibodies to cyclic citrullinated peptides . [ 6 ] Finally, a chest X-ray helps distinguish whether there are complications, such as pulmonary diseases, due to systemic conditions associated with PUK. [ 6 ]
One of the common causes of PUK is ocular infections by microorganisms such as bacteria, viruses, and fungi. [ 6 ] To detect the causative microorganism, doctors usually collect samples before the commencement of therapy and send them to laboratories. [ 6 ] Laboratory personnel then perform smear examination, inoculate the samples on culture media, and perform serological testing. [ 6 ] Serological testing is an antibody test providing information on PUK etiology . [ 6 ] The diagnosis of PUK due to systemic conditions requires a combination of serological and hematological testing, together with imaging techniques such as radiography and CT scanning. [ 6 ]
Various PUK therapies serve different objectives, for example inflammation control, halting of disease progression, stromal repair, avoidance of secondary complications, and vision restoration. [ 18 ] A thorough understanding of PUK and the different therapies is important. [ 7 ] Medical and surgical treatments are the two major approaches to managing PUK. [ 7 ]
As for medical therapy, there are several types of drugs available for PUK. Topical corticosteroids usually serve as therapy for milder unilateral cases of RA-associated PUK. [ 7 ] [ 9 ] [ 15 ] Systemic corticosteroids in the form of an oral dose are the acute management for more severe cases. [ 7 ] However, prolonged usage of oral corticosteroids has side effects. Immunosuppressive agents , such as azathioprine , cyclophosphamide , and methotrexate , have demonstrated efficacy in treating inflammatory eye diseases, including PUK. [ 7 ] [ 14 ] [ 9 ] Combined therapy of systemic corticosteroids up to 100 mg/day and immunosuppressive agents is used for severe cases of PUK. [ 7 ] Biological agents, such as anti- tumor necrosis factor (anti- TNF ) agents, are a well-established treatment for systemic inflammatory diseases. [ 7 ] Infliximab and adalimumab are TNF blockers used for treating RA-associated PUK. [ 9 ] However, high cost and uncertainty about long-term side effects are possible drawbacks. [ 7 ]
In terms of surgical treatment, conjunctival resection is a common procedure, which can temporarily remove local inflammatory mediators and collagenases and therefore slow down the disease progression. [ 7 ] Other surgical management includes corneal gluing, or keratoplasty procedures. [ 11 ] [ 14 ] [ 9 ] Corneal transplantation is a management option when there is severe corneal melting or perforation although one possible disadvantage is the risk of rejection. [ 14 ]
Surgical treatment helps maintain the integrity of the globe, but it is usually complementary because it alone cannot influence the underlying immunological process. [ 7 ] Therefore, medical and surgical treatments are commonly used in conjunction. [ 7 ]
The choice of treatment may be different depending on the nature of PUK, infectious or noninfectious. Selection of the right targeted antimicrobial therapy for infectious PUK is based on clinical judgement and culture results. [ 18 ] For example, the appropriate treatment for bacterial infections is antibiotics, such as fluoroquinolones . [ 18 ] As for Mooren's ulcers, 56% of unilateral PUK and 50% of bilateral PUK in one eye showed recovery with intensive topical steroids. [ 18 ] Only 18% of patients with bilateral ulcers occurring simultaneously in both eyes show improvements with topical steroids alone; therefore a combination of immunosuppressive agents and systemic steroids should be given in early courses of management. [ 18 ] Corticosteroids are the first line of therapy, but side effects may arise from long-term usage. In addition, conjunctival resection can be performed to temporarily remove local inflammatory mediators, followed by the use of immunosuppressants . [ 18 ]
Currently, there are limited studies regarding the prognosis of PUK. However, one study has pointed out that possible complications of PUK include moderate to severe vision loss , corneal perforation and an increased risk of recurrence. [ 8 ]
PUK is a rare condition with an estimated incidence of 3 per million annually. [ 9 ] [ 10 ] Studies have reported that the largest share of patients with PUK are older than 60 years of age (32%). [ 10 ] Men have a higher occurrence rate (60%). Most patients live in rural areas (66%) and are in the lower socioeconomic groups . [ 10 ] The age of those with PUK ranges from 5 to 89 years, with a mean age of 45.5 years. [ 10 ]
In an investigation of 34 patients, the mortality rate after PUK diagnosis was 53% in those without immunosuppressive medication and 5% in those with it. [ 9 ] Another single-centre study involving 46 patients with RA reported a mortality rate of 15%. [ 9 ] Reports have also shown a possibility of PUK occurrence after any ocular surgery. [ 10 ] In a retrospective study of 771 eyes, 1.4% of participants developed late-onset PUK at an average of 3–6 months after surgery. [ 10 ] | https://en.wikipedia.org/wiki/Peripheral_ulcerative_keratitis
The peripheral vision horizon display, also called PVHD or the Malcolm Horizon (after inventor Dr. Richard Malcolm), is an aircraft cockpit instrument which assists pilots in maintaining proper attitude .
The PVHD was developed in the mid-1970s and manufactured in the early 1980s as a cockpit instrument to assist the pilot with being better aware of the aircraft attitude at all times. The development of the PVHD was driven by a high incidence of military aircraft accidents due to "attitude awareness issues." The PVHD was noted to have a subliminal effect on the pilot because in actual use the display was set so dim that it could barely be seen.
The PVHD was well received by pilots that tested it in helicopters as well as fixed-wing aircraft. It was flown in F-4s and A-10s, as well as helicopters. Initial production in 1983, however, was for the SR-71 Blackbird as an aid when refueling in the air. [ 1 ]
The initial concept demonstration was done in Canadian military laboratories and later development was undertaken by Varian Canada in Georgetown, Ontario . In 1981, Varian sold the project to Garrett Manufacturing in Rexdale , Toronto , Ontario .
In the simplest variant, the PVHD projects a dim line of light across the full width of the cockpit instrument panel. This line is projected over the top of all instruments. As the aircraft pitches and rolls, the line appears to stay parallel to the horizon outside of the aircraft. There is a small blip in the center of the line to indicate which way is up.
In actual use, the pilot initially sets the brightness of the line so that it just disappears when looking at it with their central vision. When the line does move due to an aircraft attitude change, the peripheral vision, being more sensitive to movement, picks up the movement and the brain subconsciously registers the information, and makes use of it.
In all variants, the aircraft gyro system provides pitch and roll information for the processor, which drives the projection system to keep the line parallel to the earth horizon. The subliminal effect on the pilot's peripheral vision aids them in retaining attitude awareness and quickly correcting the onset of the aircraft deviating from the desired attitude.
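The processing described above amounts to projecting a line that counter-rotates with roll and shifts with pitch. A minimal geometric sketch of that computation follows; the function name, panel coordinate system, and the `pitch_gain` display scaling are illustrative assumptions, not details of the actual PVHD processor.

```python
import math

def horizon_line(roll_deg, pitch_deg, panel_width, pitch_gain=5.0):
    """Endpoints of a projected horizon line, centred on the panel.

    The line is rotated opposite to aircraft roll so it stays parallel
    to the real horizon, and shifted vertically with pitch.
    `pitch_gain` (vertical shift per degree of pitch) is an assumed
    display scaling, not a documented PVHD parameter.
    """
    roll = math.radians(roll_deg)
    y_offset = -pitch_deg * pitch_gain  # nose-up pitch moves the line down
    half = panel_width / 2.0
    dx = half * math.cos(-roll)  # counter-rotate by the roll angle
    dy = half * math.sin(-roll)
    return (-dx, y_offset - dy), (dx, y_offset + dy)

# Wings level, nose level: a horizontal line across the full panel width.
left, right = horizon_line(roll_deg=0, pitch_deg=0, panel_width=100)
```

In a real system this computation would run on each gyro sample (30+ times per second in the laser-scanned variant) to drive the galvanometers.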
The PVHD helps when the real world horizon is blocked by weather or darkness, and the cockpit workload is so high that full attention cannot be given to the standard attitude instrument. The situation can be made worse by inertial effects of the aircraft fooling the pilot's organs of balance . These inertial effects can cause somato-gravic or somato-gyral illusions. In short, the pilot gets the wrong understanding of the aircraft attitude, often with a fatal outcome .
Several variants were built. The concept demonstration was done with conventional optics that projected a white line from a xenon arc lamp. The projector was driven by an analog computer and the lamp (line) was moved by servo motors .
A later production version used a microprocessor to sample and process the pitch/roll gyro information and a HeNe laser as a light source for the projector. The projector consisted of X and Y axis galvanometers to scan the line across the cockpit at more than 30 times per second in the form of a vector scanned display. This type of projection technology is now commonly used in laser light shows.
The Lockheed SR-71 "Blackbird" reconnaissance aircraft was fitted with a PVHD system. The system also included a heading indication, using varying light intensities along different segments of the horizon line. [ 2 ]
During the development of the single-seat night-attack version of the A-10 Warthog aircraft a PVHD system similar to that of the Lockheed SR-71 was incorporated. [ 3 ] | https://en.wikipedia.org/wiki/Peripheral_vision_horizon_display |
Peripherally acting μ-opioid receptor antagonists ( PAMORAs ) are a class of chemical compounds that are used to reverse adverse effects caused by opioids interacting with receptors outside the central nervous system (CNS), mainly those located in the gastrointestinal tract . PAMORAs are designed to specifically inhibit opioid receptors in the gastrointestinal tract while having limited ability to cross the blood–brain barrier . Therefore, PAMORAs do not affect the analgesic effects of opioids within the central nervous system. [ 1 ]
Opioid drugs are known to cause opioid-induced constipation (OIC) by inhibiting gastric emptying and decreasing peristaltic waves, leading to delayed absorption of medications and increased water absorption from the feces . This can result in hard, dry stool and constipation in some patients. [ 2 ]
OIC is one of the most common adverse effects caused by opioids, so the discovery of PAMORAs can prevent the effects that often compromise pain management . [ 3 ]
Methylnaltrexone bromide was the first medication in the drug class approved by the FDA . [ 4 ] It was discovered in 1979 by Leon Goldberg, a pharmacologist at the University of Chicago . Having witnessed the suffering of a dying friend with OIC, Goldberg tested various derivatives of naltrexone , a drug known to block the effects of opioids. His objective was to find a drug that could not pass the blood brain barrier , without affecting the analgesic effects of the opioids. After Goldberg died, his colleagues at the university continued to develop the compound. It was approved by the FDA in April 2008, originally for OIC in adult patients with advanced illness and later in adult patients with chronic noncancer pain. [ 5 ]
In the late 1970s, Dennis M. Zimmerman and his co-workers from Lilly Research Laboratories , Indiana, did research on structural concepts for narcotic antagonists defined in a 4-phenylpiperidine series. [ 6 ] They reported N -methyl- trans -3,4-dimethyl-4-phenylpiperidine to be a pure opioid receptor antagonist with a new pharmacophore . To increase the potency they attached a phenolic group to the aromatic ring , giving N -methyl- trans -3,4-dimethyl-4-(3-hydroxyphenyl)piperidine. That structure was used to design and develop other opioid receptor antagonists such as alvimopan . [ 5 ] Alvimopan was approved later in 2008 for in-hospital use to increase gastrointestinal function following a partial large or small bowel resection with primary anastomosis . Naloxegol was approved in September 2014 and naldemedine in March 2017, both for the treatment of OIC in adult patients with chronic non-cancer pain. [ 7 ] [ 8 ] [ 9 ] [ 10 ]
PAMORAs act by inhibiting the binding of opioid agonists to the μ-opioid receptor (MOR). The objective of PAMORA treatment is to restore enteric nervous system (ENS) function. The MOR is found in several places in the body, and PAMORAs are competitive antagonists for binding to the receptor. The MORs in the gastrointestinal tract are the main receptors that PAMORAs are intended to block, preventing the binding of opioid agonists. [ 11 ] PAMORAs are used in the treatment of opioid-induced bowel dysfunction (OIBD), a potential adverse effect of chronic opioid use. PAMORAs act on the three pathophysiological mechanisms of this adverse effect: gut motility , gut secretion and sphincter function. [ 12 ]
PAMORAs' effect on gut motility is to restore tonic inhibition of muscle tone , normalizing the resting tone of the circular muscle layer and preventing opioid-induced rhythmic contractions. Combined, these factors decrease transit time . These effects in turn reduce the passive absorption of fluids, which helps decrease OIBD symptoms such as constipation, gut spasm and abdominal cramp . [ 13 ]
PAMORAs' effect on gut secretion is to reverse the decreased cAMP formation that opioid agonists induce [ 14 ] and to re-establish normal secretion of chloride . Opioid agonists can also reduce the secretion of peptides by increasing sympathetic nervous system activity through the μ-receptors in the ENS, which can lead to drier and harder stool; PAMORAs counteract this so the stool becomes softer and less dry. [ 13 ]
PAMORAs' effect on sphincter function is, in theory, to restore coordination of movement. The antagonists can prevent the sphincter of Oddi dysfunction caused by opioids [ 15 ] and can also reduce opioid-induced anal sphincter dysfunction, which is tied to straining , hemorrhoids and incomplete emptying. [ 16 ]
Even though μ-opioid receptor (MOR)-targeting drugs have been used for a long time, not much is known about the structure-activity relationship and the ligand -receptor interactions underlying receptor activation or inhibition. The distinction between the receptor-ligand interaction patterns of agonists and antagonists is also not fully understood. One theory states that the biological activity of morphinans could be determined by the size of the N-substituents. For example, antagonists usually have larger substituents, such as allyl or cyclopropylmethyl groups, at the morphinan nitrogen, while agonists generally contain a methyl group . On the other hand, agonist activity is also seen in ligands with larger groups at the morphinan nitrogen, which challenges this hypothesis. [ 17 ]
Methylnaltrexone bromide, naloxegol, and naldemedine all have similar structures, close to the chemical structure of morphine and other MOR agonists. All contain a rigid pentacyclic structure comprising a benzene ring (A), a tetrahydrofuran ring (B), two cyclohexane rings (C and D) and a piperidine ring (E). [ 18 ] The most important functional groups for the biological action of opioids are the hydroxyl group on the phenol , the N-methyl group, the ether bridge between C4 and C5, the double bond between C7 and C8, and the hydroxyl groups at C3 and C6. The phenolic ring and its 3-hydroxyl group are vital for the analgesic effects, as removal of the OH group decreases the analgesic activity 10-fold. The opposite principle applies to the hydroxyl group on C6, whose removal enhances activity, mainly because of increased lipophilicity and increased ability to cross the blood–brain barrier. Naldemedine retains this hydroxyl group, while methylnaltrexone bromide has a ketone group and naloxegol an ester . The double bond between C7 and C8 is not required for the analgesic effect, and reduction of the double bond increases activity; none of the antagonists has a double bond in its structure. The N-substituent on the skeleton is thought to determine the pharmacological behavior and the interaction with the MOR, and to play a key role in distinguishing antagonists from agonists. An allyl group, a methylcyclopropyl group or a methylcyclobutyl group as the N-substituent is thought to lead to antagonist activity. [ 19 ] [ 20 ] [ 21 ]
Agonists and antagonists form certain chemical bonds with the amino acids that make up the MOR. The majority of antagonists, as well as agonists, are predicted to form a charged interaction with Asp147 and a hydrogen bond with Tyr148. However, the majority of antagonists also form additional polar interactions with other amino acid residues such as Lys233, Gln124, Gln229, Asn150, Trp318 and Tyr128, whereas only a small minority of agonists form the same additional polar interactions. Both agonists and antagonists are known to form hydrogen bonds with His297. [ 22 ]
It can be concluded that interactions with the amino acid residues, Asp147 and Tyr148 are essential for the ligand to bind to the receptor and the molecules that form additional polar interactions with other residues are more often antagonists than agonists. [ 17 ]
The N-substituent group can form hydrophobic bonds with Tyr326 and Trp293 and the aromatic and cyclohexane rings can form similar bonds to Met151. The backside of the ligand can also form a hydrophobic bond, but with Val300 and Ile296. [ 22 ]
Methylnaltrexone bromide is the bromide salt form of methylnaltrexone, a quaternary methyl derivative of noroxymorphone . The methyl group and the quaternary salt formation increase the polarity and reduce the lipid solubility, thereby restricting blood–brain barrier penetration. Methylnaltrexone has an eight times higher affinity for the MOR than for the κ-opioid receptor (KOR) and δ-opioid receptor (DOR). [ 23 ] Naltrexone forms interactions with Asp147 and Tyr148, along with a hydrogen bond with Lys233. [ 24 ]
Peripherally selective trans-3,4-dimethyl-4-(3-hydroxyphenyl)piperidine opioid antagonists were developed for the treatment of gastrointestinal motility disorders by Zimmerman and his coworkers. From that, they derived the 4-(3-hydroxyphenyl)-3,4-dimethylpiperidine scaffold with functional groups spanning various sizes, charges, and polarities to achieve peripheral opioid receptor antagonism while decreasing CNS drug exposure. The in vitro μ-Ki, in vivo AD50 and ED50 , and peripheral index (ratio) were examined for several selective analogs, from which they found that trans-3,4-dimethyl-4-(3-hydroxyphenyl)piperidine, alvimopan, gave the best results. [ 5 ] The large zwitterionic structure and high polarity prevent alvimopan from crossing the blood–brain barrier; its potency at binding peripheral MORs is thereby 200 times that at central MORs. [ 25 ]
Naloxegol is a polyethylene glycol -modified derivative of α- naloxol . Naloxegol has a similar form as naloxone as a heteropentacyclic compound both of which have an allyl group attached to the amine of the piperidine ring. However, naloxegol has a monomethoxy-terminated n=7 oligomer of PEG connected to the 6-alpha-hydroxyl group of ɑ-naloxol via an ether linkage. The PEG moiety increases the molecular weight and therefore restricts the uptake of naloxegol into the CNS . [ 26 ] Furthermore, pegylated naloxegol becomes a substrate for the P-glycoprotein efflux transporter that transports the compound out of the CNS. [ 27 ]
Naldemedine has a similar chemical structure to naltrexone but with an additional side chain that increases the molecular weight and polar surface area of the substance. Like naloxegol, naldemedine is a substrate of the P-glycoprotein efflux transporter. These properties result in less penetration into the CNS and decrease possible interference with the effects of opioid agonists. [ 28 ] Naldemedine is a dual antagonist of the MOR and DOR. Activation of the DOR is known to cause nausea and/or vomiting, so a dual antagonist can decrease both OIC and nausea/vomiting. [ 29 ]
The molecular weight , bioavailability , protein binding , elimination half-life , the time to achieve maximum plasma concentration and binding affinity are present in the table below. [ 26 ] [ 23 ]
Methylnaltrexone bromide has poor oral bioavailability, and for that reason it is administered subcutaneously every other day. About half of the dose is excreted in the urine and somewhat less in feces, with 85% eliminated unchanged. [ 24 ]
Alvimopan has considerably low bioavailability (6%) due to its high binding affinity and low dissociation rate . Elimination of alvimopan is mainly mediated by biliary secretion, with an average plasma clearance of 400 ml/min. Alvimopan is metabolized by intestinal flora, resulting in hydrolysis to the active amide metabolite (ADL 08-0011). However, the metabolite is considered clinically irrelevant due to its low binding affinity. [ 25 ]
When naloxegol is given with a fatty meal, absorption increases. Clearance is mostly via hepatic metabolism ( P450-CYP3A ), with the actions of the metabolites unknown. A small fraction of naloxegol is eliminated by renal excretion . [ 30 ]
Naldemedine is metabolized mainly via CYP3A to nor-naldemedine, and to a lesser extent via UDP-glucuronosyltransferase 1A3 to naldemedine 3-G. Both metabolites are opioid receptor antagonists but are less potent than the parent compound . [ 33 ]
Axelopran is an oral PAMORA under development by Theravance Biopharma. It has completed phase II clinical trials in more than 400 patients with OIC. Axelopran has a different chemical structure from other PAMORAs but a similar mechanism of action . It acts as an antagonist of the MOR, KOR and DOR, but with higher affinity for the MOR and KOR than for the DOR. Like other PAMORAs, its main goal is the treatment of OIC. [ 34 ] Axelopran is also being investigated in a fixed-dose combination (FDC) with oxycodone , created by using spray-coating technology to combine axelopran with controlled-release oxycodone. [ 35 ]
There is a demand for optimization of the receptor selectivity and affinity accompanied by an exploration of candidate compounds regarding their route of administration . These are the main objectives and future strategies for drug discovery and the development of PAMORAs.
Predominantly, the MORs exhibit functionally selective agonism. Therefore, future possible candidate compounds that target OIC are PAMORAs with optimized selectivity and affinity. [ 27 ] | https://en.wikipedia.org/wiki/Peripherally_acting_μ-opioid_receptor_antagonist |
Periphyton is a complex mixture of algae , cyanobacteria , heterotrophic microbes , and detritus that is attached to submerged surfaces in most aquatic ecosystems . The related term Aufwuchs ( German "surface growth" or "overgrowth", pronounced [ˈaʊ̯fˌvuːks] ⓘ ) refers to the collection of small animals and plants that adhere to open surfaces in aquatic environments, such as parts of rooted plants.
Periphyton serves as an important food source for invertebrates , tadpoles , and some fish . It can also absorb contaminants , removing them from the water column and limiting their movement through the environment. The periphyton is also an important indicator of water quality ; responses of this community to pollutants can be measured at a variety of scales representing physiological to community-level changes. Periphyton has often been used as an experimental system in, e.g., pollution-induced community tolerance studies.
In both marine and freshwater environments, algae – particularly green algae and diatoms – make up the dominant component of surface growth communities. Small crustaceans , rotifers , and protozoans are also commonly found in fresh water and the sea, but insect larvae , oligochaetes and tardigrades are peculiar to freshwater aufwuchs faunas. [ citation needed ]
Periphyton can contain species of cyanobacteria that are toxic to humans and other animals. [ 1 ] In fresh water, excessive growth and subsequent death and decay of periphyton can have undesirable effects: depleting oxygen in the water, altering its pH , and clogging the space between gravel and sand (the hyporheic zone ). These effects, known as eutrophication , can impair or kill fishes and other animals, reduce the quality of drinking water, and make waterways unappealing for recreation. Remediating the damage to biodiversity and ecosystems caused by excessive periphyton growth costs billions of dollars annually. [ 2 ]
Conversely, periphyton can be damaged by urbanization: the increased turbidity levels associated with urban sprawl can smother periphyton, causing it to detach from the rocks on which it lives.
Periphyton communities are used in aquaculture food production systems for the removal of solid and dissolved pollutants. Their performance in filtration is established and their application as aquacultural feed is being researched. It can be important for the clearance of harmful chemicals and reducing turbidity. [ citation needed ]
Periphyton serves as an indicator of water quality [ 3 ] because:
Many aquatic animals feed extensively on periphyton. The mbuna cichlids from Lake Malawi are particularly well known examples of fish adapted for feeding on periphyton. Examples include Labeotropheus trewavasae and Pseudotropheus zebra . They have scraper-like teeth that allow them to rasp the periphyton from rocks. [ 4 ] In marine communities, periphyton food sources are important for animals such as limpets and sea urchins . [ citation needed ] Among amphibians, spring peepers , small chorus frogs that occupy many ponds throughout Canada and the eastern United States, also feed on periphyton. [ 5 ] [ 6 ] Spring peepers filter periphyton from surfaces in their habitat. [ 5 ] | https://en.wikipedia.org/wiki/Periphyton
Peripolar cells are specialized epithelial cells located within Bowman's capsule at its vascular pole . These cells were first identified at the vascular pole of the sheep glomerulus . The cells contain numerous cytoplasmic granules that are secretory, and the cells show features of secretory epithelial cells, although no exocytosis has been observed. By secreting specific molecules, they may influence the composition of the filtrate and the reabsorption processes in the renal tubules . There is also ongoing research into whether they form part of the juxtaglomerular apparatus (JGA). The number, size, and appearance of peripolar cells can vary across different mammalian species. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Peripolar_cell
Peripolesis is the process in which a cell attaches itself to another cell. This is differentiated from emperipolesis , which is when one cell is engulfed by another.
Peripolesis is thought to be a physiological mechanism involved in regulating some processes of immune response . It was observed between lymphocytes and macrophages following skin grafts between subjects, and after immune challenge with antigens . [ 1 ] Peripolesis was also observed in lung alveoli , where the peripolesed macrophages were not injured, but the cell membrane did appear to be temporarily altered. [ 2 ] In patients with active sarcoidosis , which is characterized by lymphocyte-macrophage cooperation, lymphocyte peripolesis appeared to occur in clusters and could last for minutes to hours. The lymphocytes could be seen moving around a macrophage while maintaining contact. [ 3 ] | https://en.wikipedia.org/wiki/Peripolesis |
Peritoneal washing is a procedure used to look for malignant cells , i.e. cancer , in the peritoneum .
Peritoneal washes are routinely done to stage abdominal and pelvic tumours, [ 1 ] e.g. ovarian cancer . | https://en.wikipedia.org/wiki/Peritoneal_washing |
The peritrich nuclear code (translation table 30) is a genetic code used by the nuclear genome of the peritrich ciliates Vorticella and Opisthonecta . [ 1 ]
Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U).
Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), and Valine (Val, V).
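The practical effect of this code can be illustrated with a short sketch. The codon table below is an assumption based on the NCBI genetic-code tables, under which translation table 30 differs from the standard code only in reassigning the stop codons TAA and TAG to glutamic acid (E); only the codons needed for the demonstration are listed.

```python
# Minimal sketch of translation under NCBI table 30 (Peritrich Nuclear).
# Assumption: per the NCBI genetic-code tables, table 30 differs from the
# standard code only in that TAA and TAG encode glutamic acid (E) instead
# of acting as stop codons. Only the codons used in the demo are listed.

STANDARD = {
    "ATG": "M", "TGG": "W", "TAA": "*", "TAG": "*", "TGA": "*",
    "GAA": "E", "TTC": "F",
}

# Table 30 reassigns the two stop codons TAA and TAG to glutamic acid.
PERITRICH = {**STANDARD, "TAA": "E", "TAG": "E"}

def translate(dna, table):
    """Translate a DNA string codon by codon, halting at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = table[dna[i:i + 3]]
        if aa == "*":  # genuine stop under this table
            break
        protein.append(aa)
    return "".join(protein)

seq = "ATGTAATTCTGA"
print(translate(seq, STANDARD))   # 'M'   - TAA terminates translation
print(translate(seq, PERITRICH))  # 'MEF' - TAA read through as Glu; TGA still stops
```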
This article incorporates text from the United States National Library of Medicine , which is in the public domain . [ 2 ] | https://en.wikipedia.org/wiki/Peritrich_nuclear_code |
The perivitelline space is the space between the zona pellucida and the cell membrane of an oocyte or fertilized ovum . [ 1 ] In the slow block to polyspermy , the cortical granules released from the ovum are deposited in the perivitelline space. Polysaccharides released in the granules cause the space to swell, pushing the zona pellucida further from the oocyte. [ 1 ] The hydrolytic enzymes released by the granules cause the zona reaction , which removes the ZP3 ligands from the zona pellucida. [ 1 ]
Clinically, the perivitelline space is relevant because it is where the polar body lodges after meiosis. | https://en.wikipedia.org/wiki/Perivitelline_space |
The Perkin Medal is an award given annually by the Society of Chemical Industry (American Section) to a scientist residing in America for an "innovation in applied chemistry resulting in outstanding commercial development." It is considered the highest honor given in the US chemical industry.
The Perkin Medal was first awarded in 1906 to commemorate the 50th anniversary of the discovery of mauveine , the world's first synthetic aniline dye , by Sir William Henry Perkin , an English chemist. The award was given to Sir William on the occasion of his visit to the United States in the year before he died. It was next given in 1908 and has been given every year since then. | https://en.wikipedia.org/wiki/Perkin_Medal |
The Perkin reaction is an organic reaction developed by English chemist William Henry Perkin in 1868 that is used to make cinnamic acids . It gives an α,β-unsaturated aromatic acid or α-substituted β-aryl acrylic acid by the aldol condensation of an aromatic aldehyde and an acid anhydride , in the presence of an alkali salt of the acid. [ 1 ] [ 2 ] The alkali salt acts as a base catalyst , and other bases can be used instead. [ 3 ]
Several reviews have been written. [ 4 ] [ 5 ] [ 6 ]
As is clear from the reaction mechanism, the aliphatic acid anhydride must contain at least two α-hydrogens for the reaction to occur. The above mechanism is not universally accepted, as several other versions exist, including decarboxylation without acetic group transfer. [ 7 ] | https://en.wikipedia.org/wiki/Perkin_reaction
The Perkin rearrangement ( coumarin–benzofuran ring contraction ) is a rearrangement reaction in which a 2-halo coumarin in the presence of hydroxide undergoes a ring contraction to form a benzofuran . The name reaction recognizes William Henry Perkin , who first reported it in 1870. Several proposals have been made for the reaction mechanism , all of which involve initial opening of the lactone to give a carboxylate and phenolate . [ 1 ]
| https://en.wikipedia.org/wiki/Perkin_rearrangement
A Perkin triangle is a specialized apparatus for the distillation of air-sensitive materials.
Some compounds have high boiling points and are sensitive to air. A simple vacuum distillation system can be used, whereby the vacuum is replaced with an inert gas after the distillation is complete. However, this is a less satisfactory system if one desires to collect fractions under a reduced pressure. To do this, a "pig" adapter can be added to the end of the condenser, or for better results or for very air-sensitive compounds, a Perkin triangle apparatus can be used.
The Perkin triangle uses a series of glass or Teflon taps to allow fractions to be isolated from the rest of the still, without the main body of the distillation being removed from either the vacuum or heat source, so that the reflux may continue. To do this, the sample is first isolated from the vacuum through the taps. The vacuum over the sample is then replaced with an inert gas such as nitrogen or argon. The collection vessel or still receiver can then be removed and stoppered. Finally, a fresh collection vessel can be added to the system, evacuated, and linked back to the distillation system through the taps to collect the next fraction. The process is repeated until all fractions have been collected.
A Perkin triangle is also a convenient device for drying solvents. The solvent can be allowed to reflux over a drying agent housed in the still pot (shown as 2 in the figure) for a suitable time to dry solvent. The collecting tap (shown as 5 in the figure) can then be opened to collect the solvent in a Schlenk flask for storage. Depending on the boiling point of the solvent, a vacuum can be applied. | https://en.wikipedia.org/wiki/Perkin_triangle |
The Perkow reaction is an organic reaction in which a trialkyl phosphite ester reacts with a haloketone to form a dialkyl vinyl phosphate and an alkyl halide . [1]
In the related Michaelis–Arbuzov reaction the same reactants are known to form a beta-keto phosphonate , which is an important reagent in the Horner–Wadsworth–Emmons reaction on the road to alkenes . The Perkow reaction, in this respect, is considered a side-reaction.
The reaction mechanism of the Perkow reaction consists of a nucleophilic addition of the phosphite at the carbonyl carbon forming a zwitterionic intermediate. The zwitterionic intermediate rearranges to a cationic species while eliminating the halide. The cationic species then dealkylates through a second nucleophilic displacement in which the halide anion attacks one of the phosphite alkoxide substituents forming an enol phosphate . [ 1 ]
The Perkow reaction has been applied in the synthesis of an insect repellent [2] based on hexachloroacetone and triethylphosphite which is able to engage in a secondary [4+3] cycloaddition with furan through the action of the base sodium 2,2,2-trifluoroethoxide. The authors report mediocre yields.
The Perkow reaction is also used in the synthesis of novel quinolines . [3] When the substituent is n-butyl the reaction product is the classical Perkow adduct. In this reaction the leaving group is an electron-deficient acyl group (owing to the presence of three fluorine atoms). When the substituent is instead phenyl (not shown), the phosphite preferentially reacts with the acyl group, leading to an ethyl enol ether . Key to explaining the difference in reactivity is the electron density on the α-keto carbon atom.
Aryl enol phosphates formed in good yields (ca. 90%) in the Perkow reaction can be used as phosphorylating reagents, e.g. able to transform AMP into ATP . [4] | https://en.wikipedia.org/wiki/Perkow_reaction |
PerlMonks is a community website covering all aspects of Perl programming and other related topics such as web applications and system administration . It is often referred to by users as 'The Monastery '. [ 1 ] The name PerlMonks, and the general style of the website, is designed to both humorously reflect the almost religious zeal that programmers sometimes have for their favorite language, and also to engender an atmosphere of calm reflection and consideration for other users.
Users (referred to as monks ) create discussion topics which other monks can reply to and vote as good or bad. Users have an experience rating (XP) that roughly measures their participation in the PerlMonks website as perceived by the other monks, not necessarily their proficiency in the Perl language. All monks have a 'home node', providing profile information and an area for Monks to personalize.
Notable members include the creator of the Perl language, the authors of several well-known Perl books, and the authors of numerous CPAN modules. CPAN authors frequently promote and provide support for their modules at PerlMonks.
The site has tutorials, reviews, Q&A, poetry, obfuscated code , as well as sections for code snippets and entire scripts and modules.
Generally, the section of the site with the most traffic is Seekers of Perl Wisdom [1] , where users of all experience levels ask Perl-related questions. Some questions are from beginners trying to understand the basics of the language, while others are from seasoned veterans looking for methods to improve upon algorithms or to optimize performance. Those who provide answers are also of varying experience levels.
Much of the site's content consists of specific code examples. Some of these examples are for Perl's core features [2] , as documented on the official Perl documentation website ( http://perldoc.perl.org ). Other examples are for the Comprehensive Perl Archive Network ( CPAN ), which is a repository for Perl libraries (known as modules) that are not part of the core Perl distribution.
The code that the site runs on is a much-hacked fork of an early version of the Everything Engine and was created by Nathan Oostendorp [ 2 ] as part of Blockstackers Intergalactic — the firm that also ran Slashdot . As a result, PerlMonks has many features in common with both Everything2 and Slashdot, such as the strong emphasis placed on user feedback.
Another feature that PerlMonks retains from Everything is the Chatterbox, which is a text chat area at the side of every page. Logged-in users can type in anything they want, and it appears for all users to see. Talk in the chatterbox is often Perl related, and various tools (written in Perl) have been written to improve the chatterbox experience. Some come to PerlMonks primarily for the chatterbox. Others find the chatterbox distracting and turn it off. | https://en.wikipedia.org/wiki/PerlMonks |
Permalloy ( / p ɜːr m . ʌ . l ɔɪ / [ 1 ] ) is a nickel – iron magnetic alloy , with about 80% nickel and 20% iron content. Invented in 1914 by physicist Gustav Elmen at Bell Telephone Laboratories , [ 2 ] it is notable for its very high magnetic permeability , which makes it useful as a magnetic core material in electrical and electronic equipment, and also in magnetic shielding to block magnetic fields . Commercial permalloy alloys typically have relative permeability of around 100,000, compared to several thousand for ordinary steel. [ 3 ]
In addition to high permeability, its other magnetic properties are low coercivity , near zero magnetostriction , and significant anisotropic magnetoresistance . The low magnetostriction is critical for industrial applications, allowing it to be used in thin films where variable stresses would otherwise cause a ruinously large variation in magnetic properties. Permalloy's electrical resistivity can vary as much as 5% depending on the strength and the direction of an applied magnetic field . Permalloys typically have the face-centered cubic crystal structure with a lattice constant of approximately 0.355 nm in the vicinity of a nickel concentration of 80%. A disadvantage of permalloy is that it is not very ductile or workable, so applications requiring elaborate shapes, such as magnetic shields, are made of other high permeability alloys such as mu metal . Permalloy is used in transformer laminations and magnetic recording heads .
Permalloy was initially developed in the early 20th century for inductive compensation of telegraph cables. [ 4 ] When the first transatlantic submarine telegraph cables were laid in the 1860s, it was found that the long conductors caused distortion which reduced the maximum signalling speed to only 10–12 words per minute. [ 4 ] The right conditions for transmitting signals through cables without distortion were first worked out mathematically in 1885 by Oliver Heaviside . [ 5 ] It was proposed by Carl Emil Krarup in 1902 in Denmark that the cable could be compensated by wrapping it with iron wire, increasing the inductance and making it a loaded line to reduce distortion. However, iron did not have high enough permeability to compensate a transatlantic-length cable. After a prolonged search, permalloy was discovered in 1914 by Gustav Elmen of Bell Laboratories , who found it had higher permeability than silicon steel . [ 2 ] Later, in 1923, he found its permeability could be greatly enhanced by heat treatment . [ 6 ] A wrapping of permalloy tape could reportedly increase the signalling speed of a telegraph cable fourfold. [ 4 ]
This method of cable compensation declined in the 1930s, but by World War II many other uses for Permalloy were found in the electronics industry .
Other compositions of permalloy are available, designated by a numerical prefix denoting the weight percentage of nickel in the alloy , for example "45 permalloy" means an alloy containing 45% nickel , and 55% iron by weight. "Molybdenum permalloy" is an alloy of 81% nickel , 17% iron and 2% molybdenum . The latter was invented at Bell Labs in 1940. At the time, when used in long distance copper telegraph lines, it allowed a tenfold increase in maximum line working speed. [ 5 ] Supermalloy , at 79% Ni, 16% Fe, and 5% Mo, is also well known for its high performance as a "soft" magnetic material, characterized by high permeability and low coercivity .
Due to its high magnetic permeability and low coercivity , Permalloy is often used in applications that require efficient magnetic field generation and sensing. [ 7 ] It exhibits low energy loss, which is beneficial for improving the performance of magnetic sensors, transformers , and inductors . [ 8 ] Permalloy is also used in the production of magnetic shielding materials, which help protect electronic equipment from external magnetic interference. [ 9 ] | https://en.wikipedia.org/wiki/Permalloy |
In linear algebra , the permanent of a square matrix is a function of the matrix similar to the determinant . The permanent, as well as the determinant, is a polynomial in the entries of the matrix. [ 1 ] Both are special cases of a more general function of a matrix called the immanant .
The permanent of an n × n matrix A = ( a i,j ) is defined as
perm ( A ) = ∑ σ ∈ S n ∏ i = 1 n a i , σ ( i ) . {\displaystyle \operatorname {perm} (A)=\sum _{\sigma \in S_{n}}\prod _{i=1}^{n}a_{i,\sigma (i)}.}
The sum here extends over all elements σ of the symmetric group S n ; i.e. over all permutations of the numbers 1, 2, ..., n .
For example,
perm ( a b c d ) = a d + b c , {\displaystyle \operatorname {perm} {\begin{pmatrix}a&b\\c&d\end{pmatrix}}=ad+bc,}
and
perm ( a b c d e f g h i ) = a e i + b f g + c d h + c e g + b d i + a f h . {\displaystyle \operatorname {perm} {\begin{pmatrix}a&b&c\\d&e&f\\g&h&i\end{pmatrix}}=aei+bfg+cdh+ceg+bdi+afh.}
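These small cases can be checked numerically with a direct, if inefficient, implementation of the defining sum (a Python sketch; O(n·n!) time, so only practical for small n):

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Permanent straight from the definition: the sum over all
    permutations sigma in S_n of A[0][sigma(0)] * ... * A[n-1][sigma(n-1)].
    O(n * n!) time."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

# 2x2 case with a=1, b=2, c=3, d=4: ad + bc = 4 + 6
print(permanent([[1, 2], [3, 4]]))  # 10

# 3x3 all-ones matrix: every one of the 3! permutations contributes 1
print(permanent([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))  # 6
```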
The definition of the permanent of A differs from that of the determinant of A in that the signatures of the permutations are not taken into account.
The permanent of a matrix A is denoted per A , perm A , or Per A , sometimes with parentheses around the argument. Minc uses Per( A ) for the permanent of rectangular matrices, and per( A ) when A is a square matrix. [ 2 ] Muir and Metzler use the notation | + | + {\displaystyle {\overset {+}{|}}\quad {\overset {+}{|}}} . [ 3 ]
The word, permanent , originated with Cauchy in 1812 as “fonctions symétriques permanentes” for a related type of function, [ 4 ] and was used by Muir and Metzler [ 5 ] in the modern, more specific, sense. [ 6 ]
If one views the permanent as a map that takes n vectors as arguments, then it is a multilinear map and it is symmetric (meaning that any order of the vectors results in the same permanent). Furthermore, given a square matrix A = ( a i j ) {\displaystyle A=\left(a_{ij}\right)} of order n : [ 7 ]
Laplace's expansion by minors for computing the determinant along a row, column or diagonal extends to the permanent by ignoring all signs. [ 9 ]
For every i {\textstyle i} ,
perm ( B ) = ∑ j = 1 n B i , j M i , j , {\displaystyle \operatorname {perm} (B)=\sum _{j=1}^{n}B_{i,j}M_{i,j},}
where B i , j {\displaystyle B_{i,j}} is the entry of the i th row and the j th column of B , and M i , j {\textstyle M_{i,j}} is the permanent of the submatrix obtained by removing the i th row and the j th column of B .
For example, expanding along the first column,
perm ( 1 1 1 1 2 1 0 0 3 0 1 0 4 0 0 1 ) = 1 ⋅ perm ( 1 0 0 0 1 0 0 0 1 ) + 2 ⋅ perm ( 1 1 1 0 1 0 0 0 1 ) + 3 ⋅ perm ( 1 1 1 1 0 0 0 0 1 ) + 4 ⋅ perm ( 1 1 1 1 0 0 0 1 0 ) = 1 ( 1 ) + 2 ( 1 ) + 3 ( 1 ) + 4 ( 1 ) = 10 , {\displaystyle {\begin{aligned}\operatorname {perm} \left({\begin{matrix}1&1&1&1\\2&1&0&0\\3&0&1&0\\4&0&0&1\end{matrix}}\right)={}&1\cdot \operatorname {perm} \left({\begin{matrix}1&0&0\\0&1&0\\0&0&1\end{matrix}}\right)+2\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\0&1&0\\0&0&1\end{matrix}}\right)\\&{}+\ 3\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\1&0&0\\0&0&1\end{matrix}}\right)+4\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\1&0&0\\0&1&0\end{matrix}}\right)\\={}&1(1)+2(1)+3(1)+4(1)=10,\end{aligned}}}
while expanding along the last row gives,
perm ( 1 1 1 1 2 1 0 0 3 0 1 0 4 0 0 1 ) = 4 ⋅ perm ( 1 1 1 1 0 0 0 1 0 ) + 0 ⋅ perm ( 1 1 1 2 0 0 3 1 0 ) + 0 ⋅ perm ( 1 1 1 2 1 0 3 0 0 ) + 1 ⋅ perm ( 1 1 1 2 1 0 3 0 1 ) = 4 ( 1 ) + 0 + 0 + 1 ( 6 ) = 10. {\displaystyle {\begin{aligned}\operatorname {perm} \left({\begin{matrix}1&1&1&1\\2&1&0&0\\3&0&1&0\\4&0&0&1\end{matrix}}\right)={}&4\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\1&0&0\\0&1&0\end{matrix}}\right)+0\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\2&0&0\\3&1&0\end{matrix}}\right)\\&{}+\ 0\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\2&1&0\\3&0&0\end{matrix}}\right)+1\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\2&1&0\\3&0&1\end{matrix}}\right)\\={}&4(1)+0+0+1(6)=10.\end{aligned}}}
On the other hand, the basic multiplicative property of determinants is not valid for permanents. [ 10 ] A simple example shows that this is so.
4 = perm ( 1 1 1 1 ) perm ( 1 1 1 1 ) ≠ perm ( ( 1 1 1 1 ) ( 1 1 1 1 ) ) = perm ( 2 2 2 2 ) = 8. {\displaystyle {\begin{aligned}4&=\operatorname {perm} \left({\begin{matrix}1&1\\1&1\end{matrix}}\right)\operatorname {perm} \left({\begin{matrix}1&1\\1&1\end{matrix}}\right)\\&\neq \operatorname {perm} \left(\left({\begin{matrix}1&1\\1&1\end{matrix}}\right)\left({\begin{matrix}1&1\\1&1\end{matrix}}\right)\right)=\operatorname {perm} \left({\begin{matrix}2&2\\2&2\end{matrix}}\right)=8.\end{aligned}}}
Unlike the determinant, the permanent has no easy geometrical interpretation; it is mainly used in combinatorics , in treating boson Green's functions in quantum field theory , and in determining state probabilities of boson sampling systems. [ 11 ] However, it has two graph-theoretic interpretations: as the sum of weights of cycle covers of a directed graph , and as the sum of weights of perfect matchings in a bipartite graph .
The permanent arises naturally in the study of the symmetric tensor power of Hilbert spaces . [ 12 ] In particular, for a Hilbert space H {\displaystyle H} , let ∨ k H {\displaystyle \vee ^{k}H} denote the k {\displaystyle k} th symmetric tensor power of H {\displaystyle H} , which is the space of symmetric tensors . Note in particular that ∨ k H {\displaystyle \vee ^{k}H} is spanned by the symmetric products of elements in H {\displaystyle H} . For x 1 , x 2 , … , x k ∈ H {\displaystyle x_{1},x_{2},\dots ,x_{k}\in H} , we define the symmetric product of these elements by x 1 ∨ x 2 ∨ ⋯ ∨ x k = ( k ! ) − 1 / 2 ∑ σ ∈ S k x σ ( 1 ) ⊗ x σ ( 2 ) ⊗ ⋯ ⊗ x σ ( k ) {\displaystyle x_{1}\vee x_{2}\vee \cdots \vee x_{k}=(k!)^{-1/2}\sum _{\sigma \in S_{k}}x_{\sigma (1)}\otimes x_{\sigma (2)}\otimes \cdots \otimes x_{\sigma (k)}} If we consider ∨ k H {\displaystyle \vee ^{k}H} (as a subspace of ⊗ k H {\displaystyle \otimes ^{k}H} , the k th tensor power of H {\displaystyle H} ) and define the inner product on ∨ k H {\displaystyle \vee ^{k}H} accordingly, we find that for x j , y j ∈ H {\displaystyle x_{j},y_{j}\in H} ⟨ x 1 ∨ x 2 ∨ ⋯ ∨ x k , y 1 ∨ y 2 ∨ ⋯ ∨ y k ⟩ = perm [ ⟨ x i , y j ⟩ ] i , j = 1 k {\displaystyle \langle x_{1}\vee x_{2}\vee \cdots \vee x_{k},y_{1}\vee y_{2}\vee \cdots \vee y_{k}\rangle =\operatorname {perm} \left[\langle x_{i},y_{j}\rangle \right]_{i,j=1}^{k}} Applying the Cauchy–Schwarz inequality , we find that perm [ ⟨ x i , x j ⟩ ] i , j = 1 k ≥ 0 {\displaystyle \operatorname {perm} \left[\langle x_{i},x_{j}\rangle \right]_{i,j=1}^{k}\geq 0} , and that | perm [ ⟨ x i , y j ⟩ ] i , j = 1 k | 2 ≤ perm [ ⟨ x i , x j ⟩ ] i , j = 1 k ⋅ perm [ ⟨ y i , y j ⟩ ] i , j = 1 k {\displaystyle \left|\operatorname {perm} \left[\langle x_{i},y_{j}\rangle \right]_{i,j=1}^{k}\right|^{2}\leq \operatorname {perm} \left[\langle x_{i},x_{j}\rangle \right]_{i,j=1}^{k}\cdot \operatorname {perm} \left[\langle y_{i},y_{j}\rangle \right]_{i,j=1}^{k}}
Any square matrix A = ( a i j ) i , j = 1 n {\displaystyle A=(a_{ij})_{i,j=1}^{n}} can be viewed as the adjacency matrix of a weighted directed graph on vertex set V = { 1 , 2 , … , n } {\displaystyle V=\{1,2,\dots ,n\}} , with a i j {\displaystyle a_{ij}} representing the weight of the arc from vertex i to vertex j .
A cycle cover of a weighted directed graph is a collection of vertex-disjoint directed cycles in the digraph that covers all vertices in the graph. Thus, each vertex i in the digraph has a unique "successor" σ ( i ) {\displaystyle \sigma (i)} in the cycle cover, and so σ {\displaystyle \sigma } represents a permutation on V . Conversely, any permutation σ {\displaystyle \sigma } on V corresponds to a cycle cover with arcs from each vertex i to vertex σ ( i ) {\displaystyle \sigma (i)} .
If the weight of a cycle-cover is defined to be the product of the weights of the arcs in each cycle, then weight ( σ ) = ∏ i = 1 n a i , σ ( i ) , {\displaystyle \operatorname {weight} (\sigma )=\prod _{i=1}^{n}a_{i,\sigma (i)},} implying that perm ( A ) = ∑ σ weight ( σ ) . {\displaystyle \operatorname {perm} (A)=\sum _{\sigma }\operatorname {weight} (\sigma ).} Thus the permanent of A is equal to the sum of the weights of all cycle-covers of the digraph.
A square matrix A = ( a i j ) {\displaystyle A=(a_{ij})} can also be viewed as the adjacency matrix of a bipartite graph which has vertices x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} on one side and y 1 , y 2 , … , y n {\displaystyle y_{1},y_{2},\dots ,y_{n}} on the other side, with a i j {\displaystyle a_{ij}} representing the weight of the edge from vertex x i {\displaystyle x_{i}} to vertex y j {\displaystyle y_{j}} . If the weight of a perfect matching σ {\displaystyle \sigma } that matches x i {\displaystyle x_{i}} to y σ ( i ) {\displaystyle y_{\sigma (i)}} is defined to be the product of the weights of the edges in the matching, then weight ( σ ) = ∏ i = 1 n a i , σ ( i ) . {\displaystyle \operatorname {weight} (\sigma )=\prod _{i=1}^{n}a_{i,\sigma (i)}.} Thus the permanent of A is equal to the sum of the weights of all perfect matchings of the graph.
The answers to many counting questions can be computed as permanents of matrices that only have 0 and 1 as entries.
Let Ω( n , k ) be the class of all (0, 1)-matrices of order n with each row and column sum equal to k . Every matrix A in this class has perm( A ) > 0. [ 13 ] The incidence matrices of projective planes are in the class Ω( n 2 + n + 1, n + 1) for n an integer > 1. The permanents corresponding to the smallest projective planes have been calculated. For n = 2, 3, and 4 the values are 24, 3852 and 18,534,400 respectively. [ 13 ] Let Z be the incidence matrix of the projective plane with n = 2, the Fano plane . Remarkably, perm( Z ) = 24 = |det ( Z )|, the absolute value of the determinant of Z . This is a consequence of Z being a circulant matrix and the theorem: [ 14 ]
Permanents can also be used to calculate the number of permutations with restricted (prohibited) positions. For the standard n -set {1, 2, ..., n }, let A = ( a i j ) {\displaystyle A=(a_{ij})} be the (0, 1)-matrix where a ij = 1 if i → j is allowed in a permutation and a ij = 0 otherwise. Then perm( A ) is equal to the number of permutations of the n -set that satisfy all the restrictions. [ 9 ] Two well known special cases of this are the solution of the derangement problem and the ménage problem : the number of permutations of an n -set with no fixed points (derangements) is given by
perm ( J − I ) = perm ( 0 1 1 … 1 1 0 1 … 1 1 1 0 … 1 ⋮ ⋮ ⋮ ⋱ ⋮ 1 1 1 … 0 ) = n ! ∑ i = 0 n ( − 1 ) i i ! , {\displaystyle \operatorname {perm} (J-I)=\operatorname {perm} \left({\begin{matrix}0&1&1&\dots &1\\1&0&1&\dots &1\\1&1&0&\dots &1\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&1&1&\dots &0\end{matrix}}\right)=n!\sum _{i=0}^{n}{\frac {(-1)^{i}}{i!}},} where J is the n × n all 1's matrix and I is the identity matrix, and the ménage numbers are given by
perm ( J − I − I ′ ) = perm ( 0 0 1 … 1 1 0 0 … 1 1 1 0 … 1 ⋮ ⋮ ⋮ ⋱ ⋮ 0 1 1 … 0 ) = ∑ k = 0 n ( − 1 ) k 2 n 2 n − k ( 2 n − k k ) ( n − k ) ! , {\displaystyle {\begin{aligned}\operatorname {perm} (J-I-I')&=\operatorname {perm} \left({\begin{matrix}0&0&1&\dots &1\\1&0&0&\dots &1\\1&1&0&\dots &1\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&1&1&\dots &0\end{matrix}}\right)\\&=\sum _{k=0}^{n}(-1)^{k}{\frac {2n}{2n-k}}{2n-k \choose k}(n-k)!,\end{aligned}}}
where I' is the (0, 1)-matrix with nonzero entries in positions ( i , i + 1) and ( n , 1).
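As an illustrative check (a sketch only — the brute-force permanent suffices here because the matrices are tiny), both counting formulas can be verified for n = 4:

```python
from itertools import permutations
from math import prod, factorial

def permanent(A):
    """Brute-force permanent, adequate for the small matrices used here."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

n = 4

# Derangements: J - I has zeros exactly on the diagonal.
J_minus_I = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
derangements = factorial(n) * sum((-1) ** i / factorial(i) for i in range(n + 1))
assert permanent(J_minus_I) == round(derangements) == 9

# Menage problem: additionally forbid positions (i, i+1) and (n, 1).
menage = [[0 if j in (i, (i + 1) % n) else 1 for j in range(n)] for i in range(n)]
print(permanent(menage))  # 2, the menage number for n = 4
```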
The permanent of the n × n all-ones matrix is n !, the number of possible arrangements of n mutually non-attacking rooks on an n × n board. [ 15 ]
The Bregman–Minc inequality , conjectured by H. Minc in 1963 [ 16 ] and proved by L. M. Brégman in 1973, [ 17 ] gives an upper bound for the permanent of an n × n (0, 1)-matrix. If A has r i ones in row i for each 1 ≤ i ≤ n , the inequality states that perm A ≤ ∏ i = 1 n ( r i ) ! 1 / r i . {\displaystyle \operatorname {perm} A\leq \prod _{i=1}^{n}(r_{i})!^{1/r_{i}}.}
In 1926, Van der Waerden conjectured that the minimum permanent among all n × n doubly stochastic matrices is n !/ n n , achieved by the matrix for which all entries are equal to 1/ n . [ 18 ] Proofs of this conjecture were published in 1980 by B. Gyires [ 19 ] and in 1981 by G. P. Egorychev [ 20 ] and D. I. Falikman; [ 21 ] Egorychev's proof is an application of the Alexandrov–Fenchel inequality . [ 22 ] For this work, Egorychev and Falikman won the Fulkerson Prize in 1982. [ 23 ]
Computing permanents naïvely from the definition is computationally infeasible even for relatively small matrices. One of the fastest known algorithms is due to H. J. Ryser . [ 24 ] Ryser's method is based on an inclusion–exclusion formula that can be given [ 25 ] as follows: Let A k {\displaystyle A_{k}} be obtained from A by deleting k columns, let P ( A k ) {\displaystyle P(A_{k})} be the product of the row-sums of A k {\displaystyle A_{k}} , and let Σ k {\displaystyle \Sigma _{k}} be the sum of the values of P ( A k ) {\displaystyle P(A_{k})} over all possible A k {\displaystyle A_{k}} . Then perm ( A ) = ∑ k = 0 n − 1 ( − 1 ) k Σ k . {\displaystyle \operatorname {perm} (A)=\sum _{k=0}^{n-1}(-1)^{k}\Sigma _{k}.}
It may be rewritten in terms of the matrix entries as follows: perm ( A ) = ( − 1 ) n ∑ S ⊆ { 1 , … , n } ( − 1 ) | S | ∏ i = 1 n ∑ j ∈ S a i j . {\displaystyle \operatorname {perm} (A)=(-1)^{n}\sum _{S\subseteq \{1,\dots ,n\}}(-1)^{|S|}\prod _{i=1}^{n}\sum _{j\in S}a_{ij}.}
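The entrywise form of the formula translates directly into code. The following is a straightforward sketch; practical implementations usually enumerate the subsets in Gray-code order so that the inner row sums can be updated incrementally, which this version does not do.

```python
from itertools import combinations
from math import prod

def ryser_permanent(A):
    """Permanent via Ryser's inclusion-exclusion formula:
    perm(A) = (-1)^n * sum over column subsets S of
              (-1)^|S| * prod_i (sum_{j in S} a_ij).
    O(2^n * n^2) time, versus O(n * n!) for the defining sum."""
    n = len(A)
    total = 0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            total += (-1) ** r * prod(sum(A[i][j] for j in S) for i in range(n))
    return (-1) ** n * total

# The 4x4 matrix used in the Laplace-expansion examples above:
A = [[1, 1, 1, 1], [2, 1, 0, 0], [3, 0, 1, 0], [4, 0, 0, 1]]
print(ryser_permanent(A))  # 10
```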
The permanent is believed to be more difficult to compute than the determinant. While the determinant can be computed in polynomial time by Gaussian elimination , Gaussian elimination cannot be used to compute the permanent. Moreover, computing the permanent of a (0,1)-matrix is #P-complete . Thus, if the permanent can be computed in polynomial time by any method, then FP = #P , which is an even stronger statement than P = NP . When the entries of A are nonnegative, however, the permanent can be computed approximately in probabilistic polynomial time, up to an error of ε M {\displaystyle \varepsilon M} , where M {\displaystyle M} is the value of the permanent and ε > 0 {\displaystyle \varepsilon >0} is arbitrary. [ 26 ] The permanent of a certain set of positive semidefinite matrices is NP-hard to approximate within any subexponential factor. [ 27 ] If further conditions on the spectrum are imposed, the permanent can be approximated in probabilistic polynomial time: the best achievable error of this approximation is ε M {\displaystyle \varepsilon {\sqrt {M}}} ( M {\displaystyle M} is again the value of the permanent). [ 28 ] The hardness in these instances is closely linked with difficulty of simulating boson sampling experiments.
Another way to view permanents is via multivariate generating functions . Let A = ( a i j ) {\displaystyle A=(a_{ij})} be a square matrix of order n . Consider the multivariate generating function: F ( x 1 , x 2 , … , x n ) = ∏ i = 1 n ( ∑ j = 1 n a i j x j ) = ( ∑ j = 1 n a 1 j x j ) ( ∑ j = 1 n a 2 j x j ) ⋯ ( ∑ j = 1 n a n j x j ) . {\displaystyle {\begin{aligned}F(x_{1},x_{2},\dots ,x_{n})&=\prod _{i=1}^{n}\left(\sum _{j=1}^{n}a_{ij}x_{j}\right)\\&=\left(\sum _{j=1}^{n}a_{1j}x_{j}\right)\left(\sum _{j=1}^{n}a_{2j}x_{j}\right)\cdots \left(\sum _{j=1}^{n}a_{nj}x_{j}\right).\end{aligned}}} The coefficient of x 1 x 2 … x n {\displaystyle x_{1}x_{2}\dots x_{n}} in F ( x 1 , x 2 , … , x n ) {\displaystyle F(x_{1},x_{2},\dots ,x_{n})} is perm( A ). [ 29 ]
As a generalization, for any sequence of n non-negative integers, s 1 , s 2 , … , s n {\displaystyle s_{1},s_{2},\dots ,s_{n}} define: perm ( s 1 , s 2 , … , s n ) ( A ) {\displaystyle \operatorname {perm} ^{(s_{1},s_{2},\dots ,s_{n})}(A)} as the coefficient of x 1 s 1 x 2 s 2 ⋯ x n s n {\displaystyle x_{1}^{s_{1}}x_{2}^{s_{2}}\cdots x_{n}^{s_{n}}} in ( ∑ j = 1 n a 1 j x j ) s 1 ( ∑ j = 1 n a 2 j x j ) s 2 ⋯ ( ∑ j = 1 n a n j x j ) s n . {\displaystyle \left(\sum _{j=1}^{n}a_{1j}x_{j}\right)^{s_{1}}\left(\sum _{j=1}^{n}a_{2j}x_{j}\right)^{s_{2}}\cdots \left(\sum _{j=1}^{n}a_{nj}x_{j}\right)^{s_{n}}.}
MacMahon's master theorem relating permanents and determinants is: [ 30 ] perm ( s 1 , s 2 , … , s n ) ( A ) = coefficient of x 1 s 1 x 2 s 2 ⋯ x n s n in 1 det ( I − X A ) , {\displaystyle \operatorname {perm} ^{(s_{1},s_{2},\dots ,s_{n})}(A)={\text{ coefficient of }}x_{1}^{s_{1}}x_{2}^{s_{2}}\cdots x_{n}^{s_{n}}{\text{ in }}{\frac {1}{\det(I-XA)}},} where I is the order n identity matrix and X is the diagonal matrix with diagonal [ x 1 , x 2 , … , x n ] . {\displaystyle [x_{1},x_{2},\dots ,x_{n}].}
The permanent function can be generalized to apply to non-square matrices. Indeed, several authors make this the definition of a permanent and consider the restriction to square matrices a special case. [ 31 ] Specifically, for an m × n matrix A = ( a i j ) {\displaystyle A=(a_{ij})} with m ≤ n , define perm ( A ) = ∑ σ ∈ P ( n , m ) a 1 σ ( 1 ) a 2 σ ( 2 ) … a m σ ( m ) {\displaystyle \operatorname {perm} (A)=\sum _{\sigma \in \operatorname {P} (n,m)}a_{1\sigma (1)}a_{2\sigma (2)}\ldots a_{m\sigma (m)}} where P( n , m ) is the set of all m -permutations of the n -set {1,2,...,n}. [ 32 ]
Ryser's computational result for permanents also generalizes. If A is an m × n matrix with m ≤ n , let A k {\displaystyle A_{k}} be obtained from A by deleting k columns, let P ( A k ) {\displaystyle P(A_{k})} be the product of the row-sums of A k {\displaystyle A_{k}} , and let σ k {\displaystyle \sigma _{k}} be the sum of the values of P ( A k ) {\displaystyle P(A_{k})} over all possible A k {\displaystyle A_{k}} . Then [ 10 ] perm ( A ) = ∑ k = 0 m − 1 ( − 1 ) k ( n − m + k k ) σ n − m + k . {\displaystyle \operatorname {perm} (A)=\sum _{k=0}^{m-1}(-1)^{k}{\binom {n-m+k}{k}}\sigma _{n-m+k}.}
The generalization of the definition of a permanent to non-square matrices allows the concept to be used in a more natural way in some applications. For instance:
Let S 1 , S 2 , ..., S m be subsets (not necessarily distinct) of an n -set with m ≤ n . The incidence matrix of this collection of subsets is an m × n (0,1)-matrix A . The number of systems of distinct representatives (SDR's) of this collection is perm( A ). [ 33 ] | https://en.wikipedia.org/wiki/Permanent_(mathematics) |
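The SDR count can be illustrated with a brute-force rectangular permanent (the subsets below are my own example, not from the text):

```python
# Counting systems of distinct representatives via the permanent of the
# incidence matrix of a collection of subsets.
from itertools import permutations
from math import prod

def perm_rect(A):
    """Permanent of an m x n (0,1)-matrix, m <= n, by direct summation."""
    m, n = len(A), len(A[0])
    return sum(prod(A[i][s[i]] for i in range(m))
               for s in permutations(range(n), m))

# Subsets of {0, 1, 2}: S1 = {0, 1}, S2 = {1, 2}
incidence = [[1, 1, 0],
             [0, 1, 1]]
print(perm_rect(incidence))  # prints 3
```

The three SDRs are (0, 1), (0, 2), and (1, 2), in agreement with the permanent.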
The permanent adjustments of theodolites are made to establish a fixed relationship between the instrument's fundamental lines. The fundamental lines, or axes, of a transit theodolite include the following:
These adjustments, once made, last for a long time and are important for the accuracy of observations taken with the instrument. The permanent adjustments of a transit theodolite are:
The horizontal axis must be perpendicular to the vertical axis.
The vertical circle must read zero when the line of collimation is horizontal.
The axis of altitude level must be parallel to the line of collimation.
The line of collimation, or line of sight, should coincide with the axis of the telescope. [ 1 ] The line of sight should also be perpendicular to the horizontal axis at its intersection with the vertical axis . Also, the optical axis , the axis of the objective slide, and the line of sight should coincide.
The axis of plate levels must be perpendicular to the vertical axis. [ 2 ] | https://en.wikipedia.org/wiki/Permanent_adjustments_of_theodolites |
A permanent downhole gauge ( PDG ) is a pressure and/or temperature gauge permanently installed in an oil or gas well . [ 1 ] These gauges are typically installed in the tubing in the well and can measure the tubing pressure, annulus pressure, or both. Systems installed in well casing to read formation pressure directly, suspended systems, and systems built in coil (continuous) tubing are also available. The data that PDGs provide are useful to reservoir engineers in determining the quantities of oil or gas contained below the Earth's surface in an oil or gas reservoir and also which method of production is best. [ 2 ]
Permanent downhole gauges are installed in oil wells and gas wells for the purposes of observation and optimization. Downhole gauges primarily monitor pressure at a single point or multiple points in a well, and may secondarily monitor temperature. [ 3 ] These gauges may use additional sensors to measure additional environmental variables: [ citation needed ]
The information provided by permanently mounted sensors, combined with remotely controlled valves , can enable "smart" or "intelligent" well design. A "smart well" is a well that can monitor information and make adjustments automatically with the goal of optimizing production and/or product recovery. [ 4 ] | https://en.wikipedia.org/wiki/Permanent_downhole_gauge |
Permanent vegetative cover refers to trees, perennial bunchgrasses and grasslands , legumes , and shrubs with an expected life span of at least 5 years.
In the United States, permanent cover is required on cropland entered into the Conservation Reserve Program .
| https://en.wikipedia.org/wiki/Permanent_vegetative_cover
Permanent wilting point ( PWP ) or wilting point ( WP ) is defined as the minimum amount of water in the soil that the plant requires not to wilt . If the soil water content decreases to this or any lower point, a plant wilts and can no longer recover its turgidity when placed in a saturated atmosphere for 12 hours. The physical definition of the wilting point, symbolically expressed as θ pwp or θ wp , is, by convention, the water content at −1,500 kPa (−15 bar) of suction pressure, or negative hydraulic head . [ 1 ]
The concept was introduced in the early 1910s. Lyman Briggs and Homer LeRoy Shantz (1912) proposed the wilting coefficient, which is defined as the percentage water content of a soil when the plants growing in that soil are first reduced to a wilted condition from which they cannot recover in approximately saturated atmosphere without the addition of water to the soil . [ 2 ] [ 3 ] See pedotransfer function for wilting coefficient by Briggs.
Frank Veihmeyer and Arthur Hendrickson from University of California-Davis found that the permanent wilting point is a characteristic constant of the soil, independent of environmental conditions. Lorenzo A. Richards proposed that it be taken as the soil water content when the soil is under a pressure of −15 bar. [ 4 ]
The permanganate index is an assessment of water quality . It involves the detection of oxidation by potassium permanganate in an acid medium under hot conditions.
The method is to heat a sample in a boiling water-bath with a known amount of potassium permanganate and sulfuric acid for a fixed period time (10 min). Part of the permanganate will be reduced by oxidizable material in the sample. The consumed permanganate can be determined by addition of an excess of oxalate solution , followed by titration with permanganate. The method applies to waters having a chloride ion concentration of less than 300 mg/L. Samples having a permanganate index over 10 mg/L should be diluted before analysis. The lower limit of the optimum range of the test is 0.5 mg/L. [ 1 ]
The permanganate index method is not recommended for waste water because some organic compounds are not oxidized or incompletely oxidized. | https://en.wikipedia.org/wiki/Permanganate_index |
Permanganometry is one of the techniques used in chemical quantitative analysis. It is a redox titration that involves the use of permanganates to measure the amount of analyte present in unknown chemical samples. [ 1 ] It involves two steps, namely the titration of the analyte with potassium permanganate solution and then the standardization of potassium permanganate solution with standard sodium oxalate solution. The titration involves volumetric manipulations to prepare the analyte solutions. [ 2 ]
Permanganometry allows the detection and estimation of the quantitative presence of various chemical species, such as iron (II), manganese (II), oxalate , nitrite , and hydrogen peroxide .
Depending on the conditions in which the titration is performed, the manganese is reduced from an oxidation state of +7 to +2, +4, or +6.
In most cases, permanganometry is performed in a very acidic solution, in which the following electrochemical reaction occurs: [ 3 ]

MnO4− + 8 H+ + 5 e− → Mn2+ + 4 H2O (E° = +1.51 V)
which shows that KMnO 4 (in an acidic medium) is a very strong oxidizing agent, able to oxidize Fe 2+ ( E ° Fe 3+ /Fe 2+ = +0.77 V), Sn 2+ ( E ° Sn 4+ /Sn 2+ = +0.2 V), and even Cl − ( E ° Cl 2 /Cl − = +1.36 V).
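An illustrative stoichiometry sketch (the concentrations and volumes below are invented for the example, not from the text): in strongly acidic medium, each mole of permanganate oxidizes five moles of iron(II).

```python
# Back-calculating Fe2+ concentration from a permanganometric titration,
# using the 5:1 Fe2+ : MnO4- stoichiometry in strongly acidic medium.
c_kmno4 = 0.02       # mol/L, standardized KMnO4 (assumed)
v_titrant = 18.5e-3  # L delivered at the endpoint (assumed)
v_sample = 25.0e-3   # L of Fe2+ sample (assumed)

n_mno4 = c_kmno4 * v_titrant
n_fe = 5 * n_mno4            # stoichiometric ratio Fe2+ : MnO4- = 5 : 1
c_fe = n_fe / v_sample       # mol/L
print(f"Fe2+ concentration: {c_fe:.4f} mol/L")  # 0.0740 mol/L
```

The same pattern applies to the other analytes listed above, with the electron count (5, 3, or 1) set by the medium.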
In weakly acidic medium, MnO4− cannot accept 5 electrons to form Mn2+. Instead, it accepts only 3 electrons and forms solid MnO2 by the following reaction:

MnO4− + 4 H+ + 3 e− → MnO2 + 2 H2O
In a strongly basic solution, with the concentration c (NaOH) > 1 mol dm −3 , only one electron is accepted, producing manganate :

MnO4− + e− → MnO42− | https://en.wikipedia.org/wiki/Permanganometry
In electromagnetism , permeability is the measure of magnetization produced in a material in response to an applied magnetic field . Permeability is typically represented by the (italicized) Greek letter μ . It is the ratio of the magnetic induction B to the magnetizing field H in a material. The term was coined by William Thomson, 1st Baron Kelvin in 1872, [ 1 ] and used alongside permittivity by Oliver Heaviside in 1885. The reciprocal of permeability is magnetic reluctivity .
In SI units, permeability is measured in henries per meter (H/m), or equivalently in newtons per ampere squared (N/A 2 ). The permeability constant μ 0 , also known as the magnetic constant or the permeability of free space, is the proportionality between magnetic induction and magnetizing force when forming a magnetic field in a classical vacuum .
A closely related property of materials is magnetic susceptibility , which is a dimensionless proportionality factor that indicates the degree of magnetization of a material in response to an applied magnetic field.
In the macroscopic formulation of electromagnetism , there appear two different kinds of magnetic field : the magnetizing field H and the magnetic flux density B .
The concept of permeability arises since in many materials (and in vacuum), there is a simple relationship between H and B at any location or time, in that the two fields are precisely proportional to each other: [ 2 ]

B = μH,
where the proportionality factor μ is the permeability, which depends on the material. The permeability of vacuum (also known as permeability of free space) is a physical constant, denoted μ 0 . The SI units of μ are volt-seconds per ampere-meter, equivalently henry per meter. Typically μ would be a scalar, but for an anisotropic material, μ could be a second rank tensor .
However, inside strong magnetic materials (such as iron, or permanent magnets ), there is typically no simple relationship between H and B . The concept of permeability is then nonsensical, or at least only applicable to special cases such as unsaturated magnetic cores . Not only do these materials have nonlinear magnetic behaviour, but often there is significant magnetic hysteresis , so there is not even a single-valued functional relationship between B and H . However, considering starting at a given value of B and H and slightly changing the fields, it is still possible to define an incremental permeability as: [ 2 ]

ΔB = μΔ ΔH,
assuming B and H are parallel.
In the microscopic formulation of electromagnetism , where there is no concept of an H field, the vacuum permeability μ 0 appears directly (in the SI Maxwell's equations) as a factor that relates total electric currents and time-varying electric fields to the B field they generate. In order to represent the magnetic response of a linear material with permeability μ , this instead appears as a magnetization M that arises in response to the B field: M = (1/μ0 − 1/μ) B . The magnetization in turn is a contribution to the total electric current, the magnetization current .
Relative permeability, denoted by the symbol μ r , is the ratio of the permeability of a specific medium to the permeability of free space μ 0 :

μ r = μ/μ 0 ,
where μ 0 ≈ 4 π × 10 −7 H/m is the magnetic permeability of free space . [ 3 ] In terms of relative permeability, the magnetic susceptibility is

χ m = μ r − 1.
The number χ m is a dimensionless quantity , sometimes called volumetric or bulk susceptibility, to distinguish it from χ p ( magnetic mass or specific susceptibility) and χ M ( molar or molar mass susceptibility).
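As a small numeric illustration (the material value below is an assumption, not from the text), relative permeability and volumetric susceptibility follow directly from an absolute permeability:

```python
# Relative permeability and volumetric susceptibility from an absolute
# permeability mu, using mu_r = mu / mu_0 and chi_m = mu_r - 1.
import math

MU0 = 4 * math.pi * 1e-7  # H/m, permeability of free space

def mu_r(mu):
    return mu / MU0

def chi_m(mu):
    return mu / MU0 - 1.0  # chi_m = mu_r - 1

mu_iron = 6.3e-3          # H/m, illustrative value for a soft iron
print(mu_r(mu_iron), chi_m(mu_iron))
```

A vacuum (μ = μ0) gives μ r = 1 and χ m = 0, as expected.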
Diamagnetism is the property of an object which causes it to create a magnetic field in opposition of an externally applied magnetic field, thus causing a repulsive effect. Specifically, an external magnetic field alters the orbital velocity of electrons around their atom's nuclei, thus changing the magnetic dipole moment in the direction opposing the external field. Diamagnets are materials with a magnetic permeability less than μ 0 (a relative permeability less than 1).
Consequently, diamagnetism is a form of magnetism that a substance exhibits only in the presence of an externally applied magnetic field. It is generally a quite weak effect in most materials, although superconductors exhibit a strong effect.
Paramagnetism is a form of magnetism which occurs only in the presence of an externally applied magnetic field. Paramagnetic materials are attracted to magnetic fields, hence have a relative magnetic permeability greater than one (or, equivalently, a positive magnetic susceptibility ).
The magnetic moment induced by the applied field is linear in the field strength, and it is rather weak . It typically requires a sensitive analytical balance to detect the effect. Unlike ferromagnets , paramagnets do not retain any magnetization in the absence of an externally applied magnetic field, because thermal motion causes the spins to become randomly oriented without it. Thus the total magnetization will drop to zero when the applied field is removed. Even in the presence of the field, there is only a small induced magnetization because only a small fraction of the spins will be oriented by the field. This fraction is proportional to the field strength and this explains the linear dependency. The attraction experienced by ferromagnets is non-linear and much stronger so that it is easily observed, for instance, in magnets on one's refrigerator.
For gyromagnetic media (see Faraday rotation ) the magnetic permeability response to an alternating electromagnetic field in the microwave frequency domain is treated as a non-diagonal tensor expressed by: [ 4 ]
The following table should be used with caution, as the permeability of ferromagnetic materials varies greatly with field strength and with specific composition and fabrication. For example, 4% silicon electrical steel has an initial relative permeability (at or near 0 T) of 2,000 and a maximum of 38,000 at 1 T, [ 5 ] [ 6 ] with different ranges of values for different silicon contents and manufacturing processes; indeed, the relative permeability of any material at a sufficiently high field strength trends toward 1 (at magnetic saturation).
A good magnetic core material must have high permeability. [ 35 ]
For passive magnetic levitation a relative permeability below 1 is needed (corresponding to a negative susceptibility).
Permeability varies with a magnetic field. Values shown above are approximate and valid only at the magnetic fields shown. They are given for a zero frequency; in practice, the permeability is generally a function of the frequency. When the frequency is considered, the permeability can be complex , corresponding to the in-phase and out of phase response.
A useful tool for dealing with high frequency magnetic effects is the complex permeability. While at low frequencies in a linear material the magnetic field and the auxiliary magnetic field are simply proportional to each other through some scalar permeability, at high frequencies these quantities will react to each other with some lag time. [ 36 ] These fields can be written as phasors , such that

H = H 0 e^{jωt}, B = B 0 e^{j(ωt − δ)},
where δ is the phase delay of B from H .
Understanding permeability as the ratio of the magnetic flux density to the magnetic field, the ratio of the phasors can be written and simplified as

μ = B/H = (B 0 /H 0 ) e^{−jδ},
so that the permeability becomes a complex number.
By Euler's formula , the complex permeability can be translated from polar to rectangular form,

μ = (B 0 /H 0 ) cos δ − j (B 0 /H 0 ) sin δ = μ′ − jμ″.
The ratio of the imaginary to the real part of the complex permeability is called the loss tangent ,

tan δ = μ″/μ′,
which provides a measure of how much power is lost in material versus how much is stored. | https://en.wikipedia.org/wiki/Permeability_(electromagnetism) |
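The complex-permeability and loss-tangent relations above can be sketched numerically; the amplitudes and phase lag below are illustrative assumptions, not values from the text.

```python
# Complex permeability from phasor amplitudes and phase lag delta,
# and the resulting loss tangent tan(delta) = mu'' / mu'.
import cmath
import math

B0, H0 = 1.2, 800.0       # illustrative phasor amplitudes (T, A/m)
delta = math.radians(10)  # assumed phase delay of B behind H

mu = (B0 / H0) * cmath.exp(-1j * delta)  # complex permeability
mu_real, mu_imag = mu.real, -mu.imag     # mu = mu' - j*mu''
loss_tangent = mu_imag / mu_real         # equals tan(delta)

print(loss_tangent, math.tan(delta))     # the two values agree
```

Since μ = (B0/H0) e^{−jδ}, the ratio μ″/μ′ collapses to sin δ / cos δ regardless of the amplitudes.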
Permeability is a property of foundry sand that describes how well the sand can vent, i.e., how readily gases pass through it. In other words, permeability indicates the ability of a material to transmit fluids or gases. Permeability is commonly tested to see whether it is correct for the casting conditions. [ 1 ]
The grain size , shape and distribution of the foundry sand, the type and quantity of bonding materials, the density to which the sand is rammed, and the percentage of moisture used for tempering the sand are important factors in regulating the degree of permeability. [ 1 ]
An increase in permeability usually indicates a more open structure in the rammed sand, and if the increase continues, it will lead to penetration-type defects and rough castings. A decrease in permeability indicates tighter packing and could lead to blows and pinholes. [ 1 ]
The absolute permeability number, which has no units, is determined by the rate of flow of air, under standard pressure , through a rammed cylindrical specimen. DIN standards define the specimen dimensions to be 50 mm in diameter and 50 mm tall, [ 2 ] while the American Foundry Society defines it to be two inches in diameter and two inches tall. [ 3 ]
The formula is:

PN = (V × H) / (P × A × T)

where V is the volume of air (cm³) passed through the specimen, H is the height of the specimen (cm), P is the air pressure (g/cm²), A is the cross-sectional area of the specimen (cm²), and T is the time (min).
The American Foundry Society has also released a chart in which the back pressure (P) from a rammed specimen placed on a permeability meter is correlated with a permeability number. The permeability number so measured is used in foundries for recording permeability values. | https://en.wikipedia.org/wiki/Permeability_(foundry_sand)
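A sketch of the permeability-number formula above; the specimen dimensions follow the AFS convention (two inches by two inches), while the air volume, pressure, and time are illustrative assumptions.

```python
# Permeability number PN = (V * H) / (P * A * T) for a rammed AFS specimen.
import math

V = 2000.0                   # cm^3 of air passed through the specimen (assumed)
H = 5.08                     # cm, specimen height (2 in)
A = math.pi * (5.08 / 2)**2  # cm^2, cross-section of a 2 in diameter specimen
P = 10.0                     # g/cm^2, assumed air pressure
T = 1.5                      # min, assumed time for the air to pass

PN = (V * H) / (P * A * T)
print(f"Permeability number: {PN:.1f}")
```

A shorter time T (air passing more quickly) yields a higher permeability number, consistent with a more open sand structure.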
In fluid mechanics , materials science and Earth sciences , the permeability of porous media (often, a rock or soil ) is a measure of the ability for fluids (gas or liquid) to flow through the media; it is commonly symbolized as k .
Fluids can more easily flow through a material with high permeability than one with low permeability. [ 1 ]
The permeability of a medium is related to the porosity , but also to the shapes of the pores in the medium and their level of connectedness. [ 2 ] Fluid flows can also be influenced in different lithological settings by brittle deformation of rocks in fault zones ; the mechanisms by which this occurs are the subject of fault zone hydrogeology . [ 3 ] Permeability is also affected by the pressure inside a material.
The SI unit for permeability is the square metre (m 2 ). A practical unit for permeability is the darcy (d), or more commonly the millidarcy (md) (1 d ≈ 10 −12 m 2 ). The name honors the French engineer Henry Darcy , who first described the flow of water through sand filters for potable water supply. Permeability values for most materials typically range from a fraction of a millidarcy to several thousand millidarcys. The unit of square centimetre (cm 2 ) is also sometimes used (1 cm 2 = 10 −4 m 2 ≈ 10 8 d).
The concept of permeability is of importance in determining the flow characteristics of hydrocarbons in oil and gas reservoirs, [ 4 ] and of groundwater in aquifers . [ 5 ]
For a rock to be considered as an exploitable hydrocarbon reservoir without stimulation, its permeability must be greater than approximately 100 md (depending on the nature of the hydrocarbon – gas reservoirs with lower permeabilities are still exploitable because of the lower viscosity of gas in comparison with oil). Rocks with permeabilities significantly lower than 100 md can form efficient seals (see petroleum geology ). Unconsolidated sands may have permeabilities of over 5000 md.
The concept also has many practical applications outside of geology, for example in chemical engineering (e.g., filtration ), as well as in civil engineering when determining whether the ground conditions of a site are suitable for construction.
The concept of permeability is also useful in computational fluid dynamics (CFD) for modeling flow through complex geometries such as packed beds, filter papers, or tube banks. When the size of individual components—such as particle diameter in packed beds or tube diameter in tube bundles—are significantly smaller than the overall flow domain, direct modeling becomes computationally intensive due to the fine mesh resolution required. In such cases, the domain can be approximated as a porous medium, with permeability estimated using correlations, experimental data, or separate fluid flow simulations. [ 6 ]
Permeability is part of the proportionality constant in Darcy's law , which relates discharge (flow rate) and fluid physical properties (e.g. dynamic viscosity ) to a pressure gradient applied to the porous media: [ 7 ]

v = (k/μ) (Δp/Δx)

Therefore:

k = v μ Δx/Δp

where: v is the superficial fluid flow velocity through the medium, k is the permeability, μ is the dynamic viscosity of the fluid, Δp is the applied pressure difference, and Δx is the thickness of the bed of the porous medium.
In naturally occurring materials, the permeability values range over many orders of magnitude (see table below for an example of this range).
The global proportionality constant for the flow of water through a porous medium is called the hydraulic conductivity ( K , unit: m/s). Permeability, or intrinsic permeability, ( k , unit: m 2 ) is a part of this, and is a specific property characteristic of the solid skeleton and the microstructure of the porous medium itself, independently of the nature and properties of the fluid flowing through the pores of the medium. This makes it possible to take into account the effect of temperature on the viscosity of the fluid flowing through the porous medium, and to address fluids other than pure water, e.g. , concentrated brines , petroleum , or organic solvents . Given the value of hydraulic conductivity for a studied system, the permeability can be calculated as follows:

k = K μ/(ρg)

where k is the permeability (m 2 ), K is the hydraulic conductivity (m/s), μ is the dynamic viscosity of the fluid (Pa·s), ρ is the density of the fluid (kg/m 3 ), and g is the acceleration due to gravity (m/s 2 ).
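A sketch of this conversion (the fluid properties assume water near 20 °C, and the conductivity value is an illustrative figure for clean sand, not from the text):

```python
# Intrinsic permeability k from hydraulic conductivity K, via k = K*mu/(rho*g).
MU = 1.0e-3   # Pa*s, dynamic viscosity of water (~20 C)
RHO = 998.0   # kg/m^3, density of water
G = 9.81      # m/s^2, gravitational acceleration

def intrinsic_permeability(K):
    """K in m/s -> k in m^2."""
    return K * MU / (RHO * G)

K_sand = 1e-4  # m/s, illustrative conductivity for clean sand
k = intrinsic_permeability(K_sand)
print(f"k = {k:.3e} m^2 (~{k / 9.87e-13:.0f} darcys)")
```

Because μ appears in the numerator, the same medium has a different hydraulic conductivity for, say, a viscous brine than for fresh water, while k itself stays fixed.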
Tissues such as brain, liver, and muscle can be treated as a heterogeneous porous medium. Describing the flow of biofluids (blood, cerebrospinal fluid, etc.) within such a medium requires a full three-dimensional anisotropic treatment of the tissue. In this case the scalar hydraulic permeability is replaced with the hydraulic permeability tensor, so that Darcy's law reads [ 8 ]

q = −(κ/μ) ∇p
Connecting this expression to the isotropic case, κ = k 1 , where k is the scalar hydraulic permeability and 1 is the identity tensor .
Permeability is typically determined in the lab by application of Darcy's law under steady state conditions or, more generally, by application of various solutions to the diffusion equation for unsteady flow conditions. [ 9 ]
Permeability needs to be measured, either directly (using Darcy's law), or through estimation using empirically derived formulas. However, for some simple models of porous media, permeability can be calculated (e.g., random close packing of identical spheres ).
Based on the Hagen–Poiseuille equation for viscous flow in a pipe, permeability can be expressed as:

k I = C · d 2

where k I is the intrinsic permeability, C is a dimensionless constant related to the configuration of the flow paths, and d is the average pore diameter.
Absolute permeability denotes the permeability in a porous medium that is 100% saturated with a single-phase fluid. This may also be called the intrinsic permeability or specific permeability. These terms refer to the quality that the permeability value in question is an intensive property of the medium, not a spatial average of a heterogeneous block of material, and that it is a function of the material structure only (and not of the fluid). They explicitly distinguish the value from that of relative permeability . [ 10 ]
Sometimes, permeability to gases can be somewhat different than that for liquids in the same media. One difference is attributable to the "slippage" of gas at the interface with the solid [ 11 ] when the gas mean free path is comparable to the pore size (about 0.01 to 0.1 μm at standard temperature and pressure). See also Knudsen diffusion and constrictivity . For example, measurement of permeability through sandstones and shales yielded values from 9.0×10 −19 m 2 to 2.4×10 −12 m 2 for water and between 1.7×10 −17 m 2 to 2.6×10 −12 m 2 for nitrogen gas. [ 12 ] Gas permeability of reservoir rock and source rock is important in petroleum engineering , when considering the optimal extraction of gas from unconventional sources such as shale gas , tight gas , or coalbed methane .
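One common way to model this gas-slippage difference is the Klinkenberg correction, in which the apparent gas permeability exceeds the liquid-equivalent permeability at low mean pore pressure. The sketch below is my own illustration with assumed values; neither the values nor the correction itself come from the text.

```python
# Klinkenberg gas-slippage correction: k_apparent = k_inf * (1 + b / p_mean),
# where k_inf is the liquid-equivalent permeability and b is a slip factor.
def apparent_gas_perm(k_inf, b, p_mean):
    """b and p_mean must be in consistent pressure units."""
    return k_inf * (1.0 + b / p_mean)

k_inf = 1e-15  # m^2, assumed liquid permeability of a tight rock
b = 0.8e5      # Pa, assumed Klinkenberg slip factor
for p in (1e5, 5e5, 2e6):
    print(p, apparent_gas_perm(k_inf, b, p))
```

As the mean pressure grows, the slip term shrinks and the apparent gas permeability approaches k_inf, which is why gas-measured permeabilities are extrapolated to infinite pressure.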
To model permeability in anisotropic media, a permeability tensor is needed. Pressure can be applied in three directions, and for each direction, permeability can be measured (via Darcy's law in 3D) in three directions, thus leading to a 3 by 3 tensor. The tensor is realised using a 3 by 3 matrix that is both symmetric and positive definite (an SPD matrix):

κ = ( k xx k xy k xz ; k xy k yy k yz ; k xz k yz k zz )
The permeability tensor is always diagonalizable (being both symmetric and positive definite). The eigenvectors will yield the principal directions of flow where flow is parallel to the pressure gradient, and the eigenvalues represent the principal permeabilities.
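A brief sketch of this diagonalization (the tensor values below are an illustrative assumption, not measured data):

```python
# Principal permeabilities and principal flow directions of an assumed
# symmetric positive-definite permeability tensor, via eigendecomposition.
import numpy as np

k_tensor = np.array([[3.0, 1.0, 0.0],
                     [1.0, 2.0, 0.0],
                     [0.0, 0.0, 1.0]])  # illustrative SPD tensor (e.g. darcys)

eigvals, eigvecs = np.linalg.eigh(k_tensor)  # eigh is for symmetric matrices
print("principal permeabilities:", eigvals)
print("principal directions (columns):")
print(eigvecs)
```

The eigenvalues are the principal permeabilities, and each eigenvector column is a direction along which the flow is parallel to the pressure gradient.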
These values do not depend on the fluid properties; see the table derived from the same source for values of hydraulic conductivity , which are specific to the material through which the fluid is flowing. [ 13 ] | https://en.wikipedia.org/wiki/Permeability_(porous_media) |
Permeable paving surfaces are made of either a porous material that enables stormwater to flow through it or nonporous blocks spaced so that water can flow between the gaps. Permeable paving can also include a variety of surfacing techniques for roads, parking lots, and pedestrian walkways. Permeable pavement surfaces may be composed of pervious concrete , porous asphalt, paving stones , or interlocking pavers. [ 1 ] Unlike traditional impervious paving materials such as concrete and asphalt, permeable paving systems allow stormwater to percolate and infiltrate through the pavement and into the aggregate layers and/or soil below. In addition to reducing surface runoff, permeable paving systems can trap suspended solids, thereby filtering pollutants from stormwater. [ 2 ]
Permeable pavement is commonly used on roads, paths and parking lots subject to light vehicular traffic, such as cycle-paths , service or emergency access lanes, road and airport shoulders, and residential sidewalks and driveways.
Permeable solutions can be based on porous asphalt and concrete surfaces, concrete pavers (permeable interlocking concrete paving systems – PICP), or polymer-based grass pavers, grids and geocells. Porous pavements such as pervious concrete and pervious asphalt are better suited for urbanized areas that see more frequent vehicular traffic, while concrete pavers, grids, and geocells are better suited for light vehicular traffic, pedestrian and cycling pathways, and overflow parking lots. [ 3 ] Pervious concrete pavers allow water to percolate and infiltrate through the pavers and into the aggregate layers and/or soil below. Impervious concrete pavers installed with ample void space between each paver function in the same way as pervious concrete pavers as they enable stormwater to drain into the voids between each paver, either filled with coarse aggregate or vegetation, to a stone and/or soil base layer for on-site infiltration and filtering. [ 4 ] Polymer based grass grid or cellular paver systems provide load bearing reinforcement for unpaved surfaces of gravel or turf.
Grass pavers, plastic turf reinforcing grids (PTRG), and geocells ( cellular confinement systems ) are honeycombed 3D grid-cellular systems, made of thin-walled HDPE plastic or other polymer alloys. These provide grass reinforcement, ground stabilization and gravel retention. The 3D structure reinforces infill and transfers vertical loads from the surface, distributing them over a wider area. Selection of the type of cellular grid depends to an extent on the surface material, traffic and loads. The cellular grids are installed on a prepared base layer of open-graded stone (higher void spacing) or engineered stone (stronger). The surface layer may be compacted gravel or topsoil seeded with grass and fertilizer. In addition to load support, the cellular grid reduces compaction of the soil to maintain permeability, while the roots improve permeability due to their root channels. [ 5 ]
In new suburban growth, porous pavements protect watersheds by delaying and filtering the surge flow. In existing built-up areas and towns, redevelopment and reconstruction are opportunities to implement stormwater water management practices. Permeable paving is an important component in Low Impact Development (LID), a process for land development in the United States that attempts to minimize impacts on water quality and the similar concept of sustainable drainage systems (SuDS) in the United Kingdom.
The infiltration capacity of the native soil is a key design consideration for determining the depth of base rock for stormwater storage or for whether an underdrain system is needed.
Permeable paving surfaces have been demonstrated as effective in managing runoff from paved surfaces and recharging groundwater aquifers. [ 6 ] [ 7 ] Large volumes of urban runoff causes serious erosion and siltation in surface water bodies. Permeable pavers provide a solid ground surface, strong enough to take heavy loads, like large vehicles, while at the same time they allow water to filter through the surface and reach the underlying soils, mimicking natural ground absorption. [ 8 ] They can reduce downstream flooding and stream bank erosion, and maintain base flows in rivers to keep ecosystems self-sustaining. Permeable pavers also combat erosion that occurs when grass is dry or dead, by replacing grassed areas in suburban and residential environments. [ 9 ] The goal is to control stormwater at the source, reduce runoff and improve water quality by filtering pollutants in the subsurface layers. [ 3 ]
To control pollutants found in surface runoff , permeable paving surfaces capture the stormwater in the soil or aggregate base below the road or pathway, and subsequently treat the runoff via percolation , which allows water to infiltrate, supporting groundwater recharge or contain the stormwater to be released back into municipal stormwater management systems after a storm. [ 10 ] Permeable paving systems have shown effective in reducing suspended solids , Biochemical Oxygen Demand (BOD), chemical oxygen demand , and ammonium concentrations within groundwater . [ 10 ] In areas where infiltration is not possible due to unsuitable soil conditions, permeable pavements are used in the attenuation mode where water is retained in the pavement and slowly released to surface water systems between storm events. [ 10 ]
Permeable pavements may give urban trees the rooting space they need to grow to full size. A "structural-soil" pavement base combines structural aggregate with soil; a porous surface admits vital air and water to the rooting zone. This integrates healthy ecology and thriving cities, with the living tree canopy above, the city's traffic on the ground, and living tree roots below. The benefits of permeables on urban tree growth have not been conclusively demonstrated and many researchers have observed tree growth is not increased if construction practices compact materials before permeable pavements are installed. [ 11 ] [ 12 ]
Research findings indicate that employing high albedo (reflective) and permeable pavement has the potential to alleviate near-surface heat island effects and enhance air quality, while also potentially improving human thermal comfort . In comparison to impermeable pavement, permeable pavement exhibits minimal thermal impact on the near-surface air due to its capacity for heat exchange. [ 13 ]
Permeable pavements are designed to replace Effective Impervious Areas (EIAs), but can be used, in some cases, to manage stormwater from other impervious surfaces on site. [ 14 ] Use of this technique must be part of an overall on site management system for stormwater, and is not a replacement for other techniques.
During large storm events, the water table below the porous pavement can rise to a higher level, preventing the precipitation from being absorbed into the ground. Some additional water is stored in the open graded or crushed drain rock base, and remains until the subgrade can absorb the water. For clay-based soils, or other low to 'non'-draining soils, it is important to increase the depth of the crushed drain rock base to allow additional capacity for the water as it waits to be infiltrated.
Runoff across some land uses may become contaminated, where pollutant concentrations exceed those typically found in stormwater. These "hot spots" include commercial plant nurseries , recycling facilities, fueling stations , industrial storage, marinas , some outdoor loading facilities , public works yards, hazardous materials generators (if containers are exposed to rainfall), vehicle service, washing, and maintenance areas, and steam cleaning facilities. Since porous pavement is an infiltration practice, it should not be applied at stormwater hot spots due to the potential for groundwater contamination. All contaminated runoff should be prevented from entering municipal storm drain systems by using best management practices (BMPs) for the specific industry or activity. [ 15 ]
Reference sources differ on whether low or medium traffic volumes and weights are appropriate for porous pavements due to the variety of physical properties of each system. For example, around truck loading docks and areas of high commercial traffic, porous pavement is sometimes cited as being inappropriate. However, given the variability of products available, the growing number of existing installations in North America and targeted research by both manufacturers and user agencies, the range of accepted applications seems to be expanding. [ 16 ] Some concrete paver companies have developed products specifically for industrial applications. Working examples exist at fire halls, busy retail complex parking lots, and on public and private roads, including intersections in parts of North America with quite severe winter conditions.
Permeable pavements may not be appropriate when land surrounding or draining into the pavement exceeds a 20 percent slope, where pavement is down slope from buildings or where foundations have piped drainage at their footers. The key is to ensure that drainage from other parts of a site is intercepted and dealt with separately rather than being directed onto permeable surfaces. [ citation needed ]
Cold climates may present special challenges. Road salt contains chlorides that could migrate through the porous pavement into groundwater. Snow plow blades could catch the edges of concrete pavers or other block installations, damaging surfaces and creating potholes . Sand cannot be used for snow and ice control on porous surfaces because it will plug the pores and reduce permeability. [ 17 ] Although there are design modifications to reduce the risks, infiltrating runoff may freeze below the pavement, causing frost heave. Another issue is spalling damage, which occurs exclusively on porous concrete pavement when salt is applied during winter. For these reasons porous paving is often suggested for warmer climates. However, other materials have proven effective, even lowering winter maintenance costs by preserving salt within the pavement itself. This also reduces the amount of stormwater runoff contaminated with salt chlorides. [ 18 ] Pervious concrete and asphalt designed to reduce frost heave and spalling damage have been used successfully in Norway and New Hampshire . [ 19 ] Furthermore, experience suggests providing rapid drainage below porous surfaces to increase the rate of snowmelt above ground.
It can be difficult to compare cost impacts between conventional impervious surfaces and permeable surfaces given the variables such as lifespan, geographic location, type of permeable paving system and site-specific factors. Some estimates put the cost of permeable paving at about one third more expensive than that of conventional impervious paving. [ 20 ] Using permeable paving, however, can reduce the cost of providing larger or more stormwater BMPs on site, and these savings should be factored into any cost analysis. In addition, the off-site environmental impact costs of not reducing on-site stormwater volumes and pollution have historically been ignored or assigned to other groups (local government parks, public works and environmental restoration budgets, fisheries losses, etc.). Permeable paving systems, specifically pervious concrete pavers, have shown significant cost benefits after a Life Cycle Assessment was performed, as the total weight of material needed for each unit is reduced by the porous design. [ 21 ]
Permeable paving systems, especially those with porous surfaces, require maintenance to keep the pores clear of fine aggregates so as not to hinder the system's ability to infiltrate stormwater. The frequency of cleaning depends on many site-specific factors, such as runoff volume, neighboring sites and climate. Often, cleaning of permeable paving systems is done by suction excavators , which are alternatively used for excavation in sensitive areas and are therefore becoming increasingly common. If maintenance is not carried out on a regular basis, porous pavements can begin to function more like impervious surfaces. [ 3 ] With more advanced paving systems the level of maintenance needed can be greatly decreased; elastomerically bound glass pavement requires less maintenance than regular concrete paving because it has 50% more void space.
Plastic grid systems, if selected and installed correctly, are becoming increasingly popular with local-government maintenance personnel owing to reduced maintenance effort: less gravel migration and better weed suppression in public park settings.
Some permeable paving products are prone to damage from misuse, such as drivers who tear up patches of plastic-and-gravel grid systems by "joy riding" on remote parking lots at night. The damage is not difficult to repair but can look unsightly in the meantime. Grass pavers require supplemental watering in the first year to establish the vegetation; otherwise they may need to be re-seeded. Regional climate also means that most grass applications will go dormant during the dry season. While brown vegetation is only a matter of aesthetics, it can influence public support for this type of permeable paving.
Traditional permeable concrete paving bricks tend to lose their color in a relatively short time, mainly because of efflorescence ; replacing or cleaning them can be costly.
Installation of porous pavements is no more difficult than that of dense pavements, but has different specifications and procedures which must be strictly adhered to. Nine different families of porous paving materials present distinctive advantages and disadvantages for specific applications. Here are examples:
Pervious concrete is widely available, can bear frequent traffic, and is universally accessible. Pervious concrete quality depends on the installer's knowledge and experience. [ 22 ]
Plastic grids allow for a 100% porous system using structural grid systems for containing and stabilizing either gravel or turf. These grids come in a variety of shapes and sizes depending on use, from pathways to commercial parking lots. These systems have been used readily in Europe for over a decade, but are gaining popularity in North America due to government requirements that many projects meet LEED environmental building standards. Plastic grid systems are also popular with homeowners due to their lower installation cost, ease of installation, and versatility. The ideal design for this type of grid system is a closed-cell system, which prevents gravel/sand/turf from migrating laterally. [ 23 ]
Porous asphalt is produced and placed using the same methods as conventional asphalt concrete ; it differs in that fine (small) aggregates are omitted from the asphalt mixture. The remaining large, single-sized aggregate particles leave open voids that give the material its porosity and permeability. To ensure pavement strength, fiber may be added to the mix or a polymer-modified asphalt binder may be used. [ 24 ] Generally, porous asphalt pavements are designed with a subsurface reservoir that holds water that passes through the pavement, allowing it to evaporate and/or percolate slowly into the surrounding soils. [ 25 ] [ 26 ]
Open-graded friction courses (OGFC) are a porous asphalt surface course used on highways to improve driving safety by removing water from the surface. These use an open-graded mix design for the top layer of asphalt. Unlike a full-depth porous asphalt pavement, OGFCs do not drain water to the base of a pavement. Instead, they allow water to infiltrate the top 3/4 to 1.5 inch of the pavement and then drain out to the side of the roadway. This can improve the friction characteristics of the road and reduce road spray. [ 27 ]
Single-sized aggregate without any binder, e.g. loose gravel, stone-chippings, is another alternative. Although it can only be safely used in walkways and very low-speed, low-traffic settings, e.g. car-parks and drives, its potential cumulative area is great. [ citation needed ]
Porous turf , if properly constructed, can be used for occasional parking like that at churches and stadia. Plastic turf reinforcing grids can be used to support the increased load. [ 28 ] : 2 [ 29 ] Living turf transpires water, actively counteracting the "heat island" with what appears to be a green open lawn.
Permeable interlocking concrete pavements are concrete units with open, permeable spaces between the units. [ 28 ] : 2 More recently, manufacturers have introduced styles with smaller joints, allowing for better ADA compliance while still capturing a significant amount of stormwater. They give an architectural appearance, and can bear both light and heavy traffic, particularly interlocking concrete pavers, excepting high-volume or high-speed roads. [ 30 ] Some products are polymer-coated and have an entirely porous face.
Permeable clay brick pavements are fired clay brick units with open, permeable spaces between the units. Clay pavers provide a durable surface that allows stormwater runoff to permeate through the joints [ citation needed ] .
Resin bound paving is a mixture of resin binder and aggregate. Clear resin is used to fully coat each aggregate particle before laying. Enough resin is used to allow each aggregate particle to adhere to one another and to the base yet leave voids for water to permeate through. Resin bound paving provides a strong and durable surface that is suitable for pedestrian and vehicular traffic in applications such as pathways, driveways, car parks and access roads [ citation needed ] .
Stabilized decomposed granite is a mixture of a non-resin binder and aggregate (decomposed granite). The binder, which may include color, is mixed with the decomposed granite and the mixture is moistened either before or after it is put in place. Stabilized decomposed granite provides a strong and durable surface that is suitable for pedestrian and vehicular traffic in applications such as pathways, driveways, car parks and access roads. The surface is ADA compliant and can be painted on. [ citation needed ]
Elastomerically bound recycled glass porous pavement consists of processed post-consumer glass bonded with a mixture of resins, pigments, granite and binding agents. [ citation needed ] Approximately 75 percent of glass in the U.S. is disposed of in landfills. [ 31 ] [ 32 ]
Wood permeable pavement is a natural and sustainable building material. Architects and landscape designers turning towards permeable pavers will find that some highly durable hardwoods (e.g. black locust) are an effective permeable paver material. Wood paver blocks made of black locust provide a highly permeable, durable surface that will last for decades because of the characteristics of the wood. [ 33 ] Black Locust Lumber wood pavers exceed 10,180 PSI ( pounds per square inch ) and have a Janka hardness of 1,700 lbf. [ 34 ] They are suitable for pedestrian and vehicular traffic in the form of pathways and driveways and are placed upon permeable foundations. [ 35 ]
Stormwater management practices related to roadways:
The permeameter is an instrument for rapidly measuring the electromagnetic permeability of samples of iron or steel with sufficient accuracy for many commercial purposes. The name was first applied by Silvanus P. Thompson to an apparatus devised by himself in 1890, which indicates the mechanical force required to detach one end of the sample, arranged as the core of a straight electromagnet , from an iron yoke of special form; when this force is known, the permeability can be easily calculated. [ 1 ]
Permeance , in general, is the degree to which a material admits a flow of matter or energy . Permeance is usually represented by a curly capital P: P .
In electromagnetism , permeance is the inverse of reluctance . In a magnetic circuit, permeance is a measure of the quantity of magnetic flux for a number of current-turns. A magnetic circuit almost acts as though the flux were conducted; permeance is therefore larger for a wide, short section of material and smaller for a narrow, long one. This concept is analogous to electrical conductance in the electric circuit .
Magnetic permeance $\mathcal{P}$ is defined as the reciprocal of magnetic reluctance $\mathcal{R}$ (in analogy with the reciprocity between electric conductance and resistance): $\mathcal{P} = \frac{1}{\mathcal{R}}$
which can also be rewritten as: $\mathcal{P} = \frac{\Phi_B}{NI}$
using Hopkinson's law (magnetic-circuit analogue of Ohm's law for electric circuits) and the definition of magnetomotive force (magnetic analogue of electromotive force ): $\mathcal{F} = \Phi_B \mathcal{R} = NI$
where $\Phi_B$ is the magnetic flux, $N$ the number of turns in the coil, $I$ the current, $\mathcal{F}$ the magnetomotive force, and $\mathcal{R}$ the reluctance.
Alternatively, in terms of magnetic permeability (analogous to electric conductivity ): $\mathcal{P} = \frac{\mu A}{\ell}$
where $\mu$ is the permeability of the material, $A$ the cross-sectional area, and $\ell$ the length of the magnetic path.
The SI unit of magnetic permeance is the henry (H), equivalently, webers per ampere . [ a ]
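As a quick numerical illustration of the formulas above, the sketch below computes a permeance from $\mu$, $A$ and $\ell$, then the flux from Hopkinson's law. The core dimensions and relative permeability are hypothetical, chosen only to exercise the equations:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def permeance(mu_r: float, area_m2: float, length_m: float) -> float:
    """Magnetic permeance P = mu * A / l, in henries (Wb/A)."""
    return mu_r * MU0 * area_m2 / length_m

def flux_wb(perm_h: float, turns: int, current_a: float) -> float:
    """Hopkinson's law rearranged: Phi_B = P * N * I."""
    return perm_h * turns * current_a

# Hypothetical iron core: mu_r = 1000, A = 4 cm^2, magnetic path length 10 cm
p = permeance(1000, 4e-4, 0.10)   # about 5.03e-6 H
phi = flux_wb(p, 500, 2.0)        # flux for a 500-turn coil carrying 2 A
reluctance = 1 / p                # R = 1/P, consistent with the definition above
```

Note that `reluctance * phi` recovers the magnetomotive force $NI$, as Hopkinson's law requires.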
In materials science , permeance is the degree to which a material transmits another substance.
In physics and engineering , permeation (also called imbuing ) is the penetration of a permeate (a fluid such as a liquid , gas , or vapor ) through a solid. It is directly related to the concentration gradient of the permeate, a material's intrinsic permeability , and the materials' mass diffusivity . [ 1 ] Permeation is modeled by equations such as Fick's laws of diffusion , and can be measured using tools such as a minipermeameter .
The process of permeation involves the diffusion of molecules, called the permeant, through a membrane or interface. Permeation works through diffusion; the permeant will move from high concentration to low concentration across the interface. A material can be semipermeable, with the presence of a semipermeable membrane . Only molecules or ions with certain properties will be able to diffuse across such a membrane. This is a very important mechanism in biology where fluids inside a blood vessel need to be regulated and controlled. Permeation can occur through most materials including metals, ceramics and polymers. However, the permeability of metals is much lower than that of ceramics and polymers due to their crystal structure and porosity.
Permeation is something that must be considered carefully in many polymer applications, due to their high permeability. Permeability depends on the temperature of the interaction as well as the characteristics of both the polymer and the permeant component. Through the process of sorption , molecules of the permeant can be either absorbed or desorbed at the interface. The permeation of a material can be measured through numerous methods that quantify the permeability of a substance through a specific material.
Permeability due to diffusion is measured in SI units of mol/(m·s·Pa), although barrers are also commonly used. Permeability due to diffusion is not to be confused with Permeability (earth sciences) , which concerns fluid flow in porous solids and is measured in darcys. [ 2 ] [ 3 ]
Nollet tried to seal wine containers with a pig's bladder and stored them under water. After a while the bladder bulged outwards. He noticed the high pressure that discharged after he pierced the bladder. Curious, he did the experiment the other way round: he filled the container with water and stored it in wine. The result was a bulging inwards of the bladder. His notes about this experiment are the first scientific mention of permeation (later it would be called semipermeability).
Graham experimentally proved the dependency of gas diffusion on molecular weight , which is now known as Graham's law .
Barrer developed the modern Barrer measurement technique, and first used scientific methods for measuring permeation rates.
The permeation of films and membranes can be measured with any gas or liquid. One method uses a central module which is separated by the test film: the testing gas is fed on the one side of the cell and the permeated gas is carried to the detector by a sweep gas. The diagram on the right shows a testing cell for films, normally made from metals like stainless steel . The photo shows a testing cell for pipes made from glass , similar to a Liebig condenser . The testing medium (liquid or gas) is situated in the inner white pipe and the permeate is collected in the space between the pipe and the glass wall. It is transported by a sweep gas (connected to the upper and lower joint) to an analysing device.
Permeation can also be measured through intermittent contact. This method involves taking a sample of the test chemical and placing it on the surface of the material whose permeability is being observed while adding or removing specific amounts of the test chemical. After a known amount of time, the material is analyzed to find the concentration of the test chemical present throughout its structure. Along with the amount of time the chemical was on the material and the analysis of the test material, one can determine the cumulative permeation of the test chemical.
The following table gives examples of the calculated permeability coefficient of certain gases through a silicone membrane.
* 1 Barrer = 10⁻¹⁰ cm³(STP)·cm / (cm²·s·cmHg)
Unless otherwise noted, permeabilities are measured and reported at 25 °C (RTP), not at STP.
From W. L. Robb, "Thin Silicone Membranes – Their Permeation Properties and Some Applications", Annals of the New York Academy of Sciences, vol. 146, issue 1 (January 1968), pp. 119–137. [ 4 ]
The flux or flow of mass of the permeate through the solid can be modeled by Fick's first law .
This equation can be modified to a very simple formula that can be used in basic problems to approximate permeation through a membrane: $J = \frac{D\,(C_1 - C_2)}{\ell}$
where $J$ is the flux of the permeate, $D$ the diffusion coefficient, $C_1$ and $C_2$ the permeant concentrations on the two sides of the membrane, and $\ell$ the membrane thickness.
We can introduce $S$ into this equation, which represents the sorption equilibrium parameter, the constant of proportionality between pressure $p$ and concentration $C$; this relationship can be represented as $C = Sp$.
The diffusion coefficient can be combined with the sorption equilibrium parameter to get the final form of the equation, where $P = SD$ is the permeability of the membrane.
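The solution-diffusion relations above ($C = Sp$, $P = SD$) can be sketched as a short calculation. The numerical values below are hypothetical and chosen only for illustration; real membrane data would come from tables such as the silicone-membrane one mentioned earlier:

```python
def permeability(S: float, D: float) -> float:
    """Membrane permeability from sorption parameter and diffusion coefficient: P = S * D."""
    return S * D

def membrane_flux(P: float, p_feed: float, p_permeate: float, thickness: float) -> float:
    """Steady-state flux J = P * (p1 - p2) / l (Fick's first law with C = S*p)."""
    return P * (p_feed - p_permeate) / thickness

# Hypothetical SI values: S in mol/(m^3 Pa), D in m^2/s, pressures in Pa, thickness in m
P = permeability(S=1e-4, D=1e-9)        # 1e-13 mol/(m s Pa)
J = membrane_flux(P, 2e5, 1e5, 1e-4)    # 100 um membrane, 1 bar pressure difference
```

The flux scales linearly with the pressure difference and inversely with membrane thickness, which is the content of the simple formula above.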
In practical applications when looking at gases permeating metals, there is a way to relate gas pressure to concentration. Many gases exist as diatomic molecules when in the gaseous phase, but when permeating metals they exist in their singular ionic form. Sieverts' law states that the solubility of a gas, in the form of a diatomic molecule, in metal is proportional to the square root of the partial pressure of the gas.
The flux can be approximated in this case by the equation $J = \frac{D\,K\,(\sqrt{p_1} - \sqrt{p_2})}{\ell}$, with $p_1$ and $p_2$ the gas partial pressures on the two sides of the metal.
We can introduce $K$ into this equation, which represents the reaction equilibrium constant , via the relationship $S = K\sqrt{p_N}$.
The diffusion coefficient can be combined with the reaction equilibrium constant to get the final form of the equation, where $P = KD$ is the permeability of the membrane.
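The Sieverts'-law case above can be sketched the same way. The values of $D$ and $K$ below are hypothetical; the point of the example is the square-root pressure dependence, which means quadrupling the upstream pressure only doubles the flux:

```python
import math

def sieverts_concentration(K: float, pressure_pa: float) -> float:
    """Dissolved atomic concentration in a metal: C = K * sqrt(p) (Sieverts' law)."""
    return K * math.sqrt(pressure_pa)

def metal_flux(D: float, K: float, p_high: float, p_low: float, thickness: float) -> float:
    """J = D * K * (sqrt(p1) - sqrt(p2)) / l: Fick's law with Sieverts' boundary concentrations."""
    c1 = sieverts_concentration(K, p_high)
    c2 = sieverts_concentration(K, p_low)
    return D * (c1 - c2) / thickness

# Hypothetical values; vacuum on the downstream side (p_low = 0)
j1 = metal_flux(D=1e-8, K=1e-2, p_high=1e5, p_low=0.0, thickness=1e-3)
j2 = metal_flux(D=1e-8, K=1e-2, p_high=4e5, p_low=0.0, thickness=1e-3)
```

Here `j2` is twice `j1` despite the fourfold pressure increase, unlike the linear pressure dependence of the polymer-membrane case.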
Permethrin is a medication and an insecticide . [ 6 ] [ 7 ] As a medication, it is used to treat scabies and lice . [ 8 ] It is applied to the skin as a cream or lotion. [ 6 ] As an insecticide, it can be sprayed onto outer clothing or mosquito nets to kill the insects that touch them. [ 7 ] [ 9 ]
Side effects include rash and irritation where it is applied . [ 8 ] Use during pregnancy appears to be safe, and it is approved for use on and around people over the age of two months. [ 6 ] Permethrin is in the pyrethroid family of medications. [ 6 ] It works by disrupting the function of the neurons of lice and scabies mites. [ 6 ]
Permethrin was discovered in 1972. [ 10 ] It is on the World Health Organization's List of Essential Medicines . [ 11 ] In 2022, it was the 351st most commonly prescribed medication in the United States, with more than 40,000 prescriptions. [ 12 ]
Permethrin is available for topical use as a cream or lotion. It is indicated for the treatment of scabies and for the treatment, and prevention in exposed individuals, of head lice . [ 17 ] In general, one treatment is curative. [ 18 ] A single application of permethrin is more effective than a single oral dose of ivermectin for scabies, and permethrin provides more rapid symptomatic relief than ivermectin. [ 19 ] When a second dose of ivermectin is given days later, the efficacy of the two treatments approaches parity. [ 20 ] Contact with the eyes should be avoided. [ 21 ]
Permethrin acts on the nerve cell membrane to disrupt the sodium channel current by which the polarization of the membrane is regulated. Delayed repolarization and paralysis of the pests are the consequences of this disturbance. [ 22 ] [ 23 ]
In agriculture, permethrin is mainly used on cotton, wheat, maize , and alfalfa crops. Its use is controversial because, as a broad-spectrum chemical, it kills indiscriminately; as well as the intended pests, it can harm beneficial insects, including honey bees , as well as cats and aquatic life. [ 24 ] [ 25 ]
Permethrin kills ticks and mosquitoes on contact with treated clothing. A method of reducing deer tick populations by treating rodent vectors involves stuffing biodegradable cardboard tubes with permethrin-treated cotton. Mice collect the cotton for lining their nests. Permethrin on the cotton kills any immature ticks feeding on the mice. [ citation needed ]
Permethrin is used in tropical areas to prevent mosquito-borne disease such as dengue fever and malaria . Mosquito nets used to cover beds may be treated with a solution of permethrin. This increases the effectiveness of the bed net by killing parasitic insects before they are able to find gaps or holes in the net. Personnel working in malaria-endemic areas may be instructed to treat their clothing with permethrin as well. [ citation needed ]
Permethrin is the most commonly used insecticide worldwide for the protection of wool from keratinophagous insects such as Tineola bisselliella . [ 26 ]
To better protect soldiers from the risk and annoyance of biting insects, the British [ 27 ] and US armies are treating all new uniforms with permethrin. [ 28 ]
Permethrin (as well as other long-term pyrethroids) is effective over several months, in particular when used indoors. International studies report that permethrin can be detected in house dust, in fine dust, and on indoor surfaces even years after the application. Its degradation rate under indoor conditions is approximately 10% after 3 months. [ 29 ] [ 30 ]
In Aedes aegypti , permethrin resistance arises via " knockdown resistance " (kdr) mutations, a mechanism common to pyrethroids and DDT . This differs from the most common mechanism of insecticide-resistance evolution, selection for preexisting, low-frequency alleles . García et al. 2009 found that a kdr allele has rapidly spread throughout Mexico and become dominant there. [ 31 ]
Permethrin is moderately toxic if ingested, causing abdominal pain, sore throat, nausea and vomiting. If inhaled, permethrin may cause headache, respiratory irritation, difficulty breathing, dizziness, nausea and vomiting. Inhalation is more likely from aerosols than from vapors from surfaces and clothing, as permethrin has a low vapor pressure and volatilizes slowly. [ 32 ]
Topical application of permethrin can cause mild skin irritation, burning and paresthesia . [ 32 ] Permethrin has little systemic absorption, and is considered safe for topical use in adults and children over the age of two months. The FDA has assigned it as pregnancy category B. Animal studies have suggested that it may cause endocrine disruption by interfering with estrogenic activity [ 32 ] and have shown no effects on fertility or teratogenicity , but studies in humans have not been performed. The excretion of permethrin in breastmilk is unknown, and it is recommended that breastfeeding be temporarily discontinued during treatment. [ 21 ] Skin reactions are uncommon. [ 33 ] Excessive exposure to permethrin can cause nausea , headache, muscle weakness, excessive salivation , shortness of breath, and seizures . Worker exposure to the chemical can be monitored by measurement of the urinary metabolites , while severe overdose may be confirmed by measurement of permethrin in serum or blood plasma . [ 34 ]
Permethrin does not present any notable genotoxicity or immunotoxicity in humans and farm animals, but is classified by the EPA as a likely human carcinogen when ingested, based on reproducible studies in which mice fed permethrin developed liver and lung tumors . [ 35 ] A 2018 review failed to link permethrin exposure in humans to cancer. [ 36 ]
Permethrin is a chemical categorized in the pyrethroid insecticide group. [ 4 ] The chemicals in the pyrethroid family are created to emulate the chemicals found in the chrysanthemum flower. [ 4 ]
Absorption of topical permethrin is minimal. One in vivo study demonstrated 0.5% absorption in the first 48 hours based upon excretion of urinary metabolites. [ 37 ]
Distribution of permethrin has been studied in rat models, with highest amounts accumulating in fat and the brain. [ 38 ] This can be explained by the lipophilic nature of the permethrin molecule. [ citation needed ]
Metabolism of permethrin occurs mainly in the liver, where the molecule undergoes oxidation by the cytochrome P450 system, as well as hydrolysis, into non-toxic metabolites. [ 37 ]
The elimination of permethrin and its metabolites occurs mainly through urinary excretion, but also through feces. In rats, the excretion half-life is 12 hours for plasma and 9 to 23 hours for certain nervous tissue. [ 32 ]
Permethrin has four stereoisomers (two enantiomeric pairs), arising from the two stereocenters in the cyclopropane ring. The trans enantiomeric pair is known as transpermethrin. (1 R ,3 S )- trans and (1 R ,3 R )- cis enantiomers are responsible for the insecticidal properties of permethrin. [ 39 ]
Permethrin has a half-life of about 40 days in soil, 1–3 weeks on the surface of plants, over 20 days indoors, and 19–27 hours in the water column . [ 40 ] Permethrin-contaminated indoor surfaces can be decontaminated with bleach. [ 41 ]
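Since these environmental half-lives describe first-order decay, the fraction remaining after a given time follows directly from the half-life. A small sketch using the soil figure quoted above (treating ~40 days as the half-life, for illustration):

```python
def fraction_remaining(elapsed_days: float, half_life_days: float) -> float:
    """First-order decay: fraction of the original amount left after the elapsed time."""
    return 0.5 ** (elapsed_days / half_life_days)

# With the ~40-day soil half-life above, three half-lives (120 days) leave 12.5%
soil_fraction = fraction_remaining(120, 40)  # 0.125
```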
In the early 1970s, it was identified that in many pyrethroids, including all natural pyrethrins and some synthetic analogs developed by that time (such as resmethrin ), the furan ring, being a probable site for photo-sensitized attack by oxygen, was responsible for their instability in air and light. Hence, a group of agricultural chemists at the Rothamsted Experimental Station led by Michael Elliott tried to substitute the 5-benzyl-3-furylmethyl alcohol with quite a few structurally similar ones. Discovering that an ester of 3-phenoxybenzyl alcohol with a slightly modified (chlorine-substituted) analog of the chrysanthemic acid they also found earlier was both photo-stable and very toxic for insects, they filed their patent applications in 1972 and published their results in Nature in 1973. [ 10 ] [ 42 ]
Numerous synthetic routes exist for the production of the DV-acid ester precursor. [ 43 ] The pathway known as the Kuraray Process uses four steps. [ 44 ] In general, the final step in the total synthesis of any of the synthetic pyrethroids is a coupling of a DV-acid ester and an alcohol. In the case of permethrin synthesis, the DV-acid cyclopropanecarboxylic acid , 3-(2,2-dichloroethenyl)-2,2-dimethyl-, ethyl ester, is coupled with the alcohol, m-phenoxybenzyl alcohol , through a transesterification reaction with base. Tetraisopropyl titanate or sodium ethylate may be used as the base. [ 44 ]
The alcohol precursor may be prepared in three steps. First, m-cresol , chlorobenzene , sodium hydroxide , potassium hydroxide , and cuprous chloride react to yield m-phenoxytoluene . [ 45 ] Second, oxidation of m-phenoxytoluene over selenium dioxide provides m-phenoxybenzaldehyde . Third, a Cannizzaro reaction of the benzaldehyde in formaldehyde and potassium hydroxide affords the m-phenoxybenzyl alcohol. [ 44 ]
In Nordic countries and North America, a permethrin formulation for lice treatment is marketed under trade name Nix, available over the counter. Johnson & Johnson 's UK brand Lyclear covers an assortment of different products, mostly non-insecticidal, but a few of which are based on permethrin. [ 46 ]
Stronger concentrations of permethrin are used to treat scabies (which embed inside the skin), compared to lice (which remain outside the skin). In the U.S. the more concentrated products such as Elimite are available by prescription only. [ 3 ] [ 47 ]
It is known to be highly toxic to cats, fish and aquatic species with long-lasting effects. [ 4 ] [ 48 ]
Permethrin is toxic to cats; however, it has little effect on dogs. [ 4 ] [ 49 ] [ 50 ] Many cats die after being given flea treatments intended for dogs, or through contact with dogs recently treated with permethrin. [ 51 ] In cats it may induce hyperexcitability, tremors, seizures, and death. [ 52 ]
Toxic exposure of permethrin can cause several symptoms, including convulsion , hyperaesthesia , hyperthermia , hypersalivation , and loss of balance and coordination. Exposure to pyrethroid -derived drugs such as permethrin requires treatment by a veterinarian, otherwise the poisoning is often fatal. [ 53 ] [ 54 ] This intolerance is due to a defect in glucuronosyltransferase , a common detoxification enzyme in other mammals, that also makes the cat intolerant to paracetamol (acetaminophen). [ 55 ] Based on those observations, the use of any external parasiticides based on permethrin is contraindicated for cats.
Permethrin is listed as a "restricted use" substance by the US Environmental Protection Agency (EPA) [ 56 ] due to its high toxicity to aquatic organisms, [ 57 ] so permethrin and permethrin-contaminated water should be properly disposed of. Permethrin is quite stable, having a half life of 51–71 days in an aqueous environment exposed to light. It is also highly persistent in soil. [ 58 ]
The permissible exposure limit ( PEL or OSHA PEL ) is a legal limit in the United States for exposure of an employee to a chemical substance or physical agents such as high level noise. Permissible exposure limits were established by the Occupational Safety and Health Administration (OSHA). Most of OSHA's PELs were issued shortly after the adoption of the Occupational Safety and Health (OSH) Act in 1970. [ 1 ]
Chemical regulation is sometimes [ clarification needed ] expressed in parts per million (ppm), but often [ clarification needed ] in milligrams per cubic meter (mg/m 3 ). [ 2 ] Units of measure for physical agents such as noise are specific to the agent.
A PEL is usually given as a time-weighted average (TWA), although some are short-term exposure limits (STEL) or ceiling limits. A TWA is the average exposure over a specified period, usually a nominal eight hours. This means that for limited periods, a worker may be exposed to concentration excursions higher than the PEL as long as the TWA is not exceeded and any applicable excursion limit is not exceeded. An excursion limit typically means that "...worker exposure levels may exceed 3 times the PEL-TWA for no more than a total of 30 minutes during a workday, and under no circumstances should they exceed 5 times the PEL-TWA, provided that the PEL-TWA is not exceeded." [ 3 ] Excursion limits are enforced in some states (for example Oregon) and on the federal level for certain contaminants such as asbestos.
A short-term exposure limit addresses the average exposure over a 15–30 minute period of maximum exposure during a single work shift. A ceiling limit is one that may not be exceeded at any time, and is applied to irritants and other materials with immediate effects.
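The TWA and excursion-limit rules described above can be checked numerically. The sketch below uses hypothetical exposure segments (concentration in any consistent unit, duration in hours) and applies the typical 3x/30-minute and 5x excursion tests quoted earlier:

```python
def twa(segments, shift_hours=8.0):
    """Time-weighted average over the shift; segments is a list of (concentration, hours)."""
    return sum(c * t for c, t in segments) / shift_hours

def excursions_ok(segments, pel_twa):
    """Typical excursion rule: at most 30 min total above 3x the PEL-TWA, never above 5x."""
    minutes_above_3x = sum(t * 60 for c, t in segments if c > 3 * pel_twa)
    never_above_5x = all(c <= 5 * pel_twa for c, _ in segments)
    return minutes_above_3x <= 30 and never_above_5x

# Hypothetical shift: 2 h at 100, 4 h at 50, 2 h unexposed
exposure = [(100, 2), (50, 4), (0, 2)]
average = twa(exposure)  # 50.0 over the 8-hour shift
```

With a PEL-TWA of 50, this shift complies: the average equals the limit and no segment exceeds three times it.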
Current OSHA noise standards are based on a 5-decibel exchange rate. OSHA's PEL for noise exposure is 90 decibels (dBA) for an 8-hour TWA. Levels of 90–140 dBA are included in the noise dose. [ 4 ] The PEL can also be expressed as a 100 percent "dose" for noise exposure. When the noise exposure increases by 5 dB, the allowed exposure time is cut in half. [ 5 ] According to OSHA, a 95 dBA TWA would be a 200 percent dose. [ 6 ] The PEL is exceeded when the TWA > 90 dBA. OSHA requires feasible engineering OR administrative controls , and mandatory hearing protection, when the PEL is exceeded.
Like OSHA, Mine Safety and Health Administration (MSHA) also uses the same 5 decibel exchange rate and 90 dBA for an 8-hour TWA for their PEL. Once a miner's noise exposure exceeds the PEL, feasible engineering AND administrative controls must be in place to try to limit the noise exposure of the employees. If a mine operator uses administrative controls, procedures for such controls must be posted on the bulletin board and a copy must be supplied to all affected employees. [ 7 ]
The National Institute for Occupational Safety and Health (NIOSH) Recommended Exposure Limit (REL) for noise exposure uses a 3 decibel exchange rate. The recommendation for occupational noise exposure is 85 decibels (dBA) for an 8-hour TWA. For every 3 dB over 85, the exposure time is cut in half. NIOSH reports exposures above this level are considered hazardous. NIOSH uses a hierarchy of control to reduce or remove hazardous noise. [ 8 ]
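The exchange-rate arithmetic described above (halving the allowed time for every 5 dB under OSHA, or every 3 dB under NIOSH) can be sketched in a few lines. This is an illustrative calculation, not an official OSHA or NIOSH tool; the function names and example exposures are assumptions.

```python
# Sketch of exchange-rate noise-dose arithmetic (illustrative, not an
# official compliance tool).

def allowed_hours(level_dba, criterion_dba, exchange_rate_db, criterion_hours=8.0):
    """Permissible exposure time (hours) at a given sound level."""
    return criterion_hours / 2 ** ((level_dba - criterion_dba) / exchange_rate_db)

def percent_dose(exposures, criterion_dba, exchange_rate_db):
    """exposures: list of (level_dba, hours) pairs; 100% dose == at the limit."""
    return 100.0 * sum(
        hours / allowed_hours(level, criterion_dba, exchange_rate_db)
        for level, hours in exposures
    )

# OSHA PEL: 90 dBA criterion level, 5 dB exchange rate.
print(allowed_hours(95, 90, 5))        # -> 4.0 (a +5 dB rise halves the time)
print(percent_dose([(95, 8)], 90, 5))  # -> 200.0 (8 h at 95 dBA is a 200% dose)
# NIOSH REL: 85 dBA criterion level, 3 dB exchange rate.
print(allowed_hours(88, 85, 3))        # -> 4.0
```

The same two functions reproduce both agencies' rules because only the criterion level and exchange rate differ.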
Permissible Exposure Limits are regulatory limits for chemical hazards in a workplace set by OSHA . [ 9 ] [ 10 ] Organizations may implement stricter guidelines for chemical use and exposure, but OSHA guidelines must be followed at the minimum. [ 11 ] [ 10 ] Permissible Exposure Limits are time-weighted average, meaning that a worker may be exposed to higher concentrations of the chemical at different times of the work shift. [ 10 ] [ 12 ]
Many factors contribute to establishing Permissible Exposure Limits. Threshold Limit Values (TLVs), often determined by the American Conference of Governmental Industrial Hygienists (ACGIH), are a key component in determining the PEL. [ 11 ] [ 10 ] Other things that contribute to determining the PEL are toxicity and particle size. [ 10 ]
PELs for chemicals are measured in mg/m 3 (milligrams per cubic meter). [ 2 ] This unit expresses a pollutant's mass per volume of air. [ 13 ] PEL compliance is monitored through direct reading measurement tools, various sampling methods, and measuring biological markers in workers. [ 14 ] [ 15 ] Sampling for biological markers may include sampling urine and blood. [ 15 ] Direct measurement tools, such as the Q-Trak, and indirect measurement tools, such as gas chromatography, can be used for air sampling. [ 14 ]
The Occupational Safety and Health Administration (OSHA) in the United States established the permissible exposure limit for occupational noise at 90 dBA, based on an 8-hour time-weighted average for an 8-hour workday. [ 16 ] For workers' safety, OSHA mandates hearing conservation programs when noise levels exceed 85 decibels. [ 17 ] Different restrictions may apply depending on the sector, profession, or nation.
Currently, about 200 million Americans are subject to harmful workplace noise. [ 18 ] Many factors beyond the workplace affect how much noise exposure harms an individual. These factors can include, but are not limited to, ageing, heredity, recreational activities, and some illnesses. [ 19 ]
While recommendations exist for noise levels and noise control in communities, there is a lack of general agreement regarding acceptable exposure limits in non-occupational settings or the general environment. Several approaches can be used to limit noise exposure. One is to wear personal protective equipment (PPE) such as earplugs or earmuffs. [ 19 ] Another is to reduce the time spent in environments with heavy noise exposure. [ 20 ] With this in mind, it is important to keep individuals informed about the effects of prolonged noise exposure.
| https://en.wikipedia.org/wiki/Permissible_exposure_limit
Permissible stress design is a design philosophy used by mechanical engineers and civil engineers . [ 1 ] [ 2 ]
The civil designer ensures that the stresses developed in a structure due to service loads do not exceed the elastic limit . Stresses are kept within this limit through the use of factors of safety .
In structural engineering , the permissible stress design approach has generally been replaced internationally by limit state design (also known as ultimate stress design, or in the USA, Load and Resistance Factor Design, LRFD), except for some isolated cases.
In US structural engineering practice, allowable stress design (ASD) has not yet been completely superseded by limit state design except in the case of suspension bridges , which changed from allowable stress design to limit state design in the 1960s. Wood, steel, and other materials are still frequently designed using allowable stress design, although LRFD is probably more commonly taught in the US university system.
In mechanical engineering design, such as the design of pressure equipment, the method uses the actual loads predicted to be experienced in practice to calculate stress and deflection. Such loads may include pressure thrusts and the weight of materials. The predicted stresses and deflections are compared with allowable values that have a "factor" against various failure mechanisms such as leakage, yield, ultimate load prior to plastic failure, buckling, brittle fracture, fatigue, and vibration/harmonic effects. However, the predicted stresses almost always assume the material is linear elastic. The "factor" is sometimes called a factor of safety, although this is technically incorrect because the factor includes allowance for matters such as local stresses and manufacturing imperfections that are not specifically calculated; exceeding the allowable values is not considered to be good practice (i.e. is not "safe").
The permissible stress method is also known in some national standards as the working stress method because the predicted stresses are the unfactored stresses expected during operation of the equipment (e.g. AS1210, AS3990).
This mechanical engineering approach differs from an ultimate design approach which factors up the predicted loads for comparison with an ultimate failure limit. One method factors up the predicted load, the other method factors down the failure stress.
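The contrast between the two philosophies can be sketched as two acceptance checks. All numbers below (yield stress, cross-section, loads, and the safety and load factors) are hypothetical illustration values, not taken from any code or standard.

```python
# Sketch of the two design philosophies described above (made-up numbers).
# Permissible (working) stress: factor DOWN the failure stress.
# Limit state (LRFD-style): factor UP the loads.

yield_stress = 250.0   # MPa, hypothetical mild steel
area = 500.0           # mm^2, member cross-section
dead_load = 40e3       # N, unfactored service load
live_load = 25e3       # N, unfactored service load

def permissible_stress_ok(factor_of_safety=1.67):
    # Compare service stress against a reduced (allowable) stress.
    service_stress = (dead_load + live_load) / area           # N/mm^2 == MPa
    return service_stress <= yield_stress / factor_of_safety

def limit_state_ok(gamma_dead=1.2, gamma_live=1.6, phi=0.9):
    # Compare factored-up load effect against a resistance-factored capacity.
    factored_stress = (gamma_dead * dead_load + gamma_live * live_load) / area
    return factored_stress <= phi * yield_stress

print(permissible_stress_ok(), limit_state_ok())  # both True for these numbers
```

One check divides the failure stress by a factor, the other multiplies the predicted loads; for this member both checks happen to pass.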
| https://en.wikipedia.org/wiki/Permissible_stress_design
In endocrinology , permissiveness is a biochemical phenomenon in which the presence of one hormone is required in order for another hormone to exert its full effects on a target cell. Hormones can interact in permissive, synergistic, or antagonistic ways. The chemical classes of hormones include amines , polypeptides , glycoproteins and steroids . Permissive hormones act as precursors to active hormones and may be classified as either prohormones or prehormones . A permissive hormone can also stimulate the formation of receptors for the second hormone.
Thyroid hormone increases the number of beta-adrenergic receptors available for epinephrine at the latter's target cell, especially in cardiac cells, thereby increasing epinephrine's effect on that cell. Without the thyroid hormone, epinephrine would have only a weak effect. [ 1 ]
Cortisol is required for the response of vascular and bronchial smooth muscle to catecholamines. [ 2 ] Cortisol is also required for the lipolytic effect of catecholamines, ACTH, and growth hormone on fat cells. [ 2 ] Cortisol is also required for the calorigenic effects of glucagon and catecholamines. [ 3 ]
The effects of a hormone in the body depend on its concentration. Permissive actions of glucocorticoids like cortisol generally occur at low concentrations. Abnormally high amounts of a hormone can result in atypical effects. Glucocorticoids function by attaching to cytoplasmic receptors to either enhance or suppress changes in the transcription of DNA and thus the synthesis of proteins. Glucocorticoids also inhibit the secretion of cytokines via post-translational modification effects. [ 4 ]
| https://en.wikipedia.org/wiki/Permissiveness_(endocrinology)
In electromagnetism , the absolute permittivity , often simply called permittivity and denoted by the Greek letter ε ( epsilon ), is a measure of the electric polarizability of a dielectric material. A material with high permittivity polarizes more in response to an applied electric field than a material with low permittivity, thereby storing more energy in the material. In electrostatics , the permittivity plays an important role in determining the capacitance of a capacitor .
In the simplest case, the electric displacement field D resulting from an applied electric field E is
D = ε E . {\displaystyle \mathbf {D} =\varepsilon \ \mathbf {E} ~.}
More generally, the permittivity is a thermodynamic function of state . [ 1 ] It can depend on the frequency , magnitude , and direction of the applied field. The SI unit for permittivity is farad per meter (F/m).
The permittivity is often represented by the relative permittivity ε r which is the ratio of the absolute permittivity ε and the vacuum permittivity ε 0 [ 2 ]
κ = ε r = ε ε 0 . {\displaystyle \kappa =\varepsilon _{\mathrm {r} }={\frac {\varepsilon }{\varepsilon _{0}}}~.}
This dimensionless quantity is also often and ambiguously referred to as the permittivity . Another common term encountered for both absolute and relative permittivity is the dielectric constant which has been deprecated in physics and engineering [ 3 ] as well as in chemistry. [ 4 ]
By definition, a perfect vacuum has a relative permittivity of exactly 1 whereas at standard temperature and pressure , air has a relative permittivity of ε r air ≡ κ air ≈ 1.0006 .
Relative permittivity is directly related to electric susceptibility ( χ ) by
χ = κ − 1 {\displaystyle \chi =\kappa -1}
otherwise written as
ε = ε r ε 0 = ( 1 + χ ) ε 0 . {\displaystyle \varepsilon =\varepsilon _{\mathrm {r} }\ \varepsilon _{0}=(1+\chi )\ \varepsilon _{0}~.}
The term "permittivity" was introduced in the 1880s by Oliver Heaviside to complement Thomson 's (1872) " permeability ". [ 5 ] Formerly written as p , the designation with ε has been in common use since the 1950s.
The SI unit of permittivity is farad per meter (F/m or F·m −1 ). [ 6 ]
F m = C V ⋅ m = C 2 N ⋅ m 2 = C 2 ⋅ s 2 kg ⋅ m 3 = A 2 ⋅ s 4 kg ⋅ m 3 {\displaystyle {\frac {\text{F}}{\text{m}}}={\frac {\text{C}}{{\text{V}}{\cdot }{\text{m}}}}={\frac {{\text{C}}^{2}}{{\text{N}}{\cdot }{\text{m}}^{2}}}={\frac {{\text{C}}^{2}{\cdot }{\text{s}}^{2}}{{\text{kg}}{\cdot }{\text{m}}^{3}}}={\frac {{\text{A}}^{2}{\cdot }{\text{s}}^{4}}{{\text{kg}}{\cdot }{\text{m}}^{3}}}}
In electromagnetism , the electric displacement field D represents the distribution of electric charges in a given medium resulting from the presence of an electric field E . This distribution includes charge migration and electric dipole reorientation. Its relation to permittivity in the very simple case of linear, homogeneous, isotropic materials with "instantaneous" response to changes in electric field is:
D = ε E {\displaystyle \mathbf {D} =\varepsilon \ \mathbf {E} }
where the permittivity ε is a scalar . If the medium is anisotropic , the permittivity is a second rank tensor .
In general, permittivity is not a constant, as it can vary with the position in the medium, the frequency of the field applied, humidity, temperature, and other parameters. In a nonlinear medium , the permittivity can depend on the strength of the electric field. Permittivity as a function of frequency can take on real or complex values.
In SI units, permittivity is measured in farads per meter (F/m or A 2 ·s 4 ·kg −1 ·m −3 ). The displacement field D is measured in units of coulombs per square meter (C/m 2 ), while the electric field E is measured in volts per meter (V/m). D and E describe the interaction between charged objects. D is related to the charge densities associated with this interaction, while E is related to the forces and potential differences .
The vacuum permittivity ε o (also called permittivity of free space or the electric constant ) is the ratio D / E in free space . It also appears in the Coulomb force constant ,
k e = 1 4 π ε 0 {\displaystyle k_{\text{e}}={\frac {1}{\ 4\pi \varepsilon _{0}\ }}}
Its value is [ 7 ] [ 8 ]
ε 0 = d e f 1 c 2 μ 0 ≈ 8.854 187 8128 ( 13 ) × 10 − 12 F/m {\displaystyle \varepsilon _{0}\ {\stackrel {\mathrm {def} }{=}}\ {\frac {1}{c^{2}\mu _{0}}}\approx 8.854\,187\,8128(13)\times 10^{-12}{\text{ F/m }}}
where
The constants c and µ 0 were both defined in SI units to have exact numerical values until the 2019 revision of the SI . Therefore, until that date, ε 0 could be also stated exactly as a fraction, 1 c 2 μ 0 = 1 35 950 207 149.472 7056 π F/m {\displaystyle \ {\tfrac {1}{c^{2}\mu _{0}}}={\tfrac {1}{35\,950\,207\,149.472\,7056\pi }}{\text{ F/m}}\ } even if the result was irrational (because the fraction contained π ). [ 9 ] In contrast, the ampere was a measured quantity before 2019, but since then the ampere is now exactly defined and it is μ 0 that is an experimentally measured quantity (with consequent uncertainty) and therefore so is the new 2019 definition of ε 0 ( c remains exactly defined before and since 2019).
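The exact pre-2019 relationship between the three constants can be checked numerically; the snippet below uses the pre-2019 exact value µ0 = 4π × 10⁻⁷ H/m together with the (still exact) speed of light.

```python
import math

# Pre-2019 SI: mu0 = 4*pi*1e-7 H/m exactly and c = 299_792_458 m/s exactly,
# so eps0 = 1/(c^2 * mu0) followed exactly, as noted in the text.
c = 299_792_458.0          # m/s (exact)
mu0 = 4 * math.pi * 1e-7   # H/m (pre-2019 exact value)
eps0 = 1.0 / (c ** 2 * mu0)

print(eps0)  # ~8.854e-12 F/m, matching the quoted measured value
```

The result agrees with the CODATA value 8.854 187 8128(13) × 10⁻¹² F/m to well within its stated uncertainty.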
The linear permittivity of a homogeneous material is usually given relative to that of free space, as a relative permittivity ε r (also called dielectric constant , although this term is deprecated and sometimes only refers to the static, zero-frequency relative permittivity). In an anisotropic material, the relative permittivity may be a tensor, causing birefringence . The actual permittivity is then calculated by multiplying the relative permittivity by ε o :
ε = ε r ε 0 = ( 1 + χ ) ε 0 , {\displaystyle \ \varepsilon =\varepsilon _{\mathrm {r} }\ \varepsilon _{0}=(1+\chi )\ \varepsilon _{0}\ ,}
where χ (frequently written χ e ) is the electric susceptibility of the material.
The susceptibility is defined as the constant of proportionality (which may be a tensor ) relating an electric field E to the induced dielectric polarization density P such that
P = ε 0 χ E , {\displaystyle \ \mathbf {P} \ =\ \varepsilon _{0}\ \chi \ \mathbf {E} \;,}
where ε o is the electric permittivity of free space .
The susceptibility of a medium is related to its relative permittivity ε r by
χ = ε r − 1 . {\displaystyle \chi =\varepsilon _{\mathrm {r} }-1~.}
So in the case of a vacuum,
χ = 0 . {\displaystyle \chi =0~.}
The susceptibility is also related to the polarizability of individual particles in the medium by the Clausius-Mossotti relation .
The electric displacement D is related to the polarization density P by
D = ε 0 E + P = ε 0 ( 1 + χ ) E = ε r ε 0 E . {\displaystyle \mathbf {D} =\varepsilon _{0}\ \mathbf {E} +\mathbf {P} =\varepsilon _{0}\ (1+\chi )\ \mathbf {E} =\varepsilon _{\mathrm {r} }\ \varepsilon _{0}\ \mathbf {E} ~.}
The permittivity ε and permeability µ of a medium together determine the phase velocity v = c / n of electromagnetic radiation through that medium:
ε μ = 1 v 2 . {\displaystyle \varepsilon \mu ={\frac {1}{\ v^{2}}}~.}
The capacitance of a capacitor is based on its design and architecture, meaning it will not change with charging and discharging. The formula for capacitance in a parallel plate capacitor is written as
C = ε A d {\displaystyle C=\varepsilon \ {\frac {A}{d}}}
where A {\displaystyle A} is the area of one plate, d {\displaystyle d} is the distance between the plates, and ε {\displaystyle \varepsilon } is the permittivity of the medium between the two plates. For a capacitor with relative permittivity κ {\displaystyle \kappa } , it can be said that
C = κ ε 0 A d {\displaystyle C=\kappa \ \varepsilon _{0}{\frac {A}{d}}}
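The parallel-plate formula above is easy to evaluate directly; the plate geometry and the κ = 4.7 (glass-like) dielectric below are assumed example values.

```python
EPS0 = 8.8541878128e-12   # F/m, vacuum permittivity

def parallel_plate_capacitance(area_m2, gap_m, kappa=1.0):
    """C = kappa * eps0 * A / d for an ideal parallel-plate capacitor."""
    return kappa * EPS0 * area_m2 / gap_m

# 1 cm^2 plates, 1 mm apart, vacuum gap:
c_vac = parallel_plate_capacitance(1e-4, 1e-3)
print(c_vac)  # ~8.85e-13 F (about 0.89 pF)

# Same geometry filled with an assumed kappa = 4.7 dielectric:
print(parallel_plate_capacitance(1e-4, 1e-3, kappa=4.7))  # 4.7x larger
```

Inserting the dielectric multiplies the capacitance by exactly κ, as the formula implies.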
Permittivity is connected to electric flux (and by extension electric field) through Gauss's law . Gauss's law states that for a closed Gaussian surface , S ,
Φ E = Q enc ε 0 = ∮ S E ⋅ d A , {\displaystyle \Phi _{E}={\frac {Q_{\text{enc}}}{\varepsilon _{0}}}=\oint _{S}\mathbf {E} \cdot \mathrm {d} \mathbf {A} \ ,} where Φ E {\displaystyle \Phi _{E}} is the net electric flux passing through the surface, Q enc {\displaystyle Q_{\text{enc}}} is the charge enclosed in the Gaussian surface, E {\displaystyle \mathbf {E} } is the electric field vector at a given point on the surface, and d A {\displaystyle \mathrm {d} \mathbf {A} } is a differential area vector on the Gaussian surface.
If the Gaussian surface uniformly encloses an insulated, symmetrical charge arrangement, the formula can be simplified to
E A cos θ = Q enc ε 0 , {\displaystyle E\ A\ \cos \theta ={\frac {\;Q_{\text{enc}}}{\ \varepsilon _{0}\ }}\ ,} where θ {\displaystyle \ \theta \ } represents the angle between the electric field lines and the normal (perpendicular) to S .
If all of the electric field lines cross the surface at 90°, the formula can be further simplified to
E = Q enc ε 0 A . {\displaystyle \ E={\frac {\;Q_{\text{enc}}}{\ \varepsilon _{0}\ A\ }}~.}
Because the surface area of a sphere is 4 π r 2 , {\displaystyle \ 4\pi r^{2}\ ,} the electric field a distance r {\displaystyle r} away from a uniform, spherical charge arrangement is
E = Q ε 0 A = Q ε 0 ( 4 π r 2 ) = Q 4 π ε 0 r 2 . {\displaystyle \ E={\frac {Q}{\ \varepsilon _{0}A\ }}={\frac {Q}{\ \varepsilon _{0}\ \left(4\ \pi \ r^{2}\right)\ }}={\frac {Q}{\ 4\pi \ \varepsilon _{0}\ r^{2}\ }}~.}
This formula applies to the electric field due to a point charge, outside of a conducting sphere or shell, outside of a uniformly charged insulating sphere, or between the plates of a spherical capacitor.
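The Gauss's-law result for a spherical charge arrangement can be evaluated numerically; the 1 nC charge and 10 cm distance below are illustrative values.

```python
import math

EPS0 = 8.8541878128e-12  # F/m, vacuum permittivity

def e_field_point_charge(q_coulomb, r_m):
    """E = Q / (4*pi*eps0*r^2), the spherical Gauss's-law result above."""
    return q_coulomb / (4 * math.pi * EPS0 * r_m ** 2)

# Example: a 1 nC point charge observed 10 cm away.
print(e_field_point_charge(1e-9, 0.1))  # ~898.8 V/m
```

The prefactor 1/(4πε0) is the Coulomb constant k_e ≈ 8.988 × 10⁹ N·m²/C² mentioned earlier.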
In general, a material cannot polarize instantaneously in response to an applied field, and so the more general formulation as a function of time is
P ( t ) = ε 0 ∫ − ∞ t χ ( t − t ′ ) E ( t ′ ) d t ′ . {\displaystyle \mathbf {P} (t)=\varepsilon _{0}\int _{-\infty }^{t}\chi \left(t-t'\right)\mathbf {E} \left(t'\right)\,\mathrm {d} t'~.}
That is, the polarization is a convolution of the electric field at previous times with time-dependent susceptibility given by χ (Δ t ) . The upper limit of this integral can be extended to infinity as well if one defines χ (Δ t ) = 0 for Δ t < 0 . An instantaneous response would correspond to a Dirac delta function susceptibility χ (Δ t ) = χδ (Δ t ) .
It is convenient to take the Fourier transform with respect to time and write this relationship as a function of frequency. Because of the convolution theorem , the integral becomes a simple product,
P ( ω ) = ε 0 χ ( ω ) E ( ω ) . {\displaystyle \ \mathbf {P} (\omega )=\varepsilon _{0}\ \chi (\omega )\ \mathbf {E} (\omega )~.}
This frequency dependence of the susceptibility leads to frequency dependence of the permittivity. The shape of the susceptibility with respect to frequency characterizes the dispersion properties of the material.
Moreover, the fact that the polarization can only depend on the electric field at previous times (i.e. effectively χ (Δ t ) = 0 for Δ t < 0 ), a consequence of causality , imposes Kramers–Kronig constraints on the susceptibility χ (0) .
As opposed to the response of a vacuum, the response of normal materials to external fields generally depends on the frequency of the field. This frequency dependence reflects the fact that a material's polarization does not change instantaneously when an electric field is applied. The response must always be causal (arising after the applied field), which can be represented by a phase difference. For this reason, permittivity is often treated as a complex function of the (angular) frequency ω of the applied field:
ε → ε ^ ( ω ) {\displaystyle \varepsilon \rightarrow {\hat {\varepsilon }}(\omega )}
(since complex numbers allow specification of magnitude and phase). The definition of permittivity therefore becomes
D 0 e − i ω t = ε ^ ( ω ) E 0 e − i ω t , {\displaystyle D_{0}\ e^{-i\omega t}={\hat {\varepsilon }}(\omega )\ E_{0}\ e^{-i\omega t}\ ,} where
The response of a medium to static electric fields is described by the low-frequency limit of permittivity, also called the static permittivity ε s (also ε DC ):
ε s = lim ω → 0 ε ^ ( ω ) . {\displaystyle \varepsilon _{\mathrm {s} }=\lim _{\omega \rightarrow 0}{\hat {\varepsilon }}(\omega )~.}
At the high-frequency limit (meaning optical frequencies), the complex permittivity is commonly referred to as ε ∞ (or sometimes ε opt [ 11 ] ). At the plasma frequency and below, dielectrics behave as ideal metals, with electron gas behavior. The static permittivity is a good approximation for alternating fields of low frequencies, and as the frequency increases a measurable phase difference δ emerges between D and E . The frequency at which the phase shift becomes noticeable depends on temperature and the details of the medium. For moderate field strength ( E o ), D and E remain proportional, and
ε ^ = D 0 E 0 = | ε | e − i δ . {\displaystyle {\hat {\varepsilon }}={\frac {D_{0}}{E_{0}}}=|\varepsilon |e^{-i\delta }~.}
Since the response of materials to alternating fields is characterized by a complex permittivity, it is natural to separate its real and imaginary parts, which is done by convention in the following way:
ε ^ ( ω ) = ε ′ ( ω ) − i ε ″ ( ω ) = | D 0 E 0 | ( cos δ − i sin δ ) . {\displaystyle {\hat {\varepsilon }}(\omega )=\varepsilon '(\omega )-i\varepsilon ''(\omega )=\left|{\frac {D_{0}}{E_{0}}}\right|\left(\cos \delta -i\sin \delta \right)~.}
where
The choice of sign for time-dependence, e − iωt , dictates the sign convention for the imaginary part of permittivity. The signs used here correspond to those commonly used in physics, whereas for the engineering convention one should reverse all imaginary quantities.
The complex permittivity is usually a complicated function of frequency ω , since it is a superimposed description of dispersion phenomena occurring at multiple frequencies. The dielectric function ε ( ω ) must have poles only for frequencies with positive imaginary parts, and therefore satisfies the Kramers–Kronig relations . However, in the narrow frequency ranges that are often studied in practice, the permittivity can be approximated as frequency-independent or by model functions.
At a given frequency, the imaginary part, ε″ , leads to absorption loss if it is positive (in the above sign convention) and gain if it is negative. More generally, the imaginary parts of the eigenvalues of the anisotropic dielectric tensor should be considered.
In the case of solids, the complex dielectric function is intimately connected to band structure. The primary quantity that characterizes the electronic structure of any crystalline material is the probability of photon absorption, which is directly related to the imaginary part of the optical dielectric function ε ( ω ) . The optical dielectric function is given by the fundamental expression: [ 12 ]
ε ( ω ) = 1 + 8 π 2 e 2 m 2 ∑ c , v ∫ W c , v ( E ) ( φ ( ℏ ω − E ) − φ ( ℏ ω + E ) ) d x . {\displaystyle \varepsilon (\omega )=1+{\frac {8\pi ^{2}e^{2}}{m^{2}}}\sum _{c,v}\int W_{c,v}(E){\bigl (}\varphi (\hbar \omega -E)-\varphi (\hbar \omega +E){\bigr )}\,\mathrm {d} x~.}
In this expression, W c , v ( E ) represents the product of the Brillouin zone -averaged transition probability at the energy E with the joint density of states , [ 13 ] [ 14 ] J c , v ( E ) ; φ is a broadening function, representing the role of scattering in smearing out the energy levels. [ 15 ] In general, the broadening is intermediate between Lorentzian and Gaussian ; [ 16 ] [ 17 ] for an alloy it is somewhat closer to Gaussian because of strong scattering from statistical fluctuations in the local composition on a nanometer scale.
According to the Drude model of magnetized plasma, a more general expression which takes into account the interaction of the carriers with an alternating electric field at millimeter and microwave frequencies in an axially magnetized semiconductor requires the expression of the permittivity as a non-diagonal tensor: [ 18 ]
D ( ω ) = | ε 1 − i ε 2 0 i ε 2 ε 1 0 0 0 ε z | E ( ω ) {\displaystyle \mathbf {D} (\omega )={\begin{vmatrix}\varepsilon _{1}&-i\varepsilon _{2}&0\\i\varepsilon _{2}&\varepsilon _{1}&0\\0&0&\varepsilon _{z}\\\end{vmatrix}}\;\operatorname {\mathbf {E} } (\omega )}
If ε 2 vanishes, then the tensor is diagonal but not proportional to the identity and the medium is said to be a uniaxial medium, which has similar properties to a uniaxial crystal .
Materials can be classified according to their complex-valued permittivity ε , upon comparison of its real ε ′ and imaginary ε ″ components (or, equivalently, conductivity , σ , when accounted for in the latter). A perfect conductor has infinite conductivity, σ = ∞ , while a perfect dielectric is a material that has no conductivity at all, σ = 0 ; this latter case, of real-valued permittivity (or complex-valued permittivity with zero imaginary component) is also associated with the name lossless media . [ 19 ] Generally, when σ ω ϵ ≪ 1 {\displaystyle {\frac {\sigma }{\omega \epsilon }}\ll 1} we consider the material to be a low-loss dielectric (although not exactly lossless), whereas σ ω ϵ ≫ 1 {\displaystyle {\frac {\sigma }{\omega \epsilon }}\gg 1} is associated with a good conductor ; such materials with non-negligible conductivity yield a large amount of loss that inhibit the propagation of electromagnetic waves, thus are also said to be lossy media . Those materials that do not fall under either limit are considered to be general media.
In the case of a lossy medium, i.e. when the conduction current is not negligible, the total current density flowing is:
J tot = J c + J d = σ E + i ω ε ′ E = i ω ε ^ E {\displaystyle J_{\text{tot}}\ =\ J_{\mathrm {c} }+J_{\mathrm {d} }=\sigma \ E\ +\ i\ \omega \ \varepsilon '\ E=i\ \omega \ {\hat {\varepsilon }}\ E\ }
where
Note that this is using the electrical engineering convention of the complex conjugate ambiguity ; the physics/chemistry convention involves the complex conjugate of these equations.
The size of the displacement current is dependent on the frequency ω of the applied field E ; there is no displacement current in a constant field.
In this formalism, the complex permittivity is defined as: [ 20 ] [ 21 ]
ε ^ = ε ′ ( 1 − i σ ω ε ′ ) = ε ′ − i σ ω {\displaystyle \ {\hat {\varepsilon }}\ =\ \varepsilon '\left(\ 1\ -\ i\ {\frac {\sigma }{\ \omega \varepsilon '\ }}\ \right)\ =\ \varepsilon '\ -\ i\ {\frac {\ \sigma \ }{\ \omega \ }}}
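This formalism is straightforward to evaluate with complex numbers. The snippet below uses seawater-like example values (ε′r = 80, σ = 4 S/m, assumed for illustration) and the physics sign convention of the article, ε̂ = ε′ − iσ/ω.

```python
import math

EPS0 = 8.8541878128e-12  # F/m, vacuum permittivity

def complex_permittivity(eps_r_real, sigma, freq_hz):
    """eps_hat = eps' - i*sigma/omega (physics sign convention used above)."""
    omega = 2 * math.pi * freq_hz
    eps_prime = eps_r_real * EPS0
    return complex(eps_prime, -sigma / omega)

def loss_tangent(eps_hat):
    """tan(delta) = eps'' / eps' for the convention eps_hat = eps' - i*eps''."""
    return -eps_hat.imag / eps_hat.real

# Seawater-like values (assumed): eps_r' = 80, sigma = 4 S/m, at 1 GHz.
eh = complex_permittivity(80, 4.0, 1e9)
print(loss_tangent(eh))  # ~0.9: neither a low-loss dielectric nor a good conductor
```

The ratio σ/(ωε′) is exactly the classification parameter from the preceding paragraph: much less than 1 for a low-loss dielectric, much greater than 1 for a good conductor.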
In general, the absorption of electromagnetic energy by dielectrics is covered by a few different mechanisms that influence the shape of the permittivity as a function of frequency:
The above effects often combine to cause non-linear effects within capacitors. For example, dielectric absorption refers to the inability of a capacitor that has been charged for a long time to completely discharge when briefly discharged. Although an ideal capacitor would remain at zero volts after being discharged, real capacitors will develop a small voltage, a phenomenon that is also called soakage or battery action . For some dielectrics, such as many polymer films, the resulting voltage may be less than 1–2% of the original voltage. However, it can be as much as 15–25% in the case of electrolytic capacitors or supercapacitors .
In terms of quantum mechanics , permittivity is explained by atomic and molecular interactions.
At low frequencies, molecules in polar dielectrics are polarized by an applied electric field, which induces periodic rotations. For example, at the microwave frequency, the microwave field causes the periodic rotation of water molecules, sufficient to break hydrogen bonds . The field does work against the bonds and the energy is absorbed by the material as heat . This is why microwave ovens work very well for materials containing water. There are two maxima of the imaginary component (the absorptive index) of water, one at the microwave frequency, and the other at far ultraviolet (UV) frequency. Both of these resonances are at higher frequencies than the operating frequency of microwave ovens.
At moderate frequencies, the energy is too high to cause rotation, yet too low to affect electrons directly, and is absorbed in the form of resonant molecular vibrations. In water, this is where the absorptive index starts to drop sharply, and the minimum of the imaginary permittivity is at the frequency of blue light (optical regime).
At high frequencies (such as UV and above), molecules cannot relax, and the energy is purely absorbed by atoms, exciting electron energy levels. Thus, these frequencies are classified as ionizing radiation .
While carrying out a complete ab initio (that is, first-principles) modelling is now computationally possible, it has not been widely applied yet. Thus, a phenomenological model is accepted as being an adequate method of capturing experimental behaviors. The Debye model and the Lorentz model use a first-order and second-order (respectively) lumped system parameter linear representation (such as an RC and an LRC resonant circuit).
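The first-order Debye model mentioned above can be sketched directly; the parameter values below are illustrative, roughly water-like assumptions.

```python
# First-order Debye relaxation, a common phenomenological model:
#   eps_hat(omega) = eps_inf + (eps_s - eps_inf) / (1 + i*omega*tau)
# (physics e^{-i*omega*t} convention: the imaginary part comes out negative,
# i.e. eps_hat = eps' - i*eps'' with eps'' > 0).

def debye(omega, eps_s, eps_inf, tau):
    return eps_inf + (eps_s - eps_inf) / complex(1.0, omega * tau)

eps_s, eps_inf, tau = 80.0, 5.0, 8.3e-12  # illustrative, water-like values

# At omega*tau == 1 the dielectric loss peaks:
#   eps' = eps_inf + (eps_s - eps_inf)/2, eps'' = (eps_s - eps_inf)/2.
eh = debye(1.0 / tau, eps_s, eps_inf, tau)
print(eh.real, -eh.imag)  # -> 42.5 37.5
```

Sweeping ω through 1/τ reproduces the characteristic step in ε′ and the loss peak in ε″ that dielectric spectroscopy measures.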
The relative permittivity of a material can be found by a variety of static electrical measurements. The complex permittivity is evaluated over a wide range of frequencies by using different variants of dielectric spectroscopy , covering nearly 21 orders of magnitude from 10 −6 to 10 15 hertz . Also, by using cryostats and ovens, the dielectric properties of a medium can be characterized over an array of temperatures. In order to study systems for such diverse excitation fields, a number of measurement setups are used, each adequate for a special frequency range.
Various microwave measurement techniques are outlined in Chen et al. [ 22 ] Typical errors for the Hakki–Coleman method employing a puck of material between conducting planes are about 0.3%. [ 23 ]
At infrared and optical frequencies, a common technique is ellipsometry . Dual polarisation interferometry is also used to measure the complex refractive index for very thin films at optical frequencies.
For the 3D measurement of dielectric tensors at optical frequency, Dielectric tensor tomography can be used. [ 24 ] | https://en.wikipedia.org/wiki/Permittivity |
Permutation codes are a family of error correction codes that were first introduced by Slepian in 1965 [ 1 ] and have been widely studied both in combinatorics [ 2 ] [ 3 ] and information theory due to their applications related to Flash memory [ 4 ] and Power-line communication . [ 5 ]
A permutation code C {\displaystyle C} is defined as a subset of the symmetric group S n {\displaystyle S_{n}} endowed with the usual Hamming distance between strings of length n {\displaystyle n} . More precisely, if σ , τ {\displaystyle \sigma ,\tau } are permutations in S n {\displaystyle S_{n}} , then d ( τ , σ ) = | { i ∈ { 1 , 2 , . . . , n } : σ ( i ) ≠ τ ( i ) } | {\displaystyle d(\tau ,\sigma )=|\left\{i\in \{1,2,...,n\}:\sigma (i)\neq \tau (i)\right\}|}
The minimum distance of a permutation code C {\displaystyle C} is defined to be the minimum positive integer d m i n {\displaystyle d_{min}} such that there exist σ , τ {\displaystyle \sigma ,\tau } ∈ {\displaystyle \in } C {\displaystyle C} , distinct, such that d ( σ , τ ) = d m i n {\displaystyle d(\sigma ,\tau )=d_{min}} .
One of the reasons why permutation codes are suitable for certain channels is that each alphabet symbol appears only once in each codeword, which, for example, makes the errors occurring in the context of power-line communication less impactful on codewords.
A main problem in permutation codes is to determine the value of M ( n , d ) {\displaystyle M(n,d)} , where M ( n , d ) {\displaystyle M(n,d)} is defined to be the maximum number of codewords in a permutation code of length n {\displaystyle n} and minimum distance d {\displaystyle d} . There has been little progress made for 4 ≤ d ≤ n − 1 {\displaystyle 4\leq d\leq n-1} , except for small lengths. We can define D ( n , k ) {\displaystyle D(n,k)} with k ∈ { 0 , 1 , . . . , n } {\displaystyle k\in \{0,1,...,n\}} to denote the set of all permutations in S n {\displaystyle S_{n}} which have distance exactly k {\displaystyle k} from the identity.
Let D ( n , k ) = { σ ∈ S n : d H ( σ , i d ) = k } {\displaystyle D(n,k)=\{\sigma \in S_{n}:d_{H}(\sigma ,id)=k\}} with | D ( n , k ) | = ( n k ) D k {\displaystyle |D(n,k)|={\tbinom {n}{k}}D_{k}} , where D k {\displaystyle D_{k}} is the number of derangements of order k {\displaystyle k} .
The Gilbert-Varshamov bound is a very well known lower bound, [ 6 ] and so far outperforms other bounds for small values of d {\displaystyle d} .
Theorem 1 : n ! ∑ k = 0 d − 1 | D ( n , k ) | ≤ M ( n , d ) ≤ n ! ∑ k = 0 [ d − 1 2 ] | D ( n , k ) | {\displaystyle {\frac {n!}{\sum _{k=0}^{d-1}|D(n,k)|}}\leq M(n,d)\leq {\frac {n!}{\sum _{k=0}^{[{\frac {d-1}{2}}]}|D(n,k)|}}}
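The quantities in Theorem 1 are easy to compute numerically. The following sketch (illustrative, not from the source) computes |D(n,k)| = C(n,k)·D_k via the standard derangement recursion and evaluates the Gilbert–Varshamov lower bound on the left side of the theorem:

```python
from fractions import Fraction
from math import comb, factorial

def derangements(k):
    # D_0 = 1, D_1 = 0, D_k = (k - 1) * (D_{k-1} + D_{k-2})
    d = [1, 0]
    for i in range(2, k + 1):
        d.append((i - 1) * (d[i - 1] + d[i - 2]))
    return d[k]

def ball_size(n, k):
    # |D(n,k)| = C(n,k) * D_k: choose the k displaced positions, derange them
    return comb(n, k) * derangements(k)

def gv_lower_bound(n, d):
    # Theorem 1, left inequality: M(n,d) >= n! / sum_{k=0}^{d-1} |D(n,k)|
    return Fraction(factorial(n), sum(ball_size(n, k) for k in range(d)))
```

As a sanity check, the sets D(n,k) for k = 0, ..., n partition S_n, so their sizes sum to n!.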
There have been improvements on it for the case where d = 4 {\displaystyle d=4} , [ 6 ] as the next theorem shows.
Theorem 2 : If k 2 ≤ n ≤ k 2 + k − 2 {\displaystyle k^{2}\leq n\leq k^{2}+k-2} for some integer k ≥ 2 {\displaystyle k\geq 2} , then
n ! M ( n , 4 ) ≥ 1 + ( n + 1 ) n ( n − 1 ) n ( n − 1 ) − ( n − k 2 ) ( ( k + 1 ) 2 − n ) ( ( k + 2 ) ( k − 1 ) − n ) {\displaystyle {\frac {n!}{M(n,4)}}\geq 1+{\frac {(n+1)n(n-1)}{n(n-1)-(n-k^{2})((k+1)^{2}-n)((k+2)(k-1)-n)}}} .
For small values of n {\displaystyle n} and d {\displaystyle d} , researchers have developed various computer search strategies to directly look for permutation codes with some prescribed automorphisms. [ 7 ]
There are numerous bounds on permutation codes; we list two here.
The first is an improvement to the Gilbert-Varshamov bound discussed above. Using the connection between permutation codes and independent sets in certain graphs, one can improve the Gilbert–Varshamov bound asymptotically by a factor of log ( n ) {\displaystyle \log(n)} as the code length goes to infinity. [ 8 ]
Let G ( n , d ) {\displaystyle G(n,d)} denote the subgraph induced by the neighbourhood of identity in Γ ( n , d ) {\displaystyle \Gamma (n,d)} , the Cayley graph Γ ( n , d ) := Γ ( S n , S ( n , d − 1 ) ) {\displaystyle \Gamma (n,d):=\Gamma (S_{n},S(n,d-1))} and S ( n , k ) := ⋃ i = 1 k D ( n , i ) {\displaystyle S(n,k):=\bigcup _{i=1}^{k}D(n,i)} .
Let m ( n , d ) {\displaystyle m(n,d)} denote the maximum degree in G ( n , d ) {\displaystyle G(n,d)} .
Theorem 3 : Let m ′ ( n , d ) = m ( n , d ) + 1 {\displaystyle m'(n,d)=m(n,d)+1} and
M I S ( n , d ) := n ! . ∫ 0 1 ( 1 − t ) 1 m ′ ( n , d ) m ′ ( n , d ) + [ Δ ( n , d ) − m ′ ( n , d ) ] t d t {\displaystyle M_{IS}(n,d):=n!.\int _{0}^{1}{\frac {(1-t)^{\frac {1}{m'(n,d)}}}{m'(n,d)+[\Delta (n,d)-m'(n,d)]t}}dt}
Then, M ( n , d ) ≥ M I S ( n , d ) {\displaystyle M(n,d)\geq M_{IS}(n,d)}
where Δ ( n , d ) = ∑ k = 0 d − 1 ( n k ) D k {\displaystyle \Delta (n,d)=\sum _{k=0}^{d-1}{\binom {n}{k}}D_{k}} .
The Gilbert-Varshamov bound is, M ( n , d ) ≥ M G V ( n , d ) := n ! 1 + Δ ( n , d ) {\displaystyle M(n,d)\geq M_{GV}(n,d):={\frac {n!}{1+\Delta (n,d)}}}
Theorem 4 : when d {\displaystyle d} is fixed and n {\displaystyle n} goes to infinity, we have
M I S ( n , d ) M G V ( n , d ) = Ω ( log ( n ) ) {\displaystyle {\frac {M_{IS}(n,d)}{M_{GV}(n,d)}}=\Omega (\log(n))}
Using a [ n , k , d ] q {\displaystyle [n,k,d]_{q}} linear block code, one can prove that there exists a permutation code in the symmetric group of degree n {\displaystyle n} , having minimum distance at least d {\displaystyle d} and large cardinality. [ 9 ] A lower bound for permutation codes that provides asymptotic improvements in certain regimes of length and distance of the permutation code [ 9 ] is discussed below. For a given subset K {\displaystyle \mathrm {K} } of the symmetric group S n {\displaystyle S_{n}} , we denote by M ( K , d ) {\displaystyle M(\mathrm {K} ,d)} the maximum cardinality of a permutation code of minimum distance at least d {\displaystyle d} entirely contained in K {\displaystyle \mathrm {K} } , i.e.
M ( K , d ) = m a x { | Γ | : Γ ⊂ K , d ( Γ ) ≥ d } {\displaystyle M(\mathrm {K} ,d)=max\{|\Gamma |:\Gamma \subset \mathrm {K} ,d(\Gamma )\geq d\}} .
Theorem 5: Let d , k , n {\displaystyle d,k,n} be integers such that 0 < k < n {\displaystyle 0<k<n} and 1 < d ≤ n {\displaystyle 1<d\leq n} . Moreover let q {\displaystyle q} be a prime power and s , r {\displaystyle s,r} be positive integers such that n = q s + r {\displaystyle n=qs+r} and 0 ≤ r < q {\displaystyle 0\leq r<q} . If there exists an [ n , k , d ] q {\displaystyle [n,k,d]_{q}} code C {\displaystyle C} such that C ⊥ {\displaystyle C^{\perp }} has a codeword of Hamming weight n {\displaystyle n} , then
M ( n , d ) ≥ n ! M ( K , d ) ( s + 1 ) ! r s ! q − r q n − k − 1 , {\displaystyle M(n,d)\geq {\frac {n!M(\mathrm {K} ,d)}{(s+1)!^{r}s!^{q-r}q^{n-k-1}}},}
where K = ( S s + 1 ) r × ( S s ) q − r {\displaystyle \mathrm {K} =(S_{s+1})^{r}\times (S_{s})^{q-r}}
Corollary 1 : for every prime power q ≥ n {\displaystyle q\geq n} , for every 2 < d ≤ n {\displaystyle 2<d\leq n} ,
M ( n , d ) ≥ n ! q d − 2 {\displaystyle M(n,d)\geq {\frac {n!}{q^{d-2}}}} .
Corollary 2 : for every prime power q {\displaystyle q} , for every 3 < d < q {\displaystyle 3<d<q} ,
M ( q + 1 , d ) ≥ ( q + 1 ) ! 2 q d − 2 {\displaystyle M(q+1,d)\geq {\frac {(q+1)!}{2q^{d-2}}}} . | https://en.wikipedia.org/wiki/Permutation_code |
In mathematics , a permutation polynomial (for a given ring ) is a polynomial that acts as a permutation of the elements of the ring, i.e. the map x ↦ g ( x ) {\displaystyle x\mapsto g(x)} is a bijection . In case the ring is a finite field , the Dickson polynomials , which are closely related to the Chebyshev polynomials , provide examples. [ 1 ] Over a finite field, every function, so in particular every permutation of the elements of that field, can be written as a polynomial function.
In the case of finite rings Z / n Z , such polynomials have also been studied and applied in the interleaver component of error detection and correction algorithms. [ 2 ] [ 3 ]
Let F q = GF( q ) be the finite field of characteristic p , that is, the field having q elements where q = p e for some prime p . A polynomial f with coefficients in F q (symbolically written as f ∈ F q [ x ] ) is a permutation polynomial of F q if the function from F q to itself defined by c ↦ f ( c ) {\displaystyle c\mapsto f(c)} is a permutation of F q . [ 4 ]
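For prime fields GF(p), where field arithmetic is plain modular arithmetic, this definition can be checked by brute force. A sketch (illustrative; the function name is not from any library):

```python
def is_permutation_polynomial(coeffs, p):
    """Return True if f(x) = sum_i coeffs[i] * x**i permutes the prime field GF(p).

    Brute-force check: on a finite set, f is injective iff surjective,
    so f is a permutation polynomial iff its value set has size p.
    """
    values = {sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
              for x in range(p)}
    return len(values) == p
```

For example, x^3 permutes GF(5) (since gcd(3, 5 − 1) = 1) while x^2 does not.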
Due to the finiteness of F q , this definition can be expressed in several equivalent ways: [ 5 ]
A characterization of which polynomials are permutation polynomials is given by
( Hermite 's Criterion ) [ 6 ] [ 7 ] f ∈ F q [ x ] is a permutation polynomial of F q if and only if the following two conditions hold:
If f ( x ) is a permutation polynomial defined over the finite field GF( q ) , then so is g ( x ) = a f ( x + b ) + c for all a ≠ 0, b and c in GF( q ) . The permutation polynomial g ( x ) is in normalized form if a , b and c are chosen so that g ( x ) is monic , g (0) = 0 and (provided the characteristic p does not divide the degree n of the polynomial) the coefficient of x n −1 is 0.
There are many open questions concerning permutation polynomials defined over finite fields. [ 8 ] [ 9 ]
Hermite's criterion is computationally intensive and can be difficult to use in making theoretical conclusions. However, Dickson was able to use it to find all permutation polynomials of degree at most five over all finite fields. These results are: [ 10 ] [ 7 ]
A list of all monic permutation polynomials of degree six in normalized form can be found in Shallue & Wanless (2013) . [ 11 ]
Beyond the above examples, the following list, while not exhaustive, contains almost all of the known major classes of permutation polynomials over finite fields. [ 12 ]
These can also be obtained from the recursion D n ( x , a ) = x D n − 1 ( x , a ) − a D n − 2 ( x , a ) , {\displaystyle D_{n}(x,a)=xD_{n-1}(x,a)-aD_{n-2}(x,a),} with the initial conditions D 0 ( x , a ) = 2 {\displaystyle D_{0}(x,a)=2} and D 1 ( x , a ) = x {\displaystyle D_{1}(x,a)=x} .
The first few Dickson polynomials are:
If a ≠ 0 and n > 1 then D n ( x , a ) permutes GF( q ) if and only if ( n , q 2 − 1) = 1 . [ 14 ] If a = 0 then D n ( x , 0) = x n and the previous result holds.
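The recursion and the permutation criterion can be verified numerically; the sketch below (restricted to prime fields for simplicity, an assumption not in the source) evaluates D_n(x, a) via the recursion and brute-force tests whether it permutes GF(p):

```python
from math import gcd

def dickson_value(n, a, x, p):
    """Evaluate the Dickson polynomial D_n(x, a) mod a prime p using
    D_n = x * D_{n-1} - a * D_{n-2}, with D_0 = 2 and D_1 = x."""
    d_prev, d_cur = 2 % p, x % p
    if n == 0:
        return d_prev
    for _ in range(n - 1):
        d_prev, d_cur = d_cur, (x * d_cur - a * d_prev) % p
    return d_cur

def dickson_permutes(n, a, p):
    # brute-force check of the value set of x -> D_n(x, a) over GF(p)
    return len({dickson_value(n, a, x, p) for x in range(p)}) == p
```

For p = 7 we have q^2 − 1 = 48, so D_5(x, 1) permutes GF(7) (gcd(5, 48) = 1) while D_3(x, 1) does not (gcd(3, 48) = 3), in agreement with the criterion above.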
The linearized polynomials that are permutation polynomials over GF( q r ) form a group under the operation of composition modulo x q r − x {\displaystyle x^{q^{r}}-x} , which is known as the Betti-Mathieu group, isomorphic to the general linear group GL( r , F q ) . [ 15 ]
An exceptional polynomial over GF( q ) is a polynomial in F q [ x ] which is a permutation polynomial on GF( q m ) for infinitely many m . [ 16 ]
A permutation polynomial over GF( q ) of degree at most q 1/4 is exceptional over GF( q ) . [ 17 ]
Every permutation of GF( q ) is induced by an exceptional polynomial. [ 17 ]
If a polynomial with integer coefficients (i.e., in ℤ[ x ] ) is a permutation polynomial over GF( p ) for infinitely many primes p , then it is the composition of linear and Dickson polynomials. [ 18 ] (See Schur's conjecture below).
In finite geometry coordinate descriptions of certain point sets can provide examples of permutation polynomials of higher degree. In particular, the points forming an oval in a finite projective plane , PG(2, q ) with q a power of 2, can be coordinatized in such a way that the relationship between the coordinates is given by an o-polynomial , which is a special type of permutation polynomial over the finite field GF( q ) .
The problem of testing whether a given polynomial over a finite field is a permutation polynomial can be solved in polynomial time . [ 19 ]
A polynomial f ∈ F q [ x 1 , … , x n ] {\displaystyle f\in \mathbb {F} _{q}[x_{1},\ldots ,x_{n}]} is a permutation polynomial in n variables over F q {\displaystyle \mathbb {F} _{q}} if the equation f ( x 1 , … , x n ) = α {\displaystyle f(x_{1},\ldots ,x_{n})=\alpha } has exactly q n − 1 {\displaystyle q^{n-1}} solutions in F q n {\displaystyle \mathbb {F} _{q}^{n}} for each α ∈ F q {\displaystyle \alpha \in \mathbb {F} _{q}} . [ 20 ]
For the finite ring Z / n Z one can construct quadratic permutation polynomials. In fact, this is possible if and only if n is divisible by p 2 for some prime number p . The construction is surprisingly simple; nevertheless it can produce permutations with certain good properties. That is why it has been used in the interleaver component of turbo codes in the 3GPP Long Term Evolution mobile telecommunication standard (see 3GPP technical specification 36.212 [ 21 ] e.g. page 14 in version 8.8.0).
Consider g ( x ) = 2 x 2 + x {\displaystyle g(x)=2x^{2}+x} for the ring Z /4 Z .
One sees: g ( 0 ) = 0 {\displaystyle g(0)=0} ; g ( 1 ) = 3 {\displaystyle g(1)=3} ; g ( 2 ) = 2 {\displaystyle g(2)=2} ; g ( 3 ) = 1 {\displaystyle g(3)=1} , so the polynomial defines the permutation ( 0 1 2 3 0 3 2 1 ) . {\displaystyle {\begin{pmatrix}0&1&2&3\\0&3&2&1\end{pmatrix}}.}
Consider the same polynomial g ( x ) = 2 x 2 + x {\displaystyle g(x)=2x^{2}+x} for the other ring Z / 8 Z .
One sees: g ( 0 ) = 0 {\displaystyle g(0)=0} ; g ( 1 ) = 3 {\displaystyle g(1)=3} ; g ( 2 ) = 2 {\displaystyle g(2)=2} ; g ( 3 ) = 5 {\displaystyle g(3)=5} ; g ( 4 ) = 4 {\displaystyle g(4)=4} ; g ( 5 ) = 7 {\displaystyle g(5)=7} ; g ( 6 ) = 6 {\displaystyle g(6)=6} ; g ( 7 ) = 1 {\displaystyle g(7)=1} , so the polynomial defines the permutation ( 0 1 2 3 4 5 6 7 0 3 2 5 4 7 6 1 ) . {\displaystyle {\begin{pmatrix}0&1&2&3&4&5&6&7\\0&3&2&5&4&7&6&1\end{pmatrix}}.}
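Both tables can be reproduced with a short sketch (the helper name is illustrative):

```python
def poly_map(g, n):
    """Tabulate x -> g(x) mod n and report whether the map permutes Z/nZ."""
    table = [g(x) % n for x in range(n)]
    return table, sorted(table) == list(range(n))

g = lambda x: 2 * x * x + x
table4, perm4 = poly_map(g, 4)   # the Z/4Z permutation shown above
table8, perm8 = poly_map(g, 8)   # the Z/8Z permutation shown above
```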
Consider g ( x ) = a x 2 + b x + c {\displaystyle g(x)=ax^{2}+bx+c} for the ring Z / p k Z .
Lemma: for k = 1 (i.e. Z / p Z ), such a polynomial defines a permutation only in the case a = 0 and b not equal to zero, i.e. only when the polynomial is in fact linear rather than quadratic.
Lemma: for k > 1, p > 2 ( Z / p k Z ), such a polynomial defines a permutation if and only if a ≡ 0 ( mod p ) {\displaystyle a\equiv 0{\pmod {p}}} and b ≢ 0 ( mod p ) {\displaystyle b\not \equiv 0{\pmod {p}}} .
Consider n = p 1 k 1 p 2 k 2 . . . p l k l {\displaystyle n=p_{1}^{k_{1}}p_{2}^{k_{2}}...p_{l}^{k_{l}}} , where p t are prime numbers.
Lemma: any polynomial g ( x ) = a 0 + ∑ 0 < i ≤ M a i x i {\textstyle g(x)=a_{0}+\sum _{0<i\leq M}a_{i}x^{i}} defines a permutation for the ring Z / n Z if and only if all the polynomials g p t ( x ) = a 0 , p t + ∑ 0 < i ≤ M a i , p t x i {\textstyle g_{p_{t}}(x)=a_{0,p_{t}}+\sum _{0<i\leq M}a_{i,p_{t}}x^{i}} define permutations of all the rings Z / p t k t Z {\displaystyle Z/p_{t}^{k_{t}}Z} , where a j , p t {\displaystyle a_{j,p_{t}}} are the remainders of a j {\displaystyle a_{j}} modulo p t k t {\displaystyle p_{t}^{k_{t}}} .
As a corollary, one can construct plenty of quadratic permutation polynomials using the following simple construction.
Consider n = p 1 k 1 p 2 k 2 … p l k l {\displaystyle n=p_{1}^{k_{1}}p_{2}^{k_{2}}\dots p_{l}^{k_{l}}} , assume that k 1 >1.
Consider a x 2 + b x {\displaystyle ax^{2}+bx} , such that a = 0 mod p 1 {\displaystyle a=0{\bmod {p}}_{1}} , but a ≠ 0 mod p 1 k 1 {\displaystyle a\neq 0{\bmod {p}}_{1}^{k_{1}}} ; assume that a = 0 mod p i k i {\displaystyle a=0{\bmod {p}}_{i}^{k_{i}}} , i > 1. And assume that b ≠ 0 mod p i {\displaystyle b\neq 0{\bmod {p}}_{i}} for all i = 1, ..., l .
(For example, one can take a = p 1 p 2 k 2 . . . p l k l {\displaystyle a=p_{1}p_{2}^{k_{2}}...p_{l}^{k_{l}}} and b = 1 {\displaystyle b=1} ).
Then such a polynomial defines a permutation.
To see this, we observe that for all primes p i , i > 1, the reduction of this quadratic polynomial modulo p i is actually a linear polynomial and hence is a permutation for trivial reasons. For the first prime number we use the lemma discussed previously to see that it defines a permutation.
For example, consider Z /12 Z and polynomial 6 x 2 + x {\displaystyle 6x^{2}+x} .
It defines a permutation ( 0 1 2 3 4 5 6 7 8 ⋯ 0 7 2 9 4 11 6 1 8 ⋯ ) . {\displaystyle {\begin{pmatrix}0&1&2&3&4&5&6&7&8&\cdots \\0&7&2&9&4&11&6&1&8&\cdots \end{pmatrix}}.}
A polynomial g ( x ) for the ring Z / p k Z is a permutation polynomial if and only if it permutes the finite field Z / p Z and g ′ ( x ) ≠ 0 mod p {\displaystyle g'(x)\neq 0{\bmod {p}}} for all x in Z / p k Z , where g ′( x ) is the formal derivative of g ( x ). [ 22 ]
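This criterion is straightforward to implement for polynomials with integer coefficients. A sketch (helper names are illustrative); since g′(x) mod p depends only on x mod p, it suffices to test x = 0, ..., p − 1:

```python
def evaluate(coeffs, x, m):
    # evaluate sum_i coeffs[i] * x**i modulo m
    return sum(c * pow(x, i, m) for i, c in enumerate(coeffs)) % m

def permutes_prime_power(coeffs, p, k):
    """Criterion for Z/p^k Z with k >= 2: g is a permutation polynomial iff
    g permutes Z/pZ and g'(x) != 0 (mod p) for all x. Note the test itself
    does not depend on k, so the answer is the same for every k >= 2."""
    deriv = [i * c for i, c in enumerate(coeffs)][1:]   # formal derivative
    permutes_mod_p = len({evaluate(coeffs, x, p) for x in range(p)}) == p
    deriv_nonzero = all(evaluate(deriv, x, p) != 0 for x in range(p))
    return permutes_mod_p and deriv_nonzero
```

For instance, g(x) = 2x^2 + x (coefficients [0, 1, 2]) passes for p = 2, matching the Z/8Z example above, while g(x) = x^3 fails for p = 3 because g′ = 3x^2 ≡ 0 (mod 3).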
Let K be an algebraic number field with R the ring of integers . The term "Schur's conjecture" refers to the assertion that, if a polynomial f defined over K is a permutation polynomial on R / P for infinitely many prime ideals P , then f is the composition of Dickson polynomials, degree-one polynomials, and polynomials of the form x k . In fact, Schur did not make any conjecture in this direction. The notion that he did is due to Fried, [ 23 ] who gave a flawed proof of a false version of the result. Correct proofs have been given by Turnwald [ 24 ] and Müller. [ 25 ] | https://en.wikipedia.org/wiki/Permutation_polynomial |
Permutational multivariate analysis of variance ( PERMANOVA ), [ 1 ] is a non-parametric multivariate statistical permutation test . PERMANOVA is used to compare groups of objects and test the null hypothesis that the centroids and dispersion of the groups as defined by measure space are equivalent for all groups. A rejection of the null hypothesis means that either the centroid and/or the spread of the objects is different between the groups. Hence the test is based on the prior calculation of the distance between any two objects included in the experiment.
PERMANOVA resembles ANOVA in that both measure the sum-of-squares within and between groups, and make use of an F test to compare within-group to between-group variance. However, while ANOVA bases the significance of the result on an assumption of normality, PERMANOVA assesses significance by comparing the actual F test result to that obtained from random permutations of the objects between the groups. Moreover, whilst PERMANOVA tests for similarity based on a chosen distance measure, ANOVA tests for similarity of the group averages .
In the simple case of a single factor with p groups and n objects in each group, the total sum-of-squares is determined as:
where N = p n {\displaystyle N=pn} is the total number of objects, and d i j 2 {\displaystyle d_{ij}^{2}} is the squared distance between objects i and j .
Similarly, the within groups sum-of-squares is determined as:
where δ i j {\displaystyle \delta _{ij}} is 1 if the observations i and j belong to the same group, and 0 otherwise.
Then, the between groups sum-of-squares ( S S A {\displaystyle SS_{A}} ) can be calculated as the difference between the overall and the within groups sum-of-squares:
Finally, a pseudo F-statistic is calculated:
where p is the number of groups.
Finally, the PERMANOVA procedure draws significance for the actual F statistic by performing multiple permutations of the data. In each permutation π {\displaystyle \pi } the items are shuffled between groups, and the F-ratio is calculated for it, F π {\displaystyle F^{\pi }} . The P-value is then calculated by:
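The whole procedure can be sketched in a few lines of Python with NumPy (function names are illustrative, not from any package). `dists` is the full symmetric N × N distance matrix, and a balanced single-factor design (p groups of n objects) is assumed, as above:

```python
import numpy as np

def permanova_f(dists, labels):
    """Pseudo-F statistic for a single factor, p groups of n objects each."""
    labels = np.asarray(labels)
    _, codes = np.unique(labels, return_inverse=True)
    N = len(codes)
    p = int(codes.max()) + 1
    n = N // p                                  # balanced design assumed
    sq = dists ** 2
    ss_t = sq.sum() / (2 * N)                   # total sum of squares
    same = codes[:, None] == codes[None, :]     # 1 iff i, j in the same group
    ss_w = (sq * same).sum() / (2 * n)          # within-group sum of squares
    ss_a = ss_t - ss_w                          # between-group sum of squares
    return (ss_a / (p - 1)) / (ss_w / (N - p))

def permanova_pvalue(dists, labels, n_perm=999, seed=None):
    """Permutation p-value: shuffle labels, recompute F, count F_pi >= F_obs."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    f_obs = permanova_f(dists, labels)
    hits = sum(permanova_f(dists, rng.permutation(labels)) >= f_obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)
```

The sums run over the full symmetric matrix, so each unordered pair is counted twice; the factors 2N and 2n absorb this.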
PERMANOVA is widely used in the field of ecology and is implemented in several software packages including the PERMANOVA [ 2 ] software, PRIMER and R (programming language) Vegan, lmPerm [ 3 ] and Python (programming language) skbio [ 4 ] packages. | https://en.wikipedia.org/wiki/Permutational_analysis_of_variance |
Permutationally invariant quantum state tomography (PI quantum state tomography) is a method for the partial determination of the state of a quantum system consisting of many subsystems.
In general, the number of parameters needed to describe the quantum mechanical state of a system consisting of N {\displaystyle N} subsystems increases exponentially with N . {\displaystyle N.} For instance, for an N {\displaystyle N} - qubit system, 2 ( N + 1 ) − 2 {\displaystyle 2^{(N+1)}-2} real parameters are needed to describe the state vector of a pure state, or 2 2 N − 1 {\displaystyle 2^{2N}-1} real parameters are needed to describe the density matrix of a mixed state . Quantum state tomography is a method to determine all these parameters from a series of measurements on many independent and identically prepared systems. Thus, in the case of full quantum state tomography , the number of measurements needed scales exponentially with the number of particles or qubits.
For large systems, the determination of the entire quantum state is no longer possible in practice and one is interested in methods that determine only a subset of the parameters necessary to characterize the quantum state that still contains important information about the state. Permutationally invariant quantum tomography is such a method. PI quantum tomography only measures ϱ P I , {\displaystyle \varrho _{\rm {PI}},} the permutationally invariant part of the density matrix. For the procedure, it is sufficient to carry out local measurements on the subsystems. If the state is close to being permutationally invariant, which is the case in many practical situations, then ϱ P I {\displaystyle \varrho _{\rm {PI}}} is close to the density matrix of the system.
Even if the state is not permutationally invariant, ϱ P I {\displaystyle \varrho _{\rm {PI}}} can still be used for entanglement detection and computing relevant operator expectations values. Thus, the procedure does not assume the permutationally invariance of the quantum state. The number of independent real parameters of ϱ P I {\displaystyle \varrho _{\rm {PI}}} for N {\displaystyle N} qubits scales as ∼ N 3 . {\displaystyle \sim N^{3}.} The number of local measurement settings scales as ∼ N 2 . {\displaystyle \sim N^{2}.} Thus, permutationally invariant quantum tomography is considered manageable even for large N {\displaystyle N} . In other words, permutationally invariant quantum tomography is considered scalable .
The method can be used, for example, for the reconstruction of the density matrices of systems with more than 10 particles, for photonic systems, for trapped cold ions or systems in cold atoms .
PI state tomography reconstructs the permutationally invariant part of the density matrix, which is defined as the equal mixture of the quantum states obtained after permuting the particles in all the possible ways [ 1 ]
where Π k {\displaystyle \Pi _{k}} denotes the k th permutation. For instance, for N = 2 {\displaystyle N=2} we have two permutations. Π 1 {\displaystyle \Pi _{1}} leaves the order of the two particles unchanged. Π 2 {\displaystyle \Pi _{2}} exchanges the two particles. In general, for N {\displaystyle N} particles, we have N ! {\displaystyle N!} permutations.
It is easy to see that ϱ P I {\displaystyle \varrho _{\rm {PI}}} is the density matrix that is obtained if the order of the particles is not taken into account. This corresponds to an experiment in which a subset of N {\displaystyle N} particles is randomly selected from a larger ensemble. The state of this smaller group is of course permutationally invariant.
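For small systems the definition can be applied directly. The sketch below (illustrative only, not a tomography routine) builds the qubit-permutation unitaries Π_k and averages Π_k ϱ Π_k† over all N! permutations:

```python
import numpy as np
from itertools import permutations

def permutation_matrix(perm, n_qubits):
    """Unitary on (C^2)^{(x) n} that puts qubit perm[q] at position q."""
    dim = 2 ** n_qubits
    P = np.zeros((dim, dim))
    for basis in range(dim):
        bits = [(basis >> (n_qubits - 1 - q)) & 1 for q in range(n_qubits)]
        target = 0
        for q in range(n_qubits):
            target = (target << 1) | bits[perm[q]]
        P[target, basis] = 1.0
    return P

def pi_part(rho, n_qubits):
    """varrho_PI: equal mixture of Pi_k rho Pi_k^dagger over all N! permutations."""
    mats = [permutation_matrix(p, n_qubits) for p in permutations(range(n_qubits))]
    return sum(P @ rho @ P.T for P in mats) / len(mats)
```

For example, for ϱ = |01⟩⟨01| on two qubits, ϱ_PI is the equal mixture of |01⟩⟨01| and |10⟩⟨10|, while a symmetric state such as (|00⟩ + |11⟩)/√2 is left unchanged.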
The number of degrees of freedom of ϱ P I {\displaystyle \varrho _{\rm {PI}}} scales polynomially with the number of particles. For a system of N {\displaystyle N} qubits (spin- 1 / 2 {\displaystyle 1/2} particles) the number of real degrees of freedom is [ 2 ]
To determine these degrees of freedom, [ 1 ]
local measurement settings are needed. Here, a local measurement settings means that the operator A j {\displaystyle A_{j}} is to be measured on each particle. By repeating the measurement and collecting enough data, all two-point, three-point and higher order correlations can be determined.
So far we have discussed that the number of measurements scales polynomially with the number of qubits .
However, for using the method in practice, the entire tomographic procedure must be scalable. Thus, we need to store the state in the computer in a scalable way. Clearly, the straightforward way of storing the N {\displaystyle N} -qubit state in a 2 N × 2 N {\displaystyle 2^{N}\times 2^{N}} density matrix is not scalable. However, ϱ P I {\displaystyle \varrho _{\rm {PI}}} is a block-diagonal matrix due to its permutational invariance and thus it can be stored much more efficiently. [ 3 ]
Moreover, it is well known that due to statistical fluctuations and systematic errors the density matrix obtained from the measured state by linear inversion is not positive semidefinite and it has some negative eigenvalues. An important step in a typical tomography is fitting a physical, i. e., positive semidefinite density matrix on the tomographic data. This step often represents a bottleneck in the overall process in full state tomography. However, PI tomography, as we have just discussed, allows the density matrix to be stored much more efficiently, which also allows an efficient fitting using convex optimization , which also guarantees that the solution is a global optimum. [ 3 ]
PI tomography is commonly used in experiments involving permutationally invariant states. If the density matrix ϱ P I {\displaystyle \varrho _{\rm {PI}}} obtained by PI tomography is entangled , then density matrix of the system, ϱ {\displaystyle \varrho } is also entangled. For this reason, the usual methods for entanglement verification, such as entanglement witnesses or the Peres-Horodecki criterion , can be applied to ϱ P I {\displaystyle \varrho _{\rm {PI}}} . Remarkably, the entanglement detection carried out in this way does not assume that the quantum system itself is permutationally invariant.
Moreover, the expectation value of any permutationally invariant operator is the same for ϱ {\displaystyle \varrho } and for ϱ P I . {\displaystyle \varrho _{\rm {PI}}.} Very relevant examples of such operators are projectors to symmetric states, such as the Greenberger–Horne–Zeilinger state , the W state and symmetric Dicke states. Thus, we can obtain the fidelity with respect to the above-mentioned quantum states as the expectation value of the corresponding projectors in the state ϱ P I . {\displaystyle \varrho _{\rm {PI}}.}
The quantum fidelity of ϱ P I {\displaystyle \varrho _{\rm {PI}}} and ϱ {\displaystyle \varrho } can be bounded from below as [ 1 ]
where P s {\displaystyle P_{s}} is the projector to the symmetric subspace. For symmetric states, ⟨ P s ⟩ = 1 {\displaystyle {\langle }P_{s}\rangle =1} holds. This way, we can lower bound the difference knowing only ϱ P I . {\displaystyle \varrho _{\rm {PI}}.}
There are other approaches for tomography that need fewer measurements than full quantum state tomography. As we have discussed, PI tomography is typically most useful for quantum states that are close to being permutationally invariant. Compressed sensing is especially suited for low rank states. [ 4 ] Matrix product state tomography is most suitable for, e.g., cluster states and ground states of spin models. [ 5 ] Permutationally invariant tomography can be combined with compressed sensing. In this case, the number of local measurement settings needed can even be smaller than for permutationally invariant tomography. [ 2 ]
Permutationally invariant tomography has been tested experimentally for a four-qubit symmetric Dicke state, [ 1 ] and also for a six-qubit symmetric Dicke in photons, and has been compared to full state tomography and compressed sensing. [ 2 ] A simulation of permutationally invariant tomography shows that reconstruction of a positive semidefinite density matrix of 20 qubits from measured data is possible in a few minutes on a standard computer. [ 3 ] The hybrid method combining permutationally invariant tomography and compressed sensing has also been tested. [ 2 ] | https://en.wikipedia.org/wiki/Permutationally_invariant_quantum_state_tomography |
Permute (and Shuffle) instructions, part of bit manipulation as well as vector processing , copy unaltered contents from a source array to a destination array, where the indices are specified by a second source array. [ 1 ] The size (bitwidth) of the source elements is not restricted but remains the same as the destination size.
There exist two important permute variants, known as gather and scatter, respectively. The gather variant is as follows:
where the scatter variant is:
Note that unlike in memory-based gather-scatter all three of dest , src , and indices are registers (or parts of registers in the case of bit-level permute), not memory locations.
The scatter variant can be seen to "scatter" the source elements across (into) the destination, whereas the "gather" variant gathers data from the indexed source elements.
Given that the indices may be repeated in both variants, the resultant output is not a strict mathematical permutation because duplicates can occur in the output.
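In Python-like pseudocode the two variants differ only in which side of the assignment is indexed; repeated indices produce duplicate outputs in the gather case and overwrites in the scatter case:

```python
def permute_gather(src, indices):
    """dest[i] = src[indices[i]] -- gather data from the indexed source elements."""
    return [src[ix] for ix in indices]

def permute_scatter(src, indices):
    """dest[indices[i]] = src[i] -- scatter source elements into the destination.
    With repeated indices, later writes overwrite earlier ones."""
    dest = [None] * len(src)
    for i, ix in enumerate(indices):
        dest[ix] = src[i]
    return dest
```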
A special case of permute is also used in GPU " swizzling " (again, not strictly a permutation) which performs on-the-fly reordering of subvector data so as to align or duplicate elements with the appropriate SIMD lane .
Permute instructions occur in scalar processors, vector processing engines, and GPUs . In vector instruction sets they are typically named "Register Gather/Scatter" operations such as in RISC-V vectors, [ 2 ] and take Vectors as input for both source elements and source array, and output another Vector.
In scalar instruction sets the scalar registers are broken down into smaller sections (unpacked, SIMD style) where the fragments are then used as array sources. The (small, partial) results are then concatenated (packed) back into the scalar register as the result.
Some ISAs, particularly for cryptographic applications, even have bit-level permute operations, such as bdep (bit deposit) in RISC-V bitmanip; [ 3 ] in the Power ISA it is known as bpermd and has been included for several decades, and is still in the Power ISA v.3.0 B spec. [ 4 ]
Some non-vector ISAs, due to there sometimes being insufficient space in the one source input register to specify the permutation source array in full (particularly if the operation involves bit-level permutation), include partial reordering instructions instead. Examples include VSHUFF32x4 from AVX-512 .
Permute operations in different forms are surprisingly common, occurring in AltiVec , Power ISA , PowerPC G4 , AVX-512 , SVE2 , [ 5 ] vector processors, and GPUs . They are sufficiently important that LLVM added the shufflevector [ 6 ] intrinsic and GCC added the __builtin_shuffle intrinsic. [ 7 ] GCC's intrinsic matches the functionality of OpenCL 's shuffle intrinsics. [ 8 ] Note that all of these, mathematically, are not permutations because duplicates can occur in the output. | https://en.wikipedia.org/wiki/Permute_instruction |
A permuted congruential generator ( PCG ) is a pseudorandom number generation algorithm developed in 2014 by Dr. M.E. O'Neill which applies an output permutation function to improve the statistical properties of a modulo-2 n linear congruential generator (LCG). It achieves excellent statistical performance [ 1 ] [ 2 ] [ 3 ] [ 4 ] with small and fast code, and small state size. [ 5 ]
LCGs with a power-of-2 modulus are simple, efficient, and have uniformly distributed binary outputs, but suffer from a well-known problem of short periods in the low-order bits. [ 5 ] : 31–34
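The short-period problem is easy to demonstrate: with a power-of-two modulus and odd multiplier and increment, bit 0 of the state obeys s′ ≡ s + 1 (mod 2), so it simply alternates. A sketch (the constants are the common 64-bit LCG parameters used in PCG's reference code, an assumption for illustration):

```python
def lcg_states(seed, n, a=6364136223846793005, c=1442695040888963407, bits=64):
    """Successive raw (untruncated) states of an LCG with modulus 2**bits."""
    mask = (1 << bits) - 1
    states, s = [], seed
    for _ in range(n):
        s = (a * s + c) & mask
        states.append(s)
    return states

# bit 0 alternates with period 2, even though the full state has period 2**64
low_bits = [s & 1 for s in lcg_states(42, 16)]
```

More generally, bit k of such an LCG has period at most 2^(k+1), which is why the high bits are the useful ones.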
A PCG addresses this by adding an output transformation between the LCG state and the PCG output. This adds two elements to the LCG:
The variable rotation ensures that all output bits depend on the most-significant bit of state, so all output bits have full period.
The PCG family includes a number of variants. The core LCG is defined for widths from 8 to 128 bits [ citation needed ] , although only 64 and 128 bits are recommended for practical use; smaller sizes are for statistical tests of the technique.
The additive constant in the LCG can be varied to produce different streams. The constant is an arbitrary odd integer, [ 6 ] so it does not need to be stored explicitly; the address of the state variable itself (with the low bit set) can be used.
There are several different output transformations defined. All perform well, but some have a larger margin than others. [ 5 ] : 39 They are built from the following components:
Each of these operations is either invertible (and thus one-to-one ) or a truncation (and thus 2 k -to-one for some fixed k ), so their composition maps the same fixed number of input states to each output value. This preserves the equidistribution of the underlying LCG.
These are combined into the following recommended output transformations, illustrated here in their most common sizes:
Finally, if a generator period longer than 2 128 is required, the generator can be extended with an array of sub-generators. One is chosen (in rotation) to be added to the main generator's output, and every time the main generator's state reaches zero, the sub-generators are cycled in a pattern which provides a period equal to 2 to the power of the total state size.
The generator recommended for most users [ 5 ] : 43 is PCG-XSH-RR with 64-bit state and 32-bit output. It can be implemented as:
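A Python sketch of PCG-XSH-RR, following the description above (64-bit LCG, xorshift-high output, then a data-dependent rotation); the multiplier and the two-step seeding procedure follow O'Neill's published pcg32 reference code, while the class name is illustrative:

```python
MULT = 6364136223846793005        # 64-bit LCG multiplier from the PCG reference code
MASK64 = (1 << 64) - 1

class PCG32:
    """Sketch of PCG-XSH-RR: 64-bit state, 32-bit output."""

    def __init__(self, seed, stream=0):
        self.inc = ((stream << 1) | 1) & MASK64   # increment must be odd
        self.state = 0
        self.next()                               # reference seeding procedure
        self.state = (self.state + seed) & MASK64
        self.next()

    def next(self):
        old = self.state
        self.state = (old * MULT + self.inc) & MASK64   # advance the LCG
        # XSH-RR output: xorshift-high, then rotate right by the top 5 bits
        xorshifted = (((old >> 18) ^ old) >> 27) & 0xFFFFFFFF
        rot = old >> 59
        return ((xorshifted >> rot) | (xorshifted << ((32 - rot) & 31))) & 0xFFFFFFFF
```

Note that the output is computed from the *old* state, matching the instruction-level-parallelism point made below.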
The generator applies the output transformation to the initial state rather than the final state in order to increase the available instruction-level parallelism to maximize performance on modern superscalar processors . [ 5 ] : 43
A slightly faster version eliminates the increment, reducing the LCG to a multiplicative ( Lehmer -style) generator with a period of only 2 62 , and uses the weaker XSH-RS output function:
The time saving is minimal, as the most expensive operation (the 64×64-bit multiply) remains, so the normal version is preferred except in extremis . Still, this faster version also passes statistical tests. [ 4 ]
When executing on a 32-bit processor, the 64×64-bit multiply must be implemented using three 32×32→64-bit multiply operations. To reduce that to two, there are 32-bit multipliers which perform almost as well as the 64-bit one, such as 0xf13283ad [ 6 ] , 0xffffffff0e703b65 or 0xf2fc5985.
O'Neill proposes testing PRNGs by applying statistical tests to their reduced-size variants and determining the minimum number of internal state bits required to pass. [ 7 ] TestU01's BigCrush examines enough data to detect a period of 2 35 , so even an ideal generator requires 36 bits of state to pass it. Some very poor generators can pass if given a large enough state; [ 8 ] passing despite a small state is a measure of an algorithm's quality, and shows how large a safety margin exists between that lower limit and the state size used in practical applications.
PCG-RXS-M-XS (with 32-bit output) passes BigCrush with 36 bits of state (the minimum possible), PCG-XSH-RR ( pcg32() above) requires 39, and PCG-XSH-RS ( pcg32_fast() above) requires 49 bits of state. For comparison, xorshift* , one of the best of the alternatives, requires 40 bits of state, [ 5 ] : 19 and Mersenne twister fails despite 19937 bits of state. [ 9 ]
It has been shown that it is practically possible (with a large computation) to recover the seed of the pseudo-random generator given 512 consecutive output bytes. [ 10 ] This implies that it is practically possible to predict the rest of the pseudo-random stream given 512 bytes.
A perovskite is a crystalline material of formula ABX 3 with a crystal structure similar to that of the mineral perovskite , the latter consisting of calcium titanium oxide (CaTiO 3 ). [ 2 ] The mineral was first discovered in the Ural mountains of Russia by Gustav Rose in 1839 and named after Russian mineralogist L. A. Perovski (1792–1856). In addition to being one of the most abundant structural families, perovskites have wide-ranging properties and applications. [ 3 ]
Perovskite structures are adopted by many compounds that have the chemical formula ABX 3 . 'A' and 'B' are positively charged ions (i.e. cations), often of very different sizes, and X is a negatively charged ion (an anion, frequently oxide) that bonds to both cations. The 'A' atoms are generally larger than the 'B' atoms. The ideal cubic structure has the B cation in 6-fold coordination, surrounded by an octahedron of anions, and the A cation in 12-fold cuboctahedral coordination. Additional perovskite forms may exist where either or both of the A and B sites have a configuration of A1 x-1 A2 x and/or B1 y-1 B2 y and the X may deviate from the ideal coordination configuration as ions within the A and B sites undergo changes in their oxidation states. [ 4 ] The idealized form is a cubic structure ( space group Pm 3 m, no. 221), which is rarely encountered. The orthorhombic (e.g. space group Pnma, no. 62, or Amm2, no. 38) and tetragonal (e.g. space group I4/mcm, no. 140, or P4mm, no. 99) structures are the most common non-cubic variants. Although the perovskite structure is named after CaTiO 3 , this mineral has a non-cubic structure. SrTiO 3 and CaRbF 3 are examples of cubic perovskites. Barium titanate is an example of a perovskite which can take on the rhombohedral ( space group R3m, no. 160), orthorhombic, tetragonal and cubic forms depending on temperature. [ 5 ]
In the idealized cubic unit cell of such a compound, the type 'A' atom sits at cube corner position (0, 0, 0), the type 'B' atom sits at the body-center position (1/2, 1/2, 1/2) and X atoms (typically oxygen) sit at face centered positions (1/2, 1/2, 0), (1/2, 0, 1/2) and (0, 1/2, 1/2). The diagram to the right shows edges for an equivalent unit cell with A in the cube corner position, B at the body center, and X at face-centered positions.
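The coordination numbers quoted above (12 for A, 6 for B) can be checked directly from these fractional coordinates. A short sketch, counting anions at the nearest-neighbour distance over periodic images of the cell (the lattice constant a = 1 is an arbitrary illustrative choice, since only ratios matter):

```python
from itertools import product
from math import dist, isclose, sqrt

# Fractional coordinates of the ideal cubic ABX3 cell (a = 1)
A = (0.0, 0.0, 0.0)          # cube corner
B = (0.5, 0.5, 0.5)          # body center
X_SITES = [(0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)]  # face centers


def count_neighbours(site, target_distance):
    """Count X anions lying at target_distance from site, over 3x3x3 periodic images."""
    count = 0
    for (i, j, k), x in product(product((-1, 0, 1), repeat=3), X_SITES):
        image = (x[0] + i, x[1] + j, x[2] + k)
        if isclose(dist(site, image), target_distance, rel_tol=1e-9):
            count += 1
    return count


b_coordination = count_neighbours(B, 0.5)        # B-X distance is a/2 -> octahedral
a_coordination = count_neighbours(A, sqrt(0.5))  # A-X distance is a/sqrt(2) -> cuboctahedral
```

Running this recovers the 6-fold octahedron around B and the 12-fold cuboctahedron around A described in the text.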
Four general categories of cation-pairing are possible: A + B 2+ X − 3 , or 1:2 perovskites; [ 6 ] A 2+ B 4+ X 2− 3 , or 2:4 perovskites; A 3+ B 3+ X 2− 3 , or 3:3 perovskites; and A + B 5+ X 2− 3 , or 1:5 perovskites.
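Each of these pairings balances charge in an ABX 3 formula unit, which is a quick arithmetic check:

```python
# (q_A, q_B, q_X) for the four cation-pairing categories listed above
pairings = {
    "1:2": (1, 2, -1),
    "2:4": (2, 4, -2),
    "3:3": (3, 3, -2),
    "1:5": (1, 5, -2),
}

# ABX3 must be charge neutral: q_A + q_B + 3*q_X == 0
neutral = {name: qa + qb + 3 * qx == 0 for name, (qa, qb, qx) in pairings.items()}
```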
The relative ion size requirements for stability of the cubic structure are quite stringent, so slight buckling and distortion can produce several lower-symmetry distorted versions, in which the coordination numbers of A cations, B cations or both are reduced. Tilting of the BO 6 octahedra reduces the coordination of an undersized A cation from 12 to as low as 8. Conversely, off-centering of an undersized B cation within its octahedron allows it to attain a stable bonding pattern. The resulting electric dipole is responsible for the property of ferroelectricity, as shown by perovskites such as BaTiO 3 that distort in this fashion.
Complex perovskite structures contain two different B-site cations. This results in the possibility of ordered and disordered variants.
Also common are the defect perovskites . Instead of the ideal ABO 3 stoichiometry, defect perovskites are missing some or all of the A, B, or O atoms. One example is rhenium trioxide . It is missing the A atoms. Uranium trihydride is another example of a simple defect perovskite. Here, all B sites are vacant, H − occupies the O sites, and the large U 3+ ion occupies the A site.
Many high temperature superconductors , especially cuprate superconductor , adopt defect perovskite structures. The prime example is yttrium barium copper oxide (YBCO), which has the formula YBa 2 Cu 3 O 7 . In this material Y 3+ and Ba 2+ , which are relatively large, occupy all A sites. Cu occupies all B sites. Two O atoms per formula unit are absent, hence the term defect . The compound YBa 2 Cu 3 O 7 is a superconductor. The average oxidation state of copper is Cu (7/3)+ since Y3+ and Ba2+ have fixed oxidation states. When heated in the absence of O 2 , the solid loses its superconducting properties, relaxes to the stoichiometry YBa 2 Cu 3 O 6.5 , and all copper sites convert to Cu 2+ . The material thus is an oxygen carrier , shuttling between two defect perovskites:
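The Cu (7/3)+ average oxidation state quoted above follows from simple charge balance, since Y and Ba contribute fixed charges. A quick check using exact fractions:

```python
from fractions import Fraction

# Charge balance for YBa2Cu3O(n): q(Y) + 2*q(Ba) + 3*q(Cu) + n*q(O) = 0,
# with Y3+, Ba2+ and O2- fixed.
def avg_cu_charge(n_oxygen):
    """Average copper oxidation state in YBa2Cu3O(n_oxygen)."""
    return -(3 + 2 * 2 + Fraction(n_oxygen) * (-2)) / 3


q_cu_full = avg_cu_charge(7)                   # superconducting YBa2Cu3O7
q_cu_reduced = avg_cu_charge(Fraction(13, 2))  # oxygen-depleted YBa2Cu3O6.5
```

The full-oxygen phase gives the fractional 7/3+ average, and the O 6.5 phase gives exactly 2+, matching the text's statement that all copper sites convert to Cu 2+ on reduction.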
Perovskites can be deposited as epitaxial thin films on top of other perovskites, [ 7 ] using techniques such as pulsed laser deposition and molecular-beam epitaxy . These films can be a couple of nanometres thick or as small as a single unit cell. [ 8 ]
Perovskites may be structured in layers, with the ABO 3 structure separated by thin sheets of intrusive material. Based on the chemical makeup of their intrusions, these layered phases can be defined as follows: [ 9 ]
Although a large number of simple ABX 3 perovskites are known, this number can be greatly expanded by increasingly doubling the A and B sites to give complex AA ′ BB ′ X 6 compositions. [ 15 ] Ordered double perovskites are usually denoted as A 2 BB ′ O 6 , whereas disordered ones are denoted as A(BB ′ )O 3 . In ordered perovskites, three different types of ordering are possible: rock-salt, layered, and columnar. Rock-salt ordering is by far the most common, followed by the much less common disordered arrangement; columnar and layered orderings are rarer still. [ 15 ] The formation of rock-salt superstructures depends on the B-site cation ordering. [ 16 ] [ 17 ] Octahedral tilting can occur in double perovskites; however, Jahn–Teller distortions and alternative modes alter the B–O bond length.
The lattice of an antiperovskite (or inverse perovskite ) is the same as that of the perovskite structure, but the anion and cation positions are switched. The typical perovskite structure is represented by the general formula ABX 3 , where A and B are cations and X is an anion. When the anion is the ( divalent ) oxide ion, A and B cations can have charges 1 and 5, respectively, 2 and 4, respectively, or 3 and 3, respectively. In antiperovskite compounds, the general formula is reversed, so that the X sites are occupied by an electropositive ion, i.e., cation (such as an alkali metal ), while A and B sites are occupied by different types of anion. In the ideal cubic cell, the A anion is at the corners of the cube, the B anion at the octahedral center, and the X cation is at the faces of the cube. Thus the A anion has a coordination number of 12, while the B anion sits at the center of an octahedron with a coordination number of 6. Similar to the perovskite structure, most antiperovskite compounds are known to deviate from the ideal cubic structure, forming orthorhombic or tetragonal phases depending on temperature and pressure.
Whether a compound will form an antiperovskite structure depends not only on its chemical formula, but also the relative sizes of the ionic radii of the constituent atoms. This constraint is expressed in terms of the Goldschmidt tolerance factor , which is determined by the radii, r a , r b and r x , of the A, B, and X ions.
For the antiperovskite structure to be structurally stable, the tolerance factor must be between 0.71 and 1. If between 0.71 and 0.9, the crystal will be orthorhombic or tetragonal. If between 0.9 and 1, it will be cubic. By mixing the B anions with another element of the same valence but different size, the tolerance factor can be altered. Different combinations of elements result in different compounds with different regions of thermodynamic stability for a given crystal symmetry. [ 18 ]
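The text does not reproduce the tolerance-factor formula itself; the standard Goldschmidt form is t = (r_A + r_X) / (sqrt(2) * (r_B + r_X)). A sketch using the stability windows quoted above; the SrTiO 3 radii used for illustration are approximate Shannon ionic radii, not values taken from this article:

```python
from math import sqrt


def tolerance_factor(r_a: float, r_b: float, r_x: float) -> float:
    """Goldschmidt tolerance factor t = (r_A + r_X) / (sqrt(2) * (r_B + r_X))."""
    return (r_a + r_x) / (sqrt(2) * (r_b + r_x))


def predicted_symmetry(t: float) -> str:
    """Stability windows as quoted in the text."""
    if t < 0.71:
        return "not structurally stable as a perovskite"
    if t < 0.9:
        return "orthorhombic or tetragonal"
    if t <= 1.0:
        return "cubic"
    return "outside the stated stability window"


# Illustrative Shannon radii in angstroms: Sr2+ (XII) ~1.44, Ti4+ (VI) ~0.605, O2- ~1.40
t_srtio3 = tolerance_factor(1.44, 0.605, 1.40)  # close to 1, consistent with cubic SrTiO3
```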
Antiperovskites naturally occur in sulphohalite, galeite, schairerite, kogarkoite , nacaphite, arctite , polyphite, and hatrurite. [ 19 ] It is also demonstrated in superconductive compounds such as CuNNi 3 and ZnNNi 3 .
Discovered in 1930, metallic antiperovskites have the formula M 3 AB where M represents a magnetic element, Mn, Ni, or Fe; A represents a transition or main group element, Ga, Cu, Sn, and Zn; and B represents N, C, or B. These materials exhibit superconductivity , giant magnetoresistance , and other unusual properties.
Antiperovskite manganese nitrides exhibit zero thermal expansion . [ 20 ] [ 21 ]
Beyond the most common perovskite symmetries ( cubic , tetragonal , orthorhombic ), a more precise determination leads to a total of 23 different structure types. [ 22 ] These 23 structures can be categorized into 4 different so-called tilt systems, which are denoted by their respective Glazer notation. [ 23 ]
The notation consists of a letter a/b/c, which describes the rotation around a Cartesian axis, and a superscript +/−/0 to denote the rotation with respect to the adjacent layer. A "+" denotes that the rotations of two adjacent layers point in the same direction, whereas a "−" denotes that adjacent layers are rotated in opposite directions. Common examples are a 0 a 0 a 0 , a 0 a 0 a − and a 0 a 0 a + , which are visualized here.
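Glazer symbols can be decoded mechanically from this description. A minimal, illustrative parser (plain-ASCII strings such as "a0a0a+" stand in for the typeset notation, and the function name is my own):

```python
import re

_MEANING = {
    "+": "in-phase (adjacent layers rotate the same way)",
    "-": "anti-phase (adjacent layers rotate in opposite directions)",
    "0": "no rotation",
}


def parse_glazer(symbol: str):
    """Parse an ASCII Glazer symbol like 'a0a0a+' into per-axis tilt descriptions.

    Equal letters denote equal rotation magnitudes about the respective axes.
    """
    m = re.fullmatch(r"([a-c][0+\-])([a-c][0+\-])([a-c][0+\-])", symbol)
    if not m:
        raise ValueError(f"not a Glazer symbol: {symbol!r}")
    return [(axis, part[0], _MEANING[part[1]])
            for axis, part in zip("xyz", m.groups())]


tilts = parse_glazer("a0a0a+")  # one of the common examples named in the text
```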
Aside from perovskite itself, some perovskite minerals include loparite and bridgmanite . [ 2 ] [ 24 ] Bridgmanite is a silicate with the chemical formula (Mg,Fe)SiO 3 . It is the most common mineral in the Earth's mantle. At the high pressures associated with the deeper mantle, the Si sites feature octahedral units. [ 2 ]
At the high pressure conditions of the Earth's lower mantle , the pyroxene enstatite , MgSiO 3 , which otherwise has tetrahedral Si sites, transforms into a denser perovskite-structured polymorph ; this phase may be the most common mineral in the Earth. [ 25 ] This phase has the orthorhombically distorted perovskite structure (GdFeO 3 -type structure) that is stable at pressures from ~24 GPa to ~110 GPa. However, it cannot be transported from depths of several hundred km to the Earth's surface without transforming back into less dense materials. At higher pressures, MgSiO 3 perovskite , commonly known as silicate perovskite, transforms to post-perovskite .
Although the most common perovskite compounds contain oxygen, there are a few perovskite compounds that form without oxygen. Fluoride perovskites such as NaMgF 3 are well known. A large family of metallic perovskite compounds can be represented by RT 3 M (R: rare-earth or other relatively large ion, T: transition metal ion and M: light metalloids). The metalloids occupy the octahedrally coordinated "B" sites in these compounds. RPd 3 B, RRh 3 B and CeRu 3 C are examples. MgCNi 3 is a metallic perovskite compound and has received a lot of attention because of its superconducting properties. An even more exotic type of perovskite is represented by the mixed oxide-aurides of Cs and Rb, such as Cs 3 AuO, which contain large alkali cations in the traditional "anion" sites, bonded to O 2− and Au − anions. [ 26 ]
Of interest in the context of solar energy are materials of the type [R 4 N] + [MX 3 ] − . Here the quaternary ammonium cation occupies the A site and the metal occupies the B site. These materials are the basis of perovskite solar cells . They have high charge carrier mobility and charge carrier lifetime, which allow light-generated electrons and holes to move far enough to be extracted as current, instead of losing their energy as heat within the cell. [ 28 ] [ 29 ] [ 30 ]
Probably the dominant applications of perovskites are in microelectronics and telecommunications , which exploit the ferroelectric properties of barium titanate , lithium niobate , lead zirconium titanate and others.
Perovskites exhibit many physical properties of interest to materials science. They are applicable to lasers [ 31 ] [ 32 ] [ 33 ] and are also of interest for scintillators, as they have a large light yield for radiation conversion. Because of the flexibility of bond angles inherent in the perovskite structure, there are many different types of distortions that can occur from the ideal structure. These include tilting of the octahedra , displacements of the cations out of the centers of their coordination polyhedra, and distortions of the octahedra driven by electronic factors ( Jahn-Teller distortions ). [ 34 ] The financially biggest application of perovskites is in ceramic capacitors , in which BaTiO 3 is used because of its high dielectric constant. [ 35 ] [ 36 ] Light-emitting diodes exploit the high photoluminescence quantum efficiencies of perovskites. [ 37 ] [ 38 ] In the area of photoelectrolysis, perovskite photovoltaics have been used to drive water electrolysis at 12.3% efficiency. [ 39 ] [ 40 ] Scintillators based on cerium-doped lutetium aluminum perovskite (LuAP:Ce) single crystals have been reported. [ 41 ] Layered Ruddlesden-Popper perovskites have shown potential as fast novel scintillators with room temperature light yields up to 40,000 photons/MeV, fast decay times below 5 ns and negligible afterglow. [ 42 ] [ 43 ] In addition, this class of materials has shown capability for wide-range particle detection, including alpha particles and thermal neutrons . [ 44 ]
Perovskite light-emitting diodes (PeLEDs) are candidates for display and lighting technologies. Researchers have shown interest in PeLEDs owing to their narrow emission bandwidth , adjustable spectrum , high color purity, and suitability for solution fabrication. [ 1 ] [ 2 ]
PeLEDs have not surpassed the efficiency of commercial organic light-emitting diodes ( OLEDs ) because specific critical parameters, such as charge carrier transport and optical output coupling efficiency, have not been optimized. [ 2 ]
The development of efficient green PeLEDs with an external quantum efficiency (EQE) exceeding 30% was reported by Bai and colleagues on May 29, 2023. [ 2 ] This achievement was made through adjustments in charge carrier transport and the distribution of near-field light. These optimizations resulted in a light output coupling efficiency of 41.82%.
The modified green PeLED structure achieved a record external quantum efficiency of 30.84% at a brightness of 6514 cd/m 2 . This work introduced an approach to building ultra-efficient PeLEDs by balancing electron-hole recombination and enhancing light outcoupling. [ 2 ]
Expanding the effective area of perovskite LEDs can decrease their performance. Sun et al. [ 3 ] introduced L-methionine (NVAL) to construct an intermediate phase with low formation enthalpy and COO − coordination. This new intermediate phase altered the crystallization pathway, effectively inhibiting phase segregation. Consequently, high-quality large-area quasi-2D perovskite films were achieved. They further fine-tuned the film's composite dynamics, leading to high-efficiency quasi-2D perovskite green LEDs with an effective area of 9.0 cm 2 . An external quantum efficiency (EQE) of 16.4% was attained at <n> = 3, making it the most efficient large-area perovskite LED. Moreover, a luminance of 9.1×10 4 cd/m 2 was achieved in the <n> = 10 films. [ 3 ]
On March 16, 2023, Zhou et al. [ 4 ] published a study demonstrating their successful control of ion behavior to create highly efficient sky-blue perovskite light-emitting diodes. They achieved this by utilizing a bifunctional passivator , which consisted of Lewis base benzoic acid anions and alkali metal cations. This passivator had a dual role: it effectively passivated the deficient lead atom while inhibiting the migration of halide ions. The outcome of this innovative approach was the realization of an efficient perovskite LED that emitted light at a stable wavelength of 483 nm. The LED exhibited a commendable external quantum efficiency (EQE) of 16.58%, with a peak EQE reaching 18.65%. Through optical coupling enhancement, the EQE was further boosted to 28.82%. [ 4 ]
One of the most crucial aspects of lighting and display technology is the efficient generation of red emission. Quasi-2D perovskites have demonstrated potential for high emission efficiency due to robust carrier confinement. However, the external quantum efficiencies (EQE) of most red quasi-2D PeLEDs are not optimal due to different n-value phases within complex quasi-2D perovskite films.
To address this challenge, Jiang et al. [ 1 ] published their findings in Advanced Materials on July 20, 2022. Their research focused on strategically incorporating large cations to enhance the efficiency of red light perovskite LEDs. By introducing phenethylammonium iodide (PEAI)/3-fluorophenylethylammonium iodide (m-F-PEA) and 1-naphthylmethylammonium iodide (NMAI), they achieved precise control over the phase distribution of quasi-2D perovskite materials. This approach effectively reduced the prevalence of smaller n-index phases and concurrently addressed lead and halide defects in the perovskite films. The outcome of this research was the development of perovskite LEDs capable of achieving an EQE of 25.8% at 680 nm, accompanied by a peak brightness of 1300 cd/m 2 . [ 1 ]
High-performance white perovskite LED with high light extraction efficiency can be constructed through near-field optical coupling. [ 5 ] The near-field optical coupling between blue perovskite diode and red perovskite nanocrystal was achieved by a reasonably designed multi-layer translucent electrode (LiF/Al/Ag/LiF). The red perovskite nano-crystalline layer allows the waveguide mode and surface plasmon polarization mode captured in the blue perovskite diode to be extracted and converted into red light emission, increasing the light extraction efficiency by 50%. At the same time, the complementary emission spectra of blue photons and down-converted red photons contribute to the formation of white LEDs. Finally, the off-device quantum efficiency exceeds 12%, and the brightness exceeds 2000 cd/m 2 , which are both the highest in white PeLEDs. [ 5 ]
Preparing high-quality all-inorganic perovskite films through solution-based methods remains a formidable challenge, primarily attributed to the rapid and uncontrollable crystallization of such materials. The key innovation involved controlling the crystal orientation of the all-inorganic perovskite along the (110) plane through a low-temperature annealing process (35-40 °C). This precise control led to the orderly stacking of crystals, which significantly increased surface coverage and reduced defects within the material. After thorough optimization, the well-oriented CsPbBr 3 perovskite LED achieved an external quantum efficiency (EQE) of up to 16.45%, a remarkable brightness of 79,932 cd/m 2 , and a lifespan of 136 hours when initially operated at a brightness level of 100 cd/m 2 . [ 6 ]
On September 20, 2021, the team led by Sargent et al. [ 7 ] from the University of Toronto published their research findings in the Journal of the American Chemical Society (JACS) on bright and stable light-emitting diodes (LEDs) based on perovskite quantum dots within a perovskite matrix. The research reported that perovskite quantum dots remain stable in a precursor solution thin film of perovskite and drive the uniform crystallization of the perovskite matrix using strain quantum dots as nucleation centers. The type I band alignment ensures that quantum dots act as charge acceptors and radiative emitters. [ 7 ]
The new material exhibits suppressed biexciton Auger recombination and bright luminescence even at high excitation (600 W/cm 2 ). The red LEDs based on the new material demonstrate an external quantum efficiency of 18% and maintain high performance at a brightness exceeding 4700 cd/m 2 . The new material extends the LED's operating half-life to 2400 hours at an initial brightness of 100 cd/m 2 . [ 7 ]
To address the limited ability of phosphor-converted LEDs to emit ultrabroad near-infrared (NIR) radiation, researchers have used PeLEDs based on 2.5% W4+-doped and 2.8% Mo4+-doped Cs2Na0.95Ag0.05BiCl6, [ 8 ] producing perovskites that emit ultrabroad NIR radiation with spectral widths of 434 and 468 nm. [ 8 ] This brings considerable promise to the emerging field of PeLEDs. The perovskite market is still booming, and current work is focused on scaling up the manufacturing process. [ 9 ] The current prediction is that the global perovskite market will exceed $10 billion by 2035. [ 9 ] The ultrabroad NIR-emitting PeLEDs have also been integrated with the biodegradable polymer polylactic acid for more versatility in applications. [ 8 ]
There is current work on improving NIR LEDs to be more optimal for biomedical imaging and sensing by doping all-inorganic tin perovskites (CsSnI3). [ 10 ] By incorporating these perovskites, researchers have been able to achieve higher radiance and longer operational stability. [ 10 ] These improvements result from manipulating p-doping with a focus on controlling crystallization in the material. [ 10 ] These advancements bring promise not only to the medical field but also to energy applications, including the use of PeLEDs for energy purposes as well as perovskite solar cells. [ 11 ]
PeLEDs show great promise in many large-scale applications. PeLEDs have an advantage in commercialization since their raw material cost is lower than that of ILEDs and OLEDs. [ 12 ] Manufacturing costs are driven by the deposition of the functional layer and the coating methods used. [ 12 ] Researchers suggest that incorporating wet processing has the potential to improve production efficiency, helping reduce the cost of making PeLEDs. [ 12 ]
Certain issues arise in the upscaling and commercialization of PeLEDs; these challenges include the degradation of perovskite, the production of lead-free PeLEDs, and the efficiency of blue PeLEDs. [ 12 ] Owing to perovskite's poor thermal stability and degradation induced by environmental factors such as vapor, oxygen, and light, device lifetimes can vary and be shorter than those of other LED technologies. [ 12 ] The concern with producing lead-free PeLEDs stems from their low photoluminescence quantum yield (PLQY). [ 12 ] The inefficiency of blue PeLEDs is attributed to halogen ion migration in perovskites, [ 12 ] which results in phase segregation under voltage bias. [ 12 ] This phase segregation causes the blue PeLED to fail and rapidly shifts the emission wavelength from blue to green. [ 12 ] Other approaches have been implemented to address these issues, but many limitations remain.
Perovskite nanocrystals are a class of semiconductor nanocrystals, which exhibit unique characteristics that separate them from traditional quantum dots . [ 2 ] [ 3 ] [ 4 ] [ 5 ] Perovskite nanocrystals have an ABX 3 composition where A = cesium , methylammonium (MA), or formamidinium (FA); B = lead or tin ; and X = chloride, bromide, or iodide. [ 6 ]
Their unique qualities largely involve their unusual band-structure which renders these materials effectively defect tolerant or able to emit brightly without surface passivation . This is in contrast to other quantum dots such as CdSe which must be passivated with an epitaxially matched shell to be bright emitters. In addition to this, lead-halide perovskite nanocrystals remain bright emitters when the size of the nanocrystal imposes only weak quantum confinement . [ 7 ] [ 8 ] This enables the production of nanocrystals that exhibit narrow emission linewidths regardless of their polydispersity .
The combination of these attributes and their easy-to-perform synthesis [ 9 ] [ 10 ] has resulted in numerous articles demonstrating the use of perovskite nanocrystals as both classical and quantum light sources with considerable commercial interest. Perovskite nanocrystals have been applied to numerous other optoelectronic applications [ 11 ] [ 12 ] such as light emitting diodes , [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] lasers , [ 19 ] [ 20 ] visible light communication , [ 21 ] scintillators , [ 22 ] [ 23 ] [ 24 ] solar cells , [ 25 ] [ 26 ] [ 27 ] and photodetectors . [ 28 ]
Perovskite nanocrystals possess numerous unique attributes: defect tolerance, high quantum yield , fast rates of radiative decay and narrow emission line width in weak confinement, which make them ideal candidates for a variety of optoelectronic applications. [ 29 ] [ 30 ]
The intriguing optoelectronic properties of lead halide perovskites were first studied in single crystals and thin films. [ 31 ] [ 32 ] [ 33 ] [ 34 ] From these reports, it was discovered that these materials possess high carrier mobility , long carrier lifetimes , long carrier diffusion lengths, and small effective carrier masses . [ 35 ] [ 31 ] [ 36 ] [ 37 ] Unlike their nanocrystal counterparts, bulk ABX 3 materials are non-luminescent at room temperature, but they do exhibit bright photoluminescence once cooled to cryogenic temperatures. [ 36 ] [ 38 ] [ 39 ]
Contrary to the characteristics of other colloidal quantum dots such as CdSe , ABX 3 QDs are shown to be bright, high-quantum-yield (above 80%), stable emitters with narrow linewidths even without surface passivation. [ 40 ] [ 7 ] [ 41 ] In II-VI systems, the presence of dangling bonds on the surface results in photoluminescence quenching and photoluminescent intermittence or blinking . The lack of sensitivity to the surface can be rationalized from the electronic band structure and density of states calculations for these materials. Unlike conventional II-VI semiconductors where the band gap is formed by bonding and antibonding orbitals, the frontier orbitals in ABX 3 QDs are formed by antibonding orbitals composed of Pb 6s 6p and X np orbitals (n is the principal quantum number for the corresponding halogen atom ). [ 42 ] As a result, dangling bonds (under-coordinated atoms) result in intraband states or shallow traps instead of deep mid-gap states (e.g. as in CdSe QDs). This observation was corroborated by computational studies which demonstrated that the electronic structure of CsPbX 3 materials exhibits a trap-free band gap. [ 43 ] Furthermore, band structure calculations performed by various groups have demonstrated that these are direct band gap materials at the R-point (a critical point of the Brillouin zone ) with composition-dependent band gaps. [ 41 ] [ 44 ] [ 45 ] [ 46 ]
It was discovered in 2015 that the photoluminescence of perovskite nanocrystals can be post-synthetically tuned across the visible spectral range through halide substitution to obtain APbCl 3 , APb(Cl,Br) 3 , APbBr 3 , APb(Br,I) 3 , and APbI 3 ; there was no evidence of APb(Cl,I) 3 . [ 47 ] [ 48 ] The change in band-gap with composition can be described by Vegard's Law , which describes the change in lattice parameter as a function of the change in composition for a solid solution. However, the change in lattice parameter can be rewritten to describe the change in band gap for many semiconductors. The change in band gap directly affects the energy or wavelength of light that can be absorbed by the material and therefore its color. Furthermore, this directly alters the energy of emitted light according to the Stokes shift of the material. This quick, post-synthetic anion-tunability is in contrast to other quantum dot systems [ 49 ] [ 50 ] where emission wavelength is primarily tuned through particle size by altering the degree of quantum confinement.
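The Vegard-type interpolation of the band gap can be sketched numerically. The endpoint gaps below for CsPbBr 3 (about 2.36 eV) and CsPbI 3 (about 1.73 eV) are approximate literature values used purely for illustration, and the optional bowing parameter is an assumption beyond this text (b = 0 recovers the strictly linear Vegard form):

```python
def vegard_band_gap(e_gap_a: float, e_gap_b: float, x: float, bowing: float = 0.0) -> float:
    """Band gap of an A(1-x)B(x) solid solution by Vegard-type interpolation.

    The optional bowing term b*x*(1-x) models the common deviation from
    strict linearity; bowing = 0 is Vegard's law.
    """
    return (1 - x) * e_gap_a + x * e_gap_b - bowing * x * (1 - x)


# Illustrative endpoint gaps in eV: CsPbBr3 ~2.36, CsPbI3 ~1.73
gap_mixed = vegard_band_gap(2.36, 1.73, 0.5)  # linear estimate for CsPb(Br0.5 I0.5)3
```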
Aside from tuning the absorption edge and emission wavelength by anion substitution, it was also observed that the A-site cation also affects both properties. [ 51 ] This occurs as a result of the distortion of the perovskite structure and the tilting of octahedra due to the size of the A-cation. Cs, which yields a Goldschmidt tolerance factor of less than one, results in a distorted, orthorhombic structure at room temperature. This results in reduced orbital overlap between the halide and lead atoms and blue shifts the absorption and emission spectra. On the other hand, FA yields a cubic structure and results in FAPbX 3 having red shifted absorption and emission spectra as compared to both Cs and MA. Of these three cations, MA is intermediate size between Cs and FA and therefore results in absorption and emission spectra intermediate between those of Cs and FA. Through the combination of both anionic and cationic tuning, the whole spectrum ranging from near-UV to near-IR can be covered. [ 52 ]
Recent studies have demonstrated that CsPbBr 3 nanocrystals have an absorption coefficient of 2×10 5 cm −1 at 335 nm and 8×10 4 cm −1 at 400 nm. [ 53 ] [ 54 ]
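Such coefficients imply strong attenuation even in very thin films, via the Beer-Lambert relation T = exp(−αd). A sketch using the two coefficients quoted above (the 100 nm film thickness is a hypothetical choice for illustration):

```python
from math import exp


def transmission(alpha_per_cm: float, thickness_nm: float) -> float:
    """Beer-Lambert transmission T = exp(-alpha * d) through a uniform film."""
    d_cm = thickness_nm * 1e-7  # 1 nm = 1e-7 cm
    return exp(-alpha_per_cm * d_cm)


# Coefficients quoted above for CsPbBr3 nanocrystal films, 100 nm hypothetical film:
t_335 = transmission(2e5, 100)  # alpha*d = 2.0 -> only ~14% transmitted
t_400 = transmission(8e4, 100)  # alpha*d = 0.8 -> ~45% transmitted
```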
Spectroscopic studies of individual nanocrystals have revealed blinking -free emission and very low spectral diffusion without a passivating shell around the NCs. [ 55 ] [ 56 ] [ 57 ] [ 58 ] Studies have also demonstrated blinking-free emission at room temperature with a strongly reduced Auger recombination rate at room temperature (CsPbI 3 NCs). [ 59 ]
It was observed that emission from perovskite nanocrystals may be the result of a bright (optically active) triplet state. [ 30 ] Several effects have been suggested to play a role in the exciton fine structure, such as electron-hole exchange interactions, [ 60 ] crystal field and shape anisotropy, [ 61 ] [ 62 ] as well as the Rashba effect. Recent reports have described the presence of the Rashba effect within bulk- [ 63 ] and nano- CsPbBr 3 and CsPb(Br,Cl) 3 . [ 64 ] While it has been reported that the Rashba effect contributes to the existence of a lowest energy triplet state in CsPb(Br,Cl) 3 , recent work on FAPbBr 3 has indicated the presence of a lower lying dark state, which can be activated with the application of a magnetic field. [ 65 ] [ 66 ]
Numerous quantum optical technologies require coherent light sources. Perovskite nanocrystals have been demonstrated as sources of such light [ 67 ] as well as suitable materials for the generation of single photons with high coherence. [ 68 ] [ 69 ]
Monodisperse perovskite nanocrystals can be assembled into cubic superlattices , which can range from a few hundreds of nanometers to tens of microns in size [ 70 ] [ 71 ] [ 72 ] [ 73 ] [ 74 ] and show tunable photoluminescence by changing nanocrystal composition via anion exchange (for example, from green-emitting CsPbBr 3 nanocrystal superlattices to yellow and orange emitting CsPb(I 1−x Br x ) 3 nanocrystal superlattices to red-emitting CsPbI 3 ones). [ 75 ] These superlattices have been reported to exhibit very high degree of structural order [ 76 ] and unusual optical phenomena such as superfluorescence . [ 77 ] In the case of these superlattices, it was reported that the dipoles of the individual nanocrystals can become aligned and then simultaneously emit several pulses of light. [ 78 ]
Early attempts to prepare MAPbX 3 perovskites as nanocrystals by non-template synthesis were made in 2014. [ 79 ] It was not until 2015 that CsPbX 3 nanocrystals were prepared by the Kovalenko research group at ETH Zurich [ 41 ] using a hot-injection synthesis. Since then, numerous other synthetic routes towards the successful preparation of ABX 3 NCs have been demonstrated. [ 80 ] [ 81 ]
The majority of papers reporting on ABX 3 NCs make use of a hot injection procedure in which one of the reagents is swiftly injected into a hot solution containing the other reagents and ligands . The combination of high temperature and rapid addition of the reagent result in a rapid reaction that results in supersaturation and nucleation occurring over a very short period of time with a large number of nuclei. After a short period of time, the reaction is quenched by quickly cooling to room temperature. [ 82 ] [ 83 ] Since 2015, several articles detailing improvements to this approach with zwitterionic ligands , [ 84 ] branched ligands and post-synthetic treatments [ 85 ] have been reported. Recently, soy-lecithin was demonstrated to be a ligand system for these nanocrystals that could stabilize concentrations from several ng/mL up to 400 mg/mL. [ 86 ]
A second, popular method for the preparation of ABX 3 NCs relies on the ionic nature of APbX 3 materials. Briefly, a polar, aprotic solvent such as DMF or DMSO is used to dissolve the starting reagents such as PbBr 2 , CsBr , oleic acid , and an amine . The subsequent addition of this solution into a non-polar solvent reduces the polarity of the solution and causes precipitation of the ABX 3 phase. [ 87 ] [ 88 ]
Microfluidics have been also used to synthesize CsPbX 3 NCs and to screen and study synthetic parameters. [ 89 ] Recently, a modular microfluidic platform has been developed at North Carolina State University to further optimize the synthesis and composition of these materials. [ 90 ]
Outside of the traditional synthetic routes, several papers have reported that CsPbX 3 NCs could be prepared on supports or within porous structures even without ligands. Dirin et al. first demonstrated that bright NCs of CsPbX 3 could be prepared without organic ligands within the pores of mesoporous silica . [ 7 ] By using mesoporous silica as a template, the size of CsPbX 3 nanodomains is restricted to the pore size. This allows for greater control over emission wavelength via quantum confinement and illustrates the defect-tolerant nature of these materials. This concept was later extended to the preparation of ligand-free APbX 3 NCs on alkali-halide supports, which could be shelled with NaBr without deteriorating their optical properties, protecting the nanocrystals against a number of polar solvents. [ 8 ]
As a result of the low melting point and ionic nature of ABX 3 materials, several studies have demonstrated that bright ABX 3 nanocrystals can also be prepared by ball-milling . [ 91 ]
With NCs, the composition can be tuned via ion exchange, i.e., the post-synthetic exchange of ions in the lattice for ions added to the solution. This has been shown to be possible for both anions and cations.
The anions in the lead halide perovskites are highly mobile. The mobility arises from the diffusion of halide vacancies throughout the lattice, with an activation barrier of 0.29 eV and 0.25 eV for CsPbCl 3 and CsPbBr 3 respectively. [ 92 ] (see: physical properties). This was used by Nedelcu et al. [ 93 ] and Akkerman et al., [ 94 ] to demonstrate that the composition of cesium lead halide perovskite nanocrystals could be tuned continuously from CsPbCl 3 to CsPbBr 3 and from CsPbBr 3 to CsPbI 3 to obtain emission across the entire visible spectrum. While this was first observed in a colloidal suspension , this was also shown in solid pellets of alkali halide salts pressed with previously synthesized nanocrystals. [ 95 ] This same phenomenon has also been observed for MAPbX 3 and FAPbX 3 NCs. The anion exchange reaction has been investigated at the single nanocrystal level. In quantum-confined nanocrystals, the anion exchange leads to a uniform bandgap energy across the nanocrystal due to the quantum confinement effect. [ 96 ] However, in nanocrystals larger than the Bohr diameter, multiple emission sites form, resulting in iodide- or bromide-rich regions. [ 97 ]
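As a rough illustration of what the quoted activation barriers imply, a simple Arrhenius estimate can compare relative halide-vacancy hop rates in the two compositions. The assumption of equal pre-exponential factors is mine, not the text's, so the result is only indicative:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_factor(ea_ev: float, temp_k: float = 298.0) -> float:
    """Boltzmann factor exp(-Ea/kT) for a thermally activated hop."""
    return math.exp(-ea_ev / (K_B * temp_k))

# Activation barriers for halide-vacancy diffusion quoted in the text.
barriers = {"CsPbCl3": 0.29, "CsPbBr3": 0.25}

# Assuming equal pre-exponential factors, the rate ratio depends only on delta-Ea.
ratio = arrhenius_factor(barriers["CsPbBr3"]) / arrhenius_factor(barriers["CsPbCl3"])
print(f"CsPbBr3 / CsPbCl3 relative hop rate at 298 K: {ratio:.1f}")
```

A 0.04 eV difference in barrier height already translates to a several-fold difference in hop rate at room temperature, consistent with the facile anion exchange described above.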
Although several reports showed that CsPbX 3 NCs could be doped with Mn 2+ , they accomplished this through the addition of the Mn precursor during the synthesis, and not through cation exchange. [ 98 ] [ 99 ] [ 94 ] [ 100 ] Cation exchange can be used to partially exchange Pb 2+ with Sn 2+ , Zn 2+ , or Cd 2+ over the course of several hours. [ 101 ] In addition to these cations, gold was also shown to be a suitable candidate for cation exchange yielding a mixed-valent, and distorted, perovskite with the composition Cs 2 Au(I)Au(III)Br 6 . [ 102 ] A-site cation exchange has also been shown to be a viable route for the transformation of CsPbBr 3 to MAPbBr 3 and from CsPbI 3 to FAPbI 3 . [ 82 ]
The ligand-assisted reprecipitation method is used for the preparation of perovskite nanoplatelets (NPls). In this method, precursors dissolved in polar solvents such as dimethylformamide and dimethyl sulfoxide are combined with non-polar solvents such as toluene and hexane in the presence of ligands, forming the perovskite NPls through supersaturation. The NPl thickness obtained from this method depends on the concentration of the ligands as well as the chain length of the organic ligands, and can therefore be controlled by the ratio between the A-cation-oleate and lead-halide precursors in the reaction medium. During the synthesis, toluene and acetone serve to crystallize and to precipitate the NPls at room temperature, respectively. [ 103 ]
Nanomaterials can be prepared with various morphologies that range from spherical particles/ quantum dots (0D) to wires (1D) and platelets or sheets (2D), and this has been previously demonstrated for QDs such as CdSe. While the initial report of lead halide perovskite NCs covered cubic particles, subsequent reports demonstrated that these materials could also be prepared as both platelets (2D) [ 104 ] and wires (1D). [ 105 ] Due to the varying degrees of quantum confinement present in these different shapes, the optical properties ( emission spectrum and mean lifetime ) change. [ 106 ] [ 107 ] [ 108 ] [ 109 ] As an example of the effect of morphology, cubic nanocrystals of CsPbBr 3 can emit from 470 nm to 520 nm based on their size (470 nm emission requires nanocrystals with an average diameter of less than 4 nm). [ 41 ] Within this same composition (CsPbBr 3 ), nanoplatelets exhibit emission that is blue-shifted from that of cubes, with the wavelength depending on the number of monolayers contained within the platelet (from 440 nm for three monolayers to 460 nm for five monolayers). [ 110 ] Nanowires of CsPbBr 3 , on the other hand, emit from 473 nm to 524 nm depending on the width of the wire prepared, with lifetimes also in the range of 2.5 ns – 20.6 ns. [ 111 ]
Similarly to CsPbBr 3 , MAPbBr 3 NCs also exhibit morphologically dependent optical properties with nanocrystals of MAPbBr 3 emitting from 475 nm to 520 nm [ 112 ] and exhibiting average lifetimes on the order of 240 ns depending on their composition. Nanoplatelets and nanowires have been reported to emit at 465 nm and 532 nm, respectively. [ 113 ]
Perovskite nanocrystals all have the general composition ABX 3 , in which A is a large central cation (typically MA, FA, or Cs) that sits in a cavity surrounded by corner-sharing BX 6 octahedra (B = Pb, Sn; X = Cl, Br, I). Depending on the composition, the crystal structure can vary from orthorhombic to cubic , and the stability of a given composition can be qualitatively predicted by its Goldschmidt tolerance factor [ 114 ]
{\displaystyle t={(r_{a}+r_{x}) \over {\sqrt {2}}(r_{b}+r_{x})}}
where t is the calculated tolerance factor and r a , r b , and r x are the ionic radii of the A, B, and X ions, respectively. Structures with tolerance factors between 0.8 and 1 are expected to have cubic symmetry and form three-dimensional perovskite structures such as those observed in CaTiO 3 . Furthermore, tolerance factors of t > 1 yield hexagonal structures (CsNiBr 3 type), and t < 0.8 results in NH 4 CdCl 3 type structures. [ 115 ] If the A-site cation is too large (t > 1), but packs efficiently, 2D perovskites can be formed. [ 116 ]
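A quick numerical check of the tolerance factor can be sketched as follows. The Shannon-type ionic radii used here are approximate literature values and should be treated as illustrative assumptions:

```python
import math

def tolerance_factor(r_a: float, r_b: float, r_x: float) -> float:
    """Goldschmidt tolerance factor t = (r_a + r_x) / (sqrt(2) * (r_b + r_x))."""
    return (r_a + r_x) / (math.sqrt(2) * (r_b + r_x))

# Approximate ionic radii in angstroms (illustrative values, not from the text).
r_cs, r_pb, r_br = 1.88, 1.19, 1.96

t = tolerance_factor(r_cs, r_pb, r_br)
print(f"t(CsPbBr3) ~ {t:.2f}")  # falls in the 0.8-1 window expected for 3D perovskites
```

With these radii, CsPbBr 3 lands comfortably inside the cubic-perovskite window, matching its observed 3D perovskite structure.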
The corner-sharing BX 6 octahedra form a three-dimensional framework through bridging halides . The angle (Φ) formed by B-X-B (metal-halide-metal) can be used to judge the closeness of a given structure to that of an ideal perovskite . [ 115 ] Although these octahedra are interconnected and form a framework, the individual octahedra are able to tilt with respect to one another. This tilting is affected by the size of the "A" cation as well as external stimuli such as temperature or pressure. [ 117 ] [ 118 ] [ 119 ] [ 120 ]
If the B-X-B angle deviates too far from 180°, phase transitions towards non-luminescent or altogether non-perovskite phases can occur. [ 121 ] [ 122 ] If the B-X-B angle does not deviate very far from 180°, the overall structure of the perovskite remains a 3D network of interconnected octahedra, but the optical properties may change. This distortion increases the band gap of the material as the overlap between Pb- and X-based orbitals is reduced. For example, changing the A cation from Cs to MA or FA alters the tolerance factor and decreases the band gap as the B-X-B bond angle approaches 180° and the orbital overlap between the lead and halide atoms increases. These distortions can further manifest themselves as deviations of the band gap from that expected by Vegard's Law for solid solutions . [ 123 ] [ 124 ]
The room temperature crystal structures of the various bulk lead-halide perovskites have been extensively studied and have been reported for the APbX 3 perovskites. [ 125 ] The average crystal structures of the nanocrystals tend to agree with those of the bulk. Studies have, however, shown that these structures are dynamic [ 126 ] and deviate from the predicted structures due to the presence of twinned nanodomains . [ 127 ]
Calculations as well as empirical observations have demonstrated that perovskite nanocrystals are defect-tolerant semiconductor materials. As a result, they do not require epitaxial shelling or surface passivation since they are insensitive to surface defect states. In general, the perovskite nanocrystal surface is considered to be both ionic and highly dynamic. However, this ionic character makes perovskite nanocrystals unstable under humid conditions, and the degradation process can be accelerated by photoirradiation, which can alter the electronic properties of the nanocrystals. [ 128 ] Initial reports utilized dynamically bound oleylammonium and oleate ligands that exhibited an equilibrium between bound and unbound states. [ 54 ] This resulted in severe instability with respect to purification and washing, which was improved in 2018 with the introduction of zwitterionic ligands. [ 84 ] The stability and quality of these colloidal materials were further improved in 2019 when it was demonstrated that deep traps could be generated by the partial destruction of the lead-halide octahedra, and that they could also be subsequently repaired to restore the quantum yield of the nanocrystals. [ 129 ] [ 130 ] [ 131 ]
Perovskite NCs are promising materials for the emitting layer of light-emitting diodes (LEDs) as they offer potential advantages over organic LEDs (OLEDs), such as the elimination of precious metals (Ir, Pt) and simpler syntheses. [ 132 ] The first report of green electroluminescence (EL) was from MAPbBr 3 NCs, although no efficiency values were reported. [ 79 ] It was later observed that MAPbBr 3 NCs could form in a polymer matrix when the precursors for MAPbBr 3 thin films were mixed with an aromatic polyimide precursor. [ 133 ] The authors of this study obtained green EL with an external quantum efficiency (EQE) of up to 1.2%.
The first LEDs based on colloidal CsPbX 3 NCs demonstrated blue, green and orange EL with sub-1% EQE. [ 18 ] Since then, efficiencies have reached above 8% for green LEDs (CsPbBr 3 NCs [ 134 ] ), above 7% for red LEDs (CsPbI 3 NCs [ 135 ] ), and above 1% for blue LEDs (CsPb(Br/Cl) 3 [ 136 ] ).
Perovskite MAPbX 3 thin films have been shown to be promising materials for optical gain applications such as lasers and optical amplifiers . [ 137 ] [ 138 ] Afterwards, the lasing properties of colloidal perovskite NCs such as CsPbX 3 nanocubes, [ 19 ] [ 139 ] MAPbBr 3 nanoplatelets [ 113 ] and FAPbX 3 nanocubes [ 83 ] [ 82 ] were also demonstrated. Thresholds as low as 2 μJ cm −2 [ 140 ] have been reported for colloidal NCs (CsPbX 3 ) and 220 nJ cm −2 for MAPbI 3 nanowires. [ 141 ] Interestingly, perovskite NCs show efficient optical gain not only under resonant excitation, but also under two-photon excitation, [ 142 ] where the excitation light falls into the transparent range of the active material. While the nature of optical gain in perovskites is not yet clearly understood, the dominant hypothesis is that the population inversion of excited states required for gain arises from bi-excitonic states in the perovskite.
Perovskite nanocrystals have also been investigated as potential photocatalysts. [ 143 ] [ 144 ] [ 145 ]
Perovskite nanocrystals doped with large cations such as ethylenediamine (en) were demonstrated to exhibit blue-shifted (hypsochromic) emission together with lengthened photoluminescence lifetimes relative to their undoped counterparts. [ 146 ] This phenomenon was utilized by researchers to generate single-color luminescent QR codes that could only be deciphered by measuring the photoluminescence lifetime. The lifetime measurements were carried out utilizing both time-correlated single photon counting equipment as well as a prototype time-of-flight fluorescence imaging device developed by CSEM .
Ternary cesium lead halides have multiple stable phases that can be formed; these include CsPbX 3 (perovskite), Cs 4 PbX 6 (so called "zero-dimensional" phase due to disconnected [PbX 6 ] 4- octahedra), and CsPb 2 X 5 . [ 147 ] All three phases have been prepared colloidally either by a direct synthesis or via nanocrystal transformations. [ 148 ]
A rising research interest in these compounds created a disagreement within the community around the zero-dimensional Cs 4 PbBr 6 phase. Two contradicting claims exist regarding the optical properties of this material: i) the phase exhibits emission at 510–530 nm with a high photoluminescence quantum yield, [ 149 ] [ 150 ] and ii) the phase is non-luminescent in the visible spectrum. [ 151 ] It was later demonstrated that pure Cs 4 PbBr 6 NCs are non-luminescent, and that they can be converted to luminescent CsPbX 3 NCs and vice versa. [ 152 ] [ 153 ] [ 154 ]
A similar debate had occurred regarding the CsPb 2 Br 5 phase, which was also reported as being strongly luminescent . [ 155 ] This phase, like the Cs 4 PbBr 6 phase, is a wide gap semiconductor (~3.1 eV), but it is also an indirect-semiconductor and is non-luminescent. [ 156 ] The non-luminescent nature of this phase was further demonstrated in NH 4 Pb 2 Br 5 . [ 83 ]
Given the toxicity of lead , there is ongoing research into the discovery of lead-free perovskites for optoelectronics . [ 157 ] [ 158 ] Several lead-free perovskites have been prepared colloidally: Cs 3 Bi 2 I 9 , [ 159 ] Cs 2 PdX 6 , [ 160 ] and CsSnX 3 . [ 161 ] [ 162 ] CsSnX 3 NCs, although the closest lead-free analogue to the highly luminescent CsPbX 3 NCs, do not exhibit high quantum yields (<1% PLQY). [ 161 ] CsSnX 3 NCs are also sensitive towards O 2 , which causes oxidation of Sn(II) to Sn(IV) and renders the NCs non-luminescent.
Another approach to this problem relies on the replacement of the Pb(II) cation with the combination of a monovalent and a trivalent cation i.e. B(II) replaced with B(I) and B(III). [ 163 ] Double perovskite nanocrystals such as Cs 2 AgBiX 6 (X = Cl, Br, I), [ 164 ] Cs 2 AgInCl 6 (including Mn-doped variant), [ 165 ] and Cs 2 AgIn x Bi 1−x Cl 6 [ 166 ] (including Na-doped variant) [ 167 ] have been studied as potential alternatives to lead-halide perovskites, although none exhibit narrow, high PLQY emission. | https://en.wikipedia.org/wiki/Perovskite_nanocrystal |
Peroxide fusion is used to prepare samples for inductively coupled plasma (ICP) analysis, atomic absorption (AA) analysis, and wet chemistry . Sodium peroxide (Na 2 O 2 ) is used to oxidize the sample, which then becomes soluble in a dilute acid solution. This method allows complete dissolution of numerous refractory compounds such as chromite , magnetite , ilmenite , and rutile , and even silicon , carbides , alloys , noble metals and materials with high sulfide contents.
Peroxide fusion can be performed either manually or with automated systems. The latter have the advantage of increasing productivity, improving safety, maintaining repeatable preparation conditions, and avoiding spattering as well as cross-contamination.
Acid digestion is the most common dissolution method used for many types of samples. Unfortunately, acid digestion involves numerous manipulations of concentrated acids. Some types of samples even require the use of perchloric acid (HClO 4 ), which is explosive when it comes into contact with organic materials. It is often combined with hydrofluoric acid (HF) and heated to fumes to drive off this volatile acid, which is extremely dangerous to human health and will dissolve the walls of any glass container in which it is processed. Moreover, it is often difficult to achieve full dissolution of the sample, even when using these hazardous chemicals. | https://en.wikipedia.org/wiki/Peroxide_fusion |
The peroxide process is a method for the industrial production of hydrazine .
In this process, hydrogen peroxide is used as an oxidant instead of sodium hypochlorite , which is traditionally used to generate hydrazine. The main advantage of the peroxide route to hydrazine over the traditional Olin Raschig process is that it does not coproduce salt. In this respect, the peroxide process is an example of green chemistry . Since many millions of kilograms of hydrazine are produced annually, this method is of both commercial and environmental significance. [ 1 ]
In the usual implementation, hydrogen peroxide is used together with acetamide . This mixture does not react with ammonia directly but does so in the presence of methyl ethyl ketone to give the oxaziridine .
Balanced equations for the individual steps are as follows. Imine formation through condensation:
Oxidation of the imine to the oxaziridine:
Condensation of the oxaziridine with a second molecule of ammonia to give the hydrazone:
The hydrazone then condenses with a second equivalent of ketone to give the ketazine :
Typical process conditions are 50 °C and atmospheric pressure, with a feed mix of H 2 O 2 :ketone:NH 3 in a molar ratio of about 1:2:4. [ 2 ] Methyl ethyl ketone is advantageous relative to acetone because the resulting ketazine is immiscible with the reaction mixture and can be separated by decantation. [ 2 ] A similar process based on benzophenone has also been described. [ 3 ]
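The quoted ~1:2:4 molar feed ratio can be turned into batch quantities with a small helper. The molar masses are standard values; the function itself is only an illustrative sketch, not part of the process description:

```python
# Molar feed ratio H2O2 : MEK : NH3 of about 1 : 2 : 4, per the process description.
RATIO = {"H2O2": 1, "MEK": 2, "NH3": 4}
MOLAR_MASS = {"H2O2": 34.01, "MEK": 72.11, "NH3": 17.03}  # g/mol, standard values

def batch_masses(mol_h2o2: float) -> dict:
    """Mass (g) of each feed component for a given amount of H2O2, at the nominal ratio."""
    return {s: mol_h2o2 * RATIO[s] * MOLAR_MASS[s] for s in RATIO}

masses = batch_masses(1.0)
print(masses)  # per mol H2O2: 2 mol MEK and 4 mol NH3, by mass
```

This only fixes stoichiometric feed proportions; in practice excesses and recycle streams would alter the actual plant balance.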
The final stage involves hydrolysis of the purified ketazine:
The hydrolysis of the azine is acid-catalyzed , hence the need to isolate the azine from the initial ammonia-containing reaction mixture. It is also endothermic , [ 4 ] and so requires an increase in temperature (and pressure) to shift the equilibrium in favour of the desired products: ketone (which is recycled) and hydrazine hydrate. [ 5 ] The reaction is carried out by simple distillation of the azeotrope: typical conditions are a pressure of 8 bar and temperatures of 130 °C at the top of the column and 179 °C at the bottom of the column. The hydrazine hydrate (30–45% aqueous solution) is run off from the base of the column, while the methyl ethyl ketone is distilled off from the top of the column and recycled. [ 5 ]
The peroxide process, also called the Pechiney–Ugine–Kuhlmann process , was developed in the early 1970s by Produits Chimiques Ugine Kuhlmann . [ 6 ] [ 5 ] Originally the process used acetone instead of methyl ethyl ketone; [ 6 ] the latter was adopted because its ketazine is immiscible with the reaction mixture and can be separated by decantation. [ 2 ] [ 7 ] The world's largest hydrazine hydrate plant is in Lannemezan in France, producing 17,000 tonnes of hydrazine products per year. [ 8 ]
Before invention of the peroxide process, the Bayer ketazine process had been commercialized. In the Bayer process, the oxidation of ammonia by sodium hypochlorite is conducted in the presence of acetone. The process generates the ketazine but also sodium chloride: [ 1 ]
| https://en.wikipedia.org/wiki/Peroxide_process |
Detection of peroxides gives the initial evidence of rancidity in unsaturated fats and oils. Other methods are available, but the peroxide value is the most widely used. It gives a measure of the extent to which an oil sample has undergone primary oxidation; the extent of secondary oxidation may be determined with the p-anisidine test. [ 1 ]
The double bonds found in fats and oils play a role in autoxidation . Oils with a high degree of unsaturation are most susceptible to autoxidation. The best test for autoxidation (oxidative rancidity) is determination of the peroxide value. Peroxides are intermediates in the autoxidation reaction.
Autoxidation is a free radical reaction involving oxygen that leads to deterioration of fats and oils, forming off-flavours and off-odours. The peroxide value, the concentration of peroxide in an oil or fat, is useful for assessing the extent to which spoilage has advanced.
The peroxide value is defined as the amount of peroxide oxygen per 1 kilogram of fat or oil. Traditionally this was expressed in units of milliequivalents , although in SI units the appropriate option would be millimoles per kilogram (N.B. 1 milliequivalent = 0.5 millimole, because 1 mEq of O 2 = 1 mmol of O 2 divided by its valence of 2 = 0.5 mmol of O 2 ). The unit milliequivalent has been commonly abbreviated as mequiv or even as meq.
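The milliequivalent-to-millimole relationship above amounts to a factor of 0.5, which can be captured in a one-line converter:

```python
def meq_to_mmol_o2(pv_meq_per_kg: float) -> float:
    """Convert a peroxide value from mEq O2/kg to mmol O2/kg.

    1 mEq O2 = 0.5 mmol O2, since the valence of peroxide oxygen is 2.
    """
    return pv_meq_per_kg * 0.5

print(meq_to_mmol_o2(10.0))  # 10 mEq/kg corresponds to 5.0 mmol/kg
```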
The peroxide value is determined by measuring the amount of iodine which is formed by the reaction of peroxides (formed in fat or oil) with iodide ion.
The base produced in this reaction is taken up by the excess of acetic acid present. To measure the iodine, a redox titration is performed using sodium thiosulfate .
The acidic conditions (excess acetic acid) prevent formation of hypoiodite (analogous to hypochlorite ), which would interfere with the reaction.
A starch indicator solution is used: amylose forms a blue-to-black complex with iodine and becomes colorless when the iodine is converted to iodide during the titration.
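The titration described above is usually reduced to a working equation of the form PV = (S − B) × N × 1000 / m, with sample titre S and blank titre B in mL, thiosulfate normality N, and sample mass m in grams. This equation is standard in oils-and-fats methods but is not given in the text itself, so treat its exact form here as an assumption:

```python
def peroxide_value(v_sample_ml: float, v_blank_ml: float,
                   normality: float, sample_mass_g: float) -> float:
    """Peroxide value in mEq O2 per kg of fat, from a thiosulfate titration.

    Assumed standard working equation: PV = (S - B) * N * 1000 / m.
    """
    return (v_sample_ml - v_blank_ml) * normality * 1000.0 / sample_mass_g

# Hypothetical example: 5.0 g sample, 0.01 N thiosulfate, 4.5 mL titre vs 0.5 mL blank.
pv = peroxide_value(4.5, 0.5, 0.01, 5.0)
print(f"PV = {pv:.1f} mEq O2/kg")  # 8.0, below the ~10 mEq/kg ceiling for fresh oils
```

The factor 1000 converts from per-gram to per-kilogram, so the result comes out directly in the traditional mEq/kg unit.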
Peroxide values of fresh oils are less than 10 milliequivalents/kg; when the peroxide value is between 30 and 40 milliequivalents/kg, a rancid taste is noticeable. | https://en.wikipedia.org/wiki/Peroxide_value |
Peroxins (or peroxisomal/peroxisome biogenesis factors ) represent several protein families found in peroxisomes . [ 1 ] Deficiencies are associated with several peroxisomal disorders . Peroxins serve several functions including the recognition of cytoplasmic proteins that contain peroxisomal targeting signals (PTS) that tag them for transport by peroxisomal proteins to the peroxisome. Peroxins are structurally diverse and have been classified to different protein families . Some of them were predicted to be single-pass transmembrane proteins , for example Peroxisomal biogenesis factor 11 .
| https://en.wikipedia.org/wiki/Peroxin |
Peroxin-7 is a receptor associated with Refsum's disease and rhizomelic chondrodysplasia punctata type 1.
| https://en.wikipedia.org/wiki/Peroxin-7 |
In biochemical protein targeting , a peroxisomal targeting signal (PTS) is a region of the peroxisomal protein that receptors recognize and bind to. It is responsible for specifying that proteins containing this motif are localised to the peroxisome. [ 1 ]
All peroxisomal proteins are synthesized in the cytoplasm and must be directed to the peroxisome. [ 2 ] The first step in this process is the binding of the protein to a receptor. The receptor then directs the complex to the peroxisome. Receptors recognize and bind to a region of the peroxisomal protein called a peroxisomal targeting signal, or PTS.
Peroxisomes consist of a matrix surrounded by a specific membrane . Most peroxisomal matrix proteins contain a short sequence, usually three amino acids at the extreme carboxy tail of the protein, that serves as the PTS. The prototypic sequence (many variations exist) is serine - lysine - leucine (-SKL in the one-letter amino acid code ). [ 2 ] [ 3 ] This motif , and its variations, is known as the PTS1, and the receptor is termed the PTS1 receptor.
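The C-terminal tripeptide recognition can be sketched with a regular expression. The character classes below (small residue, basic residue, hydrophobic residue, with -SKL as the prototype) are an illustrative consensus I am assuming for the example, not a definitive definition of the PTS1:

```python
import re

# Rough PTS1-like consensus at the extreme C-terminus (prototype -SKL).
# The exact character classes are an illustrative assumption.
PTS1_RE = re.compile(r"[STAGCN][RKH][LIVMAFY]$")

def has_pts1(protein_seq: str) -> bool:
    """True if the sequence ends in a PTS1-like tripeptide."""
    return bool(PTS1_RE.search(protein_seq.upper()))

print(has_pts1("MSTAVLENPGLGRKLSKL"))  # ends in -SKL -> True
print(has_pts1("MSTAVLENPGLGRKLSAA"))  # no basic residue in position 2 -> False
```

The `$` anchor is what encodes "extreme carboxy tail": an internal SKL tripeptide would not match, mirroring the positional requirement of the signal.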
It was found that the PTS1 receptor is encoded by the PEX5 gene . [ 4 ] PEX5 imports folded proteins into the peroxisome, shuttling between the peroxisome and cytosol. [ 2 ] PEX5 interacts with a large number of other proteins, including Pex8p, 10p, 12p, 13p, 14p.
A few peroxisomal matrix proteins have a different, and less conserved sequence, at their amino termini. This PTS2 signal is recognized by the PTS2 receptor , encoded by the PEX7 gene.
"PEX" refers to a group of genes that were identified as being important for peroxisomal synthesis. The numerical attributions, such as PEX5, generally refer to the order in which they were first discovered.
A distinct motif is used for proteins destined for the peroxisomal membrane called the "mPTS" motif, which is more poorly defined and may consist of discontinuous subdomains. [ 2 ] One of these usually is a cluster of basic amino acids ( arginines and lysines) within a loop of protein (i.e., between membrane spans) that will face the matrix. The mPTS receptor is the product of PEX19 . [ 2 ]
Eukaryotic Linear Motif resource motif class TRG_PTS1
| https://en.wikipedia.org/wiki/Peroxisomal_targeting_signal |
A peroxisome ( / p ə ˈ r ɒ k s ɪ ˌ s oʊ m / ) is a membrane-bound organelle , a type of microbody , found in the cytoplasm of virtually all eukaryotic cells . [ 1 ] [ 2 ] Peroxisomes are oxidative organelles. Frequently, molecular oxygen serves as a co-substrate, from which hydrogen peroxide (H 2 O 2 ) is then formed. Peroxisomes owe their name to hydrogen peroxide-generating and scavenging activities. They perform key roles in lipid metabolism and the reduction of reactive oxygen species . [ 3 ]
Peroxisomes are involved in the catabolism of very long chain fatty acids , branched chain fatty acids , bile acid intermediates (in the liver ), D-amino acids , and polyamines . Peroxisomes also play a role in the biosynthesis of plasmalogens : ether phospholipids critical for the normal function of mammalian brains and lungs. [ 4 ] Peroxisomes contain approximately 10% of the total activity of two enzymes ( Glucose-6-phosphate dehydrogenase and 6-Phosphogluconate dehydrogenase ) in the pentose phosphate pathway , [ 5 ] which is important for energy metabolism. [ 4 ] It is debated whether peroxisomes are involved in isoprenoid and cholesterol synthesis in animals. [ 4 ] Other peroxisomal functions include the glyoxylate cycle in germinating seeds (" glyoxysomes "), photorespiration in leaves, [ 6 ] glycolysis in trypanosomes (" glycosomes "), and methanol and amine oxidation and assimilation in some yeasts .
Peroxisomes (microbodies) were first described by a Swedish doctoral student, J. Rhodin in 1954. [ 7 ] They were identified as organelles by Christian de Duve and Pierre Baudhuin in 1966. [ 8 ] De Duve and co-workers discovered that peroxisomes contain several oxidases involved in the production of hydrogen peroxide (H 2 O 2 ) as well as catalase involved in the decomposition of H 2 O 2 to oxygen and water. [ 9 ] Due to their role in peroxide metabolism, De Duve named them “peroxisomes”, replacing the formerly used morphological term “microbodies”. Later, it was described that firefly luciferase is targeted to peroxisomes in mammalian cells, allowing the discovery of the import targeting signal for peroxisomes, and triggering many advances in the peroxisome biogenesis field. [ 10 ] [ 11 ]
Peroxisomes are small (0.1–1 μm diameter) organelles with a fine, granular matrix, surrounded by a single biomembrane located in the cytoplasm of a cell. [ 12 ] [ 13 ] Compartmentalization creates an optimized environment to promote various metabolic reactions within peroxisomes required to sustain cellular functions and viability of the organism.
The number, size, and protein composition of peroxisomes are variable and depend on cell type and environmental conditions. For example, in baker's yeast ( S. cerevisiae ), it has been observed that, with a good glucose supply, only a few, small peroxisomes are present. In contrast, when the yeasts were supplied with long-chain fatty acids as sole carbon source up to 20 to 25 large peroxisomes can be formed. [ 14 ]
A major function of the peroxisome is the breakdown of very long chain fatty acids through beta oxidation . In animal cells, the long fatty acids are converted to medium chain fatty acids , which are subsequently shuttled to mitochondria where they eventually are broken down to carbon dioxide and water. In yeast and plant cells, this process is carried out exclusively in peroxisomes. [ 15 ] [ 16 ]
The first reactions in the formation of plasmalogen in animal cells also occur in peroxisomes. Plasmalogen is the most abundant phospholipid in myelin . Deficiency of plasmalogens causes profound abnormalities in the myelination of nerve cells , which is one reason why many peroxisomal disorders affect the nervous system. [ 15 ] Peroxisomes also play a role in the production of bile acids important for the absorption of fats and fat-soluble vitamins, such as vitamins A and K. Skin disorders are features of genetic disorders affecting peroxisome function as a result. [ 16 ]
The specific metabolic pathways that occur exclusively in mammalian peroxisomes are: [ 4 ]
Peroxisomes contain oxidative enzymes , such as D-amino acid oxidase and uric acid oxidase . [ 17 ] However, the latter enzyme is absent in humans, which explains the disease known as gout , caused by the accumulation of uric acid. Certain enzymes within the peroxisome, by using molecular oxygen, remove hydrogen atoms from specific organic substrates (labeled as R) in an oxidative reaction, producing hydrogen peroxide (H 2 O 2 , itself toxic):
Catalase, another peroxisomal enzyme, uses this H 2 O 2 to oxidize other substrates, including phenols , formic acid , formaldehyde , and alcohol , by means of the peroxidation reaction:
This reaction is important in liver and kidney cells, where the peroxisomes detoxify various toxic substances that enter the blood. About 25% of the ethanol that humans consume by drinking alcoholic beverages is oxidized to acetaldehyde in this way. [ 15 ] In addition, when excess H 2 O 2 accumulates in the cell, catalase converts it to H 2 O through this reaction:
In higher plants, peroxisomes contain also a complex battery of antioxidative enzymes such as superoxide dismutase , the components of the ascorbate-glutathione cycle , and the NADP-dehydrogenases of the pentose-phosphate pathway. It has been demonstrated that peroxisomes generate superoxide (O 2 •− ) and nitric oxide ( • NO) radicals. [ 18 ] [ 19 ]
There is evidence now that those reactive oxygen species including peroxisomal H 2 O 2 are also important signaling molecules in plants and animals and contribute to healthy aging and age-related disorders in humans. [ 20 ]
The peroxisome of plant cells is polarised when fighting fungal penetration. Infection causes an antifungal glucosinolate molecule to be made and delivered to the outside of the cell through the action of the peroxisomal proteins PEN2 and PEN3. [ 21 ]
Peroxisomes in mammals and humans also contribute to antiviral defense [ 22 ] and the combat of pathogens. [ 23 ]
Peroxisomes are derived from the smooth endoplasmic reticulum under certain experimental conditions and replicate by membrane growth and division out of pre-existing organelles. [ 24 ] [ 25 ] [ 26 ] Peroxisome matrix proteins are translated in the cytoplasm prior to import. Specific amino acid sequences (PTS or peroxisomal targeting signal ) at the C-terminus (PTS1) or N-terminus (PTS2) of peroxisomal matrix proteins signal them to be imported into the organelle by a targeting factor. There are currently 36 known proteins involved in peroxisome biogenesis and maintenance, called peroxins , [ 27 ] which participate in the process of peroxisome assembly in different organisms. In mammalian cells, there are 13 characterized peroxins. In contrast to protein import into the endoplasmic reticulum (ER) or mitochondria, proteins do not need to be unfolded to be imported into the peroxisome lumen. The matrix protein import receptors, the peroxins PEX5 and PEX7 , accompany their cargoes (containing a PTS1 or a PTS2 amino acid sequence, respectively) all the way to the peroxisome, where they release the cargo into the peroxisomal matrix and then return to the cytosol – a step named recycling . A special mode of peroxisomal protein targeting is called piggybacking: proteins transported this way lack a canonical PTS but bind to a PTS-containing protein and are imported as a complex. [ 28 ] A model describing the import cycle is referred to as the extended shuttle mechanism . [ 29 ] There is now evidence that ATP hydrolysis is required for the recycling of receptors to the cytosol. Also, ubiquitination is crucial for the export of PEX5 from the peroxisome to the cytosol. The biogenesis of the peroxisomal membrane and the insertion of peroxisomal membrane proteins (PMPs) require the peroxins PEX19, PEX3, and PEX16.
PEX19 is a PMP receptor and chaperone, which binds the PMPs and routes them to the peroxisomal membrane, where it interacts with PEX3, a peroxisomal integral membrane protein. PMPs are then inserted into the peroxisomal membrane.
The degradation of peroxisomes is called pexophagy. [ 30 ]
The diverse functions of peroxisomes require dynamic interactions and cooperation with many organelles involved in cellular lipid metabolisms such as the endoplasmic reticulum, mitochondria, lipid droplets, and lysosomes. [ 31 ]
Peroxisomes interact with mitochondria in several metabolic pathways, including β-oxidation of fatty acids and the metabolism of reactive oxygen species. [ 4 ] Both organelles are in close contact with the endoplasmic reticulum and share several proteins, including organelle fission factors. [ 32 ] Peroxisomes also interact with the endoplasmic reticulum and cooperate in the synthesis of ether lipids (plasmalogens), which are important for nerve cells (see above). In filamentous fungi, peroxisomes move on microtubules by 'hitchhiking,' a process involving contact with rapidly moving early endosomes. [ 33 ] Physical contact between organelles is often mediated by membrane contact sites, where the membranes of two organelles are physically tethered, enabling rapid transfer of small molecules and organelle communication; such contact sites are crucial for the coordination of cellular functions and hence for human health. [ 34 ] Alterations of membrane contacts have been observed in various diseases.
Peroxisomal disorders are a class of medical conditions that typically affect the human nervous system as well as many other organ systems. Two common examples are X-linked adrenoleukodystrophy and the peroxisome biogenesis disorders . [ 35 ] [ 36 ]
PEX genes encode the protein machinery (peroxins) required for proper peroxisome assembly. Peroxisomal membrane proteins are imported through at least two routes, one of which depends on the interaction between peroxin 19 and peroxin 3, while the other is required for the import of peroxin 3, either of which may occur without the import of matrix (lumen) enzymes, which possess the peroxisomal targeting signal PTS1 or PTS2 as previously discussed. [ 37 ] Elongation of the peroxisome membrane and the final fission of the organelle are regulated by Pex11p. [ 38 ]
Genes that encode peroxin proteins include: PEX1 , PEX2 (PXMP3), PEX3 , PEX5 , PEX6 , PEX7 , PEX9, [ 39 ] [ 40 ] PEX10 , PEX11A , PEX11B , PEX11G , PEX12 , PEX13 , PEX14 , PEX16 , PEX19 , PEX26 , PEX28 , PEX30 , and PEX31 . Between organisms, PEX numbering and function can differ.
The protein content of peroxisomes varies across species and organisms, but the presence of proteins common to many species has been used to suggest an endosymbiotic origin; that is, peroxisomes evolved from bacteria that invaded larger cells as parasites, and very gradually evolved a symbiotic relationship. [ 41 ] However, this view has been challenged by recent discoveries. [ 42 ] For example, peroxisome-less mutants can restore peroxisomes upon introduction of the wild-type gene.
Two independent evolutionary analyses of the peroxisomal proteome found homologies between the peroxisomal import machinery and the ERAD pathway in the endoplasmic reticulum , [ 43 ] [ 44 ] along with a number of metabolic enzymes that were likely recruited from the mitochondria . [ 44 ] The peroxisome may have had an Actinomycetota origin; [ 45 ] however, this is controversial. [ 46 ]
Other organelles of the microbody family related to peroxisomes include glyoxysomes of plants and filamentous fungi , glycosomes of kinetoplastids , [ 47 ] and Woronin bodies of filamentous fungi. | https://en.wikipedia.org/wiki/Peroxisome |
Peroxyacetyl nitrate is a peroxyacyl nitrate . It is a secondary pollutant present in photochemical smog . [ 1 ] It is thermally unstable and decomposes into peroxyethanoyl radicals and nitrogen dioxide gas. [ 2 ] It is a lachrymatory substance, meaning that it irritates the lungs and eyes. [ 3 ]
Peroxyacetyl nitrate, or PAN, is an oxidant that is more stable than ozone . Hence, it is more capable of long-range transport than ozone. It serves as a carrier for oxides of nitrogen (NOx) into rural regions and causes ozone formation in the global troposphere . [ 1 ]
PAN is produced in the atmosphere via photochemical oxidation of hydrocarbons to peroxyacetic acid radicals, which react reversibly with nitrogen dioxide ( NO 2 ) to form PAN. [ 4 ] : 2680 Night-time reaction of acetaldehyde with nitrogen trioxide is another possible source. [ 4 ] Since there are no direct emissions, it is a secondary pollutant. Next to ozone and hydrogen peroxide (H 2 O 2 ), it is an important component of photochemical smog .
Further peroxyacyl nitrates in the atmosphere are peroxypropionyl nitrate (PPN), peroxybutyryl nitrate (PBN), and peroxybenzoyl nitrate (PBzN). Chlorinated forms have also been observed. PAN is the most important peroxyacyl nitrate. PAN and its homologues reach about 5 to 20 percent of the concentration of ozone in urban areas. At lower temperatures, it is stable and can be transported over long distances, providing nitrogen oxides to otherwise unpolluted areas. At higher temperatures, it decomposes into NO 2 and the peroxyacetyl radical.
The decay of PAN in the atmosphere is mainly thermal. Thus, the long-range transport occurs through cold regions of the atmosphere, whereas the decomposition takes place at warmer levels. PAN can also be photolysed by UV radiation. It is a reservoir gas that serves both as a source and a sink of RO x and NO x radicals. [ 5 ] Nitrogen oxides from PAN decomposition enhance ozone production in the lower troposphere .
The natural concentration of PAN in the atmosphere is below 0.1 μg/m 3 . Measurements in German cities showed values up to 25 μg/m 3 . Peak values above 200 μg/m 3 have been measured in Los Angeles in the second half of the 20th century (1 ppm of PAN corresponds to 4370 μg/m 3 ). Due to the complexity of the measurement setup, only sporadic measurements are available.
PAN is a greenhouse gas .
PAN can be produced in a lipophilic solvent from peroxyacetic acid . [ 6 ] [ 7 ] [ 8 ] [ 9 ] For the synthesis, concentrated sulfuric acid is added to degassed n - tridecane and peroxyacetic acid in an ice bath. Next, concentrated nitric acid is added.
As an alternative, PAN can also be synthesized in the gas phase via photolysis of acetone and NO 2 with a mercury lamp . [ 10 ] Methyl nitrate (CH 3 ONO 2 ) is created as a by-product.
The toxicity of PAN is higher than that of ozone. Eye irritation from photochemical smog is caused more by PAN and other trace gases than by ozone, which is only sparingly soluble. PAN is a mutagen , [ 11 ] and is considered a potential contributor to the development of skin cancer. [ citation needed ] | https://en.wikipedia.org/wiki/Peroxyacetyl_nitrate |
In chemistry , peroxycarbonate (sometimes peroxocarbonate , IUPAC name: oxocarbonate or oxidocarbonate ) or percarbonate is a divalent anion with formula CO 2− 4 . It is an oxocarbon anion that consists solely of carbon and oxygen . It is the anion of peroxycarbonic acid [ 1 ] [ 2 ] also called hydroperoxyformic acid , [ 3 ] HO−O−CO−OH .
The peroxycarbonate anion is formed, together with peroxydicarbonate C 2 O 2− 6 , at the negative electrode during electrolysis of molten lithium carbonate . [ 4 ] Lithium peroxycarbonate can be produced also by combining carbon dioxide CO 2 with lithium hydroxide in concentrated hydrogen peroxide H 2 O 2 at −10 °C. [ 5 ]
Electrolysis of a solution of lithium carbonate at −30 to −40 °C yields a solution of lithium percarbonate, which liberates iodine from potassium iodide instantaneously. The crystalline salt has not been isolated.
The peroxycarbonate anion has been proposed as an intermediate to explain the catalytic effect of CO 2 on the oxidation of organic compounds by O 2 . [ 6 ]
The potassium and rubidium salts of the monovalent hydrogenperoxycarbonate anion (aka. hydroxycarbonate, biperoxycarbonate) H−O−O−CO − 2 have also been obtained. [ 7 ] [ 8 ] [ 9 ] [ 10 ] | https://en.wikipedia.org/wiki/Peroxycarbonate |
In chemistry , peroxydicarbonate (sometimes peroxodicarbonate ) is a divalent anion with the chemical formula C 2 O 2− 6 . It is one of the oxocarbon anions , which consist solely of carbon and oxygen . Its molecular structure can be viewed as two carbonate anions joined so as to form a peroxide bridge –O–O–.
The anion is formed, together with peroxocarbonate CO 2− 4 , at the negative electrode during electrolysis of molten lithium carbonate . [ 1 ] The anion can also be obtained by electrolysis of a saturated solution of rubidium carbonate in water. [ 2 ]
In addition, the peroxodicarbonate anion can be obtained by electrosynthesis on boron doped diamond (BDD) during water oxidation. [ 3 ] [ 4 ] The formal oxidation of two carbonate ions takes place at the anode. Due to the high oxidation potential of the peroxodicarbonate anion, a high anodic overpotential is necessary. This is even more important if hydroxyl radicals are involved in the formation process. Recent publications show that a concentration of 282 mmol/L of peroxodicarbonate can be reached in an undivided cell with sodium carbonate as starting material at current densities of 720 mA/cm 2 . [ 5 ] The described process is suitable for the pilot scale production of sodium peroxodicarbonate.
Potassium peroxydicarbonate K 2 C 2 O 6 was obtained by Constam and von Hansen in 1895; [ 6 ] its crystal structure was determined only in 2002. [ 7 ] It too can be obtained by electrolysis of a saturated potassium carbonate solution at −20 °C. It is a light blue crystalline solid that decomposes at 141 °C, releasing oxygen and carbon dioxide, and decomposes slowly at lower temperatures. [ 7 ]
Rubidium peroxodicarbonate is a light blue crystalline solid that decomposes at 424 K (151 °C). Its structure was published in 2003. [ 2 ] In both salts, each of the two carbonate units is planar. In the rubidium salt the whole molecule is planar, whereas in the potassium salt the two units lie on different and nearly perpendicular planes, both of which contain the O–O bond. [ 2 ] | https://en.wikipedia.org/wiki/Peroxydicarbonate |
Peroxymonosulfuric acid , also known as persulfuric acid or peroxysulfuric acid , is the inorganic compound with the formula H 2 SO 5 . It is a white solid. It is a component of Caro's acid , which is a solution of peroxymonosulfuric acid in sulfuric acid containing small amounts of water. [ 4 ] Peroxymonosulfuric acid is a very strong oxidant ( E 0 = +2.51 V).
In peroxymonosulfuric acid, the S(VI) center adopts its characteristic tetrahedral geometry; the connectivity is indicated by the formula HO–O–S(O) 2 –OH. The S–O–H proton is more acidic. [ 4 ]
The German chemist Heinrich Caro first reported investigations of mixtures of hydrogen peroxide and sulfuric acid. [ 5 ]
One laboratory scale preparation of Caro's acid involves the combination of chlorosulfuric acid and hydrogen peroxide : [ 6 ]
Patents include more than one reaction for preparation of Caro's acid, usually as an intermediate for the production of potassium monopersulfate (PMPS) , a bleaching and oxidizing agent. One route employs the following reaction: [ 7 ]
This reaction is related to " piranha solution ".
H 2 SO 5 and Caro's acid have been used for a variety of disinfectant and cleaning applications, e.g., swimming pool treatment and denture cleaning. It is used in gold mining to destroy the cyanide in the waste stream (" Tailings ").
Alkali metal salts of H 2 SO 5 , especially oxone , are widely investigated.
These peroxy acids can be explosive. Explosions have been reported at Brown University [ 8 ] and Sun Oil . As with all strong oxidizing agents, peroxysulfuric acid is incompatible with organic compounds . | https://en.wikipedia.org/wiki/Peroxymonosulfuric_acid |
Peroxynitrite (sometimes called peroxonitrite ) is an ion with the formula ONOO − . It is a structural isomer of nitrate , NO − 3 .
Peroxynitrite can be prepared by the reaction of superoxide with nitric oxide : [ 1 ] [ 2 ] [ 3 ]
It is prepared by the reaction of hydrogen peroxide with nitrite : [ 4 ]
Its presence is indicated by the absorbance at 302 nm (pH 12, ε 302 = 1670 M −1 cm −1 ).
Peroxynitrite is weakly basic with a p K a of ~6.8.
It is reactive toward DNA and proteins .
ONOO − reacts nucleophilically with carbon dioxide . In vivo , the concentration of carbon dioxide is about 1 mM, and its reaction with ONOO − occurs quickly. Thus, under physiological conditions, the reaction of ONOO − with carbon dioxide to form nitrosoperoxycarbonate ( ONOOCO − 2 ) is by far the predominant pathway for ONOO − . ONOOCO − 2 homolyzes to form carbonate radical and nitrogen dioxide, again as a pair of caged radicals. Approximately 66% of the time, these two radicals recombine to form carbon dioxide and nitrate. The other 33% of the time, these two radicals escape the solvent cage and become free radicals. It is these radicals ( carbonate radical and nitrogen dioxide ) that are believed to cause peroxynitrite-related cellular damage.
Its conjugate acid peroxynitrous acid is highly reactive, although peroxynitrite is stable in basic solutions. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Peroxynitrite |
Peroxyoxalates are esters initially formed by the reaction of hydrogen peroxide with oxalate diesters or oxalyl chloride , with or without base, although the reaction is faster with base:
Peroxyoxalates are intermediates that will rapidly transform into 1,2-dioxetanedione , another high-energy intermediate. The likely mechanism of 1,2-dioxetanedione formation from peroxyoxalate in base is illustrated below:
1,2-Dioxetanedione will rapidly decompose into carbon dioxide (CO 2 ). If there is no fluorescer present, only heat will be released. However, in the presence of a fluorescer, light can be generated ( chemiluminescence ).
Peroxyoxalate chemiluminescence (CL) was first reported by Rauhut in 1967 [ 1 ] in the reaction of diphenyl oxalate . The emission is generated by the reaction of an oxalate ester with hydrogen peroxide in the presence of a suitably fluorescent energy acceptor. This reaction is used in glow sticks .
The three most widely used oxalates are bis(2,4,6-trichlorophenyl)oxalate ( TCPO ), Bis(2,4,5-trichlorophenyl-6-carbopentoxyphenyl)oxalate (CPPO) and bis(2,4-dinitrophenyl) oxalate (DNPO). Other aryl oxalates have been synthesized and evaluated with respect to their possible analytical applications. [ 2 ] Divanillyl oxalate , a more eco-friendly or "green" oxalate for chemiluminescence, decomposes into the nontoxic, biodegradable compound vanillin and works in nontoxic, biodegradable triacetin . [ 3 ] Peroxyoxalate CL is an example of indirect or sensitized chemiluminescence in which the energy from an excited intermediate is transferred to a suitable fluorescent molecule, which relaxes to the ground state by emitting a photon. Rauhut and co-workers have reported that the intermediate responsible for providing the energy of excitation is 1,2-dioxetanedione . [ 1 ] [ 4 ] The peroxyoxalate reaction is able to excite many different compounds, having emissions spanning the visible and infrared regions of the spectrum, [ 4 ] [ 5 ] and the reaction can supply up to 440 kJ mol-1, corresponding to excitation at 272 nm. [ 6 ] It has been found, however, that the chemiluminescence intensity corrected for quantum yield decreases as the singlet excitation energy of the fluorescent molecule increases. [ 7 ] There is also a linear relationship between the corrected chemiluminescence intensity and the oxidation potential of the molecule. [ 7 ] This suggests the possibility of an electron transfer step in the mechanism, as demonstrated in several other chemiluminescence systems. [ 8 ] [ 9 ] [ 10 ] [ 11 ] It has been postulated that a transient charge transfer complex is formed between the intermediate 1,2-dioxetanedione and the fluorescer , [ 12 ] and a modified mechanism was proposed involving the transfer of an electron from the fluorescer to the reactive intermediate. 
[ 13 ] The emission of light is thought to result from the annihilation of the fluorescer radical cation with the carbon dioxide radical anion formed when the 1,2-dioxetanedione decomposes. [ 14 ] This process is called chemically induced electron exchange luminescence (CIEEL).
Chemiluminescent reactions are widely used in analytical chemistry . [ 15 ] [ 16 ] | https://en.wikipedia.org/wiki/Peroxyoxalate |
The perpendicular axis theorem (or plane figure theorem ) states that for a planar lamina the moment of inertia about an axis perpendicular to the plane of the lamina is equal to the sum of the moments of inertia about two mutually perpendicular axes in the plane of the lamina that intersect at the point where the perpendicular axis passes through the lamina. This theorem applies only to planar bodies and is valid when the body lies entirely in a single plane.
Define perpendicular axes x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} (which meet at origin O {\displaystyle O} ) so that the body lies in the x y {\displaystyle xy} plane, and the z {\displaystyle z} axis is perpendicular to the plane of the body. Let I x , I y and I z be moments of inertia about axis x , y , z respectively. Then the perpendicular axis theorem states that [ 1 ]
I z = I x + I y {\displaystyle I_{z}=I_{x}+I_{y}}
This rule can be applied with the parallel axis theorem and the stretch rule to find polar moments of inertia for a variety of shapes.
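As a quick numerical check, the theorem can be verified for a uniform rectangular lamina (a sketch; the mass and side lengths are arbitrary assumed values, and the standard results M b²/12 and M a²/12 for the in-plane central axes are taken as given):

```python
# Perpendicular axis theorem check for a uniform rectangular lamina
# of mass M and side lengths a x b, axes through the centre.
M, a, b = 2.0, 3.0, 1.5  # assumed illustrative values

I_x = M * b**2 / 12            # about the in-plane axis parallel to side a
I_y = M * a**2 / 12            # about the in-plane axis parallel to side b
I_z = M * (a**2 + b**2) / 12   # about the axis perpendicular to the lamina

# I_z equals I_x + I_y, as the theorem requires
assert abs(I_z - (I_x + I_y)) < 1e-12
print(I_x, I_y, I_z)  # 0.375 1.5 1.875
```

The same check works for any planar shape, since the theorem follows directly from r² = x² + y² in the plane of the lamina.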
If a planar object has rotational symmetry such that I x {\displaystyle I_{x}} and I y {\displaystyle I_{y}} are equal, [ 2 ] then the perpendicular axes theorem provides the useful relationship: I z = 2 I x = 2 I y {\displaystyle I_{z}=2I_{x}=2I_{y}}
Working in Cartesian coordinates , the moment of inertia of the planar body about the z {\displaystyle z} axis is given by: [ 3 ]
On the plane, z = 0 {\displaystyle z=0} , so these two terms are the moments of inertia about the x {\displaystyle x} and y {\displaystyle y} axes respectively, giving the perpendicular axis theorem.
The converse of this theorem is also derived similarly.
Note that ∫ x 2 d m = I y ≠ I x {\displaystyle \int x^{2}\,dm=I_{y}\neq I_{x}} because in ∫ r 2 d m {\displaystyle \int r^{2}\,dm} , r {\displaystyle r} measures the distance from the axis of rotation , so for a y -axis rotation, deviation distance from the axis of rotation of a point is equal to its x coordinate. | https://en.wikipedia.org/wiki/Perpendicular_axis_theorem |
A perpendicular paramagnetic bond is a type of chemical bond that does not exist under normal, atmospheric conditions. [ 1 ] Such a phenomenon was first hypothesized through simulation to exist in the atmospheres of white dwarf stars [ 2 ] whose magnetic fields, on the order of 10 5 teslas , [ 1 ] could allow such interactions to exist. In a very strong magnetic field, excited electrons in molecules may be stabilized, causing these molecules to abandon their original orientations parallel to the magnetic field and instead lie perpendicular to it. [ 3 ] Normally, at such intense temperatures as those near a white dwarf, more common molecular bonds cannot form and existing ones decompose. [ 2 ]
| https://en.wikipedia.org/wiki/Perpendicular_paramagnetic_bond |
In hydrodynamics , the Perrin friction factors are multiplicative adjustments to the translational and rotational friction of a rigid spheroid, relative to the corresponding frictions in spheres of the same volume. These friction factors were first calculated by Jean-Baptiste Perrin .
These factors pertain to spheroids (i.e., to ellipsoids of revolution), which are characterized by the axial ratio p = (a/b) , defined here as the axial semiaxis a (i.e., the semiaxis along the axis of revolution) divided by the equatorial semiaxis b . In prolate spheroids , the axial ratio p > 1 since the axial semiaxis is longer than the equatorial semiaxes. Conversely, in oblate spheroids , the axial ratio p < 1 since the axial semiaxis is shorter than the equatorial semiaxes. Finally, in spheres , the axial ratio p = 1 , since all three semiaxes are equal in length.
The formulae presented below assume "stick" (not "slip") boundary conditions, i.e., it is assumed that the velocity of the fluid is zero at the surface of the spheroid.
For brevity in the equations below, we define the Perrin S factor . For prolate spheroids (i.e., cigar-shaped spheroids with two short axes and one long axis)
where the parameter ξ {\displaystyle \xi } is defined
Similarly, for oblate spheroids (i.e., discus-shaped spheroids with two long axes and one short axis)
For spheres, S = 2 {\displaystyle S=2} , as may be shown by taking the limit p → 1 {\displaystyle p\rightarrow 1} for the prolate or oblate spheroids.
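The closed-form expressions for S were not reproduced above. A sketch of one common dimensionless convention, chosen here because it reproduces the stated sphere limit S = 2 (the function name is illustrative):

```python
import math

def perrin_S(p):
    """Dimensionless Perrin S factor for a spheroid of axial ratio p = a/b.

    Assumed convention: S = 2*artanh(xi)/xi for prolate (p > 1) and
    S = 2*arctan(xi)/xi for oblate (p < 1) spheroids, with
    xi = sqrt(|p**2 - 1|)/p, so that S -> 2 as p -> 1 (sphere).
    """
    if p == 1:
        return 2.0  # sphere limit, as stated in the text
    xi = math.sqrt(abs(p**2 - 1)) / p
    if p > 1:                          # prolate (cigar-shaped)
        return 2 * math.atanh(xi) / xi
    return 2 * math.atan(xi) / xi      # oblate (discus-shaped)

print(perrin_S(1.001))  # ~2.0, approaching the sphere limit
print(perrin_S(5.0))    # ~4.68 for a 5:1 prolate spheroid
```

Evaluating near p = 1 illustrates the smooth approach to the sphere value from both the prolate and oblate sides.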
The frictional coefficient of an arbitrary spheroid of volume V {\displaystyle V} equals
where f s p h e r e {\displaystyle f_{sphere}} is the translational friction coefficient of a sphere of equivalent volume ( Stokes' law )
and f P {\displaystyle f_{P}} is the Perrin translational friction factor
The frictional coefficient is related to the diffusion constant D by the Einstein relation
Hence, f t o t {\displaystyle f_{tot}} can be measured directly using analytical ultracentrifugation , or indirectly using various methods to determine the diffusion constant (e.g., NMR and dynamic light scattering ).
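Combining Stokes' law with the Einstein relation gives a quick way to estimate the diffusion constant of a spheroid via its equivalent sphere. A sketch with assumed, representative values (water viscosity, room temperature, nanometre-scale semiaxes; the Perrin factor for p = 2 is taken here as the tabulated literature value of about 1.04):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
eta = 1.0e-3         # viscosity of water at ~20 degC, Pa*s (assumed)
T = 298.0            # temperature, K (assumed)

# Equivalent-sphere radius from the spheroid volume V = (4/3)*pi*a*b**2
a, b = 2.0e-9, 1.0e-9                        # assumed prolate semiaxes (p = 2), m
V = (4.0 / 3.0) * math.pi * a * b**2
R_eq = (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0)

f_sphere = 6.0 * math.pi * eta * R_eq        # Stokes' law for the equivalent sphere
f_P = 1.04                                   # Perrin factor for p = 2 (literature value, assumed)
f_tot = f_P * f_sphere

D = k_B * T / f_tot                          # Einstein relation
print(D)  # roughly 1.7e-10 m^2/s
```

The Perrin factor only mildly increases the friction at moderate axial ratios, which is why near-globular particles diffuse almost like spheres of the same volume.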
There are two rotational friction factors for a general spheroid, one for a rotation about the axial semiaxis (denoted F a x {\displaystyle F_{ax}} ) and other for a rotation about one of the equatorial semiaxes (denoted F e q {\displaystyle F_{eq}} ). Perrin showed that
for both prolate and oblate spheroids. For spheres, F a x = F e q = 1 {\displaystyle F_{ax}=F_{eq}=1} , as may be seen by taking the limit p → 1 {\displaystyle p\rightarrow 1} .
These formulae may be numerically unstable when p ≈ 1 {\displaystyle p\approx 1} , since the numerator and denominator both go to zero in the p → 1 {\displaystyle p\rightarrow 1} limit. In such cases, it may be better to expand in a series, e.g.,
for oblate spheroids.
The rotational friction factors are rarely observed directly. Rather, one measures the exponential rotational relaxation(s) in response to an orienting force (such as flow, applied electric field, etc.). The time constant for relaxation of the axial direction vector is
whereas that for the equatorial direction vectors is
These time constants can differ markedly when the axial ratio p {\displaystyle p} deviates significantly from 1, especially for prolate spheroids. Experimental methods for measuring these time constants include fluorescence anisotropy , NMR , flow birefringence and dielectric spectroscopy .
It may seem paradoxical that τ a x {\displaystyle \tau _{ax}} involves F e q {\displaystyle F_{eq}} . This arises because re-orientations of the axial direction vector occur through rotations about the perpendicular axes, i.e., about the equatorial axes. Similar reasoning pertains to τ e q {\displaystyle \tau _{eq}} . | https://en.wikipedia.org/wiki/Perrin_friction_factors |
In mathematics , and more particularly in analytic number theory , Perron's formula is a formula due to Oskar Perron to calculate the sum of an arithmetic function , by means of an inverse Mellin transform .
Let { a ( n ) } {\displaystyle \{a(n)\}} be an arithmetic function , and let
g ( s ) = ∑ n = 1 ∞ a ( n ) n s {\displaystyle g(s)=\sum _{n=1}^{\infty }{\frac {a(n)}{n^{s}}}}
be the corresponding Dirichlet series . Presume the Dirichlet series to be uniformly convergent for ℜ ( s ) > σ {\displaystyle \Re (s)>\sigma } . Then Perron's formula is
∑ n ≤ x ′ a ( n ) = 1 2 π i ∫ c − i ∞ c + i ∞ g ( s ) x s s d s {\displaystyle {\sum _{n\leq x}}'a(n)={\frac {1}{2\pi i}}\int _{c-i\infty }^{c+i\infty }g(s){\frac {x^{s}}{s}}\,ds}
Here, the prime on the summation indicates that the last term of the sum must be multiplied by 1/2 when x is an integer . The integral is not a convergent Lebesgue integral ; it is understood as the Cauchy principal value . The formula requires that c > 0, c > σ, and x > 0.
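The behaviour of the principal-value integral can be illustrated numerically through the underlying "discontinuous factor" (1/2πi)∫ y^s s^{−1} ds over the line ℜ(s) = c, which tends to 1 for y > 1 and to 0 for 0 < y < 1; this is what selects the terms with n ≤ x when y = x/n. A truncated trapezoid-rule sketch (the function name, cutoff T and step count are ad-hoc choices, not part of the formula):

```python
import math

def perron_factor(y, c=2.0, T=4000.0, steps=200000):
    """Approximate (1/(2*pi*i)) * integral of y**s / s over Re(s) = c,
    truncated to |Im(s)| <= T.  Tends to 1 for y > 1 and 0 for 0 < y < 1."""
    h = 2.0 * T / steps
    total = 0.0 + 0.0j
    for k in range(steps + 1):
        t = -T + k * h
        s = complex(c, t)
        w = 0.5 if k in (0, steps) else 1.0   # trapezoid end-point weights
        total += w * (y ** s) / s
    # With s = c + i t we have ds = i dt, so the 1/(2*pi*i) prefactor
    # leaves h/(2*pi) times the (real part of the) sum.
    return (total * h / (2.0 * math.pi)).real

print(perron_factor(2.0))   # close to 1  (y > 1: term is counted)
print(perron_factor(0.5))   # close to 0  (y < 1: term is discarded)
```

The truncation error decays only like y^c/(πT|ln y|), which is why rigorous applications of Perron's formula always carry an explicit error term for the finite cutoff T.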
An easy sketch of the proof comes from taking Abel's sum formula
This is nothing but a Laplace transform under the variable change x = e t . {\displaystyle x=e^{t}.} Inverting it one gets Perron's formula.
Because of its general relationship to Dirichlet series, the formula is commonly applied to many number-theoretic sums. Thus, for example, one has the famous integral representation for the Riemann zeta function :
and a similar formula for Dirichlet L -functions :
where
and χ ( n ) {\displaystyle \chi (n)} is a Dirichlet character . Other examples appear in the articles on the Mertens function and the von Mangoldt function .
Perron's formula is just a special case of the formula
where
and
the Mellin transform. The Perron formula is just the special case of the test function f ( 1 / x ) = θ ( x − 1 ) , {\displaystyle f(1/x)=\theta (x-1),} for θ ( x ) {\displaystyle \theta (x)} the Heaviside step function . | https://en.wikipedia.org/wiki/Perron's_formula |
Perron's irreducibility criterion is a sufficient condition for a polynomial to be irreducible in Z [ x ] {\displaystyle \mathbb {Z} [x]} —that is, for it to be unfactorable into the product of lower- degree polynomials with integer coefficients .
This criterion is applicable only to monic polynomials . However, unlike other commonly used criteria, Perron's criterion does not require any knowledge of prime decomposition of the polynomial's coefficients.
Suppose we have the following polynomial with integer coefficients
where a 0 ≠ 0 {\displaystyle a_{0}\neq 0} . If either of the following two conditions applies:
then f {\displaystyle f} is irreducible over the integers (and by Gauss's lemma also over the rational numbers ).
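The dominant-coefficient condition is straightforward to test mechanically. A sketch (the helper name is invented; a False result is merely inconclusive, since the criterion is only sufficient):

```python
def perron_condition(coeffs):
    """Check the dominant-coefficient form of Perron's criterion for a monic
    integer polynomial x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0, a_0 != 0:

        |a_{n-1}| > 1 + |a_{n-2}| + ... + |a_1| + |a_0|

    coeffs lists coefficients from the leading term down:
    [1, a_{n-1}, ..., a_1, a_0].  True means f is irreducible over Z;
    False means only that this sufficient test is inconclusive.
    """
    if coeffs[0] != 1 or coeffs[-1] == 0:
        raise ValueError("polynomial must be monic with a_0 != 0")
    return abs(coeffs[1]) > 1 + sum(abs(c) for c in coeffs[2:])

# x^3 + 5x^2 + x + 1: |5| > 1 + |1| + |1|, so it is irreducible over Z
print(perron_condition([1, 5, 1, 1]))   # True
# x^3 + x^2 + x + 1 = (x + 1)(x^2 + 1): the condition fails (and it factors)
print(perron_condition([1, 1, 1, 1]))   # False
```

Note that, unlike Eisenstein-type criteria, this test needs no factorisation of the coefficients, only their absolute values.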
The criterion was first published by Oskar Perron in 1907 in Journal für die reine und angewandte Mathematik . [ 1 ]
A short proof can be given based on the following lemma due to Panaitopol: [ 2 ] [ 3 ]
Lemma. Let f ( x ) = x n + a n − 1 x n − 1 + ⋯ + a 1 x + a 0 {\displaystyle f(x)=x^{n}+a_{n-1}x^{n-1}+\cdots +a_{1}x+a_{0}} be a polynomial with | a n − 1 | > 1 + | a n − 2 | + ⋯ + | a 1 | + | a 0 | {\displaystyle |a_{n-1}|>1+|a_{n-2}|+\cdots +|a_{1}|+|a_{0}|} . Then exactly one zero z {\displaystyle z} of f {\displaystyle f} satisfies | z | > 1 {\displaystyle |z|>1} , and the other n − 1 {\displaystyle n-1} zeroes of f {\displaystyle f} satisfy | z | < 1 {\displaystyle |z|<1} .
Suppose that f ( x ) = g ( x ) h ( x ) {\displaystyle f(x)=g(x)h(x)} where g {\displaystyle g} and h {\displaystyle h} are integer polynomials. Since, by the above lemma, f {\displaystyle f} has only one zero with modulus not less than 1 {\displaystyle 1} , one of the polynomials g , h {\displaystyle g,h} has all its zeroes strictly inside the unit circle . Suppose that z 1 , … , z k {\displaystyle z_{1},\dots ,z_{k}} are the zeroes of g {\displaystyle g} , and | z 1 | , … , | z k | < 1 {\displaystyle |z_{1}|,\dots ,|z_{k}|<1} . Note that g ( 0 ) {\displaystyle g(0)} is a nonzero integer, and | g ( 0 ) | = | z 1 ⋯ z k | < 1 {\displaystyle |g(0)|=|z_{1}\cdots z_{k}|<1} , contradiction. Therefore, f {\displaystyle f} is irreducible.
In his publication Perron provided variants of the criterion for multivariate polynomials over arbitrary fields . In 2010, Bonciocat published novel proofs of these criteria. [ 4 ] | https://en.wikipedia.org/wiki/Perron's_irreducibility_criterion |
Perry's Chemical Engineers' Handbook (also known as Perry's Handbook , Perry's , or The Chemical Engineer's Bible ) [ 1 ] [ 2 ] was first published in 1934; the most current, ninth edition was published in July 2018. [ citation needed ] It has been a source of chemical engineering knowledge for chemical engineers , and for a wide variety of other engineers and scientists, through eight previous editions spanning more than 80 years. [ citation needed ]
The subjects covered in the book include: physical properties of chemicals and other materials; mathematics ; thermodynamics ; heat transfer ; mass transfer ; fluid dynamics ; chemical reactors and chemical reaction kinetics ; transport and storage of fluid; heat transfer equipment; psychrometry and evaporative cooling ; distillation ; gas absorption ; liquid-liquid extraction ; adsorption and ion exchange ; gas–solid, liquid–solid and solid–solid operations; biochemical engineering ; waste management , materials of construction, process economics and cost estimation ; process safety and many others.
The first edition was edited by John H. Perry, a PhD physical chemist and chemical engineer for E. I. du Pont de Nemours & Co.; W. S. Calcott (ChE) of DuPont was his assistant editor. It was published in 1934. The second edition was published in 1941. The third edition was edited by John H. Perry and published in 1950. The fourth edition was edited by Robert H. Perry , Cecil H. Chilton, and Sidney D. Kirkpatrick and published in 1963. The fifth edition was edited by Robert H. Perry and published in 1973. The sixth edition ("50th Anniversary Edition") [ citation needed ] was published in 1984 and edited by Robert H. Perry and Donald W. Green. The 1997 seventh edition was edited by Robert H. Perry and Donald W. Green. The 2,640-page eighth edition was edited by Don W. Green and Robert H. Perry [ 3 ] and published in October 2007.
The 2018–2019 ninth edition was edited by Don W. Green and Marylee W. Southard. [ 4 ] [ 5 ] Green, the handbook's editor-in-chief, holds a B.S. in petroleum engineering from the University of Tulsa and M.S. and PhD degrees in chemical engineering from the University of Oklahoma; he also edited the 6th, 7th and 8th editions of Perry's. Southard, the associate editor, holds B.S., M.S. and PhD degrees in chemical engineering from the University of Kansas. She is new to the publication of Perry's but has done significant work in inorganic chemicals production, including process engineering, design and product development. [ 6 ]
Perry Rhodan is a German space opera franchise, named after its hero. It commenced in 1961 and has been ongoing for decades, written by an ever-changing team of authors. Having sold approximately two billion copies (in novella format) worldwide (including over one billion in Germany alone), it is the most successful science fiction book series ever written. [ citation needed ] The first billion of worldwide sales was celebrated in 1986. [ 1 ] The series has spun off into comic books , audio dramas , video games and the like. A reboot, Perry Rhodan NEO , was launched in 2011 and began publication in English in April 2021. [ 2 ]
The series has spun off into many different forms of media, but originated as a serial novella published weekly since 8 September 1961 in the Romanheft (meaning "magazine novel") format. These are digest-sized booklets, usually containing 66 pages, the German equivalent of the now-defunct (and generally longer) American pulp magazine . They are published by Pabel-Moewig Verlag, a subsidiary of Bauer Media Group headquartered in Hamburg . As of February 2019, 3000 booklet novels of the original series, 850 spinoff novels of the sister series Atlan and over 400 paperbacks and 200 hardcover editions have been published, totalling over 300,000 pages.
The first 126 novels (plus five novels of the spinoff series Atlan ) were translated into English and published by Ace Books between 1969 and 1978, with the same translations used for the British edition published by Futura Publications which issued only 39 novels. When Ace cancelled its translation of the series, translator Wendayne Ackerman self-published the following 19 novels (under the business name 'Master Publications') and made them available by subscription only. Financial disputes with the German publishers led to the cancellation of the American translation in 1979.
An attempt to revive the series in English was made in 1997–1998 by Vector Publications of the US which published translations of four issues (1800–1803) from the current storyline being published in Germany at the time.
The series and its spin-offs have captured a substantial fraction of the original German science fiction output and exert influence on many German writers in the field. [ citation needed ]
The series is told in an arc storyline structure. An arc—called a "cycle" [ 3 ] —can run anywhere from 25 to 100 issues; a group of thematically connected consecutive cycles is referred to as a "grand cycle". [ 4 ]
‘Perry Rhodan, der Erbe des Universums’ (Eng: ‘The Heir to the Universe’, though the American/British editions instead used the subtitle 'Peacelord of the Universe') was created by German science fiction authors K. H. Scheer and Walter Ernsting and launched in 1961 by German publishing house Arthur Moewig Verlag (now Pabel-Moewig Verlag). Originally planned as a 30 to 50 volume series, it has been published continuously every week since, celebrating the 3000th issue in 2019. [ 5 ] [ 6 ] Written by an ever-changing team of authors, many of whom, however, remained with the series for decades or life, Perry Rhodan is issued in weekly novella -size installments in the traditional German Heftroman ( pulp booklet ) format. Unlike most German Heftromane , Perry Rhodan consists not of unconnected novels but is a series with a continuous, increasingly complex plotline, with frequent back references to events. In addition to its original Heftroman form, the series now also appears in hardcovers, paperbacks, e-books, comics and audiobooks.
Over the decades there have also been comic strips , numerous collectibles, several encyclopedias, audio plays, inspired music, etc. The series has seen partial translations into several languages. It also spawned the German-Italian-Spanish 1967 movie Mission Stardust , [ 7 ] which is widely considered so terrible that many fans of the series pretend it never existed. [ 8 ] [ 9 ] [ 10 ]
Coinciding with the 50th-anniversary World Con, on 30 September 2011, a new series named Perry Rhodan Neo began publication, attracting new readers with a reboot of the story, starting in the year 2036 instead of 1971, [ 11 ] and a related but independent story-line. On April 2, 2021, light novel and manga publisher J-Novel Club announced Perry Rhodan NEO as a launch title for its new J-Novel Pulp imprint, making this the first ongoing English release of new Perry Rhodan serials in over 20 years.
It has become the most popular science fiction book series of all time. [ 12 ]
The story begins in 1971. During the first human Moon landing by US Space Force Major Perry Rhodan and his crew, they discover a marooned extraterrestrial space ship from the fictional planet Arkon, located in the (real) M13 cluster . [ 13 ] Appropriating the Arkonide technology, they proceed to unify Terra and carve out a place for humanity in the galaxy and the cosmos. Two of the accomplishments that enable them to do so are positronic brains and starship drives for near-instantaneous hyperspatial translation . These were directly borrowed from Isaac Asimov 's science fiction.
As the series progresses, major characters, including the title character, are granted relative immortality . [ 14 ] They are immune to age and disease, but not to violent death. The story continues over the course of millennia and includes flashbacks thousands and even millions of years into the past. The scope widens to encompass other galaxies, even more remote regions of space, parallel universes and cosmic structures, time travel , paranormal powers, a variety of aliens ranging from threatening to endearing, and bodiless entities, some of which have godlike powers.
The universe in which the plot regularly takes place is called the Einstein Universe (and occasionally "Meekorah"). Its laws are nearly identical to those of the real universe, in terms of late 20th century science. Newer theories about dark matter and dark energy are not used in the series; to preserve continuity, its laws of nature continue to follow older theories that have since been superseded.
This Einstein Universe is but one of many universes, each to a greater or lesser extent different from it, for example one in which time runs slower, an anti-matter universe, a shrinking universe, etc. Each universe possesses a large ensemble of parallel timelines, which are usually unreachable from each other but may be accessed by special means, thereby itself creating many more parallel timelines.
The Einstein Universe is embedded in a high-dimensional manifold, called Hyperspace . This hyperspace consists of several subspaces that different technologies use for faster-than-light travel. The exact traits of those higher dimensions are not much explained. The border of the universe is a dimension called the deep , once used for construction of the gigantic disc-shaped world Deepland.
The Psionic Web crosses invisibly through the whole universe, constantly emitting "vital energy" and "psionic energy", guaranteeing normal (organic among others) life and the wellbeing of higher entities.
The Moral Code crosses through all universes, and is linked to the Psionic Web. It is subdivided into the Cosmogenes, which are again subdivided into the Cosmonucleotids. The Cosmonucleotids determine reality and fate for their respective parts of a given universe, via messengers .
Higher beings are trying to gain control of this Code to rule reality. The Moral Code itself was not installed by the higher beings; even they do not know why, or by whom, the Code was made.
Once the Cosmocrats ordered Perry Rhodan to find the answer to the third ultimate question: "Who initiated the LAW and what does it accomplish?" Perry Rhodan had the chance to receive the answer at the mountain of creation, but refused, as he knew that the answer would destroy his mind. The negative Superintelligence Koltoroc had received the answer to the last ultimate question, 69 million years BC at Negane Mountain, but it is not known if it made any use of the information.
An evolutionary schema, similar to the Great Chain of Being , called the "onion-shell model" is employed in relationship to all life. Here, continuous evolution is from lower to higher lifeforms, culminating in bodiless entities. Later in the series, further lifeforms, representing stages between the known shells, were introduced.
The main shells are:
The Superintelligences are the next step above normal minds. They can be born, for example, when a species collectively gives up its bodies and unites its spirits. Such Superintelligences may claim as their domain areas consisting of up to several galaxies (the entity known as "ES ('IT')" has the Local Group as its personal domain). The Superintelligence provides mental support to the species in its domain, sometimes symbiotically (positive SI), sometimes parasitically (negative SI).
The Matter Sources/Sinks are born when a Superintelligence fuses with all life and matter in its domain while shrinking. Little more is known except that the process is gradual and that the resulting object lacks the gravitational pull it would exert if the contraction had produced a black hole .
The "high powers" were long thought to be the highest known lifeforms. They live in an unimaginably distant dimension and have great powers in ruling over lower beings. However they are not omniscient and they are unable to directly interact with lower beings. To enter a regular universe they have to assume a mortal shape, thus reducing their powers and sometimes their knowledge and memory. This is known as the transform syndrome. As a consequence, they rarely interact with lower beings and instead enlist individuals, organizations or entire species.
Among the high powers are two factions known as the Cosmocrats and the Chaotarchs. The Cosmocrats wish to transform all universes into a state of absolute order (minimal entropy), while the Chaotarchs wish to do the opposite and remake all universes into absolute chaos (maximal entropy). The two factions are locked in a cataclysmic, never-ending war involving nearly all known universes, manipulating and dooming whole species in pursuit of their aims. Open warfare is just one tool among many: in the 2300–2499 cycle, the Milky Way galaxy was subjected to open military assault by the forces of chaos, who were trying to establish a bastion of chaos, a Negasphere, in the nearby galaxy Hangay.
Recent stories have revealed to the protagonists that life itself has become a rival to the high powers. Spreading uncontrollably through the universe, it can be found in nearly every niche. The Cosmocrats and Chaotarchs both use life for their own directed goals of order and disorder, but life's unplanned and unregulated cosmological actions are a disturbance to both. The Pangalactic Statisticians (a neutral organization of observers) have found that while some cosmological manipulation is caused by the Cosmocrats' servants, and a lesser amount by the servants of chaos, the majority is caused by the uncontrollable power of life itself. [ 15 ] To reduce the influence of life, the Cosmocrats have stopped their programs that encourage the development of life and intelligence. They have increased the hyperimpedance in order to reduce the effectivity and durability of most forms of hypertechnology. [ 16 ]
At least one power, called Thez, higher than either the Cosmocrats or the Chaotarchs has recently been identified. Thez is said to live so close to the "Horizon of the LAW" that both Cosmocrats and Chaotarchs have problems understanding it.
In the introduction to the first English-language edition of Perry Rhodan in 1969, editor Forrest J Ackerman said "[i]n Germany, all serious SF buffs claim to hate Perry Rhodan, but somebody (in unprecedented numbers) is certainly reading him." [ 17 ] Many American SF fans agreed with the first part of that statement, feeling the series was an embarrassment and too "juvenile". Tom Doherty , the new head of Ace Books in the mid-to-late '70s, concurred and ended the series in the US, even though it was profitable. [ 18 ] This decision meant that by 1980, when the original German versions of Perry Rhodan were becoming "more sophisticated and less aimed at younger readers", the series was no longer available in English. [ 19 ]
Critic Robert Reginald has described the series as the "ultimate soap opera of science fiction" [ 20 ] and standard "pulp science fiction, action stories with minimal characterization, awful dialog, but relatively complex plot development. The emphasis is always on man's expanding horizons, the wonder of science and space, the great destiny of the human race." [ 21 ]
The series' beginnings were often criticized for their description of an expansive mankind and frequent space battles; after William Voltz took over the position of storyline planner for the series in 1975 (a post he held until his death in 1984), the series developed a broader ethical scope and evolved in terms of storytelling style.
While some Anglophone critics dismiss the series, others praise it. In the US, the newer, more complex parts of the series have never been published, so critical review tends to be concentrated on the simple origins of the series. Editor John O'Neill has called Perry Rhodan "one of the richest — if not the richest — Space Operas ever written." [ 22 ]
In the 1960s, Forrest J Ackerman organized the publication in the US of an English translation of the series. His wife Wendayne handled translation. Other translators on the series included Sig Wahrmann, Stuart J. Byrne , and Dwight Decker. Number 1, containing German issues 1 and 2, was published by Ace Books starting in 1969. As Managing Editor, Ackerman soon incorporated elements reminiscent of the science fiction pulp magazines of his youth, such as unrelated short stories , serialized novels and a film review section. The series was a commercial success and was eventually published three times per month. [ 23 ]
Ace ended its regular run of Perry Rhodan in August 1977 with double issue #117/118. This was followed by the publication of three novellas from earlier in the series which had been left out of the series by editorial decision and not previously translated. [ 24 ] These were accompanied by three novellas from the Perry Rhodan spinoff series Atlan .
Ace concluded its run of translations with two more Atlan novels and the novel-length In the Center of the Galaxy [German: Im Zentrum der Galaxis] [ 25 ] by Clark Darlton , which had appeared in German as issue 11 of the "Perry Rhodan Planet Novels" (or Planetenromane) spin-off series.
When Ace cancelled its publication of the series in 1978, translator Wendayne Ackerman self-published the following 19 novels (numbered #119 – 137) under the business name Master Publications in a subscription-only edition. This was also cancelled in 1979. In the 1990s, Vector Enterprises restarted an American version. This version lasted for four printed issues and one electronic issue and translated #1800 to #1804.
In 2006, Pabel-Moewig Verlag licensed FanPro to publish an English translation of Perry Rhodan: Lemuria . (Some material present in the German version, such as a history of generation spaceships in science fiction, was dropped from the American version.) Only the first volume was released. In 2015–16, Perry Rhodan Digital published English translations of the full six volume Perry Rhodan: Lemuria story arc in ebook format, making these available via iTunes and other digital platforms.
In April 2021, light novel and manga publisher J-Novel Club announced Perry Rhodan NEO as one of three launch titles for its J-Novel Pulp imprint, dedicated to the best of European pulp fiction. Eight volumes, each containing two original German Hefte , or "episodes", have been announced. J-Novel Club's release uses the cover art by toi8 created for Hayakawa Publishing 's 2017 release.
In line with J-Novel Club's light novel releases, new instalments are first serialized on J-Novel Club's website over a number of weeks for subscribers. The first part of each volume is free for all visitors and requires no membership or subscription to read. Following web serialization, each volume is released as an Ebook at all major digital book retailers. J-Novel Club members who purchase the books directly receive textless versions of the cover art as a bonus.
Beginning with Volume 13, the English edition features newly commissioned art by toi8.
Christian Montillon
ISBN 978-1-7183-7910-7
Contains Episodes:
Wim Vandemaan
ISBN 978-1-7183-7912-1
Contains Episodes:
Frank Borsch
ISBN 978-1-7183-7914-5
John Marshall and his superpowered companions have escaped the nefarious Clifford Monterny, but Sid, the teleporter, is unconscious and struggling to survive. On a remote island, Marshall and a team of similarly gifted individuals join forces to delve deep into Sid’s memories. To save him, they must uncover the truth about his life on the streets of Nicaragua and the “rescue” that brought him to Camp Specter, where all was not as it seemed...
Meanwhile, the only alien on Earth is now in Monterny’s clutches. Will mankind ever come together and find its way to the stars?
Contains Episodes:
Hubert Haensel
ISBN 978-1-7183-7916-9
Meanwhile, Arkonide commander Thora da Zoltral is trapped on Venus after being shot down. As she begins to explore a puzzling ancient base, she starts to realize that this might not be the first time the Arkonides have ventured to our solar system.
Little does she know that as she investigates these new developments, back on Earth, her foster father is about to stand trial in the United States. With mankind unable to put aside its petty differences and come together, the promise of the stars looks as far away as ever.
Contains Episodes:
Christian Montillon
ISBN 978-1-7183-7918-3
Back in the nascent city of Terrania, supplies are scarce. What should be the gateway to the stars is still a building site where volunteer workers survive on meager rations. New arrivals Julian and Mildred soon find their disappointment replaced with awe, however, as they meet some of the city’s strangest residents—and when a theft occurs, they step up to help Bull find the perpetrators. Meanwhile, Crest learns that his true reason for traveling to Earth is not as secret as he thought...
The world is in chaos and suspicions abound as mankind inches ever closer to the dream of the stars.
Contains Episodes:
Marc A. Herren
ISBN 978-1-7183-7920-6
His crew aren’t faring much better, with teleporter Tako Kakuta leading a group that learns sanctuary in a hospital isn’t as safe as they expected. As one of their number fights for survival, the invaders find them—and reveal curious truths about their way of life.
Meanwhile, on Earth, the Fantan are still running rampant, seeking special people, things, and moments they call “Besun.” When Adams catches the attention of one of these otherworldly beings, he stages a desperate attempt to end the invasion...via a tour of the finest that Earth’s culture has to offer.
Now that humanity knows it is not alone in the universe, alien threats are coming from all directions and in forms no one could have imagined...
Contains Episodes:
Wim Vandemaan
ISBN 978-1-7183-7922-0
Elsewhere, Rhodan meets an elderly Arkonide who has been waiting in a crumbling base for millennia, hoping his commander will one day return. Mistaking Rhodan for that commander, he tells a tale that encompasses the war-torn history of the Ferrons and far more besides...
Meanwhile, Bull and his companions remain prisoners of the Fantan. But with Gucky, their sardonic alien cellmate, they hit on the best possible way to pass the time and create an escape attempt: musical theater! They may not have a script or know all the words, but their performance of The Pirates of Penzance is sure to be a hit.
Contains Episodes:
Christian Montillon
ISBN 978-1-7183-7924-4
Further help might arrive from unexpected sources as the scattered members of Rhodan’s crew take their fates into their own hands and regroup. Stranded in the jungle, Kakuta, Morozova, and Deringhouse hatch a plan to hijack an enemy vessel and rejoin the fight.
Meanwhile, the Fantan continue to plunder Earth, but Pounder and Crest are ready to take decisive action. Will it be enough to drive away the invaders without causing untold damage to the utopian city of Terrania? One way or another, two wars are about to come to an explosive end...
Contains Episodes:
Michelle Stern
ISBN 978-1-7183-7926-8
Meanwhile, Mildred and Julian take Gucky, the alien Mousebeaver with both telepathic and telekinetic abilities, on a road trip across the USA. Their goal? To find Julian’s father, William Tifflor, who disappeared after defending Crest during his trial. With nefarious forces at play, it’s not long before Gucky is separated from his human companions...and finds himself unable to use his powers. Dangers past and present are lurking around every corner as Earth's heroes take their next step towards the stars.
Contains Episodes:
Hermann Ritter
ISBN 978-1-7183-7928-2
Far away in the Vega system, Rhodan’s crew, lost in space and time after landing on the Ferron world of Reyan, discovers a conflict brewing between the water-dwellers and the land-dwellers, two groups descended from the original colonists. Back home, Dr. Manoli and the historian Aescunnar have begun a space journey of their own as they attempt to learn more about the Arkonides’ prior activities in Earth’s solar system. Their research soon brings them to Saturn’s moons, where danger and an uncertain fate await them.
Contains Episodes:
Wim Vandemaan
ISBN 978-1-7183-7930-5
Meanwhile, Manoli and his companions have fallen ill. The mysterious sickness is thought to be linked to their time as Besun, but relations between the Fantan and Earth are fragile. Can the Fantan save their former captives’ lives? Elsewhere in the galaxy, Rhodan’s crew find themselves stranded in time and space on Ambur, the Vega system’s mysterious tenth planet, a hostile wasteland with pockets of civilization. What are Rhodan, Thora, and the others expected to give in return for their rescue by its inhabitants? And can the strangers cure the strange affliction tormenting Bull and Sue?
Contains Episodes:
Frank Borsch
ISBN 978-1-7183-7932-9
Meanwhile, Rhodan’s group arrives by stowing away on an alien ship. Their reception is all the more hostile, as they are forced to flee into the forest to avoid robots hunting them down. Driven by a vision Rhodan once had, they strive to find a path from the hemispherical planet’s round side to its strange and implausible flat side. As secrets abound and IT’s servants play a game of wits with competing agendas, what truths will be revealed about the Planet of Eternal Life, and will the heroes finally be granted the immortality they seek?
Contains Episodes:
Bernd Perplies
ISBN 978-1-7183-7934-3
Across the stars, Eric Manoli finds himself a stranger in a strange land after arriving on the Topsidan homeworld, sheltered in a dangerous city where the locals have little love for offworlders. Can he find a way to leave the confines of his gilded cage and search for his friends? Back on Terra, intrigue is brewing as dark facets of Bai Jun’s past come to light. When a shadowy group starts to blackmail him for their own ends, he is faced with difficult decisions.
Contains Episodes:
Translations of Perry Rhodan are currently available in several countries. In Brazil, issues #1 to #536 and #650 to #847 had appeared as of August 2011; issues #537 to #649, and subsequent issues up to around #1400 as of December 2014, together with the Atlan series, the Planetary Novels and Perry Rhodan NEO (up to volume 28), have been released by the "Project Translation". Translations are also available in Russia, China, Japan (#1 to #800 as of May 2011), France, the Czech Republic, and the Netherlands (#1 to #2000 as of September 2009). Apart from the US version, there were also editions in Canada, Great Britain (#1 to #39), Italy and Finland; however, the latter have been discontinued.
The first language into which Perry Rhodan was translated was Hebrew . In 1965, the first four episodes appeared in Tel Aviv in a pirated translation, which for unknown reasons ceased before publication of the fifth (not because it was detected by the German publishers, who only heard about it many years later). The few surviving copies of this 1965 translation are highly valued by Israeli collectors. [ 26 ]
The original series is divided into the following cycles and "grand cycles". Only the grand cycles The Great Cosmic Mystery and Thoregon have official names; the other grand cycles were not planned as such and were named retrospectively by the readers.
A letter page and film reviews began in #6. The magazine edition would later include short stories—old and new—and reprints of classic serialized novels such as Edison's Conquest of Mars by Garrett P. Serviss (reprinted as Pursuit to Mars ). Of special note is a lost chapter of the H.G. Wells novel The Time Machine that was published in this manner. Also serialized was William Ellern 's New Lensman novel.
Copies of the Ace books and the rarer magazine versions can be found in online auction sites such as eBay and fixed-price online stores like Amazon.com . Used bookstores often have some of the Ace books, but rarely the magazine versions [ citation needed ] .
Matthias Rust , the then-18-year-old aviator who in 1987 landed his Cessna 172 aircraft on the Bolshoy Moskvoretsky Bridge next to Red Square in Moscow, cited Perry Rhodan's adventures as his main inspiration for penetrating Soviet airspace. [ 27 ]
Dutch ESA astronaut André Kuipers was inspired to become an astronaut from an early age by the Perry Rhodan albums his grandmother had bought for him, and that he eventually started buying himself from his allowance. When he finally went into space, on 18 April 2004, he brought his very first booklet along with him. It was number ten in the red series, Ruimteoorlog in de Wegasector ( Space War in the Vega Sector or Raumschlacht im Wega-Sektor ). [ 28 ]
Christopher Franke , former member of German electronica group Tangerine Dream and soundtrack composer for US science-fiction television series Babylon 5 , released Perry Rhodan Pax Terra in 1996, composed of music inspired by the Perry Rhodan epic.
The German group The Psychedelic Avengers have said that they were inspired by Perry Rhodan on their 2004 release And the Curse of the Universe . Another group, Sensus, released a song "Perry Rhodan .. More Than A Million Lightyears From Home" in 1986 and presented it at the Worldcon in Saarbrücken .
In 1995, the Czech computer adventure video game Mise Quadam was created based on the motifs of the series. In 2008, the computer game The Immortals of Terra: A Perry Rhodan Adventure ( Rhodan: Myth of the Illochim ) was released.
Bubonicon , an annual science fiction convention in Albuquerque , New Mexico, US, adopted as its mascot Perry Rhodent, a rat wearing only one shoe (or boot). Perry's image is reinvented each year for the convention's program and T-shirts, often by the convention's Artist Guest of Honor. [ citation needed ]