Particle displacement or displacement amplitude is a measurement of the distance a sound particle moves from its equilibrium position in a medium as it transmits a sound wave. [ 1 ] The SI unit of particle displacement is the metre (m). In most cases this is a longitudinal wave of pressure (such as sound ), but it can also be a transverse wave , such as the vibration of a taut string. In the case of a sound wave travelling through air , the particle displacement is evident in the oscillations of air molecules with, and against, the direction in which the sound wave is travelling. [ 2 ]
A particle of the medium undergoes displacement according to the particle velocity of the sound wave traveling through the medium, while the sound wave itself moves at the speed of sound , equal to 343 m/s in air at 20 °C .
Particle displacement, denoted δ , is given by [ 3 ]
\delta = \int v\,\mathrm{d}t,
where v is the particle velocity .
The particle displacement of a progressive sine wave is given by
where
It follows that the particle velocity and the sound pressure along the direction of propagation of the sound wave x are given by
where
Taking the Laplace transforms of v and p with respect to time yields
Since \varphi_{v,0} = \varphi_{p,0} , the amplitude of the specific acoustic impedance is given by
Consequently, the amplitude of the particle displacement is related to those of the particle velocity and the sound pressure by
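The displayed formulas are not reproduced in this extract. As a hedged summary of the standard amplitude relations for a progressive plane wave, with δ_m, v_m and p_m the amplitudes of particle displacement, particle velocity and sound pressure, ω the angular frequency and z_m the amplitude of the specific acoustic impedance:
\delta_m = \frac{v_m}{\omega} = \frac{p_m}{\omega z_m}.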
| https://en.wikipedia.org/wiki/Particle_displacement
Particle image velocimetry ( PIV ) is an optical method of flow visualization used in education [ 1 ] and research. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] It is used to obtain instantaneous velocity measurements and related properties in fluids . The fluid is seeded with tracer particles which, for sufficiently small particles, are assumed to faithfully follow the flow dynamics (the degree to which the particles faithfully follow the flow is represented by the Stokes number ). The fluid with entrained particles is illuminated so that particles are visible. The motion of the seeding particles is used to calculate speed and direction (the velocity field ) of the flow being studied.
Other techniques used to measure flows are laser Doppler velocimetry and hot-wire anemometry . The main difference between PIV and those techniques is that PIV produces two-dimensional or even three-dimensional vector fields , while the other techniques measure the velocity at a point. During PIV, the particle concentration is such that it is possible to identify individual particles in an image, but not to track an individual particle between images with certainty. When the particle concentration is so low that it is possible to follow an individual particle it is called particle tracking velocimetry , while laser speckle velocimetry is used for cases where the particle concentration is so high that it is difficult to observe individual particles in an image.
Typical PIV apparatus consists of a camera (normally a digital camera with a charge-coupled device (CCD) chip in modern systems), a strobe or laser with an optical arrangement to limit the physical region illuminated (normally a cylindrical lens to convert a light beam to a line), a synchronizer to act as an external trigger for control of the camera and laser, the seeding particles and the fluid under investigation. A fiber-optic cable or liquid light guide may connect the laser to the lens setup. PIV software is used to post-process the optical images. [ 7 ] [ 8 ]
Particle image velocimetry (PIV) is a non-intrusive optical flow measurement technique used to study fluid flow patterns and velocities. PIV has found widespread applications in various fields of science and engineering, including aerodynamics , combustion, oceanography , and biofluids . The development of PIV can be traced back to the early 20th century when researchers started exploring different methods to visualize and measure fluid flow.
The early days of PIV can be credited to the pioneering work of Ludwig Prandtl , a German physicist and engineer, who is often regarded as the father of modern aerodynamics. In the 1920s, Prandtl and his colleagues used shadowgraph and schlieren techniques to visualize and measure flow patterns in wind tunnels . These methods relied on the refractive index differences between the fluid regions of interest and the surrounding medium to generate contrast in the images. However, these methods were limited to qualitative observations and did not provide quantitative velocity measurements.
The early PIV setups were relatively simple and used photographic film as the image recording medium. A laser was used to illuminate particles, such as oil droplets or smoke, added to the flow, and the resulting particle motion was captured on film. The films were then developed and analyzed to obtain flow velocity information. These early PIV systems had limited spatial resolution and were labor-intensive, but they provided valuable insights into fluid flow behavior.
The advent of lasers in the 1960s revolutionized the field of flow visualization and measurement. Lasers provided a coherent and monochromatic light source that could be easily focused and directed, making them ideal for optical flow diagnostics. In the late 1960s and early 1970s, researchers such as Arthur L. Lavoie, Hervé L. J. H. Scohier, and Adrian Fouriaux independently proposed the concept of particle image velocimetry (PIV). PIV was initially used for studying air flows and measuring wind velocities, but its applications soon extended to other areas of fluid dynamics .
In the 1980s, the development of charge-coupled devices (CCDs) and digital image processing techniques revolutionized PIV. CCD cameras replaced photographic film as the image recording medium, providing higher spatial resolution , faster data acquisition, and real-time processing capabilities. Digital image processing techniques allowed for accurate and automated analysis of the PIV images, greatly reducing the time and effort required for data analysis.
The advent of digital imaging and computer processing capabilities in the 1980s and 1990s revolutionized PIV, leading to the development of advanced PIV techniques, such as multi-frame PIV, stereo-PIV, and time-resolved PIV. These techniques allowed for higher accuracy, higher spatial and temporal resolution, and three-dimensional measurements, expanding the capabilities of PIV and enabling its application in more complex flow systems.
In the following decades, PIV continued to evolve and advance in several key areas. One significant advancement was the use of dual or multiple exposures in PIV, which allowed for the measurement of both instantaneous and time-averaged velocity fields. Stereoscopic PIV ("stereo-PIV") uses two cameras viewing the illuminated plane from different angles, allowing the measurement of all three velocity components within that plane. This provided a more complete picture of the flow field and enabled the study of complex flows, such as turbulence and vortices.
In the 2000s and beyond, PIV continued to evolve with the development of high-power lasers, high-speed cameras, and advanced image analysis algorithms. These advancements have enabled PIV to be used in extreme conditions, such as high-speed flows, combustion systems, and microscale flows, opening up new frontiers for PIV research. PIV has also been integrated with other measurement techniques, such as temperature and concentration measurements, and has been used in emerging fields, such as microscale and nanoscale flows, granular flows, and additive manufacturing.
The advancement of PIV has been driven by the development of new laser sources, cameras, and image analysis techniques. Advances in laser technology have led to the use of high-power lasers, such as Nd:YAG lasers and diode lasers , which provide increased illumination intensity and allow for measurements in more challenging environments, such as high-speed flows and combustion systems. High-speed cameras with improved sensitivity and frame rates have also been developed, enabling the capture of transient flow phenomena with high temporal resolution. Furthermore, advanced image analysis techniques, such as correlation-based algorithms, phase-based methods, and machine learning algorithms , have been developed to enhance the accuracy and efficiency of PIV measurements.
Another major advancement in PIV was the development of digital correlation algorithms for image analysis . These algorithms allowed for more accurate and efficient processing of PIV images, enabling higher spatial resolution and faster data acquisition rates. Various correlation algorithms, such as cross-correlation , Fourier-transform -based correlation, and adaptive correlation, were developed and widely used in PIV research.
PIV has also benefited from the development of computational fluid dynamics (CFD) simulations, which have become powerful tools for predicting and analyzing fluid flow behavior. PIV data can be used to validate and calibrate CFD simulations, and in turn, CFD simulations can provide insights into the interpretation and analysis of PIV data. The combination of experimental PIV measurements and numerical simulations has enabled researchers to gain a deeper understanding of fluid flow phenomena and has led to new discoveries and advancements in various scientific and engineering fields.
In addition to the technical advancements, PIV has also been integrated with other measurement techniques, such as temperature and concentration measurements, to provide more comprehensive and multi-parameter flow measurements. For example, combining PIV with thermographic phosphors or laser-induced fluorescence allows for simultaneous measurement of velocity and temperature or concentration fields, providing valuable data for studying heat transfer , mixing, and chemical reactions in fluid flows.
The historical development of PIV has been driven by the need for accurate and non-intrusive flow measurements in various fields of science and engineering. The early years of PIV were marked by the development of basic PIV techniques, such as two-frame PIV, and the application of PIV in fundamental fluid dynamics research, primarily in academic settings. As PIV gained popularity, researchers started using it in more practical applications, such as aerodynamics, combustion, and oceanography.
As PIV continues to advance and evolve, it is expected to find further applications in a wide range of fields, from fundamental research in fluid dynamics to practical applications in engineering, environmental science , and medicine. The continued development of PIV techniques, including advancements in lasers, cameras, image analysis algorithms, and integration with other measurement techniques, will further enhance its capabilities and broaden its applications.
In aerodynamics, PIV has been used to study the flow over aircraft wings, rotor blades, and other aerodynamic surfaces, providing insights into the flow behavior and aerodynamic performance of these systems.
As PIV gained popularity, it found applications in a wide range of fields beyond aerodynamics, including combustion, oceanography, biofluids, and microscale flows. In combustion research, PIV has been used to study the details of combustion processes, such as flame propagation, ignition, and fuel spray dynamics, providing valuable insights into the complex interactions between fuel and air in combustion systems. In oceanography, PIV has been used to study the motion of water currents, waves, and turbulence, aiding in the understanding of ocean circulation patterns and coastal erosion. In biofluids research, PIV has been applied to study blood flow in arteries and veins, respiratory flow, and the motion of cilia and flagella in microorganisms, providing important information for understanding physiological processes and disease mechanisms.
PIV has also been used in new and emerging fields, such as microscale and nanoscale flows, granular flows , and multiphase flows . Micro-PIV and nano-PIV have been used to study flows in microchannels , nanopores , and biological systems at the microscale and nanoscale, providing insights into the unique behaviors of fluids at these length scales. PIV has been applied to study the motion of particles in granular flows, such as avalanches and landslides, and to investigate multiphase flows, such as bubbly flows and oil-water flows, which are important in environmental and industrial processes. In microscale flows, conventional measurement techniques are challenging to apply due to the small length scales involved. Micro-PIV has been used to study flows in microfluidic devices, such as lab-on-a-chip systems, and to investigate phenomena such as droplet formation, mixing, and cell motion, with applications in drug delivery , biomedical diagnostics, and microscale engineering.
PIV has also found applications in advanced manufacturing processes, such as additive manufacturing, where understanding and optimizing fluid flow behavior is critical for achieving high-quality and high-precision products. PIV has been used to study the flow dynamics of gases, liquids, and powders in additive manufacturing processes, providing insights into the process parameters that affect the quality and properties of the manufactured products.
PIV has also been used in environmental science to study the dispersion of pollutants in air and water, sediment transport in rivers and coastal areas, and the behavior of pollutants in natural and engineered systems. In energy research, PIV has been used to study the flow behavior in wind turbines , hydroelectric power plants, and combustion processes in engines and turbines, aiding in the development of more efficient and environmentally friendly energy systems.
The seeding particles are an inherently critical component of the PIV system. Depending on the fluid under investigation, the particles must be able to match the fluid properties reasonably well. Otherwise they will not follow the flow well enough for the PIV analysis to be considered accurate. Ideal particles will have the same density as the fluid system being used, and are spherical (these particles are called microspheres ). While the actual particle choice depends on the nature of the fluid, for macro PIV investigations they are generally glass beads, polystyrene , polyethylene , aluminum flakes or oil droplets (if the fluid under investigation is a gas ). The refractive index of the seeding particles should differ from that of the fluid they seed, so that the laser sheet incident on the fluid flow is scattered by the particles towards the camera.
The particles are typically of a diameter on the order of 10 to 100 micrometers. As for sizing, the particles should be small enough that their response time to the motion of the fluid is reasonably short, so they accurately follow the flow, yet large enough to scatter a significant quantity of the incident laser light. For some experiments involving combustion, the seeding particle size may be smaller, on the order of 1 micrometer, to avoid the quenching effect that the inert particles may have on flames. Due to the small size of the particles, the particles' motion is dominated by Stokes' drag and settling or rising effects. In a model where particles are treated as spherical ( microspheres ) at a very low Reynolds number , the ability of the particles to follow the fluid's flow is inversely proportional to the difference in density between the particles and the fluid, and also inversely proportional to the square of their diameter. The scattered light from the particles is dominated by Mie scattering and so is also proportional to the square of the particles' diameters. Thus the particle size needs to be balanced: large enough to scatter enough light to accurately visualize all particles within the laser sheet plane, but small enough to accurately follow the flow.
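The sizing trade-off above can be made concrete with a small, hedged calculation (not from the source; the formula tau_p = rho_p * d_p^2 / (18 * mu) is the standard Stokes-drag response time, and all function names and numerical values below are illustrative assumptions):

# Hedged sketch: estimate how well a candidate seeding particle follows the flow.
def particle_response_time(rho_p, d_p, mu):
    """Stokes-drag response time of a spherical particle [s]."""
    return rho_p * d_p**2 / (18.0 * mu)

def stokes_number(tau_p, tau_flow):
    """Ratio of particle response time to a characteristic flow time scale."""
    return tau_p / tau_flow

if __name__ == "__main__":
    rho_p = 1000.0      # particle density [kg/m^3] (e.g. an oil droplet), assumed
    d_p = 1e-6          # particle diameter [m] (1 micrometre), assumed
    mu_air = 1.8e-5     # dynamic viscosity of air [Pa s], approximate
    tau_flow = 1e-3     # characteristic flow time scale [s], assumed

    tau_p = particle_response_time(rho_p, d_p, mu_air)
    St = stokes_number(tau_p, tau_flow)
    print(f"particle response time: {tau_p:.2e} s, Stokes number: {St:.2e}")
    # St << 1 suggests the particle follows the flow faithfully.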
The seeding mechanism needs to also be designed so as to seed the flow to a sufficient degree without overly disturbing the flow.
To perform PIV analysis on the flow, two exposures of laser light upon the camera are required from the flow. Originally, because cameras could not capture multiple frames at high speed, both exposures were captured on the same frame, and this single frame was used to determine the flow through a process called autocorrelation. However, autocorrelation leaves the direction of the flow ambiguous, as it is not clear which particle images come from the first pulse and which from the second. Faster digital cameras using CCD or CMOS chips have since been developed that can capture two frames at high speed with a few hundred ns between the frames. This allows each exposure to be isolated on its own frame for more accurate cross-correlation analysis. The limitation of typical cameras is that this fast speed is limited to a pair of shots, because each pair of shots must be transferred to the computer before another pair can be taken; successive pairs can only be acquired at a much slower rate. High-speed CCD or CMOS cameras are available but are much more expensive.
For macro PIV setups, lasers are predominant due to their ability to produce high-power light beams with short pulse durations. This yields short exposure times for each frame. Nd:YAG lasers , commonly used in PIV setups, emit primarily at a wavelength of 1064 nm and its harmonics (532 nm, 266 nm, etc.). For safety reasons, the laser emission is typically bandpass filtered to isolate the 532 nm harmonic (green light, the only harmonic visible to the naked eye). A fiber-optic cable or liquid light guide might be used to direct the laser light to the experimental setup.
The optics consist of a spherical lens and cylindrical lens combination. The cylindrical lens expands the laser into a plane while the spherical lens compresses the plane into a thin sheet. This is critical as the PIV technique cannot generally measure motion normal to the laser sheet and so ideally this is eliminated by maintaining an entirely 2-dimensional laser sheet. The spherical lens cannot compress the laser sheet into an actual 2-dimensional plane. The minimum thickness is on the order of the wavelength of the laser light and occurs at a finite distance from the optics setup (the focal point of the spherical lens). This is the ideal location to place the analysis area of the experiment.
The correct lens for the camera should also be selected to properly focus on and visualize the particles within the investigation area.
The synchronizer acts as an external trigger for both the camera(s) and the laser. While analogue systems in the form of a photosensor , rotating aperture and a light source have been used in the past, most systems in use today are digital. Controlled by a computer, the synchronizer can dictate the timing of each frame of the CCD camera's sequence in conjunction with the firing of the laser to within 1 ns precision. Thus the time between each pulse of the laser and the placement of the laser shot in reference to the camera's timing can be accurately controlled. Knowledge of this timing is critical as it is needed to determine the velocity of the fluid in the PIV analysis. Stand-alone electronic synchronizers, called digital delay generators , offer variable resolution timing from as low as 250 ps to as high as several ms. With up to eight channels of synchronized timing, they offer the means to control several flash lamps and Q-switches as well as provide for multiple camera exposures.
The frames are split into a large number of interrogation areas, or windows. A displacement vector can then be calculated for each window with the help of signal processing and autocorrelation or cross-correlation techniques. This is converted to a velocity using the time between laser shots and the physical size of each pixel on the camera. The size of the interrogation window should be chosen to have at least 6 particles per window on average.
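As a minimal sketch of that correlation step (an illustration, not the algorithm of any particular PIV package; the function names are made up and the sub-pixel step uses the common three-point Gaussian peak fit), the displacement of one interrogation-window pair could be estimated as follows:

import numpy as np
from scipy.signal import fftconvolve

def window_displacement(win_a, win_b):
    """Estimate the displacement (dy, dx) of win_b relative to win_a
    from the peak of their cross-correlation map."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Cross-correlation computed as FFT-based convolution with a flipped kernel.
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")
    peak = np.unravel_index(np.argmax(corr), corr.shape)

    def gauss_subpixel(c, i, axis_len):
        # Three-point Gaussian fit along one axis (guarding borders and non-positive values).
        if i <= 0 or i >= axis_len - 1:
            return 0.0
        cm, c0, cp = c[i - 1], c[i], c[i + 1]
        if min(cm, c0, cp) <= 0:
            return 0.0
        denom = 2 * np.log(cm) - 4 * np.log(c0) + 2 * np.log(cp)
        return (np.log(cm) - np.log(cp)) / denom if denom != 0 else 0.0

    dy = peak[0] - (win_a.shape[0] - 1) + gauss_subpixel(corr[:, peak[1]], peak[0], corr.shape[0])
    dx = peak[1] - (win_a.shape[1] - 1) + gauss_subpixel(corr[peak[0], :], peak[1], corr.shape[1])
    return dy, dx

A full analysis repeats this for every window and converts the pixel displacement to a velocity using the laser pulse separation and the image magnification.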
The synchronizer controls the timing between image exposures and also permits image pairs to be acquired at various times along the flow. For accurate PIV analysis, it is ideal that the region of the flow that is of interest should display an average particle displacement of about 8 pixels. This is a compromise between a longer time spacing which would allow the particles to travel further between frames, making it harder to identify which interrogation window traveled to which point, and a shorter time spacing, which could make it overly difficult to identify any displacement within the flow.
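For example, a hedged back-of-the-envelope choice of pulse separation for the roughly 8-pixel target displacement mentioned above (all numerical values are assumptions, not from the source):

# Hedged sketch: choose the laser pulse separation dt so that the fastest
# expected flow moves a particle image about 8 pixels between exposures.
pixel_pitch = 6.5e-6       # physical pixel size on the sensor [m], assumed
magnification = 0.2        # image magnification (image size / object size), assumed
u_max = 10.0               # fastest expected flow velocity [m/s], assumed
target_px = 8.0            # target particle-image displacement [pixels]

dt = target_px * pixel_pitch / (magnification * u_max)
print(f"suggested pulse separation: {dt*1e6:.1f} microseconds")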
The scattered light from each particle should be in the region of 2 to 4 pixels across on the image. If too large an area is recorded, particle image size drops and peak locking might occur with loss of sub pixel precision. There are methods to overcome the peak locking effect, but they require some additional work.
If in-house PIV expertise and time to develop a system are available, it is possible, though not trivial, to build a custom PIV system. Research-grade PIV systems do, however, have high-power lasers and high-end camera specifications in order to take measurements across the broadest spectrum of experiments required in research.
PIV is closely related to digital image correlation , an optical displacement measurement technique that uses correlation techniques to study the deformation of solid materials.
The method is, to a large degree, nonintrusive. The added tracers (if they are properly chosen) generally cause negligible distortion of the fluid flow. [ 9 ]
Optical measurement avoids the need for Pitot tubes , hot-wire anemometers or other intrusive flow measurement probes. The method is capable of measuring an entire two-dimensional cross section (geometry) of the flow field simultaneously.
High-speed data processing allows the generation of large numbers of image pairs which may be analysed on a personal computer, in real time or at a later time, so a large quantity of near-continuous information may be gained.
Sub-pixel displacement values allow a high degree of accuracy, since each vector is the statistical average of many particles within a particular tile. Displacement can typically be accurate down to 10% of one pixel on the image plane.
In some cases the particles will, due to their higher density, not perfectly follow the motion of the fluid ( gas / liquid ). If experiments are done in water, for instance, it is easy to find very cheap particles (e.g. plastic powder with a diameter of ~60 μm) with the same density as water. If the densities still do not match, the density of the fluid can be tuned by increasing or decreasing its temperature. This leads to slight changes in the Reynolds number, so the fluid velocity or the size of the experimental object has to be changed to account for this.
Particle image velocimetry methods will in general not be able to measure components along the z-axis (toward or away from the camera). These components might not only be missed; they might also introduce interference in the data for the x/y-components caused by parallax. These problems do not exist in stereoscopic PIV, which uses two cameras to measure all three velocity components.
Since the resulting velocity vectors are based on cross-correlating the intensity distributions over small areas of the flow, the resulting velocity field is a spatially averaged representation of the actual velocity field. This obviously has consequences for the accuracy of spatial derivatives of the velocity field, vorticity, and spatial correlation functions that are often derived from PIV velocity fields.
PIV systems used in research often use class IV lasers and high-resolution, high-speed cameras, which bring cost and safety constraints.
Stereoscopic PIV utilises two cameras with separate viewing angles to extract the z-axis displacement. Both cameras must be focused on the same spot in the flow and must be properly calibrated to have the same point in focus.
In fundamental fluid mechanics, displacement within a unit time in the X, Y and Z directions are commonly defined by the variables U, V and W. As was previously described, basic PIV extracts the U and V displacements as functions of the in-plane X and Y directions. This enables calculations of the U_x , V_y , U_y and V_x velocity gradients. However, the other 5 terms of the velocity gradient tensor are unable to be found from this information. The stereoscopic PIV analysis also grants the Z-axis displacement component, W, within that plane. Not only does this grant the Z-axis velocity of the fluid at the plane of interest, but two more velocity gradient terms can be determined: W_x and W_y . The velocity gradient components U_z , V_z , and W_z cannot be determined.
The velocity gradient components form the tensor:
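The tensor itself is not reproduced in this extract; written out in the notation above (with U_x denoting ∂U/∂x, and so on), it takes the form
\nabla\mathbf{u} = \begin{pmatrix} U_x & U_y & U_z \\ V_x & V_y & V_z \\ W_x & W_y & W_z \end{pmatrix},
of which single-plane stereoscopic PIV supplies the six components U_x, U_y, V_x, V_y, W_x and W_y, leaving U_z, V_z and W_z undetermined.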
This is an expansion of stereoscopic PIV by adding a second plane of investigation directly offset from the first one. Four cameras are required for this analysis. The two planes of laser light are created by splitting the laser emission with a beam splitter into two beams. Each beam is then polarized orthogonally with respect to one another. Next, they are transmitted through a set of optics and used to illuminate one of the two planes simultaneously.
The four cameras are paired into groups of two. Each pair focuses on one of the laser sheets in the same manner as single-plane stereoscopic PIV. Each of the four cameras has a polarizing filter designed to only let pass the polarized scattered light from the respective planes of interest. This essentially creates a system by which two separate stereoscopic PIV analysis setups are run simultaneously with only a minimal separation distance between the planes of interest.
This technique allows the determination of the three velocity gradient components single-plane stereoscopic PIV could not calculate: U_z , V_z , and W_z . With this technique, the entire velocity gradient tensor of the fluid at the 2-dimensional plane of interest can be quantified. A difficulty arises in that the laser sheets should be maintained close enough together so as to approximate a two-dimensional plane, yet offset enough that meaningful velocity gradients can be found in the z-direction.
There are several extensions of the dual-plane stereoscopic PIV idea available. There is an option to create several parallel laser sheets using a set of beamsplitters and quarter-wave plates, providing three or more planes, using a single laser unit and stereoscopic PIV setup, called XPIV. [ 10 ]
With the use of an epifluorescent microscope, microscopic flows can be analyzed. MicroPIV makes use of fluorescing particles that excite at a specific wavelength and emit at another wavelength. Laser light is reflected through a dichroic mirror, travels through an objective lens that focuses on the point of interest, and illuminates a regional volume. The emission from the particles, along with reflected laser light, shines back through the objective, the dichroic mirror and through an emission filter that blocks the laser light. Where PIV draws its 2-dimensional analysis properties from the planar nature of the laser sheet, microPIV utilizes the ability of the objective lens to focus on only one plane at a time, thus creating a 2-dimensional plane of viewable particles. [ 11 ] [ 12 ]
MicroPIV particles are on the order of several hundred nm in diameter, meaning they are extremely susceptible to Brownian motion. Thus, a special ensemble-averaging analysis technique must be utilized for this technique. The cross-correlations of a series of basic PIV analyses are averaged together to determine the actual velocity field. Thus, only steady flows can be investigated. Special preprocessing techniques must also be utilized since the images tend to have a zero-displacement bias from background noise and low signal-to-noise ratios. Usually, high numerical aperture objectives are also used to capture the maximum emission light possible. Optic choice is also critical for the same reasons.
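A hedged sketch of that ensemble-averaging step (illustrative names only; it assumes a list of interrogation-window pairs extracted from many image pairs of the same steady flow): the correlation maps are summed before the peak is located, so the random Brownian contribution tends to average out.

import numpy as np
from scipy.signal import fftconvolve

def ensemble_displacement(window_pairs):
    """Average the correlation maps of many (win_a, win_b) pairs from a steady
    flow, then locate a single peak; Brownian jitter tends to cancel out."""
    corr_sum = None
    for win_a, win_b in window_pairs:
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = fftconvolve(b, a[::-1, ::-1], mode="full")
        corr_sum = corr if corr_sum is None else corr_sum + corr
    peak = np.unravel_index(np.argmax(corr_sum), corr_sum.shape)
    ny, nx = window_pairs[0][0].shape
    return peak[0] - (ny - 1), peak[1] - (nx - 1)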
Holographic PIV (HPIV) encompasses a variety of experimental techniques which use the interference of coherent light scattered by a particle and a reference beam to encode information of the amplitude and phase of the scattered light incident on a sensor plane. This encoded information, known as a hologram , can then be used to reconstruct the original intensity field by illuminating the hologram with the original reference beam via optical methods or digital approximations. The intensity field is interrogated using 3-D cross-correlation techniques to yield a velocity field.
Off-axis HPIV uses separate beams to provide the object and reference waves. This setup is used to prevent speckle noise from being generated by interference of the two waves within the scattering medium, which would occur if they were both propagated through the medium. An off-axis experiment is a highly complex optical system comprising numerous optical elements, and the reader is referred to an example schematic in Sheng et al. [ 13 ] for a more complete presentation.
In-line holography is another approach that provides some unique advantages for particle imaging. Perhaps the largest of these is the use of forward scattered light, which is orders of magnitude brighter than scattering oriented normal to the beam direction. Additionally, the optical setup of such systems is much simpler because the residual light does not need to be separated and recombined at a different location. The in-line configuration also provides a relatively easy extension to apply CCD sensors, creating a separate class of experiments known as digital in-line holography. The complexity of such setups shifts from the optical setup to image post-processing, which involves the use of simulated reference beams. Further discussion of these topics is beyond the scope of this article and is treated in Arroyo and Hinsch [ 14 ]
A variety of issues degrade the quality of HPIV results. The first class of issues involves the reconstruction itself. In holography, the object wave of a particle is typically assumed to be spherical; however, due to Mie scattering theory, this wave is a complex shape which can distort the reconstructed particle. Another issue is the presence of substantial speckle noise which lowers the overall signal-to-noise ratio of particle images. This effect is of greater concern for in-line holographic systems because the reference beam is propagated through the volume along with the scattered object beam. Noise can also be introduced through impurities in the scattering medium, such as temperature variations and window blemishes. Because holography requires coherent imaging, these effects are much more severe than traditional imaging conditions. The combination of these factors increases the complexity of the correlation process. In particular, the speckle noise in an HPIV recording often prevents traditional image-based correlation methods from being used. Instead, single particle identification and correlation are implemented, which set limits on particle number density. A more comprehensive outline of these error sources is given in Meng et al. [ 15 ]
In light of these issues, it may seem that HPIV is too complicated and error-prone to be used for flow measurements. However, many impressive results have been obtained with all holographic approaches. Svizher and Cohen [ 16 ] used a hybrid HPIV system to study the physics of hairpin vortices. Tao et al. [ 17 ] investigated the alignment of vorticity and strain rate tensors in high Reynolds number turbulence. As a final example, Sheng et al. [ 13 ] used holographic microscopy to perform near-wall measurements of turbulent shear stress and velocity in turbulent boundary layers.
By using a rotating mirror, a high-speed camera and correcting for geometric changes, PIV can be performed nearly instantly on a set of planes throughout the flow field. Fluid properties between the planes can then be interpolated. Thus, a quasi-volumetric analysis can be performed on a target volume. Scanning PIV can be performed in conjunction with the other 2-dimensional PIV methods described to approximate a 3-dimensional volumetric analysis.
Tomographic PIV is based on the illumination, recording, and reconstruction of tracer particles within a 3-D measurement volume. The technique uses several cameras to record simultaneous views of the illuminated volume, which is then reconstructed to yield a discretized 3-D intensity field. A pair of intensity fields are analyzed using 3-D cross-correlation algorithms to calculate the 3-D, 3-C velocity field within the volume. The technique was originally developed [ 18 ] by Elsinga et al. [ 19 ] in 2006.
The reconstruction procedure is a complex under-determined inverse problem. [ citation needed ] The primary complication is that a single set of views can result from a large number of 3-D volumes. Procedures to properly determine the unique volume from a set of views are the foundation for the field of tomography. In most Tomo-PIV experiments, the multiplicative algebraic reconstruction technique (MART) is used. The advantage of this pixel-by-pixel reconstruction technique is that it avoids the need to identify individual particles. [ citation needed ] Reconstructing the discretized 3-D intensity field is computationally intensive and, beyond MART, several developments have sought to significantly reduce this computational expense, for example the multiple line-of-sight simultaneous multiplicative algebraic reconstruction technique (MLOS-SMART) [ 20 ] which takes advantage of the sparsity of the 3-D intensity field to reduce memory storage and calculation requirements.
As a rule of thumb, at least four cameras are needed for acceptable reconstruction accuracy, and best results are obtained when the cameras are placed at approximately 30 degrees normal to the measurement volume. [ 19 ] Many additional factors are necessary to consider for a successful experiment. [ citation needed ]
Tomo-PIV has been applied to a broad range of flows. Examples include the structure of a turbulent boundary layer/shock wave interaction, [ 21 ] the vorticity of a cylinder wake [ 22 ] or pitching airfoil, [ 23 ] rod-airfoil aeroacoustic experiments, [ 24 ] and measurements of small-scale, micro flows. [ 25 ] More recently, Tomo-PIV has been used together with 3-D particle tracking velocimetry to understand predator-prey interactions, [ 26 ] [ 27 ] and a portable version of Tomo-PIV has been used to study unique swimming organisms in Antarctica. [ 28 ]
Thermographic PIV is based on the use of thermographic phosphors as seeding particles. The use of these thermographic phosphors permits simultaneous measurement of velocity and temperature in a flow.
Thermographic phosphors consist of ceramic host materials doped with rare-earth or transition metal ions, which exhibit phosphorescence when they are illuminated with UV-light. The decay time and the spectra of this phosphorescence are temperature sensitive and offer two different methods to measure temperature. The decay time method consists of fitting the phosphorescence decay to an exponential function and is normally used in point measurements, although it has been demonstrated in surface measurements. The intensity ratio between two different spectral lines of the phosphorescence emission, tracked using spectral filters, is also temperature-dependent and can be employed for surface measurements.
The micrometre-sized phosphor particles used in thermographic PIV are seeded into the flow as a tracer and, after illumination with a thin laser light sheet, the temperature of the particles can be measured from the phosphorescence, normally using an intensity ratio technique. It is important that the particles are of small size so that they not only follow the flow satisfactorily but also rapidly assume its temperature. For a diameter of 2 μm, the thermal slip between particle and gas is as small as the velocity slip.
Illumination of the phosphor is achieved using UV light. Most thermographic phosphors absorb light in a broad band in the UV and therefore can be excited using an Nd:YAG laser. Theoretically, the same light can be used both for PIV and temperature measurements, but this would mean that UV-sensitive cameras are needed. In practice, two different beams originating from separate lasers are overlapped. While one of the beams is used for velocity measurements, the other is used to measure the temperature.
The use of thermographic phosphors offers some advantageous features, including the ability to survive in reactive and high-temperature environments, chemical stability, and insensitivity of their phosphorescence emission to pressure and gas composition. In addition, thermographic phosphors emit light at different wavelengths, allowing spectral discrimination against excitation light and background.
Thermographic PIV has been demonstrated for time averaged [ 29 ] and single shot [ 30 ] measurements. Recently, also time-resolved high speed (3 kHz) measurements [ 31 ] have been successfully performed.
With the development of artificial intelligence, there have been scientific publications and commercial software proposing PIV calculations based on deep learning and convolutional neural networks. The methodology used stems mainly from optical flow neural networks popular in machine vision. A data set that includes particle images is generated to train the parameters of the networks. The result is a deep neural network for PIV which can provide estimation of dense motion, down to a maximum of one vector for one pixel if the recorded images allow. AI PIV promises a dense velocity field, not limited by the size of the interrogation window, which limits traditional PIV to one vector per 16 x 16 pixels. [ 32 ]
With the advance of digital technologies, real-time processing and applications of PIV became possible. For instance, GPUs can be used to substantially speed up the direct or Fourier-transform-based correlation of individual interrogation windows. Similarly, multi-processing, parallel or multi-threading processes on several CPUs or multi-core CPUs are beneficial for the distributed processing of multiple interrogation windows or multiple images. Some of the applications use real-time image processing methods, such as FPGA-based on-the-fly image compression or image processing. More recently, real-time PIV measurement and processing capabilities have been implemented for future use in active flow control with flow-based feedback. [ 33 ]
PIV has been applied to a wide range of flow problems, varying from the flow over an aircraft wing in a wind tunnel to vortex formation in prosthetic heart valves. 3-dimensional techniques have been sought to analyze turbulent flow and jets.
Rudimentary PIV algorithms based on cross-correlation can be implemented in a matter of hours, while more sophisticated algorithms may require a significant investment of time. Several open source implementations are available. Application of PIV in the US education system has been limited due to high price and safety concerns of industrial research grade PIV systems.
PIV can also be used to measure the velocity field of the free surface and basal boundary in granular flows such as those in shaken containers, [ 34 ] tumblers [ 35 ] and avalanches.
This analysis is particularly well-suited for nontransparent media such as sand, gravel, quartz, or other granular materials that are common in geophysics. This PIV approach is called "granular PIV". The set-up for granular PIV differs from the usual PIV setup in that the optical surface structure which is produced by illumination of the surface of the granular flow is already sufficient to detect the motion. This means one does not need to add tracer particles in the bulk material. | https://en.wikipedia.org/wiki/Particle_image_velocimetry |
In quantum mechanics , the particle in a one-dimensional lattice is a problem that occurs in the model of a periodic crystal lattice . The potential is caused by ions in the periodic structure of the crystal creating an electromagnetic field so electrons are subject to a regular potential inside the lattice. It is a generalization of the free electron model , which assumes zero potential inside the lattice.
When talking about solid materials, the discussion is mainly around crystals – periodic lattices. Here we will discuss a 1D lattice of positive ions. Assuming the spacing between two ions is a , the potential in the lattice will look something like this:
The mathematical representation of the potential is a periodic function with a period a . According to Bloch's theorem , [ 1 ] the wavefunction solution of the Schrödinger equation when the potential is periodic, can be written as:
\psi(x) = e^{ikx} u(x),
where u ( x ) is a periodic function which satisfies u ( x + a ) = u ( x ) . It is the Bloch factor with Floquet exponent k which gives rise to the band structure of the energy spectrum of the Schrödinger equation with a periodic potential like the Kronig–Penney potential or a cosine function as it was shown in 1928 by Strutt. [ 2 ] The solutions can be given with the help of the Mathieu functions .
When nearing the edges of the lattice, there are problems with the boundary condition. Therefore, we can represent the ion lattice as a ring following the Born–von Karman boundary conditions . If L is the length of the lattice so that L ≫ a , then the number of ions in the lattice is so big, that when considering one ion, its surrounding is almost linear, and the wavefunction of the electron is unchanged. So now, instead of two boundary conditions we get one circular boundary condition:
\psi(0) = \psi(L).
If N is the number of ions in the lattice, then we have the relation aN = L . Replacing in the boundary condition and applying Bloch's theorem will result in a quantization for k :
\psi(0) = e^{ik\cdot 0} u(0) = e^{ikL} u(L) = \psi(L)
u(0) = e^{ikL} u(L) = e^{ikL} u(Na) \to e^{ikL} = 1
\Rightarrow kL = 2\pi n \to k = \frac{2\pi}{L} n \qquad \left(n = 0, \pm 1, \dots, \pm\frac{N}{2}\right).
The Kronig–Penney model (named after Ralph Kronig and William Penney [ 3 ] ) is a simple, idealized quantum-mechanical system that consists of an infinite periodic array of rectangular potential barriers .
The potential function is approximated by a rectangular potential:
Using Bloch's theorem , we only need to find a solution for a single period, make sure it is continuous and smooth, and to make sure the function u ( x ) is also continuous and smooth.
Considering a single period of the potential, we have two regions, which we will solve for independently.
Let E be an energy value above the well (E > 0). In the region 0 < x < a − b the solution is
\psi(x) = A e^{i\alpha x} + A' e^{-i\alpha x}, \qquad \alpha^2 = \frac{2mE}{\hbar^2},
and in the region −b < x < 0 it is
\psi(x) = B e^{i\beta x} + B' e^{-i\beta x}, \qquad \beta^2 = \frac{2m(E + V_0)}{\hbar^2}.
To find u ( x ) in each region, we need to manipulate the electron's wavefunction:
\psi(0<x<a-b) = A e^{i\alpha x} + A' e^{-i\alpha x} = e^{ikx}\left(A e^{i(\alpha-k)x} + A' e^{-i(\alpha+k)x}\right)
\Rightarrow u(0<x<a-b) = A e^{i(\alpha-k)x} + A' e^{-i(\alpha+k)x}.
And in the same manner:
u(-b<x<0) = B e^{i(\beta-k)x} + B' e^{-i(\beta+k)x}.
To complete the solution we need to make sure the probability function is continuous and smooth, i.e.:
\psi(0^-) = \psi(0^+), \qquad \psi'(0^-) = \psi'(0^+),
and that u ( x ) and u′ ( x ) are periodic:
u(-b) = u(a-b), \qquad u'(-b) = u'(a-b).
These conditions yield the following matrix equation:
\begin{pmatrix} 1 & 1 & -1 & -1 \\ \alpha & -\alpha & -\beta & \beta \\ e^{i(\alpha-k)(a-b)} & e^{-i(\alpha+k)(a-b)} & -e^{-i(\beta-k)b} & -e^{i(\beta+k)b} \\ (\alpha-k)e^{i(\alpha-k)(a-b)} & -(\alpha+k)e^{-i(\alpha+k)(a-b)} & -(\beta-k)e^{-i(\beta-k)b} & (\beta+k)e^{i(\beta+k)b} \end{pmatrix} \begin{pmatrix} A \\ A' \\ B \\ B' \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}.
For us to have a non-trivial solution, the determinant of the matrix must be 0. This leads us to the following expression:
\cos(ka) = \cos(\beta b)\cos[\alpha(a-b)] - \frac{\alpha^2+\beta^2}{2\alpha\beta}\sin(\beta b)\sin[\alpha(a-b)].
To further simplify the expression, we perform the following approximations:
b \to 0; \quad V_0 \to \infty; \quad V_0 b = \mathrm{constant}
\Rightarrow \beta^2 b = \mathrm{constant}; \quad \alpha^2 b \to 0
\Rightarrow \beta b \to 0; \quad \sin(\beta b) \to \beta b; \quad \cos(\beta b) \to 1.
The expression will now be:
\cos(ka) = \cos(\alpha a) + P\,\frac{\sin(\alpha a)}{\alpha a}, \qquad P = \frac{m V_0 b a}{\hbar^2}.
For energy values inside the well ( E < 0), we get:
\cos(ka) = \cos(\beta b)\cosh[\alpha(a-b)] - \frac{\beta^2-\alpha^2}{2\alpha\beta}\sin(\beta b)\sinh[\alpha(a-b)],
with \alpha^2 = \frac{2m|E|}{\hbar^2} and \beta^2 = \frac{2m(V_0-|E|)}{\hbar^2}.
Following the same approximations as above ( b \to 0; \; V_0 \to \infty; \; V_0 b = \mathrm{constant} ), we arrive at
\cos(ka) = \cosh(\alpha a) + P\,\frac{\sinh(\alpha a)}{\alpha a}
with the same formula for P as in the previous case \left(P = \frac{m V_0 b a}{\hbar^2}\right).
In the previous paragraph, the only variables not determined by the parameters of the physical system are the energy E and the crystal momentum k . By picking a value for E , one can compute the right-hand side, and then compute k by taking the \arccos of both sides. Thus, the expression gives rise to the dispersion relation .
The right-hand side of the last expression above can sometimes be greater than 1 or less than −1, in which case there is no value of k that can make the equation true. Since \alpha a \propto \sqrt{E} , that means there are certain values of E for which there are no eigenfunctions of the Schrödinger equation. These values constitute the band gap .
Thus, the Kronig–Penney model is one of the simplest periodic potentials to exhibit a band gap.
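As a hedged numerical sketch (not part of the source article), the allowed bands of the E > 0 dispersion relation above can be located by scanning energies and keeping those for which the right-hand side stays within [−1, 1]; the units (ℏ = m = a = 1) and the value of P are illustrative assumptions.

import numpy as np

hbar = m = a = 1.0
P = 3.0 * np.pi / 2.0                   # dimensionless barrier strength, illustrative value

E = np.linspace(1e-6, 60.0, 20000)      # energies above the well (E > 0)
alpha = np.sqrt(2.0 * m * E) / hbar
rhs = np.cos(alpha * a) + P * np.sin(alpha * a) / (alpha * a)

allowed = np.abs(rhs) <= 1.0            # band condition: a real k exists
k = np.full_like(E, np.nan)
k[allowed] = np.arccos(rhs[allowed]) / a   # reduced-zone crystal momentum for allowed energies

# Band edges appear where 'allowed' switches between True and False.
edges = E[np.flatnonzero(np.diff(allowed.astype(int)))]
print("approximate band-edge energies:", np.round(edges[:8], 2))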
An alternative treatment [ 4 ] to a similar problem is given. Here we have a delta periodic potential:
V(x) = A \cdot \sum_{n=-\infty}^{\infty} \delta(x - na).
A is some constant, and a is the lattice constant (the spacing between each site). Since this potential is periodic, we could expand it as a Fourier series:
V(x) = \sum_{K} \tilde{V}(K) \cdot e^{iKx},
where
\tilde{V}(K) = \frac{1}{a}\int_{-a/2}^{a/2} dx\, V(x)\, e^{-iKx} = \frac{1}{a}\int_{-a/2}^{a/2} dx \sum_{n=-\infty}^{\infty} A \cdot \delta(x-na)\, e^{-iKx} = \frac{A}{a}.
The wave-function, using Bloch's theorem, is equal to \psi_k(x) = e^{ikx} u_k(x) , where u_k(x) is a function that is periodic in the lattice, which means that we can expand it as a Fourier series as well:
u_k(x) = \sum_{K} \tilde{u}_k(K)\, e^{iKx}.
Thus the wave function is:
\psi_k(x) = \sum_{K} \tilde{u}_k(K)\, e^{i(k+K)x}.
Putting this into the Schrödinger equation, we get:
\left[\frac{\hbar^2 (k+K)^2}{2m} - E_k\right] \tilde{u}_k(K) + \sum_{K'} \tilde{V}(K-K')\, \tilde{u}_k(K') = 0
or rather:
\left[\frac{\hbar^2 (k+K)^2}{2m} - E_k\right] \tilde{u}_k(K) + \frac{A}{a} \sum_{K'} \tilde{u}_k(K') = 0
Now we recognize that:
u_k(0) = \sum_{K'} \tilde{u}_k(K')
Plug this into the Schrödinger equation:
\left[\frac{\hbar^2 (k+K)^2}{2m} - E_k\right] \tilde{u}_k(K) + \frac{A}{a}\, u_k(0) = 0
Solving this for \tilde{u}_k(K) we get:
\tilde{u}_k(K) = \frac{\frac{2m}{\hbar^2}\frac{A}{a}}{\frac{2mE_k}{\hbar^2} - (k+K)^2}\, u_k(0)
We sum this last equation over all values of K to arrive at:
\sum_{K} \tilde{u}_k(K) = \sum_{K} \frac{\frac{2m}{\hbar^2}\frac{A}{a}}{\frac{2mE_k}{\hbar^2} - (k+K)^2}\, u_k(0)
Or:
u_k(0) = \sum_{K} \frac{\frac{2m}{\hbar^2}\frac{A}{a}}{\frac{2mE_k}{\hbar^2} - (k+K)^2}\, u_k(0)
Conveniently, u_k(0) cancels out and we get:
1 = \sum_{K} \frac{\frac{2m}{\hbar^2}\frac{A}{a}}{\frac{2mE_k}{\hbar^2} - (k+K)^2}
Or:
\frac{\hbar^2}{2m}\frac{a}{A} = \sum_{K} \frac{1}{\frac{2mE_k}{\hbar^2} - (k+K)^2}
To save ourselves some unnecessary notational effort we define a new variable
\alpha^2 := \frac{2mE_k}{\hbar^2}
and finally our expression is:
\frac{\hbar^2}{2m}\frac{a}{A} = \sum_{K} \frac{1}{\alpha^2 - (k+K)^2}
Now, K is a reciprocal lattice vector, which means that a sum over K is actually a sum over integer multiples of \frac{2\pi}{a} :
\frac{\hbar^2}{2m}\frac{a}{A} = \sum_{n=-\infty}^{\infty} \frac{1}{\alpha^2 - \left(k + \frac{2\pi n}{a}\right)^2}
We can juggle this expression a little bit to make it more suggestive (use partial fraction decomposition ):
\begin{aligned}\frac{\hbar^2}{2m}\frac{a}{A} &= \sum_{n=-\infty}^{\infty} \frac{1}{\alpha^2 - \left(k + \frac{2\pi n}{a}\right)^2}\\&= -\frac{1}{2\alpha}\sum_{n=-\infty}^{\infty}\left[\frac{1}{\left(k + \frac{2\pi n}{a}\right) - \alpha} - \frac{1}{\left(k + \frac{2\pi n}{a}\right) + \alpha}\right]\\&= -\frac{a}{4\alpha}\sum_{n=-\infty}^{\infty}\left[\frac{1}{\pi n + \frac{ka}{2} - \frac{\alpha a}{2}} - \frac{1}{\pi n + \frac{ka}{2} + \frac{\alpha a}{2}}\right]\\&= -\frac{a}{4\alpha}\left[\sum_{n=-\infty}^{\infty}\frac{1}{\pi n + \frac{ka}{2} - \frac{\alpha a}{2}} - \sum_{n=-\infty}^{\infty}\frac{1}{\pi n + \frac{ka}{2} + \frac{\alpha a}{2}}\right]\end{aligned}
If we use a nice identity for the sum of the cotangent function, which says
\cot(x) = \sum_{n=-\infty}^{\infty}\left[\frac{1}{2\pi n + 2x} - \frac{1}{2\pi n - 2x}\right],
and plug it into our expression we get to:
\frac{\hbar^2}{2m}\frac{a}{A} = -\frac{a}{4\alpha}\left[\cot\left(\tfrac{ka}{2} - \tfrac{\alpha a}{2}\right) - \cot\left(\tfrac{ka}{2} + \tfrac{\alpha a}{2}\right)\right]
We use the sum of cot and then the product of sin (which is part of the formula for the sum of cot) to arrive at:
\cos(ka) = \cos(\alpha a) + \frac{mA}{\hbar^2 \alpha}\sin(\alpha a)
This equation shows the relation between the energy (through α ) and the wave-vector k . Since the left-hand side of the equation can only range from −1 to 1, there are limits on the values that α (and thus, the energy) can take; that is, for some ranges of values of the energy there is no solution according to this equation, and thus the system cannot have those energies: energy gaps. These are the so-called band gaps, which can be shown to exist in any shape of periodic potential (not just delta or square barriers).
For a different and detailed calculation of the gap formula (i.e. for the gap between bands) and the level splitting of eigenvalues of the one-dimensional Schrödinger equation see Müller-Kirsten. [ 5 ] Corresponding results for the cosine potential (Mathieu equation) are also given in detail in this reference.
In some cases, the Schrödinger equation can be solved analytically on a one-dimensional lattice of finite length [ 6 ] [ 7 ] using the theory of periodic differential equations. [ 8 ] The length of the lattice is assumed to be L = Na , where a is the potential period and the number of periods N is a positive integer. The two ends of the lattice are at τ and L + τ , where τ determines the point of termination. The wavefunction vanishes outside the interval [ τ , L + τ ].
The eigenstates of the finite system can be found in terms of the Bloch states of an infinite system with the same periodic potential. If there is a band gap between two consecutive energy bands of the infinite system, there is a sharp distinction between two types of states in the finite lattice. For each energy band of the infinite system, there are N − 1 bulk states whose energies depend on the length N but not on the termination τ . These states are standing waves constructed as a superposition of two Bloch states with momenta k and − k , where k is chosen so that the wavefunction vanishes at the boundaries. The energies of these states match the energy bands of the infinite system. [ 6 ]
For each band gap, there is one additional state. The energies of these states depend on the point of termination τ but not on the length N . [ 6 ] The energy of such a state can lie either at the band edge or within the band gap. If the energy is within the band gap, the state is a surface state localized at one end of the lattice, but if the energy is at the band edge, the state is delocalized across the lattice. | https://en.wikipedia.org/wiki/Particle_in_a_one-dimensional_lattice
A Particle mass analyser (PMA) is an instrument for classifying aerosol particles according to their mass-to-charge ratio using opposing electrical and centrifugal forces . This allows the classifier to select particles of a specified mass-to-charge ratio independent of particle shape. [ 1 ]
It is one of the three types of monodisperse aerosol classifier, the others being the differential mobility analyser (DMA, for electrical mobility size), and the aerodynamic aerosol classifier (AAC, for relaxation time, or aerodynamic diameter ). The corresponding three quantities are related by the expression τ = mB , where τ is relaxation time, m is mass and B is mobility.
Further work improved the technique by engineering the centrifugal force to match the electrostatic force across the whole classification region, thus increasing the throughput. [ 2 ] | https://en.wikipedia.org/wiki/Particle_mass_analyser |
Particle Mesh ( PM ) is a computational method for determining the forces in a system of particles. These particles could be atoms, stars, or fluid components and so the method is applicable to many fields, including molecular dynamics and astrophysics. The basic principle is that a system of particles is converted into a grid (or "mesh") of density values. The potential is then solved for this density grid, and forces are applied to each particle based on what cell it is in, and where in the cell it lies.
Various methods for converting a system of particles into a grid of densities exist. One method is that each particle simply gives its mass to the closest point in the mesh. Another method is the Cloud-in-Cell (CIC) method, where the particles are modelled as constant density cubes, and one particle can contribute mass to several cells.
Once the density distribution is found, the potential at each point in the mesh can be determined from the differential form of Gauss's law , which, after identifying the electric field E as the negative gradient of the electric potential Φ , gives rise to a Poisson equation that is easily solved after applying the Fourier transform. A PM calculation is therefore faster than simply adding up all the interactions on a particle due to all other particles, for two reasons: firstly, there are usually fewer grid points than particles, so the number of interactions to calculate is smaller, and secondly the grid technique permits the use of Fourier transform techniques to evaluate the potential, and these can be very fast.
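A minimal one-dimensional sketch of this scheme is given below. It uses nearest-grid-point mass assignment rather than CIC, a periodic box, and arbitrarily chosen units, so it illustrates the structure of a PM step rather than any particular production code.

```python
# Minimal 1-D particle-mesh sketch: deposit particle mass on a periodic grid,
# solve the Poisson equation with an FFT, and read the acceleration back at
# each particle's cell. All numerical values are illustrative assumptions.
import numpy as np

N, L, G = 64, 1.0, 1.0                       # grid cells, box size, coupling constant
positions = np.random.default_rng(0).random(100) * L
masses = np.ones_like(positions)

# Nearest-grid-point deposition (the simplest assignment scheme).
cells = (positions / L * N).astype(int) % N
density = np.bincount(cells, weights=masses, minlength=N) / (L / N)

# Solve d^2(phi)/dx^2 = 4*pi*G*rho in Fourier space: phi_k = -4*pi*G*rho_k / k^2.
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
rho_k = np.fft.fft(density - density.mean())   # remove the mean (k = 0 mode)
phi_k = np.zeros_like(rho_k)
phi_k[1:] = -4 * np.pi * G * rho_k[1:] / k[1:]**2
phi = np.real(np.fft.ifft(phi_k))

# Acceleration (force per unit mass) on the grid is the negative potential gradient.
accel_grid = -np.gradient(phi, L / N)
accel_on_particles = accel_grid[cells]          # each particle takes its cell's value
print(accel_on_particles[:5])
```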
PM is considered an obsolete method as it does not model close interaction between particles well. It has been supplanted by the Particle-Particle Particle-Mesh method, which uses a straight particle-particle sum between nearby particles in addition to the PM calculation.
| https://en.wikipedia.org/wiki/Particle_mesh |
Particle physics or high-energy physics is the study of fundamental particles and forces that constitute matter and radiation . The field also studies combinations of elementary particles up to the scale of protons and neutrons , while the study of combinations of protons and neutrons is called nuclear physics .
The fundamental particles in the universe are classified in the Standard Model as fermions (matter particles) and bosons (force-carrying particles). There are three generations of fermions, although ordinary matter is made only from the first fermion generation. The first generation consists of up and down quarks which form protons and neutrons , and electrons and electron neutrinos . The three fundamental interactions known to be mediated by bosons are electromagnetism , the weak interaction , and the strong interaction .
Quarks cannot exist on their own but form hadrons . Hadrons that contain an odd number of quarks are called baryons and those that contain an even number are called mesons . Two baryons, the proton and the neutron , make up most of the mass of ordinary matter. Mesons are unstable and the longest-lived last for only a few hundredths of a microsecond . They occur after collisions between particles made of quarks, such as fast-moving protons and neutrons in cosmic rays . Mesons are also produced in cyclotrons or other particle accelerators .
Particles have corresponding antiparticles with the same mass but with opposite electric charges . For example, the antiparticle of the electron is the positron . The electron has a negative electric charge, the positron has a positive charge. These antiparticles can theoretically form a corresponding form of matter called antimatter . Some particles, such as the photon , are their own antiparticle.
These elementary particles are excitations of the quantum fields that also govern their interactions. The dominant theory explaining these fundamental particles and fields, along with their dynamics, is called the Standard Model . The reconciliation of gravity to the current particle physics theory is not solved; many theories have addressed this problem, such as loop quantum gravity , string theory and supersymmetry theory .
Experimental particle physics is the study of these particles in radioactive processes and in particle accelerators such as the Large Hadron Collider . Theoretical particle physics is the study of these particles in the context of cosmology and quantum theory . The two are closely interrelated: the Higgs boson was postulated theoretically before being confirmed by experiments.
The idea that all matter is fundamentally composed of elementary particles dates from at least the 6th century BC. [ 1 ] In the 19th century, John Dalton , through his work on stoichiometry , concluded that each element of nature was composed of a single, unique type of particle. [ 2 ] The word atom , after the Greek word atomos meaning "indivisible", has since then denoted the smallest particle of a chemical element , but physicists later discovered that atoms are not, in fact, the fundamental particles of nature, but are conglomerates of even smaller particles, such as the electron . The early 20th century explorations of nuclear physics and quantum physics led to proofs of nuclear fission in 1939 by Lise Meitner (based on experiments by Otto Hahn ), and nuclear fusion by Hans Bethe in that same year; both discoveries also led to the development of nuclear weapons . Bethe's 1947 calculation of the Lamb shift is credited with having "opened the way to the modern era of particle physics". [ 3 ]
Throughout the 1950s and 1960s, a bewildering variety of particles was found in collisions of particles from beams of increasingly high energy. It was referred to informally as the " particle zoo ". Important discoveries such as the CP violation by James Cronin and Val Fitch brought new questions to matter-antimatter imbalance . [ 4 ] After the formulation of the Standard Model during the 1970s, physicists clarified the origin of the particle zoo. The large number of particles was explained as combinations of a (relatively) small number of more fundamental particles and framed in the context of quantum field theories . This reclassification marked the beginning of modern particle physics. [ 5 ] [ 6 ]
The current state of the classification of all elementary particles is explained by the Standard Model , which gained widespread acceptance in the mid-1970s after experimental confirmation of the existence of quarks . It describes the strong , weak , and electromagnetic fundamental interactions , using mediating gauge bosons . The species of gauge bosons are eight gluons , W − , W + and Z bosons , and the photon . [ 7 ] The Standard Model also contains 24 fundamental fermions (12 particles and their associated anti-particles), which are the constituents of all matter . [ 8 ] Finally, the Standard Model also predicted the existence of a type of boson known as the Higgs boson . On 4 July 2012, physicists with the Large Hadron Collider at CERN announced they had found a new particle that behaves similarly to what is expected from the Higgs boson. [ 9 ]
The Standard Model, as currently formulated, has 61 elementary particles. [ 10 ] Those elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles that have been discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature and that a more fundamental theory awaits discovery (See Theory of Everything ). In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model, since neutrinos do not have mass in the Standard Model. [ 11 ]
Modern particle physics research is focused on subatomic particles , including atomic constituents, such as electrons , protons , and neutrons (protons and neutrons are composite particles called baryons , made of quarks ), that are produced by radioactive and scattering processes; such particles are photons , neutrinos , and muons , as well as a wide range of exotic particles . [ 12 ] All particles and their interactions observed to date can be described almost entirely by the Standard Model. [ 7 ]
Dynamics of particles are also governed by quantum mechanics ; they exhibit wave–particle duality , displaying particle-like behaviour under certain experimental conditions and wave -like behaviour in others. In more technical terms, they are described by quantum state vectors in a Hilbert space , which is also treated in quantum field theory . Following the convention of particle physicists, the term elementary particles is applied to those particles that are, according to current understanding, presumed to be indivisible and not composed of other particles. [ 10 ]
Ordinary matter is made from first-generation quarks ( up , down ) and leptons ( electron , electron neutrino ). [ 13 ] Collectively, quarks and leptons are called fermions , because they have half-integer quantum spin (1/2, 3/2, etc.). This causes the fermions to obey the Pauli exclusion principle , where no two particles may occupy the same quantum state . [ 14 ] Quarks have fractional elementary electric charge (−1/3 or 2/3) [ 15 ] and leptons have whole-numbered electric charge (0 or −1). [ 16 ] Quarks also have color charge , which is labeled arbitrarily as red, green and blue, with no correlation to actual light color. [ 17 ] Because the interactions between quarks store energy which can convert to other particles when the quarks are pulled far enough apart, quarks cannot be observed independently. This is called color confinement . [ 17 ]
There are three known generations of quarks (up and down, strange and charm , top and bottom ) and leptons (electron and its neutrino, muon and its neutrino , tau and its neutrino ), with strong indirect evidence that a fourth generation of fermions does not exist. [ 18 ]
Bosons are the mediators or carriers of fundamental interactions, such as electromagnetism , the weak interaction , and the strong interaction . [ 19 ] Electromagnetism is mediated by the photon , the quanta of light . [ 20 ] : 29–30 The weak interaction is mediated by the W and Z bosons . [ 21 ] The strong interaction is mediated by the gluon , which can link quarks together to form composite particles. [ 22 ] Due to the aforementioned color confinement, gluons are never observed independently. [ 23 ] The Higgs boson gives mass to the W and Z bosons via the Higgs mechanism [ 24 ] – the gluon and photon are expected to be massless . [ 23 ] All bosons have an integer quantum spin (0 and 1) and can have the same quantum state . [ 19 ]
Most aforementioned particles have corresponding antiparticles , which compose antimatter . Normal particles have positive lepton or baryon number , and antiparticles have these numbers negative. [ 25 ] Most properties of corresponding antiparticles and particles are the same, while a few are reversed; for example, the electron's antiparticle, the positron, has an opposite charge. To differentiate between antiparticles and particles, a plus or minus sign is added in superscript . For example, the electron and the positron are denoted e − and e + . [ 26 ] However, when the particle has a charge of 0 (equal to that of its antiparticle), the antiparticle is denoted with a line above the symbol. As such, an electron neutrino is ν e , whereas its antineutrino is ν e . When a particle and an antiparticle interact with each other, they are annihilated and converted into other particles. [ 27 ] Some particles, such as the photon or gluon, are their own antiparticles. [ citation needed ]
Quarks and gluons additionally carry color charge , which governs the strong interaction. The color charges of quarks are called red, green and blue (though the particles themselves have no physical color), while those of antiquarks are called antired, antigreen and antiblue. [ 17 ] The gluon can have eight color charges , which are the result of quarks' interactions to form composite particles (gauge symmetry SU(3) ). [ 28 ]
The neutrons and protons in the atomic nuclei are baryons – the neutron is composed of two down quarks and one up quark, and the proton is composed of two up quarks and one down quark. [ 29 ] A baryon is composed of three quarks, and a meson is composed of two quarks (one normal, one anti). Baryons and mesons are collectively called hadrons . Quarks inside hadrons are governed by the strong interaction, and are thus described by quantum chromodynamics (color charges). The bound quarks must have a net color charge that is neutral, or "white" by analogy with mixing the primary colors . [ 30 ] More exotic hadrons can have other types, arrangements or numbers of quarks ( tetraquark , pentaquark ). [ 31 ]
An atom is made from protons, neutrons and electrons. [ 32 ] By modifying the particles inside a normal atom, exotic atoms can be formed. [ 33 ] A simple example would be the hydrogen-4.1 , which has one of its electrons replaced with a muon. [ 34 ]
The graviton is a hypothetical particle that can mediate the gravitational interaction, but it has not been detected or completely reconciled with current theories. [ 35 ] Many other hypothetical particles have been proposed to address the limitations of the Standard Model. Notably, supersymmetric particles aim to solve the hierarchy problem , axions address the strong CP problem , and various other particles are proposed to explain the origins of dark matter and dark energy .
The world's major particle physics laboratories are:
Theoretical particle physics attempts to develop the models, theoretical framework, and mathematical tools to understand current experiments and make predictions for future experiments (see also theoretical physics ). There are several major interrelated efforts being made in theoretical particle physics today.
One important branch attempts to better understand the Standard Model and its tests. Theorists make quantitative predictions of observables at collider and astronomical experiments, which along with experimental measurements is used to extract the parameters of the Standard Model with less uncertainty. This work probes the limits of the Standard Model and therefore expands scientific understanding of nature's building blocks. Those efforts are made challenging by the difficulty of calculating high precision quantities in quantum chromodynamics . Some theorists working in this area use the tools of perturbative quantum field theory and effective field theory , referring to themselves as phenomenologists . [ citation needed ] Others make use of lattice field theory and call themselves lattice theorists .
Another major effort is in model building where model builders develop ideas for what physics may lie beyond the Standard Model (at higher energies or smaller distances). This work is often motivated by the hierarchy problem and is constrained by existing experimental data. [ 48 ] [ 49 ] It may involve work on supersymmetry , alternatives to the Higgs mechanism , extra spatial dimensions (such as the Randall–Sundrum models ), Preon theory, combinations of these, or other ideas. Vanishing-dimensions theory is a particle physics theory suggesting that systems with higher energy have a smaller number of dimensions. [ 50 ]
A third major effort in theoretical particle physics is string theory . String theorists attempt to construct a unified description of quantum mechanics and general relativity by building a theory based on small strings, and branes rather than particles. If the theory is successful, it may be considered a " Theory of Everything ", or "TOE". [ 51 ]
There are also other areas of work in theoretical particle physics ranging from particle cosmology to loop quantum gravity . [ citation needed ]
In principle, all physics (and practical applications developed therefrom) can be derived from the study of fundamental particles. In practice, even if "particle physics" is taken to mean only "high-energy atom smashers", many technologies have been developed during these pioneering investigations that later find wide uses in society. Particle accelerators are used to produce medical isotopes for research and treatment (for example, isotopes used in PET imaging ), or used directly in external beam radiotherapy . The development of superconductors has been pushed forward by their use in particle physics. The World Wide Web and touchscreen technology were initially developed at CERN . Additional applications are found in medicine, national security, industry, computing, science, and workforce development, illustrating a long and growing list of beneficial practical applications with contributions from particle physics. [ 52 ]
Major efforts to look for physics beyond the Standard Model include the Future Circular Collider proposed for CERN [ 53 ] and the Particle Physics Project Prioritization Panel (P5) in the US that will update the 2014 P5 study that recommended the Deep Underground Neutrino Experiment , among other experiments. | https://en.wikipedia.org/wiki/Particle_physics |
Particle physics is the study of the interactions of elementary particles at high energies, whilst physical cosmology studies the universe as a single physical entity. The interface between these two fields is sometimes referred to as particle cosmology . Particle physics must be taken into account in cosmological models of the early universe , when the average energy density was very high. The processes of particle pair production , scattering and decay influence the cosmology.
As a rough approximation, a particle scattering or decay process is important at a particular cosmological epoch if its time scale is shorter than or similar to the time scale of the universe's expansion . The latter quantity is 1 H {\displaystyle {\frac {1}{H}}} where H {\displaystyle H} is the time-dependent Hubble parameter . This is roughly equal to the age of the universe at that time.
For example, the pion has a mean lifetime to decay of about 26 nanoseconds . This means that particle physics processes involving pion decay can be neglected until roughly that much time has passed since the Big Bang .
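A rough way to see this criterion at work is sketched below, assuming the radiation-era relation H ≈ 1/(2t) so that the expansion time 1/H ≈ 2t; only the order of magnitude matters.

```python
# Order-of-magnitude check of the criterion above: a decay process matters once
# its lifetime is shorter than the expansion timescale 1/H. Here we assume the
# radiation-dominated relation H ~ 1/(2t), so that 1/H ~ 2t.
pion_lifetime = 2.6e-8   # mean lifetime of the charged pion, in seconds

t_relevant = pion_lifetime / 2.0   # time after which 1/H exceeds the lifetime
print(f"pion decay becomes significant around t ~ {t_relevant:.0e} s after the Big Bang")
```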
Cosmological observations of phenomena such as the cosmic microwave background and the cosmic abundance of elements , together with the predictions of the Standard Model of particle physics, place constraints on the physical conditions in the early universe. The success of the Standard Model at explaining these observations supports its validity under conditions beyond those which can be produced in a laboratory . Conversely, phenomena discovered through cosmological observations , such as dark matter and baryon asymmetry , suggest the presence of physics that goes beyond the Standard Model .
| https://en.wikipedia.org/wiki/Particle_physics_in_cosmology |
Particle radiation is the radiation of energy by means of fast-moving subatomic particles . Particle radiation is referred to as a particle beam if the particles are all moving in the same direction, similar to a light beam .
Due to the wave–particle duality , all moving particles also have wave character. Higher energy particles more easily exhibit particle characteristics, while lower energy particles more easily exhibit wave characteristics.
Particles can be electrically charged or uncharged:
Particle radiation can be emitted by an unstable atomic nucleus (via radioactive decay ), or it can be produced from some other kind of nuclear reaction . Many types of particles may be emitted:
Mechanisms that produce particle radiation include:
Charged particles ( electrons , mesons, protons , alpha particles, heavier HZE ions , etc.) can be produced by particle accelerators . Ion irradiation is widely used in the semiconductor industry to introduce dopants into materials, a method known as ion implantation .
Particle accelerators can also produce neutrino beams. Neutron beams are mostly produced by nuclear reactors .
In radiation protection , radiation is often separated into two categories, ionizing and non-ionizing , to denote the level of danger posed to humans. Ionization is the process of removing electrons from atoms, leaving two electrically charged particles (an electron and a positively charged ion) behind. [ 1 ] The negatively charged electrons and positively charged ions created by ionizing radiation may cause damage in living tissue. Basically, a particle is ionizing if its energy is higher than the ionization energy of a typical substance, i.e., a few eV , and interacts with electrons significantly.
According to the International Commission on Non-Ionizing Radiation Protection , electromagnetic radiations from ultraviolet to infrared, to radiofrequency (including microwave) radiation, static and time-varying electric and magnetic fields, and ultrasound belong to the non-ionizing radiations. [ 2 ]
The charged particles mentioned above all belong to the ionizing radiations. When passing through matter, they ionize and thus lose energy in many small steps. The distance to the point where the charged particle has lost all its energy is called the range of the particle. The range depends upon the type of particle, its initial energy, and the material it traverses. Similarly, the energy loss per unit path length, the ' stopping power ', depends on the type and energy of the charged particle and upon the material. The stopping power and hence, the density of ionization, usually increases toward the end of range and reaches a maximum, the Bragg Peak , shortly before the energy drops to zero. [ 1 ] | https://en.wikipedia.org/wiki/Particle_radiation |
Particle size analysis , particle size measurement , or simply particle sizing , is the collective name for the technical procedures, or laboratory techniques, that determine the size range and/or the average (mean) size of the particles in a powder or liquid sample .
Particle size analysis is part of particle science , and it is generally carried out in particle technology laboratories.
The particle size measurement is typically achieved by means of devices, called Particle Size Analyzers (PSA), which are based on different technologies, such as high definition image processing , analysis of Brownian motion , gravitational settling of the particle and light scattering ( Rayleigh and Mie scattering) of the particles.
The particle size can have considerable importance in a number of industries including the chemical, food, mining, forestry, agriculture, cosmetics, pharmaceutical, energy, and aggregate industries.
Particle size analysis based on light scattering has widespread application in many fields, as it allows relatively easy optical characterization of samples enabling improved quality control of products in many industries including pharmaceutical, food, cosmetic, and polymer production. [ 1 ] Recent years have seen many advancements in light scattering technologies for particle characterization.
For particles in the lower nanometer to lower micrometer range, dynamic light scattering (DLS) [ 2 ] has now become an industry standard technique. It is also by far the most widely used light scattering technique for particle characterization in the academic world. [ 3 ] This method analyzes the fluctuations of scattered light by particles in suspension when illuminated with a laser to determine the velocity of the Brownian motion, which can then be used to obtain the hydrodynamic size of particles using the Stokes-Einstein relationship. DLS is a fast and non-invasive technique, which is also precise and highly repeatable. [ 4 ] Furthermore, since the technique is based on the measurement of light scattering as a function of time, the technique is considered absolute and the DLS instruments do not require calibration. [ 3 ] Amongst its disadvantages is the fact that it does not properly resolve highly polydisperse samples, while the presence of large particles can affect size accuracy. Other scattering techniques have emerged, such as nanoparticle tracking analysis (NTA), [ 5 ] which tracks individual particle movement through scattering using image recording. NTA also measures the hydrodynamic size of particles from the diffusion coefficient but is capable of overcoming some of the limitations posed by DLS. [ 6 ] The next generation of NTA technology is called interferometric nanoparticle tracking analysis (iNTA) [ 7 ] and is based on the interferometric scattering microscopy (iSCAT). In contrast to NTA, iNTA has a superior size resolution and gives access to the effective refractive index of the particles.
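As an illustration of the last step, the Stokes-Einstein relationship converts a measured translational diffusion coefficient into a hydrodynamic diameter. In the sketch below, the temperature, viscosity and diffusion coefficient are assumed example values rather than instrument output.

```python
# Illustrative use of the Stokes-Einstein relationship d_h = k_B*T / (3*pi*eta*D)
# to convert a diffusion coefficient measured by DLS into a hydrodynamic diameter.
# Temperature, viscosity and D are assumed example values.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.15           # absolute temperature, K
eta = 0.89e-3        # dynamic viscosity of water at 25 degC, Pa*s
D = 4.3e-12          # measured translational diffusion coefficient, m^2/s

d_h = k_B * T / (3.0 * math.pi * eta * D)
print(f"hydrodynamic diameter ~ {d_h * 1e9:.0f} nm")
```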
While the above-mentioned techniques are best suited for measuring particles typically in the submicron region, particle size analyzers (PSAs) based on static light scattering or laser diffraction (LD) [ 8 ] have become the most popular and widely used instruments for measuring particles from hundreds of nanometers to several millimeters. Similar scattering theory is also utilized in systems based on non-electromagnetic wave propagation, such as ultrasonic analyzers. In LD PSAs, a laser beam is used to irradiate a dilute suspension of particles. The light scattered by the particles in the forward direction is focused by a lens onto a large array of concentric photodetector rings. The smaller the particle is, the larger the scattering angle of the laser beam is. Thus, by measuring the angle-dependent scattered intensity, one can infer the particle size distribution using Fraunhofer or Mie scattering models. [ 9 ] [ 10 ] In the latter case, prior knowledge of the refractive index of the particle being measured as well as the dispersant is required.
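The sketch below illustrates the Fraunhofer picture behind this angle-to-size mapping: for spherical particles the forward-scattered intensity follows an Airy pattern whose angular width shrinks as the diameter grows. The wavelength, detector angle and diameters are assumed example values, not a description of any specific instrument.

```python
# Fraunhofer-model scattering by spherical particles: the forward-scattered
# intensity follows an Airy pattern, so larger particles concentrate light at
# smaller angles. All numerical values are assumed examples.
import numpy as np
from scipy.special import j1   # first-order Bessel function of the first kind

wavelength = 633e-9   # He-Ne laser wavelength, m (assumed)
theta = 20e-3         # one detector-ring angle, rad (assumed)

for d in (10e-6, 50e-6, 200e-6):          # particle diameters, m
    x = np.pi * d * np.sin(theta) / wavelength
    relative_intensity = (2 * j1(x) / x) ** 2          # normalised Airy pattern
    first_minimum = 1.22 * wavelength / d              # small-angle approximation
    print(f"d = {d*1e6:5.0f} um: I({theta*1e3:.0f} mrad) = {relative_intensity:.3f}, "
          f"first diffraction minimum near {first_minimum*1e3:.1f} mrad")
```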
Commercial LD PSAs have gained popularity due to their broad dynamic range, rapid measurement, high reproducibility and the capability to perform online measurements. However, these devices are generally large in size (~700 × 300 × 450 mm), heavy (~30 kg) and expensive (in the 50–200 K€ range). On the one hand, the large size of common devices is due to the large distance needed between the sample and the detectors to provide the desired angular resolution. Furthermore, their high price is mainly due to the use of expensive laser sources and a large number of detectors, i.e., one sensor for each scattering angle to be monitored. Some commercial devices contain up to twenty sensors. This complexity of commercial LD PSAs, together with the fact that they often require maintenance and highly trained personnel, make them impractical in the majority of online industrial applications, which require the installation of probes in processing environments, often at multiple locations. An alternative method for PSD is cuvette-based SPR technique, that simultaneously measures the particle size ranging 10 nm-10 μm and concentration in a standard spectrophotometer. The optical filter inserted in the cuvette consists of nano-photonic crystals with very high angular resolution, which enables the analysis of PSD by automatically quantifying Mie scattering and Rayleigh scattering . [ 11 ]
The application of LD PSAs is also normally restricted to dilute suspensions. This is because the optical models used to estimate the particle size distribution (PSD) are based on a single scattering approximation. In practice, most industrial processes require measuring concentrated suspensions, where multiple scattering becomes a prominent effect. Multiple scattering in dense media leads to an underestimation of the particle size since the light scattered by the particles encounters diffraction points multiple times before reaching the detector, which in turn increases the apparent scattering angle. To overcome this issue, LD PSAs require appropriate sampling and dilution systems, which increase capital investments and operational costs. Another approach is to apply multiple scattering correction models together with the optical models to compute the PSD. A large number of algorithms for multiple scattering correction can be found in the literature. [ 12 ] [ 13 ] [ 14 ] However, these algorithms typically require implementing a complex correction, which increases the computation time and is often not suitable for online measurements. [ 14 ] An alternative approach to compute the PSD without the use of optical models and complex correction factors is to apply machine learning (ML) techniques. [ 15 ]
Microfluidic diffusional sizing (MDS) is a method of particle size analysis dependent on the diffusion of particles within a laminar flow . The method has found applications in proteomics and related fields where nano-sized particles may vary in size depending on their environment. [ 16 ]
Typically, paints and coatings are subjected to multiple rounds of particle size analysis, as the particle size of the individual components influences parameters as diverse as tint strength, hiding power, gloss, viscosity, stability and weather resistance. [ 17 ]
The size of the materials being processed in an operation is very important: conveying oversize material will damage equipment and slow down production. Particle-size analysis also helps improve the effectiveness of SAG mills when crushing material.
In the building industry, the particle size can directly affect the strength of the final material, as is observed for cement . [ 18 ] Two of the most widely used techniques for the particle size characterization of minerals are sieving and laser diffraction. These techniques are faster and cheaper compared to image-based techniques.
The optimization of the particle size distribution facilitates the pumping, mixing and transportation of foodstuff. Particle size analysis is usually done with any milled food, such as coffee, flour, cocoa powder. It is especially helpful with chocolate quality to ensure there is a consistent taste and feeling when eaten. Furthermore, in the case of food emulsions , particle size analysis is relevant to predict stability and shelf-life, and optimize homogenization. [ 19 ]
The gradation of soils, or soil texture , affects water and nutrient holding and drainage capabilities. For sand-based soils, particle size can be the dominant characteristic affecting soil performances and hence crop. Sieving has long been the technique of choice for soil texture analysis, although laser diffraction instruments are increasingly used as they considerably speed up the analytical process, and provide highly reproducible results. [ 20 ]
Particle size analysis in the agriculture industry is paramount because unwanted materials will contaminate products if they are not detected. By having an automated particle size analyzer , companies can closely monitor their processes.
Wood particles used to make various types of products rely on particle-size analysis to maintain high quality standards. By doing so, companies reduce waste and become more productive.
Having properly sized particles allows aggregate companies to create long-lasting roads and other products. Particle size analysis is also routinely conducted on bitumen emulsions to predict their stability and their behavior. [ 21 ]
Particle size analyzers are also used in biology to measure protein aggregation .
DLS is a particularly appreciated technique for the characterization of nanoparticles designed for drug delivery, such as vaccines. DLS instruments are for instance part of the quality control process for mRNA vaccines formulated in lipid nanoparticle carriers. [ 22 ]
There is a large number of methods for the determination of particle size, and it is important to acknowledge that these different methods are not expected to give identical results. The size of a particle depends on the method used for its measurement, and it is important to choose the method that is most relevant to the application.
The "See also" section covers many of these techniques. In most of them, the particle size is inferred from a measurement of, for example: light scattering; electrical resistance; particle motion, rather than a direct measurement of particle diameter. This enables rapid measurement of a particle size distribution by an instrument, but does require some form of calibration or assumptions regarding the nature of the particles. Most often this includes the assumption of spherical particles, thus giving a result which is an equivalent spherical diameter . Thus, it is usual for measured particle size distributions to be different when comparing the results between different equipment. The most appropriate method to use is normally the one where the method is aligned to the end use of the data.
For example, to choose whether a chemical compound should be measured by dynamic light scattering or laser diffraction , one generally considers the expected size range, the sample type (liquid or solid), the amount of sample available, the chemical stability, as well its application field. [ 23 ] If designing a sedimentation vessel, then a sedimentation technique for sizing is most relevant. However, this approach is often not possible, and an alternative technique must be used. An online Expert system to assist in the selection (and elimination) of particle size analysis equipment has been developed. [ 24 ] | https://en.wikipedia.org/wiki/Particle_size_analysis |
Particle technology is the science and technology of handling and processing particles and powders . It encompasses the production, handling, modification, and use of a wide variety of particulate materials, both wet and dry. Particle handling may include transportation and storage. Particle sizes range from nanometers to centimeters. Particles can be characterized by diverse metrics. The scope of particle technology spans many industries including chemical, petrochemical, agricultural, food, pharmaceuticals, mineral processing , civil engineering, advanced materials, energy, and the environment. [ 1 ]
Particle technology thus deals with:
Particles are characterized by their individual size and shape, and by the distribution of these properties in bulk quantities. Spherical particles are defined by diameter or radius, and non-spherical particles are defined by the dimensions of their geometric equivalent. The space between particles in bulk means that the bulk density is less than the density of individual particles. The difference between bulk density and particle density may have implications for storage, transportation or other handling of particles. The way in which they move over each other or lock together determines stability or flowability, which is tested by the triaxial shear test .
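As a simple illustration of the bulk-density/particle-density distinction, the void fraction of a dry packed bed follows directly from the ratio of the two densities; the values below are assumed examples.

```python
# Void fraction of a dry packed bed from bulk and particle densities.
# Both density values are assumed example figures.
particle_density = 2650.0   # e.g. quartz sand, kg/m^3
bulk_density = 1600.0       # loosely packed bed, kg/m^3

void_fraction = 1.0 - bulk_density / particle_density
print(f"void fraction ~ {void_fraction:.2f}")
```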
Particle samples can be visualized using microscopy , most commonly by scanning electron microscopy (SEM) or transmission electron microscopy (TEM). [ 1 ] Both SEM and TEM can determine pore structure, surface area and structure of a particle. SEM achieves particle visualization by directing a beam of electrons at the particle sample and creating signals upon interaction with the sample, building a 3D image of the sample's topography and surface structure. TEM uses a similar beam of electrons, but the electrons are directed at a thin slice of the sample to form an image of the electrons that pass through the slice. [ 2 ] Particle microscopy can reveal properties or defects in a particle.
Optical methods can quantify particle size. Measuring the light scattering and diffraction caused by a particle is a common way of determining its size, and is used in the following techniques:
Many industries use particle technologies for particle transportation, separation and fluidization. [ 1 ] A variety of production methods are required for particulate materials due to the large differences between them. Three major areas of production techniques and their common applications are listed below.
Agglomeration is the process of primary particles (of smaller size) coming into contact with each other and forming larger clusters. It occurs in dry powders when particle size is smaller than around 10 μm or when conditions are humid, and in liquids when particles have zero surface charge. It is often induced by Brownian motion in liquids. [ 5 ]
Aggregation is another process of forming clusters from particles, but where the particles have stronger bonds due to larger surface area of contact. It occurs mostly in homogenous liquid mixtures. [ 6 ]
Crystallization , either in batches or continuous processes, allows the formation of high-purity crystalline particles from solutions. The product usually has particle size in the millimeter range. [ 6 ]
Precipitation also forms particulate product from solution. It occurs from two soluble compounds forming an insoluble product in a medium, often aqueous. While the initial particle size of the precipitate formed is only in the nanometer range, the primary particles often spontaneously agglomerate or aggregate to form much larger particles. Polymerization is a special form of precipitation where minimally soluble monomers in an aqueous solution form emulsion droplets with zero solubility. [ 6 ]
Granulation is the process of forming granular material from powders or smaller particles. It occurs when a binder liquid is mixed with ingredient particles to form compact clusters. These clusters can be further processed and compressed into tablet form for other applications. [ 6 ]
Extrusion forms objects of a fixed cross-sectional shape when the starting material is pushed through a die with the desired cross-section. This technique is often used for plastic, metal and rubber granules. In the food industry, extrusion is also used extensively for making pasta, crouton, cereal, cookie dough, pet food, etc. to achieve uniformity of these items. [ 7 ]
Comminution is the mechanical reduction of the size of solid materials. It includes crushing, cutting, grinding, milling, vibrating, and other processes. Crushing and cutting breaks down large pieces of dry or tough material to the centimeter range. Milling can be applied to both dry and wet material, resulting in particle size in the millimeter range. [ citation needed ]
Atomization is the process of breaking liquids into a spray of much smaller droplets, like an aerosol . The resulting size of these particles or droplets is usually in the nanometer to micrometer range. There are many industrial applications of liquid atomization, including spray drying , film coating, making nano-emulsions, etc. [ citation needed ] Other applications include fire sprinklers, crop sprayers, dry shampoos, etc. [ 8 ]
Emulsification is the process of dispersing particles from two or more immiscible liquids together. Oftentimes, one of the immiscible liquids is aqueous (water as solvent) and the other is organic (oil as solvent). Industrial processes usually involve dispersion of the organic solution into the aqueous solution by mixing with high-energy shears or strong turbulence. [ 9 ] Due to the unstable nature of emulsions , surfactants or emulsifiers are required to stabilize the final product to achieve longer shelf life. [ 6 ] Common applications of emulsions include food, pharmaceuticals and lubricants. Some examples of food emulsions are milk, mayonnaise, butter, and ice cream. Some examples of pharmaceutical and lubricant emulsions are ointments, creams, oil-soluble vitamins, and some medications. [ 10 ] [ 11 ] | https://en.wikipedia.org/wiki/Particle_technology |
Particle therapy is a form of external beam radiotherapy using beams of energetic neutrons , protons , or other heavier positive ions for cancer treatment. The most common type of particle therapy as of August 2021 is proton therapy . [ 1 ]
In contrast to X-rays ( photon beams) used in older radiotherapy, particle beams exhibit a Bragg peak in energy loss through the body, delivering their maximum radiation dose at or near the tumor and minimizing damage to surrounding normal tissues.
Particle therapy is also referred to more technically as hadron therapy , excluding photon and electron therapy . Neutron capture therapy , which depends on a secondary nuclear reaction, is also not considered here. Muon therapy, a rare type of particle therapy not within the categories above, has also been studied theoretically; [ 2 ] however, muons are still most commonly used for imaging, rather than therapy. [ 3 ]
Particle therapy works by aiming energetic ionizing particles at the target tumor. [ 4 ] [ 5 ] These particles damage the DNA of tissue cells, ultimately causing their death. Because of their reduced ability to repair DNA, cancerous cells are particularly vulnerable to such damage.
The figure shows how beams of electrons, X-rays or protons of different energies (expressed in MeV ) penetrate human tissue. Electrons have a short range and are therefore only of interest close to the skin (see electron therapy ). Bremsstrahlung X-rays penetrate more deeply, but the dose absorbed by the tissue then shows the typical exponential decay with increasing thickness. For protons and heavier ions, on the other hand, the dose increases while the particle penetrates the tissue and loses energy continuously. Hence the dose increases with increasing thickness up to the Bragg peak that occurs near the end of the particle's range . Beyond the Bragg peak, the dose drops to zero (for protons) or almost zero (for heavier ions).
The advantage of this energy deposition profile is that less energy is deposited into the healthy tissue surrounding the target tissue. This enables higher dose prescription to the tumor, theoretically leading to a higher local control rate, as well as achieving a low toxicity rate. [ 6 ]
The ions are first accelerated by means of a cyclotron or synchrotron . The final energy of the emerging particle beam defines the depth of penetration, and hence, the location of the maximum energy deposition. Since it is easy to deflect the beam by means of electro-magnets in a transverse direction, it is possible to employ a raster scan method, i.e., to scan the target area quickly, as the electron beam scans a TV tube. If, in addition, the beam energy and hence the depth of penetration is varied, an entire target volume can be covered in three dimensions, providing an irradiation exactly following the shape of the tumor. This is one of the great advantages compared to conventional X-ray therapy.
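A rough feel for how beam energy sets the depth of the Bragg peak can be had from the Bragg-Kleeman power-law approximation for proton range in water, R ≈ αE^p. The coefficients used below are approximate textbook values, not treatment-planning data.

```python
# Bragg-Kleeman power-law approximation for the range of protons in water,
# R[cm] ~ alpha * E[MeV]^p, used here only to show how beam energy sets the
# depth of the Bragg peak. alpha and p are approximate textbook values.
alpha, p = 0.0022, 1.77

for energy_mev in (70, 150, 230):
    range_cm = alpha * energy_mev ** p
    print(f"{energy_mev:3d} MeV protons: Bragg peak near {range_cm:4.1f} cm depth in water")
```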
At the end of 2008, 28 treatment facilities were in operation worldwide and over 70,000 patients had been treated by means of pions , [ 7 ] [ 8 ] protons and heavier ions. Most of this therapy has been conducted using protons. [ 9 ]
At the end of 2013, 105,000 patients had been treated with proton beams, [ 10 ] and approximately 13,000 patients had received carbon-ion therapy. [ 11 ]
As of April 1, 2015, for proton beam therapy, there are 49 facilities in the world, including 14 in the US with another 29 facilities under construction. For Carbon-ion therapy, there are eight centers operating and four under construction. [ 11 ] Carbon-ion therapy centers exist in Japan, Germany, Italy, and China. Two US federal agencies are hoping to stimulate the establishment of at least one US heavy-ion therapy center. [ 11 ]
Proton therapy is a type of particle therapy that uses a beam of protons to irradiate diseased tissue , most often to treat cancer . The chief advantage of proton therapy over other types of external beam radiotherapy (e.g., radiation therapy , or photon therapy) is that the dose of protons is deposited over a narrow range of depth, which results in minimal entry, exit, or scattered radiation dose to healthy nearby tissues. High dose rates are key in cancer treatment advancements. PSI demonstrated that, for a cyclotron-based proton therapy facility using momentum cooling, it is possible to achieve remarkable dose rates of 952 Gy/s and 2105 Gy/s at the Bragg peak (in water) for 70 MeV and 230 MeV beams, respectively. When combined with field-specific ridge filters, Bragg peak-based FLASH proton therapy becomes feasible. [ 12 ]
Fast neutron therapy utilizes high energy neutrons typically between 50 and 70 MeV to treat cancer . Most fast neutron therapy beams are produced by reactors, cyclotrons (d+Be) and linear accelerators. Neutron therapy is currently available in Germany, Russia, South Africa and the United States. In the United States, the only treatment center still operational is in Seattle, Washington. The Seattle center uses a cyclotron which produces a proton beam impinging upon a beryllium target.
Carbon ion therapy (C-ion RT) was pioneered at the National Institute of Radiological Sciences (NIRS) in Chiba, Japan, which began treating patients with carbon ion beams in 1994. This facility was the first to utilize carbon ions clinically, marking a significant advancement in particle therapy for cancer treatment. The therapeutic advantages of carbon ions were recognized earlier, but NIRS was instrumental in establishing its clinical application. [ 13 ] [ 14 ]
C-ion RT uses particles more massive than protons or neutrons. [ 15 ] Carbon ion radiotherapy has increasingly garnered scientific attention as technological delivery options have improved and clinical studies have demonstrated its treatment advantages for many cancers such as prostate, head and neck, lung, and liver cancers, bone and soft tissue sarcomas, locally recurrent rectal cancer, and pancreatic cancer, including locally advanced disease. It also has clear advantages to treat otherwise intractable hypoxic and radio-resistant cancers while opening the door for substantially hypo-fractionated treatment of normal and radio-sensitive disease.
By mid-2017, more than 15,000 patients had been treated worldwide in over 8 operational centers. Japan has been a conspicuous leader in this field. There are five heavy-ion radiotherapy facilities in operation and plans exist to construct several more facilities in the near future. In Germany this type of treatment is available at the Heidelberg Ion-Beam Therapy Center (HIT) and at the Marburg Ion-Beam Therapy Center (MIT). In Italy the National Centre of Oncological Hadrontherapy (CNAO) provides this treatment. Austria will open a CIRT center in 2017, with centers in South Korea, Taiwan, and China soon to open. No CIRT facility now operates in the United States but several are in various states of development. [ 16 ]
From a radiation biology standpoint, there is considerable rationale to support use of heavy-ion beams in treating cancer patients. All proton and other heavy ion beam therapies exhibit a defined Bragg peak in the body so they deliver their maximum lethal dosage at or near the tumor. This minimizes harmful radiation to the surrounding normal tissues. However, carbon-ions are heavier than protons and so provide a higher relative biological effectiveness (RBE), which increases with depth to reach the maximum at the end of the beam's range. Thus the RBE of a carbon ion beam increases as the ions advance deeper into the tumor-lying region. [ 17 ] CIRT provides the highest linear energy transfer (LET) of any currently available form of clinical radiation. [ 18 ] This high energy delivery to the tumor results in many double-strand DNA breaks which are very difficult for the tumor to repair. Conventional radiation produces principally single strand DNA breaks which can allow many of the tumor cells to survive. The higher outright cell mortality produced by CIRT may also provide a clearer antigen signature to stimulate the patient's immune system. [ 19 ] [ 20 ]
The precision of particle therapy of tumors situated in thorax and abdominal region is strongly affected by the target motion. The mitigation of its negative influence requires advanced techniques of tumor position monitoring (e.g., fluoroscopic imaging of implanted radio-opaque fiducial markers or electromagnetic detection of inserted transponders) and irradiation (gating, rescanning, gated rescanning and tumor tracking). [ 21 ] | https://en.wikipedia.org/wiki/Particle_therapy |
Particle tracking velocimetry ( PTV ) is a velocimetry method i.e. a technique to measure velocities and trajectories of moving objects. In fluid mechanics research these objects are neutrally buoyant particles that are suspended in fluid flow. As the name suggests, individual particles are tracked, so this technique is a Lagrangian approach, in contrast to particle image velocimetry (PIV), which is an Eulerian method that measures the velocity of the fluid as it passes the observation point, that is fixed in space. There are two experimental PTV methods:
The 3-D particle tracking velocimetry (PTV) belongs to the class of whole-field velocimetry techniques used in the study of turbulent flows, allowing the determination of instantaneous velocity and vorticity distributions over two or three spatial dimensions. 3-D PTV yields a time series of instantaneous 3-component velocity vectors in the form of fluid element trajectories. At any instant, the data density can easily exceed 10 velocity vectors per cubic centimeter. The method is based on stereoscopic imaging (using 2 to 4 cameras) and synchronous recording of the motion of flow tracers, i.e. small particles suspended in the flow, illuminated by a strobed light source. The 3-D particle coordinates as a function of time are then derived by use of image and photogrammetric analysis of each stereoscopic set of frames. The 3-D particle positions are tracked in the time domain to derive the particle trajectories. The ability to follow (track) a spatially dense set of individual particles for a sufficiently long period of time, and to perform statistical analysis of their properties, permits a Lagrangian description of the turbulent flow process. This is a unique advantage of the 3-D PTV method.
A typical implementation of the 3D-PTV consists of two, three or four digital cameras, installed in an angular configuration and synchronously recording the diffracted or fluorescent light from the flow tracers seeded in the flow. The flow is illuminated by a collimated laser beam, or by another source of light that is often strobed, synchronously with the camera frame rate, to reduce the effective exposure time of the moving optical targets and "freeze" their position on each frame. There is no restriction on the light to be coherent or monochromatic ; only its illuminance has to be sufficient for imaging the tracer particles in the observational volume. Particles or tracers could be fluorescent , diffractive , tracked through as many consecutive frames as possible, and on as many cameras as possible to maximize positioning accuracy. In principle, two cameras in a stereoscopic configuration are sufficient in order to determine the three coordinates of a particle in space, but in most practical situations three or four cameras are used to reach a satisfactory 3-D positioning accuracy, as well as increase the trajectory yield when studying fully turbulent flows.
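The temporal tracking step can be sketched with a toy nearest-neighbour matcher between two consecutive frames, as below. Real 3-D PTV additionally needs camera calibration and stereoscopic (photogrammetric) reconstruction, which are not shown, and all positions here are synthetic.

```python
# Toy sketch of the temporal tracking step in PTV: match particle positions in
# two consecutive frames by nearest neighbour within a search radius, then
# estimate a velocity for each matched pair. All data are synthetic.
import numpy as np

def track(frame_a, frame_b, dt, max_disp):
    """Return (index pairs, velocities) for particles matched between frames."""
    pairs, velocities = [], []
    for i, pa in enumerate(frame_a):
        d = np.linalg.norm(frame_b - pa, axis=1)    # distance to every candidate
        j = int(np.argmin(d))
        if d[j] <= max_disp:                        # accept only plausible moves
            pairs.append((i, j))
            velocities.append((frame_b[j] - pa) / dt)
    return pairs, np.array(velocities)

# Example with synthetic data: a uniform displacement of (0.1, 0.05) mm per frame.
rng = np.random.default_rng(1)
frame_a = rng.random((50, 2)) * 10.0                 # positions in mm
frame_b = frame_a + np.array([0.1, 0.05]) + rng.normal(0, 0.01, frame_a.shape)
pairs, v = track(frame_a, frame_b, dt=1e-3, max_disp=0.5)
print(len(pairs), "matches; mean velocity [mm/s]:", v.mean(axis=0).round(1))
```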
Several versions of 3D-PTV schemes exist. Most of these utilize either 3 CCDs [ 1 ] or 4 CCDs. [ 2 ]
The use of white light for illuminating the observation volume, rather than laser-based illumination, substantially reduces both the cost, and the health and safety requirements. [ citation needed ] Initial development of the 3-D PTV method started as a joint project between the Institute of Geodesy and Photogrammetry and the Institute of Hydraulics of ETH Zurich. [ citation needed ] Further developments of the technique include real-time image processing using on-camera FPGA chip. [ 3 ] | https://en.wikipedia.org/wiki/Particle_tracking_velocimetry |
Particle velocity (denoted v or SVL ) is the velocity of a particle (real or imagined) in a medium as it transmits a wave . The SI unit of particle velocity is the metre per second (m/s). In many cases this is a longitudinal wave of pressure as with sound , but it can also be a transverse wave as with the vibration of a taut string.
When applied to a sound wave through a medium of a fluid like air, particle velocity would be the physical speed of a parcel of fluid as it moves back and forth in the direction the sound wave is travelling as it passes.
Particle velocity should not be confused with the speed of the wave as it passes through the medium, i.e. in the case of a sound wave, particle velocity is not the same as the speed of sound . The wave moves relatively fast, while the particles oscillate around their original position with a relatively small particle velocity. Particle velocity should also not be confused with the velocity of individual molecules, which depends mostly on the temperature and molecular mass .
In applications involving sound, the particle velocity is usually expressed on a logarithmic decibel scale called particle velocity level . In practice, pressure sensors (microphones) are most often used to measure the sound pressure, which is then propagated to the velocity field using a Green's function .
Particle velocity, denoted v {\displaystyle \mathbf {v} } , is defined by v = ∂ δ / ∂ t {\displaystyle \mathbf {v} ={\frac {\partial \delta }{\partial t}}}
where δ {\displaystyle \delta } is the particle displacement .
The particle displacement of a progressive sine wave is given by
where
It follows that the particle velocity and the sound pressure along the direction of propagation of the sound wave x are given by
where
Taking the Laplace transforms of v {\displaystyle v} and p {\displaystyle p} with respect to time yields
Since φ v , 0 = φ p , 0 {\displaystyle \varphi _{v,0}=\varphi _{p,0}} , the amplitude of the specific acoustic impedance is given by
Consequently, the amplitude of the particle velocity is related to those of the particle displacement and the sound pressure by
Sound velocity level (SVL) or acoustic velocity level or particle velocity level is a logarithmic measure of the effective particle velocity of a sound relative to a reference value. Sound velocity level, denoted L v and measured in dB , is defined by [ 1 ] L v = 20 log 10 ( v / v 0 ) dB, {\displaystyle L_{v}=20\log _{10}\!\left({\frac {v}{v_{0}}}\right)~{\text{dB}},}
where v is the root mean square particle velocity and v 0 is the reference particle velocity.
The commonly used reference particle velocity in air is [ 2 ] v 0 = 5 × 10 −8 m/s {\displaystyle v_{0}=5\times 10^{-8}~{\text{m/s}}} .
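A minimal example of converting an RMS particle velocity into a sound velocity level with this reference (the input velocity is an assumed example value):

```python
# Converting an RMS particle velocity to a sound velocity level, using the
# reference value quoted above (5e-8 m/s). The input velocity is an example.
import math

v_ref = 5e-8   # reference particle velocity in air, m/s
v_rms = 1e-3   # example RMS particle velocity, m/s

L_v = 20 * math.log10(v_rms / v_ref)
print(f"SVL ~ {L_v:.1f} dB re 5e-8 m/s")
```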
The proper notations for sound velocity level using this reference are L v /(5 × 10 −8 m/s) or L v (re 5 × 10 −8 m/s) , but the notations dB SVL , dB(SVL) , dBSVL, or dB SVL are very common, even though they are not accepted by the SI. [ 3 ] | https://en.wikipedia.org/wiki/Particle_velocity |
In particle physics , the term particle zoo [ 1 ] [ 2 ] is used colloquially to describe the relatively extensive list of known subatomic particles by analogy to the variety of species in a zoo .
In the history of particle physics , the topic of particles was considered to be particularly confusing in the late 1960s. Before the discovery of quarks, hundreds of strongly interacting particles ( hadrons ) were known and believed to be distinct elementary particles . It was later discovered that they were not elementary particles, but rather composites of quarks . The set of particles believed today to be elementary is known as the Standard Model and includes quarks , bosons and leptons .
The term " subnuclear zoo " was coined or popularized by Robert Oppenheimer in 1956 at the VI Rochester International Conference on High Energy Physics . [ 3 ]
| https://en.wikipedia.org/wiki/Particle_zoo |
In mathematics , the Riemann zeta function is a function in complex analysis , which is also important in number theory . It is often denoted $\zeta(s)$ and is named after the mathematician Bernhard Riemann . When the argument $s$ is a real number greater than one, the zeta function satisfies the equation $$\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^{s}}\,.$$ It can therefore provide the sum of various convergent infinite series , such as $\zeta(2)=\frac{1}{1^{2}}+\frac{1}{2^{2}}+\frac{1}{3^{2}}+\ldots\,.$ Explicit or numerically efficient formulae exist for $\zeta(s)$ at integer arguments, all of which have real values, including this example. This article lists these formulae, together with tables of values. It also includes derivatives and some series composed of the zeta function at integer arguments.
The same equation in $s$ above also holds when $s$ is a complex number whose real part is greater than one, ensuring that the infinite sum still converges. The zeta function can then be extended to the whole of the complex plane by analytic continuation , except for a simple pole at $s=1$. The complex derivative exists in this more general region, making the zeta function a meromorphic function . The above equation no longer applies for these extended values of $s$, for which the corresponding summation would diverge. For example, the full zeta function exists at $s=-1$ (and is therefore finite there), but the corresponding series would be $1+2+3+\ldots\,,$ whose partial sums would grow indefinitely large.
The zeta function values listed below include function values at the negative even numbers ( s = −2 , −4 , etc. ), for which ζ ( s ) = 0 and which make up the so-called trivial zeros . The Riemann zeta function article includes a colour plot illustrating how the function varies over a continuous rectangular region of the complex plane. The successful characterisation of its non-trivial zeros in the wider plane is important in number theory, because of the Riemann hypothesis .
At zero , one has $$\zeta(0)=B_{1}^{-}=-B_{1}^{+}=-\tfrac{1}{2}.$$
At 1 there is a pole , so ζ (1) is not finite but the left and right limits are $$\lim_{\varepsilon\to 0^{\pm}}\zeta(1+\varepsilon)=\pm\infty.$$ Since it is a pole of first order, it has a complex residue $$\lim_{\varepsilon\to 0}\varepsilon\,\zeta(1+\varepsilon)=1\,.$$
For the even positive integers $n$, one has the relationship to the Bernoulli numbers $B_n$:
$$\zeta(n)=(-1)^{\frac{n}{2}+1}\frac{(2\pi)^{n}B_{n}}{2(n!)}\,.$$
The computation of $\zeta(2)$ is known as the Basel problem . The value of $\zeta(4)$ is related to the Stefan–Boltzmann law and Wien approximation in physics. The first few values are given by:
$$\begin{aligned}\zeta(2)&=1+\frac{1}{2^{2}}+\frac{1}{3^{2}}+\cdots=\frac{\pi^{2}}{6}\\ \zeta(4)&=1+\frac{1}{2^{4}}+\frac{1}{3^{4}}+\cdots=\frac{\pi^{4}}{90}\\ \zeta(6)&=1+\frac{1}{2^{6}}+\frac{1}{3^{6}}+\cdots=\frac{\pi^{6}}{945}\\ \zeta(8)&=1+\frac{1}{2^{8}}+\frac{1}{3^{8}}+\cdots=\frac{\pi^{8}}{9450}\\ \zeta(10)&=1+\frac{1}{2^{10}}+\frac{1}{3^{10}}+\cdots=\frac{\pi^{10}}{93555}\\ \zeta(12)&=1+\frac{1}{2^{12}}+\frac{1}{3^{12}}+\cdots=\frac{691\pi^{12}}{638512875}\\ \zeta(14)&=1+\frac{1}{2^{14}}+\frac{1}{3^{14}}+\cdots=\frac{2\pi^{14}}{18243225}\\ \zeta(16)&=1+\frac{1}{2^{16}}+\frac{1}{3^{16}}+\cdots=\frac{3617\pi^{16}}{325641566250}\,.\end{aligned}$$
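As a quick numerical check of the even-argument formula above (a minimal sketch; availability of the third-party mpmath library is assumed), the closed form can be compared against a direct evaluation of the zeta function:

```python
from math import factorial
from mpmath import mp, mpf, pi, zeta, bernoulli

mp.dps = 30  # 30 significant digits

# zeta(n) = (-1)**(n/2 + 1) * (2*pi)**n * B_n / (2 * n!) for even positive n
for n in range(2, 18, 2):
    closed_form = (-1)**(n // 2 + 1) * (2 * pi)**n * bernoulli(n) / (2 * factorial(n))
    assert abs(closed_form - zeta(n)) < mpf('1e-25')
    print(n, closed_form)
```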
Taking the limit $n\rightarrow\infty$, one obtains $\zeta(\infty)=1$.
The relationship between zeta at the positive even integers and powers of pi may be written as
$$a_{n}\zeta(2n)=\pi^{2n}b_{n}$$
where $a_{n}$ and $b_{n}$ are coprime positive integers for all $n$. These are given by the integer sequences OEIS : A002432 and OEIS : A046988 , respectively. Some of these values are reproduced below:
If we let $\eta_{n}=b_{n}/a_{n}$ be the coefficient of $\pi^{2n}$ as above, $$\zeta(2n)=\sum_{\ell=1}^{\infty}\frac{1}{\ell^{2n}}=\eta_{n}\pi^{2n},$$ then we find recursively,
$$\begin{aligned}\eta_{1}&=1/6\\ \eta_{n}&=\sum_{\ell=1}^{n-1}(-1)^{\ell-1}\frac{\eta_{n-\ell}}{(2\ell+1)!}+(-1)^{n+1}\frac{n}{(2n+1)!}\end{aligned}$$
This recurrence relation may be derived from that for the Bernoulli numbers .
Also, there is another recurrence:
$$\zeta(2n)=\frac{1}{n+\frac{1}{2}}\sum_{k=1}^{n-1}\zeta(2k)\,\zeta(2n-2k)\quad\text{for}\quad n>1,$$ which can be proved using $\frac{d}{dx}\cot(x)=-1-\cot^{2}(x)$.
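Both recurrences lend themselves to exact rational arithmetic. The sketch below (using only the Python standard library) computes the coefficients $\eta_n$ from the first recurrence and cross-checks $\zeta(2n)=\eta_n\pi^{2n}$ against the convolution recurrence just given:

```python
from fractions import Fraction
from math import factorial, pi

def eta(N):
    """eta_n with zeta(2n) = eta_n * pi**(2n), via the recurrence quoted above."""
    e = {1: Fraction(1, 6)}
    for n in range(2, N + 1):
        s = sum(Fraction((-1)**(l - 1), factorial(2 * l + 1)) * e[n - l] for l in range(1, n))
        e[n] = s + Fraction((-1)**(n + 1) * n, factorial(2 * n + 1))
    return e

e = eta(8)
print(e[2], e[3])   # 1/90 and 1/945, i.e. zeta(4) = pi**4/90 and zeta(6) = pi**6/945

# Cross-check with zeta(2n) = (n + 1/2)**-1 * sum_{k=1}^{n-1} zeta(2k) zeta(2n-2k)
for n in range(2, 9):
    conv = sum(float(e[k]) * pi**(2 * k) * float(e[n - k]) * pi**(2 * (n - k)) for k in range(1, n))
    assert abs(conv / (n + 0.5) - float(e[n]) * pi**(2 * n)) < 1e-12
```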
The values of the zeta function at non-negative even integers have the generating function : $$\sum_{n=0}^{\infty}\zeta(2n)x^{2n}=-\frac{\pi x}{2}\cot(\pi x)=-\frac{1}{2}+\frac{\pi^{2}}{6}x^{2}+\frac{\pi^{4}}{90}x^{4}+\frac{\pi^{6}}{945}x^{6}+\cdots$$ Since $\lim_{n\rightarrow\infty}\zeta(2n)=1$, the formula also shows that for $n\in\mathbb{N}$, $n\rightarrow\infty$, $$\left|B_{2n}\right|\sim\frac{(2n)!\,2}{(2\pi)^{2n}}.$$
The sum of the harmonic series is infinite: $$\zeta(1)=1+\frac{1}{2}+\frac{1}{3}+\cdots=\infty.$$
The value ζ (3) is also known as Apéry's constant and has a role in the electron's gyromagnetic ratio.
The value ζ (3) also appears in Planck's law .
These and additional values are:
It is known that ζ (3) is irrational ( Apéry's theorem ) and that infinitely many of the numbers ζ (2 n + 1), $n\in\mathbb{N}$, are irrational. [ 1 ] There are also results on the irrationality of values of the Riemann zeta function at the elements of certain subsets of the positive odd integers; for example, at least one of ζ (5), ζ (7), ζ (9), or ζ (11) is irrational. [ 2 ]
The values of the zeta function at positive odd integers appear in physics, specifically in correlation functions of the antiferromagnetic XXX spin chain . [ 3 ]
Most of the identities below are provided by Simon Plouffe . They are notable in that they converge quite rapidly, giving almost three digits of precision per iteration, and are thus useful for high-precision calculations.
Plouffe stated the following identities without proof. [ 4 ] Proofs were later given by other authors. [ 5 ]
$$\begin{aligned}\zeta(5)&=\frac{1}{294}\pi^{5}-\frac{72}{35}\sum_{n=1}^{\infty}\frac{1}{n^{5}(e^{2\pi n}-1)}-\frac{2}{35}\sum_{n=1}^{\infty}\frac{1}{n^{5}(e^{2\pi n}+1)}\\ \zeta(5)&=12\sum_{n=1}^{\infty}\frac{1}{n^{5}\sinh(\pi n)}-\frac{39}{20}\sum_{n=1}^{\infty}\frac{1}{n^{5}(e^{2\pi n}-1)}+\frac{1}{20}\sum_{n=1}^{\infty}\frac{1}{n^{5}(e^{2\pi n}+1)}\end{aligned}$$
$$\zeta(7)=\frac{19}{56700}\pi^{7}-2\sum_{n=1}^{\infty}\frac{1}{n^{7}(e^{2\pi n}-1)}$$
Note that the sum is in the form of a Lambert series .
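To see how quickly these series converge, here is a minimal numerical sketch (again assuming the mpmath library) that evaluates the ζ(7) identity and compares it with a direct computation; each term contributes roughly a factor $e^{-2\pi}$, i.e. almost three extra digits:

```python
from mpmath import mp, pi, exp, zeta, nsum, inf

mp.dps = 40
plouffe = 19 * pi**7 / 56700 - 2 * nsum(lambda n: 1 / (n**7 * (exp(2 * pi * n) - 1)), [1, inf])
print(plouffe)
print(zeta(7))   # the two values agree to the working precision
```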
By defining the quantities
$$S_{\pm}(s)=\sum_{n=1}^{\infty}\frac{1}{n^{s}(e^{2\pi n}\pm 1)}$$
a series of relationships can be given in the form
$$0=a_{n}\zeta(n)-b_{n}\pi^{n}+c_{n}S_{-}(n)+d_{n}S_{+}(n)$$
where a n , b n , c n and d n are positive integers. Plouffe gives a table of values:
These integer constants may be expressed as sums over Bernoulli numbers, as given in (Vepstas, 2006) below.
A fast algorithm for the calculation of Riemann's zeta function for any integer argument is given by E. A. Karatsuba. [ 6 ] [ 7 ] [ 8 ]
In general, for negative integers (and also zero), one has
$$\zeta(-n)=(-1)^{n}\frac{B_{n+1}}{n+1}$$
The so-called "trivial zeros" occur at the negative even integers:
$$\zeta(-2n)=0$$ (see also Ramanujan summation )
The first few values for negative odd integers are
$$\begin{aligned}\zeta(-1)&=-\frac{1}{12}\\ \zeta(-3)&=\frac{1}{120}\\ \zeta(-5)&=-\frac{1}{252}\\ \zeta(-7)&=\frac{1}{240}\\ \zeta(-9)&=-\frac{1}{132}\\ \zeta(-11)&=\frac{691}{32760}\\ \zeta(-13)&=-\frac{1}{12}\end{aligned}$$
However, just like the Bernoulli numbers , these do not stay small for increasingly negative odd values. For details on the first value, see 1 + 2 + 3 + 4 + · · · .
So ζ ( m ) can be used as the definition of all (including those for index 0 and 1) Bernoulli numbers.
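A short numerical sketch of the negative-integer formula above (mpmath assumed), checking $\zeta(-n)=(-1)^{n}B_{n+1}/(n+1)$ against direct evaluation:

```python
from mpmath import mp, bernoulli, zeta

mp.dps = 30
for n in range(1, 14, 2):                          # odd n gives the arguments -1, -3, ..., -13
    closed_form = (-1)**n * bernoulli(n + 1) / (n + 1)
    print(-n, closed_form, zeta(-n))               # the last two columns agree
```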
The derivative of the zeta function at the negative even integers is given by
$$\zeta'(-2n)=(-1)^{n}\frac{(2n)!}{2(2\pi)^{2n}}\zeta(2n+1)\,.$$
The first few values of which are
$$\begin{aligned}\zeta'(-2)&=-\frac{\zeta(3)}{4\pi^{2}}\\ \zeta'(-4)&=\frac{3}{4\pi^{4}}\zeta(5)\\ \zeta'(-6)&=-\frac{45}{8\pi^{6}}\zeta(7)\\ \zeta'(-8)&=\frac{315}{4\pi^{8}}\zeta(9)\,.\end{aligned}$$
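The derivative formula is easy to verify numerically; a sketch under the assumption that mpmath's zeta function accepts its derivative keyword:

```python
from mpmath import mp, zeta, pi, factorial

mp.dps = 30
for n in range(1, 5):
    closed_form = (-1)**n * factorial(2 * n) / (2 * (2 * pi)**(2 * n)) * zeta(2 * n + 1)
    numerical = zeta(-2 * n, derivative=1)      # first derivative of zeta at -2n
    print(closed_form, numerical)               # the pairs agree to working precision
```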
One also has
$$\begin{aligned}\zeta'(0)&=-\frac{1}{2}\ln(2\pi)\\ \zeta'(-1)&=\frac{1}{12}-\ln A\\ \zeta'(2)&=\frac{1}{6}\pi^{2}(\gamma+\ln 2-12\ln A+\ln\pi)\end{aligned}$$
where A is the Glaisher–Kinkelin constant . The first of these identities implies that the regularized product of the reciprocals of the positive integers is $1/\sqrt{2\pi}$, thus the amusing "equation" $\infty!=\sqrt{2\pi}$. [ 9 ]
From the logarithmic derivative of the functional equation,
$$2\,\frac{\zeta'(1/2)}{\zeta(1/2)}=\log(2\pi)+\frac{\pi\cos(\pi/4)}{2\sin(\pi/4)}-\frac{\Gamma'(1/2)}{\Gamma(1/2)}=\log(2\pi)+\frac{\pi}{2}+2\log 2+\gamma\,.$$
The following sums can be derived from the generating function: $$\sum_{k=2}^{\infty}\zeta(k)x^{k-1}=-\psi_{0}(1-x)-\gamma$$ where $\psi_{0}$ is the digamma function .
$$\begin{aligned}\sum_{k=2}^{\infty}(\zeta(k)-1)&=1\\ \sum_{k=1}^{\infty}(\zeta(2k)-1)&=\frac{3}{4}\\ \sum_{k=1}^{\infty}(\zeta(2k+1)-1)&=\frac{1}{4}\\ \sum_{k=2}^{\infty}(-1)^{k}(\zeta(k)-1)&=\frac{1}{2}\end{aligned}$$
Series related to the Euler–Mascheroni constant (denoted by γ ) are $$\begin{aligned}\sum_{k=2}^{\infty}(-1)^{k}\frac{\zeta(k)}{k}&=\gamma\\ \sum_{k=2}^{\infty}\frac{\zeta(k)-1}{k}&=1-\gamma\\ \sum_{k=2}^{\infty}(-1)^{k}\frac{\zeta(k)-1}{k}&=\ln 2+\gamma-1\end{aligned}$$
and using the principal value $$\zeta(k)=\lim_{\varepsilon\to 0}\frac{\zeta(k+\varepsilon)+\zeta(k-\varepsilon)}{2},$$ which of course affects only the value at 1, these formulae can be stated as
$$\begin{aligned}\sum_{k=1}^{\infty}(-1)^{k}\frac{\zeta(k)}{k}&=0\\ \sum_{k=1}^{\infty}\frac{\zeta(k)-1}{k}&=0\\ \sum_{k=1}^{\infty}(-1)^{k}\frac{\zeta(k)-1}{k}&=\ln 2\end{aligned}$$
and show that they depend on the principal value of ζ (1) = γ .
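A couple of the rapidly convergent sums above can be spot-checked numerically with mpmath's nsum (a sketch, assuming that library):

```python
from mpmath import mp, zeta, nsum, inf, euler

mp.dps = 25
print(nsum(lambda k: zeta(k) - 1, [2, inf]))          # -> 1
print(nsum(lambda k: (zeta(k) - 1) / k, [2, inf]))    # -> 1 - gamma
print(1 - euler)                                      # Euler-Mascheroni reference value
```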
Zeros of the Riemann zeta function other than the negative even integers are called "nontrivial zeros". The Riemann hypothesis states that the real part of every nontrivial zero must be 1/2. In other words, all known nontrivial zeros of the Riemann zeta function are of the form z = 1/2 + y i where y is a real number. The following table contains the decimal expansion of Im( z ) for the first few nontrivial zeros:
Andrew Odlyzko computed the first 2 million nontrivial zeros accurate to within $4\times 10^{-9}$, and the first 100 zeros accurate to within 1000 decimal places; see his website for the tables and bibliographies. [ 10 ] [ 11 ] A table of about 103 billion zeros with high precision (within $\pm 2^{-102}\approx\pm 2\cdot 10^{-31}$) is available for interactive access and download (although in a very inconvenient compressed format) via LMFDB . [ 12 ]
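For readers who want the numbers themselves, mpmath exposes the nontrivial zeros directly (sketch assuming that library); the imaginary parts begin 14.134725, 21.022040, 25.010858, and so on:

```python
from mpmath import mp, zetazero

mp.dps = 25
for n in range(1, 6):
    rho = zetazero(n)                 # n-th nontrivial zero in the upper half-plane
    print(rho.real, rho.imag)         # real part 1/2 on the critical line
```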
Although evaluating particular values of the zeta function is difficult, often certain ratios can be found by inserting particular values of the gamma function into the functional equation
$$\zeta(s)=2^{s}\pi^{s-1}\sin\left(\frac{\pi s}{2}\right)\Gamma(1-s)\,\zeta(1-s)$$
We have simple relations for half-integer arguments
$$\begin{aligned}\frac{\zeta(3/2)}{\zeta(-1/2)}&=-4\pi\\ \frac{\zeta(5/2)}{\zeta(-3/2)}&=-\frac{16\pi^{2}}{3}\\ \frac{\zeta(7/2)}{\zeta(-5/2)}&=\frac{64\pi^{3}}{15}\\ \frac{\zeta(9/2)}{\zeta(-7/2)}&=\frac{256\pi^{4}}{105}\end{aligned}$$
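These ratios follow from the functional equation and are easy to confirm numerically (sketch assuming mpmath):

```python
from mpmath import mp, zeta, pi, mpf

mp.dps = 25
print(zeta(mpf(3)/2) / zeta(-mpf(1)/2), -4 * pi)             # -4*pi
print(zeta(mpf(5)/2) / zeta(-mpf(3)/2), -16 * pi**2 / 3)     # -16*pi^2/3
print(zeta(mpf(7)/2) / zeta(-mpf(5)/2), 64 * pi**3 / 15)     # 64*pi^3/15
```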
Other examples follow for more complicated evaluations and relations of the gamma function. For example, a consequence of the relation
$$\Gamma\left(\tfrac{3}{4}\right)=\left(\tfrac{\pi}{2}\right)^{\tfrac{1}{4}}\operatorname{AGM}\left(\sqrt{2},1\right)^{\tfrac{1}{2}}$$
is the zeta ratio relation
$$\frac{\zeta(3/4)}{\zeta(1/4)}=2\sqrt{\frac{\pi}{(2-\sqrt{2})\operatorname{AGM}\left(\sqrt{2},1\right)}}$$
where AGM is the arithmetic–geometric mean . In a similar vein, it is possible to form radical relations, such as from
the analogous zeta relation is
$$\frac{\zeta(1/5)^{2}\,\zeta(7/10)\,\zeta(9/10)}{\zeta(1/10)\,\zeta(3/10)\,\zeta(4/5)^{2}}=\frac{(5-\sqrt{5})\left(\sqrt{10}+\sqrt{5+\sqrt{5}}\right)}{10\cdot 2^{\tfrac{3}{10}}}$$ | https://en.wikipedia.org/wiki/Particular_values_of_the_Riemann_zeta_function |
The gamma function is an important special function in mathematics . Its particular values can be expressed in closed form for integer and half-integer arguments, but no simple expressions are known for the values at rational points in general. Other fractional arguments can be approximated through efficient infinite products, infinite series, and recurrence relations.
For positive integer arguments, the gamma function coincides with the factorial . That is,
and hence
and so on. For non-positive integers, the gamma function is not defined.
For positive half-integers $\tfrac{k}{2}$, where $k$ is an odd integer greater than or equal to $3$, the function values are given exactly by
or equivalently, for non-negative integer values of n :
where n !! denotes the double factorial . In particular,
and by means of the reflection formula ,
In analogy with the half-integer formula,
where n ! ( q ) denotes the q th multifactorial of n . Numerically,
As $n$ tends to infinity,
where $\gamma$ is the Euler–Mascheroni constant and $\sim$ denotes asymptotic equivalence .
It is unknown whether these constants are transcendental in general, but $\Gamma(\tfrac{1}{3})$ and $\Gamma(\tfrac{1}{4})$ were shown to be transcendental by G. V. Chudnovsky . $\Gamma(\tfrac{1}{4})/\sqrt[4]{\pi}$ has also long been known to be transcendental, and Yuri Nesterenko proved in 1996 that $\Gamma(\tfrac{1}{4})$, $\pi$, and $e^{\pi}$ are algebraically independent .
For $n\geq 2$ at least one of the two numbers $\Gamma\left(\tfrac{1}{n}\right)$ and $\Gamma\left(\tfrac{2}{n}\right)$ is transcendental. [ 1 ]
The number $\Gamma\left(\tfrac{1}{4}\right)$ is related to the lemniscate constant $\varpi$ by
Borwein and Zucker have found that Γ( n / 24 ) can be expressed algebraically in terms of π , K ( k (1)) , K ( k (2)) , K ( k (3)) , and K ( k (6)) where K ( k ( N )) is a complete elliptic integral of the first kind . This permits efficiently approximating the gamma function of rational arguments to high precision using quadratically convergent arithmetic–geometric mean iterations. For example:
No similar relations are known for Γ( 1 / 5 ) or other denominators.
In particular, where AGM() is the arithmetic–geometric mean , we have [ 2 ]
Other formulas include the infinite products
and
where A is the Glaisher–Kinkelin constant and G is Catalan's constant .
The following two representations for Γ( 3 / 4 ) were given by I. Mező [ 3 ]
and
where θ 1 and θ 4 are two of the Jacobi theta functions .
There also exist a number of Malmsten integrals for certain values of the gamma function: [ 4 ]
Some product identities include:
In general:
Other values can be deduced from those products; for example, from the preceding equations for $\prod_{r=1}^{3}\Gamma\left(\tfrac{r}{4}\right)$, $\Gamma\left(\tfrac{1}{4}\right)$ and $\Gamma\left(\tfrac{2}{4}\right)$, one can deduce:
$$\Gamma\left(\tfrac{3}{4}\right)=\left(\tfrac{\pi}{2}\right)^{\tfrac{1}{4}}\operatorname{AGM}\left(\sqrt{2},1\right)^{\tfrac{1}{2}}$$
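Because the arithmetic–geometric mean converges quadratically, this expression is a practical way to evaluate Γ(3/4) to high precision. A minimal sketch with mpmath (which provides both agm and gamma):

```python
from mpmath import mp, agm, gamma, pi, sqrt, mpf

mp.dps = 40
agm_value = (pi / 2)**mpf('0.25') * sqrt(agm(sqrt(2), 1))
print(agm_value)
print(gamma(mpf(3) / 4))    # agrees with the AGM expression to full working precision
```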
Other rational relations include
and many more relations for Γ( n / d ) where the denominator d divides 24 or 60. [ 6 ]
Gamma quotients with algebraic values must be "poised" in the sense that the sum of arguments is the same (modulo 1) for the denominator and the numerator.
A more sophisticated example:
The gamma function at the imaginary unit i = √ −1 gives OEIS : A212877 , OEIS : A212878 :
It may also be given in terms of the Barnes G -function :
Curiously enough, $\Gamma(i)$ appears in the following integral evaluation: [ 8 ]
Here $\{\cdot\}$ denotes the fractional part .
Because of the Euler reflection formula , and the fact that $\Gamma(\bar{z})=\overline{\Gamma(z)}$, we have an expression for the modulus squared of the gamma function evaluated on the imaginary axis:
The above integral therefore relates to the phase of Γ ( i ) {\displaystyle \Gamma (i)} .
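The modulus-squared expression referred to here is the standard identity $|\Gamma(ix)|^{2}=\pi/(x\sinh(\pi x))$ for real $x$; a quick sketch checking it at $x=1$ with mpmath (library availability assumed):

```python
from mpmath import mp, gamma, mpc, pi, sinh

mp.dps = 30
x = 1
lhs = abs(gamma(mpc(0, x)))**2        # |Gamma(i)|**2
rhs = pi / (x * sinh(pi * x))
print(lhs, rhs)                       # both ~ 0.272
```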
The gamma function with other complex arguments returns
The gamma function has a local minimum on the positive real axis
with the value
Integrating the reciprocal gamma function along the positive real axis also gives the Fransén–Robinson constant .
On the negative real axis, the first local maxima and minima (zeros of the digamma function ) are:
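These extrema are zeros of the digamma function and can be located numerically with a root finder; a minimal sketch with mpmath's findroot, where the starting guesses are assumptions:

```python
from mpmath import mp, findroot, digamma, gamma

mp.dps = 25
x_min = findroot(digamma, 1.5)               # positive-axis minimum, near x ~ 1.4616
print(x_min, gamma(x_min))                   # gamma value ~ 0.8856
for guess in (-0.5, -1.5, -2.5):             # first extrema on the negative real axis
    x = findroot(digamma, guess)
    print(x, gamma(x))
```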
The only values of x > 0 for which Γ( x ) = x are x = 1 and x ≈ 3.562 382 285 390 897 691 415 644 3427 ... OEIS : A218802 . | https://en.wikipedia.org/wiki/Particular_values_of_the_gamma_function |
Particulate inorganic carbon ( PIC ) can be contrasted with dissolved inorganic carbon (DIC), the other form of inorganic carbon found in the ocean. These distinctions are important in chemical oceanography . Particulate inorganic carbon is sometimes called suspended inorganic carbon . In operational terms , it is defined as the inorganic carbon in particulate form that is too large to pass through the filter used to separate dissolved inorganic carbon.
Most PIC is calcium carbonate , CaCO 3 , particularly in the form of calcite , but also in the form of aragonite . Calcium carbonate makes up the shells of many marine organisms . It also forms during whiting events and is excreted by marine fish during osmoregulation .
Carbon compounds can be distinguished as either organic or inorganic, and dissolved or particulate, depending on their composition. Organic carbon forms the backbone of key organic compounds such as proteins , lipids , carbohydrates , and nucleic acids . Inorganic carbon is found primarily in simple compounds such as carbon dioxide, carbonic acid, bicarbonate, and carbonate (CO 2 , H 2 CO 3 , HCO 3 − , CO 3 2− respectively).
Marine carbon is further separated into particulate and dissolved phases. These pools are operationally defined by physical separation – dissolved carbon passes through a 0.2 μm filter, and particulate carbon does not.
There are two main types of inorganic carbon that are found in the oceans. Dissolved inorganic carbon (DIC) is made up of bicarbonate (HCO 3 − ), carbonate (CO 3 2− ) and carbon dioxide (including both dissolved CO 2 and carbonic acid H 2 CO 3 ). DIC can be converted to particulate inorganic carbon (PIC) through precipitation of CaCO 3 (biologically or abiotically). DIC can also be converted to particulate organic carbon (POC) through photosynthesis and chemoautotrophy (i.e. primary production). DIC increases with depth as organic carbon particles sink and are respired. Free oxygen decreases as DIC increases because oxygen is consumed during aerobic respiration.
Particulate inorganic carbon (PIC) is the other form of inorganic carbon found in the ocean. Most PIC is the CaCO 3 that makes up shells of various marine organisms, but can also form in whiting events . Marine fish also excrete calcium carbonate during osmoregulation . [ 4 ]
Some of the inorganic carbon species in the ocean, such as bicarbonate and carbonate , are major contributors to alkalinity , a natural ocean buffer that prevents drastic changes in acidity (or pH ). The marine carbon cycle also affects the reaction and dissolution rates of some chemical compounds, regulates the amount of carbon dioxide in the atmosphere and Earth's temperature. [ 5 ]
Particulate inorganic carbon (PIC) usually takes the form of calcium carbonate (CaCO 3 ), and plays a key part in the ocean carbon cycle. [ 8 ] This biologically fixed carbon is used as a protective coating for many planktonic species (coccolithophores, foraminifera) as well as larger marine organisms (mollusk shells). Calcium carbonate is also excreted at high rates during osmoregulation by fish, and can form in whiting events . [ 9 ] While this form of carbon is not directly taken from the atmospheric budget, it is formed from dissolved forms of carbonate which are in equilibrium with CO 2 and then responsible for removing this carbon via sequestration. [ 10 ]
While this process does manage to fix a large amount of carbon, two units of alkalinity are sequestered for every unit of sequestered carbon. [ 11 ] [ 12 ] The formation and sinking of CaCO 3 therefore drives a surface to deep alkalinity gradient which serves to lower the pH of surface waters, shifting the speciation of dissolved carbon to raise the partial pressure of dissolved CO 2 in surface waters, which actually raises atmospheric levels. In addition, the burial of CaCO 3 in sediments serves to lower overall oceanic alkalinity , tending to lower pH and thereby raise atmospheric CO 2 levels if not counterbalanced by the new input of alkalinity from weathering. [ 13 ] The portion of carbon that is permanently buried at the sea floor becomes part of the geologic record. Calcium carbonate often forms remarkable deposits that can then be raised onto land through tectonic motion as in the case with the White Cliffs of Dover in Southern England. These cliffs are made almost entirely of the plates of buried coccolithophores . [ 14 ]
The carbonate pump , sometimes called the carbonate counter pump, starts with marine organisms at the ocean's surface producing particulate inorganic carbon (PIC) in the form of calcium carbonate ( calcite or aragonite , CaCO 3 ). This CaCO 3 is what forms hard body parts like shells . [ 5 ] The formation of these shells increases atmospheric CO 2 due to the production of CaCO 3 [ 15 ] in the following reaction with simplified stoichiometry: [ 16 ]
Ca 2+ + 2 HCO 3 − → CaCO 3 + CO 2 + H 2 O
Coccolithophores , a nearly ubiquitous group of phytoplankton that produce shells of calcium carbonate, are the dominant contributors to the carbonate pump. [ 5 ] Due to their abundance, coccolithophores have significant implications for carbonate chemistry, both in the surface waters they inhabit and in the ocean below: they provide a major mechanism for the downward transport of CaCO 3 . [ 18 ] The air-sea CO 2 flux induced by a marine biological community can be determined by the rain ratio, the proportion of carbon from calcium carbonate compared to that from organic carbon in particulate matter sinking to the ocean floor (PIC/POC). [ 17 ] The carbonate pump acts as a negative feedback on CO 2 taken into the ocean by the solubility pump, and it operates with a smaller magnitude than the solubility pump.
The carbonate pump is sometimes referred to as the "hard tissue" component of the biological pump . [ 19 ] Some surface marine organisms, like coccolithophores , produce hard structures out of calcium carbonate, a form of particulate inorganic carbon, by fixing bicarbonate. [ 20 ] This fixation of DIC is an important part of the oceanic carbon cycle.
While the biological carbon pump fixes inorganic carbon (CO 2 ) into particulate organic carbon in the form of sugar (C 6 H 12 O 6 ), the carbonate pump fixes inorganic bicarbonate and causes a net release of CO 2 . [ 20 ] In this way, the carbonate pump could be termed the carbonate counter pump. It works counter to the biological pump by counteracting the CO 2 flux from the biological pump. [ 15 ]
An aragonite sea contains aragonite and high-magnesium calcite as the primary inorganic calcium carbonate precipitates. The chemical conditions of the seawater must be notably high in magnesium content relative to calcium (high Mg/Ca ratio) for an aragonite sea to form. This is in contrast to a calcite sea in which seawater low in magnesium content relative to calcium (low Mg/Ca ratio) favors the formation of low-magnesium calcite as the primary inorganic marine calcium carbonate precipitate.
The Early Paleozoic and the Middle to Late Mesozoic oceans were predominantly calcite seas, whereas the Middle Paleozoic through the Early Mesozoic and the Cenozoic (including today) are characterized by aragonite seas. [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ]
Aragonite seas occur due to several factors, the most obvious of these is a high seawater Mg/Ca ratio (Mg/Ca > 2), which occurs during intervals of slow seafloor spreading . [ 24 ] However, the sea level , temperature, and calcium carbonate saturation state of the surrounding system also determine which polymorph of calcium carbonate (aragonite, low-magnesium calcite, high-magnesium calcite) will form. [ 29 ] [ 30 ]
Likewise, the occurrence of calcite seas is controlled by the same suite of factors controlling aragonite seas, with the most obvious being a low seawater Mg/Ca ratio (Mg/Ca < 2), which occurs during intervals of rapid seafloor spreading. [ 24 ] [ 28 ]
A whiting event is a phenomenon that occurs when a suspended cloud of fine-grained calcium carbonate precipitates in water bodies , typically during summer months, as a result of photosynthetic microbiological activity or sediment disturbance. [ 31 ] [ 32 ] [ 33 ] The phenomenon gets its name from the white, chalky color it imbues to the water. These events have been shown to occur in temperate waters as well as tropical ones, and they can span for hundreds of meters. [ 33 ] They can also occur in both marine and freshwater environments. [ 34 ] The origin of whiting events is debated among the scientific community, and it is unclear if there is a single, specific cause. Generally, they are thought to result from either bottom sediment re-suspension or by increased activity of certain microscopic life such as phytoplankton . [ 35 ] [ 36 ] [ 31 ] Because whiting events affect aquatic chemistry, physical properties, and carbon cycling , studying the mechanisms behind them holds scientific relevance in various ways. [ 37 ] [ 32 ] [ 38 ] [ 39 ] [ 40 ]
The Great Calcite Belt (GCB) of the Southern Ocean is a region of elevated summertime upper ocean calcite concentration derived from coccolithophores , despite the region being known for its diatom predominance. The overlap of two major phytoplankton groups, coccolithophores and diatoms, in the dynamic frontal systems characteristic of this region provides an ideal setting to study environmental influences on the distribution of different species within these taxonomic groups. [ 41 ]
The Great Calcite Belt, defined as an elevated particulate inorganic carbon (PIC) feature occurring alongside seasonally elevated chlorophyll a in austral spring and summer in the Southern Ocean, [ 42 ] plays an important role in climate fluctuations, [ 43 ] [ 44 ] accounting for over 60% of the Southern Ocean area (30–60° S). [ 45 ] The region between 30° and 50° S has the highest uptake of anthropogenic carbon dioxide (CO 2 ) alongside the North Atlantic and North Pacific oceans. [ 46 ] Knowledge of the impact of interacting environmental influences on phytoplankton distribution in the Southern Ocean is limited. For example, more understanding is needed of how light and iron availability or temperature and pH interact to control phytoplankton biogeography . [ 47 ] [ 48 ] [ 49 ] Hence, if model parameterizations are to improve to provide accurate predictions of biogeochemical change, a multivariate understanding of the full suite of environmental drivers is required. [ 50 ] [ 41 ]
The Southern Ocean has often been considered as a microplankton -dominated (20–200 μm) system with phytoplankton blooms dominated by large diatoms and Phaeocystis sp. [ 51 ] [ 52 ] [ 53 ] However, since the identification of the GCB as a consistent feature [ 42 ] [ 54 ] and the recognition of picoplankton (< 2 μm) and nanoplankton (2–20 μm) importance in high-nutrient, low-chlorophyll (HNLC) waters, [ 55 ] the dynamics of small (bio)mineralizing plankton and their export need to be acknowledged. The two dominant biomineralizing phytoplankton groups in the GCB are coccolithophores and diatoms. Coccolithophores are generally found north of the polar front, [ 56 ] though Emiliania huxleyi has been observed as far south as 58° S in the Scotia Sea , [ 57 ] at 61° S across Drake Passage , [ 49 ] and at 65°S south of Australia. [ 58 ] [ 41 ]
Diatoms are present throughout the GCB, with the polar front marking a strong divide between different size fractions. [ 59 ] North of the polar front, small diatom species, such as Pseudo-nitzschia spp. and Thalassiosira spp., tend to dominate numerically, whereas large diatoms with higher silicic acid requirements (e.g., Fragilariopsis kerguelensis ) are generally more abundant south of the polar front. [ 59 ] High abundances of nanoplankton (coccolithophores, small diatoms, chrysophytes ) have also been observed on the Patagonian Shelf [ 52 ] and in the Scotia Sea . [ 60 ] Currently, few studies incorporate small biomineralizing phytoplankton to species level. [ 59 ] [ 51 ] [ 52 ] [ 60 ] Rather, the focus has often been on the larger and noncalcifying species in the Southern Ocean due to sample preservation issues (i.e., acidified Lugol’s solution dissolves calcite , and light microscopy restricts accurate identification to cells > 10 μm). [ 60 ] In the context of climate change and future ecosystem function, the distribution of biomineralizing phytoplankton is important to define when considering phytoplankton interactions with carbonate chemistry , [ 61 ] [ 62 ] and ocean biogeochemistry . [ 63 ] [ 64 ] [ 65 ] [ 41 ]
The Great Calcite Belt spans the major Southern Ocean circumpolar fronts: the Subantarctic front, the polar front, the Southern Antarctic Circumpolar Current front, and occasionally the southern boundary of the Antarctic Circumpolar Current . [ 66 ] [ 67 ] [ 68 ] The subtropical front (at approximately 10 °C) acts as the northern boundary of the GCB and is associated with a sharp increase in PIC southwards. [ 45 ] These fronts divide distinct environmental and biogeochemical zones, making the GCB an ideal study area to examine controls on phytoplankton communities in the open ocean. [ 53 ] [ 47 ] A high PIC concentration observed in the GCB (1 μmol PIC L −1 ) compared to the global average (0.2 μmol PIC L −1 ) and significant quantities of detached E. huxleyi coccoliths (in concentrations > 20,000 coccoliths mL −1 ) [ 45 ] both characterize the GCB. The GCB is clearly observed in satellite imagery [ 42 ] spanning from the Patagonian Shelf [ 69 ] [ 70 ] across the Atlantic, Indian, and Pacific oceans and completing Antarctic circumnavigation via the Drake Passage. [ 41 ]
Since the industrial revolution 30% of the anthropogenic CO 2 has been absorbed by the oceans, [ 71 ] resulting in ocean acidification , [ 72 ] which is a threat to calcifying algae . [ 73 ] [ 74 ] As a result, there has been profound interest in these calcifying algae, boosted by their major role in the global carbon cycle. [ 75 ] [ 76 ] [ 77 ] [ 78 ] [ 79 ] Globally, coccolithophores , particularly Emiliania huxleyi , are considered to be the most dominant calcifying algae, whose blooms can even be seen from space. [ 80 ] Calcifying algae create an exoskeleton from calcium carbonate platelets ( coccoliths ), providing ballast which enhances the organic and inorganic carbon flux to the deep sea. [ 75 ] [ 81 ] Organic carbon is formed by means of photosynthesis, where CO 2 is fixed and converted into organic molecules, causing removal of CO 2 from the seawater. Counterintuitively, the production of coccoliths leads to the release of CO 2 into the seawater, due to removal of carbonate from the seawater, which reduces the alkalinity and causes acidification . [ 82 ] Therefore, the ratio between particulate inorganic carbon (PIC) and particulate organic carbon (POC) is an important measure for the net release or uptake of CO 2 . In short, the PIC:POC ratio is a key characteristic required to understand and predict the impact of climate change on the global ocean carbon cycle . [ 83 ] [ 72 ] [ 77 ] [ 84 ] [ 85 ] [ 79 ] [ 86 ] | https://en.wikipedia.org/wiki/Particulate_inorganic_carbon |
Particulate organic matter ( POM ) is a fraction of total organic matter operationally defined as that which does not pass through a filter with a pore size typically ranging from 0.053 millimeters (53 μm) to 2 millimeters. [ 3 ]
Particulate organic carbon (POC) is a closely related term often used interchangeably with POM. POC refers specifically to the mass of carbon in the particulate organic material, while POM refers to the total mass of the particulate organic matter. In addition to carbon, POM includes the mass of the other elements in the organic matter, such as nitrogen, oxygen and hydrogen. In this sense POC is a component of POM and there is typically about twice as much POM as POC. [ 4 ] Many statements that can be made about POM apply equally to POC, and much of what is said in this article about POM could equally have been said of POC.
Particulate organic matter is sometimes called suspended organic matter, macroorganic matter, or coarse fraction organic matter. When land samples are isolated by sieving or filtration, this fraction includes partially decomposed detritus and plant material, pollen , and other materials. [ 5 ] [ 6 ] When sieving to determine POM content, consistency is crucial because isolated size fractions will depend on the force of agitation. [ 7 ]
POM is readily decomposable, serving many soil functions and providing terrestrial material to water bodies. It is a source of food for both soil organisms and aquatic organisms and provides nutrients for plants. In water bodies, POM can contribute substantially to turbidity, limiting photic depth which can suppress primary productivity. POM also enhances soil structure leading to increased water infiltration , aeration and resistance to erosion . [ 5 ] [ 8 ] Soil management practices, such as tillage and compost / manure application, alter the POM content of soil and water. [ 5 ] [ 6 ]
Particulate organic carbon (POC) is operationally defined as all combustible, non- carbonate carbon that can be collected on a filter . The oceanographic community has historically used a variety of filters and pore sizes, most commonly 0.7, 0.8, or 1.0 μm glass or quartz fiber filters. The biomass of living zooplankton is intentionally excluded from POC through the use of a pre-filter or specially designed sampling intakes that repel swimming organisms. [ 9 ] Sub-micron particles, including most marine prokaryotes , which are 0.2–0.8 μm in diameter, are often not captured but should be considered part of POC rather than dissolved organic carbon (DOC), which is usually operationally defined as < 0.2 μm. [ 10 ] [ 9 ] Typically POC is considered to contain suspended and sinking particles ≥ 0.2 μm in size, which therefore includes biomass from living microbial cells, detrital material including dead cells, fecal pellets , other aggregated material, and terrestrially-derived organic matter. Some studies further divide POC operationally based on its sinking rate or size, [ 11 ] with ≥ 51 μm particles sometimes equated to the sinking fraction. [ 12 ] Both DOC and POC play major roles in the carbon cycle , but POC is the major pathway by which organic carbon produced by phytoplankton is exported – mainly by gravitational settling – from the surface to the deep ocean and eventually to sediments , and is thus a key component of the biological pump . [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 9 ]
Soil organic matter is anything in the soil of biological origin. Carbon is its key component comprising about 58% by weight. Simple assessment of total organic matter is obtained by measuring organic carbon in soil. Living organisms (including roots) contribute about 15% of the total organic matter in soil. These are critical to operation of the soil carbon cycle . What follows refers to the remaining 85% of the soil organic matter - the non-living component. [ 18 ]
As shown below, non-living organic matter in soils can be grouped into four distinct categories on the basis of size, behaviour and persistence. [ 19 ] These categories are arranged in order of decreasing ability to decompose. Each of them contribute to soil health in different ways. [ 19 ] [ 18 ]
relatively simple molecules from decomposing materials (< 0.45 microns)
litter of plant and herbivore origin (< 2 mm)
detritus (2 mm – 54 micron)
amorphous colloidal particles (< 53 microns)
charcoals and related compounds
Dissolved organic matter (DOM): is the organic matter which dissolves in soil water. It comprises the relatively simple organic compounds (e.g. organic acids, sugars and amino acids) which easily decompose. It has a turnover time of less than 12 months. Exudates from plant roots (mucilages and gums) are included here. [ 18 ]
Particulate organic matter (POM): is the organic matter that retains evidence of its original cellular structure, [ 18 ] and is discussed further in the next section.
Humus : is usually the largest proportion of organic matter in soil, contributing 45 to 75%. Typically it adheres to soil minerals, and plays an important role structuring soil. Humus is the end product of soil organism activity, is chemically complex, and does not have recognisable characteristics of its origin. Humus is of very small unit size and has large surface area in relation to its weight. It holds nutrients, has high water holding capacity and significant cation exchange capacity , buffers pH change and can hold cations. Humus is quite slow to decompose and exists in soil for decades. [ 18 ]
Resistant organic matter: has a high carbon content and includes charcoal, charred plant materials, graphite and coal. Turnover times are long and estimated in hundreds of years. It is not biologically active but contributes positively to soil structural properties, including water holding capacity, cation exchange capacity and thermal properties. [ 18 ]
Particulate organic matter (POM) includes steadily decomposing plant litter and animal faeces, and the detritus from the activity of microorganisms. Most of it continually undergoes decomposition by microorganisms (when conditions are sufficiently moist) and usually has a turnover time of less than 10 years. Less active parts may take 15 to 100 years to turnover. Where it is still at the soil surface and relatively fresh, particulate organic matter intercepts the energy of raindrops and protects physical soil surfaces from damage. As it is decomposes, particulate organic matter provides much of the energy required by soil organisms as well as providing a steady release of nutrients into the soil environment. [ 18 ]
The decomposition of POM provides energy and nutrients. Nutrients not taken up by soil organisms may be available for plant uptake. [ 6 ] The amount of nutrients released ( mineralized ) during decomposition depends on the biological and chemical characteristics of the POM, such as the C:N ratio . [ 6 ] In addition to nutrient release, decomposers colonizing POM play a role in improving soil structure. [ 20 ] Fungal mycelia entangle soil particles and release sticky, cement-like polysaccharides into the soil, ultimately forming soil aggregates. [ 20 ]
Soil POM content is affected by organic inputs and the activity of soil decomposers. The addition of organic materials, such as manure or crop residues , typically results in an increase in POM. [ 6 ] Alternatively, repeated tillage or soil disturbance increases the rate of decomposition by exposing soil organisms to oxygen and organic substrates ; ultimately, depleting POM. Reduction in POM content is observed when native grasslands are converted to agricultural land. [ 5 ] Soil temperature and moisture also affect the rate of POM decomposition. [ 6 ] Because POM is a readily available (labile) source of soil nutrients, is a contributor to soil structure, and is highly sensitive to soil management, it is frequently used as an indicator to measure soil quality . [ 8 ]
In poorly-managed soils, particularly on sloped ground, erosion and transport of soil sediment rich in POM can contaminate water bodies. [ 8 ] Because POM provides a source of energy and nutrients, rapid build-up of organic matter in water can result in eutrophication . [ 20 ] Suspended organic materials can also serve as a potential vector for the pollution of water with fecal bacteria , toxic metals or organic compounds.
Life and particulate organic matter in the ocean have fundamentally shaped the planet. On the most basic level, particulate organic matter can be defined as both living and non-living matter of biological origin with a size of ≥0.2 μm in diameter, including anything from a small bacterium (0.2 μm in size) to blue whales (20 m in size). [ 22 ] Organic matter plays a crucial role in regulating global marine biogeochemical cycles and events, from the Great Oxidation Event in Earth's early history [ 23 ] to the sequestration of atmospheric carbon dioxide in the deep ocean. [ 24 ] Understanding the distribution, characteristics, dynamics, and changes over time of particulate matter in the ocean is hence fundamental in understanding and predicting the marine ecosystem, from food web dynamics to global biogeochemical cycles. [ 25 ] [ 26 ]
Optical particle measurements are emerging as an important technique for understanding the ocean carbon cycle, including contributions to estimates of their downward flux, which sequesters carbon dioxide in the deep sea. Optical instruments can be used from ships or installed on autonomous platforms, delivering much greater spatial and temporal coverage of particles in the mesopelagic zone of the ocean than traditional techniques, such as sediment traps . Technologies to image particles have advanced greatly over the last two decades, but the quantitative translation of these immense datasets into biogeochemical properties remains a challenge. In particular, advances are needed to enable the optimal translation of imaged objects into carbon content and sinking velocities. In addition, different devices often measure different optical properties, leading to difficulties in comparing results. [ 25 ]
Marine primary production can be divided into new production from allochthonous nutrient inputs to the euphotic zone , and regenerated production from nutrient recycling in the surface waters. The total new production in the ocean roughly equates to the sinking flux of particulate organic matter to the deep ocean, about 4 × 10 9 tons of carbon annually. [ 27 ]
Sinking oceanic particles encompass a wide range of shape, porosity, ballast and other characteristics. The model shown in the diagram at the right attempts to capture some of the predominant features that influence the shape of the sinking flux profile (red line). [ 21 ] The sinking of organic particles produced in the upper sunlit layers of the ocean forms an important limb of the oceanic biological pump, which impacts the sequestration of carbon and resupply of nutrients in the mesopelagic ocean. Particles raining out from the upper ocean undergo remineralization by bacteria colonized on their surface and interior, leading to an attenuation in the sinking flux of organic matter with depth. The diagram illustrates a mechanistic model for the depth-dependent, sinking, particulate mass flux constituted by a range of sinking, remineralizing particles. [ 21 ]
Marine snow varies in shape, size and character, ranging from individual cells to pellets and aggregates, most of which is rapidly colonized and consumed by heterotrophic bacteria, contributing to the attenuation of the sinking flux with depth. [ 21 ]
The range of recorded sinking velocities of particles in the oceans spans from negative (particles float toward the surface) [ 28 ] [ 29 ] to several km per day (as with salp fecal pellets). [ 30 ] When considering the sinking velocity of an individual particle, a first approximation can be obtained from Stokes' law (originally derived for spherical, non-porous particles and laminar flow) combined with White's approximation, [ 31 ] which together suggest that sinking velocity increases linearly with excess density (the difference from the water density) and the square of particle diameter (i.e., linearly with the particle area). Building on these expectations, many studies have tried to relate sinking velocity primarily to size, which has been shown to be a useful predictor for particles generated in controlled environments (e.g., roller tanks). [ 32 ] [ 33 ] [ 34 ] However, strong relationships were only observed when all particles were generated using the same water/plankton community. [ 35 ] When particles were made by different plankton communities, size alone was a bad predictor (e.g., Diercks and Asper, 1997), strongly supporting notions that particle densities and shapes vary widely depending on the source material. [ 35 ] [ 25 ]
Packaging and porosity contribute appreciably to determining sinking velocities. On the one hand, adding ballasting materials, such as diatom frustules, to aggregates may lead to an increase in sinking velocities owing to the increase in excess density. On the other hand, the addition of ballasting mineral particles to marine particle populations frequently leads to smaller more densely packed aggregates that sink slower because of their smaller size. [ 36 ] [ 37 ] Mucous-rich particles have been shown to float despite relatively large sizes, [ 28 ] [ 38 ] whereas oil- or plastic-containing aggregates have been shown to sink rapidly despite the presence of substances with an excess density smaller than seawater. [ 39 ] [ 40 ] In natural environments, particles are formed through different mechanisms, by different organisms, and under varying environmental conditions that affect aggregation (e.g., salinity, pH, minerals), ballasting (e.g., dust deposition, sediment load; [ 35 ] [ 34 ] van der Jagt et al., 2018) and sinking behaviour (e.g., viscosity; [ 41 ] ). A universal conversion of size-to-sinking velocity is hence impracticable. [ 42 ] [ 25 ]
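As a concrete illustration of the Stokes' law scaling discussed above, the sketch below computes a terminal sinking velocity for a small, solid sphere at low Reynolds number; all particle and fluid properties are assumed example values rather than measurements, and real marine aggregates are porous and often deviate strongly from this idealization:

```python
def stokes_sinking_velocity(diameter_m, particle_density, fluid_density=1025.0,
                            dynamic_viscosity=1.0e-3, g=9.81):
    """Terminal velocity (m/s) of a solid sphere: w = g * (rho_p - rho_f) * d**2 / (18 * mu)."""
    return g * (particle_density - fluid_density) * diameter_m**2 / (18.0 * dynamic_viscosity)

d = 100e-6                                              # a 100 micrometre particle
w1 = stokes_sinking_velocity(d, particle_density=1100.0)
w2 = stokes_sinking_velocity(2 * d, particle_density=1100.0)
print(w1 * 86400, "m per day")                          # on the order of tens of metres per day
print(w2 / w1)                                          # doubling the diameter gives 4x the speed
```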
Along with dissolved organic matter , POM drives the lower aquatic food web by providing energy in the form of carbohydrates, sugars, and other polymers that can be degraded. POM in water bodies is derived from terrestrial inputs (e.g. soil organic matter, leaf litterfall), submerged or floating aquatic vegetation, or autochthonous production of algae (living or detrital). Each source of POM has its own chemical composition that affects its lability, or accessibility to the food web. Algal-derived POM is thought to be most labile, but there is growing evidence that terrestrially-derived POM can supplement the diets of micro-organisms such as zooplankton when primary productivity is limited. [ 43 ] [ 44 ]
The dynamics of the particulate organic carbon (POC) pool in the ocean are central to the marine carbon cycle . POC is the link between surface primary production, the deep ocean, and sediments. The rate at which POC is degraded in the dark ocean can impact atmospheric CO 2 concentration. Therefore, a central focus of marine organic geochemistry studies is to improve the understanding of POC distribution, composition, and cycling. The last few decades have seen improvements in analytical techniques that have greatly expanded what can be measured, both in terms of organic compound structural diversity and isotopic composition, and complementary molecular omics studies . [ 9 ]
As illustrated in the diagram, phytoplankton fix carbon dioxide in the euphotic zone using solar energy and produce POC. POC formed in the euphotic zone is processed by marine microorganisms (microbes), zooplankton and their consumers into organic aggregates ( marine snow ), which is then exported to the mesopelagic (200–1000 m depth) and bathypelagic zones by sinking and vertical migration by zooplankton and fish. [ 46 ] [ 47 ] [ 48 ]
The biological carbon pump describes the collection of biogeochemical processes associated with the production, sinking, and remineralization of organic carbon in the ocean. [ 49 ] [ 50 ] In brief, photosynthesis by microorganisms in the upper tens of meters of the water column fixes inorganic carbon (any of the chemical species of dissolved carbon dioxide) into biomass . When this biomass sinks to the deep ocean, a portion of it fuels the metabolism of the organisms living there, including deep-sea fish and benthic organisms. [ 48 ] Zooplankton play a critical role in shaping particle flux through ingestion and fragmentation of particles, [ 51 ] [ 52 ] [ 53 ] [ 54 ] [ 55 ] [ 56 ] production of fast-sinking fecal material [ 48 ] [ 30 ] and active vertical migration. [ 57 ] [ 58 ] [ 59 ] [ 25 ]
Besides the importance of "exported" organic carbon as a food source for deep ocean organisms, the biological carbon pump provides a valuable ecosystem function: Exported organic carbon transports an estimated 5–20 Gt C each year to the deep ocean, [ 60 ] where some of it (~0.2–0.5 Gt C) [ 61 ] is sequestered for several millennia. The biological carbon pump is hence of similar magnitude to current carbon emissions from fossil fuels (~10 Gt C year−1). Any changes in its magnitude caused by a warming world may have direct implications for both deep-sea organisms and atmospheric carbon dioxide concentrations. [ 62 ] [ 47 ] [ 25 ]
The magnitude and efficiency (amount of carbon sequestered relative to primary production) of the biological carbon pump, and hence ocean carbon storage, is partly determined by the amount of organic matter exported and the rate at which it is remineralized (i.e., the rate at which sinking organic matter is reworked and respired in the mesopelagic zone). [ 62 ] [ 63 ] [ 64 ] Particle size and composition in particular are important parameters determining how fast a particle sinks, [ 65 ] [ 63 ] how much material it contains, [ 66 ] and which organisms can find and utilize it. [ 67 ] [ 68 ] [ 69 ] [ 25 ]
Sinking particles can be phytoplankton, zooplankton, detritus, fecal pellets, or a mix of these. [ 70 ] [ 71 ] [ 48 ] They range in size from a few micrometers to several centimeters, with particles of a diameter of >0.5 mm being referred to as marine snow . [ 72 ] In general, particles in a fluid are thought to sink once their densities are higher than that of the ambient fluid, i.e., when excess densities are larger than zero. Larger individual phytoplankton cells can thus contribute to sedimentary fluxes. For example, large diatom cells and diatom chains with a diameter of >5 μm have been shown to sink at rates of up to several tens of meters per day, though this is only possible owing to the heavy ballast of a silica frustule . [ 73 ] [ 74 ] Both size and density affect particle sinking velocity; for example, for sinking velocities that follow Stokes' law , doubling the size of the particle increases the sinking speed by a factor of 4. [ 75 ] [ 73 ] However, the highly porous nature of many marine particles means that they do not obey Stokes' law , because small changes in particle density (i.e., compactness) can have a large impact on their sinking velocities. [ 63 ] Large sinking particles are typically of two types: (1) aggregates formed from a number of primary particles, including phytoplankton, bacteria, fecal pellets , live protozoa and zooplankton and debris, and (2) zooplankton fecal pellets , which can dominate particle flux events and sink at velocities exceeding 1,000 m d −1 . [ 48 ] [ 25 ]
Knowing the size, abundance, structure and composition (e.g. carbon content) of settling particles is important as these characteristics impose fundamental constraints on the biogeochemical cycling of carbon. For example, changes in climate are expected to facilitate a shift in species composition in a manner that alters the elemental composition of particulate matter, cell size and the trajectory of carbon through the food web , influencing the proportion of biomass exported to depth. [ 76 ] As such, any climate-induced change in the structure or function of phytoplankton communities is likely to alter the efficiency of the biological carbon pump, with feedbacks on the rate of climate change. [ 77 ] [ 78 ] [ 25 ]
The consumption of the bioluminescent POC by fish can lead to the emission of bioluminescent fecal pellets (repackaging), which can also be produced with non-bioluminescent POC if the fish gut is already charged with bioluminescent bacteria. [ 80 ]
In the diagram on the right, the sinking POC is moving downward followed by a chemical plume. [ 81 ] The plain white arrows represent the carbon flow. Panel (a) represents the classical view of a non-bioluminescent particle. The length of the plume is identified by the scale on the side. [ 82 ] Panel (b) represents the case of a glowing particle in the bioluminescence shunt hypothesis. Bioluminescent bacteria are represented aggregated onto the particle. Their light emission is shown as a bluish cloud around it. Blue dotted arrows represent the visual detection and the movement toward the particle of the consumer organisms. Increasing the visual detection allows a better detection by upper trophic levels, potentially leading to the fragmentation of sinking POC into suspended POC due to sloppy feeding. [ 80 ] | https://en.wikipedia.org/wiki/Particulate_organic_matter |
Particulate pollution is pollution of an environment that consists of particles suspended in some medium. There are three primary forms: atmospheric particulate matter , [ 1 ] marine debris , [ 2 ] and space debris . [ 3 ] Some particles are released directly from a specific source, while others form in chemical reactions in the atmosphere. Particulate pollution can be derived from either natural sources or anthropogenic processes.
Atmospheric particulate matter , also known as particulate matter , or PM, describes solids and/or liquid particles suspended in a gas , most commonly the Earth 's atmosphere . [ 1 ] Particles in the atmosphere can be divided into two types, depending on the way they are emitted. Primary particles, such as mineral dust , are emitted into the atmosphere. [ 4 ] Secondary particles, such as ammonium nitrate , are formed in the atmosphere through gas-to-particle conversion. [ 4 ]
Some particulates occur naturally, originating from volcanoes , dust storms , forest and grassland fires , living vegetation and sea spray . Human activities, such as the burning of fossil fuels in vehicles, [ 5 ] wood burning , [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] stubble burning , power plants , road dust , wet cooling towers in cooling systems and various industrial processes, also generate significant amounts of particulates. In developing countries, coal combustion is the primary method for heating homes and supplying energy. Because salt spray over the oceans is overwhelmingly the most common form of particulate in the atmosphere, anthropogenic aerosols—those made by human activities—currently account for about 10 percent of the total mass of aerosols in our atmosphere. [ 11 ]
Microplastics are an emerging source of atmospheric pollution, particularly fine plastic fibers that are light enough to be carried by the wind. [ 12 ] Microplastics traveling in the air cannot be traced back to their specific original sources, as the wind can blow the infinitesimal particles thousands of miles from where they were originally shed. Microplastics are being found in very remote regions of the Earth, where there are no apparent nearby sources of plastic. [ 13 ] A common source of airborne microplastic fibers is plastic textiles. While most atmospheric microplastics tend to come from land, microplastics are also entering the atmosphere through ocean and sea mist. [ 14 ]
Domestic combustion pollution is mainly composed of emissions from burning fuel, including wood, gas, and charcoal, in activities such as heating, cooking, agriculture, and wildfires. [ 15 ] Major domestic pollutants include carbon dioxide (17%), carbon monoxide (13%), nitrogen monoxide (6%), polycyclic aromatic hydrocarbons, and fine and ultrafine particles. [ 16 ]
In the United Kingdom domestic combustion is the largest single source of PM2.5 annually. [ 17 ] [ 18 ] In some towns and cities in New South Wales wood smoke may be responsible for 60% of fine particle air pollution in the winter. [ 19 ] Research conducted about biomass burning in 2015, estimated that 38% of European total particulate pollution emissions are composed of domestic wood burning. [ 20 ]
Particulate pollutants are often microscopic in size, which enables them to infiltrate interior spaces even if windows and doors are closed. [ citation needed ] Black carbon, the main component of woodsmoke, appears in the indoor environment at significant levels compared to other ambient pollutants. [ citation needed ] If a room is sealed tightly enough to prevent woodsmoke transmission, it will also prevent oxygen exchange between indoors and outdoors. [ citation needed ] Regular dust masks also offer little protection against particulate pollutants, since they are designed to filter out larger particles. [ 21 ] Masks with HEPA filters can filter out microscopic pollutants but can cause difficulty breathing for people with lung disease. [ 21 ]
Living under high concentrations of pollutants can lead to headaches, fatigue, lung disease, asthma, and throat and eye irritation. [ 15 ] One of the most common diseases among those living among pollutants is chronic obstructive pulmonary disease (COPD). [ 22 ] Exposure to wood and charcoal smoke is significantly associated with COPD diagnoses among those living in developing and developed countries. [ 21 ] Exposure to woodsmoke stresses the respiratory system and increases the risk of hospital admissions. [ 21 ]
Marine debris and marine aerosols refer to particulates suspended in a liquid , usually water on the Earth 's surface. Particulates in water are a kind of water pollution measured as total suspended solids , a water quality measurement listed as a conventional pollutant in the U.S. Clean Water Act , a water quality law . [ 23 ] Notably, some of the same kinds of particles can be suspended both in air and water , and pollutants specifically may be carried in the air and deposited in water, or fall to the ground as acid rain . [ 24 ] The majority of marine aerosols are created through the bubble bursting of breaking waves and capillary action on the ocean surface due to the stress exerted from surface winds. [ 2 ] Pure sea salt aerosols are the major component of marine aerosols, with a global emission of between 2,000 and 10,000 teragrams annually. [ 2 ] Through interactions with water, many marine aerosols help to scatter light, and aid in cloud condensation and ice nuclei (IN), thus affecting the atmospheric radiation budget . [ 2 ] When they interact with anthropogenic pollution, marine aerosols can affect biogeochemical cycles through the depletion of acids such as nitric acid and halogens . [ 2 ]
Space debris describes particulates in the vacuum of outer space , specifically particles originating from human activity that remain in geocentric orbit around the Earth . The International Association of Astronauts defines space debris as "any man-made Earth orbiting object which is non-functional with no reasonable expectation of assuming or resuming its intended function or any other function for which it is or can be expected to be authorized, including fragments and parts thereof". [ 3 ]
Space debris is classified by size and operational purpose, and divided into four main subsets: inactive payloads , operational debris, fragmentation debris and microparticulate matter. [ 3 ] Inactive payloads refer to any launched space objects that have lost the capability to reconnect to their corresponding space operator, thus preventing a return to Earth. [ 25 ] In contrast, operational debris describes the matter associated with the propulsion of a larger entity into space, which may include upper rocket stages and ejected nose cones. [ 25 ] Fragmentation debris refers to any object in space that has become dissociated from a larger entity by means of explosion , collision or deterioration . [ 26 ] Microparticulate matter describes space matter that typically cannot be seen singly with the naked eye , including particles, gases, and spaceglow. [ 25 ]
In response to research concluding that impacts from Earth orbital debris could pose greater hazards to spacecraft than the natural meteoroid environment, NASA began the orbital debris program in 1979, initiated by the Space Sciences branch at Johnson Space Center (JSC). [ 27 ] With an initial budget of $70,000, the NASA orbital debris program set out to characterize the hazards posed by space debris and to create mitigation standards that would minimize the growth of the orbital debris environment. [ 28 ] By 1990, the NASA orbital debris program had created a debris monitoring program, which included mechanisms to sample the low Earth orbit (LEO) environment for debris as small as 6 mm using the Haystack X-band ground radar. [ 27 ]
Particulate pollution is observed around the globe in varying sizes and compositions and is the focus of many epidemiological studies. Particulate matter (PM) is generally classified into two main size categories: PM 10 and PM 2.5 . PM 10 , also known as coarse particulate matter, consists of particles 10 micrometers (μm) and smaller, while PM 2.5 , also called fine particulate matter, consists of particles 2.5 μm and smaller. [ 29 ] Particles 2.5 μm or smaller in size are especially notable as they can be inhaled into the lower respiratory system , and with enough exposure, absorbed into the bloodstream . Particulate pollution can occur directly or indirectly from a number of sources including, but not limited to: agriculture, automobiles, construction, forest fires, chemical pollutants, and power plants. [ 30 ]
Exposure to particulates of any size and composition may occur acutely over a short duration, or chronically over a long duration. [ 31 ] Particulate exposure has been associated with adverse respiratory symptoms ranging from irritation of the airways, aggravated asthma , coughing, and difficulty breathing from acute exposure to symptoms such as irregular heartbeat , lung cancer, kidney disease , chronic bronchitis, and premature death in individuals who suffer from pre-existing cardiovascular or lung diseases due to chronic exposure. [ 29 ] The severity of health effects generally depends upon the size of the particles as well as the health status of the individual exposed; older adults, children, pregnant women, and immunocompromised populations are at the greatest risk for adverse health outcomes. [ 32 ] Short-term exposure to particulate pollution has been linked to adverse health impacts. [ 33 ] [ 34 ]
As a result, the US Environmental Protection Agency (EPA) and various health agencies around the world have established thresholds for concentrations of PM 2.5 and PM 10 that are determined to be acceptable. However, there is no known safe level of exposure and thus, any exposure to particulate pollution is likely to increase an individual's risk of adverse health effects. [ 35 ] In European countries, air quality at or above 10 micrograms per cubic meter of air (μg/m 3 ) for PM 2.5 increases the all-causes daily mortality rate by 0.2-0.6% and the cardiopulmonary mortality rate by 6-13%. [ 35 ]
Worldwide, PM 10 concentrations of 70 μg/m 3 and PM 2.5 concentrations of 35 μg/m 3 have been shown to increase long-term mortality by 15%. [ 29 ] Moreover, approximately 4.2 million premature deaths in 2016 were attributed to airborne particulate pollution, 91% of which occurred in countries with low to middle socioeconomic status. Of these premature deaths, 58% were attributed to strokes and ischaemic heart diseases, 8% to COPD ( Chronic Obstructive Pulmonary Disease ), and 6% to lung cancer. [ 36 ]
In 2006, the EPA conducted air quality designations in all 50 states, denoting areas of high pollution based on criteria such as air quality monitoring data, recommendations submitted by the states, and other technical information. In 2012, it reduced the National Ambient Air Quality Standard for daily exposure to particulates in the 2.5 micrometers and smaller category from 15 μg/m 3 to 12 μg/m 3 . [ 37 ] As a result, U.S. annual PM 2.5 averages decreased from 13.5 μg/m 3 to 8.02 μg/m 3 between 2000 and 2017. [ 38 ]
Microplastics prove to be particularly concerning as particulate matter for their reactivity and ability to become contaminated. Microplastic particles, depending on their composition, can form carbonyl bonds on the surface, causing contaminants such as heavy metals to be adsorbed by the particle. [ 39 ] When microplastic particles are inhaled, they persist in the lungs and cause inflammation. [ 40 ] More research is needed to understand the long-term health effects of microplastics in the human body.
Particulate matter (PM), particularly PM2.5, was found to be harmful to aquatic invertebrates. [ 41 ] Affected aquatic organisms include fish, crustaceans, and molluscs. In a study by Han et al., the effects of PM<2.5 micrometers on life history traits and oxidative stress were observed in Tigriopus japonicus. Exposure to particulate matter of less than 2.5 micrometers in diameter led to significant changes in ROS levels, indicating that particulate matter exposure was a causative agent of oxidative stress in Tigriopus japonicus . [ 42 ] In addition to aquatic invertebrates, negative effects of particulate matter have been noted in mammals as well. Following acute exposure to ambient particulate matter, rats showed a significant increase in neutrophils and a significant decrease in lymphocytes, indicating that particulate matter exposure can result in activation of the sympathetic stress response. [ 43 ] | https://en.wikipedia.org/wiki/Particulate_pollution |
Partition chromatography theory and practice was introduced through the work and publications of Archer Martin and Richard Laurence Millington Synge during the 1940s. [ 1 ] They would later receive the 1952 Nobel Prize in Chemistry "for their invention of partition chromatography". [ 2 ]
The process of separating mixtures of chemical compounds by passing them through a column that contains a solid stationary phase that was eluted with a mobile phase ( column chromatography ) was well known at that time. [ 3 ] Chromatographic separation was considered to occur by an adsorption process whereby compounds adhered to a solid media and were washed off the column with a solvent, mixture of solvents, or solvent gradient. In contrast, Martin and Synge developed and described a chromatographic separation process whereby compounds were partitioned between two liquid phases similar to the separatory funnel liquid-liquid separation dynamic. This was an important departure, both in theory and under equilibrium conditions. [ 4 ] Martin and Synge initially attempted to devise a method of performing a sequential liquid-liquid extraction with serially connected glass vessels that functioned as separatory funnels. [ 1 ] The seminal article presenting their early studies described a rather complicated instrument that allowed partitioning of amino acids between water and chloroform phases. The process was termed "counter-current liquid-liquid extraction." [ 5 ] Martin and Synge described the theory of this technique in reference to continuous fractional distillation described by Randall and Longtin. [ 6 ] This approach was deemed too cumbersome, so they developed a method of absorbing water onto silica gel as the stationary phase and using a solvent, such as chloroform, as the mobile phase. [ 7 ] This work was published in 1941 as "a new form of chromatogram employing two liquid phases." [ 8 ] The article describes both the theory in terms of the partition coefficient of a compound, and the application of the process to the separation of amino acids on a water-impregnated silica column eluted with a water:chloroform: n -butanol solvent mixture.
The previously described work of Martin and Synge impacted the development of the previously known column chromatography and inspired new forms of chromatography such as countercurrent distribution , [ 9 ] paper chromatography , [ 10 ] and gas-liquid chromatography which is more commonly known as gas chromatography . The modification of silica gel stationary phase led to many creative ways of modifying stationary phases in order to influence the separation characteristics. The most notable modification was the chemical bonding of alkane functional groups to silica gel to produce reversed-phase media. [ 11 ] The original problem that Martin and Synge encountered with devising an instrument that would employ two free-flowing liquid phases was solved by Lyman C. Craig in 1944, and commercial counter-current distribution instruments were used for many important discoveries. [ 12 ] The introduction of paper chromatography was an important analytical technique which gave rise to thin-layer chromatography . [ 13 ] Finally, gas-liquid chromatography, a fundamental technique in modern analytical chemistry, was described by Martin with coauthors A. T. James and G. Howard Smith in 1952. [ 14 ] | https://en.wikipedia.org/wiki/Partition_chromatography |
In the physical sciences , a partition coefficient ( P ) or distribution coefficient ( D ) is the ratio of concentrations of a compound in a mixture of two immiscible solvents at equilibrium . This ratio is therefore a comparison of the solubilities of the solute in these two liquids. The partition coefficient generally refers to the concentration ratio of un-ionized species of compound, whereas the distribution coefficient refers to the concentration ratio of all species of the compound (ionized plus un-ionized). [ 1 ]
In the chemical and pharmaceutical sciences , both phases usually are solvents . [ 2 ] Most commonly, one of the solvents is water, while the second is hydrophobic , such as 1-octanol . [ 3 ] Hence the partition coefficient measures how hydrophilic ("water-loving") or hydrophobic ("water-fearing") a chemical substance is. Partition coefficients are useful in estimating the distribution of drugs within the body. Hydrophobic drugs with high octanol-water partition coefficients are mainly distributed to hydrophobic areas such as lipid bilayers of cells. Conversely, hydrophilic drugs (low octanol/water partition coefficients) are found primarily in aqueous regions such as blood serum . [ 4 ]
If one of the solvents is a gas and the other a liquid, a gas/liquid partition coefficient can be determined. For example, the blood/gas partition coefficient of a general anesthetic measures how easily the anesthetic passes from gas to blood. [ 5 ] Partition coefficients can also be defined when one of the phases is solid , for instance, when one phase is a molten metal and the second is a solid metal, [ 6 ] or when both phases are solids. [ 7 ] The partitioning of a substance into a solid results in a solid solution .
Partition coefficients can be measured experimentally in various ways (by shake-flask, HPLC , etc.) or estimated by calculation based on a variety of methods (fragment-based, atom-based, etc.).
If a substance is present as several chemical species in the partition system due to association or dissociation , each species is assigned its own K ow value. A related value, D, does not distinguish between different species, only indicating the concentration ratio of the substance between the two phases. [ citation needed ]
Despite formal recommendation to the contrary, the term partition coefficient remains the predominantly used term in the scientific literature. [ 8 ] [ additional citation(s) needed ]
In contrast, the IUPAC recommends that the title term no longer be used; rather, it should be replaced with more specific terms. [ 9 ] For example, the partition constant is defined as

K D = [A] org / [A] aq
where K D is the process equilibrium constant , [A] represents the concentration of solute A being tested, and "org" and "aq" refer to the organic and aqueous phases respectively. The IUPAC further recommends "partition ratio" for cases where transfer activity coefficients can be determined, and "distribution ratio" for the ratio of total analytical concentrations of a solute between phases, regardless of chemical form. [ 9 ]
The partition coefficient , abbreviated P , is defined as a particular ratio of the concentrations of a solute between the two solvents (a biphase of liquid phases), specifically for un- ionized solutes, and the logarithm of the ratio is thus log P . [ 10 ] : 275ff When one of the solvents is water and the other is a non-polar solvent , then the log P value is a measure of lipophilicity or hydrophobicity . [ 10 ] : 275ff [ 11 ] : 6 The defined precedent is for the lipophilic and hydrophilic phase types to always be in the numerator and denominator respectively; for example, in a biphasic system of n - octanol (hereafter simply "octanol") and water:

log P oct/wat = log ( [solute] octanol / [solute] water un-ionized )
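As a minimal illustration of the definition just given, the sketch below computes log P from hypothetical measured concentrations of the un-ionized solute in the two phases; the numbers are invented for the example, not taken from the article.

```python
import math

def log_p(conc_octanol, conc_water_unionized):
    """log10 of the octanol/water concentration ratio of the un-ionized solute."""
    return math.log10(conc_octanol / conc_water_unionized)

# Hypothetical shake-flask result: 0.8 mmol/L in octanol, 0.2 mmol/L in water.
print(log_p(0.8, 0.2))  # ~0.60: the solute modestly prefers the octanol phase
```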
To a first approximation, the non-polar phase in such experiments is usually dominated by the un-ionized form of the solute, which is electrically neutral, though this may not be true for the aqueous phase. To measure the partition coefficient of ionizable solutes , the pH of the aqueous phase is adjusted such that the predominant form of the compound in solution is the un-ionized one; measurement at another pH of interest requires consideration of all species, un-ionized and ionized (see following).
A corresponding partition coefficient for ionizable compounds, abbreviated log P I , is derived for cases where there are dominant ionized forms of the molecule, such that one must consider partition of all forms, ionized and un-ionized, between the two phases (as well as the interaction of the two equilibria, partition and ionization). [ 11 ] : 57ff, 69f [ 12 ] M is used to indicate the number of ionized forms; for the I -th form ( I = 1, 2, ... , M ) the logarithm of the corresponding partition coefficient, log P oct/wat I {\displaystyle \log P_{\text{oct/wat}}^{I}} , is defined in the same manner as for the un-ionized form. For instance, for an octanol–water partition, it is

log P oct/wat I = log ( [solute] octanol I / [solute] water I )
To distinguish between this and the standard, un-ionized, partition coefficient, the un-ionized is often assigned the symbol log P 0 , such that the indexed log P oct/wat I {\displaystyle \log P_{\text{oct/wat}}^{I}} expression for ionized solutes becomes simply an extension of this, into the range of values I > 0 . [ citation needed ]
The distribution coefficient , log D , is the ratio of the sum of the concentrations of all forms of the compound (ionized plus un-ionized) in each of the two phases, one essentially always aqueous; as such, it depends on the pH of the aqueous phase, and log D = log P for non-ionizable compounds at any pH. [ 13 ] [ 14 ] For measurements of distribution coefficients, the pH of the aqueous phase is buffered to a specific value such that the pH is not significantly perturbed by the introduction of the compound. The value of each log D is then determined as the logarithm of a ratio—of the sum of the experimentally measured concentrations of the solute's various forms in one solvent, to the sum of such concentrations of its forms in the other solvent; it can be expressed as [ 10 ] : 275–8

log D oct/wat = log ( ( [solute] octanol ionized + [solute] octanol un-ionized ) / ( [solute] water ionized + [solute] water un-ionized ) )
In the above formula, the superscripts "ionized" each indicate the sum of concentrations of all ionized species in their respective phases. In addition, since log D is pH-dependent, the pH at which the log D was measured must be specified. In areas such as drug discovery—areas involving partition phenomena in biological systems such as the human body—the log D at the physiologic pH = 7.4 is of particular interest. [ citation needed ]
It is often convenient to express the log D in terms of P I , defined above (which includes P 0 as state I = 0 ), thus covering both un-ionized and ionized species. [ 12 ] For example, in octanol–water:

log D oct/wat = log ( ∑ I f I P oct/wat I )
which sums the individual partition coefficients (not their logarithms), and where f I {\displaystyle f^{I}} indicates the pH-dependent mole fraction of the I -th form (of the solute) in the aqueous phase, and other variables are defined as previously. [ 12 ] [ verification needed ]
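A small sketch of the weighted-sum relationship described above, assuming a solute with one un-ionized and one ionized form; the mole fractions and species log P values are made-up illustrative numbers.

```python
import math

def log_d(mole_fractions, log_p_values):
    """log D from the pH-dependent aqueous mole fractions f^I and the species' log P^I."""
    d = sum(f * 10**lp for f, lp in zip(mole_fractions, log_p_values))
    return math.log10(d)

# Hypothetical: 30% un-ionized (log P0 = 2.0) and 70% ionized (log P1 = -1.0) at some pH.
print(log_d([0.3, 0.7], [2.0, -1.0]))  # ~1.48
```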
The values for the octanol-water system in the following table are from the Dortmund Data Bank . [ 15 ] [ better source needed ] They are sorted by the partition coefficient, smallest to largest (acetamide being hydrophilic, and 2,2',4,4',5-pentachlorobiphenyl lipophilic), and are presented with the temperature at which they were measured (which impacts the values). [ citation needed ]
Values for other compounds may be found in a variety of available reviews and monographs. [ 2 ] : 551ff [ 21 ] [ page needed ] [ 22 ] : 1121ff [ 23 ] [ page needed ] [ 24 ] Critical discussions of the challenges of measurement of log P and related computation of its estimated values (see below) appear in several reviews. [ 11 ] [ 24 ]
A drug's distribution coefficient strongly affects how easily the drug can reach its intended target in the body, how strong an effect it will have once it reaches its target, and how long it will remain in the body in an active form. [ 25 ] Hence, the log P of a molecule is one criterion used in decision-making by medicinal chemists in pre-clinical drug discovery, for example, in the assessment of druglikeness of drug candidates. [ 26 ] Likewise, it is used to calculate lipophilic efficiency in evaluating the quality of research compounds, where the efficiency for a compound is defined as its potency , via measured values of pIC 50 or pEC 50 , minus its value of log P . [ 27 ]
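The lipophilic efficiency metric mentioned above is a simple difference; a one-line sketch with hypothetical potency and log P values:

```python
def lipophilic_efficiency(pic50, log_p):
    """Lipophilic efficiency (LipE): potency (pIC50 or pEC50) minus log P."""
    return pic50 - log_p

print(lipophilic_efficiency(pic50=8.0, log_p=3.0))  # 5.0 (hypothetical compound)
```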
In the context of pharmacokinetics (how the body absorbs, metabolizes, and excretes a drug), the distribution coefficient has a strong influence on ADME properties of the drug. Hence the hydrophobicity of a compound (as measured by its distribution coefficient) is a major determinant of how drug-like it is. More specifically, for a drug to be orally absorbed, it normally must first pass through lipid bilayers in the intestinal epithelium (a process known as transcellular transport). For efficient transport, the drug must be hydrophobic enough to partition into the lipid bilayer, but not so hydrophobic, that once it is in the bilayer, it will not partition out again. [ 29 ] [ 30 ] Likewise, hydrophobicity plays a major role in determining where drugs are distributed within the body after absorption and, as a consequence, in how rapidly they are metabolized and excreted.
In the context of pharmacodynamics (how the drug affects the body), the hydrophobic effect is the major driving force for the binding of drugs to their receptor targets. [ 31 ] [ 32 ] On the other hand, hydrophobic drugs tend to be more toxic because they, in general, are retained longer, have a wider distribution within the body (e.g., intracellular ), are somewhat less selective in their binding to proteins, and finally are often extensively metabolized. In some cases the metabolites may be chemically reactive. Hence it is advisable to make the drug as hydrophilic as possible while it still retains adequate binding affinity to the therapeutic protein target. [ 33 ] For cases where a drug reaches its target locations through passive mechanisms (i.e., diffusion through membranes), the ideal distribution coefficient for the drug is typically intermediate in value (neither too lipophilic, nor too hydrophilic); in cases where molecules reach their targets otherwise, no such generalization applies. [ citation needed ]
The hydrophobicity of a compound can give scientists an indication of how easily a compound might be taken up in groundwater to pollute waterways, and its toxicity to animals and aquatic life. [ 34 ] Partition coefficient can also be used to predict the mobility of radionuclides in groundwater. [ 35 ] In the field of hydrogeology , the octanol–water partition coefficient K ow is used to predict and model the migration of dissolved hydrophobic organic compounds in soil and groundwater.
Hydrophobic insecticides and herbicides tend to be more active. Hydrophobic agrochemicals in general have longer half-lives and therefore display increased risk of adverse environmental impact. [ 36 ]
In metallurgy , the partition coefficient is an important factor in determining how different impurities are distributed between molten and solidified metal. It is a critical parameter for purification using zone melting , and determines how effectively an impurity can be removed using directional solidification , described by the Scheil equation . [ 6 ]
Many other industries take into account distribution coefficients, for example in the formulation of make-up, topical ointments, dyes, hair colors and many other consumer products. [ 37 ]
A number of methods of measuring distribution coefficients have been developed, including the shake-flask, separating funnel method, reverse-phase HPLC, and pH-metric techniques. [ 10 ] : 280
In this method, solid particles present in the two immiscible liquids can be separated by suspending them directly in the immiscible (or partially miscible) liquid pair.
The classical and most reliable method of log P determination is the shake-flask method , which consists of dissolving some of the solute in question in a volume of octanol and water, then measuring the concentration of the solute in each solvent. [ 38 ] [ 39 ] The most common method of measuring the distribution of the solute is by UV/VIS spectroscopy . [ 38 ]
A faster method of log P determination makes use of high-performance liquid chromatography . The log P of a solute can be determined by correlating its retention time with similar compounds with known log P values. [ 40 ]
An advantage of this method is that it is fast (5–20 minutes per sample). However, since the value of log P is determined by linear regression , several compounds with similar structures must have known log P values, and extrapolation from one chemical class to another—applying a regression equation derived from one chemical class to a second one—may not be reliable, since each chemical class will have its own characteristic regression parameters . [ citation needed ]
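A minimal sketch of the HPLC calibration idea described above: fit a line of known log P values against retention times for reference compounds of the same chemical class, then read off the unknown. The retention times and log P values below are hypothetical.

```python
import numpy as np

# Hypothetical reference compounds from one chemical class.
ref_retention_min = np.array([2.1, 3.4, 5.0, 7.2, 9.8])   # measured retention times (assumed)
ref_log_p         = np.array([0.5, 1.2, 1.9, 2.7, 3.5])   # their known log P values (assumed)

# Linear regression of log P against retention time.
slope, intercept = np.polyfit(ref_retention_min, ref_log_p, 1)

unknown_retention = 6.1
print(slope * unknown_retention + intercept)   # estimated log P of the unknown compound
```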
The pH-metric set of techniques determine lipophilicity pH profiles directly from a single acid-base titration in a two-phase water–organic-solvent system. [ 10 ] : 280–4 Hence, a single experiment can be used to measure the logarithms of the partition coefficient (log P ) giving the distribution of molecules that are primarily neutral in charge, as well as the distribution coefficient (log D ) of all forms of the molecule over a pH range, e.g., between 2 and 12. The method does, however, require the separate determination of the pK a value(s) of the substance.
Polarized liquid interfaces have been used to examine the thermodynamics and kinetics of the transfer of charged species from one phase to another. Two main methods exist. The first is ITIES , "interfaces between two immiscible electrolyte solutions". [ 41 ] The second is droplet experiments. [ 42 ] Here a reaction at a triple interface between a conductive solid, droplets of a redox active liquid phase and an electrolyte solution have been used to determine the energy required to transfer a charged species across the interface. [ 43 ]
There are attempts to provide partition coefficients for drugs at a single-cell level. [ 44 ] [ 45 ] This strategy requires methods for the determination of concentrations in individual cells, i.e., with Fluorescence correlation spectroscopy or quantitative Image analysis . Partition coefficient at a single-cell level provides information on cellular uptake mechanism. [ 45 ]
There are many situations where prediction of partition coefficients prior to experimental measurement is useful. For example, tens of thousands of industrially manufactured chemicals are in common use, but only a small fraction have undergone rigorous toxicological evaluation. Hence there is a need to prioritize the remainder for testing. QSAR equations, which in turn are based on calculated partition coefficients, can be used to provide toxicity estimates. [ 46 ] [ 47 ] Calculated partition coefficients are also widely used in drug discovery to optimize screening libraries [ 48 ] [ 49 ] and to predict druglikeness of designed drug candidates before they are synthesized. [ 50 ] As discussed in more detail below, estimates of partition coefficients can be made using a variety of methods, including fragment-based, atom-based, and knowledge-based that rely solely on knowledge of the structure of the chemical. Other prediction methods rely on other experimental measurements such as solubility. The methods also differ in accuracy and whether they can be applied to all molecules, or only ones similar to molecules already studied.
Standard approaches of this type, using atomic contributions, have been named by those formulating them with a prefix letter: AlogP, [ 51 ] XlogP, [ 52 ] MlogP, [ 53 ] etc. A conventional method for predicting log P through this type of method is to parameterize the distribution coefficient contributions of various atoms to the overall molecular partition coefficient, which produces a parametric model . This parametric model can be estimated using constrained least-squares estimation , using a training set of compounds with experimentally measured partition coefficients. [ 51 ] [ 53 ] [ 54 ] In order to get reasonable correlations, the most common elements contained in drugs (hydrogen, carbon, oxygen, sulfur, nitrogen, and halogens) are divided into several different atom types depending on the environment of the atom within the molecule. While this method is generally the least accurate, the advantage is that it is the most general, being able to provide at least a rough estimate for a wide variety of molecules. [ 53 ]
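The atom-contribution idea can be sketched as an ordinary least-squares fit of per-atom-type contributions to a training set of measured log P values; the published methods use constrained fits and many more atom types, and the counts and values below are hypothetical.

```python
import numpy as np

# Rows = training compounds, columns = counts of each atom type (hypothetical data).
atom_counts = np.array([
    [6, 6, 1, 0],
    [7, 8, 0, 1],
    [8, 10, 2, 0],
    [5, 5, 1, 1],
], dtype=float)
measured_log_p = np.array([1.5, 1.1, 1.9, 0.4])   # hypothetical experimental values

# Least-squares estimate of the per-atom-type contributions.
contributions, *_ = np.linalg.lstsq(atom_counts, measured_log_p, rcond=None)

new_compound = np.array([6, 7, 1, 1], dtype=float)  # atom-type counts of a new molecule
print(new_compound @ contributions)                 # predicted log P
```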
The most common of these uses a group contribution method and is termed cLogP. It has been shown that the log P of a compound can be determined by the sum of its non-overlapping molecular fragments (defined as one or more atoms covalently bound to each other within the molecule). Fragmentary log P values have been determined in a statistical method analogous to the atomic methods (least-squares fitting to a training set). In addition, Hammett-type corrections are included to account of electronic and steric effects . This method in general gives better results than atomic-based methods, but cannot be used to predict partition coefficients for molecules containing unusual functional groups for which the method has not yet been parameterized (most likely because of the lack of experimental data for molecules containing such functional groups). [ 21 ] : 125ff [ 23 ] : 1–193
A typical data-mining -based prediction uses support-vector machines , [ 55 ] decision trees , or neural networks . [ 56 ] This method is usually very successful for calculating log P values when used with compounds that have similar chemical structures and known log P values. Molecule mining approaches apply a similarity-matrix-based prediction or an automatic fragmentation scheme into molecular substructures. Furthermore, there exist also approaches using maximum common subgraph searches or molecule kernels .
For cases where the molecule is un-ionized: [ 13 ] [ 14 ]

log D ≅ log P
For other cases, estimation of log D at a given pH, from log P and the known mole fraction of the un-ionized form, f 0 {\displaystyle f^{0}} , in the case where partition of ionized forms into non-polar phase can be neglected, can be formulated as [ 13 ] [ 14 ]

log D ≅ log P + log ( f 0 )
The following approximate expressions are valid only for monoprotic acids and bases : [ 13 ] [ 14 ]

acids: log D ≅ log P − log ( 1 + 10 pH − pK a )
bases: log D ≅ log P − log ( 1 + 10 pK a − pH )
Further approximations for when the compound is largely ionized: [ 13 ] [ 14 ]

acids (pH − pK a > 1): log D ≅ log P + pK a − pH
bases (pK a − pH > 1): log D ≅ log P − pK a + pH
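A sketch of the monoprotic-acid and monoprotic-base relationships above, assuming the ionized form does not partition into the non-polar phase; the example values are hypothetical.

```python
import math

def log_d_acid(log_p, pka, ph):
    """log D of a monoprotic acid, ionized form assumed not to partition."""
    return log_p - math.log10(1 + 10**(ph - pka))

def log_d_base(log_p, pka, ph):
    """log D of a monoprotic base, ionized form assumed not to partition."""
    return log_p - math.log10(1 + 10**(pka - ph))

# Hypothetical weak acid (log P = 2.5, pKa = 4.5) at physiological pH 7.4:
print(log_d_acid(2.5, 4.5, 7.4))   # ~ -0.4: mostly ionized, hence far less lipophilic than log P suggests
```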
For prediction of p K a , which in turn can be used to estimate log D , Hammett type equations have frequently been applied. [ 57 ] [ 58 ]
If the solubility, S , of an organic compound is known or predicted in both water and 1-octanol, then log P can be estimated as [ 46 ] [ 59 ]
There are a variety of approaches to predict solubilities , and so log S . [ 60 ] [ 61 ]
The partition coefficient between n -Octanol and water is known as the n -octanol-water partition coefficient , or K ow . [ 62 ] It is also frequently referred to by the symbol P, especially in the English literature. It is also known as n -octanol-water partition ratio . [ 63 ] [ 64 ] [ 65 ]
K ow , being a type of partition coefficient, serves as a measure of the relationship between lipophilicity (fat solubility) and hydrophilicity (water solubility) of a substance. The value is greater than one if a substance is more soluble in fat-like solvents such as n-octanol, and less than one if it is more soluble in water. [ citation needed ]
Values for log K ow typically range between -3 (very hydrophilic) and +10 (extremely lipophilic/hydrophobic). [ 66 ]
The values listed here [ 67 ] are sorted by the partition coefficient. Acetamide is hydrophilic, and 2,2′,4,4′,5-Pentachlorobiphenyl is lipophilic. | https://en.wikipedia.org/wiki/Partition_coefficient |
Partition equilibrium is a special case of chemical equilibrium wherein one or more solutes are in equilibrium between two immiscible solvents. [ 1 ] The most common chemical equilibrium systems involve reactants and products in the same phase - either all gases or all solutions. However, it is also possible to get equilibria between substances in different phases, such as a liquid and a gas that do not mix (are immiscible). One example is gas-liquid partition equilibrium chromatography, where an analyte equilibrates between a gas and liquid phase. [ 2 ] Partition equilibria are described by Nernst 's distribution law . [ 3 ] Partition equilibria are most commonly seen and used for liquid–liquid extraction .
The time until a partition equilibrium emerges is influenced by many factors, such as: temperature, relative concentrations, surface area of interface, degree of stirring, and the nature of the solvents and solute.
For example, ammonia (NH 3 ) is soluble in both water (aq) and the organic solvent trichloromethane (CHCl 3 ) - two immiscible solvents. If ammonia is first dissolved in water, and then an equal volume of trichloromethane is added, and the two liquids shaken together, the following equilibrium is established:

NH 3 (aq) ⇌ NH 3 (CHCl 3 )
The equilibrium concentrations of ammonia in each layer can be established by titration with standard acid solution. It can thus be determined that K c remains constant, with a value of 0.4 in this case.
This kind of equilibrium constant measures how a substance distributes or partitions itself between two immiscible solvents. It is called the partition coefficient or distribution coefficient.
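A worked sketch of the ammonia example, assuming K c is the trichloromethane-to-water concentration ratio (0.4 as quoted above) and that equal volumes of the two solvents are shaken with a hypothetical 0.10 mol of ammonia.

```python
def equilibrium_split(total_mol, kc, v_aq, v_org):
    """Moles in the aqueous and organic layers at equilibrium.

    Uses n_org / v_org = kc * (n_aq / v_aq) together with n_aq + n_org = total_mol.
    """
    n_aq = total_mol / (1 + kc * v_org / v_aq)
    return n_aq, total_mol - n_aq

n_aq, n_org = equilibrium_split(total_mol=0.10, kc=0.4, v_aq=1.0, v_org=1.0)
print(n_aq, n_org)   # ~0.071 mol stays in water, ~0.029 mol moves to the trichloromethane layer
```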
See: Partition chromatography , Gas chromatography
Partition equilibrium chromatography is a type of chromatography that is typically used in gas chromatography (GC) and high performance liquid chromatography (HPLC). The stationary phase in GC is a high boiling liquid bonded to solid surface and the mobile phase is a gas. [ 4 ] In gas-liquid chromatography, analyte from the mobile gas phase equilibrates with the liquid phase. Molecules more soluble in the liquid phase will remain longer in the column, allowing for separation using partition equilibriums. [ 4 ] | https://en.wikipedia.org/wiki/Partition_equilibrium |
In mathematics , a partition of an interval [ a , b ] on the real line is a finite sequence x 0 , x 1 , x 2 , …, x n of real numbers such that

a = x 0 < x 1 < x 2 < … < x n = b .
In other terms, a partition of a compact interval I is a strictly increasing sequence of numbers (belonging to the interval I itself) starting from the initial point of I and arriving at the final point of I .
Every interval of the form [ x i , x i + 1 ] is referred to as a subinterval of the partition x .
Another partition Q of the given interval [a, b] is defined as a refinement of the partition P , if Q contains all the points of P and possibly some other points as well; the partition Q is said to be “finer” than P . Given two partitions, P and Q , one can always form their common refinement , denoted P ∨ Q , which consists of all the points of P and Q , in increasing order. [ 1 ]
The norm (or mesh ) of the partition x 0 < x 1 < x 2 < … < x n is the length of the longest of these subintervals: max { | x i +1 − x i | : i = 0, 1, …, n − 1 } . [ 2 ] [ 3 ]
Partitions are used in the theory of the Riemann integral , the Riemann–Stieltjes integral and the regulated integral . Specifically, as finer partitions of a given interval are considered, their mesh approaches zero and the Riemann sum based on a given partition approaches the Riemann integral . [ 4 ]
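As a concrete sketch of the statement above, the code below evaluates a Riemann sum over one partition of [0, 1], using one sample point in each subinterval (the tags introduced below); refining the partition so that the mesh shrinks drives the sum toward the integral.

```python
def riemann_sum(f, points, samples):
    """Riemann sum of f over the partition points[0] < ... < points[-1],
    with one sample point in each subinterval."""
    return sum(f(t) * (b - a) for a, b, t in zip(points, points[1:], samples))

points = [0.0, 0.3, 0.5, 0.9, 1.0]        # a partition of [0, 1]
samples = [0.1, 0.4, 0.7, 0.95]           # one point inside each subinterval
mesh = max(b - a for a, b in zip(points, points[1:]))

print(riemann_sum(lambda x: x**2, points, samples), mesh)   # ~0.321 (true integral 1/3), mesh 0.4
```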
A tagged partition or Perron Partition is a partition of a given interval together with a finite sequence of numbers t 0 , …, t n − 1 subject to the conditions that for each i ,

x i ≤ t i ≤ x i + 1 .
In other words, a tagged partition is a partition together with a distinguished point of every subinterval: its mesh is defined in the same way as for an ordinary partition. [ 5 ] | https://en.wikipedia.org/wiki/Partition_of_an_interval |
In mathematics , a partition of unity on a topological space X {\displaystyle X} is a set R {\displaystyle R} of continuous functions from X {\displaystyle X} to the unit interval [0,1] such that for every point x ∈ X {\displaystyle x\in X} : there is a neighbourhood of x where all but a finite number of the functions of R are 0, and the sum of all the function values at x is 1, i.e., ∑ ρ∈R ρ ( x ) = 1 .
Partitions of unity are useful because they often allow one to extend local constructions to the whole space. They are also important in the interpolation of data, in signal processing , and the theory of spline functions .
The existence of partitions of unity assumes two distinct forms: (1) given any open cover { U i } i∈I of a space, there exists a partition { ρ i } i∈I indexed over the same set I such that the support of each ρ i is contained in U i (such a partition is said to be subordinate to the open cover); (2) given any open cover of a space, there exists a partition { ρ j } j∈J indexed over a possibly distinct index set J such that each ρ j has compact support and its support is contained in some U i .
Thus one chooses either to have the supports indexed by the open cover, or compact supports. If the space is compact , then there exist partitions satisfying both requirements.
A finite open cover always has a continuous partition of unity subordinate to it, provided the space is locally compact and Hausdorff. [ 1 ] Paracompactness of the space is a necessary condition to guarantee the existence of a partition of unity subordinate to any open cover . Depending on the category to which the space belongs, this may also be a sufficient condition. [ 2 ] In particular, a compact set in the Euclidean space admits a smooth partition of unity subordinate to any finite open cover. The construction uses mollifiers (bump functions), which exist in continuous and smooth manifolds , but not necessarily in analytic manifolds . Thus for an open cover of an analytic manifold, an analytic partition of unity subordinate to that open cover generally does not exist. See analytic continuation .
If R {\displaystyle R} and T {\displaystyle T} are partitions of unity for spaces X {\displaystyle X} and Y {\displaystyle Y} respectively, then the set of all pairs { ρ ⊗ τ : ρ ∈ R , τ ∈ T } {\displaystyle \{\rho \otimes \tau :\ \rho \in R,\ \tau \in T\}} is a partition of unity for the cartesian product space X × Y {\displaystyle X\times Y} . The tensor product of functions act as ( ρ ⊗ τ ) ( x , y ) = ρ ( x ) τ ( y ) . {\displaystyle (\rho \otimes \tau )(x,y)=\rho (x)\tau (y).}
Let p {\displaystyle p} and q {\displaystyle q} be antipodal points on the circle S 1 {\displaystyle S^{1}} . We can construct a partition of unity on S 1 {\displaystyle S^{1}} by looking at a chart on the complement of the point p ∈ S 1 {\displaystyle p\in S^{1}} that sends S 1 − { p } {\displaystyle S^{1}-\{p\}} to R {\displaystyle \mathbb {R} } with center q ∈ S 1 {\displaystyle q\in S^{1}} . Now let Φ {\displaystyle \Phi } be a bump function on R {\displaystyle \mathbb {R} } defined by Φ ( x ) = { exp ( 1 x 2 − 1 ) x ∈ ( − 1 , 1 ) 0 otherwise {\displaystyle \Phi (x)={\begin{cases}\exp \left({\frac {1}{x^{2}-1}}\right)&x\in (-1,1)\\0&{\text{otherwise}}\end{cases}}} then, both this function and 1 − Φ {\displaystyle 1-\Phi } can be extended uniquely onto S 1 {\displaystyle S^{1}} by setting Φ ( p ) = 0 {\displaystyle \Phi (p)=0} . Then, the pair of functions { ( S 1 − { p } , Φ ) , ( S 1 − { q } , 1 − Φ ) } {\displaystyle \{(S^{1}-\{p\},\Phi ),(S^{1}-\{q\},1-\Phi )\}} forms a partition of unity over S 1 {\displaystyle S^{1}} .
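The bump function Φ from this example is easy to evaluate directly; the sketch below checks numerically that Φ and 1 − Φ sum to 1 at every point, which is the defining property of the partition.

```python
import math

def phi(x):
    """Bump function exp(1/(x^2 - 1)) on (-1, 1), extended by 0 elsewhere."""
    return math.exp(1.0 / (x * x - 1.0)) if -1.0 < x < 1.0 else 0.0

for x in [-1.5, -0.5, 0.0, 0.5, 1.5]:
    print(x, phi(x), 1.0 - phi(x), phi(x) + (1.0 - phi(x)))   # last column is always 1.0
```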
Sometimes a less restrictive definition is used: the sum of all the function values at a particular point is only required to be positive, rather than 1, for each point in the space. However, given such a set of functions { ψ i } i = 1 ∞ {\displaystyle \{\psi _{i}\}_{i=1}^{\infty }} one can obtain a partition of unity in the strict sense by dividing by the sum; the partition becomes { σ − 1 ψ i } i = 1 ∞ {\displaystyle \{\sigma ^{-1}\psi _{i}\}_{i=1}^{\infty }} where σ ( x ) := ∑ i = 1 ∞ ψ i ( x ) {\textstyle \sigma (x):=\sum _{i=1}^{\infty }\psi _{i}(x)} , which is well defined since at each point only a finite number of terms are nonzero. Even further, some authors drop the requirement that the supports be locally finite, requiring only that ∑ i = 1 ∞ ψ i ( x ) < ∞ {\textstyle \sum _{i=1}^{\infty }\psi _{i}(x)<\infty } for all x {\displaystyle x} . [ 3 ]
In the field of operator algebras , a partition of unity is composed of projections [ 4 ] p i = p i ∗ = p i 2 {\displaystyle p_{i}=p_{i}^{*}=p_{i}^{2}} . In the case of C ∗ {\displaystyle \mathrm {C} ^{*}} -algebras , it can be shown that the entries are pairwise orthogonal : [ 5 ] p i p j = δ i , j p i ( p i , p j ∈ R ) . {\displaystyle p_{i}p_{j}=\delta _{i,j}p_{i}\qquad (p_{i},\,p_{j}\in R).} Note it is not the case that in a general *-algebra that the entries of a partition of unity are pairwise orthogonal. [ 6 ]
If a {\displaystyle a} is a normal element of a unital C ∗ {\displaystyle \mathrm {C} ^{*}} -algebra A {\displaystyle A} , and has finite spectrum σ ( a ) = { λ 1 , … , λ N } {\displaystyle \sigma (a)=\{\lambda _{1},\dots ,\lambda _{N}\}} , then the projections in the spectral decomposition : a = ∑ i = 1 N λ i P i , {\displaystyle a=\sum _{i=1}^{N}\lambda _{i}\,P_{i},} form a partition of unity. [ 7 ]
In the field of compact quantum groups , the rows and columns of the fundamental representation u ∈ M N ( C ) {\displaystyle u\in M_{N}(C)} of a quantum permutation group ( C , u ) {\displaystyle (C,u)} form partitions of unity. [ 8 ]
A partition of unity can be used to define the integral (with respect to a volume form ) of a function defined over a manifold: one first defines the integral of a function whose support is contained in a single coordinate patch of the manifold; then one uses a partition of unity to define the integral of an arbitrary function; finally one shows that the definition is independent of the chosen partition of unity.
A partition of unity can be used to show the existence of a Riemannian metric on an arbitrary manifold.
Method of steepest descent employs a partition of unity to construct asymptotics of integrals.
Linkwitz–Riley filter is an example of practical implementation of partition of unity to separate input signal into two output signals containing only high- or low-frequency components.
The Bernstein polynomials of a fixed degree m are a family of m +1 linearly independent single-variable polynomials that are a partition of unity for the unit interval [ 0 , 1 ] {\displaystyle [0,1]} .
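Assuming the standard definition B k,m (x) = C(m, k) x^k (1 − x)^(m−k), the sketch below checks that the m + 1 Bernstein polynomials of degree m sum to 1 across [0, 1].

```python
from math import comb

def bernstein(k, m, x):
    """Bernstein basis polynomial B_{k,m}(x) = C(m, k) * x^k * (1 - x)^(m - k)."""
    return comb(m, k) * x**k * (1 - x)**(m - k)

m = 3
for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(x, sum(bernstein(k, m, x) for k in range(m + 1)))   # always 1.0
```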
The weak Hilbert Nullstellensatz asserts that if f 1 , … , f r ∈ C [ x 1 , … , x n ] {\displaystyle f_{1},\ldots ,f_{r}\in \mathbb {C} [x_{1},\ldots ,x_{n}]} are polynomials with no common vanishing points in C n {\displaystyle \mathbb {C} ^{n}} , then there are polynomials a 1 , … , a r {\displaystyle a_{1},\ldots ,a_{r}} with a 1 f 1 + ⋯ + a r f r = 1 {\displaystyle a_{1}f_{1}+\cdots +a_{r}f_{r}=1} . That is, ρ i = a i f i {\displaystyle \rho _{i}=a_{i}f_{i}} form a polynomial partition of unity subordinate to the Zariski-open cover U i = { x ∈ C n ∣ f i ( x ) ≠ 0 } {\displaystyle U_{i}=\{x\in \mathbb {C} ^{n}\mid f_{i}(x)\neq 0\}} .
Partitions of unity are used to establish global smooth approximations for Sobolev functions in bounded domains. [ 9 ] | https://en.wikipedia.org/wiki/Partition_of_unity |
Partner-optimized inventory management , also known as partnerized inventory management or simply PIM , is an inventory management technique or model often used in deterministic inventory systems in which a significant portion of the total inventory regularly becomes stochastic in nature due to slowing and/or low demand. This is typical of heavy machinery and construction equipment, where the products themselves are extremely durable and have long lives in the field. Inventory in these cases needs to be maintained for an extended time to allow for repairs and product support, perhaps as much as two or more decades after a manufacturer has ceased production.
Traditional inventory management techniques break down in cases where a manufacturer maintains inventory to supply future maintenance of their in-service equipment. As demand for goods approaches zero, liquidation of inventory is indicated in most revenue management models. [ 1 ] Zero inventory to service products in the field, however, fails the organization in other business areas. Possible costs to manufacture replacement inventory and the harder-to-calculate costs of customer confidence erosion can be greater over time than the immediate financial concerns that are remedied by liquidating inventory entirely by scrapping or discarding it as waste . [ 2 ]
While scrapping returns inventory to a state of raw materials, Partner-Optimized Inventory Management (PIM) returns inventory to the market as intermediate goods to be used in production of other goods or non-capital spare parts . [ 3 ] An organization that uses the PIM model mitigates the immediate pinch point caused by inventory reduction by retaining as-needed mutual access to inventory through the marketplace for an indeterminate time rather than losing access immediately and irrevocably through scrapping or discarding the inventory as waste. | https://en.wikipedia.org/wiki/Partnerized_inventory_management |
The Partnership of a European Group of Aeronautics and Space UniversitieS ( PEGASUS ) is a network [ 1 ] of aeronautical universities in Europe [ 2 ] created in order to facilitate student exchanges and collaborative research between universities.
It was originally created by the groupement des écoles aéronautiques françaises (group of French aeronautical grandes écoles ) (ENAC, ENSMA and ISAE ) [ 3 ] in 1998. [ 4 ]
European manufacturers such as Airbus have close contact with the PEGASUS network. [ 5 ]
The network consists of 30 universities in 12 countries: [ 6 ] | https://en.wikipedia.org/wiki/Partnership_of_a_European_Group_of_Aeronautics_and_Space_Universities |
Partnership on Artificial Intelligence to Benefit People and Society, otherwise known as Partnership on AI (PAI), is a nonprofit coalition committed to the responsible use of artificial intelligence . Coming into inception in September 2016, PAI grouped together members from over 90 companies and non-profits in order to explore best practice recommendations for the tech community. [ 1 ]
The Partnership on AI was publicly announced on September 28, 2016 with founding members Amazon , Facebook , Google , DeepMind , Microsoft , and IBM , with interim co-chairs Eric Horvitz of Microsoft Research and Mustafa Suleyman of DeepMind. [ 2 ] [ 3 ] [ 4 ] [ 5 ] As of 2019, more than 100 partners from academia, civil society, industry, and nonprofits were member organizations. [ 6 ]
In January 2017, Apple head of advanced development for Siri , Tom Gruber , joined the Partnership on AI's board. [ 7 ] In October 2017, Terah Lyons joined the Partnership on AI as the organization's founding executive director. [ 8 ] Lyons brought to the organization her expertise in technology governance , with a specific focus in machine intelligence, AI, and robotics policy, having formerly served as Policy Advisor to the United States Chief Technology Officer Megan Smith . Lyons was succeeded by Partnership on AI board member Rebecca Finlay as interim executive director. Finlay was named CEO of Partnership on AI on October 26, 2021.
In October 2018, Baidu became the first Chinese firm to join the Partnership. [ 9 ]
In November 2020 the Partnership on AI announced the AI Incident Database (AIID), [ 10 ] which is a tool to identify, assess, manage, and communicate AI risk and harm.
In August 2021, the Partnership on AI submitted a response to the National Institute of Standards and Technology (NIST). The response provided examples of PAI’s work related to AI risk management, such as the Safety Critical AI report on responsible publication of AI research, the ABOUT ML project on documentation and transparency in machine learning lifecycles, and the AI Incident Database. [ 11 ] The response also highlighted how the AI Incident Database involves some of the minimum attributes in NIST’s AI RMF, such as being consensus-driven, risk-based, adaptable, and consistent with other approaches to managing AI risk. [ 11 ]
On October 26, 2021, Rebecca Finlay was named CEO. [ 12 ]
In February 2023, the Partnership on AI (PAI) launched a novel framework aimed at guiding the ethical development and use of synthetic media. This initiative was backed by a variety of initial partners, including notable entities such as Adobe, BBC, CBC/Radio-Canada, Bumble, OpenAI, TikTok, WITNESS, and synthetic media startups Synthesia, D-ID, and Respeecher. The framework, which emphasizes transparency, creativity, and safety, was the result of a year-long collaborative process involving contributions from a wide range of stakeholders, including synthetic media startups, social media platforms, news organizations, advocacy groups, academic institutions, policy professionals, and public commenters. [ 13 ]
Partnership on AI has a multiple pronged approach to achieve impact. Their initiatives are separated into five different programs: AI and media integrity; AI, work, and the economy; justice, transparency, and accountability; inclusive research and design; and security for AI. These programs aim to produce value through specific outputs, methodological tools, and articles. [ 14 ]
Through the program on AI & Media Integrity, PAI actively endeavors to establish best practices that ensure AI's positive influence on the global information ecosystem. Recognizing the potential for AI to facilitate harmful online content and amplify existing negative narratives, PAI is committed to mitigating these risks and fostering a responsible AI presence. [ 15 ]
The AI, Labor, and the Economy program serves as a collaborative platform, uniting economists, worker representative organizations, and PAI's partners to formulate a cohesive response on how AI can contribute to an inclusive economic future. The recent release of PAI's "Guidelines for AI and Shared Prosperity" on June 7, 2023, outlines a blueprint for the judicious use of AI across various stages, guiding organizations, policymakers, and labor entities. [ 16 ]
The Fairness, Transparency, and Accountability program, in conjunction with the Inclusive Research & Design program, strives to reshape the AI landscape towards justice and fairness. By exploring the intersections between AI and fundamental human values, the former establishes guidelines for algorithmic equity, explainability, and responsibility. Simultaneously, the latter empowers communities by providing guidelines on co-creating AI solutions, fostering inclusivity throughout the research and design process. [ 17 ] [ 18 ]
The Safety Critical AI program addresses the growing deployment of AI systems in pivotal sectors like medicine, finance, transportation, and social media. With a focus on anticipating and mitigating potential risks, the program brings together partners and stakeholders to develop best practices that span the entire AI research and development lifecycle. Notable initiatives include the establishment of the AI incident Database, formulation of norms for responsible publication, and the creation of the innovative AI learning environment SafeLife. [ 19 ]
The organization is also built on thematic foundations that drive Partnership on AI's focus. In addition to the programs mentioned above, Partnership on AI looks to expand the social impact of AI, encouraging positive social utility. The organization has highlighted potential benefits of AI in public welfare, education, sustainability, and other areas. With these specific use cases, Partnership on AI is developing an ethical framework with which to analyze and measure AI's ethical efficacy. The ethical framework places an emphasis on inclusive participatory practices that enhance equity in AI. [ 20 ]
The Partnership on AI has been involved in several initiatives aimed at promoting the responsible use of AI. One of their key initiatives is the development of a framework for the safe deployment of AI models. This framework guides model providers in developing and deploying AI models in a manner that ensures safety for society and can adapt to evolving capabilities and uses. [ 21 ]
In collaboration with DeepMind, the Partnership on AI has also launched a study to investigate the high attrition rates among women and minoritized individuals in tech. [ 22 ]
Recognizing the importance of explainability in AI, the Partnership on AI hosted a one-day, in-person workshop focused on the deployment of “explainable artificial intelligence” (XAI). This event brought together experts from various industries to discuss and explore the concept of XAI. [ 23 ]
In an effort to support information integrity, the Partnership on AI collaborated with First Draft to investigate effective strategies for addressing deceptive content online. [ 24 ] This initiative reflects the organization’s methodical approach to identifying and promoting best practices in AI.
The Partnership on AI is also creating resources to facilitate effective engagement between AI practitioners and impacted communities. [ 25 ]
In November 2020, the Partnership on AI announced the AI Incident Database (AIID), a project dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. The AIID, which shifted to a new special-purpose independent non-profit in 2022, serves as a valuable resource for understanding and mitigating the potential risks associated with AI. [ 26 ]
Most recently, PAI convened its 2023 Policy Forum. This event, held in London, was a gathering of diverse stakeholders to explore recent trends in AI policy globally and strategies for ensuring AI safety. During the event, the Partnership on AI (PAI) unveiled their "Guidance for Safe Foundation Model Deployment" for public feedback. This guidance, shaped by the Safety Critical AI Steering Committee and contributions from PAI's worldwide network, offers flexible principles for managing risks linked to large-scale AI implementation. Participants included policymakers, AI professionals, philanthropy and civil society members, and academic experts. [ 27 ]
The Board of Directors of the Partnership on AI (PAI) as of 2023 includes:
In October 2020, Access Now announced its official resignation from PAI in a letter. Access Now stated that it had found that there was an increasingly smaller role for civil society to play within PAI and that PAI had not influenced or changed the attitude of member companies or encouraged them to respond to or consult with civil society on a systematic basis. Access Now also expressed its disagreement with PAI's approach to AI ethics and risk assessment, and restated its own advocacy for an outright ban on technologies that are fundamentally incompatible with human rights, such as facial recognition or other biometric technologies that enable mass surveillance. [ 29 ] | https://en.wikipedia.org/wiki/Partnership_on_AI
In science and engineering parts-per notation is a set of pseudo-units to describe the small values of miscellaneous dimensionless quantities , e.g. mole fraction or mass fraction .
Since these fractions are quantity-per-quantity measures, they are pure numbers with no associated units of measurement . Commonly used are
This notation is not part of the International System of Units (SI), and its meaning is ambiguous.
Parts-per notation is often used describing dilute solutions in chemistry , for instance, the relative abundance of dissolved minerals or pollutants in water . The quantity "1 ppm" can be used for a mass fraction if a water-borne pollutant is present at one-millionth of a gram per gram of sample solution. When working with aqueous solutions , it is common to assume that the density of water is 1.00 g/mL. Therefore, it is common to equate 1 kilogram of water with 1 L of water. Consequently, 1 ppm corresponds to 1 mg/L and 1 ppb corresponds to 1 μg/L.
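To make the aqueous-solution shorthand concrete, the short Python sketch below converts a mass-fraction value in ppm to a concentration in mg/L under the stated assumption that the solution density is 1.00 g/mL; the function name and example values are illustrative only.

```python
def ppm_mass_to_mg_per_L(ppm, density_g_per_mL=1.00):
    """Convert a mass fraction in ppm to mg/L for a dilute aqueous solution.

    1 ppm (mass/mass) means 1 mg of solute per kg of solution; at a density
    of 1.00 g/mL, 1 kg of solution occupies 1 L, so 1 ppm ~ 1 mg/L.
    """
    litres_per_kg = 1.0 / density_g_per_mL   # 1000 g of solution -> (1000 / density) mL = (1 / density) L
    return ppm / litres_per_kg               # mg per kg divided by L per kg = mg/L

print(ppm_mass_to_mg_per_L(1.0))     # 1 ppm  -> 1.0 mg/L
print(ppm_mass_to_mg_per_L(0.001))   # 1 ppb  -> 0.001 mg/L, i.e. 1 ug/L
```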
Similarly, parts-per notation is used also in physics and engineering to express the value of various proportional phenomena. For instance, a special metal alloy might expand 1.2 micrometers per meter of length for every degree Celsius and this would be expressed as " α = 1.2 ppm/°C". Parts-per notation is also employed to denote the change, stability, or uncertainty in measurements. For instance, the accuracy of land-survey distance measurements when using a laser rangefinder might be 1 millimeter per kilometer of distance; this could be expressed as " Accuracy = 1 ppm." [ a ]
Parts-per notations are all dimensionless quantities: in mathematical expressions, the units of measurement always cancel. In a fraction like "2 nanometers per meter" (2 nm/m = 2 nano = 2 × 10 −9 = 2 ppb = 2 × 0.000 000 001 ), the units cancel, so the quotients are pure-number coefficients with positive values less than or equal to 1. When parts-per notations, including the percent symbol (%), are used in regular prose (as opposed to mathematical expressions), they are still pure-number dimensionless quantities. However, they generally take the literal "parts per" meaning of a comparative ratio (e.g. "2 ppb" would generally be interpreted as "two parts in a billion parts"). [ 1 ]
Parts-per notations may be expressed in terms of any unit of the same measure. For instance, the expansion coefficient of some brass alloy, α = 18.7 ppm/°C, may be expressed as 18.7 (μm/m)/°C, or as 18.7 (μin/in)/°C; the numeric value representing a relative proportion does not change with the adoption of a different unit of length. [ b ] Similarly, a metering pump that injects a trace chemical into the main process line at the proportional flow rate Q p = 125 ppm is doing so at a rate that may be expressed in a variety of volumetric units, including 125 μL/L, 125 μgal/gal, 125 cm 3 /m 3 , etc.
In nuclear magnetic resonance spectroscopy (NMR), chemical shift is usually expressed in ppm. It represents the difference of a measured frequency in parts per million from the reference frequency. The reference frequency depends on the instrument's magnetic field and the element being measured. It is usually expressed in MHz . Typical chemical shifts are rarely more than a few hundred Hz from the reference frequency, so chemical shifts are conveniently expressed in ppm ( Hz /MHz). Parts-per notation gives a dimensionless quantity that does not depend on the instrument's field strength.
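As a worked illustration of the convention just described, the sketch below computes a chemical shift in ppm from a frequency offset; the spectrometer frequency and offset are invented for the example.

```python
def chemical_shift_ppm(sample_freq_hz, reference_freq_hz):
    """delta(ppm) = (nu_sample - nu_ref) / nu_ref * 1e6, i.e. Hz of offset per MHz."""
    return (sample_freq_hz - reference_freq_hz) / reference_freq_hz * 1e6

nu_ref = 400.0e6              # hypothetical 400 MHz reference resonance, in Hz
nu_sample = nu_ref + 2905.0   # sample resonates 2905 Hz away from the reference
print(chemical_shift_ppm(nu_sample, nu_ref))   # ~7.26 ppm, independent of field strength
```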
Although the International Bureau of Weights and Measures (an international standards organization known also by its French -language initials BIPM) recognizes the use of parts-per notation, it is not formally part of the International System of Units (SI). [ 1 ] Note that although " percent " (%) is not formally part of the SI, both the BIPM and the International Organization for Standardization (ISO) take the position that "in mathematical expressions, the internationally recognized symbol % (percent) may be used with the SI to represent the number 0.01" for dimensionless quantities. [ 1 ] [ 4 ] According to IUPAP , "a continued source of annoyance to unit purists has been the continued use of percent, ppm, ppb, and ppt". [ 5 ] Although SI-compliant expressions should be used as an alternative, the parts-per notation remains nevertheless widely used in technical disciplines. The main problems with the parts-per notation are set out below.
Because the named numbers starting with a " billion " have different values in different countries, the BIPM suggests avoiding the use of "ppb" and "ppt" to prevent misunderstanding. The U.S. National Institute of Standards and Technology (NIST) takes the stringent position, stating that "the language-dependent terms [...] are not acceptable for use with the SI to express the values of quantities". [ 6 ]
Although "ppt" usually means "parts per trillion", it occasionally means "parts per thousand". Unless the meaning of "ppt" is defined explicitly, it has to be determined from the context. [ citation needed ]
Another problem of the parts-per notation is that it may refer to mass fraction , mole fraction or volume fraction . Since it is usually not stated which quantity is used, it is better to write the units out, such as kg/kg, mol/mol or m 3 /m 3 , even though they are all dimensionless. [ 7 ] The difference is quite significant when dealing with gases, and it is very important to specify which quantity is being used. For example, the conversion factor between a mass fraction of 1 ppb and a mole fraction of 1 ppb is about 4.7 for the greenhouse gas CFC-11 in air (Molar mass of CFC-11 / Mean molar mass of air = 137.368 / 28.97 = 4.74). For volume fraction, the suffix "V" or "v" is sometimes appended to the parts-per notation (e.g. ppmV, ppbv, pptv). [ 8 ] [ 9 ] However, ppbv and pptv are usually used to mean mole fractions – "volume fraction" would literally mean what volume of a pure substance is included in a given volume of a mixture, and this is rarely used except in the case of alcohol by volume .
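The 4.7 conversion factor quoted above follows directly from the ratio of molar masses; a minimal Python sketch using the values given in the text:

```python
# Mass fraction vs. mole fraction for a dilute trace gas in air.
# For a trace species, mole fraction ~ mass fraction * (M_air / M_species).
M_cfc11 = 137.368   # g/mol, CFC-11 (value from the text)
M_air = 28.97       # g/mol, mean molar mass of dry air (value from the text)

print(M_cfc11 / M_air)                 # ~4.74, the conversion factor quoted above
mass_fraction_ppb = 1.0
mole_fraction_ppb = mass_fraction_ppb * (M_air / M_cfc11)
print(mole_fraction_ppb)               # a 1 ppb mass fraction is only ~0.21 ppb as a mole fraction
```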
To distinguish the mass fraction from volume fraction or mole fraction, the letter "w" (standing for "weight") is sometimes added to the abbreviation (e.g. ppmw, ppbw). [ 10 ]
The usage of the parts-per notation is generally quite fixed within each specific branch of science, but often in a way that is inconsistent with its usage in other branches, leading some researchers to assume that their own usage (mass/mass, mol/mol, volume/volume, mass/volume, or others) is correct and that other usages are incorrect. This assumption sometimes leads them to omit the details of their own usage from their publications, and others may therefore misinterpret their results. For example, electrochemists often use volume/volume, chemical engineers may use mass/mass as well as volume/volume, and chemists and the fields of occupational safety and permissible exposure limits (e.g. permitted gas exposure limits in air ) may use mass/volume. Unfortunately, many otherwise excellent academic publications fail to specify their use of the parts-per notation, which irritates some readers, especially those who are not experts in the fields in question, because parts-per notation without a specified basis can mean anything. [ citation needed ]
SI-compliant units that can be used as alternatives are shown in the chart below. Expressions that the BIPM explicitly does not recognize as being suitable for denoting dimensionless quantities with the SI are marked with ! .
Note that the notations in the "SI units" column above are for the most part dimensionless quantities ; that is, the units of measurement factor out in expressions like "1 nm/m" (1 nm/m = 1 × 10 −9 ) so the ratios are pure-number coefficients with values less than 1.
Because of the cumbersome nature of expressing certain dimensionless quantities per SI guidelines, the International Union of Pure and Applied Physics (IUPAP) in 1999 proposed the adoption of the special name "uno" (symbol: U) to represent the number 1 in dimensionless quantities. [ 5 ] In 2004, a report to the International Committee for Weights and Measures (CIPM) stated that the response to the proposal of the uno "had been almost entirely negative", and the principal proponent "recommended dropping the idea". [ 12 ] To date, the uno has not been adopted by any standards organization . | https://en.wikipedia.org/wiki/Parts-per_notation |
Parts Manufacturer Approval ( PMA ) is an approval granted by the United States Federal Aviation Administration (FAA) to a manufacturer of aircraft parts. [ 1 ]
It is generally illegal in the United States to install replacement or modification parts on a certificated aircraft without an airworthiness release such as a Supplemental Type Certificate (STC) or Parts Manufacturing Approval (PMA). There are a number of other methods of compliance, including parts manufactured to government or industry standards, parts manufactured under technical standard order authorization [TSO], owner-/operator-produced parts, experimental aircraft, field approvals, etc. [ 2 ] [ 3 ]
PMA-holding manufacturers are permitted to make replacement parts for aircraft, even though they are not the original manufacturer of the aircraft. [ 4 ] The process is analogous to 'after-market' parts for automobiles, except that the United States aircraft parts production market remains tightly regulated by the FAA.
An applicant for a PMA applies for approval from the FAA. The FAA prioritizes its review of a new application based on its internal process called Project Prioritization. [ 5 ]
The FAA Order covering the application for PMA is Order 8110.42 revision D. This document is worded as instructions to the FAA reviewing personnel. An accompanying Advisory Circular (AC) 21.303-4 is intended to address the applicant; the previous revision of the order, 8110.42C, addressed both the applicant and the reviewer. Per the order, application for a PMA can be made in the following ways. Identicality: the applicant attempts to convince the FAA that the PMA part is identical to the OAH (Original Approval Holder) part. Identicality by licensure: the applicant provides evidence to the FAA that it has licensed the part data from the OAH; this evidence is usually in the form of an assist letter provided to the applicant by the OAH. PMA may also be granted based upon prior approval of an STC . As an example: if an STC were granted to alter an existing aircraft design, then that approval would also apply to the parts needed to make that modification; a PMA would be required, however, to manufacture the parts. The last method to obtain a PMA is test and computation. This approach consists of one or a combination of two methods: general analysis and comparative analysis . General analysis compares the proposed part to the functional requirements of that part when installed; comparative analysis compares the function of the proposed part to the OAH part. As an example: if a PMA application for flight control cables were to show that the PMA part exceeds the pull-strength requirements of the aircraft system it is meant for, that is general analysis; to show that it exceeds that of the OAH part is comparative analysis. The modern trend is to use a variety of techniques in combination in order to obtain approval of complicated parts, relying on the techniques that are most accurate and best able to provide the proof of airworthiness desired. [ 6 ] The cognizant regional FAA Aircraft Certification Office (ACO) determines if the applicant has shown compliance with all relevant airworthiness regulations and is thus entitled to design approval.
The second step in the application process is to apply to the FAA Manufacturing Inspection Divisional Office (MIDO) to obtain approval of the manufacturing quality assurance system (known as production approval). Production approval will be granted when the FAA is satisfied that the system will not permit parts to leave the system until the parts have been verified to meet the requirements of the approved design, and the system otherwise meets the requirements of the FAA quality system regulations. [ 7 ] A Production Approval Holder (PAH) will typically already have satisfied this requirement before PMA application is made.
PMA applications based upon licensure or STC do not require ACO approval (since the data has already been approved) and can go straight to the MIDO.
Under the Civil Air Regulations (CARs), the government had the authority to approve aircraft parts in a predecessor to the PMA rules. This authority was found in each of the sets of airworthiness standards published in the Civil Air Regulations. [ 8 ] CAR 3.31, for example, permitted the Administrator to approve aircraft parts as early as 1947. [ 9 ]
In 1952, the Civil Aeronautics Board adjusted the location of the parts production authority from the ".31" regulations to the ".18" regulations. [ 10 ] For example, the CAR 3 authority for modification and replacement parts could be found in section 3.18 after 1952.
In 1955, the Civil Aeronautics Board separated the parts authority out of the airworthiness standards, and placed it in a more general location so that one standard would apply to replacement and modification parts for all different forms of aircraft. [ 11 ]
In 1965 CAR 1.55 became Federal Aviation Regulation section 21.303. [ 12 ]
The 1965 regulatory change also imposed specific obligations on the PMA holder related to the Fabrication Inspection System. [ 13 ]
Amendment 21-38 of Part 21 was published May 26, 1972. [ 14 ] This was the next rule change to affect PMAs. This rule eliminated the incorporation by reference of type certification requirements in favor of PMA-specific data submission requirements. This change established the separate process and separate requirements for data that must be submitted by an applicant for a PMA (prior to this there was no explicit distinction between the application data requirements for type certificated products and the data requirements for PMAed articles). [ 15 ]
The aircraft parts aftermarket expanded greatly in the 1980s as airlines sought to reduce the costs of spares by finding alternative sources of parts. During this time period, though, many manufacturers failed to obtain PMA approvals from the FAA.
In the 1990s, the FAA engaged in an "Enhanced Enforcement" program that educated the industry about the importance of approval and as a consequence a huge number of parts were approved through formal FAA mechanisms. [ 16 ] Under this program, companies that had previously manufactured aircraft parts without PMAs could apply for PMAs in order to bring their manufacturing operations into full compliance with the regulations. This movement brought an explosion of PMA parts to the marketplace.
The FAA published a significant revision to the U.S. manufacturing regulations on October 16, 2009. [ 17 ] This new rule eliminates some of the legal distinctions between forms of production approval issued by the FAA, which should have the effect of further demonstrating the FAA's support of the quality systems implemented by PMA manufacturers. Specifically, instead of having a separate body of regulations for a PMA Fabrication Inspection System (FIS), [ 18 ] as was the case in prior regulations, the PMA regulations now include a cross reference to the 14 C.F.R. § 21.137, [ 19 ] which is the regulation defining the elements of a quality system for all production approval holders. [ 20 ] In practice, all production approval holders were held to the same production quality standards before the rule change [ 21 ] – this will now be more obvious in the FAA's regulations. Accomplishing this harmonization of standards was an important goal of the Modification and Replacement Parts Association ( MARPA ).
The new rule became effective April 16, 2011. [ 22 ] The FAA's FAQ on Part 21 stated that PMA quality systems would be evaluated for compliance by the FAA during certificate management activity after the compliance date of the rule. [ 23 ] Today, all FAA production approvals – whether for complete aircraft or for piece parts – rely on a common set of quality assurance system elements. E.g. 14 C.F.R. §§ 21.137 (quality system requirements for production certificates), 21.307 (requiring PMA holders to establish a quality system that meets the requirements of § 21.137), 21.607 (requiring TSOA holders to establish a quality system that meets the requirements of § 21.137).
The FAA is also working on new policies concerning parts fabricated in the course of repair. This practice has historically been confused with PMA manufacturing, although the two are actually quite different practices supported by different FAA regulations. [ 24 ] Today, FAA Advisory Circular 43.18 provides guidance for the fabrication of parts to be consumed purely during a maintenance operation, [ 25 ] and additional guidance is expected to be released in the near future. One of the key features of AC 43.18 is that it recommends implementation of a quality assurance system quite similar to the fabrication inspection systems that PMA manufacturers are required to have.
The trade association representing the PMA industry is the Modification and Replacement Parts Association ( MARPA ). MARPA works closely with the FAA [ 26 ] [ 27 ] and other agencies to promote PMA safety.
The United States has Bilateral Aviation Safety Agreements (BASA) with most of its major trading partners, and the standard language of these BASAs requires the trading partner to treat FAA-PMA as an importable aircraft part that is airworthy and eligible for installation on aircraft registered in the importing jurisdiction. [ 28 ] This process has been facilitated by the International Air Transport Association (IATA) which has published a book on accepting PMA parts. [ 29 ]
Although the PMA industry began in the United States, several countries have begun promoting production of approved aircraft parts within their own borders. These jurisdictions include:
Other jurisdictions have established PMA regulations and are working with trading partners to achieve acceptance of their PMA industries, and thus should be expected to enter the PMA marketplace in the near future. For example, Japan has PMA regulations and has secured a bilateral agreement with the United States that authorizes the export of these parts to the United States as airworthy aircraft parts. [ 33 ] | https://en.wikipedia.org/wiki/Parts_Manufacturer_Approval |
Parts stress modelling is a method in engineering and especially electronics to find an expected value for the rate of failure of the mechanical and electronic components of a system. It is based upon the idea that the more components that there are in the system, and the greater stress that they undergo in operation, the more often they will fail.
Parts count modelling is a simpler variant of the method, with component stress not taken into account.
Various organisations have published standards specifying how parts stress modelling should be carried out. Some from electronics are:
These "standards" produce different results, often by a factor of more than two, for the same modelled system. The differences illustrate the fact that this modelling is not an exact science. System designers often have to do the modelling using a standard specified by a customer, so that the customer can compare the results with other systems modelled in the same way.
All of these standards compute an expected overall failure rate for all the components in the system, which is not necessarily the rate at which the system as a whole fails. Systems often incorporate redundancy or fault tolerance so that they do not fail when an individual component fails.
Several companies provide programs for performing parts stress modelling calculations. It is also possible to do the modelling with a spreadsheet .
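As a rough sketch of the kind of calculation such programs or spreadsheets perform, the Python example below sums component failure rates, each scaled by per-component stress factors and by the kind of global multipliers (environment, purchasing quality) discussed later in this entry. Every component name and factor value here is invented for illustration; real values come from whichever standard the customer specifies.

```python
# Illustrative parts-count / parts-stress style calculation.
# lambda_total = sum_i (lambda_base_i * pi_stress_i) * pi_environment * pi_quality
# Base rates are in failures per million hours (FPMH); all numbers are made up.

components = [
    # (name, base failure rate [FPMH], stress factor from derating/temperature)
    ("ceramic capacitor", 0.002, 1.5),
    ("film resistor",     0.001, 1.2),
    ("power MOSFET",      0.012, 2.0),
    ("connector",         0.005, 1.0),
]

pi_environment = 4.0   # e.g. an "airborne" environment vs. 1.0 for "ground benign" (illustrative)
pi_quality     = 1.5   # purchasing / screening level (illustrative)

lambda_total = sum(base * stress for _, base, stress in components)
lambda_total *= pi_environment * pi_quality

print(f"Predicted failure rate: {lambda_total:.4f} failures per million hours")
print(f"MTBF estimate: {1e6 / lambda_total:,.0f} hours")
```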
All these models implicitly assume the idea of "random failure". Individual components fail at random times but at a predictable rate, analogous to the process of nuclear decay . One justification for this idea is that components fail by a process of wearout, a predictable decay after manufacture, but that the wearout life of individual components is scattered widely about some very long mean. The observed "random" failures are then just the extreme outliers at the early edge of this distribution. However, this may not be the whole picture.
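Under the constant-rate ("random failure") assumption just described, the probability that a component survives a given operating time follows the usual exponential law; a small illustrative calculation with invented numbers:

```python
import math

lambda_fpmh = 0.2          # assumed failure rate: 0.2 failures per million hours
mission_hours = 10_000.0   # assumed mission duration

lam = lambda_fpmh / 1e6                    # failures per hour
survival = math.exp(-lam * mission_hours)  # R(t) = exp(-lambda * t) for a constant rate
print(f"Probability of surviving {mission_hours:.0f} h: {survival:.4f}")   # ~0.998
```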
All the models use basically the same process, with detailed variations.
Other global modification parameters can be employed, which are assumed to have the same effect on every component failure rate. The most usual are the environment, such as ground benign or airborne, commercial , and the purchasing quality assurance process. The standards specify overall multiplier factors for these various choices. | https://en.wikipedia.org/wiki/Parts_stress_modelling |
Parvaresh–Vardy codes are a family of error-correcting codes first described in 2005 by Farzad Parvaresh and Alexander Vardy . [ 1 ] They can be used for efficient list-decoding .
| https://en.wikipedia.org/wiki/Parvaresh–Vardy_code
Order ( Latin : ordo ) is one of the eight major hierarchical taxonomic ranks in Linnaean taxonomy . It is classified between family and class . In biological classification , the order is a taxonomic rank used in the classification of organisms and recognized by the nomenclature codes . An immediately higher rank, superorder , is sometimes added directly above order, with suborder directly beneath order. An order can also be defined as a group of related families.
What does and does not belong to each order is determined by a taxonomist , as is whether a particular order should be recognized at all. Often there is no exact agreement, with different taxonomists each taking a different position. There are no hard rules that a taxonomist needs to follow in describing or recognizing an order. Some taxa are accepted almost universally, while others are recognized only rarely. [ 1 ]
The name of an order is usually written with a capital letter. [ 2 ] For some groups of organisms, their orders may follow consistent naming schemes . Orders of plants , fungi , and algae use the suffix -ales (e.g. Dictyotales ). [ 3 ] Orders of birds and fishes [ 4 ] use the Latin suffix -iformes meaning 'having the form of' (e.g. Passeriformes ), but orders of mammals , reptiles , amphibians and invertebrates are not so consistent (e.g. Artiodactyla , Anura , Crocodylia , Actiniaria , Primates ).
For some clades covered by the International Code of Zoological Nomenclature , several additional classifications are sometimes used, although not all of these are officially recognized.
In their 1997 classification of mammals , McKenna and Bell used two extra levels between superorder and order: grandorder and mirorder . [ 5 ] Michael Novacek (1986) inserted them at the same position. Michael Benton (2005) inserted them between superorder and magnorder instead. [ 6 ] This position was adopted by Systema Naturae 2000 and others.
In botany , the ranks of subclass and suborder are secondary ranks pre-defined as respectively above and below the rank of order. [ 7 ] Any number of further ranks can be used as long as they are clearly defined. [ 7 ]
The superorder rank is commonly used, with the ending -anae that was initiated by Armen Takhtajan 's publications from 1966 onwards. [ 8 ]
The order as a distinct rank of biological classification having its own distinctive name (and not just called a higher genus ( genus summum )) was first introduced by the German botanist Augustus Quirinus Rivinus in his classification of plants that appeared in a series of treatises in the 1690s. Carl Linnaeus was the first to apply it consistently to the division of all three kingdoms of nature (then minerals , plants , and animals ) in his Systema Naturae (1735, 1st. Ed.).
For plants, Linnaeus' orders in the Systema Naturae and the Species Plantarum were strictly artificial, introduced to subdivide the artificial classes into more comprehensible smaller groups. When the word ordo was first consistently used for natural units of plants, in 19th-century works such as the Prodromus Systematis Naturalis Regni Vegetabilis of Augustin Pyramus de Candolle and the Genera Plantarum of Bentham & Hooker, it indicated taxa that are now given the rank of family (see ordo naturalis , ' natural order ').
In French botanical publications, from Michel Adanson 's Familles naturelles des plantes (1763) and until the end of the 19th century, the word famille (plural: familles ) was used as a French equivalent for this Latin ordo . This equivalence was explicitly stated in the Alphonse Pyramus de Candolle 's Lois de la nomenclature botanique (1868), the precursor of the currently used International Code of Nomenclature for algae, fungi, and plants .
In the first international Rules of botanical nomenclature from the International Botanical Congress of 1905, the word family ( familia ) was assigned to the rank indicated by the French famille , while order ( ordo ) was reserved for a higher rank, for what in the 19th century had often been named a 'cohort' ( cohors , [ 10 ] plural cohortes ).
Some of the plant families still retain the names of Linnaean "natural orders" or even the names of pre-Linnaean natural groups recognized by Linnaeus as orders in his natural classification (e.g. Palmae or Labiatae ). Such names are known as descriptive family names.
In the field of zoology , the Linnaean orders were used more consistently. That is, the orders in the zoology part of the Systema Naturae refer to natural groups. Some of his ordinal names are still in use, e.g. Lepidoptera (moths and butterflies) and Diptera (flies, mosquitoes, midges, and gnats). [ 11 ]
In virology , the International Committee on Taxonomy of Viruses 's virus classification includes fifteen taxonomic ranks to be applied for viruses , viroids and satellite nucleic acids : realm , subrealm , kingdom , subkingdom, phylum , subphylum , class, subclass, order, suborder, family, subfamily , genus, subgenus , and species. [ 12 ] There are currently fourteen viral orders, each ending in the suffix -virales . [ 13 ] | https://en.wikipedia.org/wiki/Parvorder |
Parylene is the common name of a polymer whose backbone consists of para - benzenediyl rings − C 6 H 4 − connected by 1,2-ethanediyl bridges − CH 2 − CH 2 −. It can be obtained by polymerization of para -xylylene H 2 C = C 6 H 4 = CH 2 .
The name is also used for several polymers with the same backbone, where some hydrogen atoms are replaced by other functional groups . Some of these variants are designated in commerce by letter-number codes such as "parylene C" and "parylene AF-4". Some of these names are registered trademarks in some countries.
Coatings of parylene are often applied to electronic circuits and other equipment as electrical insulation , moisture barriers, or protection against corrosion and chemical attack ( conformal coating ). They are also used to reduce friction and in medicine to prevent adverse reactions to implanted devices . These coatings are typically applied by chemical vapor deposition in an atmosphere of the monomer para -xylylene.
Parylene is considered a "green" polymer because its polymerization needs no initiator or other chemicals to terminate the chain; and the coatings can be applied at or near room temperature, without any solvent .
Parylene was discovered in 1947 by Michael Szwarc as one of the thermal decomposition products of para -xylene H 3 C − C 6 H 4 − CH 3 above 1000 °C. Szwarc identified para -xylylene as the precursor by observing that reaction with iodine yielded para -xylylene di-iodide as the only product. The reaction yield was only a few percent. [ 1 ] [ 2 ]
A more efficient route was found in 1965 by William F. Gorham at Union Carbide. He deposited parylene films by the thermal decomposition of [2.2]paracyclophane at temperatures exceeding 550 °C and in vacuum below 1 Torr. This process did not require a solvent and resulted in chemically resistant films free from pinholes. Union Carbide commercialized a parylene coating system in 1965. [ 1 ] [ 2 ]
Union Carbide went on to undertake research into the synthesis of numerous parylene precursors, including parylene AF-4, throughout the 1960s into the early 1970s. Union Carbide purchased NovaTran (a parylene coater) in 1984 and combined it with other electronic chemical coating businesses to form the Specialty Coating Systems division. The division was sold to Cookson Electronics in 1994. [ 3 ]
There are parylene coating service companies located around the world, but there is limited commercial availability of parylene. The [2.2]paracyclophane precursors can be purchased for parylene N, C, D, AF-4 and VT-4. Parylene services are provided for N, C, AF-4, VT-4 and E (copolymer of N and E).
Parylene N is the un-substituted polymer obtained by polymerization of the para -xylene intermediate.
Derivatives of parylene can be obtained by replacing hydrogen atoms on the phenyl ring or the aliphatic bridge by other functional groups. The most common of these variants is parylene C, which has one hydrogen atom in the aryl ring replaced by chlorine . Another common variant is parylene D, with two such substitutions on the ring.
Parylene C is the most used variety, due to the low cost of its precursor and to the balance of its dielectric and moisture-barrier properties and its ease of deposition. A major disadvantage for many applications is its insolubility in any solvent at room temperature, which prevents removal of the coating when the part has to be re-worked.
Parylene C is also the most commonly used because of its relatively low cost. [ 4 ] It can be deposited at room temperature while still possessing a high degree of conformality and uniformity and a moderate deposition rate in a batch process.
Also, the chlorine on the phenyl ring of the parylene C repeat unit is problematic for RoHS compliance, especially for the printed circuit board manufacture. Moreover, some of the dimer precursor is decomposed by breaking of the aryl-chlorine bond during pyrolysis, generating carbonaceous material that contaminates the coating, and hydrogen chloride HCl that may harm vacuum pumps and other equipment. The chlorine atom leaves the phenyl ring in the pyrolysis tube at all temperatures; however, optimizing the pyrolysis temperature will minimize this problem. The free-radical (phenyl radical) generated in this process is not resonance-stabilized and mitigates the deposition of a parylene-like material on the downside of the pyrolysis tube. This material becomes carbonized and generates particles in situ to contaminate clean rooms and create defects on printed-circuit boards that are often called "stringers and nodules". Parylene N and E do not have this problem and therefore are preferred for manufacturing and clean room use.
Another common halogenated variant is parylene AF-4, with the four hydrogen atoms on the aliphatic chain replaced by fluorine atoms. This variant is also marketed under the trade names of parylene SF ( Kisco ) and HT parylene ( SCS ). The − CF 2 − unit that comprises the ethylene chain is the same as the repeating unit of PTFE (Teflon), consistent with its superior oxidative and UV stability. Parylene AF-4 has been used to protect outdoor LED displays and lighting from water, salt and pollutants successfully.
Another fluorinated variant is parylene VT-4 (also called parylene F), with fluorine substituted for the four hydrogens on the aryl ring. This variant is marketed by Kisco with the trademark Parylene CF. Because of the aliphatic −CH 2 − units, it has poor oxidative and UV stability, but still better than N, C, or D.
The hydrogen atoms can be replaced also by alkyl groups . Substitution may occur on either the phenyl ring or the ethylene bridge, or both.
Specifically, replacement of one hydrogen on the phenyl ring by a methyl group or an ethyl group yields parylene M and E respectively.
These substitutions increase the intermolecular (chain-to-chain) distance, which makes the polymer more soluble and permeable. For example, compared to parylene C, parylene M was shown to have a lower dielectric constant (2.48 vs. 3.2 at 1 kHz ). Parylene E had a lower tensile modulus (175 kpsi (1.21 GPa) vs. 460 kpsi (3.17 GPa)), a lower dielectric constant (2.34 vs. 3.05 at 10 kHz), slightly worse moisture barrier properties (4.1 vs. 0.6 g·mil/(atm·100 in²·24 hr) (11 vs. 1.6 kg·m·pmol⁻¹·m⁻²·s⁻¹)), and equivalent dielectric breakdown (5–6 kV/mil for a 1-mil coating), but better solubility. [ 5 ] [ 6 ] However, the copolymer of parylene N and E has barrier performance equivalent to that of parylene C.
Replacement of one hydrogen by methyl on each carbon of the ethyl bridge yields parylene AM-2, [−(CH 3 )CH−(C 6 H 4 )−(CH 3 )CH−] n (not to be confused with an amine -substituted variant trademarked by Kisco). The solubility of parylene AM-2 is not as good as parylene E.
While parylene coatings are mostly used to protect an object from water and other chemicals, some applications require a coating that can bind to adhesives or other coated parts, or immobilize various molecules such as dyes, catalysts, or enzymes.
These "reactive" parylene coatings can be obtained with chemically active substituents. Two commercially available products are parylene A, featuring one amine substituent − NH 2 in each unit, and parylene AM, with one methylene amine group − CH 2 NH 2 per unit. Both are trademarks of Kisco.
Parylene AM is more reactive than the A variant. The amine of the latter, being adjacent to the phenyl ring, is in resonance stabilization and therefore less basic. However, parylene A is much easier to synthesize and hence cheaper.
Another reactive variant is parylene X, which features an ethynyl group − C≡CH attached to the phenyl ring in some of the units. This variant, which contains no elements other than hydrogen and carbon, can be cross-linked by heat or with UV light and can react with copper or silver salts to generate the corresponding metalorganic complexes Cu-acetylide or Ag-acetylide . It can also undergo " click chemistry " and can be used as an adhesive , allowing parylene-to-parylene bonding without any by-products during processing. Unlike most other variants, parylene X is amorphous (non-crystalline).
It is possible to attach a chromophore directly to the [2.2]paracyclophane base molecule to impart color to parylene. [ citation needed ]
Copolymers [ 7 ] and nanocomposites (SiO 2 /parylene C) [ 8 ] of parylene have previously been deposited at near-room temperature. With strongly electron-withdrawing comonomers, parylene can be used to initiate polymerizations, such as with N-phenyl maleimide . Using the parylene C/SiO 2 nanocomposites, parylene C could be used as a sacrificial layer to make nanoporous silica thin films with a porosity of >90%. [ 9 ]
Parylene thin films and coatings are transparent; however, they are not amorphous except for the alkylated parylenes, e.g. parylene E. As a result of this semi-crystallinity, they scatter light. Parylene N and C have a low degree of crystallinity; however, parylene VT-4 and AF-4 are highly crystalline ~60% in their as-deposited condition (hexagonal crystal structure) and therefore are generally not suitable as optical materials.
Parylene C will become more crystalline if heated at elevated temperatures until its melting point at 270 °C.
Parylene N has a monoclinic crystal structure in its as-deposited condition and it does not appreciably become more crystalline until it undergoes a crystallographic phase transformation at ~220 °C to hexagonal, at which point it becomes highly crystalline like the fluorinated parylenes. It can reach 80% crystallinity at anneal temperatures up to 400 °C, after which point it degrades.
Parylenes are relatively flexible (0.5 GPa for parylene N), [ 10 ] except for cross-linked parylene X (1.0 GPa), [ 11 ] and have poor oxidative resistance (~60–100 °C, depending on failure criteria) and UV stability, [ 12 ] except for parylene AF-4. However, parylene AF-4 is more expensive due to a three-step synthesis of its precursor with low yield and poor deposition efficiency. Their UV stability is so poor that parylene cannot be exposed to regular sunlight without yellowing.
Nearly all the parylenes are insoluble at room temperature, except for the alkylated parylenes, one of which is parylene E, [ 6 ] and the alkylated-ethynyl parylenes. [ 13 ] This lack of solubility has made it difficult to re-work printed circuit boards coated with parylene.
As a moisture diffusion barrier, the efficacy of halogenated parylene coatings scales non-linearly with their density. Halogen atoms such as F, Cl and Br add much density to the coating and therefore allow the coating to be a better diffusion barrier; however, if parylenes are used as a diffusion barrier against water, then apolar chemistries such as parylene E are much more effective. For moisture barriers the three principal material parameters to be optimized are: coating density, coating polarity (olefin chemistry is best) and a glass-transition temperature above room temperature and ideally above the service limit of the printed-circuit board, device or part. In this regard parylene E is a good choice although it has a low density compared to, for example, parylene C.
Parylene coatings are generally applied by chemical vapor deposition in an atmosphere of the monomer para -xylylene or a derivative thereof. This method has one very strong benefit, namely it does not generate any byproducts besides the parylene polymer, which would need to be removed from the reaction chamber and could interfere with the polymerization.
Parts to be coated need to be clean in order to ensure good adherence of the film. Since the monomer diffuses, areas that are not to be coated must be hermetically sealed, without gaps, crevices or other openings. The part must be maintained in a relatively narrow window of pressure and temperature. [ 15 ]
The process involves three steps: generation of the gaseous monomer, adsorption on the part's surface, and polymerization of adsorbed film.
Polymerization of the adsorbed p -xylylene monomer requires a minimum threshold temperature. For parylene N, its threshold temperature is 40 °C.
The p -xylylene intermediate has two quantum mechanical states, the benzoid state (triplet state) and the quinoid state (singlet state). The triplet state is effectively the initiator and the singlet state is effectively the monomer. The triplet state can be de-activated when in contact with transition metals or metal oxides including Cu/CuO x . [ 16 ] [ 17 ] Many of the parylenes exhibit this selectivity based on quantum mechanical deactivation of the triplet state, including parylene X.
Polymerization may proceed by a variety of routes that differ in the transient termination of the growing chains, such as a radical end group − CH 2 • or an anionic (carbanion) end group − CH 2 ⁻ :
The monomer polymerizes only after it is physically adsorbed ( physisorbed ) on the part's surface. This process has inverse Arrhenius kinetics , meaning that it is stronger at lower temperatures than at higher temperatures. There is a critical threshold temperature above which there is practically no physisorption, and hence no deposition. The closer the deposition temperature is to the threshold temperature, the weaker the physisorption. Parylene C has a higher threshold temperature, 90 °C, and therefore has a much higher deposition rate, greater than 1 nm /s, while still yielding fairly uniform coatings. [ 4 ] In contrast, the threshold temperature of parylene AF-4 is very close to room temperature (30–35 °C); as a result, its deposition efficiency is poor. [ 18 ]
An important property of the monomer is the so-called 'sticking coefficient', which expresses the degree to which it adsorbs on the polymer. A lower coefficient results in a more uniform deposition thickness and a more conformal coating.
Another relevant property for the deposition process is polarizability, which determines how strongly the monomer interacts with the surface. Deposition of halogenated parylenes strongly correlates with molecular weight of the monomer. The fluorinated variants are an exception: the polarizability of parylene AF-4 is low, resulting in inefficient deposition.
The p -xylylene monomer is normally generated during the coating process by evaporating the cyclic dimer [2.2] para - cyclophane at a relatively low temperature, then decomposing the vapor at 450–700 °C and pressure 0.01–1.0 Torr . This method (Gorham process) yields 100% monomer with no by-products or decomposition of the monomer. [ 19 ] [ 20 ] [ 21 ]
The dimer can be synthesized from p -xylene in several steps involving bromination , amination and Hofmann elimination . [ 22 ]
The same method can be used to deposit substituted parylenes. For example, parylene C can be obtained from the dimeric precursor dichloro[2.2] para -cyclophane , except that the temperature must be carefully controlled since the chlorine - aryl bond breaks at 680 °C.
The standard Gorham process [ 5 ] is shown above for parylene AF-4. The octafluoro[2.2] para -cyclophane precursor dimer can be sublimed below <100 °C and cracked at 700–750 °C, higher than the temperature (680 °C) used to crack the unsubstituted cyclophane since the −CF 2 −CF 2 − bond is stronger than the −CH 2 −CH 2 − bond. This resonance-stabilized intermediate is transported to a room temperature deposition chamber where polymerization occurs under low pressure (1–100 mTorr) conditions. [ 18 ]
Another route to generation of the monomer is to use a para -xylene precursor with a suitable substituent on each methyl group , whose elimination generates para -xylylene.
Selection of a leaving group may consider its toxicity (which excludes sulfur and amine-based reactions), how easily it leaves the precursor, and possible interference with the polymerization. The leaving group can either be trapped before the deposition chamber, or it can be highly volatile so that it does not condense in the latter. [ 23 ]
For example, the precursor α,α'-dibromo-α,α,α',α'-tetrafluoro- para -xylene (CF 2 Br) 2 (C 6 H 4 ) yields parylene AF-4 with elimination of bromine . [ 24 ]
The advantage to this process is the low cost of synthesis for the precursor. The precursor is also a liquid and can be delivered by standard methods developed in the semiconductor industry, such as with a vaporizer, vaporizer with a bubbler , or a mass-flow controller . Originally the precursor was just thermally cracked, [ 25 ] but suitable catalysts lower the pyrolysis temperature, resulting in less char residue and a better coating. [ 26 ] [ 27 ] By either method an atomic bromine free-radical is given off from each methyl end, which can be converted to hydrogen bromide HBr and removed from monomer flow. Special precautions are needed since bromine and HBr are toxic and corrosive towards most metals and metal alloys, and bromine can damage viton O-rings .
A similar synthesis for parylene N uses the precursor α,α'-dimethoxy- p -xylene . [ 28 ] The methoxy group H 3 CO − is the leaving group; while it condenses in the deposition chamber, it does not interfere with the deposition of the polymer. [ 23 ] This precursor is much less expensive than [2.2] para -cyclophane. Moreover, being a liquid just above room temperature, this precursor can be delivered reliably using a mass-flow controller , whereas the generation and delivery of the gaseous monomer in the Gorham process are difficult to measure and control. [ 29 ]
The same chemistry can generate parylene AM-2 from the precursor α,α'-dimethyl-α,α'-dimethoxy- p -xylene.
Another example of this approach is the synthesis of parylene AF-4 from α,α'-diphenoxy-α,α,α',α'-tetrafluoro- para -xylene. In this case, the leaving group is phenoxy C 6 H 5 O −, which can be condensed before the deposition chamber. [ 30 ]
Parylenes may confer several desirable qualities to the coated parts. Among other properties, they are
Since the coating process takes place at ambient temperature in a mild vacuum, it can be applied even to temperature-sensitive objects such as dry biological specimens. The low temperature also results in low intrinsic stress in the thin film. Moreover, the only gas in the deposition chamber is the monomer, without any solvents, catalysts, or byproducts that could attack the object.
Parylene AF-4 and VT-4 are both fluorinated and as a result very expensive compared to parylene N and C, which has severely limited their commercial use, except for niche applications.
Parylene C and to a lesser extent AF-4, SF, HT (all the same polymer) are used for coating printed circuit boards (PCBs) and medical devices . There are numerous other applications as parylene is an excellent moisture barrier. It is the most bio-accepted coating for stents, defibrillators, pacemakers and other devices permanently implanted into the body. [ 33 ]
The classic molecular layer chemistries are self-assembled monolayers (SAMs). SAMs are long-chain alkyl chains, which interact with surfaces based on sulfur-metal interaction (alkylthiolates) [ 34 ] or a sol-gel type reaction with a hydroxylated oxide surface (trichlorosilyl alkyls or trialkoxy alkyls). [ 35 ] However, unless the gold or oxide surface is carefully treated and the alkyl chain is long, these SAMs form disordered monolayers, which do not pack well. [ 36 ] [ 37 ] This lack of packing causes issues in, for example, stiction in MEMS devices. [ 38 ]
The observation that parylenes could form ordered molecular layers (MLs) came with contact angle measurements, where MLs thicker than 10 Å had an equilibrium contact angle of 80 degrees (same as bulk parylene N) but those thinner had a reduced contact angle. [ 32 ] This was also confirmed with electrical measurements (bias-temperature stress measurements) using metal-insulator-semiconductor capacitors (MISCAPs). [ 39 ] In short, parylene N and AF-4 (those parylenes with no functional groups) are pin-hole free at ~14 Å. This results because the parylene repeat units possess a phenyl ring and due to the high electronic polarizability of the phenyl ring adjacent repeat units order themselves in the XY-plane. As a result of this interaction parylene MLs are surface independent, except for transition metals, which de-activate the triplet (benzoid) state and therefore the parylenes cannot be initiated. This finding of parylenes as molecular layers is very powerful for industrial applications because of the robustness of the process and that the MLs are deposited at room temperature. In this way parylenes can be used as diffusion barriers and for reducing the polarizability of surface (de-activation of oxide surfaces). Combining the properties of the reactive parylenes with the observation that they can form dense pin-hole-free molecular layers, parylene X has been utilized as a genome sequencing interface layer.
One caveat applies to the molecular-layer parylenes: they are deposited as oligomers, not as high polymer. [ 32 ] As a result, a vacuum anneal is needed to convert the oligomers to high polymer. For parylene N that temperature is 250 °C, whereas it is 300 °C for parylene AF-4.
Parylene films have been used in various applications, including [ 1 ]
Conformal coating | https://en.wikipedia.org/wiki/Parylene |
In philosophy , Pascal's mugging is a thought experiment demonstrating a problem in expected utility maximization. A rational agent should choose actions whose outcomes, when weighted by their probability, have higher utility . But some very unlikely outcomes may have very great utilities, and these utilities can grow faster than the probability diminishes. Hence the agent should focus more on vastly improbable cases with implausibly high rewards; this leads first to counter-intuitive choices, and then to incoherence as the utility of every choice becomes unbounded.
The name refers to Pascal's Wager , but unlike the wager, it does not require infinite rewards. [ 1 ] This sidesteps many objections to the Pascal's Wager dilemma that are based on the nature of infinity. [ 2 ]
The term "Pascal's mugging" to refer to this problem was originally coined by Eliezer Yudkowsky in the LessWrong forum. [ 3 ] [ 2 ] Philosopher Nick Bostrom later elaborated the thought experiment in the form of a fictional dialogue. [ 2 ] Subsequently, other authors published their own sequels to the events of this first dialogue, adopting the same literary style. [ 4 ] [ 5 ]
In Bostrom's description, [ 2 ] Blaise Pascal is accosted by a mugger who has forgotten their weapon. However, the mugger proposes a deal: the philosopher gives them his wallet, and in exchange the mugger will return twice the amount of money tomorrow. Pascal declines, pointing out that it is unlikely the deal will be honoured. The mugger then continues naming higher rewards, pointing out that even if it is just one chance in 1000 that they will be honourable, it would make sense for Pascal to make a deal for a 2000 times return. Pascal responds that the probability of that high return is even lower than one in 1000. The mugger argues back that for any low but strictly greater than 0 probability of being able to pay back a large amount of money (or pure utility) there exists a finite amount that makes it rational to take the bet. In one example, the mugger succeeds by promising Pascal 1,000 quadrillion happy days of life. Convinced by the argument, Pascal gives the mugger the wallet.
In one of Yudkowsky's examples, the mugger succeeds by saying "give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3 ↑ ↑ ↑ ↑ 3 {\displaystyle 3\uparrow \uparrow \uparrow \uparrow 3} people". Here, the number 3 ↑ ↑ ↑ ↑ 3 {\displaystyle 3\uparrow \uparrow \uparrow \uparrow 3} uses Knuth's up-arrow notation ; writing the number out in base 10 would require enormously more writing material than there are atoms in the known universe. [ 3 ]
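For readers unfamiliar with the notation, the hyperoperation behind Knuth's up-arrows can be written as a short recursion. The sketch below is purely illustrative and, as noted above, actually evaluating 3↑↑↑↑3 is hopelessly infeasible, so it is only run on tiny arguments.

```python
def up(a, arrows, b):
    """Knuth's up-arrow notation: a ↑^n b, defined recursively.

    a ↑ b   = a ** b
    a ↑^n b = a ↑^(n-1) (a ↑^n (b - 1)),  with a ↑^n 0 = 1.
    """
    if arrows == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, arrows - 1, up(a, arrows, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 2))  # 3↑↑2 = 3^3 = 27
print(up(2, 3, 3))  # 2↑↑↑3 = 2↑↑4 = 65536
# up(3, 4, 3) -- the 3↑↑↑↑3 from the text -- is far beyond any computer's reach.
```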
The supposed paradox results from two inconsistent views. On one side, an expected utility calculation — assuming the loss of five dollars to be valued at f {\displaystyle f} , the loss of a life to be valued at l {\displaystyle l} , and the probability that the mugger is telling the truth at t {\displaystyle t} — says to give the money if and only if ( 3 ↑ ↑ ↑ ↑ 3 ) × t × l > f {\displaystyle (3\uparrow \uparrow \uparrow \uparrow 3)\times t\times l>f} . Assuming that l {\displaystyle l} is higher than f {\displaystyle f} , it is considered rational to pay the mugger so long as t {\displaystyle t} is higher than 1 / ( 3 ↑ ↑ ↑ ↑ 3 ) {\displaystyle 1/(3\uparrow \uparrow \uparrow \uparrow 3)} , which is assumed to be true. [ note 1 ] On the other side of the argument, paying the mugger is intuitively irrational because it is exploitable: if the person being mugged accepts this line of reasoning, they can be exploited repeatedly for all of their money, resulting in a Dutch-book , which is typically considered irrational. Views differ on which of these arguments is logically correct. [ 3 ]
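A minimal numerical sketch of the expected-utility side of this comparison follows. All numbers are invented for illustration, and a large but machine-representable stand-in is used in place of 3↑↑↑↑3, which no computer can store.

```python
# Expected-utility view of the mugging, with illustrative (made-up) numbers.
f = 5.0          # disutility of handing over five dollars
l = 1.0e7        # disutility assigned to one lost life (arbitrary units)
t = 1e-50        # probability the mugger is telling the truth (absurdly small)
lives = 1e80     # stand-in for 3↑↑↑↑3, which cannot be represented directly

expected_loss_if_refuse = t * lives * l   # expected disutility of refusing
expected_loss_if_pay    = f               # certain disutility of paying

# Even with t = 1e-50, the huge payoff dominates, so naive expected-utility
# maximization says to pay -- the counter-intuitive conclusion described above.
print(expected_loss_if_refuse > expected_loss_if_pay)   # True
```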
Moreover, in many reasonable-seeming decision systems, Pascal's mugging causes the expected utility of any action to fail to converge, as an unlimited chain of successively dire scenarios similar to Pascal's mugging would need to be factored in. [ 7 ] [ 8 ]
Some of the arguments concerning this paradox affect not only the expected utility maximization theory, but may also apply to other theoretical systems, such as consequentialist ethics , for example. [ note 2 ]
Philosopher Nick Bostrom argues that Pascal's mugging, like Pascal's wager, suggests that giving a superintelligent artificial intelligence a flawed decision theory could be disastrous. [ 10 ] Pascal's mugging may also be relevant when considering low-probability, high-stakes events such as existential risk or charitable interventions with a low probability of success but extremely high rewards. Common sense seems to suggest that spending effort on too unlikely scenarios is irrational.
One advocated remedy might be to only use bounded utility functions: rewards cannot be arbitrarily large. [ 7 ] [ 11 ] Another approach is to use Bayesian reasoning to (qualitatively) judge the quality of evidence and probability estimates rather than naively calculate expectations. [ 6 ] Other approaches are to penalize the prior probability of hypotheses that argue that we are in a surprisingly unique position to affect large numbers of other people who cannot symmetrically affect us, [ note 3 ] reject providing the probability of a payout first, [ 15 ] or abandon quantitative decision procedures in the presence of extremely large risks. [ 8 ] | https://en.wikipedia.org/wiki/Pascal's_mugging |
In mathematics , Pascal's rule (or Pascal's formula ) is a combinatorial identity about binomial coefficients . The binomial coefficients are the numbers that appear in Pascal's triangle . Pascal's rule states that for positive integers n and k , ( n − 1 k ) + ( n − 1 k − 1 ) = ( n k ) , {\displaystyle {n-1 \choose k}+{n-1 \choose k-1}={n \choose k},} where ( n k ) {\displaystyle {\tbinom {n}{k}}} is the binomial coefficient, namely the coefficient of the x k term in the expansion of (1 + x ) n . There is no restriction on the relative sizes of n and k ; [ 1 ] in particular, the above identity remains valid when n < k since ( n k ) = 0 {\displaystyle {\tbinom {n}{k}}=0} whenever n < k .
Together with the boundary conditions ( n 0 ) = ( n n ) = 1 {\displaystyle {\tbinom {n}{0}}={\tbinom {n}{n}}=1} for all nonnegative integers n , Pascal's rule determines that ( n k ) = n ! k ! ( n − k ) ! , {\displaystyle {\binom {n}{k}}={\frac {n!}{k!(n-k)!}},} for all integers 0 ≤ k ≤ n . In this sense, Pascal's rule is the recurrence relation that defines the binomial coefficients.
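Because the rule and the boundary conditions determine the binomial coefficients completely, they can be used directly as a computation; a small Python sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def binom(n, k):
    """Binomial coefficient computed purely from Pascal's rule.

    Boundary conditions: C(n, 0) = C(n, n) = 1; C(n, k) = 0 when k < 0 or k > n.
    Recurrence (Pascal's rule): C(n, k) = C(n-1, k-1) + C(n-1, k).
    """
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return binom(n - 1, k - 1) + binom(n - 1, k)

print([binom(5, k) for k in range(6)])   # [1, 5, 10, 10, 5, 1] -- row 5 of Pascal's triangle
print(binom(10, 3))                      # 120 = 10! / (3! * 7!)
```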
Pascal's rule can also be generalized to apply to multinomial coefficients .
Pascal's rule has an intuitive combinatorial meaning, which is clearly expressed in this counting proof. [ 2 ] : 44
Proof . Recall that ( n k ) {\displaystyle {\tbinom {n}{k}}} equals the number of subsets with k elements from a set with n elements. Suppose one particular element is uniquely labeled X in a set with n elements.
To construct a subset of k elements containing X , include X and choose k − 1 elements from the remaining n − 1 elements in the set. There are ( n − 1 k − 1 ) {\displaystyle {\tbinom {n-1}{k-1}}} such subsets.
To construct a subset of k elements not containing X , choose k elements from the remaining n − 1 elements in the set. There are ( n − 1 k ) {\displaystyle {\tbinom {n-1}{k}}} such subsets.
Every subset of k elements either contains X or not. The total number of subsets with k elements in a set of n elements is the sum of the number of subsets containing X and the number of subsets that do not contain X , ( n − 1 k − 1 ) + ( n − 1 k ) {\displaystyle {\tbinom {n-1}{k-1}}+{\tbinom {n-1}{k}}} .
This equals ( n k ) {\displaystyle {\tbinom {n}{k}}} ; therefore, ( n k ) = ( n − 1 k − 1 ) + ( n − 1 k ) {\displaystyle {\tbinom {n}{k}}={\tbinom {n-1}{k-1}}+{\tbinom {n-1}{k}}} .
Alternatively, the algebraic derivation of the binomial case is as follows: {\displaystyle {\begin{aligned}{n-1 \choose k}+{n-1 \choose k-1}&={\frac {(n-1)!}{k!(n-1-k)!}}+{\frac {(n-1)!}{(k-1)!(n-k)!}}\\&=(n-1)!\left[{\frac {n-k}{k!(n-k)!}}+{\frac {k}{k!(n-k)!}}\right]\\&=(n-1)!{\frac {n}{k!(n-k)!}}\\&={\frac {n!}{k!(n-k)!}}\\&={\binom {n}{k}}.\end{aligned}}}
Another algebraic proof uses the alternative (product) definition of the binomial coefficient, ( n k ) = n ( n − 1 ) ⋯ ( n − k + 1 ) k ! {\displaystyle {\tbinom {n}{k}}={\frac {n(n-1)\cdots (n-k+1)}{k!}}} . Indeed,
{\displaystyle {\begin{aligned}{n-1 \choose k}+{n-1 \choose k-1}&={\frac {(n-1)\cdots ((n-1)-k+1)}{k!}}+{\frac {(n-1)\cdots ((n-1)-(k-1)+1)}{(k-1)!}}\\&={\frac {(n-1)\cdots (n-k)}{k!}}+{\frac {(n-1)\cdots (n-k+1)}{(k-1)!}}\\&={\frac {(n-1)\cdots (n-k+1)}{(k-1)!}}\left[{\frac {n-k}{k}}+1\right]\\&={\frac {(n-1)\cdots (n-k+1)}{(k-1)!}}\cdot {\frac {n}{k}}\\&={\frac {n(n-1)\cdots (n-k+1)}{k!}}\\&={\binom {n}{k}}.\end{aligned}}}
Since {\displaystyle {\tbinom {z}{k}}={\frac {z(z-1)\cdots (z-k+1)}{k!}}} is used as the extended definition of the binomial coefficient when z is a complex number, the above algebraic proof shows that Pascal's rule holds more generally when n is replaced by any complex number.
Pascal's rule can be generalized to multinomial coefficients. [ 2 ] : 144 For any integer p such that {\displaystyle p\geq 2} , {\displaystyle k_{1},k_{2},k_{3},\dots ,k_{p}\in \mathbb {N} ^{+}\!,} and {\displaystyle n=k_{1}+k_{2}+k_{3}+\cdots +k_{p}\geq 1} , {\displaystyle {n-1 \choose k_{1}-1,k_{2},k_{3},\dots ,k_{p}}+{n-1 \choose k_{1},k_{2}-1,k_{3},\dots ,k_{p}}+\cdots +{n-1 \choose k_{1},k_{2},k_{3},\dots ,k_{p}-1}={n \choose k_{1},k_{2},k_{3},\dots ,k_{p}}} where {\displaystyle {n \choose k_{1},k_{2},k_{3},\dots ,k_{p}}} is the coefficient of the {\displaystyle x_{1}^{k_{1}}x_{2}^{k_{2}}\cdots x_{p}^{k_{p}}} term in the expansion of {\displaystyle (x_{1}+x_{2}+\dots +x_{p})^{n}} .
The algebraic derivation for this general case is as follows. [ 2 ] : 144 Let p be an integer such that {\displaystyle p\geq 2} , {\displaystyle k_{1},k_{2},k_{3},\dots ,k_{p}\in \mathbb {N} ^{+}\!,} and {\displaystyle n=k_{1}+k_{2}+k_{3}+\cdots +k_{p}\geq 1} . Then {\displaystyle {\begin{aligned}&{}\quad {n-1 \choose k_{1}-1,k_{2},k_{3},\dots ,k_{p}}+{n-1 \choose k_{1},k_{2}-1,k_{3},\dots ,k_{p}}+\cdots +{n-1 \choose k_{1},k_{2},k_{3},\dots ,k_{p}-1}\\&={\frac {(n-1)!}{(k_{1}-1)!k_{2}!k_{3}!\cdots k_{p}!}}+{\frac {(n-1)!}{k_{1}!(k_{2}-1)!k_{3}!\cdots k_{p}!}}+\cdots +{\frac {(n-1)!}{k_{1}!k_{2}!k_{3}!\cdots (k_{p}-1)!}}\\&={\frac {k_{1}(n-1)!}{k_{1}!k_{2}!k_{3}!\cdots k_{p}!}}+{\frac {k_{2}(n-1)!}{k_{1}!k_{2}!k_{3}!\cdots k_{p}!}}+\cdots +{\frac {k_{p}(n-1)!}{k_{1}!k_{2}!k_{3}!\cdots k_{p}!}}={\frac {(k_{1}+k_{2}+\cdots +k_{p})(n-1)!}{k_{1}!k_{2}!k_{3}!\cdots k_{p}!}}\\&={\frac {n(n-1)!}{k_{1}!k_{2}!k_{3}!\cdots k_{p}!}}={\frac {n!}{k_{1}!k_{2}!k_{3}!\cdots k_{p}!}}={n \choose k_{1},k_{2},k_{3},\dots ,k_{p}}.\end{aligned}}}
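A quick numerical check of the generalized rule; the particular values n = 9 and (k1, k2, k3) = (2, 3, 4) are arbitrary choices for illustration.

```python
from math import factorial

def multinomial(n, ks):
    """Multinomial coefficient n! / (k1! k2! ... kp!), assuming sum(ks) == n."""
    out = factorial(n)
    for k in ks:
        out //= factorial(k)
    return out

n, ks = 9, (2, 3, 4)   # arbitrary example with k1 + k2 + k3 = n
lhs = sum(multinomial(n - 1, ks[:i] + (ks[i] - 1,) + ks[i + 1:]) for i in range(len(ks)))
rhs = multinomial(n, ks)
print(lhs, rhs, lhs == rhs)   # 1260 1260 True
```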
This article incorporates material from Pascal's triangle on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
This article incorporates material from Pascal's rule proof on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Pascal's_rule |
In projective geometry , Pascal's theorem (also known as the hexagrammum mysticum theorem , Latin for mystical hexagram ) states that if six arbitrary points are chosen on a conic (which may be an ellipse , parabola or hyperbola in an appropriate affine plane ) and joined by line segments in any order to form a hexagon , then the three pairs of opposite sides of the hexagon ( extended if necessary) meet at three points which lie on a straight line, called the Pascal line of the hexagon. It is named after Blaise Pascal .
The theorem is also valid in the Euclidean plane , but the statement needs to be adjusted to deal with the special cases when opposite sides are parallel.
This theorem is a generalization of Pappus's (hexagon) theorem , which is the special case of a degenerate conic of two lines with three points on each line.
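As a quick numerical illustration of the statement (not a proof), one can take six points on the unit circle, form a hexagon, intersect the three pairs of opposite sides in homogeneous coordinates, and check that the resulting points are collinear; the angles chosen below are arbitrary.

```python
import numpy as np

def join(p, q):
    """Line through two points in homogeneous coordinates (the cross product)."""
    return np.cross(p, q)

def meet(l, m):
    """Intersection point of two lines in homogeneous coordinates."""
    return np.cross(l, m)

# Six points on the unit circle (a conic), at arbitrary angles, as a hexagon ABCDEF.
angles = [0.3, 1.1, 2.0, 3.4, 4.2, 5.5]
A, B, C, D, E, F = [np.array([np.cos(t), np.sin(t), 1.0]) for t in angles]

# Pairs of opposite sides: AB & DE, BC & EF, CD & FA.
P = meet(join(A, B), join(D, E))
Q = meet(join(B, C), join(E, F))
R = meet(join(C, D), join(F, A))

# Normalize and test collinearity: the 3x3 determinant of the stacked points vanishes.
M = np.vstack([P / np.linalg.norm(P), Q / np.linalg.norm(Q), R / np.linalg.norm(R)])
print(f"collinearity determinant = {np.linalg.det(M):.2e}")  # ~0 up to rounding error
```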
The most natural setting for Pascal's theorem is in a projective plane since any two lines meet and no exceptions need to be made for parallel lines. However, the theorem remains valid in the Euclidean plane, with the correct interpretation of what happens when some opposite sides of the hexagon are parallel.
If exactly one pair of opposite sides of the hexagon are parallel, then the conclusion of the theorem is that the "Pascal line" determined by the two points of intersection is parallel to the parallel sides of the hexagon. If two pairs of opposite sides are parallel, then all three pairs of opposite sides form pairs of parallel lines and there is no Pascal line in the Euclidean plane (in this case, the line at infinity of the extended Euclidean plane is the Pascal line of the hexagon).
Pascal's theorem is the polar reciprocal and projective dual of Brianchon's theorem . It was formulated by Blaise Pascal in a note written in 1639 when he was 16 years old and published the following year as a broadside titled "Essay pour les coniques. Par B. P." [ 1 ]
Pascal's theorem is a special case of the Cayley–Bacharach theorem .
A degenerate case of Pascal's theorem (four points) is interesting; given points ABCD on a conic Γ , the intersections of alternate sides, AB ∩ CD and BC ∩ DA , together with the intersections of the tangents at opposite vertices ( A , C ) and ( B , D ), are four collinear points; the tangents are degenerate 'sides', taken at two possible positions on the 'hexagon', and the corresponding Pascal lines share either degenerate intersection. This can be proven independently using a property of pole-polar . If the conic is a circle, then another degenerate case says that, for a triangle, the three points that appear as the intersections of each side line with the corresponding side line of the Gergonne triangle are collinear.
Six is the minimum number of points on a conic about which special statements can be made, as five points determine a conic .
The converse is the Braikenridge–Maclaurin theorem , named for 18th-century British mathematicians William Braikenridge and Colin Maclaurin ( Mills 1984 ), which states that if the three intersection points of the three pairs of lines through opposite sides of a hexagon lie on a line, then the six vertices of the hexagon lie on a conic; the conic may be degenerate, as in Pappus's theorem. [ 2 ] The Braikenridge–Maclaurin theorem may be applied in the Braikenridge–Maclaurin construction , which is a synthetic construction of the conic defined by five points, by varying the sixth point.
The theorem was generalized by August Ferdinand Möbius in 1847, as follows: suppose a polygon with 4 n + 2 sides is inscribed in a conic section, and opposite pairs of sides are extended until they meet in 2 n + 1 points. Then if 2 n of those points lie on a common line, the last point will be on that line, too.
If six unordered points are given on a conic section, they can be connected into a hexagon in 60 different ways, resulting in 60 different instances of Pascal's theorem and 60 different Pascal lines. This configuration of 60 lines is called the Hexagrammum Mysticum . [ 3 ] [ 4 ]
As Thomas Kirkman proved in 1849, these 60 lines can be associated with 60 points in such a way that each point is on three lines and each line contains three points. The 60 points formed in this way are now known as the Kirkman points . [ 5 ] The Pascal lines also pass, three at a time, through 20 Steiner points . There are 20 Cayley lines which consist of a Steiner point and three Kirkman points. The Steiner points also lie, four at a time, on 15 Plücker lines . Furthermore, the 20 Cayley lines pass four at a time through 15 points known as the Salmon points . [ 6 ]
Pascal's original note [ 1 ] has no proof, but there are various modern proofs of the theorem.
It is sufficient to prove the theorem when the conic is a circle, because any (non-degenerate) conic can be reduced to a circle by a projective transformation. This was realised by Pascal, whose first lemma states the theorem for a circle. His second lemma states that what is true in one plane remains true upon projection to another plane. [ 1 ] Degenerate conics follow by continuity (the theorem is true for non-degenerate conics, and thus holds in the limit of a degenerate conic).
A short elementary proof of Pascal's theorem in the case of a circle was found by van Yzeren (1993) , based on the proof in ( Guggenheimer 1967 ). This proof establishes the theorem for a circle and then generalizes it to conics.
A short elementary computational proof in the case of the real projective plane was found by Stefanovic (2010) .
The proof can also be inferred from the existence of the isogonal conjugate . If we are to show that X = AB ∩ DE , Y = BC ∩ EF , Z = CD ∩ FA are collinear for concyclic ABCDEF , then notice that △ EYB and △ CYF are similar, and that X and Z will correspond to the isogonal conjugate if we overlap the similar triangles. This means that ∠ CYX = ∠ CYZ , hence making X , Y , Z collinear.
A short proof can be constructed using cross-ratio preservation. Projecting the tetrad ABCE from D onto line AB , we obtain the tetrad ABPX , and projecting the tetrad ABCE from F onto line BC , we obtain the tetrad QBCY . This means that R ( AB ; PX ) = R ( QB ; CY ) , where one point of the two tetrads ( B ) coincides; hence the lines connecting the other three pairs of corresponding points must be concurrent in order to preserve the cross-ratio. Therefore, X , Y and Z are collinear.
Another proof for Pascal's theorem for a circle uses Menelaus' theorem repeatedly.
Dandelin , the geometer who discovered the celebrated Dandelin spheres , came up with a beautiful proof using a "3D lifting" technique that is analogous to the 3D proof of Desargues' theorem . The proof makes use of the property that for every conic section we can find a one-sheet hyperboloid which passes through the conic.
There also exists a simple proof for Pascal's theorem for a circle using the law of sines and similarity .
Pascal's theorem has a short proof using the Cayley–Bacharach theorem that given any 8 points in general position, there is a unique ninth point such that all cubics through the first 8 also pass through the ninth point. In particular, if 2 general cubics intersect in 8 points then any other cubic through the same 8 points meets the ninth point of intersection of the first two cubics. Pascal's theorem follows by taking the 8 points as the 6 points on the hexagon and two of the points (say, M and N in the figure) on the would-be Pascal line, and the ninth point as the third point ( P in the figure). The first two cubics are two sets of 3 lines through the 6 points on the hexagon (for instance, the set AB, CD, EF , and the set BC, DE, FA ), and the third cubic is the union of the conic and the line MN . Here the "ninth intersection" P cannot lie on the conic by genericity, and hence it lies on MN .
The Cayley–Bacharach theorem is also used to prove that the group operation on cubic elliptic curves is associative. The same group operation can be applied on a conic if we choose a point E on the conic and a line MP in the plane. The sum of A and B is obtained by first finding the intersection point of line AB with MP , which is M . Next A and B add up to the second intersection point of the conic with line EM , which is D . Thus if Q is the second intersection point of the conic with line EN , then
Thus the group operation is associative. On the other hand, Pascal's theorem follows from the above associativity formula, and thus from the associativity of the group operation of elliptic curves by way of continuity.
Suppose f is the cubic polynomial vanishing on the three lines through AB, CD, EF and g is the cubic vanishing on the other three lines BC, DE, FA . Pick a generic point P on the conic and choose λ so that the cubic h = f + λg vanishes on P . Then h = 0 is a cubic that has 7 points A, B, C, D, E, F, P in common with the conic. But by Bézout's theorem a cubic and a conic have at most 3 × 2 = 6 points in common, unless they have a common component. So the cubic h = 0 has a component in common with the conic which must be the conic itself, so h = 0 is the union of the conic and a line. It is now easy to check that this line is the Pascal line.
Again given the hexagon on a conic of Pascal's theorem with the above notation for points (in the first figure), we have [ 7 ]
There exist 5-point, 4-point and 3-point degenerate cases of Pascal's theorem. In a degenerate case, two previously connected points of the figure will formally coincide and the connecting line becomes the tangent at the coalesced point. See the degenerate cases given in the added scheme and the external link on circle geometries . If one chooses suitable lines of the Pascal-figures as lines at infinity one gets many interesting figures on parabolas and hyperbolas . | https://en.wikipedia.org/wiki/Pascal's_theorem |
Pascal's wager is a philosophical argument advanced by Blaise Pascal (1623–1662), seventeenth-century French mathematician, philosopher, physicist, and theologian. [ 1 ] This argument posits that individuals essentially engage in a life-defining gamble regarding the belief in the existence of God .
Pascal contends that a rational person should adopt a lifestyle consistent with the existence of God and actively strive to believe in God. The reasoning behind this stance lies in the potential outcomes: if God does not exist, the individual incurs only finite losses, potentially sacrificing certain pleasures and luxuries. However, if God does indeed exist, they stand to gain immeasurably, as represented for example by an eternity in Heaven in Abrahamic tradition , while simultaneously avoiding boundless losses associated with an eternity in Hell . [ 2 ]
The original articulation of this wager can be found in Pascal's posthumously published work titled Pensées ("Thoughts"), which comprises a compilation of previously unpublished notes. [ 3 ] Notably, Pascal's wager is significant as it marks the first formal use of decision theory and anticipates later philosophies such as existentialism , pragmatism , and voluntarism . [ 4 ]
Critics of the wager question the ability to provide definitive proof of God's existence. The argument from inconsistent revelations highlights the presence of various belief systems, each claiming exclusive access to divine truths. Additionally, the argument from inauthentic belief raises concerns about the genuineness of faith in God if solely motivated by potential benefits and losses.
The wager uses the following logic (excerpts from Pensées , part III, §233):
Pascal asks the reader to analyze humankind's position, where our actions can be enormously consequential, but our understanding of those consequences is flawed. While we can discern a great deal through reason , we are ultimately forced to gamble. Pascal cites a number of distinct areas of uncertainty in human life:
We understand nothing of the works of God unless we take it as a principle that He wishes to blind some and to enlighten others. [ 5 ]
Pascal describes humanity as a finite being trapped within divine incomprehensibility , briefly thrust into being from non-being, with no explanation of "Why?" or "What?" or "How?" On Pascal's view, human finitude constrains our ability to achieve truth reliably.
Given that reason alone cannot determine whether God exists, Pascal concludes that this question functions as a coin toss. However, even if we do not know the outcome of this coin toss, we must base our actions on some expectation about the consequence. We must decide whether to live as though God exists, or whether to live as though God does not exist, even though we may be mistaken in either case.
In Pascal's assessment, participation in this wager is not optional. Merely by existing in a state of uncertainty, we are forced to choose between the available courses of action for practical purposes.
The Pensées passage on Pascal's wager is as follows:
If there is a God, He is infinitely incomprehensible, since, having neither parts nor limits, He has no affinity to us. We are then incapable of knowing either what He is or if He is....
..."God is, or He is not." But to which side shall we incline? Reason can decide nothing here. There is infinite chaos that separated us. A game is being played at the extremity of this infinite distance where heads or tails will turn up. What will you wager? According to reason, you can do neither the one thing nor the other; according to reason, you can defend neither of the propositions.
Do not, then, reprove for error those who have made a choice; for you know nothing about it. "No, but I blame them for having made, not this choice, but a choice; for again both he who chooses heads and he who chooses tails are equally at fault, they are both in the wrong. The true course is not to wager at all."
Yes; but you must wager. It is not optional. You are embarked. Which will you choose then? Let us see. Since you must choose, let us see which interests you least. You have two things to lose, the true and the good; and two things to stake, your reason and your will, your knowledge and your happiness; and your nature has two things to shun, error and misery. Your reason is no more shocked in choosing one rather than the other since you must of necessity choose. This is one point settled. But your happiness? Let us weigh the gain and the loss in wagering that God is. Let us estimate these two chances. If you gain, you gain all; if you lose, you lose nothing. Wager, then, without hesitation that He is.
"That is very fine. Yes, I must wager; but I may perhaps wager too much." Let us see. Since there is an equal risk of gain and of loss, if you had only to gain two lives, instead of one, you might still wager. But if there were three lives to gain, you would have to play (since you are under the necessity of playing), and you would be imprudent, when you are forced to play, not to change your life to gain three at a game where there is an equal risk of loss and gain. But there is an eternity of life and happiness. And this being so, if there were an infinity of chances, of which one only would be for you, you would still be right in wagering one to win two, and you would act stupidly, being obliged to play, by refusing to stake one life against three at a game in which out of an infinity of chances there is one for you if there were an infinity of an infinitely happy life to gain. But there is here an infinity of an infinitely happy life to gain, a chance of gain against a finite number of chances of loss, and what you stake is finite. [ 7 ]
Pascal begins by painting a situation where both the existence and non-existence of God are impossible to prove by human reason. So, supposing that reason cannot determine the truth between the two options, one must "wager" by weighing the possible consequences. Pascal's assumption is that, when it comes to making the decision, no one can refuse to participate; withholding assent is impossible because we are already "embarked", effectively living out the choice.
We only have two things to stake, our "reason" and our "happiness". Pascal considers that if there is " equal risk of loss and gain" (i.e. a coin toss), then human reason is powerless to address the question of whether God exists. That being the case, then human reason can only decide the question according to possible resulting happiness of the decision, weighing the gain and loss in believing that God exists and likewise in believing that God does not exist.
He points out that if a wager were between the equal chance of gaining two lifetimes of happiness and gaining nothing, then a person would be a fool to bet on the latter. The same would go if it were three lifetimes of happiness versus nothing. He then argues that it is simply unconscionable by comparison to bet against an eternal life of happiness for the possibility of gaining nothing. The wise decision is to wager that God exists, since "If you gain, you gain all; if you lose, you lose nothing", meaning one can gain eternal life if God exists, but if not, one will be no worse off in death than if one had not believed. On the other hand, if you bet against God, win or lose, you either gain nothing or lose everything. You are either unavoidably annihilated (in which case, nothing matters one way or the other) or miss the opportunity of eternal happiness. In note 194, speaking about those who live apathetically betting against God, he sums up by remarking, "It is to the glory of religion to have for enemies men so unreasonable..."
Pascal addressed the difficulty that reason and rationality pose to genuine belief by proposing that "acting as if [one] believed" could "cure [one] of unbelief":
But at least learn your inability to believe, since reason brings you to this, and yet you cannot believe. Endeavor then to convince yourself, not by increase of proofs of God, but by the abatement of your passions. You would like to attain faith, and do not know the way; you would like to cure yourself of unbelief and ask the remedy for it. Learn of those who have been bound like you, and who now stake all their possessions. These are people who know the way which you would follow, and who are cured of an ill of which you would be cured. Follow the way by which they began; by acting as if they believed, taking the holy water, having masses said, etc. Even this will naturally make you believe, and deaden your acuteness. [ 8 ]
The possibilities defined by Pascal's wager can be thought of as a decision under uncertainty with the values of the following decision matrix .
Given these values, the option of living as if God exists (B) dominates the option of living as if God does not exist (¬B), as long as one assumes a positive probability that God exists. In other words, the expected value gained by choosing B is greater than or equal to that of choosing ¬B.
In fact, according to decision theory, the only value that matters in the above matrix is the +∞ (infinitely positive). Any matrix of the following type (where f 1 , f 2 , and f 3 are all negative or finite positive numbers) results in (B) as being the only rational decision. [ 4 ]
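The dominance argument can be made concrete with a minimal sketch; the finite payoffs f1, f2, f3 and the prior probability used below are arbitrary placeholders, not values given by Pascal.

```python
import math

def expected_value(payoff_if_god, payoff_if_no_god, p_god):
    return p_god * payoff_if_god + (1.0 - p_god) * payoff_if_no_god

p = 1e-9                          # any strictly positive prior that God exists (assumed)
f1, f2, f3 = -50.0, -1e6, 100.0   # arbitrary finite payoffs for the other three cells

ev_believe     = expected_value(math.inf, f1, p)   # the +infinity entry dominates
ev_not_believe = expected_value(f2, f3, p)         # finite no matter what

print(ev_believe, ev_not_believe)   # inf  vs. roughly 100
```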
Pascal's intent was not to provide an argument to convince atheists to believe, but (a) to show the fallacy of attempting to use logical reasoning to prove or disprove God, and (b) to persuade atheists to sinlessness, as an aid to attaining faith ("it is this which will lessen the passions, which are your stumbling-blocks"). As Laurent Thirouin writes (note that the numbering of the items in the Pensées is not standardized; Thirouin's 418 is this article's 233):
The celebrity of fragment 418 has been established at the price of mutilation. By titling this text "the wager", readers have been fixated only on one part of Pascal's reasoning. It doesn't conclude with a QED at the end of the mathematical part. The unbeliever who had provoked this long analysis to counter his previous objection ("Maybe I bet too much") is still not ready to join the apologist on the side of faith. He put forward two new objections, undermining the foundations of the wager: the impossibility to know, and the obligation of playing. [ 9 ]
To be put at the beginning of Pascal's planned book, the wager was meant to show that logical reasoning cannot support faith or lack thereof:
We have to accept reality and accept the reaction of the libertine when he rejects arguments he is unable to counter. The conclusion is evident: if men believe or refuse to believe, it is not how some believers sometimes say and most unbelievers claim because their own reason justifies the position they have adopted. Belief in God doesn't depend upon rational evidence, no matter which position. [ 10 ]
Frederick Copleston writes that Pascal did not intend the wager as proof of God's existence or even a substitute for such proofs. He argues that the wager must be understood in the context of Pascal addressing the wager to those who "though they are also unconvinced by the arguments of sceptics and atheists" also "remain in a state of suspended judgment". Pascal's aim was to prepare "their minds and the production of dispositions favourable to belief". [ 11 ]
Criticism of Pascal's wager began soon after it was published. Non-believers questioned the "benefits" of a deity whose "realm" is beyond reason, while the religiously orthodox primarily took issue with the wager's deistic and agnostic language. Believers criticized it for not proving God's existence, for encouraging false belief, and for the problem of which religion and which God should be worshipped. [ 12 ] [ 13 ]
The probabilist mathematician Pierre Simon de Laplace ridiculed the use of probability in theology, believing that even following Pascal's reasoning, it is not worth making a bet, for the hope of profit – equal to the product of the value of the testimonies (infinitely small) and the value of the happiness they promise (which is significant but finite) – must necessarily be infinitely small. [ 14 ]
Voltaire (another prominent French writer of the age of Enlightenment ), a generation after Pascal, regarded the idea of the wager as a "proof of God" as "indecent and childish", adding, "the interest I have to believe a thing is no proof that such a thing exists". [ 15 ] Pascal, however, did not advance the wager as a proof of God's existence but rather as a necessary pragmatic decision which is "impossible to avoid" for any living person. [ 16 ] He argued that abstaining from making a wager is not an option and that "reason is incapable of divining the truth"; thus, a decision of whether to believe in the existence of God must be made by "considering the consequences of each possibility".
Voltaire's critique concerns not the nature of the Pascalian wager as proof of God's existence, but the contention that the very belief Pascal tried to promote is not convincing. Voltaire hints at the fact that Pascal, as a Jansenist , believed that only a small, and already predestined, portion of humanity would eventually be saved by God.
Voltaire explained that no matter how far someone is tempted with rewards to believe in Christian salvation , the result will be at best a faint belief. [ a ] Pascal, in his Pensées , agrees with this, not stating that people can choose to believe (and therefore make a safe wager), but rather that some cannot believe.
As Étienne Souriau explained, in order to accept Pascal's argument, the bettor needs to be certain that God seriously intends to honour the bet; he says that the wager assumes that God also accepts the bet, which is not proved. Pascal's bettor is here like the fool who, seeing a leaf floating on a river's waters and quivering for a few seconds between the two sides of a stone, says: "I bet a million with Rothschild that it will finally take the left path." And, effectively, the leaf passed on the left side of the stone, but unfortunately for the fool Rothschild never said "I [will take that] bet". [ 17 ]
Since there have been many religions throughout history, and therefore many conceptions of God (or gods), some assert that all of them need to be factored into the wager, in an argumentation known as the argument from inconsistent revelations. This, its proponents argue, would lead to a high probability of believing in "the wrong god" and would eliminate the mathematical advantage Pascal claimed with his wager. Denis Diderot , a contemporary of Voltaire, expressed this opinion when asked about the wager, saying "an Imam could reason the same way". [ 18 ] J. L. Mackie writes that "the church within which alone salvation is to be found is not necessarily the Church of Rome, but perhaps that of the Anabaptists or the Mormons or the Muslim Sunnis or the worshipers of Kali or of Odin ." [ 19 ]
Pascal considers this type of objection briefly in the notes compiled into the Pensées , and dismisses it: [ 20 ]
What say [the unbelievers] then? "Do we not see," say they, "that the brutes live and die like men, and Turks like Christians? They have their ceremonies, their prophets, their doctors, their saints, their monks, like us," etc. If you care but little to know the truth, that is enough to leave you in repose. But if you desire with all your heart to know it, it is not enough; look at it in detail. That would be sufficient for a question in philosophy; but not here, where everything is at stake. And yet, after a superficial reflection of this kind, we go to amuse ourselves, etc. Let us inquire of this same religion whether it does not give a reason for this obscurity; perhaps it will teach it to us. [ 5 ]
Pascal says that the skepticism of unbelievers who rest content with the many-religions objection has seduced them into a fatal "repose". If they were really bent on knowing the truth, they would be persuaded to examine "in detail" whether Christianity is like any other religion, but they just cannot be bothered. [ 21 ] Their objection might be sufficient were the subject concerned merely some "question in philosophy", but not "here, where everything is at stake". In "a matter where they themselves, their eternity, their all are concerned", [ 5 ] they can manage no better than "a superficial reflection" ("une reflexion légère") and, thinking they have scored a point by asking a leading question , they go off to amuse themselves. [ 22 ]
As Pascal scholars observe, Pascal regarded the many-religions objection as a rhetorical ploy, a "trap" that he had no intention of falling into. [ 23 ]
David Wetsel notes that Pascal's treatment of the pagan religions is brisk: "As far as Pascal is concerned, the demise of the pagan religions of antiquity speaks for itself. Those pagan religions which still exist in the New World, in India, and in Africa are not even worth a second glance. They are obviously the work of superstition and ignorance and have nothing in them which might interest 'les gens habiles' ('clever men') [ 24 ] [ 25 ] Islam warrants more attention, being distinguished from paganism (which for Pascal presumably includes all the other non-Christian religions) by its claim to be a revealed religion. Nevertheless, Pascal concludes that the religion founded by Mohammed can on several counts be shown to be devoid of divine authority, and that therefore, as a path to the knowledge of God, it is as much a dead end as paganism." [ 26 ] Judaism, in view of its close links to Christianity, he deals with elsewhere. [ 27 ]
The many-religions objection is taken more seriously by some later apologists of the wager, who argue that of the rival options only those awarding infinite happiness affect the wager's dominance . In the opinion of these apologists "finite, semi-blissful promises such as Kali's or Odin's" therefore drop out of consideration. [ 4 ] Also, the infinite bliss that the rival conception of God offers has to be mutually exclusive. If Christ's promise of bliss can be attained concurrently with Jehovah 's and Allah 's (all three being identified as the God of Abraham ), there is no conflict in the decision matrix in the case where the cost of believing in the wrong conception of God is neutral (limbo/purgatory/spiritual death), although this would be countered with an infinite cost in the case where not believing in the correct conception of God results in punishment (hell). [ 28 ]
Ecumenical interpretations of the wager [ 29 ] argue that it could even be suggested that believing in a generic God, or a god by the wrong name, is acceptable so long as that conception of God has similar essential characteristics of the conception of God considered in Pascal's wager (perhaps the God of Aristotle ). Proponents of this line of reasoning suggest that either all of the conceptions of God or gods throughout history truly boil down to just a small set of "genuine options", or that if Pascal's wager can simply bring a person to believe in "generic theism", it has done its job. [ 28 ]
Pascal argues implicitly for the uniqueness of Christianity in the wager itself, writing: "If there is a God, He is infinitely incomprehensible...Who then can blame the Christians for not being able to give reasons for their beliefs, professing as they do a religion which they cannot explain by reason?" [ 30 ]
Some critics argue that Pascal's wager, for those who cannot believe, suggests feigning belief to gain eternal reward. Richard Dawkins argues that this would be dishonest and immoral and that, in addition to this, it is absurd to think that God, being just and omniscient, would not see through this deceptive strategy on the part of the "believer", thus nullifying the benefits of the wager. [ 13 ] William James in his ' Will to Believe ' states that "We feel that a faith in masses and holy water adopted wilfully after such a mechanical calculation would lack the inner soul of faith's reality; and if we were ourselves in the place of the Deity, we should probably take particular pleasure in cutting off believers of this pattern from their infinite reward. It is evident that unless there be some pre-existing tendency to believe in masses and holy water, the option offered to the will by Pascal is not a living option". [ 31 ]
Since these criticisms are concerned not with the validity of the wager itself, but with its possible aftermath—namely that a person who has been convinced of the overwhelming odds in favor of belief might still find themself unable to sincerely believe—they are tangential to the thrust of the wager. What such critics are objecting to is Pascal's subsequent advice to an unbeliever who, having concluded that the only rational way to wager is in favor of God's existence, points out, reasonably enough, that this by no means makes them a believer. This hypothetical unbeliever complains, "I am so made that I cannot believe. What would you have me do?" [ 5 ] Pascal, far from suggesting that God can be deceived by outward show, says that God does not regard it at all: "God looks only at what is inward." [ 5 ] For a person who is already convinced of the odds of the wager but cannot seem to put their heart into the belief, he offers practical advice.
Explicitly addressing the question of inability to believe, Pascal argues that if the wager is valid, the inability to believe is irrational, and therefore must be caused by feelings: "your inability to believe, because reason compels you to [believe] and yet you cannot, [comes] from your passions." This inability, therefore, can be overcome by diminishing these irrational sentiments: "Learn from those who were bound like you. . . . Follow the way by which they began; by acting as if they believed, taking the holy water, having masses said, etc. Even this will naturally make you believe, and deaden your acuteness.—'But this is what I am afraid of.'—And why? What have you to lose?" [ 32 ]
An uncontroversial doctrine in both Roman Catholic and Protestant theology is that mere belief in God is insufficient to attain salvation, the standard cite being James 2:19 ( KJV ): "Thou believest that there is one God; thou doest well: the devils also believe, and tremble." Salvation requires "faith" not just in the sense of belief, but of trust and obedience. Pascal and his sister , a nun, were among the leaders of Roman Catholicism's Jansenist school of thought whose doctrine of salvation was close to Protestantism in emphasizing faith over works. Both Jansenists and Protestants followed St. Augustine in this emphasis (Martin Luther belonged to the Augustinian Order of monks). Augustine wrote
So our faith has to be distinguished from the faith of the demons. Our faith, you see, purifies the heart, their faith makes them guilty. They act wickedly, and so they say to the Lord, "What have you to do with us?" When you hear the demons saying this, do you imagine they don't recognize him? "We know who you are," they say. "You are the Son of God" (Lk 4:34). Peter says this and he is praised for it; 14 the demon says it, and is condemned. Why's that, if not because the words may be the same, but the heart is very different? So let us distinguish our faith, and see that believing is not enough. That's not the sort of faith that purifies the heart. [ 33 ]
Since Pascal's position was that "saving" belief in God required more than logical assent , accepting the wager could only be a first step. Hence his advice on what steps one could take to arrive at belief. [ citation needed ]
Some other critics [ who? ] have objected to Pascal's wager on the grounds that he wrongly assumes what type of epistemic character God would likely value in his rational creatures if he existed. [ citation needed ]
Since at least 1992, some scholars have analogized Pascal's wager to decisions about climate change . [ 49 ] Two differences from Pascal's wager are posited regarding climate change: first, climate change is more likely than Pascal's God to exist, as there is scientific evidence for one but not the other. [ 50 ] Secondly, the calculated penalty for unchecked climate change would be large, but is not generally considered to be infinite. [ 51 ] Magnate Warren Buffett has written that climate change "bears a similarity to Pascal's Wager on the Existence of God. Pascal, it may be recalled, argued that if there were only a tiny probability that God truly existed, it made sense to behave as if He did because the rewards could be infinite whereas the lack of belief risked eternal misery. Likewise, if there is only a 1% chance the planet is heading toward a truly major disaster and delay means passing a point of no return, inaction now is foolhardy." [ 52 ] [ 53 ] | https://en.wikipedia.org/wiki/Pascal's_wager |
Pascal Elias Saikaly is a Lebanese professor of Environmental Science and Engineering . He is best known for the use of omics for applied studies of microbiology in engineered and natural wastewater treatment systems, including bioelectrochemistry , membrane bioreactors , and granular sludge. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
Saikaly collaborates with and leads teams of scientists and engineers who have developed novel approaches to harvesting electrical energy from wastewater while simultaneously producing useful byproducts. [ 5 ] In particular, he combines advances from nanotechnology and materials research with advances from microbial ecology to develop devices that generate bioelectricity . [ 6 ] This work supports the long-term strategic efforts of the King Abdullah University of Science and Technology to research and commercialize alternative sources of energy. Saikaly's research addresses broader issues of importance in water-limited environments, including the use of seawater for toilet flushing. [ 7 ]
Saikaly earned his B.S. and M.S. from the American University of Beirut . In 2005, he completed his Ph.D. at the University of Cincinnati . From 2005 to 2007, he completed postdoctoral studies at North Carolina State University . From 2008 to 2010, he was an assistant professor at the American University of Beirut . In 2010, he joined the faculty of King Abdullah University of Science and Technology , where he is currently a full professor.
Saikaly has more than 100 publications listed on Scopus that have been cited a total of more than 3000 times, giving him an h-index of more than 30. His most cited articles include: | https://en.wikipedia.org/wiki/Pascal_Saikaly |
In geometry , Pasch's theorem , stated in 1882 by the German mathematician Moritz Pasch , [ 1 ] is a result in plane geometry which cannot be derived from Euclid's postulates .
The statement is as follows:
Pasch's theorem — Given points a , b , c , and d on a line, if it is known that the points are ordered as ( a , b , c ) and ( b , c , d ), then it is also true that ( a , b , d ). [ 2 ]
[Here, for example, ( a , b , c ) means that point b lies between points a and c .]
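A small numeric sanity check of this ordering statement for points given by real coordinates (an illustration only, not a derivation from the axioms):

```python
import random

def between(a, b, c):
    """b lies between a and c on the real line."""
    return a < b < c or c < b < a

random.seed(0)
for _ in range(100_000):
    a, b, c, d = (random.uniform(-10, 10) for _ in range(4))
    if between(a, b, c) and between(b, c, d):
        assert between(a, b, d)   # the conclusion (a, b, d) of Pasch's theorem
print("no counterexample found")
```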
David Hilbert originally included Pasch's theorem as an axiom in his modern treatment of Euclidean geometry in The Foundations of Geometry (1899). However, it was found by E. H. Moore in 1902 that the axiom is redundant, [ 3 ] and revised editions now list it as a theorem. Thus Pasch's theorem is also known as Hilbert's discarded axiom .
Pasch's axiom , a separate statement, is also included and remains an axiom in Hilbert's treatment.
| https://en.wikipedia.org/wiki/Pasch's_theorem |
Paschen's law is an equation that gives the breakdown voltage , that is, the voltage necessary to start a discharge or electric arc , between two electrodes in a gas as a function of pressure and gap length. [ 2 ] [ 3 ] It is named after Friedrich Paschen who discovered it empirically in 1889. [ 4 ]
Paschen studied the breakdown voltage of various gases between parallel metal plates as the gas pressure and gap distance were varied:
For a given gas, the voltage is a function only of the product of the pressure and gap length. [ 2 ] [ 3 ] The curve he found of voltage versus the pressure–gap length product is called Paschen's curve . He found an equation that fit these curves, which is now called Paschen's law. [ 3 ]
At higher pressures and gap lengths, the breakdown voltage is approximately proportional to the product of pressure and gap length, and the term Paschen's law is sometimes used to refer to this simpler relation. [ 5 ] However, this is only roughly true, over a limited range of the curve.
Early vacuum experimenters found a rather surprising behavior. An arc would sometimes take place in a long irregular path rather than at the minimal distance between the electrodes. For example, in air, at a pressure of one atmosphere , the distance for minimal breakdown voltage is about 7.5 μm. The voltage required to arc this distance is 327 V, which is insufficient to ignite the arcs for gaps that are either wider or narrower. For a 3.5 μm gap, the required voltage is 533 V, nearly twice as much. If 500 V were applied, it would not be sufficient to arc at the 2.85 μm distance, but would arc at a 7.5 μm distance.
Paschen found that breakdown voltage was described by the equation [ 1 ] {\displaystyle V_{\text{B}}={\frac {Bpd}{\ln(Apd)-\ln \left[\ln \left(1+{\frac {1}{\gamma _{\text{se}}}}\right)\right]}}}
where V B {\displaystyle V_{\text{B}}} is the breakdown voltage in volts , p {\displaystyle p} is the pressure in pascals , d {\displaystyle d} is the gap distance in meters , γ se {\displaystyle \gamma _{\text{se}}} is the secondary-electron-emission coefficient (the number of secondary electrons produced per incident positive ion), A {\displaystyle A} is the saturation ionization in the gas at a particular E / p {\displaystyle E/p} ( electric field /pressure), and B {\displaystyle B} is related to the excitation and ionization energies.
The constants A {\displaystyle A} and B {\displaystyle B} interpolate the first Townsend coefficient α = A p e − B p / E {\displaystyle \alpha =Ape^{-Bp/E}} . They are determined experimentally and found to be roughly constant over a restricted range of E / p {\displaystyle E/p} for any given gas. For example, air with an E / p {\displaystyle E/p} in the range of 450 to 7500 V/(kPa·cm), A {\displaystyle A} = 112.50 (kPa·cm) −1 and B {\displaystyle B} = 2737.50 V/(kPa·cm). [ 6 ]
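As a rough numerical sketch, the equation can be evaluated with the air constants quoted above; the secondary-emission coefficient used below is an assumed placeholder, since γse depends strongly on the cathode material.

```python
import numpy as np

def paschen_vb(pd_kpa_cm, A=112.50, B=2737.50, gamma_se=0.01):
    """Breakdown voltage in volts for a pressure-gap product pd given in kPa*cm.

    A and B are the air values quoted above; gamma_se = 0.01 is an assumption.
    """
    return B * pd_kpa_cm / (np.log(A * pd_kpa_cm) - np.log(np.log(1.0 + 1.0 / gamma_se)))

for pd in [0.1, 0.3, 1.0, 3.0, 10.0]:   # kPa*cm, right-hand branch of the curve
    print(f"pd = {pd:5.1f} kPa*cm  ->  V_B = {paschen_vb(pd):7.0f} V")
```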
The graph of this equation is the Paschen curve. By differentiating it with respect to p d {\displaystyle pd} and setting the derivative to zero, the minimal voltage can be found. This yields {\displaystyle V_{\text{B,min}}={\frac {eB}{A}}\ln \left(1+{\frac {1}{\gamma _{\text{se}}}}\right)} , attained at {\displaystyle pd={\frac {e}{A}}\ln \left(1+{\frac {1}{\gamma _{\text{se}}}}\right)} , where e ≈ 2.718 is Euler's number,
and predicts the occurrence of a minimal breakdown voltage for p d {\displaystyle pd} = 7.5×10 −6 m·atm. This is 327 V in air at standard atmospheric pressure at a distance of 7.5 μm.
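A short continuation of the sketch above locates this minimum from the differentiated form; with the same assumed γse the result lands near the values quoted for air.

```python
import math

A, B, gamma_se = 112.50, 2737.50, 0.01      # air constants from above; gamma_se assumed
e = math.e

pd_min = (e / A) * math.log(1.0 + 1.0 / gamma_se)      # kPa*cm at the minimum
vb_min = (e * B / A) * math.log(1.0 + 1.0 / gamma_se)  # minimal breakdown voltage, V

gap_um = pd_min / 101.325 * 1e4   # gap in micrometres at 1 atm (101.325 kPa); 1 cm = 1e4 um
print(f"pd at minimum ~ {pd_min:.3f} kPa*cm  (~{gap_um:.1f} um at 1 atm)")
print(f"V_B at minimum ~ {vb_min:.0f} V")
```

The small offset from the 7.5 μm and 327 V figures quoted above simply reflects the assumed value of γse.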
The composition of the gas determines both the minimal arc voltage and the distance at which it occurs. For argon , the minimal arc voltage is 137 V at a larger 12 μm. For sulfur dioxide , the minimal arc voltage is 457 V at only 4.4 μm.
For air at standard conditions for temperature and pressure (STP), the voltage needed to arc a 1-metre gap is about 3.4 MV. [ 7 ] The intensity of the electric field for this gap is therefore 3.4 MV/m.
The electric field needed to arc across the minimal-voltage gap is much greater than what is necessary to arc a gap of one metre. At large gaps (or large pd ) Paschen's law is known to fail. The Meek criterion for breakdown is usually used for large gaps. [ 8 ] It takes into account non-uniformity in the electric field and the formation of streamers due to the build-up of charge within the gap that can occur over long distances. For a 7.5 μm gap the arc voltage is 327 V, which corresponds to about 43 MV/m. This is about 13 times greater than the field strength for the 1-metre gap. The phenomenon is well verified experimentally and is referred to as the Paschen minimum.
The equation loses accuracy for gaps under about 10 μm in air at one atmosphere [ 9 ] and incorrectly predicts an infinite arc voltage at a gap of about 2.7 μm. Breakdown voltage can also differ from the Paschen curve prediction for very small electrode gaps, when field emission from the cathode surface becomes important.
The mean free path of a molecule in a gas is the average distance between its collisions with other molecules. This is inversely proportional to the pressure of the gas, given constant temperature. In air at STP the mean free path of molecules is about 96 nm. Since electrons are much smaller, their average distance between collisions with molecules is about 5.6 times longer, or about 0.5 μm. This is a substantial fraction of the 7.5 μm spacing between the electrodes for minimal arc voltage. If the electron is in an electric field of 43 MV/m, it will be accelerated and acquire 21.5 eV of energy in 0.5 μm of travel in the direction of the field. The first ionization energy needed to dislodge an electron from a nitrogen molecule is about 15.6 eV. The accelerated electron will acquire more than enough energy to ionize a nitrogen molecule. This liberated electron will in turn be accelerated, which will lead to another collision. A chain reaction then leads to avalanche breakdown , and an arc takes place from the cascade of released electrons. [ 10 ]
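The arithmetic in this paragraph can be checked directly; the 0.5 μm electron mean free path is the rounded value used in the text.

```python
field = 43e6            # V/m, field across the minimum-voltage gap
molecule_mfp = 96e-9    # m, mean free path of molecules in air at STP
electron_mfp = 0.5e-6   # m, ~5.6 x molecule_mfp, rounded as in the text

# For a charge of one elementary charge, energy in eV = field [V/m] * distance [m].
energy_eV = field * electron_mfp
print(f"energy gained over one free path ~ {energy_eV:.1f} eV "
      f"(first ionization energy of N2 ~ 15.6 eV)")
```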
More collisions will take place in the electron path between the electrodes in a higher-pressure gas. When the pressure–gap product p d {\displaystyle pd} is high, an electron will collide with many different gas molecules as it travels from the cathode to the anode. Each of the collisions randomizes the electron direction, so the electron is not always being accelerated by the electric field —sometimes it travels back towards the cathode and is decelerated by the field.
Collisions reduce the electron's energy and make it more difficult for it to ionize a molecule. Energy losses from a greater number of collisions require larger voltages for the electrons to accumulate sufficient energy to ionize many gas molecules, which is required to produce an avalanche breakdown .
On the left side of the Paschen minimum, the p d {\displaystyle pd} product is small. The electron mean free path can become long compared to the gap between the electrodes. In this case, the electrons might gain large amounts of energy, but have fewer ionizing collisions. A greater voltage is therefore required to assure ionization of enough gas molecules to start an avalanche.
To calculate the breakdown voltage, a homogeneous electric field is assumed. This is the case in a parallel-plate capacitor setup. The electrodes may have the distance d {\displaystyle d} . The cathode is located at the point x = 0 {\displaystyle x=0} .
To get impact ionization , the electron energy E e {\displaystyle E_{e}} must become greater than the ionization energy E I {\displaystyle E_{\text{I}}} of the gas atoms between the plates. Per unit length of path, α {\displaystyle \alpha } ionizations will occur; α {\displaystyle \alpha } is known as the first Townsend coefficient , as it was introduced by Townsend. [ 11 ] The increase of the electron current Γ e {\displaystyle \Gamma _{e}} can be described for the assumed setup as
(So the number of free electrons at the anode is equal to the number of free electrons at the cathode that were multiplied by impact ionization. The larger d {\displaystyle d} and/or α {\displaystyle \alpha } , the more free electrons are created.)
The number of created electrons is
Neglecting possible multiple ionizations of the same atom, the number of created ions is the same as the number of created electrons:
Γ i {\displaystyle \Gamma _{i}} is the ion current. To keep the discharge going on, free electrons must be created at the cathode surface. This is possible because the ions hitting the cathode release secondary electrons at the impact. (For very large applied voltages also field electron emission can occur.) Without field emission, we can write
where γ {\displaystyle \gamma } is the mean number of generated secondary electrons per ion. This is also known as the second Townsend coefficient. Assuming that Γ i ( d ) = 0 {\displaystyle \Gamma _{i}(d)=0} , one gets the relation between the Townsend coefficients by putting ( 4 ) into ( 3 ) and transforming: α d = ln ( 1 + 1 / γ ) {\displaystyle \alpha d=\ln \left(1+{\frac {1}{\gamma }}\right)} ( 5 )
How large is α {\displaystyle \alpha } ? The number of ionizations depends upon the probability that an electron hits a gas molecule. This probability P {\displaystyle P} is the ratio of the cross-sectional area of a collision between electron and ion, σ {\displaystyle \sigma } , to the overall area A {\displaystyle A} that is available for the electron to fly through:
As expressed by the second part of the equation, it is also possible to express the probability as the ratio of the path traveled by the electron x {\displaystyle x} to the mean free path λ {\displaystyle \lambda } (the distance at which another collision occurs).
N {\displaystyle N} is the number of molecules which electrons can hit. It can be calculated using the equation of state of the ideal gas
For a collision between two particles of radii r a {\displaystyle r_{a}} and r b {\displaystyle r_{b}} , the cross section is σ = π ( r a + r b ) 2 {\displaystyle \sigma =\pi (r_{a}+r_{b})^{2}} . As the radius of an electron can be neglected compared to the radius of an ion r I {\displaystyle r_{I}} , it simplifies to σ = π r I 2 {\displaystyle \sigma =\pi r_{I}^{2}} . Using this relation, putting ( 7 ) into ( 6 ) and solving for λ {\displaystyle \lambda } one gets
where the factor L {\displaystyle L} was only introduced for a better overview.
The alteration of the current of not yet collided electrons at every point in the path x {\displaystyle x} can be expressed as
This differential equation can easily be solved:
The probability that λ > x {\displaystyle \lambda >x} (that there was not yet a collision at the point x {\displaystyle x} ) is
According to its definition α {\displaystyle \alpha } is the number of ionizations per length of path and thus the relation of the probability that there was no collision in the mean free path of the ions, and the mean free path of the electrons:
It was hereby considered that the energy E {\displaystyle E} that a charged particle can gain between collisions depends on the electric field strength E {\displaystyle {\mathcal {E}}} and the charge Q {\displaystyle Q} :
For the parallel-plate capacitor we have E = U d {\displaystyle {\mathcal {E}}={\frac {U}{d}}} , where U {\displaystyle U} is the applied voltage. As a single ionization was assumed Q {\displaystyle Q} is the elementary charge e {\displaystyle e} . We can now put ( 13 ) and ( 8 ) into ( 12 ) and get
Putting this into (5) and transforming to U {\displaystyle U} we get the Paschen law for the breakdown voltage U b r e a k d o w n {\displaystyle U_{\mathrm {breakdown} }} that was first investigated by Paschen in [ 4 ] and whose formula was first derived by Townsend in [ 12 ]
Plasma ignition in the definition of Townsend ( Townsend discharge ) is a self-sustaining discharge, independent of an external source of free electrons. This means that electrons from the cathode can reach the anode in the distance d {\displaystyle d} and ionize at least one atom on their way. So according to the definition of α {\displaystyle \alpha } this relation must be fulfilled:
If α d = 1 {\displaystyle \alpha d=1} is used instead of ( 5 ) one gets for the breakdown voltage
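A small sketch contrasting the two ignition criteria just described, using the air constants quoted earlier and an assumed γ; the functional form α = A p e^(−Bp/E) is the interpolation mentioned above.

```python
import numpy as np

A, B = 112.50, 2737.50   # air constants from above, in (kPa*cm)^-1 and V/(kPa*cm)
gamma = 0.01             # assumed second Townsend coefficient

def u_breakdown(pd, k):
    """Solve alpha*d = k with alpha = A*p*exp(-B*p/E) and E = U/d."""
    return B * pd / np.log(A * pd / k)

for pd in [0.3, 1.0, 3.0, 10.0]:   # kPa*cm
    u_gamma = u_breakdown(pd, np.log(1.0 + 1.0 / gamma))   # criterion (5), with gamma
    u_one   = u_breakdown(pd, 1.0)                         # simpler alpha*d = 1 criterion
    print(f"pd = {pd:5.1f}: U = {u_gamma:6.0f} V (with gamma), {u_one:6.0f} V (alpha*d = 1)")
```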
Paschen's law requires that:
Different gases will have different mean free paths for molecules and electrons. This is because different molecules have different ionization cross sections, that is, different effective diameters. Noble gases like helium and argon are monatomic , which makes them harder to ionize, and they tend to have smaller effective diameters. This gives them greater mean free paths.
Ionization potentials differ between molecules, as does the speed at which they recapture electrons after they have been knocked out of orbit. All three effects change the number of collisions needed to cause an exponential growth in free electrons. These free electrons are necessary to cause an arc. | https://en.wikipedia.org/wiki/Paschen's_law |
The Zeeman effect ( Dutch: [ˈzeːmɑn] ) is the splitting of a spectral line into several components in the presence of a static magnetic field . It is caused by the interaction of the magnetic field with the magnetic moment of the atomic electron associated with its orbital motion and spin ; this interaction shifts some orbital energies more than others, resulting in the split spectrum. The effect is named after the Dutch physicist Pieter Zeeman , who discovered it in 1896 and received a Nobel Prize in Physics for this discovery. It is analogous to the Stark effect , the splitting of a spectral line into several components in the presence of an electric field . Also, similar to the Stark effect, transitions between different components have, in general, different intensities, with some being entirely forbidden (in the dipole approximation), as governed by the selection rules .
Since the distance between the Zeeman sub-levels is a function of magnetic field strength, this effect can be used to measure magnetic field strength, e.g. that of the Sun and other stars or in laboratory plasmas .
In 1896 Zeeman learned that his laboratory had one of Henry Augustus Rowland 's highest resolving diffraction gratings . Zeeman had read James Clerk Maxwell 's article in Encyclopædia Britannica describing Michael Faraday 's failed attempts to influence light with magnetism. Zeeman wondered if the new spectrographic techniques could succeed where early efforts had not. [ 1 ] : 75
When illuminated by a slit-shaped source, the grating produces a long array of slit images corresponding to different wavelengths. Zeeman placed a piece of asbestos soaked in salt water into a Bunsen burner flame at the source of the grating: he could easily see two lines for sodium light emission. Energizing a 10- kilogauss magnet around the flame, he observed a slight broadening of the sodium images. [ 1 ] : 76
When Zeeman switched to cadmium as the source, he observed the images split when the magnet was energized. These splittings could be analyzed with Hendrik Lorentz 's then-new electron theory . In retrospect, we now know that the magnetic effects on sodium require quantum-mechanical treatment. [ 1 ] : 77 Zeeman and Lorentz were awarded the 1902 Nobel Prize; in his acceptance speech Zeeman explained his apparatus and showed slides of the spectrographic images. [ 2 ]
Historically, one distinguishes between the normal and an anomalous Zeeman effect (discovered by Thomas Preston in Dublin, Ireland [ 3 ] ). The anomalous effect appears on transitions where the net spin of the electrons is non-zero. It was called "anomalous" because the electron spin had not yet been discovered, and so there was no good explanation for it at the time that Zeeman observed the effect. Wolfgang Pauli recalled that when asked by a colleague as to why he looked unhappy, he replied: "How can one look happy when he is thinking about the anomalous Zeeman effect?" [ 4 ]
At higher magnetic field strength the effect ceases to be linear. At even higher field strengths, comparable to the strength of the atom's internal field, the electron coupling is disturbed and the spectral lines rearrange. This is called the Paschen–Back effect .
In modern scientific literature, these terms are rarely used, with a tendency to use just the "Zeeman effect". Another rarely used obscure term is inverse Zeeman effect , [ 5 ] referring to the Zeeman effect in an absorption spectral line.
A similar effect, splitting of the nuclear energy levels in the presence of a magnetic field, is referred to as the nuclear Zeeman effect . [ 6 ]
The total Hamiltonian of an atom in a magnetic field is H = H 0 + V M , {\displaystyle H=H_{0}+V_{\text{M}},} where H 0 {\displaystyle H_{0}} is the unperturbed Hamiltonian of the atom, and V M {\displaystyle V_{\text{M}}} is the perturbation due to the magnetic field: V M = − μ → ⋅ B → , {\displaystyle V_{\text{M}}=-{\vec {\mu }}\cdot {\vec {B}},} where μ → {\displaystyle {\vec {\mu }}} is the magnetic moment of the atom. The magnetic moment consists of the electronic and nuclear parts; however, the latter is many orders of magnitude smaller and will be neglected here. Therefore, μ → ≈ − μ B g J → ℏ , {\displaystyle {\vec {\mu }}\approx -{\frac {\mu _{\text{B}}g{\vec {J}}}{\hbar }},} where μ B {\displaystyle \mu _{\text{B}}} is the Bohr magneton , J → {\displaystyle {\vec {J}}} is the total electronic angular momentum , and g {\displaystyle g} is the Landé g-factor .
A more accurate approach is to take into account that the operator of the magnetic moment of an electron is a sum of the contributions of the orbital angular momentum L → {\displaystyle {\vec {L}}} and the spin angular momentum S → {\displaystyle {\vec {S}}} , with each multiplied by the appropriate gyromagnetic ratio : μ → = − μ B ( g l L → + g s S → ) ℏ , {\displaystyle {\vec {\mu }}=-{\frac {\mu _{\text{B}}(g_{l}{\vec {L}}+g_{s}{\vec {S}})}{\hbar }},} where g l = 1 {\displaystyle g_{l}=1} , and g s ≈ 2.0023193 {\displaystyle g_{s}\approx 2.0023193} (the anomalous gyromagnetic ratio , deviating from 2 due to the effects of quantum electrodynamics ). In the case of the LS coupling , one can sum over all electrons in the atom: g J → = ⟨ ∑ i ( g l l → i + g s s → i ) ⟩ = ⟨ ( g l L → + g s S → ) ⟩ , {\displaystyle g{\vec {J}}={\Big \langle }\sum _{i}(g_{l}{\vec {l}}_{i}+g_{s}{\vec {s}}_{i}){\Big \rangle }={\big \langle }(g_{l}{\vec {L}}+g_{s}{\vec {S}}){\big \rangle },} where L → {\displaystyle {\vec {L}}} and S → {\displaystyle {\vec {S}}} are the total spin momentum and spin of the atom, and averaging is done over a state with a given value of the total angular momentum.
If the interaction term V M {\displaystyle V_{\text{M}}} is small (less than the fine structure ), it can be treated as a perturbation; this is the Zeeman effect proper. In the Paschen–Back effect, described below, V M {\displaystyle V_{\text{M}}} exceeds the LS coupling significantly (but is still small compared to H 0 {\displaystyle H_{0}} ). In ultra-strong magnetic fields, the magnetic-field interaction may exceed H 0 {\displaystyle H_{0}} , in which case the atom can no longer exist in its normal meaning, and one talks about Landau levels instead. There are intermediate cases that are more complex than these limit cases.
If the spin–orbit interaction dominates over the effect of the external magnetic field, L → {\displaystyle {\vec {L}}} and S → {\displaystyle {\vec {S}}} are not separately conserved, only the total angular momentum J → = L → + S → {\displaystyle {\vec {J}}={\vec {L}}+{\vec {S}}} is. The spin and orbital angular momentum vectors can be thought of as precessing about the (fixed) total angular momentum vector J → {\displaystyle {\vec {J}}} . The (time-)"averaged" spin vector is then the projection of the spin onto the direction of J → {\displaystyle {\vec {J}}} : S → avg = ( S → ⋅ J → ) J 2 J → , {\displaystyle {\vec {S}}_{\text{avg}}={\frac {({\vec {S}}\cdot {\vec {J}})}{J^{2}}}{\vec {J}},} and for the (time-)"averaged" orbital vector: L → avg = ( L → ⋅ J → ) J 2 J → . {\displaystyle {\vec {L}}_{\text{avg}}={\frac {({\vec {L}}\cdot {\vec {J}})}{J^{2}}}{\vec {J}}.}
Thus ⟨ V M ⟩ = μ B ℏ J → ( g L L → ⋅ J → J 2 + g S S → ⋅ J → J 2 ) ⋅ B → . {\displaystyle \langle V_{\text{M}}\rangle ={\frac {\mu _{\text{B}}}{\hbar }}{\vec {J}}\left(g_{L}{\frac {{\vec {L}}\cdot {\vec {J}}}{J^{2}}}+g_{S}{\frac {{\vec {S}}\cdot {\vec {J}}}{J^{2}}}\right)\cdot {\vec {B}}.} Using L → = J → − S → {\displaystyle {\vec {L}}={\vec {J}}-{\vec {S}}} and squaring both sides, we get S → ⋅ J → = 1 2 ( J 2 + S 2 − L 2 ) = ℏ 2 2 [ j ( j + 1 ) − l ( l + 1 ) + s ( s + 1 ) ] , {\displaystyle {\vec {S}}\cdot {\vec {J}}={\frac {1}{2}}(J^{2}+S^{2}-L^{2})={\frac {\hbar ^{2}}{2}}[j(j+1)-l(l+1)+s(s+1)],} and using S → = J → − L → {\displaystyle {\vec {S}}={\vec {J}}-{\vec {L}}} and squaring both sides, we get L → ⋅ J → = 1 2 ( J 2 − S 2 + L 2 ) = ℏ 2 2 [ j ( j + 1 ) + l ( l + 1 ) − s ( s + 1 ) ] . {\displaystyle {\vec {L}}\cdot {\vec {J}}={\frac {1}{2}}(J^{2}-S^{2}+L^{2})={\frac {\hbar ^{2}}{2}}[j(j+1)+l(l+1)-s(s+1)].}
Combining everything and taking J z = ℏ m j {\displaystyle J_{z}=\hbar m_{j}} , we obtain the magnetic potential energy of the atom in the applied external magnetic field: V M = μ B B m j [ g L j ( j + 1 ) + l ( l + 1 ) − s ( s + 1 ) 2 j ( j + 1 ) + g S j ( j + 1 ) − l ( l + 1 ) + s ( s + 1 ) 2 j ( j + 1 ) ] = μ B B m j [ 1 + ( g S − 1 ) j ( j + 1 ) − l ( l + 1 ) + s ( s + 1 ) 2 j ( j + 1 ) ] = μ B B m j g J , {\displaystyle {\begin{aligned}V_{\text{M}}&=\mu _{\text{B}}Bm_{j}\left[g_{L}{\frac {j(j+1)+l(l+1)-s(s+1)}{2j(j+1)}}+g_{S}{\frac {j(j+1)-l(l+1)+s(s+1)}{2j(j+1)}}\right]\\&=\mu _{\text{B}}Bm_{j}\left[1+(g_{S}-1){\frac {j(j+1)-l(l+1)+s(s+1)}{2j(j+1)}}\right]\\&=\mu _{\text{B}}Bm_{j}g_{J},\end{aligned}}} where the quantity in square brackets is the Landé g-factor g J {\displaystyle g_{J}} of the atom ( g L = 1 , {\displaystyle g_{L}=1,} g S ≈ 2 {\displaystyle g_{S}\approx 2} ), and m j {\displaystyle m_{j}} is the z component of the total angular momentum.
For a single electron above filled shells, with s = 1 / 2 {\displaystyle s=1/2} and j = l ± s {\displaystyle j=l\pm s} , the Landé g-factor can be simplified to g J = 1 ± g S − 1 2 l + 1 . {\displaystyle g_{J}=1\pm {\frac {g_{S}-1}{2l+1}}.}
Taking V M {\displaystyle V_{\text{M}}} to be the perturbation, the Zeeman correction to the energy is E Z ( 1 ) = ⟨ n l j m j | H Z ′ | n l j m j ⟩ = ⟨ V M ⟩ Ψ = μ B g J B ext m j . {\displaystyle E_{\text{Z}}^{(1)}=\langle nljm_{j}|H_{\text{Z}}^{'}|nljm_{j}\rangle =\langle V_{\text{M}}\rangle _{\Psi }=\mu _{\text{B}}g_{J}B_{\text{ext}}m_{j}.}
The Lyman-alpha transition in hydrogen in the presence of the spin–orbit interaction involves the transitions 2 2 P 1 / 2 → 1 2 S 1 / 2 {\displaystyle 2\,^{2}\!P_{1/2}\to 1\,^{2}\!S_{1/2}} and 2 2 P 3 / 2 → 1 2 S 1 / 2 . {\displaystyle 2\,^{2}\!P_{3/2}\to 1\,^{2}\!S_{1/2}.}
In the presence of an external magnetic field, the weak-field Zeeman effect splits the 1 2 S 1 / 2 {\displaystyle 1\,^{2}\!S_{1/2}} and 2 2 P 1 / 2 {\displaystyle 2\,^{2}\!P_{1/2}} levels into 2 states each ( m j = + 1 / 2 , − 1 / 2 {\displaystyle m_{j}=+1/2,-1/2} ) and the 2 2 P 3 / 2 {\displaystyle 2\,^{2}\!P_{3/2}} level into 4 states ( m j = + 3 / 2 , + 1 / 2 , − 1 / 2 , − 3 / 2 {\displaystyle m_{j}=+3/2,+1/2,-1/2,-3/2} ). The Landé g-factors for the three levels are g J = 2 for 1 2 S 1 / 2 ( j = 1 / 2 , l = 0 ) , g J = 2 / 3 for 2 2 P 1 / 2 ( j = 1 / 2 , l = 1 ) , g J = 4 / 3 for 2 2 P 3 / 2 ( j = 3 / 2 , l = 1 ) . {\displaystyle {\begin{aligned}g_{J}&=2&&{\text{for}}\ 1\,^{2}\!S_{1/2}\ (j=1/2,l=0),\\g_{J}&=2/3&&{\text{for}}\ 2\,^{2}\!P_{1/2}\ (j=1/2,l=1),\\g_{J}&=4/3&&{\text{for}}\ 2\,^{2}\!P_{3/2}\ (j=3/2,l=1).\end{aligned}}}
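These g-factors follow directly from the Landé formula derived above; a short numerical check (a sketch assuming g_L = 1 and g_S = 2, as in the text) is:

```python
def lande_g(j, l, s, gL=1.0, gS=2.0):
    """Landé g-factor g_J for quantum numbers j, l, s (approximation gS = 2)."""
    jj, ll, ss = j * (j + 1), l * (l + 1), s * (s + 1)
    return gL * (jj + ll - ss) / (2 * jj) + gS * (jj - ll + ss) / (2 * jj)

def weak_field_shift_eV(j, l, s, m_j, B_tesla):
    """First-order Zeeman shift E = mu_B * g_J * B * m_j, in eV."""
    mu_B_eV_per_T = 5.7883818060e-5   # Bohr magneton in eV/T
    return mu_B_eV_per_T * lande_g(j, l, s) * B_tesla * m_j

# Reproduce the three Lyman-alpha levels quoted above (s = 1/2):
print(lande_g(0.5, 0, 0.5))   # 1 2S_1/2 -> 2.0
print(lande_g(0.5, 1, 0.5))   # 2 2P_1/2 -> ~0.667
print(lande_g(1.5, 1, 0.5))   # 2 2P_3/2 -> ~1.333

# Example: shift of the 2 2P_3/2, m_j = +3/2 state in a 1 T field
print(weak_field_shift_eV(1.5, 1, 0.5, 1.5, 1.0))  # ~1.16e-4 eV
```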
Note in particular that the size of the energy splitting is different for the different orbitals because the g J values are different. Fine-structure splitting occurs even in the absence of a magnetic field, as it is due to spin–orbit coupling. Depicted on the right is the additional Zeeman splitting, which occurs in the presence of magnetic fields.
The Paschen–Back effect is the splitting of atomic energy levels in the presence of a strong magnetic field. This occurs when an external magnetic field is sufficiently strong to disrupt the coupling between orbital ( L → {\displaystyle {\vec {L}}} ) and spin ( S → {\displaystyle {\vec {S}}} ) angular momenta. This effect is the strong-field limit of the Zeeman effect. When s = 0 {\displaystyle s=0} , the two effects are equivalent. The effect was named after the German physicists Friedrich Paschen and Ernst E. A. Back . [ 7 ]
When the magnetic-field perturbation significantly exceeds the spin–orbit interaction, one can safely assume [ H 0 , S ] = 0 {\displaystyle [H_{0},S]=0} . This allows the expectation values of L z {\displaystyle L_{z}} and S z {\displaystyle S_{z}} to be easily evaluated for a state | ψ ⟩ {\displaystyle |\psi \rangle } . The energies are simply
The above may be read as implying that the LS-coupling is completely broken by the external field. However, m l {\displaystyle m_{l}} and m s {\displaystyle m_{s}} are still "good" quantum numbers. Together with the selection rules for an electric dipole transition , i.e., Δ s = 0 , Δ m s = 0 , Δ l = ± 1 , Δ m l = 0 , ± 1 {\displaystyle \Delta s=0,\Delta m_{s}=0,\Delta l=\pm 1,\Delta m_{l}=0,\pm 1} this makes it possible to ignore the spin degree of freedom altogether. As a result, only three spectral lines will be visible, corresponding to the Δ m l = 0 , ± 1 {\displaystyle \Delta m_{l}=0,\pm 1} selection rule. The splitting Δ E = B μ B Δ m l {\displaystyle \Delta E=B\mu _{\rm {B}}\Delta m_{l}} is independent of the unperturbed energies and electronic configurations of the levels being considered.
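As a small numerical illustration of this splitting (the field value below is an arbitrary example), the shift of each of the three lines is ΔE = μ_B B Δm_l:

```python
# Paschen-Back limit: line shift Delta_E = mu_B * B * Delta_m_l
# (illustrative field value only; fine-structure corrections are ignored,
# as in the example discussed in the text)
mu_B = 9.2740100783e-24      # Bohr magneton, J/T
h = 6.62607015e-34           # Planck constant, J*s

B = 5.0                      # applied field in tesla (example value)
for delta_ml in (-1, 0, +1):
    dE = mu_B * B * delta_ml              # energy shift in joules
    print(delta_ml, dE / h / 1e9, "GHz")  # shift expressed as a frequency (~70 GHz per unit Delta_m_l)
```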
More precisely, if s ≠ 0 {\displaystyle s\neq 0} , each of these three components is actually a group of several transitions due to the residual spin–orbit coupling and relativistic corrections (which are of the same order, known as 'fine structure'). The first-order perturbation theory with these corrections yields the following formula for the hydrogen atom in the Paschen–Back limit: [ 8 ]
In this example, the fine-structure corrections are ignored.
[Table of Lyman-alpha transitions in the Paschen–Back limit, listing the initial ( n = 2 , l = 1 {\displaystyle n=2,l=1} ) states ∣ m l , m s ⟩ {\displaystyle \mid m_{l},m_{s}\rangle } and the final ( n = 1 , l = 0 {\displaystyle n=1,l=0} ) states ∣ m l , m s ⟩ {\displaystyle \mid m_{l},m_{s}\rangle } .]
In the magnetic dipole approximation, the Hamiltonian which includes both the hyperfine and Zeeman interactions is [ citation needed ]
where A {\displaystyle A} is the hyperfine splitting at zero applied magnetic field, μ B {\displaystyle \mu _{\rm {B}}} and μ N {\displaystyle \mu _{\rm {N}}} are the Bohr magneton and nuclear magneton , respectively (note that the last term in the expression above describes the nuclear Zeeman effect), J → {\displaystyle {\vec {J}}} and I → {\displaystyle {\vec {I}}} are the electron and nuclear angular momentum operators and g J {\displaystyle g_{J}} is the Landé g-factor : g J = g L J ( J + 1 ) + L ( L + 1 ) − S ( S + 1 ) 2 J ( J + 1 ) + g S J ( J + 1 ) − L ( L + 1 ) + S ( S + 1 ) 2 J ( J + 1 ) . {\displaystyle g_{J}=g_{L}{\frac {J(J+1)+L(L+1)-S(S+1)}{2J(J+1)}}+g_{S}{\frac {J(J+1)-L(L+1)+S(S+1)}{2J(J+1)}}.}
In the case of weak magnetic fields, the Zeeman interaction can be treated as a perturbation to the | F , m f ⟩ {\displaystyle |F,m_{f}\rangle } basis. In the high field regime, the magnetic field becomes so strong that the Zeeman effect will dominate, and one must use a more complete basis of | I , J , m I , m J ⟩ {\displaystyle |I,J,m_{I},m_{J}\rangle } or just | m I , m J ⟩ {\displaystyle |m_{I},m_{J}\rangle } since I {\displaystyle I} and J {\displaystyle J} will be constant within a given level.
To get the complete picture, including intermediate field strengths, we must consider eigenstates which are superpositions of the | F , m F ⟩ {\displaystyle |F,m_{F}\rangle } and | m I , m J ⟩ {\displaystyle |m_{I},m_{J}\rangle } basis states. For J = 1 / 2 {\displaystyle J=1/2} , the Hamiltonian can be solved analytically, resulting in the Breit–Rabi formula (named after Gregory Breit and Isidor Isaac Rabi ). Notably, the electric quadrupole interaction is zero for L = 0 {\displaystyle L=0} ( J = 1 / 2 {\displaystyle J=1/2} ), so this formula is fairly accurate.
We now utilize quantum mechanical ladder operators , which are defined for a general angular momentum operator L {\displaystyle L} as
These ladder operators have the property
as long as m L {\displaystyle m_{L}} lies in the range − L , … , L {\displaystyle -L,\dots ,L} (otherwise, they return zero). Using the ladder operators J ± {\displaystyle J_{\pm }} and I ± {\displaystyle I_{\pm }} we can rewrite the Hamiltonian as
We can now see that at all times, the total angular momentum projection m F = m J + m I {\displaystyle m_{F}=m_{J}+m_{I}} will be conserved. This is because both J z {\displaystyle J_{z}} and I z {\displaystyle I_{z}} leave states with definite m J {\displaystyle m_{J}} and m I {\displaystyle m_{I}} unchanged, while J + I − {\displaystyle J_{+}I_{-}} and J − I + {\displaystyle J_{-}I_{+}} either increase m J {\displaystyle m_{J}} and decrease m I {\displaystyle m_{I}} or vice versa, so the sum is always unaffected. Furthermore, since J = 1 / 2 {\displaystyle J=1/2} there are only two possible values of m J {\displaystyle m_{J}} which are ± 1 / 2 {\displaystyle \pm 1/2} . Therefore, for every value of m F {\displaystyle m_{F}} there are only two possible states, and we can define them as the basis:
This pair of states is a two-level quantum mechanical system . Now we can determine the matrix elements of the Hamiltonian:
Solving for the eigenvalues of this matrix – as can be done by hand (see two-level quantum mechanical system ), or more easily, with a computer algebra system – we arrive at the energy shifts:
where Δ W {\displaystyle \Delta W} is the splitting (in units of Hz) between two hyperfine sublevels in the absence of magnetic field B {\displaystyle B} , x {\displaystyle x} is referred to as the 'field strength parameter' (Note: for m F = ± ( I + 1 / 2 ) {\displaystyle m_{F}=\pm (I+1/2)} the expression under the square root is an exact square, and so the last term should be replaced by + h Δ W 2 ( 1 ± x ) {\displaystyle +{\frac {h\Delta W}{2}}(1\pm x)} ). This equation is known as the Breit–Rabi formula and is useful for systems with one valence electron in an s {\displaystyle s} ( J = 1 / 2 {\displaystyle J=1/2} ) level. [ 9 ] [ 10 ]
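Below is a minimal numerical sketch of the Breit–Rabi energies for a J = 1/2 level. It assumes one common form of the formula, neglects the small nuclear Zeeman term, and uses hydrogen ground-state numbers (I = 1/2, ΔW ≈ 1.420 GHz, g_J ≈ 2.002) purely for illustration; sign and normalization conventions vary between references.

```python
import math

h    = 6.62607015e-34        # Planck constant, J*s
mu_B = 9.2740100783e-24      # Bohr magneton, J/T

def breit_rabi_energy(B, I, m_F, F_sign, dW_hz, g_J):
    """Energy (J) of the |F = I +/- 1/2, m_F> sublevel in a field B (tesla).

    One common form of the Breit-Rabi formula, with the small nuclear
    Zeeman term neglected; F_sign is +1 for F = I + 1/2 and -1 for
    F = I - 1/2.  Note the caveat in the text for the stretched states
    m_F = +/-(I + 1/2) when the field-strength parameter x exceeds 1.
    """
    dW = h * dW_hz                        # zero-field hyperfine splitting, J
    x = g_J * mu_B * B / dW               # field-strength parameter
    root = math.sqrt(1 + 4 * m_F * x / (2 * I + 1) + x * x)
    return -dW / (2 * (2 * I + 1)) + F_sign * 0.5 * dW * root

# Hydrogen 1s ground state (illustrative constants), B = 0.01 T:
for F_sign, m_F in [(+1, 1), (+1, 0), (+1, -1), (-1, 0)]:
    E = breit_rabi_energy(0.01, 0.5, m_F, F_sign, 1.4204057e9, 2.00232)
    print(f"F = {0.5 + 0.5 * F_sign:.0f}, m_F = {m_F:+d}: {E / h / 1e6:9.2f} MHz")
```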
Note that the index F {\displaystyle F} in Δ E F = I ± 1 / 2 {\displaystyle \Delta E_{F=I\pm 1/2}} should be considered not as the total angular momentum of the atom but as the asymptotic total angular momentum . It is equal to the total angular momentum only if B = 0 {\displaystyle B=0} ; otherwise, eigenvectors corresponding to different eigenvalues of the Hamiltonian are superpositions of states with different F {\displaystyle F} but equal m F {\displaystyle m_{F}} (the only exceptions are | F = I + 1 / 2 , m F = ± F ⟩ {\displaystyle |F=I+1/2,m_{F}=\pm F\rangle } ).
George Ellery Hale was the first to notice the Zeeman effect in the solar spectra, indicating the existence of strong magnetic fields in sunspots. Such fields can be quite high, on the order of 0.1 tesla or higher. Today, the Zeeman effect is used to produce magnetograms showing the variation of magnetic field on the Sun, [ 11 ] and to analyze the magnetic field geometries in other stars. [ 12 ]
The Zeeman effect is utilized in many laser cooling applications such as a magneto-optical trap and the Zeeman slower . [ 13 ]
Zeeman-energy mediated coupling of spin and orbital motions
is used in spintronics for controlling electron spins in quantum dots through electric dipole spin resonance . [ 14 ]
Old high-precision frequency standards, i.e. hyperfine structure transition-based atomic clocks, may require periodic fine-tuning due to exposure to magnetic fields. This is carried out by measuring the Zeeman effect on specific hyperfine structure transition levels of the source element (cesium) and applying a uniformly precise, low-strength magnetic field to said source, in a process known as degaussing . [ 15 ]
The Zeeman effect may also be utilized to improve accuracy in atomic absorption spectroscopy . [ citation needed ]
A theory about the magnetic sense of birds assumes that a protein in the retina is changed due to the Zeeman effect. [ 16 ]
The nuclear Zeeman effect is important in such applications as nuclear magnetic resonance spectroscopy, magnetic resonance imaging (MRI), and Mössbauer spectroscopy . [ citation needed ]
The electron spin resonance spectroscopy is based on the Zeeman effect. [ citation needed ]
The Zeeman effect can be demonstrated by placing a sodium vapor source in a powerful electromagnet and viewing a sodium vapor lamp through the magnet opening (see diagram). With the magnet off, the sodium vapor source blocks the lamp light; when the magnet is turned on, the lamp light becomes visible through the vapor.
The sodium vapor can be created by sealing sodium metal in an evacuated glass tube and heating it while the tube is in the magnet. [ 17 ]
Alternatively, salt ( sodium chloride ) on a ceramic stick can be placed in the flame of Bunsen burner as the sodium vapor source. When the magnetic field is energized, the lamp image will be brighter. [ 18 ] However, the magnetic field also affects the flame, making the observation depend upon more than just the Zeeman effect. [ 17 ] These issues also plagued Zeeman's original work; he devoted considerable effort to ensure his observations were truly an effect of magnetism on light emission. [ 19 ]
When salt is added to the Bunsen burner, it dissociates to give sodium and chloride . The sodium atoms are excited by photons from the sodium vapour lamp, with electrons promoted from the 3s to 3p states, absorbing light in the process. The sodium vapour lamp emits light at 589 nm, which has precisely the energy needed to excite an electron of a sodium atom; vapour of another element, such as chlorine, would not form a shadow. [ 20 ] [ failed verification ] When a magnetic field is applied, the Zeeman effect splits the sodium spectral line into several components, so the energy difference between the 3s and 3p atomic orbitals changes. Because the sodium vapour lamp no longer delivers precisely the right frequency, the light is not absorbed and passes through, and the shadow dims. As the magnetic field strength is increased, the shift in the spectral lines increases and lamp light is transmitted. [ citation needed ]
A PASER (an acronym from Particle Acceleration by Stimulated Emission of Radiation) is a device that accelerates a coherent beam of electrons . This process was demonstrated for the first time in 2006 at the Brookhaven National Lab by a team of physicists from the Technion-Israel Institute of Technology. [ 1 ]
Relativistic electrons from a conventional particle accelerator pass through a vibrationally excited carbon dioxide medium in which the electrons undergo millions of collisions with excited carbon dioxide molecules and are accelerated in a coherent fashion. No heat is generated in this quantum energy transfer, thus all the energy transferred to the electrons is used in accelerating the electrons. The electron beam created from this process may result in electrons that are highly collimated in velocity in comparison to other acceleration methods.
The vibrationally excited carbon dioxide is the same medium used in a carbon dioxide laser . This medium resonantly amplifies light with a wavelength near 10.6 or 9.4 micrometers , corresponding to a frequency of approximately 30 terahertz . In order to be accelerated, incident electrons must be microbunched at this frequency. An appropriately bunched electron beam strikes excited carbon dioxide molecules resonantly in order to efficiently stimulate energy emission. | https://en.wikipedia.org/wiki/Paser |
In computer security , pass the hash is a hacking technique that allows an attacker to authenticate to a remote server or service by using the underlying NTLM or LanMan hash of a user's password, instead of requiring the associated plaintext password as is normally the case. It replaces the need for stealing the plaintext password to gain access with stealing the hash.
The attack exploits an implementation weakness in the authentication protocol, where password hashes remain static from session to session until the password is next changed.
This technique can be performed against any server or service accepting LM or NTLM authentication, whether it runs on a machine with Windows, Unix, or any other operating system.
On systems or services using NTLM authentication, users' passwords are never sent in cleartext over the wire. Instead, they are provided to the requesting system, like a domain controller , as a hash in a response to a challenge–response authentication scheme. [ 1 ]
Native Windows applications ask users for the cleartext password, then call APIs like LsaLogonUser [ 2 ] that convert that password to one or two hash values (the LM or NT hashes) and then send that to the remote server during NTLM authentication. [ Notes 1 ] [ 3 ]
If an attacker has the hashes of a user's password, they do not need the cleartext password; they can simply use the hash to authenticate with a server and impersonate that user. [ 4 ] [ 5 ] [ 6 ] In other words, from an attacker's perspective, hashes are functionally equivalent to the original passwords that they were generated from.
The pass the hash technique was originally published by Paul Ashton in 1997 [ 6 ] and consisted of a modified Samba SMB client that accepted user password hashes instead of cleartext passwords. Later versions of Samba and other third-party implementations of the SMB and NTLM protocols also included the functionality.
This implementation of the technique was based on an SMB stack created by a third party (e.g., Samba and others), and for this reason suffered from a series of limitations from a hacker's perspective, including limited or partial functionality: the SMB protocol has continued to evolve over the years, which means that third parties creating their own implementation of the SMB protocol need to implement changes and additions to the protocol after they are introduced by newer versions of Windows and SMB (historically by reverse engineering , which is very complex and time-consuming). This means that even after performing NTLM authentication successfully using the pass the hash technique, tools like Samba's SMB client might not have implemented the functionality the attacker might want to use. This meant that it was difficult to attack Windows programs that use DCOM or RPC .
Also, because attackers were restricted to using third-party clients when carrying out attacks, it was not possible to use built-in Windows applications, like Net.exe or the Active Directory Users and Computers tool amongst others, because they asked the attacker or user to enter the cleartext password to authenticate, and not the corresponding password hash value.
In 2008, Hernan Ochoa published a tool called the "Pass-the-Hash Toolkit" [ 7 ] that allowed 'pass the hash' to be performed natively on Windows. It allowed the user name, domain name, and password hashes cached in memory by the Local Security Authority to be changed at runtime after a user was authenticated — this made it possible to 'pass the hash' using standard Windows applications, and thereby to undermine fundamental authentication mechanisms built into the operating system.
The tool also introduced a new technique which allowed dumping password hashes cached in the memory of the lsass.exe process (not in persistent storage on disk), which quickly became widely used by penetration testers (and attackers). This hash harvesting technique is more advanced than previously used techniques (e.g. dumping the local Security Accounts Manager database (SAM) using pwdump and similar tools), mainly because hash values stored in memory could include credentials of domain users (and domain administrators) that logged into the machine. For example, the hashes of authenticated domain users that are not stored persistently in the local SAM can also be dumped. This makes it possible for a penetration tester (or attacker) to compromise a whole Windows domain after compromising a single machine that was a member of that domain. Furthermore, the attack can be implemented instantaneously and without any requirement for expensive computing resources to carry out a brute force attack.
This toolkit has subsequently been superseded by "Windows Credential Editor", which extends the original tool's functionality and operating system support. [ 8 ] [ 9 ] Some antivirus vendors classify the toolkit as malware. [ 10 ] [ 11 ]
Before an attacker can carry out a pass-the-hash attack, they must obtain the password hashes of the target user accounts. To this end, penetration testers and attackers can harvest password hashes using a number of different methods:
Any system using LM or NTLM authentication in combination with any communication protocol (SMB, FTP, RPC, HTTP etc.) is at risk from this attack. [ 1 ] The exploit is very difficult to defend against, due to possible exploits in Windows and applications running on Windows that can be used by an attacker to elevate their privileges and then carry out the hash harvesting that facilitates the attack. Furthermore, it may only require one machine in a Windows domain to not be configured correctly or be missing a security patch for an attacker to find a way in. A wide range of penetration testing tools are furthermore available to automate the process of discovering a weakness on a machine.
There is no single defense against the technique, thus standard defense in depth practices apply [ 12 ] – for example use of firewalls , intrusion prevention systems , 802.1x authentication , IPsec , antivirus software , reducing the number of people with elevated privileges, [ 13 ] pro-active security patching [ 14 ] etc. Preventing Windows from storing cached credentials may limit attackers to obtaining hashes from memory, which usually means that the target account must be logged into the machine when the attack is executed. [ 15 ] Allowing domain administrators to log into systems that may be compromised or untrusted will create a scenario where the administrators' hashes become the targets of attackers; limiting domain administrator logons to trusted domain controllers can therefore limit the opportunities for an attacker. [ 12 ] The principle of least privilege suggests that a least user access (LUA) approach should be taken, in that users should not use accounts with more privileges than necessary to complete the task at hand. [ 12 ] Configuring systems not to use LM or NTLM can also strengthen security, but newer exploits are able to forward Kerberos tickets in a similar way. [ 16 ] Limiting the scope of debug privileges on system may frustrate some attacks that inject code or steal hashes from the memory of sensitive processes. [ 12 ]
Restricted Admin Mode is a new Windows operating system feature introduced in 2014 via security bulletin 2871997, which is designed to reduce the effectiveness of the attack. [ 17 ] | https://en.wikipedia.org/wiki/Pass_the_hash |
The Passano Foundation , established in 1945, provides an annual award to a research scientist whose work – done in the United States – is thought to have immediate practical benefits. Many Passano laureates have subsequently won the Nobel Prize .
| https://en.wikipedia.org/wiki/Passano_Foundation |
Passavant's ridge [ 1 ] is a mucous elevation situated behind the floor of the naso-pharynx .
It is also known as Passavant's pad or palatopharyngeal ridge. The prominence of mucous tissue is formed by the contraction of superior constrictor during swallowing. Palatopharyngeus muscle originates from the upper surface of the palatal aponeurosis by anterior and posterior fascicle , which are separated by the insertion of levator veli palatini . Both fasciculi join laterally to form a single muscle that passes downward and backward under cover of the palatopharyngeal arch. In the pharynx , it joins with the salpingopharyngeus muscles and is inserted. A few fibers of palatopharyngeus muscle sweep backward under cover of the Passavant's ridge and form a U-shaped sling of palatopharyngeal sphincter. When the soft palate is elevated it comes in contact with ridge, the two together closing pharyngeal isthmus between nasopharynx and oropharynx . [ 2 ] | https://en.wikipedia.org/wiki/Passavant's_ridge |
In tissue and organ transplantation , the passenger leukocyte theory is the proposition that leucocytes within a transplanted allograft sensitize the recipient's alloreactive T-lymphocytes , causing transplant rejection. [ 1 ]
The concept was first proposed by George Davis Snell [ 2 ] and the term coined in 1968 when Elkins and Guttmann showed that leukocytes present in a donor graft initiate an immune response in the recipient of a transplant. [ 3 ]
| https://en.wikipedia.org/wiki/Passenger_leukocyte |
Passenger load factor , or load factor , measures the capacity utilization of public transport services like airlines , passenger railways , and intercity bus services . It is generally used to assess how efficiently a transport provider fills seats and generates fare revenue .
According to the International Air Transport Association , the worldwide load factor for the passenger airline industry during 2015 was 79.7%. [ 1 ]
Passenger load factor is an important parameter for the assessment of the performance of any transport system. Almost all transport systems have high fixed costs, and these costs can only be recovered through selling tickets. [ 2 ] Airlines often calculate the load factor at which the airline will break even; this is called the break-even load factor. [ 3 ] At a load factor lower than the break-even level, the airline loses money; above it, the airline records a profit.
The environmental performance of any transport mode improves as the load factor increases. The weight of passengers is normally a small part of the total weight of any transport vehicle, so increasing the number of passengers changes the emissions and fuel consumption to only a small degree. As a vehicle is more highly loaded, the fuel consumed per passenger drops, and fully loaded transport vehicles can be very fuel efficient.
Very heavy loading of a transport vehicle is described as a crush load . Crush loading is a very high level of loading where passengers are crushed against one another. Commenting in May 2017 on the United Express Flight 3411 incident , in which a passenger was forcibly removed, investor Warren Buffett said that passenger demand for cheap flights was resulting in high load factors, resulting in "a fair amount of discomfort." [ 4 ]
Specifically, the load factor is the dimensionless ratio of passenger-kilometres travelled to seat-kilometres available. For example, say that on a particular day an airline makes 5 scheduled flights, each of which travels 200 kilometers and has 100 seats, and sells 60 tickets for each flight. To calculate its load factor:
( 5 flights ) ( 200 km/flight ) ( 60 passengers ) ( 5 flights ) ( 200 km/flight ) ( 100 seats ) = 60 , 000 passenger ⋅ km 100 , 000 seat ⋅ km = 0.6 = 60 % {\displaystyle {\frac {(5\ {\text{flights}})(200\ {\text{km/flight}})(60\ {\text{passengers}})}{(5\ {\text{flights}})(200\ {\text{km/flight}})(100\ {\text{seats}})}}={\frac {60,000\ {\text{passenger }}\cdot {\text{ km}}}{100,000\ {\text{seat }}\cdot {\text{ km}}}}=0.6=60\%}
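The same calculation can be written as a short script (a sketch mirroring the example numbers above):

```python
def load_factor(flights, km_per_flight, passengers_per_flight, seats_per_flight):
    """Passenger load factor = passenger-km travelled / seat-km available."""
    passenger_km = flights * km_per_flight * passengers_per_flight
    seat_km = flights * km_per_flight * seats_per_flight
    return passenger_km / seat_km

print(load_factor(5, 200, 60, 100))  # 0.6, i.e. 60%
```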
Thus, during that day the airline flew 60,000 passenger-kilometres and 100,000 seat-kilometres, for an overall load factor of 60% (0.6). | https://en.wikipedia.org/wiki/Passenger_load_factor |
The Passerini reaction is a chemical reaction involving an isocyanide , an aldehyde (or ketone ), and a carboxylic acid to form a α- acyloxy amide . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] This addition reaction is one of the oldest isocyanide -based multicomponent reactions and was first described in 1921 by Mario Passerini in Florence, Italy. [ 6 ] [ 7 ] It is typically carried out in aprotic solvents but can alternatively be performed in water, ionic liquids, or deep eutectic solvents . [ 7 ] It is a third order reaction; first order in each of the reactants. The Passerini reaction is often used in combinatorial and medicinal chemistry with recent utility in green chemistry and polymer chemistry . [ 6 ] [ 8 ] [ 9 ] As isocyanides exhibit high functional group tolerance, chemoselectivity , regioselectivity , and stereoselectivity , the Passerini reaction has a wide range of synthetic applications. [ 6 ] [ 10 ] [ 11 ] [ 12 ]
The Passerini reaction has been hypothesized to occur through two mechanistic pathways. [ 10 ] [ 7 ] [ 11 ] The reaction pathways are dependent on the solvent used.
A concerted mechanism , seen in S N 2 and Diels−Alder reactions, is theorized to occur when the Passerini reagents are present at high concentration in aprotic solvents. [ 10 ]
This mechanism involves a trimolecular reaction between the isocyanide, carboxylic acid, and carbonyl in a sequence of nucleophilic additions . The reaction proceeds first through an imidate intermediate and then undergoes Mumm rearrangement to afford the Passerini product. [ 13 ] [ 14 ]
As the Mumm rearrangement requires a second carboxylic acid molecule, this mechanism classifies the Passerini reaction as an organocatalytic reaction. [ 14 ] [ 15 ]
In polar solvents, such as methanol or water , the carbonyl is protonated before nucleophilic addition of the isocyanide, affording a nitrilium ion intermediate. This is followed by the addition of a carboxylate, acyl group transfer and proton transfer respectively to give the desired Passerini product. [ 11 ] [ 7 ]
Molecular weights of polymers synthesized through the Passerini reaction can be controlled through stoichiometric means. [ 16 ] For example, polymer chain length and weight can be adjusted through isocyanide stoichiometry, and polymer geometry can be influenced through the choice of starting reagents. [ 16 ] [ 17 ] To facilitate the Passerini reaction between bulky, sterically hindered reagents, a vortex fluidic device can be used to induce high-shear conditions. These conditions emulate the effects of high temperature and pressure, allowing the Passerini reaction to proceed fairly quickly. [ 18 ] The Passerini reaction can also exhibit enantioselectivity. Addition of tert-butyl isocyanide to a wide variety of aldehydes (aromatic, heteroaromatic, olefinic, acetylenic, aliphatic) is achieved using a catalytic system of silicon tetrachloride and a chiral bisphosphoramide, which provides good yields and good enantioselectivities. [ 19 ] For other types of isocyanides, the rate of addition of the isocyanide to the reaction mixture determines whether good yields and high selectivities are obtained. [ 19 ]
Apart from forming α- acyloxy amide products, the Passerini reaction can be used to form heterocycles , polymers , amino acids , and medicinal products.
The original Passerini reaction produces acyclic depsipeptides which are labile in physiological conditions. To increase product stability for medicinal use, post-Passerini cyclization reactions have been used to afford heterocycles such as β-lactams , butenolides , and isocoumarins . [ 16 ] To enable these cyclizations, reagents are pre-functionalized with reactive groups (ex. halogens, azides, etc.) and used in tandem with other reactions (ex. Passerini- Knoevenagel , Passerini- Dieckmann ) to afford heterocyclic products. [ 16 ] Compounds like three-membered oxirane and aziridine derivatives, four-membered β-lactams , and five-membered tetrasubstituted 4,5-dihydropyrazoles have been produced through this reaction. [ 12 ]
This reaction has also been used for polymerization, monomer formation, and post-polymerization modification. [ 20 ] [ 21 ] [ 22 ] [ 17 ] [ 23 ] The Passerini reaction has also been used to form sequence-defined polymers . [ 24 ] Bifunctional substrates can be used to undergo post-polymerization modification or serve as precursors for polymerization . [ 10 ] [ 11 ] [ 8 ] As this reaction has high functional group tolerance, the polymers created using this reaction are widely diverse with tuneable properties . [ 20 ] Macromolecules that have been produced with this reaction include macroamides, macrocyclic depsipeptides, three-component dendrimers and three-armed star branched mesogen core molecules. [ 12 ]
The Passerini reaction has been employed for the formation of structures like α-amino acids , α-hydroxy-β-amino acids, α-ketoamides, β-ketoamides, α- hydroxyketones and α-aminoxyamides. [ 12 ] The Passerini reaction has been used to synthesize α-acyloxy carboxamides that have demonstrated activity as anti-cancer medications, along with functionalized [C60]- fullerenes used in medicinal and plant chemistry. [ 12 ] [ 25 ] This reaction has also been used as a synthetic step in the total synthesis of commercially available pharmaceuticals such as telaprevir (VX-950), an antiviral sold by Vertex Pharmaceuticals and Johnson & Johnson. [ 12 ]
Passing–Bablok regression is a method from robust statistics for nonparametric regression analysis suitable for method comparison studies introduced by Wolfgang Bablok and Heinrich Passing in 1983. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] The procedure is adapted to fit linear errors-in-variables models . It is symmetrical and is robust in the presence of one or few outliers.
The Passing-Bablok procedure fits the parameters a {\displaystyle a} and b {\displaystyle b} of the linear equation y = a + b ∗ x {\displaystyle y=a+b*x} using non-parametric methods. The coefficient b {\displaystyle b} is calculated by taking the shifted median of all slopes of the straight lines between any two points, disregarding lines for which the points are identical or b = − 1 {\displaystyle b=-1} . The median is shifted based on the number of slopes where b < − 1 {\displaystyle b<-1} to create an approximately consistent estimator. The estimator is therefore close in spirit to the Theil-Sen estimator . The parameter a {\displaystyle a} is calculated by a = median ( y i − b x i ) {\displaystyle a=\operatorname {median} ({y_{i}-bx_{i})}} .
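A simplified sketch of this estimator (the offset-median slope and the median intercept) is given below; it omits the tie handling and the confidence-interval calculation of the original 1983 procedure, so it is illustrative rather than a reference implementation.

```python
import itertools
import numpy as np

def passing_bablok(x, y):
    """Simplified Passing-Bablok fit: slope b from the shifted median of
    pairwise slopes, intercept a from the median of y - b*x.
    Sketch only -- ties and the exact confidence-interval construction
    from the original paper are not reproduced here."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    for (xi, yi), (xj, yj) in itertools.combinations(zip(x, y), 2):
        if xi == xj and yi == yj:
            continue                       # identical points: no slope defined
        s = np.inf if xi == xj else (yj - yi) / (xj - xi)
        if s != -1:                        # slopes of exactly -1 are discarded
            slopes.append(s)
    slopes = np.sort(slopes)
    n = len(slopes)
    k = int(np.sum(slopes < -1))           # offset makes the estimator approximately consistent
    if n % 2:
        b = slopes[(n + 1) // 2 + k - 1]
    else:
        b = 0.5 * (slopes[n // 2 + k - 1] + slopes[n // 2 + k])
    a = np.median(y - b * x)
    return a, b

# Example: two methods that agree up to noise (intercept ~0, slope ~1)
x = [1, 2, 3, 4, 5, 6, 7]
y = [1.1, 1.9, 3.2, 4.1, 4.8, 6.2, 6.9]
print(passing_bablok(x, y))
```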
In 1986, Passing and Bablok extended their method introducing an equivariant extension for method transformation which also works when the slope b {\displaystyle b} is far from 1. [ 6 ] It may be considered a robust version of reduced major axis regression . The slope estimator b {\displaystyle b} is the median of the absolute values of all pairwise slopes.
The original algorithm is rather slow for larger data sets as its computational complexity is O ( n 2 ) {\displaystyle O(n^{2})} . However, fast quasilinear algorithms of complexity O ( n ln ⁡ n ) {\displaystyle O(n\ln n)} have been devised. [ 4 ] [ 5 ]
Passing and Bablok define a method for calculating a 95% confidence interval (CI) for both a {\displaystyle a} and b {\displaystyle b} in their original paper, [ 1 ] which was later refined, [ 4 ] though bootstrapping the parameters is the preferred method for in vitro diagnostics (IVD) when using patient samples. [ 7 ] The Passing-Bablok procedure is valid only when a linear relationship exists between x {\displaystyle x} and y {\displaystyle y} , which can be assessed by a CUSUM test. Further assumptions include the error ratio to be proportional to the slope b {\displaystyle b} and the similarity of the error distributions of the x {\displaystyle x} and y {\displaystyle y} distributions. [ 1 ] The results are interpreted as follows. If 0 is in the CI of a {\displaystyle a} , and 1 is in the CI of b {\displaystyle b} , the two methods are comparable within the investigated concentration range. If 0 is not in the CI of a {\displaystyle a} there is a systematic difference and if 1 is not in the CI of b {\displaystyle b} then there is a proportional difference between the two methods.
However, the use of Passing–Bablok regression in method comparison studies has been criticized because it ignores random differences between methods. [ 8 ] | https://en.wikipedia.org/wiki/Passing–Bablok_regression |
Passive autocatalytic recombiner ( PAR ) is a device that removes hydrogen from the containment of a nuclear power plant during an accident . Its purpose is to prevent hydrogen explosions . Recombiners come into action spontaneously as soon as the hydrogen concentration increases. They are passive devices because their operation does not require external energy. [ 1 ]
Hydrogen may be generated in a nuclear accident if the reactor fuel overheats and zirconium cladding of the fuel rods reacts chemically with steam. If the hydrogen is released from the reactor to the containment, it may get mixed with air and form a flammable or even explosive mixture. A hydrogen explosion could break the containment and cause radioactive materials to be released to the environment. Recombiners aim at removing hydrogen and thereby preventing explosions. [ 2 ]
Inside a recombiner there are plates or pellets that are coated with platinum or palladium catalyst . On the surface of the catalyst, hydrogen and oxygen molecules react chemically at low temperature and low hydrogen concentration. The reaction generates steam. The reaction starts spontaneously when the hydrogen concentration reaches 1–2 percent. Burning of hydrogen in air requires at least 4 percent hydrogen concentration, and even higher for an explosion. Therefore, a recombiner is able to remove hydrogen from the containment before a flammable concentration is reached. [ 1 ]
A recombiner is a box that is open from the bottom and from the top. The catalyst is located at the lower part of the box. The reaction of hydrogen and oxygen on the catalyst surface generates heat, and temperature in the recombiner reaches hundreds of degrees Celsius. Hot steam is lighter than the air in the containment, so buoyancy is caused inside the recombiner, much like in a chimney. This causes a strong airflow through the recombiner, feeding hydrogen and oxygen from the containment to the device. [ 1 ]
Hundreds of kilograms of hydrogen may be generated in a few hours during a severe reactor accident. [ 1 ] The most efficient recombiner made by Framatome (formerly Areva) removes slightly over five kilograms of hydrogen per hour when the hydrogen concentration is four percent. [ 3 ] Therefore, many recombiners are needed. For example, the containment of Olkiluoto 3 EPR in Finland has 50 recombiners. [ 2 ]
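As a rough order-of-magnitude check using only the figures quoted above (and ignoring the fact that the removal rate falls as the hydrogen concentration drops):

```python
# Illustrative only: the real removal rate depends on the evolving hydrogen
# concentration, so this is an order-of-magnitude estimate.
hydrogen_to_remove_kg = 500      # "hundreds of kilograms" of hydrogen
rate_per_unit_kg_per_h = 5.0     # ~5 kg/h per recombiner at 4% concentration
number_of_units = 50             # e.g. the Olkiluoto 3 containment

total_rate = rate_per_unit_kg_per_h * number_of_units   # kg/h
print(hydrogen_to_remove_kg / total_rate, "hours")      # = 2.0 hours
```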
Manufacturers of passive autocatalytic recombiners include Framatome, [ 3 ] SNC-Lavalin (formerly Atomic Energy of Canada Ltd, AECL), [ 4 ] and German Siempelkamp-NIS. [ 5 ] | https://en.wikipedia.org/wiki/Passive_autocatalytic_recombiner |
In complexation catalysis, the term passive binding refers to any stabilizing interaction that is equally strong at the transition state level and in the reactant-catalyst complex.
Having the same effect on the stability of the transition state and the reactant-catalyst complex, passive binding contributes to acceleration only if the equilibrium between the unassociated reactant and catalyst and their complex is not completely shifted to the right. It was defined by A.J. Kirby in 1996 as opposed to the dynamic binding , i.e. the whole of interactions that are stronger at the transition state level than in the reactant-catalyst complex. [ 1 ]
| https://en.wikipedia.org/wiki/Passive_binding |
Passive daytime radiative cooling ( PDRC ) (also passive radiative cooling , daytime passive radiative cooling , radiative sky cooling , photonic radiative cooling , and terrestrial radiative cooling [ 2 ] [ 3 ] [ 4 ] [ 5 ] ) is the use of unpowered, reflective/ thermally-emissive surfaces to lower the temperature of a building or other object. [ 6 ]
It has been proposed as a method of reducing temperature increases caused by greenhouse gases by reducing the energy needed for air conditioning , [ 7 ] [ 8 ] lowering the urban heat island effect , [ 9 ] [ 10 ] and lowering human body temperatures . [ 11 ] [ 1 ] [ 12 ] [ 13 ] [ 7 ]
PDRCs can aid systems that are more efficient at lower temperatures, such as photovoltaic systems , [ 4 ] [ 14 ] dew collection devices, and thermoelectric generators . [ 4 ] [ 14 ]
Some estimates propose that dedicating 1–2% of the Earth's surface area to PDRC would stabilize surface temperatures. [ 15 ] [ 3 ] Regional variations provide different cooling potentials with desert and temperate climates benefiting more than tropical climates , attributed to the effects of humidity and cloud cover . [ 16 ] [ 17 ] [ 18 ] PDRCs can be included in adaptive systems, switching from cooling to heating to mitigate any potential "overcooling" effects. [ 19 ] [ 20 ] PDRC applications for indoor space cooling is growing with an estimated "market size of ~$27 billion in 2025." [ 21 ]
PDRC surfaces are designed to be high in solar reflectance to minimize heat gain and strong in longwave infrared (LWIR) thermal radiation heat transfer matching the atmosphere's infrared window (8–13 μm). [ 22 ] [ 2 ] [ 3 ] This allows the heat to pass through the atmosphere into space . [ 6 ] [ 2 ]
PDRCs leverage the natural process of radiative cooling, in which the Earth cools by releasing heat to space . [ 23 ] [ 24 ] [ 7 ] PDRC operates during daytime. [ 25 ] On a clear day, solar irradiance can reach 1000 W/m 2 with a diffuse component between 50-100 W/m 2 . The average PDRC has an estimated cooling power of ~100-150 W/m 2 , proportional to the exposed surface area . [ 4 ] [ 19 ]
PDRC applications are deployed as sky-facing surfaces. [ 14 ] Low-cost scalable PDRC materials with potential for mass production include coatings , thin films , metafabrics, aerogels , and biodegradable surfaces.
While typically white, other colors can also work, although generally offering less cooling potential. [ 26 ] [ 27 ]
Research, development, and interest in PDRCs have grown rapidly since the 2010s, attributable to a 2014 breakthrough in the use of photonic metamaterials to increase daytime cooling, [ 4 ] [ 28 ] [ 29 ] along with growing concerns over energy use and global warming. [ 30 ] [ 31 ] PDRC can be contrasted with traditional compression-based cooling systems (e.g., air conditioners), which consume substantial amounts of energy, have a net heating effect (heating the outdoors more than cooling the indoors), require ready access to electric power, and often employ coolants that deplete the ozone layer or have a strong greenhouse effect. [ 32 ] [ 33 ]
Unlike solar radiation management , PDRC increases heat emission beyond simple reflection. [ 34 ]
A 2019 study reported that "widescale adoption of radiative cooling could reduce air temperature near the surface, if not the whole atmosphere." [ 5 ] To address global warming, PDRCs must be designed "to ensure that the emission is through the atmospheric transparency window and out to space, rather than just to the atmosphere, which would allow for local but not global cooling." [ 34 ]
Currently the Earth is absorbing ~1 W/m² more than it is emitting, which leads to an overall warming of the climate. By covering a small fraction of the Earth with thermally emitting materials, the heat flow away from the Earth can be increased, and the net radiative flux can be reduced to zero (or even made negative), thus stabilizing (or cooling) the Earth (...) If only 1%–2% of the Earth’s surface were instead made to radiate at this rate rather than its current average value, the total heat fluxes into and away from the entire Earth would be balanced and warming would cease. [ 12 ] The estimated total surface area coverage is 5×10¹² m², or about half the size of the Sahara Desert . [ 34 ]
Desert climates have the highest radiative cooling potential due to low year-round humidity and cloud cover, while tropical climates have less potential due to higher humidity and cloud cover. [ 5 ] [ 35 ] Costs for global implementation have been estimated at $1.25 to $2.5 trillion or about 3% of global GDP, with expected economies of scale . [ 34 ] Low-cost scalable materials have been developed for widescale implementation, although some challenges toward commercialization remain. [ 36 ] [ 37 ]
Some studies recommended efforts to maximize solar reflectance or albedo of surfaces, with a goal of thermal emittance of 90%. For example, increasing reflectivity from 0.2 (typical rooftop) to 0.9 is far more impactful than improving an already reflective surface, such as from 0.9 to 0.97. [ 10 ]
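The effect of reflectivity is easiest to see as absorbed solar flux; the short calculation below assumes ~1,000 W/m² of incident sunlight, the clear-day figure quoted elsewhere in this article.

```python
incident = 1000.0   # W/m^2, assumed clear-sky solar irradiance
for reflectivity in (0.2, 0.9, 0.97):
    absorbed = (1 - reflectivity) * incident
    print(f"R = {reflectivity:.2f}: {absorbed:5.0f} W/m^2 absorbed")
# 0.20 -> 800 W/m^2, 0.90 -> 100 W/m^2, 0.97 -> 30 W/m^2:
# raising reflectivity from 0.2 to 0.9 removes far more heat gain
# than raising it from 0.9 to 0.97.
```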
Studies have reported many PDRC benefits:
PDRC has been claimed to be more stable, adaptable, and reversible than stratospheric aerosol injection (SAI). [ 41 ]
Wang et al. claimed that SAI "might cause potentially dangerous threats to the Earth’s basic climate operations" that may not be reversible, and thus preferred PDRC. [ 42 ] Munday noted that although "unexpected effects will likely occur" with the global implementation of PDRC, that "these structures can be removed immediately if needed, unlike methods that involve dispersing particulate matter into the atmosphere, which can last for decades." [ 34 ]
When compared to the reflective surfaces approach of increasing surface albedo, such as through painting roofs white, or the space mirror proposals of "deploying giant reflective surfaces in space", Munday claimed that "the increased reflectivity likely falls short of what is needed and comes at a high financial cost." [ 34 ] PDRC differs from the reflective surfaces approach by "increasing the radiative heat emission from the Earth rather than merely decreasing its solar absorption". [ 34 ]
The basic measure of PDRCs is their solar reflectivity (in 0.4–2.5 μm) and heat emissivity (in 8–13 μm), [ 2 ] to maximize "net emission of longwave thermal radiation " and minimize "absorption of downward shortwave radiation ". [ 5 ] PDRCs use the infrared window (8–13 μm) for heat transfer with the coldness of outer space (~2.7 K ) to radiate heat and subsequently lower ambient temperatures with zero energy input. [ 5 ]
PDRCs mimic the natural process of radiative cooling , in which the Earth cools itself by releasing heat to outer space ( Earth's energy budget ), but do so during the daytime, lowering ambient temperatures under direct solar irradiation. [ 5 ] On a clear day, solar irradiance can reach 1000 W/m 2 with a diffuse component between 50 and 100 W/m 2 . As of 2022 the average PDRC had a cooling power of ~100–150 W/m 2 . [ 19 ] Cooling power is proportional to the installation's surface area . [ 4 ]
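As a rough illustration of how these terms combine, a gray-body energy-balance sketch is given below; every parameter value is an assumption chosen for illustration, and real PDRC models integrate emission and atmospheric absorption over wavelength and angle.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def net_cooling_power(T_surface, T_ambient, emissivity=0.95, reflectivity=0.96,
                      solar=1000.0, sky_emissivity=0.8, h_conv=6.0):
    """Very rough net cooling power (W/m^2) of a sky-facing surface.

    Gray-body approximation: thermal emission minus absorbed atmospheric
    radiation, absorbed sunlight, and convective/conductive heat gain.
    All default values are illustrative assumptions, not measured data.
    """
    emitted      = emissivity * SIGMA * T_surface ** 4
    absorbed_sky = emissivity * sky_emissivity * SIGMA * T_ambient ** 4
    absorbed_sun = (1.0 - reflectivity) * solar
    conv_gain    = h_conv * (T_ambient - T_surface)   # positive when below ambient
    return emitted - absorbed_sky - absorbed_sun - conv_gain

# Surface at ambient temperature (300 K) under full sun:
print(net_cooling_power(300.0, 300.0))   # ~ +47 W/m^2 of net cooling in this toy model
```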
The most useful measurements come in a real-world setting. Standardized devices have been proposed. [ 43 ]
Evaluating atmospheric downward longwave radiation based on "the use of ambient weather conditions such as the surface air temperature and humidity instead of the altitude-dependent atmospheric profiles ," may be problematic since "downward longwave radiation comes from various altitudes of the atmosphere with different temperatures, pressures, and water vapor contents" and "does not have uniform density, composition, and temperature across its thickness." [ 5 ]
Broadband emitters possess high emittance in both the solar spectrum and atmospheric LWIR window (8 to 14 μm), whereas selective emitters only emit longwave infrared radiation. [ 19 ]
In theory, selective thermal emitters can achieve higher cooling power. [ 19 ] However, selective emitters face challenges in real-world applications that can weaken their performance, such as from dropwise condensation (common even in semi-arid climates) that can accumulate on even hydrophobic surfaces and reduce emission. [ 44 ] Broadband emitters outperform selective materials when "the material is warmer than the ambient air, or when its sub-ambient surface temperature is within the range of several degrees". [ 9 ]
Each type can be advantageous for certain applications. Broadband emitters may be better for horizontal applications, such as roofs, whereas selective emitters may be more useful on vertical surfaces such as building facades , where dropwise condensation is inconsequential and their stronger cooling power can be achieved. [ 44 ]
Broadband emitters can be made angle-dependent to potentially enhance performance. [ 19 ] Polydimethylsiloxane (PDMS) is a common broadband emitter. [ 44 ] Most PDRC materials are broadband, primarily due to their lower cost and higher performance at above-ambient temperatures. [ 45 ]
Combining PDRCs with other systems may increase their cooling power. When included in a combined thermal insulation , evaporative cooling , and radiative cooling system consisting of "a solar reflector, a water-rich and IR-emitting evaporative layer, and a vapor-permeable, IR-transparent, and solar-reflecting insulation layer," 300% higher [ clarification needed ] ambient cooling power was demonstrated. This could extend the shelf life of food by 40% in humid climates and 200% in dry climates without refrigeration . The system however requires water "re-charges" to maintain cooling power. [ 46 ]
A dual-mode asymmetric photonic mirror (APM) consisting of silicon-based diffractive gratings could achieve all-season cooling, even under cloudy and humid conditions, as well as heating. The cooling power of the APM could be as much as 80% higher than that of standalone radiative coolers. Under a cloudy sky it could achieve 8 °C more cooling and, for heating, 5.7 °C. [ 47 ]
The cooling potential of various areas varies primarily based on climate zones , weather patterns, and events. Dry and hot regions generally have higher radiative cooling power (up to 120 W m 2 ), while colder regions or those with high humidity or cloud cover generally have less. [ 35 ] Cooling potential changes seasonally due to shifts in humidity and cloud cover. [ 5 ] Studies mapping daytime radiative cooling potential have been done for China, [ 33 ] India, [ 48 ] the United States, [ 49 ] and across Europe. [ 50 ]
Dry regions such as western Asia, north Africa, Australia and the southwestern United States are ideal for PDRC due to the relative lack of humidity and cloud cover across the seasons. The cooling potential for desert regions has been estimated to be "in the higher range of 80–110 W/m²", [ 5 ] and 120 W/m². [ 35 ] The Sahara Desert and western Asia is the largest area on earth with such a high cooling potential. [ 5 ]
The cooling potential of desert regions is likely to remain relatively unfulfilled due to low population densities, reducing demand for local cooling, despite tremendous cooling potential. [ 5 ]
Temperate climates have a high radiative cooling potential and greater population density, which may increase interest in PDRCs. These zones tend to be "transitional" zones between dry and humid climates. [ 5 ] High population areas in temperate zones may be susceptible to an "overcooling" effect from PDRCs due to temperature shifts from summer to winter, which can be overcome with the modification of PDRCs to adjust for temperature shifts. [ 19 ]
While PDRCs have proven successful in temperate regions, reaching the same level of performance is more difficult in tropical climates. This has primarily been attributed to the higher solar irradiance and atmospheric radiation, particularly humidity and cloud cover. [ 16 ] The average cooling potential of tropical climates varies between 10 and 40 W/m², significantly lower than that of hot and dry climates. [ 5 ]
For example, the cooling potential of most of southeast Asia and the Indian subcontinent is significantly diminished in the summer due to a dramatic increase in humidity, dropping as low as 10–30 W/m². Other similar zones, such as tropical savannah areas in Africa, see a more modest decline during summer, dropping to 20–40 W/m². However, tropical regions generally have a higher albedo or radiative forcing due to sustained cloud cover and thus their land surface contributes less to planetary albedo. [ 5 ]
A 2022 study reported that a PDRC surface in tropical climates should have a solar reflectance of at least 97% and an infrared emittance of at least 80% to reduce temperatures. The study applied a BaSO 4 - K 2 SO 4 coating with a "solar reflectance and infrared emittance (8–13 μm) of 98.4% and 95% respectively" in the tropical climate of Singapore and achieved a "sustained daytime sub-ambient temperature of 2 °C" under a direct solar intensity of 1000 W/m². [ 16 ]
Humidity and cloud coverage significantly weaken PDRC effectiveness. [ 7 ] A 2022 study noted that "vertical variations of both vapor concentration and temperature in the atmosphere" can have a considerable impact on radiative coolers. The authors reported that aerosol and cloud coverage can weaken the effectiveness of radiators and thus concluded that adaptable "design strategies of radiative coolers" are needed to maximize effectiveness under these climatic conditions. [ 17 ]
The formation of dropwise condensation on PDRC surfaces can alter the infrared emittance of selective PDRC emitters, which can weaken their performance. Dew can form even in semi-arid environments. Another 2022 study reported that dew formation "may broaden the narrowband emittances of the selective emitter and reduce their sub-ambient cooling power and their supposed cooling benefits over broadband emitters" [ 44 ] and that:
Our work shows that the assumed benefits of selective emitters are even smaller when it comes to the largest application of radiative cooling – cooling roofs of buildings. However, recently, it has been shown that for vertical building facades experiencing broadband summertime terrestrial heat gains and wintertime losses, selective emitters can achieve seasonal thermoregulation and energy savings. Since dew formation appears less likely on vertical surfaces even in exceptionally humid environments, the thermoregulatory benefits of selective emitters will likely persist in both humid and dry operating conditions. [ 44 ]
Rain can generally help clean PDRC surfaces covered with dust, dirt, or other debris. However, in humid areas, consistent rain can result in water accumulation that can hinder performance. Porous PDRCs can mitigate these conditions. [ 51 ] Another response is to make hydrophobic self-cleaning PDRCs. Scalable and sustainable hydrophobic PDRCs that avoid VOCs can repel rainwater and other liquids. [ 52 ]
Wind may alter the efficiency of passive radiative cooling surfaces and technologies. A 2020 study proposed using a "tilt strategy and wind cover strategy" to mitigate wind effects. The researchers reported regional differences in China, noting that "85% of China's areas can achieve radiative cooling performance with wind cover" whereas in northwestern China wind cover effects would be more substantial. [ 18 ] Bijarniya et al. similarly propose the use of a wind shield in areas susceptible to high winds. [ 7 ]
PDRC surfaces can be made of various materials. However, for widespread application, PDRC materials must be low cost, available for mass production, and applicable in many contexts. Most research has focused on coatings and thin films, which tend to be more available for mass production, lower cost, and more applicable in a wider range of contexts, although other materials may provide potential for specific applications. [ 36 ] [ 37 ] [ 53 ] [ 54 ]
PDRC research has identified more sustainable material alternatives, even if not fully biodegradable . [ 30 ] [ 55 ] [ 56 ] [ 57 ] [ 58 ] A 2023 study reported that "most PDRC materials now are non-renewable polymers, artificial photonic or synthetic chemicals, which will cause excessive CO 2 emissions by consuming fossil fuels and go against the global carbon neutrality goal. Environmentally friendly bio-based renewable materials should be an ideal material to devise PDRC systems." [ 59 ]
Advanced photonic materials and structures, such as multilayer thin films, micro/nanoparticles, photonic crystals , metamaterials , and metasurfaces , have been reported as potential approaches. [ 60 ] However, while multilayer and complex nano-photonic structures have proven successful in experimental scenarios and simulations, a 2022 study reported that widespread application "is severely restricted because of the complex and expensive processes of preparation". [ 37 ] Similarly, a 2020 study reported that "scalable production of artificial photonic radiators with complex structures, outstanding properties, high throughput, and low cost is still challenging". [ 61 ] This has spurred research into simpler PDRC structures that may be better suited for mass production. [ 60 ]
PDRC coatings such as paints may be advantageous given their direct application to surfaces, simplifying preparation and reducing costs, [ 37 ] although not all coatings are inexpensive. [ 62 ] A 2022 study stated that coatings generally offer "strong operability, convenient processing, and low cost, which have the prospect of large-scale utilization". [ 36 ] PDRC coatings have been developed in colors other than white while still demonstrating high solar reflectance and heat emissivity. [ 26 ]
Coatings must be durable and resistant to soiling, which can be achieved with porous PDRCs [ 51 ] or hydrophobic topcoats that can withstand cleaning, although hydrophobic coatings use polytetrafluoroethylene or similar compounds to be water-resistant. [ 62 ] Negative environmental impacts can be mitigated by limiting use of other toxic solvents common in paints, such as acetone . Non-toxic or water-based paints have been developed. [ 62 ] [ 56 ]
Porous polymer coatings (PPCs) exhibit excellent PDRC performance. These polymers have a high concentration of tiny pores, which scatter light effectively at the boundary between the polymer and the air. This scattering enhances both solar reflectance (more than 96%) and thermal emittance (97% of heat), lowering surface temperatures six degrees below the surroundings at noon in Phoenix. The fabrication process is solution-based, aiding scalability. [ 63 ] [ 64 ] A dye of the desired color can be coated onto the polymer; compared with the traditional approach, in which the dye is mixed into the porous polymer, this design cools more effectively. [ 65 ]
A 2018 study reported significantly lowered coating costs, stating that "photonic media, when properly randomized to minimize the photon transport mean free path, can be used to coat a black substrate and reduce its temperature by radiative cooling." This coating could "outperform commercially available solar-reflective white paint for daytime cooling" without expensive manufacturing steps or materials. [ 66 ]
Many thin films offer high solar reflectance and heat emittance. However, films with precise patterns or structures are not scalable "due to the cost and technical difficulties inherent in large-scale precise lithography " (2022), [ 9 ] or "due to complex nanoscale lithography/synthesis and rigidity" (2021). [ 69 ]
A polyacrylate hydrogel film [ 70 ] from a 2022 study has broader applications, including potential uses in building construction and large-scale thermal management systems. The research focused on a film developed for hybrid passive cooling. The film uses sodium polyacrylate , a low-cost industrial material, to achieve high solar reflectance and high mid-infrared emittance. A significant feature of this material is its ability to absorb atmospheric moisture, aiding evaporative cooling . This combined mechanism allows for efficient cooling under varying atmospheric conditions, including high humidity or limited access to clear skies. [ 70 ]
PDRCs can be made of metafabrics, which can be used in clothing to shield or regulate body temperatures. Most metafabrics are made of petroleum-based fibers. [ 74 ] For instance, a 2023 study reported that "new flexible cellulose fibrous films with wood-like hierarchical microstructures need to be developed for wearable PDRC applications." [ 59 ]
A 2021 study chose a composite of titanium oxide and polylactic acid (TiO2-PLA) with a polytetrafluoroethylene (PTFE) lamination. The fabric underwent optical and thermal characterization, measuring properties such as reflectivity and emissivity. Numerical simulations, including Lorenz–Mie theory and Monte Carlo simulations , were crucial in predicting the fabric's performance and guiding optimization. Mechanical testing was conducted to assess the fabric's durability, strength, and practicality. [ 75 ]
The study reported exceptional ability to facilitate radiative cooling. The fabric achieved 94.5% emissivity and 92.4% reflectivity. This combination of high emissivity and reflectivity is central to its cooling capabilities, significantly outperforming traditional fabrics. Additionally, the fabric's mechanical properties, including strength, durability, waterproofness, and breathability, confirmed its suitability for clothing. [ 75 ] [ 76 ] [ 77 ]
Aerogels offer a potential low-cost material scalable for mass production. Some aerogels can be considered a more environmentally friendly alternative to other materials, with degradable potential and the absence of toxic chemicals. [ 79 ] [ 57 ] Aerogels can be useful as thermal insulation to reduce solar absorption and parasitic heat gain to improve the cooling performance of PDRCs. [ 80 ]
Pigments absorb light. Soap bubbles, by contrast, show a prism of different colors on their surfaces. These colors result from the way light interacts with differing thicknesses of the bubble's surface, termed structural color . One study reported that cellulose nanocrystals (CNCs), which are derived from the cellulose found in plants, could be made into iridescent, colorful films without added pigment. The blue, green, and red films produced were, when placed under sunlight, an average of nearly 7 °F cooler than the surrounding air. The film generated over 120 W/m² of cooling power. [ 83 ]
Many proposed radiative cooling materials are not biodegradable . A 2022 study reported that "sustainable materials for radiative cooling have not been sufficiently investigated." [ 30 ]
A silica micro-grating photonic device cooled commercial silicon cells by 3.6 °C under solar intensities of 830–990 W/m². [ 84 ]
Passive daytime radiative cooling has "the potential to simultaneously alleviate the two major problems of energy crisis and global warming" [ 1 ] and has been described as an "environmental protection refrigeration technology." [ 36 ] PDRCs have an array of potential applications, but are now most often applied to various aspects of the built environment , such as building envelopes , cool pavements , and other surfaces, to decrease energy demand, costs, and CO 2 emissions. [ 85 ] PDRC has been applied for indoor space cooling, outdoor urban cooling, solar cell efficiency , and power plant condenser cooling, among other applications. [ 7 ] [ 4 ] [ 29 ] For outdoor applications, PDRC durability is an important requirement. [ 45 ]
The most common application is on building envelopes, including cool roofs . A PDRC can double the energy savings of a white roof. [ 4 ] This makes PDRCs an alternative or supplement to air conditioning that lowers energy demand and reduces air conditioning's release of hydrofluorocarbons (HFC) into the atmosphere. HFCs can be thousands of times more potent than CO 2 . [ 7 ] [ 4 ] [ 37 ] [ 8 ]
Air conditioning accounts for 12–15% of global energy usage, [ 7 ] [ 74 ] while CO 2 emissions from air conditioning account for "13.7% of energy-related CO 2 emissions, approximately 52.3 EJ yearly" [ 36 ] or 10% of total emissions. [ 74 ] Air conditioning applications are expected to rise. [ 26 ] However, this can be significantly reduced with the mass production of low-cost PDRCs for indoor space cooling. [ 7 ] [ 8 ] [ 87 ] A multilayer PDRC surface covering 10% of a building's roof can replace 35% of air conditioning used during the hottest hours of daytime. [ 7 ]
In suburban single-family residential areas , PDRCs can lower energy costs by 26% to 46% in the United States [ 86 ] and lower temperatures on average by 5.1 °C. With the addition of "cold storage to utilize the excess cooling energy of water generated during off-peak hours, the cooling effects for indoor air during the peak-cooling-load times can be significantly enhanced" and air temperatures may be reduced by 6.6–12.7 °C. [ 88 ]
In cities, PDRCs can produce significant energy and cost savings. In a study on US cities, Zhou et al. found that "cities in hot and arid regions can achieve high annual electricity consumption savings of >2200 kWh , while <400 kWh is attainable in colder and more humid cities," ranking from highest to lowest by electricity consumption savings as follows: Phoenix (~2500 kWh), Las Vegas (~2250 kWh), Austin (~2100 kWh), Honolulu (~2050 kWh), Atlanta (~1500 kWh), Indianapolis (~1200 kWh), Chicago (~1150 kWh), New York City (~900 kWh), Minneapolis (~850 kWh), Boston (~750 kWh), Seattle (~350 kWh). [ 88 ] In a study projecting energy savings for Indian cities in 2030, Mumbai and Kolkata had a lower energy savings potential, while Jaisalmer , Varanasi , and Delhi had a higher potential, although with significant variations from April to August dependent on humidity and wind cover. [ 48 ]
The growing interest and rise in PDRC application to buildings has been attributed to cost savings related to "the sheer magnitude of the global building surface area, with a market size of ~$27 billion in 2025," as estimated in a 2020 study. [ 85 ]
PDRC surfaces can mitigate extreme heat from the urban heat island effect that occurs in over 450 cities worldwide. It can be as much as 10–12 °C (18–22 °F) hotter in urban areas than nearby rural areas . [ 9 ] [ 10 ] On an average hot summer day, the roofs of buildings can be 27–50 °C (49–90 °F) hotter than the surrounding air, warming air temperatures further through convection . Well-insulated dark rooftops are significantly hotter than all other urban surfaces, including asphalt pavements, [ 10 ] further expanding air conditioning demand (which further accelerates global warming and urban heat island through the release of waste heat into the ambient air) and increasing risks of heat-related disease and fatal health effects. [ 9 ] [ 39 ] [ 40 ]
PDRCs can be applied to building roofs and urban shelters to significantly lower surface temperatures with zero energy consumption by reflecting heat out of the urban environment and into outer space. [ 9 ] [ 10 ] The primary obstacle to PDRC implementation is the glare that may be caused by the reflection of visible light onto surrounding buildings. Glare may be mitigated by colored PDRC surfaces, such as that of Zhai et al., [ 26 ] [ 62 ] by "super-white paints with commercial high-index (n~1.9) retroreflective spheres ", [ 62 ] or by the use of retroreflective materials (RRM). [ 10 ] Surrounding buildings without PDRC may weaken the cooling power of PDRCs. [ 86 ]
Even when installed on roofs in highly dense urban areas, broadband radiative cooling panels lower surface temperatures at the sidewalk level. [ 89 ] A 2022 study assessed the effects of PDRC surfaces in winter, including non-modulated and modulated PDRCs, in the Kolkata metropolitan area . A non-modulated PDRC with a reflectance of 0.95 and emissivity of 0.93 decreased ground surface temperatures by nearly 4.9 °C (8.8 °F) and with an average daytime reduction of 2.2 °C (4.0 °F). [ 9 ]
While in summer the cooling effects of broadband non-modulated PDRCs may be desirable, they could present an uncomfortable "overcooling" effect for city populations in winter and thus increase energy use for heating. This can be mitigated by broadband modulated PDRCs, which the study found could increase daily ambient urban temperatures by 0.4–1.4 °C (0.72–2.52 °F) in winter. While in Kolkata "overcooling" is unlikely, elsewhere it could have unwanted impacts. Therefore, modulated PDRCs may be preferred in cities with warm summers and cold winters for controlled cooling, while non-modulated PDRCs may be more beneficial for cities with hot summers and moderate winters. [ 9 ]
In a study on urban bus shelters , it was found that most shelters fail at providing thermal comfort for commuters , while a tree could provide 0.5 °C (0.90 °F) more cooling. [ 86 ] Other methods to cool shelters often involve air conditioning or other energy intensive measures. Urban shelters with PDRC roofing can significantly reduce temperatures with zero energy input, while adding "a non-reciprocal mid-infrared cover" can increase benefits by reducing incoming atmospheric radiation as well as reflecting radiation from surrounding buildings. [ 86 ]
For outdoor urban space cooling, a 2021 study recommended that PDRC in urban areas primarily focus on increasing albedo so long as emissivity can be maintained above 90%. [ 10 ]
PDRC surfaces can be integrated with solar energy plants , referred to as solar energy–radiative cooling (SE–RC), to improve functionality and performance by preventing solar cells from 'overheating' and thus degrading. Since silicon solar cells have a maximum efficiency of 33.7% (with the average commercial panel reaching around 20%), the majority of absorbed power produces excess heat and increases the operating temperature. [ 4 ] [ 72 ] Solar cell efficiency declines by 0.4–0.5% for every 1 °C increase in temperature. [ 4 ]
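The temperature sensitivity quoted above can be illustrated with a simple linear temperature-coefficient model. The sketch below is a minimal illustration, not a manufacturer's datasheet model: the coefficient, the cell temperatures, and the assumed 10 °C reduction from radiative cooling are all hypothetical.

```python
# Minimal sketch (assumed linear temperature coefficient, hypothetical temperatures):
# estimate relative PV output at an elevated cell temperature, and the gain from a
# radiative-cooling layer that lowers that temperature.

def relative_output(cell_temp_c: float,
                    ref_temp_c: float = 25.0,
                    temp_coeff_per_c: float = -0.004) -> float:
    """Fraction of rated output, using a linear power temperature coefficient."""
    return 1.0 + temp_coeff_per_c * (cell_temp_c - ref_temp_c)

if __name__ == "__main__":
    hot = relative_output(65.0)      # uncooled cell at an assumed 65 degC
    cooled = relative_output(55.0)   # hypothetical 10 degC reduction from a PDRC layer
    print(f"uncooled: {hot:.3f}, cooled: {cooled:.3f}")  # ~0.840 vs ~0.880 of rated output
```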
PDRC can extend the life of solar cells by lowering the operating temperature of the system. [ 72 ] Integrating PDRCs into solar energy systems is also relatively simple, given that "most solar energy harvesting systems have a sky-facing flat plate structural design, which is similar to radiative cooling systems." Integration has been reported to increase energy gain per unit area while increasing the fraction of the day the cell operates. [ 14 ]
Methods have been proposed to potentially enhance cooling performance. One 2022 study proposed using a "full-spectrum synergetic management (FSSM) strategy to cool solar cells, which combines radiative cooling and spectral splitting to enhance radiative heat dissipation and reduce the waste heat generated by the absorption of sub-BG photons." [ 90 ]
Personal thermal management (PTM) employs PDRC in fabrics to regulate body temperatures during extreme heat. While other fabrics are useful for heat accumulation, they "may lead to heat stroke in hot weather." [ 91 ] A 2021 study claimed that "incorporating passive radiative cooling structures into personal thermal management technologies could effectively defend humans against intensifying global climate change." [ 92 ]
Wearable PDRCs can come in different forms and target outdoor workers. Products are at the prototype stage. [ 78 ] [ 93 ] Although most textiles are white, colored wearable materials in select colors may be appropriate in some contexts. [ 4 ]
Power plant condensers used in thermoelectric power plants and concentrated solar plants (CSP) can cool water for effective use within the heat exchanger . A study of a pond covered with a radiative cooler reported that a cooling flux of 150 W/m² could be achieved without loss of water. [ 7 ] PDRC can reduce water use and thermal pollution caused by water cooling . [ 5 ]
A review reported that supplementing the air-cooled condenser of a thermoelectric power plant with radiative cooling panels achieved a cooling effect of 4096 kWh (thermal) per day with a pump energy consumption of 11 kWh/day. [ 7 ] A concentrated solar plant (CSP) on the supercritical CO 2 cycle at 550 °C was reported to produce a 5% net output gain over an air-cooled system by integration with a radiative cooler of 14 m²/kWe capacity. [ 7 ]
In addition to cooling, PDRC surfaces can be modified for bi-directional thermal regulation (cooling and heating). [ 9 ] This can be achieved through switching thermal emittance between high and low values. [ 9 ] [ 4 ]
When combined with a thermoelectric generator, a PDRC surface can generate small amounts of electricity. [ 4 ]
Thermally enclosed spaces, including automobiles and greenhouses , are particularly susceptible to harmful temperature increases. This is because of the heavy presence of windows, which are transparent to incoming solar radiation yet opaque to outgoing long-wave thermal radiation, causing these spaces to heat rapidly in the sun. Automobile interiors in direct sunlight can reach 60–82 °C when the ambient temperature is only 21 °C. [ 4 ]
Dew harvesting yields may be improved with PDRC. Selective PDRC emitters with high emissivity and broadband emitters may produce varying results. In one study using a broadband PDRC, the device condensed ~8.5 mL of water per day at 800 W/m² of peak solar intensity. [ 4 ] Whereas selective emitters may be less advantageous in other contexts, they may be superior for dew harvesting applications. [ 44 ] PDRCs could improve atmospheric water harvesting by being combined with solar vapor generation systems to improve water collection rates. [ 45 ]
PDRC surfaces can be installed over the surface of a body of water for cooling. In a controlled study, a body of water was cooled 10.6 °C below the ambient temperature with the usage of a photonic radiator. [ 7 ] [ failed verification ]
PDRC surfaces have been developed to cool ice and prevent ice from melting under sunlight. It has been proposed as a sustainable method for ice protection. This can also be applied to protect refrigerated food from spoiling. [ 94 ]
Jeremy Munday writes that although "unexpected effects will likely occur", PDRC structures "can be removed immediately if needed, unlike methods that involve dispersing particulate matter into the atmosphere, which can last for decades." [ 95 ] Stratospheric aerosol injection "might cause potentially dangerous threats to the Earth’s basic climate operations" that may not be reversible, leading some researchers to prefer PDRC. [ 2 ] Zevenhoven et al. state that "instead of stratospheric aerosol injection (SAI), cloud brightening or a large number of mirrors in the sky (“sunshade geoengineering”) to block out or reflect incoming (short-wave, SW) solar irradiation , long-wavelength (LW) thermal radiation can be selectively emitted and transferred through the atmosphere into space". [ 3 ]
"Overcooling" is cited as a side effect of PDRCs that may be problematic, especially when PDRCs are applied in high-population areas with hot summers and cool winters, characteristic of temperate zones . [ 19 ] While PDRC application in these areas can be useful in summer, in winter it can result in an increase in energy consumption for heating and thus may reduce the benefits of PDRCs on energy savings and emissions. [ 9 ] [ 20 ] As per Chen et al., "to overcome this issue, dynamically switchable coatings have been developed to prevent overcooling in winter or cold environments." [ 19 ]
The detriments of overcooling can be reduced by modulation of PDRCs, harnessing their passive cooling abilities during summer, while modifying them to passively heat during winter. Modulation can involve "switching the emissivity or reflectance to low values during the winter and high values during the warm period." [ 9 ] In 2022, Khan et al. concluded that "low-cost optically modulated" PDRCs are "under development" and "are expected to be commercially available on the market soon with high future potential to reduce urban heat in cities without leading to an overcooling penalty during cold periods." [ 9 ]
There are various methods of making PDRCs 'switchable' to mitigate overcooling. [ 19 ] Most research has used vanadium dioxide (VO2), an inorganic compound , to achieve temperature-based 'switchable' cooling and heating effects. [ 19 ] [ 20 ] While, as per Khan et al., developing VO2 is difficult, their review found that "recent research has focused on simplifying and improving the expansion of techniques for different types of applications." [ 9 ] Chen et al. found that "much effort has been devoted to VO2 coatings in the switching of the mid-infrared spectrum , and only a few studies have reported the switchable ability of temperature-dependent coatings in the solar spectrum." [ 19 ] Temperature-dependent switching requires no extra energy input to achieve both cooling and heating. [ 19 ]
Other methods of PDRC 'switching' require extra energy input to achieve desired effects. One such method involves changing the dielectric environment . This can be done through "reversible wetting" and drying of the PDRC surface with common liquids such as water and alcohol . However, for this to be implemented on a mass scale, "the recycling, and utilization of working liquids and the tightness of the circulation loop should be considered in realistic applications." [ 19 ]
Another method involves 'switching' through mechanical force, which may be useful and has been "widely investigated in [PDRC] polymer coatings owing to their stretchability." For this method, "to achieve a switchable coating in ε LWIR [long-wave infrared emittance], mechanical stress/strain can be applied in a thin PDMS film, consisting of a PDMS grating and embedded nanoparticles ." One study estimated, with the use of this method, that "19.2% of the energy used for heating and cooling can be saved in the US, which is 1.7 times higher than the only cooling mode and 2.2 times higher than the only heating mode," which may inspire additional research and development. [ 19 ]
Glare caused from surfaces with high solar reflectance may present visibility concerns that can limit PDRC application, particularly within urban environments at the ground level. [ 26 ] PDRCs that use a "scattering system" to generate reflection in a more diffused manner have been developed and are "more favorable in real applications," as per Lin et al. [ 96 ]
Low-cost PDRC colored paint coatings, which reduce glare and increase the color diversity of PDRC surfaces, have also been developed. While some of the surface's solar reflectance is lost in the visible light spectrum, colored PDRCs can still exhibit significant cooling power, such as a coating by Zhai et al., which used an α- Bi 2 O 3 coating (resembling the color of the compound) to develop a non-toxic paint that demonstrated a solar reflectance of 99% and a heat emissivity of 97%. [ 26 ]
Generally, there is a tradeoff between cooling potential and darker-colored surfaces. Less reflective colored PDRCs can be applied to walls while more reflective white PDRCs are applied to roofs, increasing the visual diversity of vertical surfaces while still contributing to cooling. [ 27 ]
Nocturnal passive radiative cooling has been recognized for thousands of years, with records showing awareness by the ancient Iranians , demonstrated through the construction of Yakhchāls , since 400 B.C.E. [ 98 ]
PDRC was hypothesized by Félix Trombe in 1967. The first experimental setup was created in 1975, but was only successful for nighttime cooling. Further developments to achieve daytime cooling using different material compositions were not successful. [ 7 ]
In the 1980s, Lushiku and Granqvist identified the infrared window as a potential means of accessing the ultracold of outer space to achieve passive daytime cooling. [ 3 ]
Early attempts at developing passive radiative daytime cooling materials took inspiration from nature, particularly the Saharan silver ant and white beetles, noting how they cooled themselves in extreme heat. [ 4 ] [ 29 ]
Research and development in PDRC advanced rapidly in the 2010s with the discovery that solar heating could be suppressed using photonic metamaterials, which greatly expanded work in the field. [ 4 ] [ 29 ]
In 2024, Nissan introduced a paint that lowers car interior temperatures by up to 21 °F in direct sunlight. It involves two types of particles, each operating at a different frequency. One reflects near-infrared light. The second converts other frequencies to match the infrared window, radiating the energy into space. [ 99 ] | https://en.wikipedia.org/wiki/Passive_daytime_radiative_cooling |
A passive dual coil resonator (pDCR) is a purely passive receive coil insert for a preclinical magnetic particle imaging (MPI) system which provides frequency-selective signal enhancement. The pDCR aims to enhance the frequency components associated with high mixing orders, which are critical to achieve a high spatial resolution . [ 1 ]
One of the biggest challenges in MPI is to achieve a good signal-to-noise ratio , especially for higher harmonics . The intention is that as many harmonics of the induced particle signal as possible, which drop in intensity at higher frequencies and eventually disappear into the noise floor, remain usable for image reconstruction to reach a better spatial resolution. [ 2 ] To enhance the harmonics at higher frequencies, one aims to increase the inductive coupling between the particles and the receive coils at higher harmonics.
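As described in the next paragraph, the pDCR is a passive LC resonant circuit, so the band it enhances follows the standard resonance relation f0 = 1/(2π√(LC)). The sketch below is a minimal illustration with entirely hypothetical coil and frequency values (not taken from the published pDCR design) of how a capacitance could be chosen to place the resonance near a target harmonic band.

```python
# Minimal sketch (hypothetical component values, not the published pDCR design):
# place an LC resonance near the harmonic band where the particle signal
# approaches the noise floor.
import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an ideal LC circuit: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

def capacitance_for_target(inductance_h: float, target_f_hz: float) -> float:
    """Capacitance needed to resonate a given inductance at a target frequency."""
    return 1.0 / ((2.0 * math.pi * target_f_hz) ** 2 * inductance_h)

if __name__ == "__main__":
    L = 10e-6           # assumed 10 uH receive coil
    f_target = 500e3    # assumed 500 kHz target harmonic band
    C = capacitance_for_target(L, f_target)
    print(f"C = {C * 1e9:.1f} nF -> f0 = {resonant_frequency_hz(L, C) / 1e3:.1f} kHz")
```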
As the name suggests, the pDCR, which consists of two coaxial coils , is passive because it does not have a voltage source and also no electrical connection to the rest of the MPI scanner system. The pDCR represents a resonant circuit and therefore also includes a capacitor . The pDCR picks up the particles’ magnetization response mainly with its interior coil. It then sends out the received signal with its exterior coil, but enhanced in the range of its resonant frequency. This output is then picked up by the scanner’s receive coils. There will be coupling between all the different coils of the scanner and the pDCR, however, the described process will dominate. The pDCR, i.e. the resonant circuit, is tuned to a frequency near the frequency at which the harmonics of the signal disappear into the noise floor and thus serves its function as mentioned above. | https://en.wikipedia.org/wiki/Passive_dual_coil_resonator |
In immunology , passive immunity is the transfer of active humoral immunity in the form of ready-made antibodies . Passive immunity can occur naturally, when maternal antibodies are transferred to the fetus through the placenta , and it can also be induced artificially, when high levels of antibodies specific to a pathogen or toxin (obtained from humans , horses , or other animals ) are transferred to non- immune persons through blood products that contain antibodies, such as in immunoglobulin therapy or antiserum therapy. [ 1 ] Passive immunization is used when there is a high risk of infection and insufficient time for the body to develop its own immune response, or to reduce the symptoms of ongoing or immunosuppressive diseases. [ 2 ] Passive immunization can be provided when people cannot synthesize antibodies, and when they have been exposed to a disease that they do not have immunity against. [ 3 ]
Maternal passive immunity is a type of naturally acquired passive immunity, and refers to antibody -mediated immunity conveyed to a fetus or infant by its mother. Naturally acquired passive immunity can be provided during pregnancy, and through breastfeeding . [ 4 ] In humans, maternal antibodies (MatAb) are passed through the placenta to the fetus by an FcRn receptor on placental cells. This occurs predominantly during the third trimester of pregnancy, and thus is often reduced in babies born prematurely. Immunoglobulin G (IgG) is the only antibody isotype that can pass through the human placenta, and is the most common antibody of the five types of antibodies found in the body. IgG antibodies protect against bacterial and viral infections in fetuses. Immunization is often required shortly following birth to prevent diseases in newborns such as tuberculosis , hepatitis B , polio , and pertussis ; however, maternal IgG can inhibit the induction of protective vaccine responses throughout the first year of life. This effect is usually overcome by secondary responses to booster immunization. [ 5 ] Maternal antibodies protect against some diseases, such as measles, rubella, and tetanus, more effectively than against others, such as polio and pertussis. [ 6 ] Maternal passive immunity offers immediate protection, though protection mediated by maternal IgG typically only lasts up to a year. [ 6 ]
Passive immunity is also provided through colostrum and breast milk, which contain IgA antibodies that are transferred to the gut of the infant, providing local protection against disease causing bacteria and viruses until the newborn can synthesize its own antibodies. [ 7 ] Protection mediated by IgA is dependent on the length of time that an infant is breastfed, which is one of the reasons the World Health Organization recommends breastfeeding for at least the first two years of life. [ 8 ]
Other species besides humans transfer maternal antibodies before birth, including primates and lagomorphs (which includes rabbits and hares). [ 9 ] In some of these species IgM can be transferred across the placenta as well as IgG. All other mammalian species predominantly or solely transfer maternal antibodies after birth through milk. In these species, the neonatal gut is able to absorb IgG for hours to days after birth. However, after a period of time the neonate can no longer absorb maternal IgG through their gut, an event that is referred to as "gut closure". If a neonatal animal does not receive adequate amounts of colostrum prior to gut closure, it does not have a sufficient amount of maternal IgG in its blood to fight off common diseases. This condition is referred to as failure of passive transfer. It can be diagnosed by measuring the amount of IgG in a newborn's blood, and is treated with intravenous administration of immunoglobulins. If not treated, it can be fatal. [ citation needed ]
A preprint suggested that antibodies (to SARS-CoV-2) present in, or transmitted through, the air are an unrecognized mechanism by which passive immune protection can be transferred. [ 10 ] [ better source needed ]
Antibodies from vaccination can be present in saliva and thereby may have utility in preventing infection. [ 11 ] [ better source needed ]
Artificially acquired passive immunity is a short-term immunization achieved by the transfer of antibodies, which can be administered in several forms; as human or animal blood plasma or serum , as pooled human immunoglobulin for intravenous ( IVIG ) or intramuscular (IG) use, as high-titer human IVIG or IG from immunized donors or from donors recovering from the disease, and as monoclonal antibodies (MAb). Passive transfer is used to prevent disease or used prophylactically in the case of immunodeficiency diseases, such as hypogammaglobulinemia . [ 12 ] [ 13 ] It is also used in the treatment of several types of acute infection, and to treat poisoning . [ 2 ] Immunity derived from passive immunization lasts for a few weeks to three to four months. [ 14 ] [ 15 ] There is also a potential risk for hypersensitivity reactions, and serum sickness , especially from gamma globulin of non-human origin. [ 7 ] Passive immunity provides immediate protection, but the body does not develop memory; therefore, the patient is at risk of being infected by the same pathogen later unless they acquire active immunity or vaccination. [ 7 ]
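The weeks-to-months duration noted above reflects the gradual elimination of transferred antibody. The sketch below is a minimal first-order decay illustration with an assumed serum half-life of roughly three weeks for IgG; actual kinetics vary with the antibody preparation, dose, and recipient.

```python
# Minimal sketch (assumed ~3-week IgG half-life; real kinetics vary by product,
# dose, and recipient): fraction of passively transferred antibody remaining
# after a given number of days, using simple first-order elimination.

def fraction_remaining(days: float, half_life_days: float = 21.0) -> float:
    """Remaining fraction under first-order (exponential) decay."""
    return 0.5 ** (days / half_life_days)

if __name__ == "__main__":
    for d in (21, 42, 90, 120):
        print(f"day {d:3d}: {fraction_remaining(d):.2f} of the initial antibody level")
    # By roughly three to four months only a few percent remains, consistent
    # with the protection window described above.
```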
In 1888 Emile Roux and Alexandre Yersin showed that the clinical effects of diphtheria were caused by diphtheria toxin and, following the 1890 discovery of an antitoxin -based immunity to diphtheria and tetanus by Emil Adolf von Behring and Kitasato Shibasaburō , antitoxin became the first major success of modern therapeutic immunology. [ 16 ] [ 17 ] Shibasaburo and von Behring immunized guinea pigs with the blood products from animals that had recovered from diphtheria and realized that the same process of heat treating blood products of other animals could treat humans with diphtheria. [ 18 ] By 1896, the introduction of diphtheria antitoxin was hailed as "the most important advance of the [19th] Century in the medical treatment of acute infective disease". [ 19 ]
Prior to the advent of vaccines and antibiotics , specific antitoxin was often the only treatment available for infections such as diphtheria and tetanus. Immunoglobulin therapy continued to be a first line therapy in the treatment of severe respiratory diseases until the 1930s, even after sulfonamides were introduced. [ 13 ]
In 1890 antibody therapy was used to treat tetanus , when serum from immunized horses was injected into patients with severe tetanus in an attempt to neutralize the tetanus toxin, and prevent the dissemination of the disease. Since the 1960s, human tetanus immune globulin (TIG) has been used in the United States in unimmunized, vaccine-naive or incompletely immunized patients who have sustained wounds consistent with the development of tetanus. [ 13 ] The administration of horse antitoxin remains the only specific pharmacologic treatment available for botulism . [ 20 ] Antitoxin, also known as heterologous hyperimmune serum, is also often given prophylactically to individuals known to have ingested contaminated food. [ 6 ] IVIG treatment was also used successfully to treat several patients with toxic shock syndrome , during the 1970s tampon scare . [ citation needed ]
Antibody therapy is also used to treat viral infections. In 1945, hepatitis A infections, epidemic in summer camps, were successfully prevented by immunoglobulin treatment. Similarly, hepatitis B immune globulin (HBIG) effectively prevents hepatitis B infection. Antibody prophylaxis of both hepatitis A and B has largely been supplanted by the introduction of vaccines; however, it is still indicated following exposure and prior to travel to areas of endemic infection. [ 21 ]
In 1953, human vaccinia immunoglobulin (VIG) was used to prevent the spread of smallpox during an outbreak in Madras, India , and continues to be used to treat complications arising from smallpox vaccination. Although the prevention of measles is typically induced through vaccination, it is often treated immuno-prophylactically upon exposure. Prevention of rabies infection still requires the use of both vaccine and immunoglobulin treatments. [ 13 ]
During a 1995 Ebola virus outbreak in the Democratic Republic of Congo , whole blood from recovering patients, and containing anti-Ebola antibodies, was used to treat eight patients, as there was no effective means of prevention, though a treatment was discovered recently in the 2013 Ebola epidemic in Africa. Only one of the eight infected patients died, compared to a typical 80% Ebola mortality, which suggested that antibody treatment may contribute to survival. [ 22 ] Immune globulin or immunoglobulin has been used to both prevent and treat reactivation of the herpes simplex virus (HSV), varicella zoster virus , Epstein-Barr virus (EBV), and cytomegalovirus (CMV). [ 13 ]
The following immunoglobulins are the immunoglobulins currently approved for use for infectious disease prophylaxis and immunotherapy , in the United States. [ 23 ]
The one exception to passive humoral immunity is the passive transfer of cell-mediated immunity , also called adoptive immunization which involves the transfer of mature circulating lymphocytes. It is rarely used in humans, and requires histocompatible (matched) donors, which are often difficult to find, and carries severe risks of graft-versus-host disease . [ 2 ] This technique has been used in humans to treat certain diseases including some types of cancer and immunodeficiency . However, this specialized form of passive immunity is most often used in a laboratory setting in the field of immunology , to transfer immunity between " congenic ", or deliberately inbred mouse strains which are histocompatible. [ citation needed ]
Passive immunity starts working faster than vaccines do, as the patient's immune system does not need to make its own antibodies: B cells take time to activate and multiply after a vaccine is given. Passive immunity works even if an individual has an immune system disorder that prevents them from making antibodies in response to a vaccine. [ 18 ] In addition to conferring passive immunities, breastfeeding has other lasting beneficial effects on the baby's health, such as decreased risk of allergies and obesity. [ 26 ]
A disadvantage to passive immunity is that producing antibodies in a laboratory is expensive and difficult to do. Producing antibodies against infectious diseases may require thousands of human donors to donate blood, or the blood of immunized animals must be obtained. Patients who are immunized with the antibodies from animals may develop serum sickness due to the proteins from the immune animal and develop serious allergic reactions. [ 6 ] Antibody treatments can be time-consuming and are given through an intravenous injection or IV, while a vaccine shot or jab is less time-consuming and has less risk of complication than an antibody treatment. Passive immunity is effective, but only lasts a short amount of time. [ 18 ]
A speaker enclosure using a passive radiator usually contains an "active loudspeaker " (or main driver), and a passive radiator (also known as a "drone cone"). The active loudspeaker is a normal driver, and the passive radiator is of similar construction, but without a voice coil and magnet assembly . It is not attached to a voice coil or wired to an electrical circuit or power amplifier . Small [ 1 ] [ 2 ] and Hurlburt [ 3 ] have published the results of research into the analysis and design of passive-radiator loudspeaker systems. The passive-radiator principle was identified as being particularly useful in compact systems where vent realization is difficult or impossible, but it can also be applied satisfactorily to larger systems.
In the same way as a ported loudspeaker, a passive radiator system uses the sound pressure otherwise trapped in the enclosure to excite a resonance that makes it easier for the speaker system to create the deepest pitches (e.g., basslines ). The passive radiator resonates at a frequency determined by its mass and the springiness (compliance) of the air in the enclosure. It is tuned to the specific enclosure by varying its mass (e.g., by adding weight to the cone). Internal air pressure produced by movements of the active driver cone moves the passive radiator cone. [ 4 ] This resonance simultaneously reduces the amount that the woofer has to move.
Passive radiators are used instead of a reflex port for several reasons. In small-volume enclosures tuned to low frequencies, the length of the required port becomes very large. [ 5 ] They are also used to reduce or eliminate the objectionable noises of port turbulence and compressive flow caused by high-velocity airflow in small ports. In addition, ports have pipe resonances that can produce undesirable effects on the frequency response. To a first-order approximation, the passive radiator works identically to a port. [ 6 ]
Passive radiators are tuned by mass variations (M mp ), changing the way that they interact with the compliance of the air in the box. The weight of the cone of the passive radiator should be approximately equivalent to the mass of the air that would have filled the port which might have been used for that design. If the passive radiator's acoustic mass equals that of the port, and the passive radiator's compliance is negligible, then the frequency response behaviour of these two types of systems will be virtually identical. [ 6 ]
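As a rough illustration of the mass equivalence described above, the sketch below applies the standard small-signal relations for ports and drone cones. All dimensions are hypothetical, and the radiator's own suspension compliance (the source of the response notch discussed next) is ignored.

```python
# Minimal sketch (textbook small-signal relations, hypothetical dimensions -- not a
# vendor design procedure): estimate the moving mass a passive radiator needs in
# order to mimic the acoustic mass of the port it replaces.
import math

RHO_AIR = 1.18   # kg/m^3, air density at ~25 degC
C_AIR = 345.0    # m/s, speed of sound at ~25 degC

def port_air_mass_kg(port_area_m2: float, effective_length_m: float) -> float:
    """Mass of the air slug that would occupy the reflex port."""
    return RHO_AIR * port_area_m2 * effective_length_m

def equivalent_pr_mass_kg(port_area_m2: float, effective_length_m: float,
                          radiator_area_m2: float) -> float:
    """Moving mass giving the radiator the same acoustic mass as the port.

    Acoustic mass of a port:      M_a = rho * L_eff / S_port
    Acoustic mass of a radiator:  M_a = M_mp / S_pr^2
    Equating the two gives M_mp = rho * L_eff * S_pr^2 / S_port.
    """
    return RHO_AIR * effective_length_m * radiator_area_m2 ** 2 / port_area_m2

def box_tuning_hz(box_volume_m3: float, port_area_m2: float,
                  effective_length_m: float) -> float:
    """Helmholtz tuning frequency of the box/port combination."""
    return (C_AIR / (2.0 * math.pi)) * math.sqrt(
        port_area_m2 / (box_volume_m3 * effective_length_m))

if __name__ == "__main__":
    S_port, L_eff, V_box = 20e-4, 0.25, 0.02   # 20 cm^2 port, 25 cm length, 20 L box (assumed)
    S_pr = 80e-4                               # 80 cm^2 drone cone (assumed)
    print(f"port air mass:   {port_air_mass_kg(S_port, L_eff) * 1000:.1f} g")
    print(f"drone-cone mass: {equivalent_pr_mass_kg(S_port, L_eff, S_pr) * 1000:.1f} g")
    print(f"box tuning:      {box_tuning_hz(V_box, S_port, L_eff):.1f} Hz")
```

Because the drone cone is usually larger than the port it replaces, its required moving mass is correspondingly larger, which is why weights are often added to the cone to reach the target tuning.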
Although the frequency response of a passive radiator will be similar to that of a ported cabinet, the system low-frequency roll-off will be slightly steeper (5th-order rather than 4th-order), due to a notch (dip) in the frequency response caused by the V ap (compliance or stiffness) of the passive radiator. This notch occurs at the passive radiator's free-air resonant frequency and causes slightly poorer transient response. Despite this, perhaps due to the lack of vent turbulence and vent pipe resonances, some listeners prefer the sound of passive radiators to reflex ports. Passive radiator speakers are only slightly more complex to design and are generally more expensive as compared to standard bass reflex enclosures.
Passive radiators are used in Bluetooth speakers, home stereo speakers, subwoofer cabinets and car audio speaker systems, particularly in cases where there is not enough space for a port or vent system. While most studio monitor speakers are either ported bass reflex designs, or closed-back without a vent or passive radiator, Mackie 's HR824 and HR624 monitor speakers have a passive radiator installed on the rear of the cabinet. Focal also sells a studio monitor with a passive radiator called the SM9. Respective examples of a smart speaker and a portable Bluetooth Speaker utilizing passive radiators are Apple 's HomePod mini and Ultimate Ears ' UE Boom . | https://en.wikipedia.org/wiki/Passive_radiator_(speaker) |
Passive survivability refers to a building's ability to maintain critical life-support conditions in the event of extended loss of power, heating fuel, or water. [ 1 ] This idea proposes that designers should incorporate ways for a building to continue sheltering inhabitants for an extended period of time during and after a disaster situation, whether it be a storm that causes a power outage, a drought which limits water supply, or any other possible event.
The term was coined by BuildingGreen President and EBN Executive editor Alex Wilson in 2005 after the wake of Hurricane Katrina . [ 2 ] Passive survivability is suggested to become a standard in the design criteria for houses, apartment buildings, and especially buildings used as emergency shelters. While many of the strategies considered to achieve the goals of passive survivability are not new concepts and have been widely used in green building over the decades, the distinction comes from the motivation for moving towards resilient and safe buildings. [ 1 ]
The increase in duration, frequency, and intensity of extreme weather events due to climate change exacerbates the challenges that passive survivability tries to address. [ 1 ] Climates that did not previously need cooling are now seeing warmer temperatures and a need for air conditioning. Sea level rise and storm surge increases the risk of flooding in coastal locations, while precipitation-based flooding is an issue in low-lying areas. In order for buildings to provide livable conditions at all times, potential threats must be realized.
In much of the developed world, there is a heavy reliance on a grid for power and gas. These grids are the main source of energy for many societies, and while they generally do not get interrupted, they are constantly prone to events that may cause disruption, such as natural disasters . In California, there have even been intentional power outages as a preventative measure in response to wildfires caused by power lines. [ 3 ] When a power outage occurs, most mechanical heating and cooling can no longer operate. The aim of passive survivability is to be prepared for when such an event may occur, and maintain safe indoor temperatures. While back-up generators can provide some power during an outage, it is often not enough for heating and cooling needs or adequate lighting. [ 1 ]
Heat is the leading cause of weather-related death in the US. [ 4 ] Heat waves coinciding with power outages put many lives at risk due to the inability of a building to keep temperatures down. Even without a power outage, lack of access to air conditioning or lack of funds to pay for electricity also highlights the need for passive ways to maintain a livable thermal environment. [ 4 ] One issue passive survivability addresses is how to maintain the thermal resistance of a building's skin so that interior spaces do not become dangerously hot when standard temperature-regulating systems are unavailable.
In the winter months, power outages or lack of a fuel source for heat pose a threat when there are cold fronts . [ 1 ] Leaky construction and poor insulation result in rapid heat loss, causing indoor temperatures to fall.
During a drought , the limited water supply means a community must get by using less, which may mean mandatory restrictions on water use. Extended dry spells can instigate wildfires , which add a heightened level of devastation. [ 5 ] Drying clay soil can cause critical water mains to burst and damage homes and infrastructure. [ 6 ] Droughts can also cause power-outages in areas where thermo-electric power plants are the main source of electricity. [ 7 ] Water-efficient appliances and landscaping is crucial in water-scarce locations.
Natural disasters such as hurricanes , earthquakes , tornadoes , and other storm events can result in destruction of infrastructure that provides key electricity, water, and energy sources. [ 8 ] Flooding after extreme precipitation is a major threat to buildings and utilities. The resulting electricity or water shortages can pose more of a threat than the event itself, often lasting longer than the initial disaster. [ 9 ]
Terrorist threats and cyberterrorism can also cause an interruption in power supply. Attacks on central plants or major distribution segments, or hacking of a utility grid’s control system are possible threats that could cut off electricity, water, or fuel. [ 8 ]
There are many passive strategies that require no electricity but instead can provide heating, cooling, and lighting for a building through proper design. In envelope-dominated buildings, the climate and surroundings have a greater effect on the interior of the structure due to a high surface area to volume ratio and minimal internal heat sources. [ 10 ] Internally dominated buildings, such as the typical office building, are more affected by internal heat sources like equipment and people, however the building envelope still plays an important role, especially during a power outage.
While the distinction between the two types of buildings can sometimes be unclear, all buildings have a balance point temperature that is a result of building design and function. The balance point temperature is the outdoor temperature below which a building requires heating. [ 10 ] An internally dominated structure will have a lower balance point temperature because of greater internal heat gains, which means a longer overheated period and a shorter under-heated period. Achieving a livable thermal environment during a power outage is dependent on the balance point temperature, as well as the interaction with the surrounding environment. A key aspect of all design for passive survivability is climate-responsive design. Passive strategies should be chosen based on climate and local conditions, in addition to building function.
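The balance point concept can be expressed as a simple steady-state heat balance: T_balance = T_setpoint - Q_internal / UA, where UA is the envelope's overall conductance. The sketch below is a minimal illustration with assumed, hypothetical values for the conductance and internal gains.

```python
# Minimal sketch (simplified steady-state model with assumed values): estimate a
# building's balance point temperature from its envelope conductance and its
# internal heat gains. Below this outdoor temperature the building needs heating.

def balance_point_c(indoor_setpoint_c: float,
                    internal_gains_w: float,
                    envelope_ua_w_per_k: float) -> float:
    """T_balance = T_setpoint - Q_internal / UA (steady-state heat balance)."""
    return indoor_setpoint_c - internal_gains_w / envelope_ua_w_per_k

if __name__ == "__main__":
    # Hypothetical envelope-dominated house: UA = 200 W/K, 800 W of internal gains.
    print(balance_point_c(20.0, 800.0, 200.0))    # 16.0 degC
    # Hypothetical internally dominated office: UA = 200 W/K, 3000 W of gains.
    print(balance_point_c(20.0, 3000.0, 200.0))   # 5.0 degC -> lower balance point
```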
When a building has leaky construction or poor insulation , desired heat is lost in the winter and conditioned air is lost in the summer. [ 10 ] This loss is normally made up by supplying more mechanical heating or cooling. Since that option is unavailable during a power outage, the building should be able to maintain internal temperatures on its own for longer periods of time. To avoid heat loss by infiltration , the thermal envelope should be constructed with minimal breaks and joints, and cracks around windows and doors should be sealed. The air tightness of a building can be tested using a blower-door test.
Heat is also lost by transmission through the many surfaces in a room, including walls, windows, floors, ceilings, and doors. The area and thermal resistance of the surface, as well as the temperature difference between indoors and outdoors, determines the rate of heat loss. [ 10 ] Continuous insulation with high R-values reduces heat loss by transmission in walls and ceilings. Double and triple-pane windows with special coatings reduce loss through windows. [ 9 ] The practice of superinsulation greatly reduces heat loss through high levels of thermal resistance and air tightness.
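Transmission loss through a single assembly follows Q = A * dT / R (equivalently U * A * dT). The sketch below compares two hypothetical wall assemblies using assumed SI R-values (m²·K/W); the figures are illustrative only.

```python
# Minimal sketch (steady-state conduction only, hypothetical assembly values):
# compare transmission heat loss through a code-minimum wall and a superinsulated
# wall, using Q = A * dT / R with R in SI units (m^2*K/W).

def transmission_loss_w(area_m2: float, r_si: float, delta_t_c: float) -> float:
    """Steady-state conductive heat loss through one surface."""
    return area_m2 * delta_t_c / r_si

if __name__ == "__main__":
    area, dT = 100.0, 30.0                      # 100 m^2 of wall, 20 degC in / -10 degC out
    print(transmission_loss_w(area, 2.5, dT))   # R-2.5 (SI) wall -> 1200 W
    print(transmission_loss_w(area, 7.0, dT))   # R-7 (SI) superinsulated wall -> ~429 W
```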
The ability to passively heat a building is beneficial during the colder winter months to help keep temperature levels up. Passive solar systems collect and distribute energy from the sun without the use of mechanical equipment such as fans or pumps. Passive solar heating consists of equator-facing glazing (south-facing in the northern hemisphere) to collect solar energy and thermal mass to store the heat. [ 10 ] A direct-gain system allows short-wave radiation from the sun to enter a room through the window, where the floor and wall surfaces then act as thermal mass to absorb the heat, and the long-wave radiation is trapped inside due to the greenhouse effect . [ 10 ] Proper glazing to thermal mass ratios should be used to prevent overheating and provide adequate heating. [ 11 ] A Trombe wall or indirect gain system places the thermal mass right inside the glazing to collect heat during the day for night-time use due to time-lag of mass. [ 10 ] This method is useful if daylighting is not required, or can be used in combination with direct-gain. A third technique is a sunspace or isolated gain system, which collects solar energy in a separate space attached to the building, and which can double as a living area for most of the year. [ 10 ]
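As a rough way to see whether a direct-gain design is balanced, the daily solar gain through the equator-facing glazing can be compared with the building's daily transmission loss. The sketch below uses entirely hypothetical values for glazing area, solar heat gain coefficient (SHGC), insolation, and envelope conductance; a real design would use local climate data and account for thermal mass and overheating.

```python
# Minimal sketch (hypothetical values): compare daily passive solar gain through
# south-facing glazing with daily envelope heat loss on a cold, sunny day.

def daily_solar_gain_kwh(glazing_area_m2: float, shgc: float,
                         insolation_kwh_per_m2_day: float) -> float:
    """Solar energy admitted through the glazing in one day."""
    return glazing_area_m2 * shgc * insolation_kwh_per_m2_day

def daily_envelope_loss_kwh(ua_w_per_k: float, delta_t_c: float) -> float:
    """Transmission heat loss over 24 hours for a constant indoor-outdoor dT."""
    return ua_w_per_k * delta_t_c * 24.0 / 1000.0

if __name__ == "__main__":
    gain = daily_solar_gain_kwh(12.0, 0.6, 3.5)   # 12 m^2 glazing, SHGC 0.6, 3.5 kWh/m^2/day
    loss = daily_envelope_loss_kwh(150.0, 20.0)   # UA = 150 W/K, 20 degC temperature difference
    print(f"solar gain: {gain:.1f} kWh/day, envelope loss: {loss:.1f} kWh/day")
    # ~25 kWh of gain versus ~72 kWh of loss: this hypothetical envelope would still
    # need tighter construction, more insulation, or more internal gains to coast
    # through an outage without supplemental heat.
```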
Heat avoidance strategies can be used to reduce cooling needs during the overheated periods of the year. This is achieved largely though shading devices and building orientation. In the northern hemisphere, windows should primarily be placed on southern facades which receive the most sun during the winter, while windows on east and west facades should be avoided due to difficulty to shade and high solar radiation during the summer. [ 10 ] Fixed overhangs can be designed that block the sun during the overheated periods and allow the sun during the under-heated periods. Movable shading devices are most appropriate due to their ability to respond to the environment and building needs. [ 10 ] Using light colors on roofs and walls is another effective strategy to reduce heat gain by reflecting the sun.
Natural ventilation can be used to increase thermal comfort during warmer periods. There are two main types of natural ventilation: comfort ventilation and night-flush cooling. Comfort ventilation brings in outside air to move over skin and increase the skin’s evaporative cooling, creating a more comfortable thermal environment. [ 10 ] The temperature does not necessarily decrease unless the outdoor temperature is lower than the indoor temperature, however the air movement increases comfort. This technique is especially useful in humid climates. When the wind is not blowing, a solar chimney can increase ventilation flow by using the sun to increase buoyancy of air. [ 12 ]
Night-flush cooling utilizes the cool nighttime air to flush the warm air out of the building and lower the indoor temperature. The cooled structure then acts as a heat sink during the day, when bringing the warm outdoor air in is avoided. Night-flush cooling is most effective in locations that have large diurnal temperature ranges , such as in hot and dry climates. [ 10 ] With both techniques, providing operable windows alone does not result in adequate natural ventilation; the building must be designed for proper airflow.
When the power goes out, rooms at the center of a building typically receive little to no light. Designing a building to take advantage of natural daylight instead of relying on electric lighting will make it more resilient to power outages and other events. Daylighting and passive solar gain often go hand in hand, but in the summer there is a desire for “cool” daylight. Daylighting design should therefore provide adequate lighting without adding undesired heat. Direct sunlight and reflected light from the sky have different levels of radiation. [ 10 ] The daylighting design should reflect the needs of the building in both its climate and function, and different methods can achieve that. Southern and northern windows are generally best for daylighting, and clerestories or monitors on the roof can bring daylight into the center of a building. [ 10 ] Placing windows higher up on a wall will bring the light further into the room, and other methods like light shelves can bring light deeper into a building by reflecting light off the ceiling. [ 10 ]
The overarching goal of passive survivability is to reduce discomfort or suffering in the event that a key supply to a building is cut off. There are several different solutions to any one design problem. While many of the solutions that are presented by advocates of passive survivability are ones that have been universally accepted by passive design and other standard sustainability practices, it is important to examine these measures and apply the appropriate strategies to both new and existing buildings in order to minimize the risk of discomfort or death. [ 13 ]
Buildings should be designed to maintain survivable thermal conditions without air conditioning or supplemental heat. Providing back-up generators and adequate fuel to maintain the critical functions of a building during outages is a conventional solution to power-supply interruptions. However, unless they are very large, generators support only basic needs for a short amount of time and may not power systems such as air conditioning, lighting, or even heating or ventilation during extended outages. Back-up generators are also expensive both to buy and maintain. Storing significant quantities of fuel on-site to power generators during extended outages has inherent environmental and safety risks, particularly during storms.
Renewable energy systems can provide power during an extreme event. For example, photovoltaic (or solar electric) power systems, when coupled with on-site battery storage can provide electricity when the grid loses power. Other fuel sources like wood can provide heat if buildings are equipped with wood-burning stoves or fireplaces.
Emergency water supply systems such as rooftop rainwater harvesting systems can provide water for toilet flushing, bathing, and other building needs in the event of water supply interruptions. Rain barrels or larger cisterns store runoff water, which can often be delivered by gravity feed to where it is needed. Installing composting toilets and waterless urinals ensures those facilities can continue to function regardless of the circumstances, while reducing water consumption on a daily basis. Having backup sources of potable water on-site is also a necessity in the case of water interruption. [ 5 ]
Leadership in Energy and Environmental Design (LEED) is a widely used green building certification in the United States. As of LEED version 4, there is a pilot credit called “Passive Survivability and Backup Power During Disruptions” under LEED BD+C: New Construction. [ 14 ] The credit is worth up to two points, with one point awarded for providing for passive survivability and thermal safety, and one point awarded for providing backup power for critical loads. For the passive survivability point, the building must maintain thermally safe conditions during a four-day power outage during both peak summer and peak winter conditions. [ 14 ] LEED lists three paths to compliance for thermal safety, two of which consist of thermal modelling, and the remaining path being Passive House certification.
While passive survivability is not mentioned by name in the two major passive house standards, Passive House Institute and Passive House Institute US (PHIUS), the passive strategies that make these buildings so energy efficient are the same strategies outlined for passive survivability. Buildings that achieve passive house certification already meet some of the main criteria for passive survivability, including airtight construction and superinsulation . [ 15 ] Many buildings will also have on-site photovoltaics to offset energy consumption. Because these buildings rely very little on external energy, they will be more resilient during power outages and extreme weather. [ 15 ]
RELi is a building and community rating system completely based on resilient design. It has been adopted by the US Green Building Council, the same body that developed LEED. [ 16 ] The Hazard Adaptation and Mitigation category has several credits related to passive survivability. One required credit is “Fundamental Emergency Operations: Thermal Safety During Emergencies” which requires indoor temperatures to be at or below outdoor temperatures in the summer, and above 50 °F in the winter for up to four days. [ 17 ] Another way to comply is to provide a thermal safe zone with adequate space for all building occupants. There is an optional poly-credit, “Advanced Emergency Operations: Back-Up Power, Operations, Thermal Safety & Operating Water,” that incorporates other passive survivability measures such as water storage. [ 17 ] Another poly-credit, “Passive Thermal Safety, Thermal Comfort, & Lighting Design Strategies,” outlines more passive strategies including passive cooling , passive heating , and daylighting . [ 17 ] | https://en.wikipedia.org/wiki/Passive_survivability |
Passive transport is a type of membrane transport that does not require energy to move substances across cell membranes . [ 1 ] [ 2 ] Instead of using cellular energy , like active transport , [ 3 ] passive transport relies on the second law of thermodynamics to drive the movement of substances across cell membranes. [ 1 ] [ 2 ] [ 4 ] Fundamentally, substances follow Fick's first law , and move from an area of high concentration to an area of low concentration because this movement increases the entropy of the overall system . [ 4 ] [ 5 ] The rate of passive transport depends on the permeability of the cell membrane, which, in turn, depends on the organization and characteristics of the membrane lipids and proteins . The four main kinds of passive transport are simple diffusion , facilitated diffusion , filtration , and osmosis .
Passive transport follows Fick's first law .
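In one spatial dimension, Fick's first law takes the standard form J = − D ∂ φ / ∂ x {\displaystyle J=-D{\frac {\partial \varphi }{\partial x}}} , where J is the diffusion flux, D is the diffusion coefficient, and φ is the concentration; the negative sign indicates that the net flux runs from regions of high concentration toward regions of low concentration, consistent with the description of passive transport above.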
Diffusion is the net movement of material from an area of high concentration to an area with lower concentration. The difference of concentration between the two areas is often termed the concentration gradient , and diffusion will continue until this gradient has been eliminated. Since diffusion moves materials from an area of higher concentration to an area of lower concentration, it is described as moving solutes "down the concentration gradient" (compared with active transport , which often moves material from an area of low concentration to an area of higher concentration, and is therefore referred to as moving the material "against the concentration gradient").
However, in many cases (e.g. passive drug transport) the driving force of passive transport cannot be reduced to the concentration gradient. If there are different solutions on the two sides of the membrane with different equilibrium solubilities of the drug, the difference in the degree of saturation is the driving force of passive membrane transport. [ 6 ] This is also true for supersaturated solutions, which are increasingly important owing to the growing use of amorphous solid dispersions for enhancing drug bioavailability.
Simple diffusion and osmosis are in some ways similar. Simple diffusion is the passive movement of solute from a high concentration to a lower concentration until the concentration of the solute is uniform throughout and reaches equilibrium. Osmosis is much like simple diffusion but it specifically describes the movement of water (not the solute) across a selectively permeable membrane until there is an equal concentration of water and solute on both sides of the membrane. Simple diffusion and osmosis are both forms of passive transport and require none of the cell's ATP energy .
For passive diffusion, the law of diffusion states that the mean squared displacement is ⟨ r 2 ⟩ = 2 d D t {\displaystyle \langle r^{2}\rangle =2dDt} , with d being the number of dimensions and D the diffusion coefficient . So to diffuse a distance of about x {\displaystyle x} takes time ∼ x 2 / 2 d D {\displaystyle \sim x^{2}/2dD} , and the "average speed" is ∼ 2 d D / x {\displaystyle \sim 2dD/x} . This means that in the same physical environment, diffusion is fast when the distance is small, but slow when the distance is large.
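As a rough illustration of this scaling, the following sketch compares diffusion times over a bacterium-sized distance and over a centimetre. The diffusion coefficient used (D ≈ 10⁻⁹ m²/s, representative of a small molecule in water) and the distances are assumed values chosen purely for illustration.

```python
# Rough illustration of the diffusion time scale t ~ x^2 / (2 d D),
# using an assumed representative diffusion coefficient for a small
# molecule in water. All numerical values are illustrative only.
D = 1e-9   # diffusion coefficient, m^2/s (assumed)
d = 3      # number of spatial dimensions

def diffusion_time(x):
    """Characteristic time to diffuse a distance x (metres)."""
    return x**2 / (2 * d * D)

for label, x in [("1 micrometre (bacterial scale)", 1e-6),
                 ("10 micrometres (eukaryotic cell)", 1e-5),
                 ("1 centimetre", 1e-2)]:
    print(f"{label}: ~{diffusion_time(x):.3g} s")
# Diffusion is effectively instantaneous over micrometres but takes
# hours over centimetres, which is why large cells need active transport.
```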
This can be seen in material transport within the cell. Prokaryotes typically have small bodies, allowing diffusion to suffice for material transport within the cell. Larger cells like eukaryotes would either have very low metabolic rate to accommodate the slowness of diffusion, or invest in complex cellular machinery to allow active transport within the cell, such as kinesin walking along microtubules .
A biological example of diffusion is the gas exchange that occurs during respiration within the human body. [ 7 ] Upon inhalation, oxygen is brought into the lungs and quickly diffuses across the membrane of alveoli and enters the circulatory system by diffusing across the membrane of the pulmonary capillaries. [ 8 ] Simultaneously, carbon dioxide moves in the opposite direction, diffusing across the membrane of the capillaries and entering into the alveoli, where it can be exhaled. The process of moving oxygen into the cells, and carbon dioxide out, occurs because of the concentration gradient of these substances, each moving away from their respective areas of higher concentration toward areas of lower concentration. [ 7 ] [ 8 ] Cellular respiration is the cause of the low concentration of oxygen and high concentration of carbon dioxide within the blood which creates the concentration gradient. Because the gasses are small and uncharged, they are able to pass directly through the cell membrane without any special membrane proteins. [ 9 ] No energy is required because the movement of the gasses follows Fick's first law and the second law of thermodynamics .
Facilitated diffusion, also called carrier-mediated osmosis, is the movement of molecules across the cell membrane via special transport proteins embedded in the plasma membrane, which selectively admit or exclude particular ions and molecules. Through facilitated diffusion, energy is not required in order for molecules to pass through the cell membrane. [ 1 ] Active transport of protons by H + ATPases [ 10 ] alters the membrane potential, allowing for facilitated passive transport of particular ions such as potassium [ 11 ] down their charge gradient through high-affinity transporters and channels.
An example of facilitated diffusion is when glucose is absorbed into cells through Glucose transporter 2 (GLUT2) in the human body. [ 12 ] [ 13 ] There are many other types of glucose transport proteins , some that do require energy , and are therefore not examples of passive transport. [ 13 ] Since glucose is a large molecule, it requires a specific channel to facilitate its entry across plasma membranes and into cells. [ 13 ] When diffusing into a cell through GLUT2, the driving force that moves glucose into the cell is the concentration gradient. [ 12 ] The main difference between simple diffusion and facilitated diffusion is that facilitated diffusion requires a transport protein to 'facilitate' or assist the substance through the membrane. [ 14 ] After a meal, the cell is signaled to move GLUT2 into membranes of the cells lining the intestines called enterocytes . [ 12 ] With GLUT2 in place after a meal and the relative high concentration of glucose outside of these cells as compared to within them, the concentration gradient drives glucose across the cell membrane through GLUT2. [ 12 ] [ 13 ]
Filtration is movement of water and solute molecules across the cell membrane due to hydrostatic pressure generated by the cardiovascular system . Depending on the size of the membrane pores, only solutes of a certain size may pass through it. For example, the membrane pores of the Bowman's capsule in the kidneys are very small, and only albumins , the smallest of the proteins, have any chance of being filtered through. On the other hand, the membrane pores of liver cells are extremely large, allowing a variety of solutes to pass through and be metabolized.
Osmosis is the net movement of water molecules across a selectively permeable membrane from an area of high water potential to an area of low water potential. A cell whose water potential is more negative (lower) than that of its surroundings will draw in water, but this also depends on other factors such as solute potential (the contribution of dissolved solutes) and pressure potential (physical pressure, e.g. from the cell wall). There are three types of osmotic solutions: isotonic, hypotonic, and hypertonic. An isotonic solution is one in which the extracellular solute concentration is balanced with the concentration inside the cell. In an isotonic solution, water molecules still move between the solutions, but the rates are the same in both directions, so there is no net water movement between the inside and the outside of the cell. A hypotonic solution is one in which the solute concentration outside the cell is lower than the concentration inside the cell. In hypotonic solutions, water moves into the cell, down its concentration gradient (from higher to lower water concentration). That can cause the cell to swell; cells that lack a cell wall, such as animal cells, can burst in this solution. A hypertonic solution is one in which the solute concentration outside is higher than the concentration inside the cell. In a hypertonic solution, water moves out of the cell, causing it to shrink.
Passive ventilation is the process of supplying air to and removing air from an indoor space without using mechanical systems . It refers to the flow of external air to an indoor space as a result of pressure differences arising from natural forces.
There are two types of natural ventilation occurring in buildings: wind driven ventilation and buoyancy-driven ventilation . Wind driven ventilation arises from the different pressures created by wind around a building or structure, and openings being formed on the perimeter which then permit flow through the building. Buoyancy-driven ventilation occurs as a result of the directional buoyancy force that results from temperature differences between the interior and exterior. [ 1 ]
Since the internal heat gains which create temperature differences between the interior and exterior are created by natural processes, including the heat from people, and wind effects are variable, naturally ventilated buildings are sometimes called "breathing buildings".
The static pressure of air is the pressure in a free-flowing air stream and is depicted by isobars in weather maps . Differences in static pressure arise from global and microclimate thermal phenomena and create the air flow we call wind . Dynamic pressure is the pressure exerted when the wind comes into contact with an object such as a hill or a building and it is described by the following equation: [ 2 ]
q = 1 2 ρ u 2 {\displaystyle q={\tfrac {1}{2}}\,\rho \,u^{2}}
where (using SI units): q is the dynamic pressure in pascals, ρ is the density of air in kg/m 3 , and u is the wind speed in m/s.
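As a minimal sketch of the magnitudes involved, the dynamic pressure at a few wind speeds can be computed as follows. The air density of 1.2 kg/m³ and the chosen speeds are assumed illustrative values, not figures from the sources cited above.

```python
# Dynamic (velocity) pressure q = 0.5 * rho * u^2 exerted by wind on a facade.
# rho is an assumed standard air density; speeds are illustrative examples.
rho = 1.2  # air density, kg/m^3 (assumed)

def dynamic_pressure(u):
    """Dynamic pressure in pascals for wind speed u in m/s."""
    return 0.5 * rho * u**2

for u in (1, 5, 10, 20):  # wind speeds in m/s
    print(f"u = {u:>2} m/s -> q = {dynamic_pressure(u):6.1f} Pa")
# Even a strong 20 m/s wind produces only a few hundred pascals, small
# compared with mechanical fan pressures but sufficient to drive airflow.
```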
The impact of wind on a building affects the ventilation and infiltration rates through it and the associated heat losses or heat gains. Wind speed increases with height and is lower towards the ground due to frictional drag. In practical terms wind pressure will vary considerably creating complex air flows and turbulence by its interaction with elements of the natural environment (trees, hills) and urban context (buildings, structures). Vernacular and traditional buildings in different climatic regions rely heavily upon natural ventilation for maintaining thermal comfort conditions in the enclosed spaces. [ 3 ]
Design guidelines are offered in building regulations and other related literature and include a variety of recommendations on many specific areas such as:
The following design guidelines are selected from the Whole Building Design Guide , a program of the National Institute of Building Sciences : [ 4 ]
Wind driven ventilation can be classified as cross ventilation and single-sided ventilation. Wind driven ventilation depends on wind behavior, on the interactions with the building envelope and on openings or other air exchange devices such as inlets or windcatchers .
Knowledge of the urban climatology, i.e. the wind around the buildings, is crucial when evaluating the air quality and thermal comfort inside buildings, as air and heat exchange depend on the wind pressure on facades. As observed in the equation above, the air exchange depends on the wind speed at the urban site where the architectural project will be built. CFD ( Computational Fluid Dynamics ) tools and zonal models are usually used to design naturally ventilated buildings. Windcatchers are able to aid wind driven ventilation by directing air in and out of buildings.
Buoyancy-driven ventilation arises due to differences in density between interior and exterior air, which in large part arise from differences in temperature. When there is a temperature difference between two adjoining volumes of air, the warmer air will have lower density and be more buoyant, and will thus rise above the cold air, creating an upward air stream. A familiar example of buoyancy-driven upflow in a building is the draught of a traditional fireplace. Passive stack ventilators are common in most bathrooms and other types of spaces without direct access to the outdoors.
In order for a building to be ventilated adequately via buoyancy driven ventilation, the inside and outside temperatures must be different. When the interior is warmer than the exterior, indoor air rises and escapes the building at higher apertures. If there are lower apertures then colder, denser air from the exterior enters the building through them, thereby creating upflow displacement ventilation. However, if there are no lower apertures present, then both in- and out-flow will occur through the high level opening. This is called mixing ventilation. This latter strategy still results in fresh air reaching to low level, since although the incoming cold air will mix with the interior air, it will always be more dense than the bulk interior air and hence fall to the floor. Buoyancy-driven ventilation increases with greater temperature difference, and increased height between the higher and lower apertures in the case of displacement ventilation. When both high and low level openings are present, the neutral plane in a building occurs at the location between the high and low openings at which the internal pressure will be the same as the external pressure (in the absence of wind). Above the neutral plane, the internal air pressure will be positive and air will flow out of any intermediate level apertures created. Below the neutral plane the internal air pressure will be negative and external air will be drawn into the space through any intermediate level apertures. Buoyancy-driven ventilation has several significant benefits: {See Linden, P Annu Rev Fluid Mech, 1999}
Limitations of buoyancy-driven ventilation:
Natural ventilation in buildings can rely mostly on wind pressure differences in windy conditions, but buoyancy effects can a) augment this type of ventilation and b) ensure air flow rates during still days. Buoyancy-driven ventilation can be implemented in ways that air inflow in the building does not rely solely on wind direction. In this respect, it may provide improved air quality in some types of polluted environments such as cities. For example, air can be drawn through the backside or courtyards of buildings avoiding the direct pollution and noise of the street facade. Wind can augment the buoyancy effect, but can also reduce its effect depending on its speed, direction and the design of air inlets and outlets. Therefore, prevailing winds must be taken into account when designing for stack effect ventilation.
The natural ventilation flow rate for buoyancy-driven natural ventilation with vents at two different heights can be estimated with this equation: [ 5 ]
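A commonly used form of this estimate (the exact notation and variable conventions of the cited reference may differ) is Q = C d A 2 g H ( T i − T o ) / T i {\displaystyle Q=C_{d}\,A\,{\sqrt {2gH(T_{i}-T_{o})/T_{i}}}} , where C d is the discharge coefficient of the openings, A is the effective opening area, H is the height between the lower and upper vents, and T i and T o are the indoor and outdoor absolute temperatures. The short sketch below evaluates it for assumed example values chosen only for illustration.

```python
import math

# Commonly cited stack-ventilation estimate (assumed form; conventions in
# the cited reference may differ). All numerical inputs are assumed examples.
def stack_flow(Cd, A, H, Ti, To, g=9.81):
    """Volumetric flow rate (m^3/s) driven by buoyancy between two vents."""
    return Cd * A * math.sqrt(2 * g * H * (Ti - To) / Ti)

Q = stack_flow(Cd=0.6,    # discharge coefficient of the openings (assumed)
               A=0.25,    # effective opening area, m^2 (assumed)
               H=3.0,     # height between lower and upper vents, m (assumed)
               Ti=297.0,  # indoor temperature, K (24 deg C)
               To=288.0)  # outdoor temperature, K (15 deg C)
print(f"Estimated buoyancy-driven flow: {Q:.3f} m^3/s")  # ~0.2 m^3/s
```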
One way to measure the performance of a naturally ventilated space is to measure the air changes per hour in an interior space. In order for ventilation to be effective, there must be exchange between outdoor air and room air. A common method for measuring ventilation effectiveness is to use a tracer gas . [ 6 ] The first step is to close all windows, doors, and openings in the space. Then a tracer gas is added to the air. The reference, American Society for Testing and Materials (ASTM) Standard E741: Standard Test Method for Determining Air Change in a Single Zone by Means of a Tracer Gas Dilution, describes which tracer gases can be used for this kind of testing and provides information about the chemical properties, health impacts, and ease of detection. [ 7 ] Once the tracer gas has been added, mixing fans can be used to distribute the tracer gas as uniformly as possible throughout the space. To do a decay test, the concentration of the tracer gas is first measured when the concentration of the tracer gas is constant. Windows and doors are then opened and the concentration of the tracer gas in the space is measured at regular time intervals to determine the decay rate of the tracer gas. The airflow can be deduced by looking at the change in concentration of the tracer gas over time. For further details on this test method, refer to ASTM Standard E741. [ 7 ]
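Under the usual well-mixed assumption, the tracer concentration decays exponentially, C(t) = C₀·e^(−N·t), so the air change rate N follows from two concentration readings. A minimal sketch follows; the concentration values are invented purely for illustration and are not taken from ASTM E741.

```python
import math

# Tracer-gas decay method under the well-mixed assumption:
#   C(t) = C0 * exp(-N * t)  =>  N = ln(C0 / Ct) / t
# The concentration readings below are assumed example values.
def air_changes_per_hour(c0, ct, hours):
    """Air change rate (1/h) from initial and later tracer concentrations."""
    return math.log(c0 / ct) / hours

c0 = 50.0   # tracer concentration at start of decay, ppm (assumed)
ct = 20.0   # tracer concentration after one hour, ppm (assumed)
print(f"Air change rate: {air_changes_per_hour(c0, ct, 1.0):.2f} per hour")  # ~0.92
```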
While natural ventilation eliminates electrical energy consumed by fans, overall energy consumption of natural ventilation systems is often higher than that of modern mechanical ventilation systems featuring heat recovery . Typical modern mechanical ventilation systems use as little as 2000 J/m 3 for fan operation, and in cold weather they can recover much more energy than this in the form of heat transferred from waste exhaust air to fresh supply air using recuperators .
Ventilation heat loss can be calculated as:
θ = C p ⋅ ρ ⋅ Δ T ⋅ ( 1 − η ) . {\displaystyle \theta =C_{p}\cdot \rho \cdot \Delta T\cdot (1-\eta ).}
Where: θ is the ventilation heat loss per unit volume of supply air (J/m 3 ), C p is the specific heat capacity of air (≈1000 J/(kg·K)), ρ is the density of air (≈1.2 kg/m 3 ), ΔT is the temperature difference between indoor and outdoor air (K), and η is the heat recovery efficiency (dimensionless).
The temperature differential needed between indoor and outdoor air for mechanical ventilation with heat recovery to outperform natural ventilation in terms of overall energy efficiency can therefore be calculated as:
Δ T = S F P C p ⋅ ρ ⋅ ( 1 − η ) {\displaystyle \Delta T={\frac {SFP}{C_{p}\cdot \rho \cdot (1-\eta )}}}
Where:
SFP is specific fan power in Pa, J/m 3 , or W/(m 3 /s)
Under typical comfort ventilation conditions with a heat recovery efficiency of 80% and a SFP of 2000 J/m 3 we get:
Δ T = 2000 / ( 1000 ∗ 1.2 ∗ ( 1 − 0.8 ) ) = 8.33 [ K ] {\displaystyle \Delta T=2000/(1000*1.2*(1-0.8))=8.33[K]}
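A short sketch reproducing this break-even calculation, together with the per-volume ventilation heat loss it is compared against, makes the trade-off easy to explore for other efficiencies and fan powers. The numerical inputs are the typical values quoted above.

```python
# Break-even indoor/outdoor temperature difference at which mechanical
# ventilation with heat recovery matches natural ventilation energetically.
# Inputs follow the worked example above; treat them as typical values.
cp = 1000.0   # specific heat capacity of air, J/(kg*K)
rho = 1.2     # air density, kg/m^3
eta = 0.8     # heat recovery efficiency
sfp = 2000.0  # specific fan power, J/m^3

def heat_loss_per_m3(delta_t):
    """Ventilation heat loss per cubic metre of supply air, J/m^3."""
    return cp * rho * delta_t * (1 - eta)

break_even_dt = sfp / (cp * rho * (1 - eta))
print(f"Break-even temperature difference: {break_even_dt:.2f} K")      # 8.33 K
print(f"Heat loss at 10 K difference: {heat_loss_per_m3(10):.0f} J/m^3")  # 2400 J/m^3
```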
In climates where the mean absolute difference between inside and outside temperatures exceeds ~10K the energy conservation argument for choosing natural over mechanical ventilation might therefore be questioned. It should however be noted that heating energy might be cheaper and more environmentally friendly than electricity. This is especially the case in areas where district heating is available.
To develop natural ventilation systems with heat recovery two inherent challenges must first be solved:
Research aimed at the development of natural ventilation systems featuring heat recovery has been carried out as early as 1993, when Shultz et al. [ 8 ] proposed and tested a chimney-type design relying on the stack effect while recovering heat using a large counterflow recuperator constructed from corrugated galvanized iron. Both supply and exhaust passed through an unconditioned attic space, with exhaust air being extracted at ceiling height and air being supplied at floor level through a vertical duct.
The device was found to provide sufficient ventilation air flow for a single family home and heat recovery with an efficiency around 40%. The device was however found to be too large and heavy to be practical, and the heat recovery efficiency too low to be competitive with mechanical systems of the time. [ 8 ]
Later attempts have primarily focused on wind as the main driving force due to its higher pressure potential. This, however, introduces the issue of large fluctuations in driving pressure.
With the use of wind towers placed on the roof of ventilated spaces, supply and exhaust can be placed close to each other on opposing sides of the small towers. [ 9 ] These systems often feature finned heat pipes although this limits the theoretical maximum heat recovery efficiency. [ 10 ]
Liquid coupled run around loops have also been tested to achieve indirect thermal connection between exhaust and supply air. While these tests have been somewhat successful, liquid coupling introduces mechanical pumps that consume energy to circulate the working fluid. [ 11 ] [ 12 ]
While some commercially available solutions have been available for years, [ 13 ] [ 14 ] the claimed performance by manufacturers has yet to be verified by independent scientific studies. This might explain the apparent lack of market impact of these commercially available products claiming to deliver natural ventilation and high heat recovery efficiencies.
A radically new approach to natural ventilation with heat recovery is currently being developed at Aarhus University, where heat exchange tubes are integrated into structural concrete slabs between building floors. [ 15 ]
For standards relating to ventilation rates, in the United States refer to ASHRAE Standard 62.1-2010: Ventilation for Acceptable Indoor Air Quality . [ 16 ] These requirements are for "all spaces intended for human occupancy except those within single-family houses, multifamily structures of three stories or fewer above grade, vehicles, and aircraft." [ 16 ] In the revision to the standard in 2010, Section 6.4 was modified to specify that most buildings designed to have systems to naturally condition spaces must also "include a mechanical ventilation system designed to meet the Ventilation Rate or IAQ procedures [in ASHRAE 62.1-2010]. The mechanical system is to be used when windows are closed due to extreme outdoor temperatures, noise, and security concerns". [ 16 ] The standard states that two exceptions in which naturally conditioned buildings do not require mechanical systems are when:
Also, an authority having jurisdiction may allow for the design of conditioning system that does not have a mechanical system but relies only on natural systems. [ 16 ] In reference for how controls of conditioning systems should be designed, the standard states that they must take into consideration measures to "properly coordinate operation of the natural and mechanical ventilation systems." [ 16 ]
Another reference is ASHRAE Standard 62.2-2010: Ventilation and Acceptable Indoor Air Quality in low-rise Residential Buildings. [ 17 ] These requirements are for "single-family houses and multifamily structures of three stories or fewer above grade, including manufactured and modular houses," but is not applicable "to transient housing such as hotels, motels, nursing homes, dormitories, or jails." [ 17 ]
For standards relating to occupant thermal comfort, in the United States refer to ASHRAE Standard 55-2010: Thermal Environmental Conditions for Human Occupancy. [ 18 ] Throughout its revisions, its scope has been consistent with its currently articulated purpose, "to specify the combinations of indoor thermal environmental factors and personal factors that will produce thermal environmental conditions acceptable to a majority of the occupants within the space." [ 18 ] The standard was revised in 2004 after field study results from the ASHRAE research project, RP-884: developing an adaptive model of thermal comfort and preference, indicated that there are differences between naturally and mechanically conditioned spaces with regards to occupant thermal response, change in clothing, availability of control, and shifts in occupant expectations. [ 19 ] The addition to the standard, 5.3: Optional Method For Determining Acceptable Thermal Conditions in Naturally Ventilated Spaces, uses an adaptive thermal comfort approach for naturally conditioned buildings by specifying acceptable operative temperature ranges for naturally conditioned spaces. [ 18 ] As a result, the design of natural ventilation systems became more feasible, which was acknowledged by ASHRAE as a way to further sustainable, energy efficient, and occupant-friendly design. [ 18 ]
University-based research centers that currently conduct natural ventilation research:
Natural Ventilation Guidelines: | https://en.wikipedia.org/wiki/Passive_ventilation |
The Passivhaus-Institut (PHI) is responsible for promoting and maintaining the Passivhaus building program. [ 1 ] [ 2 ] [ 3 ] The "Passivhaus Institute" was founded in 1996, and is based and active in Darmstadt, Germany .
The English spelling was used for the Passive House Institute US (PHIUS) when it formed in 2007 [ 4 ] originally under the umbrella of the Passivhaus Institute. The two separated in 2012.
Though PHI and PHIUS sustainable design standards are different, they both share common goals for drastic energy conservation and carbon reduction through sustainable architecture design techniques and specifications to create low-energy houses and other structures with low energy building practices for the public benefit worldwide.
| https://en.wikipedia.org/wiki/Passivhaus-Institut
Passivity is a property of engineering systems, most commonly encountered in analog electronics and control systems . Typically, analog designers use passivity to refer to incrementally passive components and systems, which are incapable of power gain . In contrast, control systems engineers will use passivity to refer to thermodynamically passive ones, which consume, but do not produce, energy. As such, without context or a qualifier, the term passive is ambiguous.
An electronic circuit consisting entirely of passive components is called a passive circuit , and has the same properties as a passive component.
If a device is not passive, then it is an active device .
In control systems and circuit network theory, a passive component or circuit is one that consumes energy, but does not produce energy. Under this methodology, voltage and current sources are considered active, while resistors , capacitors , inductors , transistors , tunnel diodes , metamaterials and other dissipative and energy-neutral components are considered passive. Circuit designers will sometimes refer to this class of components as dissipative, or thermodynamically passive.
While many books give definitions for passivity, many of these contain subtle errors in how initial conditions are treated and, occasionally, the definitions do not generalize to all types of nonlinear time-varying systems with memory. Below is a correct, formal definition, taken from Wyatt et al., [ 1 ] which also explains the problems with many other definitions. Given an n - port R with a state representation S , and initial state x , define available energy E A as:
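In the standard dissipativity-theory formulation (following Willems and Wyatt et al.), the available energy is written as E A ( x ) = sup x → , T ≥ 0 ∫ 0 T − ⟨ v ( t ) , i ( t ) ⟩ d t {\displaystyle E_{A}(x)=\sup _{x\rightarrow ,\,T\geq 0}\int _{0}^{T}-\langle v(t),i(t)\rangle \,dt} , i.e. the greatest amount of energy that can be extracted from the n-port starting from state x; this expression is given here in the standard notation, and the exact sign and notation conventions of the cited reference may differ.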
where the notation sup x → T ≥0 indicates that the supremum is taken over all T ≥ 0 and all admissible pairs { v (·), i (·)} with the fixed initial state x (e.g., all voltage–current trajectories for a given initial condition of the system). A system is considered passive if E A is finite for all initial states x . Otherwise, the system is considered active. Roughly speaking, the inner product ⟨ v ( t ) , i ( t ) ⟩ {\displaystyle \langle v(t),i(t)\rangle } is the instantaneous power (e.g., the product of voltage and current), and E A is the upper bound on the integral of the instantaneous power (i.e., energy). This upper bound (taken over all T ≥ 0) is the available energy in the system for the particular initial condition x . If, for all possible initial states of the system, the energy available is finite, then the system is called passive. If the available energy is finite, it is known to be non-negative, since any trajectory with voltage v ( t ) = 0 {\displaystyle v(t)=0} gives an integral equal to zero, and the available energy is the supremum over all possible trajectories. Moreover, by definition, for any trajectory { v (·), i (·)}, the following inequality holds:
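In its standard storage-function form, the inequality reads E A ( x ( 0 ) ) + ∫ 0 T ⟨ v ( t ) , i ( t ) ⟩ d t ≥ E A ( x ( T ) ) ≥ 0 {\displaystyle E_{A}(x(0))+\int _{0}^{T}\langle v(t),i(t)\rangle \,dt\geq E_{A}(x(T))\geq 0} , that is, the available energy at the end of a trajectory can exceed the available energy at the start by at most the energy supplied to the port along the way; this is the usual dissipativity-theory statement and is given here as a standard form rather than a quotation of the cited reference.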
The existence of a non-negative function that satisfies this inequality, known as a "storage function", is equivalent to passivity. [ 2 ] For a given system with a known model, it is often easier to construct a storage function satisfying the inequality than to compute the available energy directly, as taking the supremum over a collection of trajectories might require the use of calculus of variations.
In circuit design , informally, passive components refer to ones that are not capable of power gain ; this means they cannot amplify signals. Under this definition, passive components include capacitors , inductors , resistors , diodes , transformers , voltage sources, and current sources. [ 3 ] They exclude devices like transistors , vacuum tubes , relays , tunnel diodes, and glow tubes .
To give other terminology, systems for which the small-signal model is not passive are sometimes called locally active (e.g. transistors and tunnel diodes). Systems that can generate power about a time-variant unperturbed state are often called parametrically active (e.g. certain types of nonlinear capacitors). [ 4 ]
Formally, for a memoryless two-terminal element, this means that the current–voltage characteristic is monotonically increasing . For this reason, control systems and circuit network theorists refer to these devices as locally passive, incrementally passive, increasing, monotone increasing, or monotonic. It is not clear how this definition would be formalized to multiport devices with memory – as a practical matter, circuit designers use this term informally, so it may not be necessary to formalize it. [ nb 1 ] [ 5 ]
This term is used colloquially in a number of other contexts:
Passivity, in most cases, can be used to demonstrate that passive circuits will be stable under specific criteria. This only works if only one of the above definitions of passivity is used – if components from the two are mixed, the systems may be unstable under any criteria. In addition, passive circuits will not necessarily be stable under all stability criteria. For instance, a resonant series LC circuit will have unbounded voltage output for a bounded voltage input, but will be stable in the sense of Lyapunov , and given bounded energy input will have bounded energy output.
Passivity is frequently used in control systems to design stable control systems or to show stability in control systems. This is especially important in the design of large, complex control systems (e.g. stability of airplanes). Passivity is also used in some areas of circuit design, especially filter design.
A passive filter is a kind of electronic filter that is made only from passive components – in contrast to an active filter, it does not require an external power source (beyond the signal). Since most filters are linear, in most cases, passive filters are composed of just the four basic linear elements – resistors, capacitors, inductors, and transformers. More complex passive filters may involve nonlinear elements, or more complex linear elements, such as transmission lines.
A passive filter has several advantages over an active filter :
They are commonly used in speaker crossover design (due to the moderately large voltages and currents, and the lack of easy access to a power supply), filters in power distribution networks (due to the large voltages and currents), power supply bypassing (due to low cost, and in some cases, power requirements), as well as a variety of discrete and home brew circuits (for low-cost and simplicity). Passive filters are uncommon in monolithic integrated circuit design, where active devices are inexpensive compared to resistors and capacitors, and inductors are prohibitively expensive. Passive filters are still found, however, in hybrid integrated circuits . Indeed, it may be the desire to incorporate a passive filter that leads the designer to use the hybrid format.
Passive circuit elements may be divided into energic and non-energic kinds. When current passes through it, an energic passive circuit element converts some of the energy supplied to it into heat . It is dissipative . When current passes through it, a non-energic passive circuit element converts none of the energy supplied to it into heat. It is non-dissipative. Resistors are energic. Ideal capacitors, inductors, transformers, and gyrators are non-energic. [ 10 ] | https://en.wikipedia.org/wiki/Passivity_(engineering) |
In signal processing , a passthrough is a logic gate that enables a signal to "pass through" either unaltered or with only minimal alteration. Sometimes the concept of a "passthrough" can also involve daisy chain logic.
| https://en.wikipedia.org/wiki/Passthrough_(electronics)
In theoretical physics , the Pasterski–Strominger–Zhiboedov ( PSZ ) triangle or infrared triangle is a series of relationships between three groups of concepts involving the theory of relativity , quantum field theory and quantum gravity . The triangle highlights connections already known or demonstrated by its authors, Sabrina Gonzalez Pasterski , Andrew Strominger and Alexander Zhiboedov. [ 1 ]
The connections are among weak and lasting effects caused by the passage of gravitational or electromagnetic waves ( memory effects ), quantum field theorems on graviton and photon and geometrical symmetries of spacetime . Because all of this occurs under conditions of low energy, known as infrared in the language of physicists, it is also referred to as the infrared triangle. [ 2 ]
The concepts that are interconnected by the triangle are:
Each group is linked to another by special relationships:
So, for example:
In addition to the first triangular relationship highlighted by the authors, several others may exist and have been hypothesized. [ 10 ] | https://en.wikipedia.org/wiki/Pasterski–Strominger–Zhiboedov_triangle |
The Pasteur effect describes how available oxygen inhibits ethanol fermentation , driving yeast to switch toward aerobic respiration for increased generation of the energy carrier adenosine triphosphate (ATP) . [ 1 ] More generally, in the medical literature, the Pasteur effect refers to how the presence of oxygen causes a decrease in the cellular rate of glycolysis and suppresses lactate accumulation. The effect occurs in animal tissues, as well as in microorganisms belonging to the fungal kingdom . [ 2 ] [ 3 ]
In 1857, microbiologist Louis Pasteur showed that aeration of yeasted broth causes cell growth to increase while the fermentation rate decreases, based on lowered ethanol production. [ 4 ] [ 5 ]
Yeast fungi, being facultative anaerobes , can either produce energy through ethanol fermentation or aerobic respiration. When the O 2 concentration is low, the two pyruvate molecules formed through glycolysis are each fermented into ethanol and carbon dioxide . While only 2 ATP are produced per glucose, this method is utilized under anaerobic conditions because it oxidizes the electron shuttle NADH into NAD + for another round of glycolysis and ethanol fermentation.
If the concentration of oxygen increases, pyruvate is instead converted to acetyl CoA , used in the citric acid cycle , and undergoes oxidative phosphorylation . Per glucose, 10 NADH and 2 FADH 2 are produced in cellular respiration for a significant amount of proton pumping to produce a proton gradient utilized by ATP Synthase . While the exact ATP output ranges based on considerations like the overall electrochemical gradient, aerobic respiration produces far more ATP than the anaerobic process of ethanol fermentation. The increased ATP and citrate from aerobic respiration allosterically inhibit the glycolysis enzyme phosphofructokinase 1 because less pyruvate is needed to produce the same amount of ATP.
Despite this energetic incentive, Rosario Lagunas has shown that yeast continue to partially ferment available glucose into ethanol for many reasons. [ 1 ] First, glucose metabolism is faster through ethanol fermentation because it involves fewer enzymes and limits all reactions to the cytoplasm . Second, ethanol has bactericidal activity by causing damage to the cell membrane and protein denaturing , allowing yeast fungus to outcompete environmental bacteria for resources. [ 6 ] Third, partial fermentation may be a defense mechanism against environmental competitors depleting all oxygen faster than the yeast's regulatory systems could fully switch from aerobic respiration to ethanol fermentation.
The fermentation process used in alcohol production is commonly maintained in low-oxygen conditions, under a blanket of carbon dioxide, while growing yeast for biomass involves aerating the broth for maximized energy production. Despite the bactericidal effects of ethanol, the acidifying effects of fermentation, and the low-oxygen conditions of industrial alcohol production, bacteria that undergo lactic acid fermentation can still contaminate such facilities, because lactic acid's low pKa of 3.86 allows it to avoid decoupling the pH membrane gradient that supports regulated transport. [ 7 ]
The Pasteur point is a level of oxygen (about 0.3% by volume which is less than 1% of Present Atmospheric Level or PAL) above which facultative aerobic microorganisms and facultative anaerobes adapt from fermentation to aerobic respiration . [ 1 ] It is also used to mark the level of oxygen in the early atmosphere of the Earth that is believed to have led to major evolutionary changes. It is named after Louis Pasteur , the French microbiologist who studied anaerobic microbial fermentation, and is related to the Pasteur effect . [ 2 ]
It was once supposed that about 400 million years ago, in the Cambrian period , the level of oxygen in the atmosphere rose from 0.1 to 1 percent of present atmospheric level. Supposedly, this led to many organisms adapting from fermentation to respiration, leading to organisms evolving photosynthesis and what is termed the Cambrian explosion of species. It has also been suggested that this increased oxygen level reduced the influence of ultraviolet radiation . [ 3 ] [ 4 ] [ 5 ] [ 6 ]
It is now well documented that oxygen level reached at least 10% of the present value 2.4 billion years ago (for details see Great Oxygenation Event ).
| https://en.wikipedia.org/wiki/Pasteur_point
In topology , the pasting or gluing lemma , and sometimes the gluing rule , is an important result which says that two continuous functions can be "glued together" to create another continuous function. The lemma is implicit in the use of piecewise functions . For example, in the book Topology and Groupoids , the condition given for the statement below is that A ∖ B ⊆ Int A {\displaystyle A\setminus B\subseteq \operatorname {Int} A} and B ∖ A ⊆ Int B {\displaystyle B\setminus A\subseteq \operatorname {Int} B} .
The pasting lemma is crucial to the construction of the fundamental group and fundamental groupoid of a topological space ; it allows one to concatenate paths to create a new path.
Let X , Y {\displaystyle X,Y} be both closed (or both open ) subsets of a topological space A {\displaystyle A} such that A = X ∪ Y {\displaystyle A=X\cup Y} , and let B {\displaystyle B} also be a topological space. If f : A → B {\displaystyle f:A\to B} is continuous when restricted to both X {\displaystyle X} and Y , {\displaystyle Y,} then f {\displaystyle f} is continuous. [ 1 ]
This result allows one to take two continuous functions defined on closed (or open) subsets of a topological space and create a new one.
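A standard worked example is the absolute value function, assembled from two continuous pieces using the lemma: f ( x ) = { x , x ≥ 0 ; − x , x ≤ 0 } {\displaystyle f(x)={\begin{cases}x,&x\geq 0\\-x,&x\leq 0\end{cases}}} . Here [ 0 , ∞ ) {\displaystyle [0,\infty )} and ( − ∞ , 0 ] {\displaystyle (-\infty ,0]} are closed subsets of R {\displaystyle \mathbb {R} } whose union is all of R {\displaystyle \mathbb {R} } , each piece is continuous on its domain, and the two pieces agree at the overlap point 0, so the glued function f ( x ) = | x | {\displaystyle f(x)=|x|} is continuous.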
Proof : if U {\displaystyle U} is a closed subset of B , {\displaystyle B,} then f − 1 ( U ) ∩ X {\displaystyle f^{-1}(U)\cap X} and f − 1 ( U ) ∩ Y {\displaystyle f^{-1}(U)\cap Y} are both closed, since each is the preimage of U {\displaystyle U} under the restriction of f {\displaystyle f} to X {\displaystyle X} and to Y {\displaystyle Y} respectively, and these restrictions are continuous by assumption; being closed in the closed subsets X {\displaystyle X} and Y {\displaystyle Y} , they are also closed in A {\displaystyle A} . Because A = X ∪ Y {\displaystyle A=X\cup Y} , their union is exactly f − 1 ( U ) {\displaystyle f^{-1}(U)} , which is therefore closed, being a finite union of closed sets. Since the preimage of every closed set is closed, f {\displaystyle f} is continuous.
A similar argument applies when X {\displaystyle X} and Y {\displaystyle Y} are both open. ◻ {\displaystyle \Box }
The infinite analog of this result (where A = X 1 ∪ X 2 ∪ X 3 ∪ ⋯ {\displaystyle A=X_{1}\cup X_{2}\cup X_{3}\cup \cdots } ) is not true for closed X 1 , X 2 , X 3 , … . {\displaystyle X_{1},X_{2},X_{3},\ldots .} For instance, the inclusion map ι : Z → R {\displaystyle \iota :\mathbb {Z} \to \mathbb {R} } from the integers to the real line (with the integers equipped with the cofinite topology ) is continuous when restricted to each singleton { n } {\displaystyle \{n\}} (each of which is closed in the cofinite topology), but the preimage of a bounded open set in the reals under this map contains at most finitely many points, and a nonempty finite set is not open in Z . {\displaystyle \mathbb {Z} .}
It is, however, true if the X 1 , X 2 , X 3 … {\displaystyle X_{1},X_{2},X_{3}\ldots } form a locally finite collection, since a locally finite union of closed sets is closed. Similarly, it is true if the X 1 , X 2 , X 3 , … {\displaystyle X_{1},X_{2},X_{3},\ldots } are instead assumed to be open, since an arbitrary union of open sets is open.
The patch clamp technique is a laboratory technique in electrophysiology used to study ionic currents in individual isolated living cells , tissue sections, or patches of cell membrane. The technique is especially useful in the study of excitable cells such as neurons , cardiomyocytes , muscle fibers , and pancreatic beta cells , and can also be applied to the study of bacterial ion channels in specially prepared giant spheroplasts .
Patch clamping can be performed using the voltage clamp technique. In this case, the voltage across the cell membrane is controlled by the experimenter and the resulting currents are recorded. Alternatively, the current clamp technique can be used. In this case, the current passing across the membrane is controlled by the experimenter and the resulting changes in voltage are recorded, generally in the form of action potentials .
Erwin Neher and Bert Sakmann developed the patch clamp in the late 1970s and early 1980s. This discovery made it possible to record the currents of single ion channel molecules for the first time, which improved understanding of the involvement of channels in fundamental cell processes such as action potentials and nerve activity. Neher and Sakmann received the Nobel Prize in Physiology or Medicine in 1991 for this work. [ 1 ]
During a patch clamp recording, a hollow glass tube known as a micropipette or patch pipette filled with an electrolyte solution and a recording electrode connected to an amplifier is brought into contact with the membrane of an isolated cell . Another electrode is placed in a bath surrounding the cell or tissue as a reference ground electrode. An electrical circuit can be formed between the recording and reference electrode with the cell of interest in between.
The solution filling the patch pipette might match the ionic composition of the bath solution, as in the case of cell-attached recording, or match the cytoplasm , for whole-cell recording. The solution in the bath solution may match the physiological extracellular solution, the cytoplasm, or be entirely non-physiological, depending on the experiment to be performed. The researcher can also change the content of the bath solution (or less commonly the pipette solution) by adding ions or drugs to study the ion channels under different conditions.
Depending on what the researcher is trying to measure, the diameter of the pipette tip used may vary, but it is usually in the micrometer range. [ 2 ] This small size is used to enclose a cell membrane surface area or "patch" that often contains just one or a few ion channel molecules. [ 3 ] This type of electrode is distinct from the "sharp microelectrode" used to puncture cells in traditional intracellular recordings , in that it is sealed onto the surface of the cell membrane, rather than inserted through it.
In some experiments, the micropipette tip is heated in a microforge to produce a smooth surface that assists in forming a high resistance seal with the cell membrane. To obtain this high resistance seal, the micropipette is pressed against a cell membrane and suction is applied. A portion of the cell membrane is suctioned into the pipette, creating an omega -shaped area of membrane which, if formed properly, creates a resistance in the 10–100 gigaohms range, called a "gigaohm seal" or "gigaseal". [ 3 ] The high resistance of this seal makes it possible to isolate electronically the currents measured across the membrane patch with little competing noise , as well as providing some mechanical stability to the recording. [ 4 ]
Many patch clamp amplifiers do not use true voltage clamp circuitry, but instead are differential amplifiers that use the bath electrode to set the zero current (ground) level. This allows a researcher to keep the voltage constant while observing changes in current . To make these recordings, the patch pipette is compared to the ground electrode. Current is then injected into the system to maintain a constant, set voltage. The current that is needed to clamp the voltage is opposite in sign and equal in magnitude to the current through the membrane. [ 3 ]
Alternatively, the cell can be current clamped in whole-cell mode, keeping current constant while observing changes in membrane voltage . [ 5 ]
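To illustrate how the clamp current mirrors the membrane current, the sketch below simulates an idealised voltage-clamp step applied to a purely passive membrane patch (a single RC compartment). The series-resistance feedback model and all parameter values are assumptions chosen for illustration, not a description of any particular amplifier.

```python
# Idealised voltage clamp of a passive membrane (single RC compartment).
# The "amplifier" injects current through a series (access) resistance so
# that the membrane potential is driven toward the command voltage.
# All parameter values are assumptions for illustration only.
C_m   = 20e-12    # membrane capacitance, F (assumed 20 pF)
R_m   = 500e6     # membrane resistance, ohm (assumed 500 Mohm)
R_s   = 10e6      # series/access resistance, ohm (assumed 10 Mohm)
E_rev = -70e-3    # resting/reversal potential, V
V_cmd = -50e-3    # command potential after the step, V

dt, T = 1e-6, 20e-3          # time step and total simulated time, s
V = E_rev                     # membrane starts at rest
for _ in range(int(T / dt)):
    I_clamp = (V_cmd - V) / R_s          # current injected by the clamp
    I_ion   = (V - E_rev) / R_m          # passive ionic (leak) current
    V += dt * (I_clamp - I_ion) / C_m    # membrane charging equation

print(f"Final membrane potential: {V*1e3:.2f} mV (command {V_cmd*1e3:.0f} mV)")
print(f"Steady-state clamp current: {(V_cmd - V)/R_s*1e12:.1f} pA")
# At steady state the clamp current equals the ionic current flowing across
# the membrane, so the recorded clamp current directly reports the membrane
# current, as described in the text above.
```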
Accurate tissue sectioning with a vibratome (such as a compresstome) or a microtome is essential for slice-based patch clamp methods. By supplying thin, uniform tissue slices, these devices facilitate precise electrode placement. To prepare tissues for patch clamp studies in a way that ensures accurate and dependable recordings, researchers can choose between vibratomes for softer tissues and microtomes for tougher structures. [ 6 ] Leica Biosystems and Carl Zeiss AG are notable producers of these devices.
Several variations of the basic technique can be applied, depending on what the researcher wants to study. The inside-out and outside-out techniques are called "excised patch" techniques, because the patch is excised (removed) from the main body of the cell. Cell-attached and both excised patch techniques are used to study the behavior of individual ion channels in the section of membrane attached to the electrode.
Whole-cell patch and perforated patch allow the researcher to study the electrical behavior of the entire cell, instead of single channel currents. The whole-cell patch, which enables low-resistance electrical access to the inside of a cell, has now largely replaced high-resistance microelectrode recording techniques to record currents across the entire cell membrane.
For this method, the pipette is sealed onto the cell membrane to obtain a gigaseal (a seal with electrical resistance on the order of a gigaohm), while ensuring that the cell membrane remains intact. This allows the recording of currents through single, or a few, ion channels contained in the patch of membrane captured by the pipette. By only attaching to the exterior of the cell membrane, there is very little disturbance of the cell structure. [ 3 ] Also, by not disrupting the interior of the cell, any intracellular mechanisms normally influencing the channel will still be able to function as they would physiologically. [ 7 ] Using this method it is also relatively easy to obtain the right configuration, and once obtained it is fairly stable. [ 8 ]
For ligand-gated ion channels or channels that are modulated by metabotropic receptors , the neurotransmitter or drug being studied is usually included in the pipette solution, where it can interact with what used to be the external surface of the membrane. The resulting channel activity can be attributed to the drug being used, although it is usually not possible to then change the drug concentration inside the pipette. The technique is thus limited to one point in a dose response curve per patch. Therefore, the dose response is accomplished using several cells and patches. However, voltage-gated ion channels can be clamped successively at different membrane potentials in a single patch. This results in channel activation as a function of voltage, and a complete I-V (current-voltage) curve can be established in only one patch. Another potential drawback of this technique is that, just as the intracellular pathways of the cell are not disturbed, they cannot be directly modified either. [ 8 ]
In the inside-out method, a patch of the membrane is attached to the patch pipette, detached from the rest of the cell, and the cytosolic surface of the membrane is exposed to the external media, or bath. [ 9 ] One advantage of this method is that the experimenter has access to the intracellular surface of the membrane via the bath and can change the chemical composition of what the inside surface of the membrane is exposed to. This is useful when an experimenter wishes to manipulate the environment at the intracellular surface of single ion channels. For example, channels that are activated by intracellular ligands can then be studied through a range of ligand concentrations.
To achieve the inside-out configuration, the pipette is attached to the cell membrane as in the cell-attached mode, forming a gigaseal, and is then retracted to break off a patch of membrane from the rest of the cell. Pulling off a membrane patch often results initially in the formation of a vesicle of membrane in the pipette tip, because the ends of the patch membrane fuse together quickly after excision. The outer face of the vesicle must then be broken open to enter into inside-out mode; this may be done by briefly taking the membrane through the bath solution/air interface, by exposure to a low Ca 2+ solution, or by momentarily making contact with a droplet of paraffin or a piece of cured silicone polymer. [ 10 ]
Whole-cell recordings involve recording currents through multiple channels simultaneously, over a large region of the cell membrane. The electrode is left in place on the cell, as in cell-attached recordings, but more suction is applied to rupture the membrane patch, thus providing access from the interior of the pipette to the intracellular space of the cell. This provides a means to administer and study how treatments (e.g. drugs) can affect cells in real time. [ 11 ] Once the pipette is attached to the cell membrane, there are two methods of breaking the patch. The first is by applying more suction. The amount and duration of this suction depends on the type of cell and size of the pipette. The other method requires a large current pulse to be sent through the pipette. How much current is applied and the duration of the pulse also depend on the type of cell. [ 8 ] For some types of cells, it is convenient to apply both methods simultaneously to break the patch.
The advantage of whole-cell patch clamp recording over sharp electrode technique recording is that the larger opening at the tip of the patch clamp electrode provides lower resistance and thus better electrical access to the inside of the cell. [ 12 ] [ 11 ] A disadvantage of this technique is that because the volume of the electrode is larger than the volume of the cell, the soluble contents of the cell's interior will slowly be replaced by the contents of the electrode. This is referred to as the electrode "dialyzing" the cell's contents. [ 8 ] After a while, any properties of the cell that depend on soluble intracellular contents will be altered. The pipette solution used usually approximates the high- potassium environment of the interior of the cell to minimize any changes this may cause. There is often a period at the beginning of a whole-cell recording when one can take measurements before the cell has been dialyzed. [ 8 ]
The name "outside-out" emphasizes both this technique's complementarity to the inside-out technique, and the fact that it places the external rather than intracellular surface of the cell membrane on the outside of the patch of membrane, in relation to the patch electrode. [ 7 ]
The formation of an outside-out patch begins with a whole-cell recording configuration. After the whole-cell configuration is formed, the electrode is slowly withdrawn from the cell, allowing a bulb of membrane to bleb out from the cell. When the electrode is pulled far enough away, this bleb will detach from the cell and reform as a convex membrane on the end of the electrode (like a ball open at the electrode tip), with the original outside of the membrane facing outward from the electrode. [ 7 ] As the image at the right shows, this means that the fluid inside the pipette will be simulating the intracellular fluid, while a researcher is free to move the pipette and the bleb with its channels to another bath of solution. While multiple channels can exist in a bleb of membrane, single channel recordings are also possible in this conformation if the bleb of detached membrane is small and only contains one channel. [ 13 ]
Outside-out patching gives the experimenter the opportunity to examine the properties of an ion channel when it is isolated from the cell and exposed successively to different solutions on the extracellular surface of the membrane. The experimenter can perfuse the same patch with a variety of solutions in a relatively short amount of time, and if the channel is activated by a neurotransmitter or drug from the extracellular face, a dose-response curve can then be obtained. [ 14 ] This ability to measure current through exactly the same piece of membrane in different solutions is the distinct advantage of the outside-out patch relative to the cell-attached method. On the other hand, it is more difficult to accomplish. The longer formation process involves more steps that could fail and results in a lower frequency of usable patches.
This variation of the patch clamp method is very similar to the whole-cell configuration. The main difference lies in the fact that when the experimenter forms the gigaohm seal, suction is not used to rupture the patch membrane. Instead, the electrode solution contains small amounts of an antifungal or antibiotic agent, such as amphotericin B , nystatin , or gramicidin , which diffuses into the membrane patch and forms small pores in the membrane, providing electrical access to the cell interior. [ 15 ] When comparing the whole-cell and perforated patch methods, one can think of the whole-cell patch as an open door, in which there is complete exchange between molecules in the pipette solution and the cytoplasm. The perforated patch can be likened to a screen door that only allows the exchange of certain molecules from the pipette solution to the cytoplasm of the cell.
Advantages of the perforated patch method, relative to whole-cell recordings, include the properties of the antibiotic pores, which allow equilibration only of small monovalent ions between the patch pipette and the cytosol, but not of larger molecules that cannot permeate the pores. This property maintains endogenous levels of divalent ions such as Ca²⁺ and signaling molecules such as cAMP . Consequently, one can record from the entire cell, as in whole-cell patch clamping, while retaining most intracellular signaling mechanisms, as in cell-attached recordings. As a result, there is reduced current rundown, and stable perforated patch recordings can last longer than one hour. [ 15 ] Disadvantages include a higher access resistance, relative to whole-cell, due to the partial membrane occupying the tip of the electrode. This may decrease current resolution and increase recording noise. It can also take a significant amount of time for the antibiotic to perforate the membrane (about 15 minutes for amphotericin B, and even longer for gramicidin and nystatin). The membrane under the electrode tip is weakened by the perforations formed by the antibiotic and can rupture. If the patch ruptures, the recording is then in whole-cell mode, with antibiotic contaminating the inside of the cell. [ 15 ]
A loose patch clamp is different from the other techniques discussed here in that it employs a loose seal (low electrical resistance) rather than the tight gigaseal used in the conventional technique. This technique was used as early as the year 1961, as described in a paper by Strickholm on the impedance of a muscle cell's surface, [ 16 ] but received little attention until being brought up again and given a name by Almers, Stanfield, and Stühmer in 1982, [ 17 ] after patch clamp had been established as a major tool of electrophysiology.
To achieve a loose patch clamp on a cell membrane, the pipette is moved slowly towards the cell until the electrical resistance of the contact between the cell and the pipette increases to a few times greater than the resistance of the electrode alone. The closer the pipette gets to the membrane, the greater the resistance of the pipette tip becomes, but if it is brought too close, a seal may form and it could become difficult to remove the pipette without damaging the cell. For the loose patch technique, the pipette does not get close enough to the membrane to form a gigaseal or a permanent connection, nor to pierce the cell membrane. [ 18 ] The cell membrane stays intact, and the lack of a tight seal creates a small gap through which ions can pass outside the cell without entering the pipette.
A significant advantage of the loose seal is that the pipette that is used can be repeatedly removed from the membrane after recording, and the membrane will remain intact. This allows repeated measurements in a variety of locations on the same cell without destroying the integrity of the membrane. This flexibility has been especially useful to researchers for studying muscle cells as they contract under real physiological conditions, obtaining recordings quickly, and doing so without resorting to drastic measures to stop the muscle fibers from contracting. [ 17 ] A major disadvantage is that the resistance between the pipette and the membrane is greatly reduced, allowing current to leak through the seal, and significantly reducing the resolution of small currents. This leakage can be partially corrected for, however, which offers the opportunity to compare and contrast recordings made from different areas on the cell of interest. Given this, it has been estimated that the loose patch technique can resolve currents smaller than 1 mA/cm². [ 18 ]
Patch-seq combines cellular imaging, RNA sequencing, and patch clamp recording to fully characterize neurons across multiple modalities. [ 19 ] As neural tissues are one of the most transcriptomically diverse populations of cells , classifying neurons into cell types in order to understand the circuits they form is a major challenge for neuroscientists. Combining classical classification methods with single cell RNA-sequencing post-hoc has proved to be difficult and slow. By combining multiple data modalities such as electrophysiology , sequencing and microscopy , Patch-seq allows neurons to be characterized in multiple ways simultaneously. It currently suffers from low throughput relative to other sequencing methods, mainly due to the manual labor involved in achieving a successful patch-clamp recording on a neuron. Investigations are currently underway to automate patch-clamp technology, which would improve the throughput of patch-seq as well. [ 20 ]
Automated patch clamp systems have been developed in order to collect large amounts of data inexpensively in a shorter period of time. Such systems typically include a single-use microfluidic device, either an injection molded or a polydimethylsiloxane (PDMS) cast chip, to capture a cell or cells, and an integrated electrode.
In one form of such an automated system, a pressure differential is used to force the cells being studied to be drawn towards the pipette opening until they form a gigaseal. Then, by briefly exposing the pipette tip to the atmosphere, the portion of the membrane protruding from the pipette bursts, and the membrane is now in the inside-out conformation, at the tip of the pipette. In a completely automated system, the pipette and the membrane patch can then be rapidly moved through a series of different test solutions, allowing different test compounds to be applied to the intracellular side of the membrane during recording. [ 20 ] | https://en.wikipedia.org/wiki/Patch_clamp |
Patch dynamics is an ecological perspective that the structure, function, and dynamics of ecological systems can be understood through studying their interactive patches. Patch dynamics, as a term, may also refer to the spatiotemporal changes within and among patches that make up a landscape. Patch dynamics is ubiquitous in terrestrial and aquatic systems across organizational levels and spatial scales. From a patch dynamics perspective, populations, communities, ecosystems, and landscapes may all be studied effectively as mosaics of patches that differ in size, shape, composition, history, and boundary characteristics.
The idea of patch dynamics dates back to the 1940s when plant ecologists studied the structure and dynamics of vegetation in terms of the interactive patches that it comprises. A mathematical theory of patch dynamics was developed by Simon Levin and Robert Paine in the 1970s, originally to describe the pattern and dynamics of an intertidal community as a patch mosaic created and maintained by tidal disturbances. Patch dynamics became a dominant theme in ecology between the late 1970s and the 1990s.
Patch dynamics is a conceptual approach to ecosystem and habitat analysis that emphasizes dynamics of heterogeneity within a system (i.e. that each area of an ecosystem is made up of a mosaic of small 'sub-ecosystems'). [ 1 ]
Diverse patches of habitat created by natural disturbance regimes are seen as critical to the maintenance of biodiversity . A habitat patch is any discrete area with a definite shape and spatial configuration that is used by a species for breeding or obtaining other resources. Mosaics are the patterns within landscapes that are composed of smaller elements, such as individual forest stands, shrubland patches, highways, farms, or towns.
Historically, due to the short time scale of human observation, mosaic landscapes were perceived to be static patterns of human population mosaics. [ 2 ] This focus centered on the idea that the status of a particular population , community , or ecosystem could be understood by studying a particular patch within a mosaic. However, this perception ignored the conditions that interact with, and connect patches. In 1979, Bormann and Likens coined the phrase shifting mosaic to describe the theory that landscapes change and fluctuate, and are in fact dynamic. [ 3 ] This is related to the battle of cells that occurs in a Petri dish [ citation needed ] .
Patch dynamics refers to the concept that landscapes are dynamic. [ 1 ] There are three states that a patch can exist in: potential , active , and degraded . Patches in the potential state are transformed into active patches through colonization of the patch by dispersing species arriving from other active or degrading patches. Patches are transformed from the active state to the degraded state when the patch is abandoned, and patches change from degraded to active through a process of recovery . [ 4 ]
Logging, fire, farming, and reforestation can all contribute to the process of colonization, and can effectively change the shape of the patch. Patch dynamics also refers to changes in the structure, function, and composition of individual patches that can, for example, affect the rate of nutrient cycling [ citation needed ] .
Patches are also linked. Although patches may be separated in space, migration can occur from one patch to another. This migration maintains the population of some patches, and can be the mechanism by which some plant species spread. This implies that ecological systems within landscapes are open, rather than closed and isolated. (Pickett, 2006)
Recognizing the patch dynamics within a system is needed for conservation efforts to succeed. Successful conservation includes understanding how patches change and predicting how they will be affected by external forces. [ 5 ] These externalities include natural effects, such as land use , disturbance , restoration , and succession , and the effects of human activities. In a sense, conservation is the active maintenance of patch dynamics (Pickett, 2006). The analysis of patch dynamics could be used to predict changes in the biodiversity of an ecosystem. When patches of species can be tracked, it has been shown that fluctuations in the biggest patch (the most dominant species) can be used as an early warning of a biodiversity collapse. [ 6 ] This means that if external conditions, such as climate change and habitat fragmentation , change the internal dynamics of patches, a sharp reduction in biodiversity can be detected before it occurs. [ 6 ] [ 7 ]
Patch dynamics is a term used in physics for an algorithmic approach that bridges models describing macroscale behavior with fine-scale simulation in order to predict large-scale patterns in fluid flow . It uses locally averaged properties computed over short space-time scales to advance and predict dynamics over long space-time scales.
In patch dynamics and finite difference approximations , the macroscale variables are defined at the grid points of a mesh chosen to resolve the solution. The standard PDE adaptive grid methods can be used to resolve gradients in the macroscale solution. Both patch dynamics and finite difference methods generate time derivatives at mesh points; these time derivatives then help advance the solution in time. [ 1 ]
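A minimal sketch of this idea follows, assuming (purely for illustration) that the fine-scale model is one-dimensional explicit diffusion: a small patch is set up around each macroscale mesh point, run briefly, and its locally averaged rate of change is used as the macroscale time derivative to advance the coarse solution with a forward-Euler step. All function names, grid sizes, and parameter values are illustrative and not taken from the cited reference.

```python
import numpy as np

def patch_time_derivative(u_centre, u_left, u_right, dx_macro, D=1.0,
                          patch_cells=11, dx_micro=0.01, micro_steps=20):
    """Estimate du/dt at one macroscale mesh point by running a short
    microscale (explicit diffusion) burst on a small patch whose initial
    profile is interpolated from the neighbouring macroscale values."""
    x = (np.arange(patch_cells) - patch_cells // 2) * dx_micro
    slope = (u_right - u_left) / (2 * dx_macro)
    curv = (u_right - 2 * u_centre + u_left) / dx_macro**2
    u = u_centre + slope * x + 0.5 * curv * x**2      # local quadratic interpolant
    dt_micro = 0.2 * dx_micro**2 / D                  # stable explicit step
    u0_avg = u[1:-1].mean()
    for _ in range(micro_steps):                      # short burst of microscale dynamics
        u[1:-1] += dt_micro * D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx_micro**2
    # Locally averaged rate of change over the burst = macroscale time derivative.
    return (u[1:-1].mean() - u0_avg) / (micro_steps * dt_micro)

# Advance a coarse macroscale field using only patch-derived time derivatives.
dx_macro, dt_macro = 0.1, 0.002
xs = np.arange(0.0, 1.0 + dx_macro, dx_macro)
U = np.exp(-40 * (xs - 0.5)**2)                       # initial macroscale profile
for _ in range(100):
    dUdt = np.zeros_like(U)
    for i in range(1, len(U) - 1):
        dUdt[i] = patch_time_derivative(U[i], U[i - 1], U[i + 1], dx_macro)
    U[1:-1] += dt_macro * dUdt[1:-1]                  # forward-Euler macroscale step
print(U.round(3))
```

In a genuine patch dynamics application the microscale model would be an expensive particle or fine-grid simulation rather than the simple diffusion stencil assumed here; the structure of the algorithm, however, is the same.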
| https://en.wikipedia.org/wiki/Patch_dynamics_(physics)
A patch panel is a device or unit featuring a number of jacks , usually of the same or similar type, used to connect and route circuits for monitoring, interconnecting, and testing in a convenient, flexible manner. Patch panels are commonly used in computer networking , recording studios , and radio and television .
The term patch came from early use in telephony and radio studios , where extra equipment kept on standby could be temporarily substituted for failed devices. This reconnection was done via patch cords and patch panels, like the jack fields of cord-type telephone switchboards .
Patch panels are also referred to as patch bays , patch fields , jack panels or jack fields .
In recording studios , television and radio broadcast studios, and concert sound reinforcement systems, patchbays are widely used to facilitate the connection of different devices, such as microphones, electric or electronic instruments, effects (e.g. compression, reverb, etc.), recording gear, amplifiers , or broadcasting equipment. Patchbays make it easier to connect different devices in different orders for different projects, because all of the changes can be made at the patchbay. Additionally, patchbays make it easier to troubleshoot problems such as ground loops ; even small home studios and amateur project studios often use patchbays, because it groups all of the input jacks into one location. This means that devices mounted in racks or keyboard instruments can be connected without having to hunt around behind the rack or instrument with a flashlight for the right jack. Using a patchbay also saves wear and tear on the input jacks of studio gear and instruments, because all of the connections are made with the patchbay.
Patch panels are being used more prevalently in domestic installations, owing to the popularity of "Structured Wiring" installs. [ 1 ] They are also increasingly found in home cinema installations. [ citation needed ]
It is conventional to have the top row of jacks wired at the rear to outputs and bottom row of jacks wired to inputs. [ 3 ] Patch bays may be half-normal (usually bottom) or full-normal, "normal" indicating that the top and bottom jacks are connected internally. When a patch bay has bottom half-normal wiring, then with no patch cord inserted into either jack, the top jack is internally linked to the bottom jack via break contacts on the bottom jack; inserting a patch cord into the top jack will take a feed off that jack while retaining the internal link between the two jacks; inserting a patch cord into the bottom jack will break the internal link and replace the signal feed from the top jack with the signal carried on the patch cord. With top half-normal wiring, the same happens but vice versa. If a patch bay is wired to full-normal, then it includes break contacts in both rows of jacks.
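The normalling behaviour described above can be made concrete with a small sketch. This is a hedged model of one vertical pair of jacks, not the behaviour of any particular product; the function name and arguments are illustrative.

```python
def destination_signal(normalling, top_patched, bottom_patch_signal, source_signal):
    """Return the signal seen by the device wired to the bottom (input) jack.

    normalling          -- "half-normal" (break contacts on the bottom jack only)
                           or "full-normal" (break contacts on both jacks)
    top_patched         -- True if a cord is inserted in the top (output) jack
    bottom_patch_signal -- signal carried by a cord in the bottom jack, or None if empty
    source_signal       -- signal arriving at the top jack from the output device
    """
    if bottom_patch_signal is not None:
        # A cord in the bottom jack always breaks the internal link and
        # replaces the feed with whatever the cord carries.
        return bottom_patch_signal
    if normalling == "half-normal":
        # A cord in the top jack only taps the signal; the internal link is retained.
        return source_signal
    if normalling == "full-normal":
        # Break contacts on the top jack too: a top insertion interrupts the link.
        return None if top_patched else source_signal
    raise ValueError("unknown normalling scheme")

# Half-normal wiring lets an output be split (multed) without interrupting it:
print(destination_signal("half-normal", top_patched=True,
                         bottom_patch_signal=None, source_signal="mix bus L"))
```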
Dedicated switching equipment can be an alternative to patch bays in some applications. Switches can make routing as easy as pushing a button, and can provide other benefits over patch bays, including routing a signal to any number of destinations simultaneously. However, switching equipment that can emulate the capabilities of a given patch bay is much more expensive. For example, an S-Video matrix routing switcher with the same capability (8×8) as a 16-point S-Video patch panel (8 patch cables connecting 8 inputs and 8 outputs) may cost ten times more, though it would probably have more capabilities.
Like patch panels, switching equipment for nearly any type of signal is available, including analog and digital video and audio, as well as RF (cable TV), MIDI , telephone, networking and electrical. There are various types of switches for audio and video, from simple selector switches to sophisticated production switchers . However, emulating or exceeding the capabilities of audio or video patch panels requires specialized devices like crossbar switches .
Switching equipment may be electronic, mechanical, or electro-mechanical . Some switcher hardware can be controlled via computer or other external devices. Some have automated or pre-programmed operational capabilities. There are also software switcher applications used to route signals and control data within a "pure digital" computer environment.
| https://en.wikipedia.org/wiki/Patch_panel
A patch test is a diagnostic method used to determine which specific substances cause allergic inflammation of a patient's skin .
Patch testing helps identify which substances may be causing a delayed-type allergic reaction in a patient and may identify allergens not identified by blood testing or skin prick testing. It is intended to produce a local allergic reaction on a small area of the patient's back, where the diluted chemicals are applied.
The chemicals included in the patch test kit account for approximately 85–90 percent of cases of allergic contact eczema and include chemicals present in metals ( e.g. , nickel), rubber, leather, formaldehyde, lanolin, fragrance, toiletries, hair dyes, medicines, pharmaceutical items, food, drink, preservatives, and other additives.
A patch test relies on the principle of a type IV hypersensitivity reaction .
The first step in becoming allergic is sensitization. When skin is exposed to an allergen , the antigen-presenting cells (APCs) – also known as Langerhans cells or dermal dendritic cells – phagocytize the substance, break it down into smaller components and present them on their surface bound to major histocompatibility complex class II (MHC-II) molecules. The APC then travels to a lymph node , where it presents the displayed allergen to a CD4+ T-cell , or T-helper cell. The T-cell undergoes clonal expansion and some clones of the newly formed antigen-specific sensitized T-cells travel back to the site of antigen exposure. [ 1 ] [ 2 ]
When the skin is again exposed to the antigen, the memory T-cells in the skin recognize the antigen and produce cytokines (chemical signals), which cause more T-cells to migrate from blood vessels . This starts a complex immune cascade leading to skin inflammation, itching, and the typical rash of contact dermatitis . In general, it takes 2–4 days for a response in patch testing to develop. The patch test is simply the induction of contact dermatitis in a small area. [ 3 ]
Application of the patch tests takes about half an hour, though many times the overall appointment time is longer as the provider will take an extensive history. Tiny quantities of 25 to ~150 materials (allergens) in individual square plastic or round aluminium chambers are applied to the upper back. They are kept in place with special hypoallergenic adhesive tape. The patches stay in place undisturbed for at least 48 hours. Vigorous exercise or stretching may disrupt the test. At the second appointment, usually, 48 hours later, the patches are removed. Sometimes additional patches are applied. The back is marked with an indelible black felt tip pen or another suitable marker to identify the test sites, and a preliminary reading is done. These marks must be visible at the third appointment, usually 24–48 hours later (72–96 hours after application). In some cases, reading at 7 days may be requested, especially if a special metal series is tested. [ citation needed ]
Patch testing for cosmetic and skincare products can be broken down into a variety of different categories. [ 4 ]
The dermatologist or allergist will read the results on Day 2 (48 hours) and Day 3 (72 hours). If the initial results are negative, another reading is made at Day 7 (168 hours). The result for each test site is recorded as per the International Contact Dermatitis Research Group Criteria: No reaction (0), doubtful reaction (?), weak positive (1+), strong positive (2+), extreme positive (3+), irritant reaction (IR), and not tested (NT). [ 5 ]
Doubtful reactions are associated with faint erythema . Weak positives are associated with palpable erythema, infiltration, and papules . Strong positives are more severe than weak positives and show the presence of vesicles . Extreme positives are more intense than strong positives and show coalescing vesicles. [ 5 ]
The patch test has a poor sensitivity, ranging between 11% and 38%, meaning that false negative reactions are common with the patch test. False positive reactions can also occur as a result of irritant reactions. If the patch test yields a false negative result, then skin prick or intradermal testing may be recommended. [ 5 ]
The top allergens from 2005–06 were: nickel sulfate (19.0%), Myroxylon pereirae ( Balsam of Peru , 11.9%), fragrance mix I (11.5%), quaternium-15 (10.3%), neomycin (10.0%), bacitracin (9.2%), formaldehyde (9.0%), cobalt chloride (8.4%), methyldibromoglutaronitrile / phenoxyethanol (5.8%), p -phenylenediamine (5.0%), potassium dichromate (4.8%), carba mix (3.9%), thiuram mix (3.9%), diazolidinyl urea (3.7%), and 2-bromo-2-nitropropane-1,3-diol (3.4%). [ 6 ]
The most frequent allergen recorded in many research studies around the world is nickel . Nickel allergy is more prevalent in young women and is especially associated with ear piercing or any nickel-containing watch, belt, zipper, or jewelry. Other common allergens are surveyed in North America by the North American Contact Dermatitis Group (NACDG). [ 6 ]
Dermatologists may refer a patient with a suspected food allergy for patch testing. [ 7 ] Foods identified by blood testing or skin prick testing may or may not overlap with foods identified by patch testing. [ 7 ]
Certain food additives and flavorings can cause allergic reactions around and in the mouth, around the anus and vulva as food allergens pass out of the body, or cause a widespread rash on the skin. Allergens such as nickel, balsam of Peru , parabens , sodium benzoate , or cinnamaldehyde may worsen or cause skin rashes.
Foods that cause urticaria (hives) or anaphylaxis (such as peanuts) cause a type I hypersensitivity reaction whereby the part of the food molecule is directly recognized by cells close to the skin, called mast cells. Mast cells have antibodies on their surface called immunoglobulin E (IgE). These act as receptors, and if they recognize the allergen, they release their contents, causing an immediate allergic reaction. Type I reactions like anaphylaxis are immediate and do not take 2 to 4 days to appear.
In a study of patients with chronic hives who were patch tested, those who were found allergic and avoided all contact with their allergen, including dietary intake, stopped having hives. Those who started eating their allergen again had recurrence of their hives. [ 8 ] | https://en.wikipedia.org/wiki/Patch_test |
Patching and capping refer to the aggregation of fluorescently tagged antibodies that are associated with proteins on the membranes of living cells . The aggregation appears as a cap or a patch in the fluorescence microscope and is due to the bivalent nature of antibodies. Patching and capping were critical in demonstrating the fluid nature of plasma membranes.
Variations in density within the specimen are amplified to enhance contrast in unstained cells, which is especially useful for examining living unpigmented cells. In other words, phase contrast is a contrast-enhancing optical technique that can be used to produce high-contrast images of specimens such as living cells and subcellular structures, including nuclei and other organelles. One of the major advantages of using phase contrast microscopy is that living cells can be examined in their natural state without being killed, fixed, or stained. As a result, biological processes in the cell can be observed and recorded in high contrast with sharp clarity of minute specimen details.
When the ligand binds to its specific receptor, the ligand-receptor complex accumulates in the coated pits. In many cells these pits and complexes begin to concentrate in one area of a cell. Cytochemically, this appears as patches of label on the cell surface (patching). Eventually, the patches coalesce to form a cap at one pole of the cell (capping). Not all cells form caps, but most do form patches. The pre-concentration process minimizes the amount of fluid that is taken up in the vesicle .
Patchy particles are micron- or nanoscale colloidal particles that are anisotropically patterned, either by modification of the particle surface chemistry ("enthalpic patches"), [ 1 ] through particle shape ("entropic patches"), [ 2 ] or both. [ 3 ] The particles have a repulsive core and highly interactive surfaces that allow for this assembly. [ 2 ] The placement of these patches on the surface of a particle promotes bonding with patches on other particles. Patchy particles are used as a shorthand for modelling anisotropic colloids, [ 1 ] proteins [ 4 ] and water [ 5 ] and for designing approaches to nanoparticle synthesis. [ 6 ] Patchy particles range in valency from two ( Janus particles ) upward. [ 7 ] Patchy particles of valency three or more experience liquid-liquid phase separation. [ 8 ] [ 9 ] Some phase diagrams of patchy particles do not follow the law of rectilinear diameters. [ 8 ]
The interaction between patchy particles can be described by a combination of two discontinuous potentials: a hard-sphere potential accounting for the repulsion between the cores of the particles, and an attractive square-well potential for the attraction between the patches . [ 8 ] [ 9 ] With the interaction potential in hand, one can use different methods to compute thermodynamic properties.
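A minimal sketch of such a discontinuous pair potential, in the spirit of a Kern–Frenkel-type one-patch model, is given below: an infinite hard-sphere repulsion between cores plus a square-well attraction that acts only when the patch on each particle points towards the other particle. The function name, parameter names, and values are illustrative assumptions, not taken from the cited studies.

```python
import numpy as np

def pair_energy(r_i, r_j, patch_i, patch_j,
                sigma=1.0, well_width=0.2, epsilon=1.0, cos_patch=0.9):
    """Energy of two one-patch particles.

    r_i, r_j        -- particle centre positions (3-vectors)
    patch_i, patch_j -- unit vectors giving each particle's patch direction
    sigma           -- hard-core diameter; well_width -- range of the attraction
    epsilon         -- well depth; cos_patch -- cosine of the patch half-opening angle
    """
    rij = np.asarray(r_j, float) - np.asarray(r_i, float)
    r = np.linalg.norm(rij)
    if r < sigma:
        return np.inf                      # hard-sphere core overlap
    if r < sigma + well_width:
        u = rij / r
        # Attraction only if both patches face the interparticle axis.
        if np.dot(patch_i, u) >= cos_patch and np.dot(patch_j, -u) >= cos_patch:
            return -epsilon                # square-well patch-patch bond
    return 0.0                             # no interaction beyond the well

# Two particles bonded through aligned patches:
print(pair_energy([0, 0, 0], [1.1, 0, 0], patch_i=[1, 0, 0], patch_j=[-1, 0, 0]))
```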
Using a continuous representation [ 8 ] of the discontinuous potential described above enables the simulation of patchy particles using molecular dynamics.
One simulation approach uses a Monte Carlo method , in which trial "moves" are chosen and accepted so that the system samples its equilibrium distribution. One type of move is rototranslation. This is carried out by choosing a random particle, random angular and radial displacements, and a random axis of rotation. [ 10 ] Rotational degrees of freedom need to be determined prior to the simulation. The particle is then rotated and moved according to these values, and the move is accepted or rejected according to the resulting change in energy (see the sketch below). The maximum displacement per move also needs to be controlled, because it affects the resulting shape and size of the assembled structures. Another approach samples the grand-canonical ensemble, in which the system is in equilibrium with a thermal bath and a reservoir of particles. [ 10 ] Volume, temperature, and chemical potential are fixed; because of these constraints, the number of particles (N) fluctuates. This ensemble is typically used to monitor phase behaviour. In the additional insertion moves it requires, a particle is added at a random orientation and random position.
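A hedged sketch of a single rototranslation move is shown below: pick a random particle, draw random radial and angular displacements and a random rotation axis, and accept or reject with the Metropolis criterion. Here positions and patches are assumed to be N×3 NumPy arrays, the total-energy function is assumed to be supplied by the caller (for example, the pair potential above summed over all pairs), and the step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate(v, axis, angle):
    """Rotate vector v about a unit axis by angle (Rodrigues' formula)."""
    v, axis = np.asarray(v, float), np.asarray(axis, float)
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1 - np.cos(angle)))

def rototranslation_move(positions, patches, total_energy, beta,
                         max_disp=0.1, max_angle=0.2):
    """Attempt one Metropolis rototranslation move on a randomly chosen particle."""
    i = rng.integers(len(positions))
    old_pos, old_patch = positions[i].copy(), patches[i].copy()
    e_old = total_energy(positions, patches)

    positions[i] += rng.uniform(-max_disp, max_disp, size=3)   # random translation
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)                               # random rotation axis
    patches[i] = rotate(patches[i], axis, rng.uniform(-max_angle, max_angle))

    e_new = total_energy(positions, patches)
    if rng.random() >= np.exp(-beta * (e_new - e_old)):        # Metropolis criterion
        positions[i], patches[i] = old_pos, old_patch          # reject: restore state
        return False
    return True                                                # accept
```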
Other simulations involve biased Monte Carlo moves. One type is the aggregation volume-bias move. It consists of paired moves: the first tries to form a bond between two previously unbonded particles, while the second tries to break an existing bond by separating them. The first aggregation volume-bias move follows this procedure: two particles, I and J, that are not neighboring particles are chosen, and particle J is moved to a position chosen uniformly inside the bonding volume of particle I (see the sketch below). The second aggregation volume-bias move randomly chooses a particle J that is bonded to I; particle J is then moved outside the bonding volume of particle I, so that the two particles are no longer bonded. [ 10 ] A third type of aggregation volume-bias move takes a particle I bonded to particle J and inserts it into the bonding volume of a third particle.
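The geometric core of the first move, sampling a position uniformly inside the bonding volume of particle I, can be sketched as follows. The acceptance rule that makes the move obey detailed balance (it involves the ratio of bonding to non-bonding volumes) is deliberately omitted, and the parameter names simply mirror the illustrative pair potential above; none of this is taken from the cited reference.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_in_bonding_volume(r_i, patch_i, sigma=1.0, well_width=0.2, cos_patch=0.9):
    """Draw a position uniformly within the bonding volume of particle I:
    the spherical shell sigma < r < sigma + well_width, restricted to the
    cone around patch_i within which the square-well attraction acts."""
    # Uniform in volume over the shell: sample r^3 uniformly.
    r3 = rng.uniform(sigma**3, (sigma + well_width)**3)
    r = r3 ** (1.0 / 3.0)
    # Uniform direction inside the patch cone: cos(theta) uniform in [cos_patch, 1].
    cos_t = rng.uniform(cos_patch, 1.0)
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    # Build an orthonormal frame whose z-axis is the patch direction.
    z = np.asarray(patch_i, float) / np.linalg.norm(patch_i)
    x = np.cross(z, [0.0, 0.0, 1.0])
    if np.linalg.norm(x) < 1e-8:            # patch parallel to z: use another helper axis
        x = np.cross(z, [0.0, 1.0, 0.0])
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    direction = sin_t * np.cos(phi) * x + sin_t * np.sin(phi) * y + cos_t * z
    return np.asarray(r_i, float) + r * direction

print(sample_in_bonding_volume([0.0, 0.0, 0.0], [1.0, 0.0, 0.0]).round(3))
```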
Grand-canonical ensemble simulations are improved by aggregation volume-bias moves: when such moves are applied, the rate of monomer formation and depletion is enhanced, and the efficiency of the grand-canonical ensemble moves increases.
A second biased Monte Carlo technique is virtual move Monte Carlo, a cluster-move algorithm. It was developed to improve relaxation times in strongly interacting, low-density systems and to better approximate diffusive dynamics in the system. [ 10 ] This technique is well suited to self-assembling and polymeric systems, for which it can find natural collective moves that relax the system.
Self-assembly is also a method to create patchy particles. This method allows formation of complex structures like chains, sheets, rings, icosahedra, square pyramids, tetrahedra, and twisted staircase structures. [ 1 ] By coating the surface of particles with highly anisotropic, highly directional, weakly interacting patches, the arrangement of the attractive patches can organize disordered particles into structures. The coating and the arrangement of the attractive patches is what contributes to the size, shape, and structure of the resulting particle. [ 1 ]
Entropic patches can be designed to self-assemble into simple cubic , body-centered cubic (bcc), diamond, and dodecagonal quasicrystal structures. The local coordination shell partially dictates the structure that is assembled. [ 2 ] Spheres are simulated with cubic, octahedral, and tetrahedral faceting, which allows the entropic patches to self-assemble.
Tetrahedrally faceted spheres are obtained by beginning with simple spheres and slicing four equal facets aligned with the faces of a tetrahedron. Monte Carlo simulations were performed to determine different values of α, the faceting amount. [ 2 ] The particular faceting amount determines the lattice that assembles. Simple cubic lattices are achieved in a similar way, by slicing cubic facets into spheres. A bcc crystal is achieved by faceting a sphere octahedrally. [ 2 ]
The faceting amount, α, is used in the emergent valence self-assembly to determine what crystal structure will form. A perfect sphere is set as α=0. The shape that is faceted to the sphere is defined at α=1. [ 2 ] By fluctuating the faceting amount between α=0 and α=1, the lattice can change. Changes include effects on self-assembly, packing structure, amount of coordination of the faceting patch to the sphere, shape of the faceting patch, type of crystal lattice formed, and the strength of the entropic patch. [ 2 ] | https://en.wikipedia.org/wiki/Patchy_particles |
The patent encumbrance of large automotive NiMH batteries refers to allegations that corporate interests have used the patent system to prevent the commercialization of nickel metal hydride (NiMH) battery technology. Nickel metal hydride battery technology was considered important to the development of battery electric vehicles (BEVs or EVs), plug-in hybrid electric vehicles (PHEVs) and hybrid electric vehicles (HEVs) before the technology for lithium-ion battery packs became a viable replacement. [ 1 ]
The modern nickel-metal hydride (NiMH) electric vehicle battery was invented by Dr. Masahiko Oshitani, of the GS Yuasa Corporation , and Stanford Ovshinsky , the founder of the Ovonics Battery Company , [ 2 ] and granted a patent. [ 3 ] The current trend in the industry is towards the development of lithium-ion (Li-Ion) technology to replace NiMH in electric vehicles. In 2009, Toyota tested lithium batteries as a potential replacement for the nickel metal hydride batteries used in its Prius model gasoline-electric hybrid. The company said that it would continue to use NiMH batteries in the Prius, but would introduce an all-electric vehicle based on lithium technology. Li-Ion technology, while functionally superior [ citation needed ] due to its higher specific energy and specific power , is more expensive and, as of 2009, relatively untested with regards to its long-term reliability. [ 4 ] In 2007, the National Renewable Energy Laboratory [ failed verification ] said that Li-ion batteries may be subject to dangerous overheating and fire if cells are controlled incorrectly or damaged. [ 5 ] In 2011, the National Highway Traffic Safety Administration [ failed verification ] investigated the safety of lithium battery powered vehicles and concluded that they pose no more risk of fire than other vehicles. [ 6 ]
According to the United States Department of Energy [ failed verification ] the primary advantages of lithium batteries include their high power-to-weight ratio, high energy efficiency, good high temperature performance, and low tendency to spontaneously discharge when left unused for extended periods of time. Nickel metal hydride batteries have higher self-discharge, tend to generate heat at high temperatures, and have problems with hydrogen loss. [ 7 ] However, none of these problems prevented the use of nickel metal hydride batteries in the Toyota Prius , which has an excellent reliability rating. [ 8 ]
The 1999 GM EV1 production vehicle, powered by nickel metal hydride batteries, had a 26.4 kWh battery and an EPA range of 105 miles. [ 9 ] [ 10 ] [ note 1 ]
The 2011 Nissan Leaf production vehicle had a 24 kWh battery and an EPA range of 84 miles. [ 11 ]
Despite having not only a shorter range, but also a battery of smaller capacity than the 1999 GM EV1, the Leaf found 200,000 buyers worldwide before battery capacity was first increased in 2016.
Based on this, the claim that the NiMH technology was not sufficiently advanced in the 1990s seems false (at least with regard to two-seater cars that could more easily accommodate a battery of relatively large dimensions and weight). [ editorializing ] [ citation needed ]
In an interview in the 2006 documentary Who Killed the Electric Car? , Ovshinsky stated that in the early 1990s, the auto industry created the US Auto Battery Consortium (USABC) to stifle the development of electric vehicle technology by preventing the dissemination of knowledge about Ovshinsky's battery-related patents to the public through the California Air Resources Board (CARB). [ 12 ]
According to Ovshinsky, the auto industry falsely suggested that NiMH technology was not yet ready for widespread use in road cars. [ 13 ] Members of the USABC, including General Motors , Ford , and Chrysler , threatened to take legal action against Ovshinsky if he continued to promote NiMH's potential for use in BEVs, and if he continued to lend test batteries to Solectria , a start-up electric vehicle maker that was not part of the USABC. The Big Three car companies [ not specific enough to verify ] argued that his behavior violated their exclusive rights to the battery technology, because they had matched a federal government grant given to Ovonics to develop NiMH technology. [ dubious – discuss ] Critics argue that the Big Three were more interested in convincing CARB members that electric vehicles were not technologically and commercially viable. [ 12 ]
In 1994, General Motors acquired a controlling interest in Ovonics 's battery development and manufacture, including patents controlling the manufacture of large NiMH batteries . The original intent of the equity alliance was to develop NiMH batteries for GM's EV1 BEV. Sales of GM-Ovonics batteries were later taken over by GM manager and critic of CARB John Williams, leading Ovshinsky to wonder whether his decision to sell to GM had been naive. [ 12 ] The EV1 program was shut down by GM before the new NiMH battery could be widely commercialized, despite field tests that indicated the Ovonics battery extended the EV1's range to over 150 miles. [ 12 ]
Here, "field tests" mean actual use by customers, [ citation needed ] as these NiMH powered cars were in the hands of customers from 1999 until 2003, when GM took the cars from lessees (over their protests) [ citation needed ] and crushed them.
In October 2000, oil company Texaco announced the purchase of General Motors' share in GM Ovonics. [ 14 ] [ 15 ] [ 16 ] Texaco was itself acquired by rival Chevron , which was announced a few days later (and completed a year later). [ 17 ] [ 18 ]
At that time, Toyota was using NiMH batteries in the electric RAV4 SUV available in California. In March 2001, Chevron-controlled Ovonics filed a patent infringement suit against Toyota 's battery supplier, Panasonic , as well as against Toyota itself, [ 19 ] which led to a negotiated settlement in 2004. The agreement included extensive cross-licensing of each company's patents; a joint research venture to improve nickel hydride battery technology; and a ban on Panasonic's and Toyota's [ 19 ] use of large-format NiMH batteries for certain transportation uses until 2010. [ 19 ] [ 20 ] [ 21 ] Information about exactly which transportation uses of NiMH batteries ChevronTexaco had forbidden was suppressed due to a gag order placed on Toyota. [ 22 ] This gag order was so effective that Panasonic did not develop size C and D Eneloop batteries.
In this way, ChevronTexaco forced Toyota to end production of the first-generation Toyota RAV4 EV [ neutrality is disputed ] . However, the existing RAV4 EVs were not crushed. After the court granted a motion to stay proceedings (January 2002) and the ICC also agreed to hold its proceedings, Toyota allowed customers to buy their RAV4 EVs (March 2002 onwards), previously only available on a lease, even though it was not required by California law to do so, and Toyota's obligations under the MOA with the California Air Resources Board's ZEV mandate were already fulfilled. Moreover, production and sales continued during the time the proceedings were halted, and sales ended only shortly (November 2002) before the parties entered into an arbitration agreement (December 2002), [ 19 ] even though the last delivered vehicles had to be built with non-standard manufacturing techniques such as assembling a vehicle from spare parts (deliveries continued until September 2003). This shows an obvious interest [ neutrality is disputed ] on the part of at least some people at Toyota in keeping the electric RAV4 alive.
After the forced discontinuation of the first-gen RAV4 EV and until Chevron-held patents expired, no new NiMH electric car was offered for sale or lease in the United States. Forced by Chevron to abandon plug-in vehicles, Toyota continued to use NiMH batteries in the non-plug-in Prius hybrid.
In 2003, Texaco Ovonics Battery Systems was restructured into Cobasys , a 50/50 joint venture between ChevronTexaco and Ovonics, now known as Energy Conversion Devices (ECD) Ovonics . [ 23 ] Energy Conversion Devices announced that they had exercised an option to purchase back 4,376,633 shares of stock from a Chevron subsidiary, and would cancel and return them to authorized-unissued status. This is the exact number of shares that was listed as owned by ChevronTexaco in the January 15, 2003 filing.
ChevronTexaco also maintained veto power over any sale or licensing of NiMH technology. [ 24 ] In addition, ChevronTexaco maintained the right to seize all of Cobasys' intellectual property rights in the event that ECD Ovonics did not fulfill its contractual obligations. [ 24 ] On September 10, 2007, ChevronTexaco (now known as simply "Chevron") filed suit claiming that ECD Ovonics had not fulfilled its obligations. ECD Ovonics disputed this claim. [ 25 ] The arbitration hearing has been repeatedly suspended while the parties negotiated with General Motors over the sale of Cobasys back to GM. As of March 2008, no agreement had been reached with GM. [ 26 ]
In her 2007 book Plug-in Hybrids: The Cars that Will Recharge America , Sherry Boschert argues that large-format NiMH batteries (i.e., 25 amp-hours or more) are commercially viable but that Cobasys would only accept very large orders (more than 10,000) for these batteries. The effect is that this policy precludes small companies and individuals from buying them. It also precludes larger auto manufacturers from developing test fleets of new PHEV and EV designs. Toyota employees complained about the difficulty in getting smaller orders of large format NiMH batteries to service the existing 825 RAV4 EVs .
Since no other companies were willing to make large orders, Cobasys was not manufacturing nor licensing any large format NiMH battery technology for automotive purposes. Boschert quotes Dave Goldstein, president of the Electric Vehicle Association of Washington D.C., as saying this policy is necessary because the cost of setting up a multimillion-dollar battery assembly line could not be justified without guaranteed orders of 100,000 batteries (~12,000 EVs) per year for 3 years. Boschert concludes that, "it's possible that Cobasys (Chevron) is squelching all access to large NiMH batteries through its control of patent licenses in order to remove a competitor to gasoline. Or it's possible that Cobasys simply wants the market for itself and is waiting for a major automaker to start producing plug-in hybrids or electric vehicles." [ 27 ]
In an interview with The Economist , Ovshinsky subscribed to the former view. "I think we at ECD made a mistake of having a joint venture with an oil company , frankly speaking. And I think it's not a good idea to go into business with somebody whose strategies would put you out of business, rather than building the business." [ 28 ] In the same interview, however, when asked, "So it’s your opinion that Cobasys is preventing other people from making it for that reason?", he responded, "Cobasys is not preventing anybody. Cobasys just needs an infusion of cash."
In October 2007, International Acquisitions Services, Inc. and Innovative Transportation Systems AG filed suit against Cobasys and its parents for refusing to fill a large, previously agreed-upon, order for large-format NiMH batteries to be used in the Innovan electric vehicle. [ 26 ] In August 2008, Mercedes-Benz sued Cobasys for again refusing to fill a large, previously agreed-upon order for NiMH batteries. [ 29 ] [ 30 ]
Multiple companies have tried to develop NiMH battery technology without making use of Ovonics' patents. Electro Energy Inc. , working with CalCars , converted a Toyota Prius from a hybrid electric vehicle to a PHEV using its own bipolar NiMH batteries. [ 31 ] Plug-In Conversions uses Nilar NiMH batteries and the EAA-PHEV open source control system in its Prius PHEV conversions. These organizations maintain that these developments are allowable because their NiMH battery technologies are not covered by Cobasys' patents. These batteries became commercially available in late 2007. [ 32 ]
On July 28, 2009, Automotive News reported that Cobasys would be bought from Chevron and Energy Conversion Devices by battery maker SB LiMotive , a joint venture of Bosch and Samsung . [ 33 ] At the time of the 2009 Cobasys sale, control of NiMH battery technology transferred back to ECD Ovonics. [ 34 ] In October 2009, ECD Ovonics announced that their next-generation NiMH batteries will provide specific energy and power that are comparable to those of lithium-ion batteries at a cost that is significantly lower than the cost of lithium-ion batteries. [ 35 ]
On February 3, 2010, patent JP2003504507 was refused, hence removing any patent encumbrance in Japan. [ 36 ]
On July 2, 2010, patent US6413670 expired due to lack of fee payments for the 8th year after filing, hence removing any patent encumbrance in the USA. [ 3 ]
On February 14, 2012, BASF announced that it had acquired Ovonic Battery Company from Energy Conversion Devices Inc. But Chevron Corp. held the patent US6969567 for the NiMH multi-cell battery pack for cars [ 37 ] until its expiration on August 23, 2018. [ 38 ] This particular patent's maintenance fees for the 4th (2008) and 8th (2012) years were paid. [ 39 ] As of December 2019, the recorded status was "2019-12-04: Application status is Expired - Fee Related". [ 38 ] | https://en.wikipedia.org/wiki/Patent_encumbrance_of_large_automotive_NiMH_batteries
The paternal age effect is the statistical relationship between the father's age at conception and biological effects on the child. [ 1 ] Such effects can relate to birthweight , congenital disorders, life expectancy, and psychological outcomes. [ 2 ] A 2017 review found that while severe health effects are associated with higher paternal age, the total increase in problems caused by paternal age is low. [ 3 ] Average paternal age at birth reached a low point between 1960 and 1980 in many countries and has been increasing since then, but has not reached historically unprecedented levels. [ 4 ] The rise in paternal age is not seen as a major public health concern. [ 3 ]
The genetic quality of sperm, as well as its volume and motility, may decrease with age, [ 5 ] leading the population geneticist James F. Crow to claim that the "greatest mutational health hazard to the human genome is fertile older males". [ 6 ]
The paternal age effect was first proposed implicitly by physician Wilhelm Weinberg in 1912 [ 7 ] and explicitly by psychiatrist Lionel Penrose in 1955. [ 8 ] DNA-based research started more recently, in 1998, in the context of paternity testing.
Evidence for a paternal age effect has been proposed for several conditions, diseases, and other effects. In many of these, the statistical evidence of association is weak, and the association may be explained by confounding factors or behavioral differences. [ 9 ] [ 3 ] Conditions proposed to show correlation with paternal age include the following: [ 10 ]
Advanced paternal age may be associated with a higher risk for certain single-gene disorders caused by mutations of the FGFR2 , FGFR3 and RET genes. [ 11 ] These conditions are Apert syndrome , Crouzon syndrome , Pfeiffer syndrome , achondroplasia , thanatophoric dysplasia , multiple endocrine neoplasia type 2 , and multiple endocrine neoplasia type 2b . [ 11 ] The most significant effect concerns achondroplasia (a form of dwarfism ), which might occur in about 1 in 1,875 children fathered by men over 50, compared to 1 in 15,000 in the general population. [ 12 ] However, the risk for achondroplasia is still considered clinically negligible. [ 13 ] The FGFR genes may be particularly prone to a paternal age effect due to selfish spermatogonial selection, whereby the influence of spermatogonial mutations in older men is enhanced because cells with certain mutations have a selective advantage over other cells (see § DNA mutations ). [ 14 ]
Several studies have reported that advanced paternal age is associated with an increased risk of miscarriage . [ 15 ] The strength of the association differs between studies. [ 16 ] It has been suggested that these miscarriages are caused by chromosome abnormalities in the sperm of aging men. [ 15 ] An increased risk for stillbirth has also been suggested for pregnancies fathered by men over 45. [ 16 ]
A systematic review published in 2010 concluded that the graph of the risk of low birth weight in infants with paternal age is "saucer-shaped" (U-shaped); that is, the highest risks occur at low and at high paternal ages. [ 17 ] Compared with a paternal age of 25–28 years as a reference group, the odds ratio for low birthweight was approximately 1.1 at a paternal age of 20 and approximately 1.2 at a paternal age of 50. [ 17 ] There was no association of paternal age with preterm births or with small for gestational age births. [ 17 ]
Schizophrenia is associated with advanced paternal age. [ 18 ] [ 19 ] [ 20 ] Some studies examining autism spectrum disorder (ASD) and advanced paternal age have demonstrated an association between the two, although there also appears to be an increase with maternal age . [ 21 ]
In one study, the risk of bipolar disorder , particularly for early-onset disease, is J-shaped, with the lowest risk for children of 20- to 24-year-old fathers, a twofold risk for younger fathers, and a threefold risk for fathers >50 years old. There is no similar relationship with maternal age. [ 22 ] A second study also found a risk of schizophrenia in both fathers above age 50 and fathers below age 25. The risk in younger fathers was noted to affect only male children. [ 23 ]
A 2010 study found the relationship between parental age and psychotic disorders to be stronger with maternal age than paternal age. [ 24 ]
A 2016 review concluded that the mechanism behind the reported associations was still not clear, with evidence both for selection of individuals liable to psychiatric illness into late fatherhood and evidence for causative mutations. The mechanisms under discussion are not mutually exclusive. [ 25 ]
A 2017 review concluded that the vast majority of studies supported a relationship between older paternal age and autism and schizophrenia but that there is less convincing and also inconsistent evidence for associations with other psychiatric illnesses. [ 3 ]
Paternal age may be associated with an increased risk of breast cancer , [ 26 ] but the association is weak and there are confounding effects. [ 10 ]
According to a 2017 review, there is consistent evidence of an increase in the incidence of childhood acute lymphoblastic leukemia with paternal age. Results for associations with other childhood cancers are more mixed (e.g. retinoblastoma ) or generally negative. [ 3 ]
High paternal age has been suggested as a risk factor for type 1 diabetes , [ 27 ] but research findings are inconsistent, and a clear association has not been established. [ 28 ] [ 29 ]
It appears that a paternal-age effect might exist concerning Down syndrome , but it is small when compared to the maternal-age effect . [ 30 ] [ 31 ]
A review in 2005 found a U-shaped relationship between paternal age and low intelligence quotients (IQs). [ 32 ] The highest IQ was found at paternal ages of 25–29; fathers younger than 25 and older than 29 tended to have children with lower IQs. [ 32 ] It also found that "at least a half dozen other studies ... have demonstrated significant associations between paternal age and human intelligence." [ 32 ] A 2009 study examined children at 8 months, 4 years, and 7 years and found that higher paternal age was associated with poorer scores in almost all neurocognitive tests used but that higher maternal age was associated with better scores on the same tests; [ 33 ] this was a reverse effect to that observed in the 2005 review, which found that maternal age began to correlate with lower intelligence at a younger age than paternal age, [ 32 ] however two other past studies were in agreement with the 2009 study's results. [ 24 ] An editorial accompanying the 2009 paper emphasized the importance of controlling for socioeconomic status in studies of paternal age and intelligence. [ 34 ] A 2010 study from Spain also found an association between advanced paternal age and intellectual disability. [ 24 ]
On the other hand, later research concluded that previously reported negative associations might be explained by confounding factors, especially parental intelligence and education. A re-analysis of the 2009 study found that the paternal age effect could be explained by adjusting for maternal education and number of siblings. [ 35 ] A 2012 Scottish study found no significant association between paternal age and intelligence, after adjusting what was initially an inverse-U association for both parental education and socioeconomic status as well as number of siblings. [ 36 ] A 2013 study of half a million Swedish men adjusted for genetic confounding by comparing brothers and found no association between paternal age and offspring IQ. [ 37 ] Another study from 2014 found an initially positive association between paternal age and offspring IQ that disappeared when adjusting for parental IQs. [ 38 ]
A 2008 paper found a U-shaped association between paternal age and the overall mortality rate in children (i.e., mortality rate up to age 18). [ 39 ] Although the relative mortality rates were higher, the absolute numbers were low, because of the relatively low occurrence of genetic abnormality. The study has been criticized for not adjusting for maternal health, which could have a large effect on child mortality. [ 40 ] The researchers also found a correlation between paternal age and offspring death by injury or poisoning, indicating the need to control for social and behavioral confounding factors. [ 41 ]
In 2012, a study showed that greater age at paternity tends to increase telomere length in offspring for up to two generations. Since telomere length affects health and mortality, this may affect the health and aging rate of the offspring. The authors speculated that this effect may provide a mechanism by which populations have some plasticity in adapting longevity to different social and ecological contexts. [ 42 ]
Parents do not decide when to reproduce randomly. This implies that paternal age effects may be confounded by social and genetic predictors of reproductive timing.
A simulation study concluded that reported paternal age effects on psychiatric disorders in the epidemiological literature are too large to be explained only by mutations. They conclude that a model in which parents with a genetic liability to psychiatric illness tend to reproduce later better explains the literature. [ 9 ]
Later age at parenthood is also associated with a more stable family environment, with older parents being less likely to divorce or change partners. [ 43 ] Older parents also tend to occupy a higher socio-economic position and report feeling more devoted to their children and satisfied with their family. [ 43 ] On the other hand, the risk of the father dying before the child becomes an adult increases with paternal age. [ 43 ]
To adjust for genetic liability, some studies compare full siblings. Additionally, studies statistically adjust for some or all of these confounding factors. Using sibling comparisons or adjusting for more covariates frequently changes the direction or magnitude of paternal age effects. For example, one study drawing on Finnish census data concluded that increases in offspring mortality with paternal age could be explained completely by parental loss. [ 44 ] On the other hand, a population-based cohort study drawing on 2.6 million records from Sweden found that risk of attention deficit hyperactivity disorder was only positively associated with paternal age when comparing siblings. [ 45 ]
Several hypothesized chains of causality exist whereby increased paternal age may lead to health effects. [ 16 ] [ 46 ] There are different types of genome mutations, with distinct mutation mechanisms:
Telomeres are repetitive genetic sequences at both ends of each chromosome that protect the structure of the chromosome . [ 47 ] As men age, most telomeres shorten, but sperm telomeres increase in length. [ 16 ] The offspring of older fathers have longer telomeres in both their sperm and white blood cells . [ 16 ] [ 47 ] A large study showed a positive paternal, but no independent maternal age effect on telomere length. Because the study used twins , it could not compare siblings who were discordant for paternal age. It found that telomere length was 70% heritable. [ 48 ] Regarding the mutation of microsatellite DNA, also known as short tandem repeat (STR) DNA, a survey of over 12,000 paternity-tested families shows that the microsatellite DNA mutation rate in both very young teenage fathers and in middle-aged fathers is elevated, while the mother's age has no effect. [ 49 ]
In contrast to oogenesis , the production of sperm cells is a lifelong process. [ 16 ] Each year after puberty, spermatogonia (precursors of the spermatozoa ) divide meiotically about 23 times. [ 46 ] By age 40, the spermatogonia will have undergone about 660 such divisions, compared to 200 at age 20. [ 46 ] Copying errors might sometimes happen during the DNA replication preceding these cell divisions, which may lead to new ( de novo ) mutations in the sperm DNA. [ 14 ]
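These two figures are consistent with the quoted annual rate: taking roughly 200 divisions by age 20 and 23 further divisions per year gives 200 + 23 × (40 − 20) = 200 + 460 = 660 divisions by age 40.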
The selfish spermatogonial selection hypothesis proposes that the influence of spermatogonial mutations in older men is further enhanced because cells with certain mutations have a selective advantage over other cells. [ 46 ] [ 50 ] Such an advantage would allow the mutated cells to increase in number through clonal expansion. [ 46 ] [ 50 ] In particular, mutations that affect the RAS pathway , which regulates spermatogonial proliferation, appear to offer a competitive advantage to spermatogonial cells, while also leading to diseases associated with paternal age. [ 50 ]
During the past two decades evidence has accumulated that pregnancy loss as well as reduced rate of success with assisted reproductive technologies is linked to impaired sperm chromosome integrity and DNA fragmentation . [ 51 ] Advanced paternal age was shown to be associated with a significant increase in DNA fragmentation in a recent systematic review (where 17 out of the 19 studies considered showed such an association). [ 52 ]
The production of sperm cells involves DNA methylation , an epigenetic process that regulates the expression of genes . [ 46 ] Improper genomic imprinting and other errors sometimes occur during this process, which can affect the expression of genes related to certain disorders, increasing the offspring's susceptibility. The frequency of these errors appears to increase with age. This could explain the association between paternal age and schizophrenia. [ 53 ] Paternal age affects offspring's behavior, possibly via an epigenetic mechanism recruiting the transcriptional repressor REST. [ 54 ]
A 2001 review on variation in semen quality and fertility by male age concluded that older men had lower semen volume, lower sperm motility, a decreased percent of normal sperm, as well as decreased pregnancy rates, increased time to pregnancy, and increased infertility at a given point in time. [ 55 ] When controlling for the age of the female partner, comparisons between men under 30 and men over 50 found relative decreases in pregnancy rates between 23% and 38%. [ 55 ]
A 2014 review indicated that increasing male age is associated with declines in many semen traits, including semen volume and percentage motility. However, this review also found that sperm concentration did not decline as male age increased. [ 56 ]
Some classify the paternal age effect as one of two different types. One effect is directly related to advanced paternal age and autosomal mutations in the offspring. The other effect is an indirect effect related to mutations on the X chromosome which are passed to daughters who are then at risk for having sons with X-linked diseases. [ 57 ]
Birth defects were acknowledged in the children of older men and women even in antiquity. In book six of Plato 's Republic , Socrates states that men and women should have children in the "prime of their life" which is stated to be twenty in a woman and thirty in a man. He states that in his proposed society men should be forbidden to father children in their fifties and that the offspring of such unions should be considered "the offspring of darkness and strange lust." He suggests appropriate punishments be administered to the offenders and their offspring. [ 58 ] [ 59 ]
In 1912, Wilhelm Weinberg , a German physician, was the first person to hypothesize that non-inherited cases of achondroplasia could be more common in last-born children than in children born earlier to the same set of parents. [ 60 ] Weinberg "made no distinction between paternal age, maternal age and birth order " in his hypothesis. In 1953, Krooth used the term "paternal age effect" in the context of achondroplasia, but mistakenly thought the condition represented a maternal age effect. [ 60 ] [ 61 ] : 375 The paternal age effect for achondroplasia was described by Lionel Penrose in 1955. At a DNA level, the paternal age effect was first reported in 1998 in routine paternity tests. [ 62 ]
Scientific interest in paternal age effects is relevant because the average paternal age increased in countries such as the United Kingdom, [ 63 ] Australia [ 64 ] and Germany, [ 65 ] and because birth rates for fathers aged 30–54 years have risen between 1980 and 2006 in the United States. [ 66 ] Possible reasons for the increases in average paternal age include increasing life expectancy and increasing rates of divorce and remarriage. [ 65 ] Despite recent increases in average paternal age, however, the oldest father documented in the medical literature was born in 1840: George Isaac Hughes was 94 years old at the time of the birth of his son by his second wife; a 1935 article in the Journal of the American Medical Association stated that his fertility "has been definitely and affirmatively checked up medically," and he fathered a daughter in 1936 at age 96. [ 65 ] [ 67 ] [ 68 ]
The American College of Medical Genetics recommends obstetric ultrasonography at 18–20 weeks gestation in cases of advanced paternal age to evaluate fetal development, but it notes that this procedure "is unlikely to detect many of the conditions of interest." They also note that there is no standard definition of advanced paternal age ; [ 11 ] it is commonly defined as age 40 or above, but the effect increases linearly with paternal age, rather than appearing at any particular age. [ 69 ] According to a 2006 review, any adverse effects of advanced paternal age "should be weighed up against potential social advantages for children born to older fathers who are more likely to have progressed in their career and to have achieved financial security." [ 63 ]
Geneticist James F. Crow described mutations that have a direct visible effect on the child's health and also mutations that can be latent or have minor visible effects on the child's health; many such minor or latent mutations allow the child to reproduce, but cause more serious problems for grandchildren, great-grandchildren and later generations. [ 6 ] | https://en.wikipedia.org/wiki/Paternal_age_effect |
In biology , paternal care is parental investment provided by a male to his own offspring . It is a complex social behaviour in vertebrates associated with animal mating systems, life history traits, and ecology. [ 1 ] Paternal care may be provided in concert with the mother (biparental care) or, more rarely, by the male alone (so called exclusive paternal care).
The provision of care, by either males or females, is presumed to increase growth rates, quality, and/or survival of young, and hence ultimately increase the inclusive fitness of parents. [ 2 ] [ 3 ] [ 4 ] In a variety of vertebrate species (e.g., about 80% of birds [ 5 ] and about 6% of mammals), [ 6 ] both males and females invest heavily in their offspring. Many of these biparental species are socially monogamous , so individuals remain with their mate for at least one breeding season.
Exclusive paternal care has evolved multiple times in a variety of organisms, including invertebrates, fishes, and amphibians. [ 7 ] [ 8 ] [ 9 ]
Male mammals employ different behaviors to enhance their reproductive success (e.g. courtship displays , mate choice ). However, the benefits of paternal care have rarely been studied in mammals, largely because only 5-10% of mammals exhibit such care (mostly present in primates , rodents and canids ). [ 11 ] [ 12 ] In those species in which males provide extensive care for their offspring , indirect evidence suggests that its costs can be substantial. [ 13 ] [ 14 ] For example, mammalian fathers that care for their young may undergo changes in body mass and an increase in production of a number of costly hormones (e.g. androgens , glucocorticoids , leptin ). [ 15 ] [ 16 ] [ 17 ] Nonetheless, there is evidence suggesting that across mammals, when males carry and groom their offspring their female partners' fecundity increases, and when males provision the females, litter sizes tend to be larger. [ 18 ]
Human cultures and societies vary widely in the expression of paternal care. Some cultures recognize paternal care via celebration of Father's Day . Human paternal care is a derived characteristic (evolved in humans or our recent ancestors) and one of the defining characteristics of Homo sapiens . [ 19 ] Different aspects of human paternal care (direct, indirect, fostering social or moral development) may have evolved at different points in our history, and together they form a unique suite of behaviors as compared with the great apes .
One study of humans has found evidence suggesting a possible evolutionary trade-off between mating success and parenting involvement; specifically, fathers with smaller testes tend to be more involved in care of their children . [ 20 ]
Research on the effects of paternal care on human happiness have yielded conflicting results. However, one recent study concluded that fathers generally report higher levels of happiness, positive emotion, and meaning in life as compared with non-fathers. [ 21 ]
According to the United States Census Bureau, approximately one third of children in the U.S. grow up without their biological father in their home. Numerous studies have documented negative consequences of being raised in a home that lacks a father, including increased likelihood of living in poverty, having behavioral problems, committing crimes, spending time in prison, abusing drugs or alcohol, becoming obese, and dropping out of school. [ 22 ] [ 23 ]
In non-human primates , paternal investment is often dependent on the type of mating system exhibited by each species. Mating systems influence paternity certainty and the likelihood that a male is providing care towards his own biological offspring. Paternal certainty is high in monogamous pair-bonded species and males are less likely to be at risk for caring for unrelated offspring and not contributing to their own fitness. [ 24 ] [ 25 ] In contrast, polygamous primate societies create paternity uncertainty and males are more at risk of providing care for unrelated offspring and compromising their own fitness. [ 26 ] [ 27 ] Male non-human primates whose care is motivated by biological paternity use past mating history and phenotypic matching in order to recognize their own offspring. [ 28 ] [ 29 ] Comparing male care efforts exhibited by the same species can provide insight into the significant relationship between paternity certainty and the amount of paternal care exhibited by a male. For example, Siamangs ( Symphalangus syndactylus ) utilize both polyandrous and monogamous mating systems, but it was found that monogamous males are more likely to carry infants and contribute to parental duties compared to those in promiscuous mating systems. [ 30 ] Studies in Primatology have used primate mating systems and social organization to help theorize the evolutionary significance of paternal care in Primates.
Strepsirrhini is a suborder of the order Primates and includes lemurs, lorises, and bush babies. In this sub-order, males exhibit the lowest levels of paternal care for infants among primates. [ 25 ] Examples of observed male care in this group include playing, grooming, and occasionally transporting infants. Males have also been observed interacting with infants while mothers park them and temporarily leave in order to feed. [ 31 ] [ 25 ] When female strepsirrhines park or nest their infants in nearby trees, males frequently use this as an opportunity to play with the unattended infants. [ 31 ] In this suborder, male care and affection is directed toward multiple infants including non-biological offspring, and young strepsirrhines can be found interacting with various males. [ 25 ] Paternal care does not influence infant growth rates or shorten inter-birth intervals of mothers as it can in haplorrhines. [ 31 ] Strepsirrhini males exhibit the lowest intensity of care towards infants in non-human primates.
Strepsirrhines are constrained by their life history traits and reproductive rates are not flexible within this group of primates. This group of primates is programmed to give birth when food is abundant, resulting in strict seasonal breeding periods. [ 31 ] Shortening inter-birth intervals, which is theorized to be a possible outcome of increased male care, is not beneficial for Strepsirrhine mothers and can decrease infant survival. [ 31 ] Studies also show that paternity can be highly skewed in Strepsirrhines, with only one or a few males fathering all offspring within a single group. [ 32 ] Instead of relying on a singular paternal figure, mothers in this group rely on alloparenting from other group members. Infant parking and strict reproductive schedules are more beneficial for successful infant development in Strepsirrhines.
Haplorhini , a sub-order of the order Primate, includes tarsiers, New World Monkeys, Old World monkeys, apes, and humans. Haplorrhini is broken into two sister groups which are commonly distinguished by the characteristic of the primate nose: Catarrhini (narrow turned down nose) and Platyrrhini (flat nose). Paternal care is highly variable between the two sister groups and the species within them. [ citation needed ]
Catarrhini is composed of Old World Monkeys (Cercopithecidae ) and apes (Hylobatidae and Hominoidea ). [ citation needed ] These primates are geographically located in Africa, Asia, and Madagascar.
Cercopithecines , the largest primate family, include primates species such as baboons , macaques , colobus , and vervet monkeys . [ citation needed ]
Apes consist of species of gibbons , siamangs , bonobos , chimpanzees , gorillas , orangutans and humans . [ citation needed ]
Catarrhines (non-human) are often organized into a multimale-multifemale social systems and utilize polygamous mating systems which results in paternity uncertainty. It is predicted that males in promiscuous mating systems do not engage in infant care due to the high costs of caring for an infant and missing opportunities to mate with receptive females. [ 27 ] Male care in this group of primates is often portrayed through actions such as grooming, carrying, tolerance of the infant, as well as protection against agonistic interactions and infanticide. [ 33 ] [ 27 ] High ranking males can also provide access to food for developing infants. [ 34 ] Direct care such as grooming and playing is not as common compared to male intervention on behalf of the infant when it is being harassed by conspecifics. [ 27 ]
In cercopithecines , male involvement in the infant's interactions with others is common in many species of baboons but between species paternal care is not always biased towards biological offspring. Male Savannah baboons ( Papio cynocephalus ) direct care towards their own biological offspring. [ 35 ] [ 33 ] Males in this species are more likely to intervene and protect infants from harassment by other group members when the infant is predicted to be their own. Studies have shown that male Savannah baboons selectively choose to remain in closer proximity to their own offspring and engage in long-term investment beyond early infancy, when the infant is at greatest risk for infanticide . [ 36 ] [ 33 ] Infants receiving paternal investment in Savannah baboons have shown enhanced fitness and accelerated maturation through males creating a safe zone for infants to exist in. [ 35 ] [ 36 ] Similarly to Savannah Baboons, Yellow baboon ( Papio cynocephalus ) males provide elevated care for their own offspring. Long-term care and investment beyond early infancy is more closely linked to paternity in this species, affecting infant growth and development. [ 28 ] Male baboons also direct care towards unrelated offspring based on male affiliations with the mothers. Male baboons within a social group often form “friendships” with females, which begin around the birth of the female's infant and have been observed to end abruptly if the infant dies. [ 37 ] [ 26 ] [ 33 ] Males establish associations with females with which they have previously mated, resulting in affiliative behaviour and protection towards those females' offspring. Relationships created by male and female members are significant for infant survival in Chacma baboons ( Papio ursinus ) because the risk of infanticide in early infancy is higher in this species. [ 37 ] [ 26 ] Paternal care in the form of protection for the infant is therefore more beneficial than long term investment in Chacma baboons and is believed to be directed towards both biological and non-biological infants in the group.
Similarly to baboons, paternal roles and the underlying mechanisms as to why paternal care evolved vary within macaque species. In Sulawesi crested macaques ( Macaca nigra ) both male rank and the relationship to the mother predicted male care towards an infant instead of true biological paternity. [ 27 ] In both Sulawesi and Barbary macaques ( Macaca sylvanus ) males adopted a “care-then-mate” strategy, in which care is provided to infants regardless of paternity in order for the male to increase future mating opportunities with the mother. [ 28 ] [ 27 ] In both species, it was observed that male macaques are more likely to initiate care towards and positively interact with the infant in the presence of the mother. [ 27 ] In Assamese macaques ( Macaca assamensis ) biological paternity was the most significant predictor of male affiliations with infants and therefore males biased care towards infants presumed to be their own. [ 38 ] [ 27 ] Observers found that Assamese males were more likely to engage with and provide care for infants in the absence of their mothers, reducing the likelihood that care provided to infants will impress the mother and secure access to mating possibilities. [ 38 ] In Rhesus macaques , males providing protection and greater access to food resulted in higher weight gain for both male and female infants. [ 34 ] This had a positive effect on infant survival and was significant in the first year of infancy when the risk of infanticide is the highest. [ 27 ] [ 34 ]
Chimpanzees ( Pan troglodytes ) are organized into fission-fusion social groups and provide an example of a polygamous mating society. Male chimpanzees often engage with infants in the form of grooming, playing, and providing protection towards other group members. In both Western and Eastern chimpanzees it was found that males were more likely to engage with their own biological offspring meaning that male care is directed by paternity in this species. [ 28 ] [ 29 ] In both chimpanzee and bonobo social groups, high ranking alpha males sire approximately half of the offspring within their social group. [ 39 ] [ 40 ] More research needs to be done addressing how reproductive skew affects paternal care and infant-male relationships in non-human primates including chimpanzees and bonobos.
Platyrrhini is a sub-order of the order Primate and are commonly referred to as the New World Monkeys . [ 41 ] These primates occupy Central and South America, and Mexico. This group is broken into five families, range in body size, and include species such as spider monkeys , capuchins , and howler monkeys .
Among primate species, the highest levels of male care found in New World monkeys are observed in Owl monkeys ( Aotus azarai ) and Titi monkeys ( Callicebus caligatus ). In both of these species, males and females are monogamous, pair-bonded, and exhibit bi-parental care for their offspring. [ 25 ] [ 42 ] [ 24 ] The social group in both these species consists of female and male parents along with their offspring. [ 43 ] [ 24 ] Males in these species serve as the primary caregivers and play a major role in infant survival. [ 24 ]
Male Titi monkeys are more involved than the mother in all aspects of infant care except nursing, and engage in more social activities such as grooming, food sharing, play, and transportation of the infant. [ 42 ] [ 24 ] The bond between an infant and its father is established right after birth and maintained into adolescence, making the father the infant's predominant attachment figure. Similarly, the male Owl monkey acts as the main caregiver and is crucial to the survival of his offspring. If a female gives birth to twins, the male is still responsible for transporting both infants. [ 24 ] In the absence of a father, infant mortality increases in both these species and it is unlikely that the infant will survive. One study found that the replacement of the male acting in the role of the father resulted in higher mortality during infancy, emphasizing the importance of the social bond created between father and offspring at birth. [ 25 ]
In White‐faced Capuchins ( Cebus capucinus ) one study found that parental care was exhibited in the form of playful behaviour, proximity to, inspection of, and collecting discarded food items from infants as determined by male rank and dominance status rather than biological relatedness to the infant. [ 44 ] Scientists believe that future research on kin recognition needs to be done on capuchins to determine if males choose to bias their care as well as in other non-human primates relying on phenotypic matching to distinguish biological offspring. [ 28 ] [ 44 ]
The Theory of Paternal Investment : Differences in infant care between sexes stem from females investing more time and energy in their offspring than males, while males compete with one another for access to females. [ 27 ] Although paternal care is rare among mammals, males across many primate species still play a paternal role in infant care.
The Paternal Care hypothesis : Paternal care and investment will be directed towards biological offspring, increasing the infant's chance of survival, and therefore increasing the male's own fitness . [ 33 ] [ 27 ] This hypothesis requires the male to use recognition and behavioural cues to distinguish his own offspring from other infants. [ 29 ] Paternal uncertainty is high in multimale-multifemale primate groups so males must use these cues to recognize and bias care towards their own offspring. This allows males to provide both short and long-term investment for infants. [ 26 ] Primates living in monogamous pairs or single-male groups exhibit high paternity certainty and are consistent with the Paternal Care hypothesis.
The Mating Effort hypothesis : Males provide care for infants in order to increase mating opportunities with females. [ 27 ] [ 38 ] This means that males are more likely to engage in affiliative behaviours with the infant in the presence of the mother as a form of male mating effort in order to enhance future reproductive success. [ 27 ] Under this hypothesis, male care is independent of genetics and is thought to have evolved independently of paternity.
The Maternal Relief hypothesis : Males provide care to infants to help reduce the reproductive burdens of the female, ultimately resulting in shorter inter-birth intervals and more successful offspring. [ 27 ] This stems from the male relieving the female of some of her parental duties in order to keep her resources from becoming depleted and subsequently allowing her to produce high quality milk for the infant. [ 25 ] Similarly to the mating effort hypothesis, the maternal relief hypothesis is independent of genetics and does not require the male to be the biological father to take part in infant care.
Several species of rodents have been studied as models of paternal care, including prairie voles ( Microtus ochrogaster ), Campbell's dwarf hamster , the Mongolian gerbil , and the African striped mouse . The California mouse ( Peromyscus californicus ) is a monogamous rodent that exhibits extensive and essential paternal care, and hence has been studied as a model organism for this phenomenon. [ 45 ] [ 46 ] One study of this species found that fathers had larger hindlimb muscles than did non-breeding males. [ 14 ] Quantitative genetic analysis has identified several genomic regions that affect paternal care. [ 47 ]
Fathers contribute equally with mothers to the care of offspring in as many as 90% of bird species, sometimes including incubating the eggs . Most paternal care is associated with biparental care in socially monogamous mating systems (about 81% of species), but in approximately 1% of species, fathers provide all care after eggs are laid. [ 5 ] The unusually high incidence of paternal care in birds compared to other vertebrate taxa is often assumed to stem from the extensive resource requirements for production of flight-capable offspring. By contrast, in bats (the other extant flying vertebrate lineage), care of offspring is provided by females (although males may help guard pups in some species [ 48 ] ). In contrast to the large clutch sizes found in many bird species with biparental care, bats typically produce single offspring, which may be a limitation related to lack of male help. It has been suggested, though not without controversy, that paternal care is the ancestral form of parental care in birds. [ 9 ]
Paternal care occurs in a number of species of anuran amphibians, [ 49 ] including glass frogs .
According to the Encyclopedia of Fish Physiology: From Genome to Environment :
About 30% of the 500 known fish families show some form of parental care, and most often (78% of the time) care is provided by only one parent (usually the male). Male care (50%) is much more common than female care (30%) with biparental care accounting for about 20%, although a more recent comparative analysis suggests that male care may be more common (84%). [ 50 ]
There are three common theoretical explanations for the high levels of paternal care in fish, with the third one currently favoured. First, external fertilization protects against paternity loss; however, sneaker tactics and strong sperm competition have evolved many times. Second, the earlier release of eggs than sperm gives females an opportunity to flee; however, in many paternal care species, eggs and sperm are released simultaneously. Third, if a male is already protecting a valuable spawning territory in order to attract females, defending young adds minimal parental investment, giving males a lower relative cost of parental care. [ 50 ]
One well-known example of paternal care is in seahorses , where males brood the eggs in a brood pouch until they are ready to hatch.
Males from the Centrarchidae (sunfish) family exhibit paternal parental care of their eggs and fry through a variety of behaviors such as nest guarding and nest fanning (aerating eggs). [ 51 ]
In jawfish , the female lays the eggs and the male then takes them in his mouth. A male can have up to 400 eggs in his mouth at one time. The male can't feed while he hosts the young, but as the young get older, they spend more time out of the mouth. [ 52 ] This is sometimes termed mouthbrooding .
During the breeding season, male three-spined sticklebacks defend nesting territories. Males attract females to spawn in their nests and defend their breeding territory from intruders and predators. After spawning, the female leaves the male's territory and the male is solely responsible for the care of the eggs. During the ~6-day incubation period, the male 'fans' (oxygenates) the eggs, removes rotten eggs and debris, and defends the territory. Even after embryos hatch, father sticklebacks continue to tend their newly hatched offspring for ~7 days, chasing and retrieving fry that stray from the nest and spitting them back into the nest. [ 53 ]
Paternal care is rare in arthropods , [ 54 ] but occurs in some species , including the giant water bug [ 55 ] [ 56 ] and the arachnid Iporangaia pustulosa , a harvestman . [ 57 ] In several species of crustaceans , males provide care of offspring by building and defending burrows or other nest sites. [ 58 ] Exclusive paternal care, where males provide the sole investment after egg-laying, is the rarest form, and is known in only 13 taxa: giant water bugs, sea spiders , two genera of leaf-footed bugs , two genera of assassin bugs , three genera of phlaeothripid thrips , three genera of harvestmen, and in millipedes of the family Andrognathidae . [ 59 ]
Mathematical models related to the prisoner's dilemma suggest that when female reproductive costs are higher than male reproductive costs, males cooperate with females even when they do not reciprocate. In this view, paternal care is an evolutionary achievement that compensates for the higher energy demands that reproduction typically involves for mothers. [ 60 ] [ 61 ]
Other models suggest that basic life-history differences between males and females are adequate to explain the evolutionary origins of maternal, paternal, and bi-parental care. Specifically, paternal care is more likely if male adult mortality is high, and maternal care is more likely to evolve if female adult mortality is high. [ 62 ] Basic life-history differences between the sexes can also cause evolutionary transitions among different sex-specific patterns of parental care. [ 63 ]
Care by fathers can have important consequences for survival and development of offspring in both humans [ 64 ] and other species. Mechanisms underlying such effects may include protecting offspring from predators or environmental extremes (e.g., heat or cold), feeding them or, in some species, direct teaching of skills. Moreover, some studies indicate a potential epigenetic germline inheritance of paternal effects. [ 65 ]
The effects of paternal care on offspring can be studied in various ways. One way is to compare species that vary in the degree of paternal care. For example, an extended duration of paternal care occurs in the gentoo penguin , as compared with other Pygoscelis species. It was found that their fledging period, the time between a chick's first trip to sea and its absolute independence from the group, was longer than other penguins of the same genus. The authors hypothesized that this was because it allowed chicks to better develop their foraging skills before becoming completely independent from their parents. By doing so, a chick may have a higher chance of survival and increase the population's overall fitness. [ 66 ]
The proximate mechanisms of paternal care are not well understood for any organism . In vertebrates, at the level of hormonal control, vasopressin apparently underlies the neurochemical basis of paternal care; prolactin and testosterone may also be involved. As with other behaviors that affect Darwinian fitness , reward pathways [ 67 ] in the brain may reinforce the expression of paternal care and may be involved in the formation of attachment bonds .
The mechanisms that underlie the onset of parental behaviors in female mammals have been characterized in a variety of species. In mammals, females undergo endocrine changes during gestation and lactation that "prime" mothers to respond maternally towards their offspring. [ 68 ] [ 69 ]
Paternal males do not undergo these same hormonal changes and so the proximate causes of the onset of parental behaviors must differ from those in females. There is little consensus regarding the processes by which mammalian males begin to express parental behaviors. [ 16 ] In humans, evidence ties oxytocin to sensitive care-giving in both women and men, and with affectionate infant contact in women and stimulatory infant contact in men. In contrast, testosterone decreases in men who become involved fathers and testosterone may interfere with aspects of paternal care. [ 70 ]
Placentophagia (the behavior of ingesting the afterbirth after parturition ) has been proposed to have physiological consequences that could facilitate a male's responsiveness to offspring [ 71 ] [ 72 ] [ 73 ] [ 74 ] Non- genomic transmission of paternal behavior from fathers to their sons has been reported to occur in laboratory studies of the biparental California mouse , but whether this involves ( epigenetic ) modifications or other mechanisms is not yet known. [ 75 ] | https://en.wikipedia.org/wiki/Paternal_care |
The Paternò–Büchi reaction , named after Emanuele Paternò and George Büchi , who established its basic utility and form, [ 1 ] [ 2 ] is a photochemical reaction , specifically a 2+2 photocycloaddition , which forms four-membered oxetane rings from a photochemically excited carbonyl compound reacting with an alkene . [ 3 ]
With substrates benzaldehyde and 2-methyl-2-butene the reaction product is a mixture of structural isomers :
Another substrate set is benzaldehyde and furan [ 4 ] or heteroaromatic ketones and fluorinated alkenes. [ 5 ]
The alternative strategy for the above reaction is called the Transposed Paternò−Büchi reaction . | https://en.wikipedia.org/wiki/Paternò–Büchi_reaction |
PathVisio is a free open-source pathway analysis and drawing software. It allows drawing, editing, and analyzing biological pathways .
Experimental data can be visualized on pathways, making it possible to find relevant pathways that are over-represented in a data set. [ 1 ] [ 2 ] [ 3 ]
PathVisio provides a basic set of features for pathway drawing, analysis and visualization. [ 4 ] [ 5 ] Additional features are available as plugins.
PathVisio was created primarily at Maastricht University and Gladstone Institutes . [ 6 ] The software is developed in Java and is also used as an applet within the WikiPathways framework. [ 7 ] Starting from version 3.0 (released in 2012), plugins are OSGi compliant and a plugin directory describing them was developed.
In 2015 version 3.2 was released. This was the first version signed with a certificate issued by a certification authority. It also resolved many of the runtime issues introduced by the new security rules of Java 1.7 and 1.8.
Since 2013 a JavaScript version (PVJS) has been under development to replace the applet. Since 2015 it has also allowed small edits, and it is intended to become a full editor. | https://en.wikipedia.org/wiki/PathVisio
In mathematics , a path in a topological space X {\displaystyle X} is a continuous function from a closed interval into X . {\displaystyle X.}
Paths play an important role in the fields of topology and mathematical analysis .
For example, a topological space for which there exists a path connecting any two points is said to be path-connected . Any space may be broken up into path-connected components . The set of path-connected components of a space X {\displaystyle X} is often denoted π 0 ( X ) . {\displaystyle \pi _{0}(X).}
One can also define paths and loops in pointed spaces , which are important in homotopy theory . If X {\displaystyle X} is a topological space with basepoint x 0 , {\displaystyle x_{0},} then a path in X {\displaystyle X} is one whose initial point is x 0 {\displaystyle x_{0}} . Likewise, a loop in X {\displaystyle X} is one that is based at x 0 {\displaystyle x_{0}} .
A curve in a topological space X {\displaystyle X} is a continuous function f : J → X {\displaystyle f:J\to X} from a non-empty and non-degenerate interval J ⊆ R . {\displaystyle J\subseteq \mathbb {R} .} A path in X {\displaystyle X} is a curve f : [ a , b ] → X {\displaystyle f:[a,b]\to X} whose domain [ a , b ] {\displaystyle [a,b]} is a compact non-degenerate interval (meaning a < b {\displaystyle a<b} are real numbers ), where f ( a ) {\displaystyle f(a)} is called the initial point of the path and f ( b ) {\displaystyle f(b)} is called its terminal point .
A path from x {\displaystyle x} to y {\displaystyle y} is a path whose initial point is x {\displaystyle x} and whose terminal point is y . {\displaystyle y.} Every non-degenerate compact interval [ a , b ] {\displaystyle [a,b]} is homeomorphic to [ 0 , 1 ] , {\displaystyle [0,1],} which is why a path is sometimes, especially in homotopy theory, defined to be a continuous function f : [ 0 , 1 ] → X {\displaystyle f:[0,1]\to X} from the closed unit interval I := [ 0 , 1 ] {\displaystyle I:=[0,1]} into X . {\displaystyle X.}
An arc or C 0 -arc in X {\displaystyle X} is a path in X {\displaystyle X} that is also a topological embedding .
Importantly, a path is not just a subset of X {\displaystyle X} that "looks like" a curve , it also includes a parameterization . For example, the maps f ( x ) = x {\displaystyle f(x)=x} and g ( x ) = x 2 {\displaystyle g(x)=x^{2}} represent two different paths from 0 to 1 on the real line.
A loop in a space X {\displaystyle X} based at x ∈ X {\displaystyle x\in X} is a path from x {\displaystyle x} to x . {\displaystyle x.} A loop may be equally well regarded as a map f : [ 0 , 1 ] → X {\displaystyle f:[0,1]\to X} with f ( 0 ) = f ( 1 ) {\displaystyle f(0)=f(1)} or as a continuous map from the unit circle S 1 {\displaystyle S^{1}} to X . {\displaystyle X.}
This is because S 1 {\displaystyle S^{1}} is the quotient space of I = [ 0 , 1 ] {\displaystyle I=[0,1]} when 0 {\displaystyle 0} is identified with 1. {\displaystyle 1.} The set of all loops in X {\displaystyle X} forms a space called the loop space of X . {\displaystyle X.}
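A standard illustrative example (not specific to this article) is the unit circle traversed once in the plane: the map f ( t ) = ( cos ⁡ 2 π t , sin ⁡ 2 π t ) {\displaystyle f(t)=(\cos 2\pi t,\sin 2\pi t)} defined on [ 0 , 1 ] {\displaystyle [0,1]} is a loop in R 2 {\displaystyle \mathbb {R} ^{2}} based at ( 1 , 0 ) {\displaystyle (1,0)} , since f ( 0 ) = f ( 1 ) = ( 1 , 0 ) . {\displaystyle f(0)=f(1)=(1,0).}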
Paths and loops are central subjects of study in the branch of algebraic topology called homotopy theory . A homotopy of paths makes precise the notion of continuously deforming a path while keeping its endpoints fixed.
Specifically, a homotopy of paths, or path-homotopy , in X {\displaystyle X} is a family of paths f t : [ 0 , 1 ] → X {\displaystyle f_{t}:[0,1]\to X} indexed by I = [ 0 , 1 ] {\displaystyle I=[0,1]} such that the endpoints f t ( 0 ) = x 0 {\displaystyle f_{t}(0)=x_{0}} and f t ( 1 ) = x 1 {\displaystyle f_{t}(1)=x_{1}} are fixed (independent of t {\displaystyle t} ), and the associated map F : [ 0 , 1 ] × [ 0 , 1 ] → X {\displaystyle F:[0,1]\times [0,1]\to X} given by F ( s , t ) = f t ( s ) {\displaystyle F(s,t)=f_{t}(s)} is continuous.
The paths f 0 {\displaystyle f_{0}} and f 1 {\displaystyle f_{1}} connected by a homotopy are said to be homotopic (or more precisely path-homotopic , to distinguish between the relation defined on all continuous functions between fixed spaces). One can likewise define a homotopy of loops keeping the base point fixed.
The relation of being homotopic is an equivalence relation on paths in a topological space. The equivalence class of a path f {\displaystyle f} under this relation is called the homotopy class of f , {\displaystyle f,} often denoted [ f ] . {\displaystyle [f].}
One can compose paths in a topological space in the following manner. Suppose f {\displaystyle f} is a path from x {\displaystyle x} to y {\displaystyle y} and g {\displaystyle g} is a path from y {\displaystyle y} to z {\displaystyle z} . The path f g {\displaystyle fg} is defined as the path obtained by first traversing f {\displaystyle f} and then traversing g {\displaystyle g} :
( f g ) ( s ) = { f ( 2 s ) 0 ≤ s ≤ 1/2 g ( 2 s − 1 ) 1/2 ≤ s ≤ 1 {\displaystyle (fg)(s)={\begin{cases}f(2s)&0\leq s\leq {\tfrac {1}{2}}\\g(2s-1)&{\tfrac {1}{2}}\leq s\leq 1\end{cases}}}
Clearly path composition is only defined when the terminal point of f {\displaystyle f} coincides with the initial point of g . {\displaystyle g.} If one considers all loops based at a point x 0 , {\displaystyle x_{0},} then path composition is a binary operation .
Path composition, whenever defined, is not associative due to the difference in parametrization. However it is associative up to path-homotopy. That is, [ ( f g ) h ] = [ f ( g h ) ] . {\displaystyle [(fg)h]=[f(gh)].} Path composition defines a group structure on the set of homotopy classes of loops based at a point x 0 {\displaystyle x_{0}} in X . {\displaystyle X.} The resultant group is called the fundamental group of X {\displaystyle X} based at x 0 , {\displaystyle x_{0},} usually denoted π 1 ( X , x 0 ) . {\displaystyle \pi _{1}\left(X,x_{0}\right).}
In situations calling for associativity of path composition "on the nose," a path in X {\displaystyle X} may instead be defined as a continuous map from an interval [ 0 , a ] {\displaystyle [0,a]} to X {\displaystyle X} for any real a ≥ 0. {\displaystyle a\geq 0.} (Such a path is called a Moore path .) A path f {\displaystyle f} of this kind has a length | f | {\displaystyle |f|} defined as a . {\displaystyle a.} Path composition is then defined as before with the following modification:
( f g ) ( s ) = { f ( s ) 0 ≤ s ≤ | f | g ( s − | f | ) | f | ≤ s ≤ | f | + | g | {\displaystyle (fg)(s)={\begin{cases}f(s)&0\leq s\leq |f|\\g(s-|f|)&|f|\leq s\leq |f|+|g|\end{cases}}}
Whereas with the previous definition, f , {\displaystyle f,} g {\displaystyle g} , and f g {\displaystyle fg} all have length 1 {\displaystyle 1} (the length of the domain of the map), this definition makes | f g | = | f | + | g | . {\displaystyle |fg|=|f|+|g|.} What made associativity fail for the previous definition is that although ( f g ) h {\displaystyle (fg)h} and f ( g h ) {\displaystyle f(gh)} have the same length, namely 1 , {\displaystyle 1,} the midpoint of ( f g ) h {\displaystyle (fg)h} occurred between g {\displaystyle g} and h , {\displaystyle h,} whereas the midpoint of f ( g h ) {\displaystyle f(gh)} occurred between f {\displaystyle f} and g {\displaystyle g} . With this modified definition ( f g ) h {\displaystyle (fg)h} and f ( g h ) {\displaystyle f(gh)} have the same length, namely | f | + | g | + | h | , {\displaystyle |f|+|g|+|h|,} and the same midpoint, found at ( | f | + | g | + | h | ) / 2 {\displaystyle \left(|f|+|g|+|h|\right)/2} in both ( f g ) h {\displaystyle (fg)h} and f ( g h ) {\displaystyle f(gh)} ; more generally they have the same parametrization throughout.
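As a small, self-contained sketch (the code and the example paths are illustrative, not from the article), the following Python treats paths as plain functions and contrasts unit-interval composition, where (fg)h and f(gh) differ as functions, with Moore-path composition, where the two bracketings agree:

```python
# Sketch comparing unit-interval path composition with Moore-path composition.
# Paths are plain Python functions; composition on [0, 1] reparametrizes
# (so (fg)h and f(gh) differ as functions), while Moore composition keeps
# each factor's own parametrization (so the two bracketings coincide).
def compose_unit(f, g):
    """Concatenate paths f, g : [0,1] -> X, each traversed at double speed."""
    return lambda s: f(2 * s) if s <= 0.5 else g(2 * s - 1)

def compose_moore(f, lf, g, lg):
    """Concatenate Moore paths f : [0,lf] -> X and g : [0,lg] -> X."""
    return (lambda s: f(s) if s <= lf else g(s - lf)), lf + lg

if __name__ == "__main__":
    # Three example paths along the real line: [0,1], [1,3], [3,6].
    f = lambda t: t            # from 0 to 1
    g = lambda t: 1 + 2 * t    # from 1 to 3
    h = lambda t: 3 + 3 * t    # from 3 to 6
    a = compose_unit(compose_unit(f, g), h)   # (fg)h on [0,1]
    b = compose_unit(f, compose_unit(g, h))   # f(gh) on [0,1]
    print(a(0.25), b(0.25))    # 1.0 vs 0.5 -- same endpoints, different parametrization
    fm, lm = compose_moore(*compose_moore(f, 1, g, 1), h, 1)
    gm, ln = compose_moore(f, 1, *compose_moore(g, 1, h, 1))
    print(fm(1.5), gm(1.5), lm, ln)           # identical: 2.0 2.0 3 3
```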
There is a categorical picture of paths which is sometimes useful. Any topological space X {\displaystyle X} gives rise to a category where the objects are the points of X {\displaystyle X} and the morphisms are the homotopy classes of paths. Since any morphism in this category is an isomorphism , this category is a groupoid called the fundamental groupoid of X . {\displaystyle X.} Loops in this category are the endomorphisms (all of which are actually automorphisms ). The automorphism group of a point x 0 {\displaystyle x_{0}} in X {\displaystyle X} is just the fundamental group based at x 0 {\displaystyle x_{0}} . More generally, one can define the fundamental groupoid on any subset A {\displaystyle A} of X , {\displaystyle X,} using homotopy classes of paths joining points of A . {\displaystyle A.} This is convenient for Van Kampen's Theorem . | https://en.wikipedia.org/wiki/Path_(topology) |
Path dependence is a concept in the social sciences , referring to processes where past events or decisions constrain later events or decisions. [ 1 ] [ 2 ] It can be used to refer to outcomes at a single point in time or to long-run equilibria of a process. [ 3 ] Path dependence has been used to describe institutions, technical standards , patterns of economic or social development, organizational behavior , and more. [ 4 ] [ 1 ]
In common usage, the phrase can imply two types of claims. The first is the broad concept that "history matters", often articulated to challenge explanations that pay insufficient attention to historical factors. [ 1 ] [ 5 ] [ 6 ] This claim can be formulated simply as "the future development of an economic system is affected by the path it has traced out in the past" [ 7 ] or "particular events in the past can have crucial effects in the future." [ 1 ] The second is a more specific claim about how past events or decisions affect future events or decisions in significant or disproportionate ways, through mechanisms such as increasing returns , positive feedback effects, or other mechanisms. [ 1 ] [ 2 ] [ 3 ] [ 5 ]
The videotape format war is a key example of path dependence. Three mechanisms independent of product quality could explain how VHS achieved dominance over Betamax from a negligible early adoption lead:
An alternative analysis is that VHS was better-adapted to market demands (e.g. having a longer recording time). In this interpretation, path dependence had little to do with VHS's success, which would have occurred even if Betamax had established an early lead. [ 9 ]
The QWERTY keyboard layout is a prominent example of path dependence because of its widespread emergence and persistence. QWERTY has persisted over time despite potentially more efficient keyboard arrangements being developed – QWERTY vs. Dvorak is an example of this. [ 10 ] However, as it is not clear whether other keyboard layouts really are better, there is still debate about whether this is a good example of path dependence. [ 11 ] [ 12 ]
The standard gauge of railway tracks is another example of path dependence which explains how a seemingly insignificant event or circumstance can change the choice of technology over the long run despite contemporary know-how showing such a choice to be inefficient. [ 13 ]
More than half the world's railway gauges are 4 feet 8 + 1 ⁄ 2 inches (143.5 cm), known as standard gauge , despite the consensus among engineers being that wider gauges have increased performance [ clarification needed ] and speed. The path to the adoption of the standard gauge began in the late 1820s when George Stephenson, a British engineer, began work on the Liverpool and Manchester Railway . His experience with primitive coal tramways resulted in this gauge width being copied by the Liverpool and Manchester Railway, then the rest of Great Britain, and finally by railroads in Europe and North America. [ 14 ]
There are tradeoffs involved in the choice of rail gauge between the cost of constructing a line (which rises with wider gauges) and various performance metrics, including maximum speed and a low center of gravity (desirable, especially in double-stack rail transport ). While attempts with the Brunel gauge , a significantly broader gauge, failed, the widespread use of Iberian gauge , Russian gauge and Indian gauge , all of which are broader than Stephenson's choice, shows that there is nothing inherent to the 1435 mm gauge that led to its global success.
Path dependence theory was originally developed by economists to explain technology adoption processes and industry evolution. The theoretical ideas have had a strong influence on evolutionary economics . [ 15 ] A common expression of the concept is the claim that predictable amplifications of small differences are a disproportionate cause of later circumstances, and, in the "strong" form, that this historical hang-over is inefficient . [ 16 ]
There are many models and empirical cases where economic processes do not progress steadily toward some pre-determined and unique equilibrium , but rather the nature of any equilibrium achieved depends partly on the process of getting there. Therefore, the outcome of a path-dependent process will often not converge towards a unique equilibrium, but will instead reach one of several equilibria (sometimes known as absorbing states ).
This dynamic vision of economic evolution is very different from the tradition of neo-classical economics , which in its simplest form assumed that only a single outcome could possibly be reached, regardless of initial conditions or transitory events. With path dependence, both the starting point and 'accidental' events ( noise ) can have significant effects on the ultimate outcome. In each of the following examples it is possible to identify some random events that disrupted the ongoing course, with irreversible consequences.
In economic development, it is said (initially by Paul David in 1985) [ 17 ] that a standard that is first-to-market can become entrenched (like the QWERTY layout in typewriters still used in computer keyboards). He called this "path dependence", [ 10 ] and said that inferior standards can persist simply because of the legacy they have built up. That QWERTY vs. Dvorak is an example of this phenomenon, has been re-asserted, [ 18 ] questioned, [ 19 ] and continues to be argued. [ 20 ] Economic debate continues on the significance of path dependence in determining how standards form. [ 21 ]
Economists from Alfred Marshall to Paul Krugman have noted that similar businesses tend to congregate geographically ( "agglomerate" ); opening near similar companies attracts workers with skills in that business, which draws in more businesses seeking experienced employees. There may have been no reason to prefer one place to another before the industry developed, but as it concentrates geographically, participants elsewhere are at a disadvantage, and will tend to move into the hub, further increasing its relative efficiency . This network effect follows a statistical power law in the idealized case, [ 22 ] though negative feedback can occur (through rising local costs). [ 23 ] Buyers often cluster around sellers, and related businesses frequently form business clusters , so a concentration of producers (initially formed by accident and agglomeration) can trigger the emergence of many dependent businesses in the same region. [ 24 ]
In the 1980s, the US dollar exchange rate appreciated, lowering the world price of tradable goods below the cost of production in many (previously successful) U.S. manufacturers. Some of the factories that closed as a result could later have been operated at a (cash-flow) profit after dollar depreciation, but reopening would have been too expensive. This is an example of hysteresis , switching barriers , and irreversibility.
If the economy follows adaptive expectations , future inflation is partly determined by past experience with inflation, since experience determines expected inflation and this is a major determinant of realized inflation.
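A textbook formulation of adaptive expectations (the notation here is illustrative, not taken from the cited literature) makes this path dependence explicit: expected inflation is updated by a fraction λ {\displaystyle \lambda } of the latest forecast error, π t + 1 e = π t e + λ ( π t − π t e ) {\displaystyle \pi _{t+1}^{e}=\pi _{t}^{e}+\lambda (\pi _{t}-\pi _{t}^{e})} with 0 < λ ≤ 1 {\displaystyle 0<\lambda \leq 1} , and iterating this rule expresses the expectation as a geometrically weighted sum of past realized inflation, π t + 1 e = λ ∑ j = 0 ∞ ( 1 − λ ) j π t − j {\displaystyle \pi _{t+1}^{e}=\lambda \sum _{j=0}^{\infty }(1-\lambda )^{j}\pi _{t-j}} , i.e. an explicit function of the inflation history.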
A transitory high rate of unemployment during a recession can lead to a permanently higher unemployment rate because of the skills loss (or skill obsolescence) by the unemployed, along with a deterioration of work attitudes. In other words, cyclical unemployment may generate structural unemployment . This structural hysteresis model of the labour market differs from the prediction of a "natural" unemployment rate or NAIRU , around which 'cyclical' unemployment is said to move without influencing the "natural" rate itself.
Liebowitz and Margolis distinguish types of path dependence; [ 25 ] some do not imply inefficiencies and do not challenge the policy implications of neoclassical economics. Only "third-degree" path dependence—where switching gains are high, but transition is impractical—involves such a challenge. They argue that such situations should be rare for theoretical reasons, and that no real-world cases of private locked-in inefficiencies exist. [ 26 ] Vergne and Durand qualify this critique by specifying the conditions under which path dependence theory can be tested empirically. [ 27 ]
Technically, a path-dependent stochastic process has an asymptotic distribution that "evolves as a consequence (function of) the process's own history". [ 28 ] This is also known as a non-ergodic stochastic process .
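A standard toy illustration of such a non-ergodic, history-dependent process is the Pólya urn; the following minimal simulation sketch (the code and parameter choices are illustrative, not drawn from the article) shows different runs of the same process settling on different long-run shares:

```python
# Pólya urn: a standard toy model of a path-dependent (non-ergodic) process.
# Start with one red and one blue ball; repeatedly draw a ball at random and
# return it together with another ball of the same colour. The long-run red
# share converges, but to a limit that depends on the early random history,
# so different runs of the identical process settle on different limits.
import random

def polya_run(steps=10000, seed=None):
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(steps):
        if rng.random() < red / (red + blue):   # draw proportional to current shares
            red += 1                             # early draws get amplified:
        else:                                    # positive feedback / increasing returns
            blue += 1
    return red / (red + blue)

if __name__ == "__main__":
    # Several runs of the *same* process end at very different red shares.
    print([round(polya_run(seed=s), 3) for s in range(8)])
```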
In The Theory of the Growth of the Firm (1959), Edith Penrose analyzed how the growth of a firm both organically and through acquisition is strongly influenced by the experience of its managers and the history of the firm's development.
Path dependence may arise from, or be hindered by, a number of important factors; these may include:
Recent methodological work in comparative politics and sociology has adapted the concept of path dependence into analyses of political and social phenomena. Path dependence has primarily been used in comparative-historical analyses of the development and persistence of institutions , whether they be social, political, or cultural. There are arguably two types of path-dependent processes:
The critical juncture framework has been used to explain the development and persistence of welfare states , labor incorporation in Latin America , and the variations in economic development between countries, among other things. [ 31 ] Scholars such as Kathleen Thelen caution that the historical determinism in path-dependent frameworks is subject to constant disruption from institutional evolution .
Kathleen Thelen has criticized the application of QWERTY keyboard-style mechanisms to politics. She argues that such applications to politics are both too contingent and too deterministic. Too contingent in the sense that the initial choice is open and flukey, and too deterministic in the sense that once the initial choice is made, an unavoidable path inevitably forms from which there is no return. [ 32 ]
Based on the theory of path dependence, Monika Stachowiak-Kudła and Janusz Kudła show that legal tradition affects the administrative court’s rulings in Poland. It also complements the two other reasons for diversified verdicts: the experience of the judges and courts (specialization) and preference (bias) for one of the parties. This effect is persistent even if the verdicts are controversial and result in serious consequences for a party and when the penalty paid by the complainant is perceived as excessive but fulfilling the strict rules of law. The German tradition of law favours legal certainty, while the courts from the former Russian and Austrian partitions are more likely to refer to the principle of justice. Interestingly, the institutional factors can be identified almost one hundred years after the end of the partition period and the unification of formal and material law, corroborating the existence of path dependence. [ 33 ] [ relevant? ]
Paul Pierson 's influential attempt [ specify ] to rigorously formalize path dependence within political science draws partly on ideas from economics. Herman Schwartz has questioned those efforts, arguing that forces analogous to those identified in the economic literature are not pervasive in the political realm, where the strategic exercise of power gives rise to, and transforms, institutions.
Especially in sociology and organizational theory , a distinct yet closely related concept to path dependence is imprinting , which captures how initial environmental conditions leave a persistent mark (or imprint) on organizations and organizational collectives (such as industries and communities), thus continuing to shape organizational behaviours and outcomes in the long run, even as external environmental conditions change. [ 34 ]
The path dependence of emergent strategy has been observed in behavioral experiments with individuals and groups . [ 35 ] | https://en.wikipedia.org/wiki/Path_dependence |
Path integral Monte Carlo ( PIMC ) is a quantum Monte Carlo method used to solve quantum statistical mechanics problems numerically within the path integral formulation . The application of Monte Carlo methods to path integral simulations of condensed matter systems was first pursued in a key paper by John A. Barker. [ 1 ] [ 2 ]
The method is typically (but not necessarily) applied under the assumption that symmetry or antisymmetry under exchange can be neglected, i.e., identical particles are assumed to be quantum Boltzmann particles, as opposed to fermion and boson particles. The method is often applied to calculate thermodynamic properties [ 3 ] such as the internal energy , [ 4 ] heat capacity, [ 5 ] or free energy . [ 6 ] [ 7 ] As with all Monte Carlo method based approaches, a large number of points must be calculated.
In principle, as more path descriptors are used (these can be "replicas", "beads," or "Fourier coefficients," depending on what strategy is used to represent the paths), [ 8 ] the more quantum (and the less classical) the result is. However, for some properties the correction may cause model predictions to initially become less accurate than neglecting them if a small number of path descriptors are included. At some point the number of descriptors is sufficiently large and the corrected model begins to converge smoothly to the correct quantum answer. [ 5 ] Because it is a statistical sampling method, PIMC can take anharmonicity fully into account, and because it is quantum, it takes into account important quantum effects such as tunneling and zero-point energy (while neglecting the exchange interaction in some cases). [ 6 ]
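As a rough illustration of how the number of path descriptors ("beads") enters such a calculation, the following minimal sketch (the parameter values, function names and single-bead Metropolis scheme are illustrative choices, not a production PIMC code) samples the primitive discretized action of a one-dimensional harmonic oscillator and compares the estimated mean-square displacement with the exact quantum result:

```python
# Minimal path-integral Monte Carlo sketch for a 1D harmonic oscillator
# (hbar = m = omega = 1): primitive (Trotter) ring-polymer action,
# single-bead Metropolis moves, <x^2> compared with the exact result.
import math
import random

def pimc_x2(beta=1.0, n_beads=32, n_sweeps=20000, step=0.5, seed=0):
    rng = random.Random(seed)
    x = [0.0] * n_beads                      # ring polymer: bead P is linked back to bead 0
    k = n_beads / (2.0 * beta)               # coefficient of the kinetic (spring) links
    tau = beta / n_beads                     # imaginary-time slice
    samples = []
    for sweep in range(n_sweeps):
        for i in range(n_beads):
            left, right = x[(i - 1) % n_beads], x[(i + 1) % n_beads]
            xi_new = x[i] + rng.uniform(-step, step)
            # change in the discretized action for moving bead i
            dS = (k * ((xi_new - left) ** 2 + (right - xi_new) ** 2
                       - (x[i] - left) ** 2 - (right - x[i]) ** 2)
                  + tau * 0.5 * (xi_new ** 2 - x[i] ** 2))
            if dS <= 0 or rng.random() < math.exp(-dS):
                x[i] = xi_new
        if sweep > n_sweeps // 5:            # discard an equilibration period
            samples.append(sum(xi * xi for xi in x) / n_beads)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    beta = 1.0
    est = pimc_x2(beta=beta)
    exact = 0.5 / math.tanh(beta / 2.0)      # exact quantum <x^2> for the oscillator
    print(f"PIMC <x^2> ~ {est:.3f}, exact {exact:.3f}")
```

Increasing `n_beads` moves the estimate from the classical limit (one bead) toward the quantum result, mirroring the convergence behaviour described above.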
The basic framework was originally formulated within the canonical ensemble, [ 9 ] but has since been extended to include the grand canonical ensemble [ 10 ] and the microcanonical ensemble . [ 11 ] Its use has been extended to fermion systems [ 12 ] as well as systems of bosons. [ 13 ]
An early application was to the study of liquid helium. [ 14 ] Numerous applications have been made to other systems, including liquid water [ 15 ] and the hydrated electron. [ 16 ] The algorithms and formalism have also been mapped onto non-quantum mechanical problems in the field of financial modeling , including option pricing . [ 17 ]
| https://en.wikipedia.org/wiki/Path_integral_Monte_Carlo
The path integral formulation is a description in quantum mechanics that generalizes the stationary action principle of classical mechanics . It replaces the classical notion of a single, unique classical trajectory for a system with a sum, or functional integral , over an infinity of quantum-mechanically possible trajectories to compute a quantum amplitude .
This formulation has proven crucial to the subsequent development of theoretical physics , because manifest Lorentz covariance (time and space components of quantities enter equations in the same way) is easier to achieve than in the operator formalism of canonical quantization . Unlike previous methods, the path integral allows one to easily change coordinates between very different canonical descriptions of the same quantum system. Another advantage is that it is in practice easier to guess the correct form of the Lagrangian of a theory, which naturally enters the path integrals (for interactions of a certain type, these are coordinate space or Feynman path integrals ), than the Hamiltonian . Possible downsides of the approach include that unitarity (this is related to conservation of probability; the probabilities of all physically possible outcomes must add up to one) of the S-matrix is obscure in the formulation. The path-integral approach has proven to be equivalent to the other formalisms of quantum mechanics and quantum field theory. Thus, by deriving either approach from the other, problems associated with one or the other approach (as exemplified by Lorentz covariance or unitarity) go away. [ 1 ]
The path integral also relates quantum and stochastic processes, and this provided the basis for the grand synthesis of the 1970s, which unified quantum field theory with the statistical field theory of a fluctuating field near a second-order phase transition . The Schrödinger equation is a diffusion equation with an imaginary diffusion constant, and the path integral is an analytic continuation of a method for summing up all possible random walks . [ 2 ]
The path integral has impacted a wide array of sciences, including polymer physics , quantum field theory, string theory and cosmology . In physics, it is a foundation for lattice gauge theory and quantum chromodynamics . [ 3 ] It has been called the "most powerful formula in physics", [ 4 ] with Stephen Wolfram also declaring it to be the "fundamental mathematical construct of modern quantum mechanics and quantum field theory". [ 5 ]
The basic idea of the path integral formulation can be traced back to Norbert Wiener , who introduced the Wiener integral for solving problems in diffusion and Brownian motion . [ 6 ] This idea was extended to the use of the Lagrangian in quantum mechanics by Paul Dirac , whose 1933 paper gave birth to path integral formulation. [ 7 ] [ 8 ] [ 9 ] [ 3 ] The complete method was developed in 1948 by Richard Feynman . [ 10 ] Some preliminaries were worked out earlier in his doctoral work under the supervision of John Archibald Wheeler . The original motivation stemmed from the desire to obtain a quantum-mechanical formulation for the Wheeler–Feynman absorber theory using a Lagrangian (rather than a Hamiltonian ) as a starting point.
In quantum mechanics, as in classical mechanics, the Hamiltonian is the generator of time translations. This means that the state at a slightly later time differs from the state at the current time by the result of acting with the Hamiltonian operator (multiplied by the negative imaginary unit , − i ). For states with a definite energy, this is a statement of the de Broglie relation between frequency and energy, and the general relation is consistent with that plus the superposition principle .
The Hamiltonian in classical mechanics is derived from a Lagrangian , which is a more fundamental quantity in the context of special relativity . The Hamiltonian indicates how to march forward in time, but the time is different in different reference frames . The Lagrangian is a Lorentz scalar , while the Hamiltonian is the time component of a four-vector . So the Hamiltonian is different in different frames, and this type of symmetry is not apparent in the original formulation of quantum mechanics.
The Hamiltonian is a function of the position and momentum at one time, and it determines the position and momentum a little later. The Lagrangian is a function of the position now and the position a little later (or, equivalently for infinitesimal time separations, it is a function of the position and velocity). The relation between the two is by a Legendre transformation , and the condition that determines the classical equations of motion (the Euler–Lagrange equations ) is that the action has an extremum.
In quantum mechanics, the Legendre transform is hard to interpret, because the motion is not over a definite trajectory. In classical mechanics, with discretization in time, the Legendre transform becomes
and
where the partial derivative with respect to q ˙ {\displaystyle {\dot {q}}} holds q ( t + ε ) fixed. The inverse Legendre transform is
where
and the partial derivative now is with respect to p at fixed q .
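The displayed equations being referred to are not reproduced in this excerpt; in standard notation, and with q̇ ≈ ( q ( t + ε ) − q ( t ))/ ε under the time discretization, the Legendre transform pair described here is

```latex
H(p,q) \;=\; p\,\dot q \;-\; L(q,\dot q), \qquad p = \frac{\partial L}{\partial \dot q};
\qquad\qquad
L(q,\dot q) \;=\; p\,\dot q \;-\; H(p,q), \qquad \dot q = \frac{\partial H}{\partial p}.
```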
In quantum mechanics, the state is a superposition of different states with different values of q , or different values of p , and the quantities p and q can be interpreted as noncommuting operators. The operator p is only definite on states that are indefinite with respect to q . So consider two states separated in time and act with the operator corresponding to the Lagrangian:
If the multiplications implicit in this formula are reinterpreted as matrix multiplications, the first factor is
and if this is also interpreted as a matrix multiplication, the sum over all states integrates over all q ( t ) , and so it takes the Fourier transform in q ( t ) to change basis to p ( t ) . That is the action on the Hilbert space – change basis to p at time t .
Next comes
or evolve an infinitesimal time into the future .
Finally, the last factor in this interpretation is
which means change basis back to q at a later time .
This is not very different from just ordinary time evolution: the H factor contains all the dynamical information – it pushes the state forward in time. The first part and the last part are just Fourier transforms to change to a pure q basis from an intermediate p basis.
Another way of saying this is that since the Hamiltonian is naturally a function of p and q , exponentiating this quantity and changing basis from p to q at each step allows the matrix element of H to be expressed as a simple function along each path. This function is the quantum analog of the classical action. This observation is due to Paul Dirac . [ 11 ]
Dirac further noted that one could square the time-evolution operator in the S representation:
and this gives the time-evolution operator between time t and time t + 2 ε . While in the H representation the quantity that is being summed over the intermediate states is an obscure matrix element, in the S representation it is reinterpreted as a quantity associated to the path. In the limit that one takes a large power of this operator, one reconstructs the full quantum evolution between two states, the early one with a fixed value of q (0) and the later one with a fixed value of q ( t ) . The result is a sum over paths with a phase, which is the quantum action.
Crucially, Dirac identified the effect of the classical limit on the quantum form of the action principle:
...we see that the integrand in (11) must be of the form e iF / h , where F is a function of q T , q 1 , q 2 , … q m , q t , which remains finite as h tends to zero. Let us now picture one of the intermediate q s, say q k , as varying continuously while the other ones are fixed. Owing to the smallness of h , we shall then in general have F / h varying extremely rapidly. This means that e iF / h will vary periodically with a very high frequency about the value zero, as a result of which its integral will be practically zero. The only important part in the domain of integration of q k is thus that for which a comparatively large variation in q k produces only a very small variation in F . This part is the neighbourhood of a point for which F is stationary with respect to small variations in q k . We can apply this argument to each of the variables of integration ... and obtain the result that the only important part in the domain of integration is that for which F is stationary for small variations in all intermediate q s. ... We see that F has for its classical analogue ∫ t T L dt , which is just the action function, which classical mechanics requires to be stationary for small variations in all the intermediate q s. This shows the way in which equation (11) goes over into classical results when h becomes extremely small.
That is, in the limit of action that is large compared to the Planck constant ħ – the classical limit – the path integral is dominated by solutions that are in the neighborhood of stationary points of the action. The classical path arises naturally in the classical limit.
Dirac's work did not provide a precise prescription to calculate the sum over paths, and he did not show that one could recover the Schrödinger equation or the canonical commutation relations from this rule. This was done by Feynman.
Feynman showed that Dirac's quantum action was, for most cases of interest, simply equal to the classical action, appropriately discretized. This means that the classical action is the phase acquired by quantum evolution between two fixed endpoints. He proposed to recover all of quantum mechanics from the following postulates:
In order to find the overall probability amplitude for a given process, then, one adds up, or integrates , the amplitude of the 3rd postulate over the space of all possible paths of the system in between the initial and final states, including those that are absurd by classical standards. In calculating the probability amplitude for a single particle to go from one space-time coordinate to another, it is correct to include paths in which the particle describes elaborate curlicues , curves in which the particle shoots off into outer space and flies back again, and so forth. The path integral assigns to all these amplitudes equal weight but varying phase , or argument of the complex number . Contributions from paths wildly different from the classical trajectory may be suppressed by interference (see below).
Feynman showed that this formulation of quantum mechanics is equivalent to the canonical approach to quantum mechanics when the Hamiltonian is at most quadratic in the momentum. An amplitude computed according to Feynman's principles will also obey the Schrödinger equation for the Hamiltonian corresponding to the given action.
The path integral formulation of quantum field theory represents the transition amplitude (corresponding to the classical correlation function ) as a weighted sum of all possible histories of the system from the initial to the final state. A Feynman diagram is a graphical representation of a perturbative contribution to the transition amplitude.
One common approach to deriving the path integral formula is to divide the time interval into small pieces. Once this is done, the Trotter product formula tells us that the noncommutativity of the kinetic and potential energy operators can be ignored.
For a particle in a smooth potential, the path integral is approximated by zigzag paths, which in one dimension is a product of ordinary integrals. For the motion of the particle from position x a at time t a to x b at time t b , the time sequence
can be divided up into n + 1 smaller segments t j − t j − 1 , where j = 1, ..., n + 1 , of fixed duration
This process is called time-slicing .
An approximation for the path integral can be computed as proportional to
where L ( x , v ) is the Lagrangian of the one-dimensional system with position variable x ( t ) and velocity v = ẋ ( t ) considered (see below), and dx j corresponds to the position at the j th time step, if the time integral is approximated by a sum of n terms. [ nb 1 ]
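The proportionality referred to above can be written out explicitly. A standard reconstruction consistent with this description (with x 0 = x a and x n+1 = x b ) is

```latex
\langle x_b, t_b \mid x_a, t_a \rangle \;\propto\;
\int \mathrm{d}x_1 \cdots \mathrm{d}x_n \;
\exp\!\left[ \frac{i}{\hbar}\,\varepsilon \sum_{j=1}^{n+1}
L\!\left( \tilde x_j,\, \frac{x_j - x_{j-1}}{\varepsilon} \right) \right].
```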
In the limit n → ∞ , this becomes a functional integral , which, apart from a nonessential factor, is directly the product of the probability amplitudes ⟨ x b , t b | x a , t a ⟩ (more precisely, since one must work with a continuous spectrum, the respective densities) to find the quantum mechanical particle at t a in the initial state x a and at t b in the final state x b .
Actually L is the classical Lagrangian of the one-dimensional system considered,
and the abovementioned "zigzagging" corresponds to the appearance of the terms
in the Riemann sum approximating the time integral, which are finally integrated over x 1 to x n with the integration measure dx 1 ... dx n ; here x̃ j is an arbitrary value of the interval corresponding to j , e.g. its center, ( x j + x j −1 ) / 2 .
Thus, in contrast to classical mechanics, not only does the stationary path contribute, but actually all virtual paths between the initial and the final point also contribute.
In terms of the wave function in the position representation, the path integral formula reads as follows:
where D x {\displaystyle {\mathcal {D}}\mathbf {x} } denotes integration over all paths x {\displaystyle \mathbf {x} } with x ( 0 ) = x {\displaystyle \mathbf {x} (0)=x} and where Z {\displaystyle Z} is a normalization factor. Here S {\displaystyle S} is the action, given by
The path integral representation gives the quantum amplitude to go from point x to point y as an integral over all paths. For a free-particle action (for simplicity let m = 1 , ħ = 1 )
the integral can be evaluated explicitly.
To do this, it is convenient to start without the factor i in the exponential, so that large deviations are suppressed by small numbers, not by cancelling oscillatory contributions. The amplitude (or Kernel) reads:
Splitting the integral into time slices:
where the D is interpreted as a finite collection of integrations at each integer multiple of ε . Each factor in the product is a Gaussian as a function of x ( t + ε ) centered at x ( t ) with variance ε . The multiple integrals are a repeated convolution of this Gaussian G ε with copies of itself at adjacent times:
where the number of convolutions is T / ε . The result is easy to evaluate by taking the Fourier transform of both sides, so that the convolutions become multiplications:
The Fourier transform of the Gaussian G is another Gaussian of reciprocal variance:
and the result is
The Fourier transform gives K , and it is a Gaussian again with reciprocal variance:
The proportionality constant is not really determined by the time-slicing approach, only the ratio of values for different endpoint choices is determined. The proportionality constant should be chosen to ensure that between each two time slices the time evolution is quantum-mechanically unitary, but a more illuminating way to fix the normalization is to consider the path integral as a description of a stochastic process .
The result has a probability interpretation. The sum over all paths of the exponential factor can be seen as the sum over each path of the probability of selecting that path. The probability is the product over each segment of the probability of selecting that segment, so that each segment is probabilistically independently chosen. The fact that the answer is a Gaussian spreading linearly in time is the central limit theorem , which can be interpreted as the first historical evaluation of a statistical path integral.
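The linear spreading can be checked directly. The following is an illustrative numerical sketch (the grid, time step, and units are assumptions made for the example) in which the short-time Gaussian kernel is convolved with itself T / ε times; the variance of the result comes out equal to T .

```python
import numpy as np

eps, T = 0.01, 1.0
x = np.linspace(-8.0, 8.0, 2001)
dx = x[1] - x[0]

g_eps = np.exp(-x**2 / (2.0 * eps)) / np.sqrt(2.0 * np.pi * eps)   # Gaussian of variance eps

kernel = g_eps.copy()
for _ in range(int(T / eps) - 1):                  # T/eps convolutions in total
    kernel = np.convolve(kernel, g_eps, mode="same") * dx

variance = np.sum(x**2 * kernel) / np.sum(kernel)
print(variance)                                     # ≈ T = 1.0
```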
The probability interpretation gives a natural normalization choice. The path integral should be defined so that
This condition normalizes the Gaussian and produces a kernel that obeys the diffusion equation:
For oscillatory path integrals, ones with an i in the numerator, the time slicing produces convolved Gaussians, just as before. Now, however, the convolution product is marginally singular, since it requires careful limits to evaluate the oscillating integrals. To make the factors well defined, the easiest way is to add a small imaginary part to the time increment ε . This is closely related to Wick rotation . Then the same convolution argument as before gives the propagation kernel:
which, with the same normalization as before (not the sum-squares normalization – this function has a divergent norm), obeys a free Schrödinger equation:
This means that any superposition of K s will also obey the same equation, by linearity. Defining
then ψ t obeys the free Schrödinger equation just as K does:
The Lagrangian for the simple harmonic oscillator is [ 12 ]
Write its trajectory x ( t ) as the classical trajectory plus some perturbation, x ( t ) = x c ( t ) + δx ( t ) and the action as S = S c + δS . The classical trajectory can be written as
This trajectory yields the classical action
Next, expand the deviation from the classical path as a Fourier series, and calculate the contribution to the action δS , which gives
This means that the propagator is
for some normalization
Using the infinite-product representation of the sinc function ,
the propagator can be written as
Let T = t f − t i . One may write this propagator in terms of energy eigenstates as
Using the identities i sin ωT = 1 / 2 e iωT (1 − e −2 iωT ) and cos ωT = 1 / 2 e iωT (1 + e −2 iωT ) , this amounts to
One may absorb all terms after the first e − iωT /2 into R ( T ) , thereby obtaining
One may finally expand R ( T ) in powers of e − iωT : All terms in this expansion get multiplied by the e − iωT /2 factor in the front, yielding terms of the form
Comparison to the above eigenstate expansion yields the standard energy spectrum for the simple harmonic oscillator,
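The spectrum read off in this way is the familiar E n = ( n + 1/2) ħω . As an independent numerical cross-check (illustrative only; the units ħ = m = ω = 1 and the grid parameters are assumptions), diagonalizing a finite-difference discretization of the corresponding Hamiltonian H = p²/2 + x²/2 reproduces the same values:

```python
import numpy as np

n, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# second derivative by central differences (hard walls at the ends of the box)
d2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / dx**2

H = -0.5 * d2 + np.diag(0.5 * x**2)
print(np.linalg.eigvalsh(H)[:5])   # ≈ [0.5, 1.5, 2.5, 3.5, 4.5]
```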
Feynman's time-sliced approximation does not, however, exist for the most important quantum-mechanical path integrals of atoms, due to the singularity of the Coulomb potential e 2 / r at the origin. Only after replacing the time t by another path-dependent pseudo-time parameter
is the singularity removed and a time-sliced approximation obtained, which is exactly integrable, since it can be made harmonic by a simple coordinate transformation, as discovered in 1979 by İsmail Hakkı Duru and Hagen Kleinert . [ 13 ] The combination of a path-dependent time transformation and a coordinate transformation is an important tool to solve many path integrals and is called generically the Duru–Kleinert transformation .
The path integral reproduces the Schrödinger equation for the initial and final state even when a potential is present. This is easiest to see by taking a path-integral over infinitesimally separated times.
Since the time separation is infinitesimal and the cancelling oscillations become severe for large values of ẋ , the path integral has most weight for y close to x . In this case, to lowest order the potential energy is constant, and only the kinetic energy contribution is nontrivial. (This separation of the kinetic and potential energy terms in the exponent is essentially the Trotter product formula .) The exponential of the action is
The first term rotates the phase of ψ ( x ) locally by an amount proportional to the potential energy. The second term is the free particle propagator, corresponding to i times a diffusion process. To lowest order in ε they are additive; in any case one has with (1):
As mentioned, the spread in ψ is diffusive from the free particle propagation, with an extra infinitesimal rotation in phase that slowly varies from point to point from the potential:
and this is the Schrödinger equation. The normalization of the path integral needs to be fixed in exactly the same way as in the free particle case. An arbitrary continuous potential does not affect the normalization, although singular potentials require careful treatment.
Since the states obey the Schrödinger equation, the path integral must reproduce the Heisenberg equations of motion for the averages of x and ẋ variables, but it is instructive to see this directly. The direct approach shows that the expectation values calculated from the path integral reproduce the usual ones of quantum mechanics.
Start by considering the path integral with some fixed initial state
Now x ( t ) at each separate time is a separate integration variable. So it is legitimate to change variables in the integral by shifting: x ( t ) = u ( t ) + ε ( t ) where ε ( t ) is a different shift at each time but ε (0) = ε ( T ) = 0 , since the endpoints are not integrated:
The change in the integral from the shift is, to first infinitesimal order in ε :
which, integrating by parts in t , gives:
But this was just a shift of integration variables, which doesn't change the value of the integral for any choice of ε ( t ) . The conclusion is that this first order variation is zero for an arbitrary initial state and at any arbitrary point in time:
this is the Heisenberg equation of motion.
If the action contains terms that multiply ẋ and x , at the same moment in time, the manipulations above are only heuristic, because the multiplication rules for these quantities are just as noncommuting in the path integral as they are in the operator formalism.
If the variation in the action exceeds ħ by many orders of magnitude, we typically have destructive interference other than in the vicinity of those trajectories satisfying the Euler–Lagrange equation , which is now reinterpreted as the condition for constructive interference. This can be shown using the method of stationary phase applied to the propagator. As ħ decreases, the exponential in the integral oscillates rapidly in the complex domain for any change in the action. Thus, in the limit that ħ goes to zero, only points where the classical action does not vary contribute to the propagator.
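A simple one-dimensional analogue shows the mechanism (illustrative only; the particular phase function and integration range are assumptions for the example): as ħ shrinks, the oscillatory integral of exp( iS ( x )/ ħ ) approaches the stationary-phase estimate determined by the single stationary point of S .

```python
import numpy as np

S = lambda x: (x - 1.0)**2 + 0.1 * (x - 1.0)**4      # single stationary point at x0 = 1
x = np.linspace(-4.0, 6.0, 2_000_001)
dx = x[1] - x[0]

for hbar in (1.0, 0.1, 0.01):
    numeric = np.sum(np.exp(1j * S(x) / hbar)) * dx
    # stationary-phase estimate: sqrt(2*pi*hbar / S''(x0)) * exp(i S(x0)/hbar + i pi/4)
    estimate = np.sqrt(2.0 * np.pi * hbar / 2.0) * np.exp(1j * np.pi / 4.0)
    print(hbar, abs(numeric - estimate) / abs(estimate))   # relative deviation shrinks with hbar
```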
The formulation of the path integral does not make it clear at first sight that the quantities x and p do not commute. In the path integral, these are just integration variables and they have no obvious ordering. Feynman discovered that the non-commutativity is still present. [ 14 ]
To see this, consider the simplest path integral, the Brownian walk. This is not yet quantum mechanics, so in the path-integral the action is not multiplied by i :
The quantity x ( t ) is fluctuating, and the derivative is defined as the limit of a discrete difference.
The distance that a random walk moves is proportional to √ t , so that:
This shows that the random walk is not differentiable, since the ratio that defines the derivative diverges with probability one.
The quantity xẋ is ambiguous, with two possible meanings:
In elementary calculus, the two are only different by an amount that goes to 0 as ε goes to 0. But in this case, the difference between the two is not 0:
Let
Then f ( t ) is a rapidly fluctuating statistical quantity, whose average value is 1, i.e. a normalized "Gaussian process". The fluctuations of such a quantity can be described by a statistical Lagrangian
and the equations of motion for f derived from extremizing the action S corresponding to L just set it equal to 1. In physics, such a quantity is "equal to 1 as an operator identity". In mathematics, it "weakly converges to 1". In either case, it is 1 in any expectation value, or when averaged over any interval, or for all practical purposes.
Defining the time order to be the operator order:
This is called the Itô lemma in stochastic calculus , and the (Euclideanized) canonical commutation relations in physics.
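The size of this difference can be seen in a short numerical experiment (illustrative; the step size and path count are assumed parameters): for discretized Brownian paths, the post-point and pre-point discretizations of the accumulated x Δx differ on average by the total elapsed time, which is the content of the Itô correction.

```python
import numpy as np

rng = np.random.default_rng(1)
eps, n_steps, n_paths = 1e-3, 1000, 5000

dx = np.sqrt(eps) * rng.standard_normal((n_paths, n_steps))   # increments with <dx^2> = eps
x = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dx, axis=1)], axis=1)

forward = np.mean(np.sum(x[:, :-1] * dx, axis=1))    # pre-point (Ito) discretization
backward = np.mean(np.sum(x[:, 1:] * dx, axis=1))    # post-point discretization
print(backward - forward, "vs total time", n_steps * eps)   # difference ≈ 1.0
```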
For a general statistical action, a similar argument shows that
and in quantum mechanics, the extra imaginary unit in the action converts this to the canonical commutation relation,
For a particle in curved space the kinetic term depends on the position, and the above time slicing cannot be applied, this being a manifestation of the notorious operator ordering problem in Schrödinger quantum mechanics. One may, however, solve this problem by transforming the time-sliced flat-space path integral to curved space using a multivalued coordinate transformation ( nonholonomic mapping explained here ).
Sometimes (e.g. a particle moving in curved space) we also have measure-theoretic factors in the functional integral:
This factor is needed to restore unitarity.
For instance, if
then it means that each spatial slice is multiplied by the measure √ g . This measure cannot be expressed as a functional multiplying the D x measure because they belong to entirely different classes.
Matrix elements of the kind ⟨ x f | e − i ℏ H ^ ( t − t ′ ) F ( x ^ ) e − i ℏ H ^ ( t ′ ) | x i ⟩ {\displaystyle \langle x_{f}|e^{-{\frac {i}{\hbar }}{\hat {H}}(t-t')}F({\hat {x}})e^{-{\frac {i}{\hbar }}{\hat {H}}(t')}|x_{i}\rangle } take the form
This generalizes to multiple operators, for example
and to the general vacuum expectation value (in the large time limit)
It is very common in path integrals to perform a Wick rotation from real to imaginary times. In the setting of quantum field theory, the Wick rotation changes the geometry of space-time from Lorentzian to Euclidean; as a result, Wick-rotated path integrals are often called Euclidean path integrals.
If we replace t {\displaystyle t} by − i t {\displaystyle -it} , the time-evolution operator e − i t H ^ / ℏ {\displaystyle e^{-it{\hat {H}}/\hbar }} is replaced by e − t H ^ / ℏ {\displaystyle e^{-t{\hat {H}}/\hbar }} . (This change is known as a Wick rotation .) If we repeat the derivation of the path-integral formula in this setting, we obtain [ 15 ]
where S E u c l i d e a n {\displaystyle S_{\mathrm {Euclidean} }} is the Euclidean action, given by
Note the sign change between this and the normal action, where the potential energy term is negative. (The term Euclidean is from the context of quantum field theory, where the change from real to imaginary time changes the space-time geometry from Lorentzian to Euclidean.)
Now, the contribution of the kinetic energy to the path integral is as follows:
where f ( x ) {\displaystyle f(\mathbf {x} )} includes all the remaining dependence of the integrand on the path. This integral has a rigorous mathematical interpretation as integration against the Wiener measure , denoted μ x {\displaystyle \mu _{x}} . The Wiener measure, constructed by Norbert Wiener gives a rigorous foundation to Einstein's mathematical model of Brownian motion . The subscript x {\displaystyle x} indicates that the measure μ x {\displaystyle \mu _{x}} is supported on paths x {\displaystyle \mathbf {x} } with x ( 0 ) = x {\displaystyle \mathbf {x} (0)=x} .
We then have a rigorous version of the Feynman path integral, known as the Feynman–Kac formula : [ 16 ]
where now ψ ( x , t ) {\displaystyle \psi (x,t)} satisfies the Wick-rotated version of the Schrödinger equation,
Although the Wick-rotated Schrödinger equation does not have a direct physical meaning, interesting properties of the Schrödinger operator H ^ {\displaystyle {\hat {H}}} can be extracted by studying it. [ 17 ]
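A rough Monte Carlo sketch of the Feynman–Kac formula follows (illustrative only; the units ħ = m = 1, the potential V ( x ) = x²/2, and all numerical parameters are assumptions). Brownian paths are sampled, each is weighted by exp(−∫ V ds ), and the decay rate of the average at large imaginary time approaches the ground-state energy 1/2 of the harmonic oscillator.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, n_paths = 0.01, 800, 200_000       # total imaginary time tau = 8
V = lambda x: 0.5 * x**2

x = np.zeros(n_paths)                           # all paths start at x = 0
log_weights = np.zeros(n_paths)
record = {}
for step in range(1, n_steps + 1):
    x += np.sqrt(dt) * rng.standard_normal(n_paths)   # Brownian increment
    log_weights -= V(x) * dt                           # accumulate -∫ V ds along each path
    if step in (400, 800):
        record[step * dt] = np.mean(np.exp(log_weights))

(t1, w1), (t2, w2) = sorted(record.items())
print("ground-state energy estimate:", -np.log(w2 / w1) / (t2 - t1))   # ≈ 0.5
```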
Much of the study of quantum field theories from the path-integral perspective, in both the mathematics and physics literatures, is done in the Euclidean setting, that is, after a Wick rotation. In particular, there are various results showing that if a Euclidean field theory with suitable properties can be constructed, one can then undo the Wick rotation to recover the physical, Lorentzian theory. [ 18 ] On the other hand, it is much more difficult to give a meaning to path integrals (even Euclidean path integrals) in quantum field theory than in quantum mechanics. [ nb 2 ]
The path integral is just the generalization of the integral above to all quantum mechanical problems—
is the action of the classical problem in which one investigates the path starting at time t = 0 and ending at time t = t f , and D x {\displaystyle {\mathcal {D}}\mathbf {x} } denotes the integration measure over all paths. In the classical limit, S [ x ] ≫ ℏ {\displaystyle {\mathcal {S}}[\mathbf {x} ]\gg \hbar } , the path of minimum action dominates the integral, because the phase of any path away from this fluctuates rapidly and different contributions cancel. [ 19 ]
The connection with statistical mechanics follows. Considering only paths that begin and end in the same configuration, perform the Wick rotation it = ħβ , i.e., make time imaginary, and integrate over all possible beginning-ending configurations. The Wick-rotated path integral—described in the previous subsection, with the ordinary action replaced by its "Euclidean" counterpart—now resembles the partition function of statistical mechanics defined in a canonical ensemble with inverse temperature proportional to imaginary time, 1 / T = i k B t / ħ . Strictly speaking, though, this is the partition function for a statistical field theory .
Clearly, such a deep analogy between quantum mechanics and statistical mechanics cannot be dependent on the formulation. In the canonical formulation, one sees that the unitary evolution operator of a state is given by
where the state α is evolved from time t = 0 . If one makes a Wick rotation here, and finds the amplitude to go from any state, back to the same state in (imaginary) time iβ is given by
which is precisely the partition function of statistical mechanics for the same system at the temperature quoted earlier. One aspect of this equivalence was also known to Erwin Schrödinger who remarked that the equation named after him looked like the diffusion equation after Wick rotation. Note, however, that the Euclidean path integral is actually in the form of a classical statistical mechanics model.
Both the Schrödinger and Heisenberg approaches to quantum mechanics single out time and are not in the spirit of relativity. For example, the Heisenberg approach requires that scalar field operators obey the commutation relation
for two simultaneous spatial positions x and y , and this is not a relativistically invariant concept. The results of a calculation are covariant, but the symmetry is not apparent in intermediate stages. If naive field-theory calculations did not produce infinite answers in the continuum limit , this would not have been such a big problem – it would just have been a bad choice of coordinates. But the lack of symmetry means that the infinite quantities must be cut off, and the bad coordinates make it nearly impossible to cut off the theory without spoiling the symmetry. This makes it difficult to extract the physical predictions, which require a careful limiting procedure .
The problem of lost symmetry also appears in classical mechanics, where the Hamiltonian formulation also superficially singles out time. The Lagrangian formulation makes the relativistic invariance apparent. In the same way, the path integral is manifestly relativistic. It reproduces the Schrödinger equation, the Heisenberg equations of motion, and the canonical commutation relations and shows that they are compatible with relativity. It extends the Heisenberg-type operator algebra to operator product rules , which are new relations difficult to see in the old formalism.
Further, different choices of canonical variables lead to very different-seeming formulations of the same theory. The transformations between the variables can be very complicated, but the path integral makes them into reasonably straightforward changes of integration variables. For these reasons, the Feynman path integral has made earlier formalisms largely obsolete.
The price of a path integral representation is that the unitarity of a theory is no longer self-evident, but it can be proven by changing variables to some canonical representation. The path integral itself also deals with larger mathematical spaces than is usual, which requires more careful mathematics, not all of which has been fully worked out. The path integral historically was not immediately accepted, partly because it took many years to incorporate fermions properly. This required physicists to invent an entirely new mathematical object – the Grassmann variable – which also allowed changes of variables to be done naturally, as well as allowing constrained quantization .
The integration variables in the path integral are subtly non-commuting. The value of the product of two field operators at what looks like the same point depends on how the two points are ordered in space and time. This makes some naive identities fail .
In relativistic theories, there is both a particle and field representation for every theory. The field representation is a sum over all field configurations, and the particle representation is a sum over different particle paths.
The nonrelativistic formulation is traditionally given in terms of particle paths, not fields. There, the path integral in the usual variables, with fixed boundary conditions, gives the probability amplitude for a particle to go from point x to point y in time T :
This is called the propagator . To obtain the final state at y we simply apply K ( x , y ; T ) to the initial state and integrate over x resulting in:
For a spatially homogeneous system, where K ( x , y ) is only a function of ( x − y ) , the integral is a convolution , the final state is the initial state convolved with the propagator:
For a free particle of mass m , the propagator can be evaluated either explicitly from the path integral or by noting that the Schrödinger equation is a diffusion equation in imaginary time, and the solution must be a normalized Gaussian:
Taking the Fourier transform in ( x − y ) produces another Gaussian:
and in p -space the proportionality factor here is constant in time, as will be verified in a moment. The Fourier transform in time, extending K ( p ; T ) to be zero for negative times, gives Green's function, or the frequency-space propagator:
which is the reciprocal of the operator that annihilates the wavefunction in the Schrödinger equation, which wouldn't have come out right if the proportionality factor weren't constant in the p -space representation.
The infinitesimal term in the denominator is a small positive number, which guarantees that the inverse Fourier transform in E will be nonzero only for future times. For past times, the inverse Fourier transform contour closes toward values of E where there is no singularity. This guarantees that K propagates the particle into the future and is the reason for the subscript "F" on G . The infinitesimal term can be interpreted as an infinitesimal rotation toward imaginary time.
It is also possible to reexpress the nonrelativistic time evolution in terms of propagators going toward the past, since the Schrödinger equation is time-reversible. The past propagator is the same as the future propagator except for the obvious difference that it vanishes in the future, and in the Gaussian t is replaced by − t . In this case, the interpretation is that these are the quantities to convolve the final wavefunction so as to get the initial wavefunction:
Since the two expressions are nearly identical, the only change being the sign of E and ε , the parameter E in the Green's function can either be the energy if the paths are going toward the future, or the negative of the energy if the paths are going toward the past.
For a nonrelativistic theory, the time as measured along the path of a moving particle and the time as measured by an outside observer are the same. In relativity, this is no longer true. For a relativistic theory the propagator should be defined as the sum over all paths that travel between two points in a fixed proper time, as measured along the path (these paths describe the trajectory of a particle in space and in time):
The integral above is not trivial to interpret because of the square root. Fortunately, there is a heuristic trick. The sum is over the relativistic arc length of the path of an oscillating quantity, and like the nonrelativistic path integral should be interpreted as slightly rotated into imaginary time. The function K ( x − y , τ ) can be evaluated when the sum is over paths in Euclidean space:
This describes a sum over all paths of length Τ of the exponential of minus the length. This can be given a probability interpretation. The sum over all paths is a probability average over a path constructed step by step. The total number of steps is proportional to Τ , and each step is less likely the longer it is. By the central limit theorem , the result of many independent steps is a Gaussian of variance proportional to Τ :
The usual definition of the relativistic propagator only asks for the amplitude to travel from x to y , after summing over all the possible proper times it could take:
where W (Τ) is a weight factor, the relative importance of paths of different proper time. By the translation symmetry in proper time, this weight can only be an exponential factor and can be absorbed into the constant α :
This is the Schwinger representation . The Fourier transform over the variable ( x − y ) can be done for each value of Τ separately, and because each separate Τ contribution is a Gaussian, its Fourier transform is another Gaussian with reciprocal width. So in p -space, the propagator can be reexpressed simply:
which is the Euclidean propagator for a scalar particle. Rotating p 0 to be imaginary gives the usual relativistic propagator, up to a factor of − i and an ambiguity, which will be clarified below:
This expression can be interpreted in the nonrelativistic limit, where it is convenient to split it by partial fractions :
For states where one nonrelativistic particle is present, the initial wavefunction has a frequency distribution concentrated near p 0 = m . When convolving with the propagator, which in p space just means multiplying by the propagator, the second term is suppressed and the first term is enhanced. For frequencies near p 0 = m , the dominant first term has the form
This is the expression for the nonrelativistic Green's function of a free Schrödinger particle.
The second term has a nonrelativistic limit also, but this limit is concentrated on frequencies that are negative. The second pole is dominated by contributions from paths where the proper time and the coordinate time are ticking in an opposite sense, which means that the second term is to be interpreted as the antiparticle. The nonrelativistic analysis shows that with this form the antiparticle still has positive energy.
The proper way to express this mathematically is that, adding a small suppression factor in proper time, the limit where t → −∞ of the first term must vanish, while the t → +∞ limit of the second term must vanish. In the Fourier transform, this means shifting the pole in p 0 slightly, so that the inverse Fourier transform will pick up a small decay factor in one of the time directions:
Without these terms, the pole contribution could not be unambiguously evaluated when taking the inverse Fourier transform of p 0 . The terms can be recombined:
which when factored, produces opposite-sign infinitesimal terms in each factor. This is the mathematically precise form of the relativistic particle propagator, free of any ambiguities. The ε term introduces a small imaginary part to the α = m 2 , which in the Minkowski version is a small exponential suppression of long paths.
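In compact form, and in one common sign convention (an assumption here, since conventions vary), the chain of representations just described reads

```latex
\frac{1}{p^{2}+m^{2}} \;=\; \int_{0}^{\infty}\!\mathrm{d}\mathrm{T}\;
e^{-\mathrm{T}\,\left(p^{2}+m^{2}\right)} \quad \text{(Euclidean)},
\qquad\qquad
G_{F}(p) \;\propto\; \frac{1}{p^{2}-m^{2}+i\varepsilon} \quad \text{(Minkowski)}.
```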
So in the relativistic case, the Feynman path-integral representation of the propagator includes paths going backwards in time, which describe antiparticles. The paths that contribute to the relativistic propagator go forward and backwards in time, and the interpretation of this is that the amplitude for a free particle to travel between two points includes amplitudes for the particle to fluctuate into an antiparticle, travel back in time, then forward again.
Unlike the nonrelativistic case, it is impossible to produce a relativistic theory of local particle propagation without including antiparticles. All local differential operators have inverses that are nonzero outside the light cone, meaning that it is impossible to keep a particle from travelling faster than light. Such a particle cannot have a Green's function that is only nonzero in the future in a relativistically invariant theory.
However, the path integral formulation is also extremely important in direct application to quantum field theory, in which the "paths" or histories being considered are not the motions of a single particle, but the possible time evolutions of a field over all space. The action is referred to technically as a functional of the field: S [ ϕ ] , where the field ϕ ( x μ ) is itself a function of space and time, and the square brackets are a reminder that the action depends on all the field's values everywhere, not just some particular value. One such given function ϕ ( x μ ) of spacetime is called a field configuration . In principle, one integrates Feynman's amplitude over the class of all possible field configurations.
Much of the formal study of QFT is devoted to the properties of the resulting functional integral, and much effort (not yet entirely successful) has been made toward making these functional integrals mathematically precise.
Such a functional integral is extremely similar to the partition function in statistical mechanics . Indeed, it is sometimes called a partition function , and the two are essentially mathematically identical except for the factor of i in the exponent in Feynman's postulate 3. Analytically continuing the integral to an imaginary time variable (called a Wick rotation ) makes the functional integral even more like a statistical partition function and also tames some of the mathematical difficulties of working with these integrals.
In quantum field theory , if the action is given by the functional S of field configurations (which only depends locally on the fields), then the time-ordered vacuum expectation value of polynomially bounded functional F , ⟨ F ⟩ , is given by
The symbol ∫ D ϕ here is a concise way to represent the infinite-dimensional integral over all possible field configurations on all of space-time. As stated above, the unadorned path integral in the denominator ensures proper normalization.
Strictly speaking, the only question that can be asked in physics is: What fraction of states satisfying condition A also satisfy condition B ? The answer to this is a number between 0 and 1, which can be interpreted as a conditional probability , written as P( B | A ) . In terms of path integration, since P( B | A ) = P( A ∩ B ) / P( A ) , this means
where the functional O_in [ ϕ ] is the superposition of all incoming states that could lead to the states we are interested in. In particular, this could be a state corresponding to the state of the Universe just after the Big Bang , although for actual calculation this can be simplified using heuristic methods. Since this expression is a quotient of path integrals, it is naturally normalised.
Since this formulation of quantum mechanics is analogous to classical action principle, one might expect that identities concerning the action in classical mechanics would have quantum counterparts derivable from a functional integral. This is often the case.
In the language of functional analysis, we can write the Euler–Lagrange equations as
(the left-hand side is a functional derivative ; the equation means that the action is stationary under small changes in the field configuration). The quantum analogues of these equations are called the Schwinger–Dyson equations .
If the functional measure D ϕ turns out to be translationally invariant (we'll assume this for the rest of this article, although this does not hold for, say, nonlinear sigma models ), and if we assume that after a Wick rotation
which now becomes
for some H , it goes to zero faster than a reciprocal of any polynomial for large values of φ , then we can integrate by parts (after a Wick rotation, followed by a Wick rotation back) to get the following Schwinger–Dyson equations for the expectation:
for any polynomially-bounded functional F . In the DeWitt notation this looks like [ 20 ]
These equations are the analog of the on-shell Euler–Lagrange equations. The time ordering is taken before the time derivatives inside the S_{,i} .
If J (called the source field ) is an element of the dual space of the field configurations (which has at least an affine structure because of the assumption of the translational invariance for the functional measure), then the generating functional Z of the source fields is defined to be
Note that
or
where
Basically, if D φ e i S [ φ ] is viewed as a functional distribution (this shouldn't be taken too literally as an interpretation of QFT , unlike its Wick-rotated statistical mechanics analogue, because we have time ordering complications here!), then ⟨ φ ( x 1 ) ... φ ( x n )⟩ are its moments , and Z is its Fourier transform .
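A zero-dimensional toy model makes this structure concrete (illustrative only; the single-variable "field", the action m φ²/2 + g φ⁴/4, and the parameter values are assumptions for the example): the generating functional reduces to an ordinary integral over one variable, and its derivatives at J = 0 reproduce the moments ⟨ φⁿ ⟩.

```python
import numpy as np
from scipy.integrate import quad

m, g = 1.0, 0.3

def Z(J):
    """Zero-dimensional 'generating functional': an ordinary integral over one variable."""
    integrand = lambda phi: np.exp(-0.5 * m * phi**2 - 0.25 * g * phi**4 + J * phi)
    return quad(integrand, -np.inf, np.inf)[0]

# <phi^2> from a finite-difference second derivative of Z at J = 0 ...
h = 1e-2
two_point = (Z(h) - 2.0 * Z(0.0) + Z(-h)) / (h**2 * Z(0.0))
# ... compared with the moment computed directly; for g = 0 both equal 1/m.
direct = quad(lambda p: p**2 * np.exp(-0.5 * m * p**2 - 0.25 * g * p**4), -np.inf, np.inf)[0] / Z(0.0)
print(two_point, direct)
```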
If F is a functional of φ , then for an operator K , F [ K ] is defined to be the operator that substitutes K for φ . For example, if
and G is a functional of J , then
Then, from the properties of the functional integrals
we get the "master" Schwinger–Dyson equation:
or
If the functional measure is not translationally invariant, it might be possible to express it as the product M [ φ ] D φ , where M is a functional and D φ is a translationally invariant measure. This is true, for example, for nonlinear sigma models where the target space is diffeomorphic to R n . However, if the target manifold is some topologically nontrivial space, the concept of a translation does not even make any sense.
In that case, we would have to replace the S in this equation by another functional
If we expand this equation as a Taylor series about J = 0, we get the entire set of Schwinger–Dyson equations.
The path integrals are usually thought of as being the sum of all paths through an infinite space–time. However, in local quantum field theory we would restrict everything to lie within a finite causally complete region, for example inside a double light-cone. This gives a more mathematically precise and physically rigorous definition of quantum field theory.
What about the on-shell Noether's theorem for the classical case? Does it have a quantum analog as well? Yes, but with a caveat: the functional measure must be invariant under the one-parameter group of symmetry transformations as well.
Let's just assume for simplicity here that the symmetry in question is local (not local in the sense of a gauge symmetry , but in the sense that the transformed value of the field at any given point under an infinitesimal transformation would only depend on the field configuration over an arbitrarily small neighborhood of the point in question). Let's also assume that the action is local in the sense that it is the integral over spacetime of a Lagrangian , and that
for some function f where f only depends locally on φ (and possibly the spacetime position).
If we don't assume any special boundary conditions, this would not, in general, be a "true" symmetry unless f = 0 . Here, Q is a derivation that generates the one-parameter group in question. We could have antiderivations as well, such as BRST and supersymmetry .
Let's also assume
for any polynomially-bounded functional F . This property is called the invariance of the measure, and this does not hold in general. (See anomaly (physics) for more details.)
Then,
which implies
where the integral is over the boundary. This is the quantum analog of Noether's theorem.
Now, let's assume even further that Q is a local integral
where
so that
where
(this is assuming the Lagrangian only depends on φ and its first partial derivatives! More general Lagrangians would require a modification to this definition!). We're not insisting that q ( x ) is the generator of a symmetry (i.e. we are not insisting upon the gauge principle ), but just that Q is. We also make the even stronger assumption that the functional measure is locally invariant:
Then, we would have
Alternatively,
The above two equations are the Ward–Takahashi identities.
Now for the case where f = 0 , we can forget about all the boundary conditions and locality assumptions. We'd simply have
Alternatively,
Path integrals as they are defined here require the introduction of regulators . Changing the scale of the regulator leads to the renormalization group . In fact, renormalization is the major obstruction to making path integrals well-defined.
Regardless of whether one works in configuration space or phase space, when equating the operator formalism and the path integral formulation, an ordering prescription is required to resolve the ambiguity in the correspondence between non-commutative operators and the commutative functions that appear in path integrands. For example, the operator 1/2 ( q̂ p̂ + p̂ q̂ ) can be translated back as either qp − iħ/2 , qp + iħ/2 , or qp , depending on whether one chooses the q̂ p̂ , p̂ q̂ , or Weyl ordering prescription; conversely, qp can be translated to either q̂ p̂ , p̂ q̂ , or 1/2 ( q̂ p̂ + p̂ q̂ ) for the same respective choice of ordering prescription.
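The origin of the ambiguity is the elementary operator identity (using the canonical commutator [ q̂ , p̂ ] = iħ )

```latex
\hat q\,\hat p \;=\; \tfrac{1}{2}\bigl(\hat q\,\hat p + \hat p\,\hat q\bigr) + \tfrac{1}{2}\,[\hat q,\hat p]
\;=\; \tfrac{1}{2}\bigl(\hat q\,\hat p + \hat p\,\hat q\bigr) + \tfrac{i\hbar}{2},
```

so different operator orderings of the same classical expression qp differ by terms of order ħ .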
In one interpretation of quantum mechanics , the "sum over histories" interpretation, the path integral is taken to be fundamental, and reality is viewed as a single indistinguishable "class" of paths that all share the same events. [ 21 ] For this interpretation, it is crucial to understand what exactly an event is. The sum-over-histories method gives identical results to canonical quantum mechanics, and Sinha and Sorkin [ 22 ] claim the interpretation explains the Einstein–Podolsky–Rosen paradox without resorting to nonlocality .
Some [ who? ] advocates of interpretations of quantum mechanics emphasizing decoherence have attempted to make more rigorous the notion of extracting a classical-like "coarse-grained" history from the space of all possible histories.
Whereas in quantum mechanics the path integral formulation is fully equivalent to other formulations, it may be that it can be extended to quantum gravity, which would make it different from the Hilbert space model. Feynman had some success in this direction, and his work has been extended by Hawking and others. [ 23 ] Approaches that use this method include causal dynamical triangulations and spinfoam models.
Quantum tunnelling can be modeled by using the path integral formulation to determine the action of the trajectory through a potential barrier. Using the WKB approximation , the tunneling rate ( Γ ) can be determined to be of the form
with the effective action S eff and pre-exponential factor A 0 . This form is specifically useful in a dissipative system , in which the system and its surroundings must be modeled together. Using the Langevin equation to model Brownian motion , the path integral formulation can be used to determine an effective action and pre-exponential factor to see the effect of dissipation on tunnelling. [ 24 ] From this model, tunneling rates of macroscopic systems (at finite temperatures) can be predicted. | https://en.wikipedia.org/wiki/Path_integral_formulation
Path integral molecular dynamics ( PIMD ) is a method of incorporating quantum mechanics into molecular dynamics simulations using Feynman path integrals . In PIMD, one uses the Born–Oppenheimer approximation to separate the wavefunction into a nuclear part and an electronic part. The nuclei are treated quantum mechanically by mapping each quantum nucleus onto a classical system of several fictitious particles connected by springs (harmonic potentials) governed by an effective Hamiltonian, which is derived from Feynman's path integral. The resulting classical system, although complex, can be solved relatively quickly. There are now a number of commonly used condensed matter computer simulation techniques that make use of the path integral formulation including centroid molecular dynamics ( CMD ), [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] ring polymer molecular dynamics ( RPMD ), [ 6 ] [ 7 ] and the Feynman–Kleinert quasi-classical Wigner ( FK–QCW ) method . [ 8 ] [ 9 ] The same techniques are also used in path integral Monte Carlo (PIMC). [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ]
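The mapping onto a classical ring polymer can be written down compactly. The following is a minimal sketch of the effective potential for a single one-dimensional nucleus (illustrative only, not taken from any of the packages cited above; the function names, units, and parameter values are assumptions):

```python
import numpy as np

def ring_polymer_potential(x, beta, mass, V, hbar=1.0):
    """Effective classical potential for one quantum nucleus mapped onto P beads.

    x: array of P bead positions; neighbouring beads are coupled by harmonic
    springs of frequency sqrt(P)/(beta*hbar), and each bead feels V/P.
    """
    P = len(x)
    omega_P = np.sqrt(P) / (beta * hbar)                     # bead-spring frequency
    springs = 0.5 * mass * omega_P**2 * np.sum((x - np.roll(x, -1))**2)
    external = np.sum(V(x)) / P
    return springs + external

# Example: 16 beads for one particle in a harmonic well (arbitrary assumed units).
beads = np.random.default_rng(0).normal(0.0, 0.1, size=16)
print(ring_polymer_potential(beads, beta=8.0, mass=1.0, V=lambda q: 0.5 * q**2))
```

In an actual PIMD run, forces would be obtained from the gradient of this effective potential and the beads propagated with a thermostatted molecular dynamics integrator.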
There are two common ways to handle the dynamics in PIMD. The first is the non-Hamiltonian phase space analysis theory, [ 15 ] which has been updated to create an "extended system" of isokinetic equations of motion that overcomes system properties which had created issues within the community. The second is the Nosé–Hoover chain , [ 16 ] which couples the system to a chain of thermostat variables rather than a single one.
Simulations done with PIMD can broadly characterize biomolecular systems, covering the structure and organization of membranes, including permeability and protein–lipid interactions, along with "lipid–drug interactions, protein–ligand interactions, and protein structure and dynamics."
PIMD is "widely used to describe nuclear quantum effects in chemistry and physics". [ 17 ]
Path integral molecular dynamics can be applied to polymer physics, field theories (both quantum and classical), string theory, stochastic dynamics, quantum mechanics, and quantum gravity. PIMD can also be used to calculate time correlation functions. [ 18 ] | https://en.wikipedia.org/wiki/Path_integral_molecular_dynamics
A polymer is a macromolecule composed of many similar or identical repeated subunits. Polymers are common in, but not limited to, organic media. They range from familiar synthetic plastics to natural biopolymers such as DNA and proteins . Their elongated molecular structure produces distinctive physical properties, including toughness , viscoelasticity , and a tendency to form glasses and semicrystalline structures. The modern concept of polymers as covalently bonded macromolecular structures was proposed in 1920 by Hermann Staudinger. [ 1 ] One sub-field in the study of polymers is polymer physics . As a part of soft matter studies, polymer physics concerns itself with the study of mechanical properties [ 2 ] from the perspective of condensed matter physics .
Because polymers are such large molecules, bordering on the macroscopic scale, their physical properties are usually too complicated to solve for using deterministic methods. Therefore, statistical approaches are often implemented to yield pertinent results. The main reason for this relative success is that polymers constructed from a large number of monomers are efficiently described in the thermodynamic limit of infinitely many monomers, although in actuality they are obviously finite in size.
Thermal fluctuations continuously affect the shape of polymers in liquid solutions, and modeling their effect requires using principles from statistical mechanics and dynamics. The path integral approach falls in line with this basic premise, and the results it affords are invariably statistical averages. The path integral, when applied to the study of polymers, is essentially a mathematical mechanism to describe, count, and statistically weigh all possible spatial configurations a polymer can adopt under well-defined potential and temperature conditions. Employing path integrals, problems hitherto unsolved were successfully worked out: excluded volume, entanglement, and links and knots, to name a few. [ 3 ] Prominent contributors to the development of the theory include Nobel laureate P. G. de Gennes , Sir Sam Edwards , M. Doi , [ 4 ] [ 5 ] F. W. Wiegel [ 3 ] and H. Kleinert . [ 6 ]
Early attempts at path integrals can be traced back to 1918. [ 7 ] A sound mathematical formalism wasn't established until 1921. [ 8 ] This eventually led Richard Feynman to construct a formulation for quantum mechanics, [ 9 ] now commonly known as Feynman integrals .
At the core of path integrals lies the concept of functional integration . Regular integrals consist of a limiting process in which a sum of function values is taken over a space of the function's variables. In functional integration, the sum of functionals is taken over a space of functions; for each function, the functional returns a value to add up.
Path integrals should not be confused with line integrals which are regular integrals with the integration evaluated along a curve in the variable's space.
Not surprisingly, functional integrals often diverge ; therefore, to obtain physically meaningful results, a quotient of path integrals is taken.
This article will use the notation adopted by Feynman and Hibbs , [ 10 ] denoting a path integral as:
with G [ f ( x ) ] {\displaystyle G[f(x)]} as the functional and D f ( x ) {\displaystyle {\mathcal {D}}f(x)} the functional differential.
One extremely naive yet fruitful approach to quantitatively analyzing the spatial structure and configuration of a polymer is the free random walk model. The polymer is depicted as a chain of point-like unit molecules that are strongly bound by chemical bonds, and hence the mutual distance between successive units can be approximated as constant.
In the ideal polymer model the polymer subunits are completely free to rotate with respect to each other, and therefore the process of polymerization can be looked at as a random three dimensional walk, with each monomer added corresponding to another random step of predetermined length.
Mathematically this is formalized through the probability function for the position vector of the bonds, i.e. the relative positions of a pair of adjacent units:
With δ ( ) {\displaystyle \delta ()} standing for the Dirac delta . The important thing to note here is that the bond position vector has a uniform distribution over a sphere of radius l {\displaystyle l} , our constant bond length.
A second crucial feature of the ideal model is that the bond vectors r → n {\displaystyle {\vec {r}}_{n}} are independent of each other, meaning we can write the distribution function for the complete polymer conformation as:
Where we assumed N {\displaystyle \textstyle N} monomers and n {\displaystyle \textstyle n} acts as a dummy index. The curly brackets { } mean that Ψ {\displaystyle \Psi } is a function of the set of vectors r → n {\displaystyle {\vec {r}}_{n}}
Salient results of this model include:
In accordance with the random walk model, the end to end vector average vanishes due to symmetry considerations. Therefore, in order to get an estimate of the polymer size, we turn to the end to end vector variance : ⟨ R → 2 ⟩ = N l 2 {\displaystyle \left\langle {\vec {R}}^{2}\right\rangle =Nl^{2}} with the end to end vector defined as: R → ≡ ∑ n = 1 N r → n {\displaystyle \textstyle {\vec {R}}\equiv \sum _{n=1}^{N}{\vec {r}}_{n}} .
Thus, a first crude approximation for the polymer size is simply R 0 ≡ ⟨ R → 2 ⟩ = N l {\displaystyle R_{0}\equiv {\sqrt {\left\langle {\vec {R}}^{2}\right\rangle }}={\sqrt {N}}l} .
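This scaling is easy to check numerically. The following minimal sketch (an illustration added here, not part of the original treatment; all parameter values are arbitrary) samples freely jointed chains with fixed bond length and verifies that the mean squared end to end distance grows linearly with N.

```python
import numpy as np

rng = np.random.default_rng(0)

def freely_jointed_chain(N, l=1.0):
    """Sample one ideal (freely jointed) chain of N bonds of fixed length l.

    Each bond vector is drawn uniformly on the sphere of radius l,
    mirroring the uniform bond distribution described above.
    """
    v = rng.normal(size=(N, 3))                     # isotropic directions
    v *= l / np.linalg.norm(v, axis=1)[:, None]     # rescale every bond to length l
    return v.sum(axis=0)                            # end to end vector R

def mean_square_end_to_end(N, samples=2000, l=1.0):
    R = np.array([freely_jointed_chain(N, l) for _ in range(samples)])
    return (R ** 2).sum(axis=1).mean()

for N in (10, 100, 1000):
    print(N, mean_square_end_to_end(N))             # ~ N * l**2 in each case
```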
As mentioned, we are usually interested in statistical features of the polymer configuration. A central quantity will therefore be the end to end vector probability distribution:
Note that the distribution depends only on the end to end vector magnitude . Also, the above expression gives non-zero probability for sizes larger than N l {\displaystyle Nl} , clearly an unreasonable result which stems from the limit taken N → ∞ {\displaystyle N\rightarrow \infty } for its derivation.
Taking the limit of a smooth spatial contour for the polymer conformation, that is, taking the limits N → ∞ {\displaystyle N\rightarrow \infty } and l → 0 , {\displaystyle l\rightarrow 0,} under the constraint N l = c o n s t {\displaystyle Nl=const} one comes to a differential equation for the probability distribution:
With the Laplacian ∇ 2 {\displaystyle \textstyle \nabla ^{2}} taken with respect to actual space. One way to derive this equation is via a Taylor expansion of Φ ( R → , N ) {\displaystyle \Phi ({\vec {R}},N)} and Φ ( R → , N + Δ N ) {\displaystyle \Phi ({\vec {R}},N+\Delta N)} .
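For reference, the standard forms of this diffusion-type equation and of the Gaussian distribution that solves it, assuming the usual textbook normalization and the conventions above (bond length l , contour variable N ), are:

```latex
\frac{\partial \Phi(\vec{R},N)}{\partial N}=\frac{l^{2}}{6}\,\nabla^{2}\Phi(\vec{R},N),
\qquad
\Phi(\vec{R},N)=\left(\frac{3}{2\pi N l^{2}}\right)^{3/2}\exp\!\left(-\frac{3\vec{R}^{2}}{2Nl^{2}}\right),
```

which reproduces ⟨ R → 2 ⟩ = N l 2 {\displaystyle \left\langle {\vec {R}}^{2}\right\rangle =Nl^{2}} quoted above.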
One might wonder why bother with a differential equation for a function already analytically obtained, but as will be demonstrated, this equation can also be generalized for non-ideal circumstances.
Under the same assumption of a smooth contour, the distribution function can be expressed using a path integral:
Where we defined L 0 = 3 2 l 2 ( d R → d ν ) 2 . {\displaystyle \textstyle L_{0}={\frac {3}{2l^{2}}}\left({\frac {d{\vec {R}}}{d\nu }}\right)^{2}.}
Here ν {\displaystyle \nu } acts as a parametrization variable for the polymer, describing in effect its spatial configuration, or contour.
The exponent is a measure for the number density of polymer configurations in which the shape of the polymer is close to a continuous and differentiable curve. [ 3 ]
Thus far, the path integral approach didn't avail us of any novel results. For that, one must venture further than the ideal model. As a first departure from this limited model, we now consider the constraint of spatial obstructions. The ideal model assumed no constraints on the spatial configuration of each additional monomer, including forces between monomers which obviously exist, since two monomers cannot occupy the same space. Here, we'll take the concept of obstruction to encompass not only monomer-monomer interactions, but also constraints that arise from the presence of dust and boundary conditions such as walls or other physical obstructions. [ 3 ]
Consider a space filled with small impenetrable particles, or " dust ". Denote the fraction of space excluding a monomer end point by f ( R → ) {\displaystyle f({\vec {R}})} so its values range: 0 ≤ f ( R → ) ≤ 1 {\displaystyle 0\leq f({\vec {R}})\leq 1} .
Constructing a Taylor expansion for Φ ( R → , N + Δ N ) {\displaystyle \Phi ({\vec {R}},N+\Delta N)} , one can arrive at the new governing differential equation:
For which the corresponding path integral is:
To model a perfect rigid wall, simply set f ( R → ) l 2 → + ∞ {\displaystyle \textstyle {\frac {f({\vec {R}})}{l^{2}}}\rightarrow +\infty } for all regions in space out of reach of the polymer due to the wall contour.
The walls a polymer usually interacts with are complex structures. Not only can the contour be full of bumps and twists, but their interaction with the polymer is far from the rigid mechanical idealization depicted above. In practice, a polymer will often be "adsorbed", or condense, on the wall due to attractive intermolecular forces. Due to heat, this process is counteracted by an entropy-driven process, favoring polymer configurations that correspond to large volumes in phase space . A thermodynamic adsorption-desorption process arises. One common example is polymers confined within a cell membrane .
To account for the attraction forces, define a potential per monomer denoted as: V ( R → ) {\displaystyle \textstyle V({\vec {R}})} . The potential will be incorporated through a Boltzmann factor . Taken for the entire polymer this takes the form:
Where we used β = ( k b T ) − 1 {\displaystyle \beta =(k_{b}T)^{-1}} with T {\displaystyle T} as temperature and k b {\displaystyle k_{b}} the Boltzmann constant . In the right hand side, our usual limits N → ∞ & l → 0 {\displaystyle N\rightarrow \infty \quad \&\quad l\rightarrow 0} were taken.
The number of polymer configurations with fixed endpoints can now be determined by the path integral:
Similarly to the ideal polymer case, this integral can be interpreted as a propagator for the differential equation:
This leads to a bi-linear expansion for Q V ( R → N , N | R → 0 , 0 ) = ∑ n f n ( R → N ) f n ∗ ( R → 0 ) exp ( − E N N ) {\displaystyle Q_{V}({\vec {R}}_{N},N|{\vec {R}}_{0},0)=\sum _{n}f_{n}({\vec {R}}_{N})f_{n}^{*}({\vec {R}}_{0})\exp(-E_{N}N)} in terms of orthonormal eigenfunctions and eigenvalues:
and so our adsorption problem is reduced to an eigenfunction problem.
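As a rough numerical illustration of this reduction, the sketch below (a toy model, not the rigorous treatment of [ 11 ]) discretizes a Schrödinger-like operator of the form −(l²/6) d²/dx² + βV(x), which is consistent with the effective layer thickness quoted further below, on a half-line with a square attractive well next to the wall. It then checks whether the lowest eigenvalue is negative, i.e. whether a bound (adsorbed) state exists. The well depth and width are made-up parameters chosen only for illustration.

```python
import numpy as np

def lowest_eigenvalue(beta_V0, width=1.0, l=1.0, x_max=40.0, n=1500):
    """Lowest eigenvalue of -(l**2/6) d^2/dx^2 + beta*V(x) on (0, x_max).

    V(x) = -V0 for 0 < x < width and 0 otherwise (attractive well at the wall);
    Dirichlet conditions at both ends mimic the impenetrable wall at x = 0.
    """
    x = np.linspace(0.0, x_max, n + 2)[1:-1]
    h = x[1] - x[0]
    diag = (l**2 / 6.0) * 2.0 / h**2 + np.where(x < width, -beta_V0, 0.0)
    off = -(l**2 / 6.0) / h**2 * np.ones(n - 1)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

# Weak attraction (high temperature, small beta*V0): no bound state, lowest eigenvalue > 0.
# Strong attraction (low temperature, large beta*V0): bound state, lowest eigenvalue < 0.
for beta_V0 in (0.05, 2.0):
    print(beta_V0, lowest_eigenvalue(beta_V0))
```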
For a typical well-like (attractive) potential, this leads to two regimes for the adsorption phenomenon, with the critical temperature T c {\displaystyle T_{c}} determined by the specific problem parameters l , V ( R → ) {\displaystyle l,V({\vec {R}})} :
At high temperatures T > T c {\displaystyle T>T_{c}} , the potential well has no bound states, meaning all eigenvalues are positive and the corresponding eigenfunction takes the asymptotic form ( x → ∞ ) {\displaystyle (x\rightarrow \infty )} :
The result is shown for the x coordinate after a separation of variables and assuming a surface at x = 0 {\displaystyle x=0} .
This expression represents a very open configuration for the polymer, away from the surface, meaning the polymer is desorbed.
For low enough temperatures T < T c {\displaystyle T<T_{c}} , there exists at least one bound state with a negative eigenvalue. In our "large polymer" limit, this means that the bi-linear expansion will be dominated by the ground state, which asymptotically ( x → ∞ ) {\displaystyle (x\rightarrow \infty )} takes the form:
This time the configurations of the polymer are localized in a narrow layer near the surface with an effective thickness l 6 | λ 0 | {\displaystyle \textstyle {\frac {l}{\sqrt {6|\lambda _{0}|}}}}
A wide variety of adsorption problems boasting a host of "wall" geometries and interaction potentials can be solved using this method.
To obtain a quantitatively well defined result one has to use the recovered eigenfunctions and construct the corresponding configuration sum.
For a complete and rigorous solution, see [ 11 ] .
Another obvious obstruction, thus far blatantly disregarded, is the interaction between monomers within the same polymer. An exact solution for the number of configurations under this very realistic constraint has not yet been found for any dimension larger than one. [ 3 ] This problem has historically come to be known as the excluded volume problem. To better understand the problem, one can imagine a random walk chain, as previously presented, with a small hard sphere (not unlike the "specks of dust" mentioned above) at the endpoint of each monomer. The radius of these spheres necessarily obeys r < l / 2 {\displaystyle r<l/2} , otherwise successive spheres would overlap.
A path integral approach affords a relatively simple method to derive an approximated solution: [ 12 ] The results presented are for three dimensional space, but can be easily generalized to any dimensionality .
The calculation is based on two reasonable assumptions:
In accordance with the path integral expression for Q V ( R → N , N | R → 0 , 0 ) {\displaystyle \textstyle Q_{V}({\vec {R}}_{N},N|{\vec {R}}_{0},0)} previously presented, the most probable configuration will be the curve R → ∗ ( ν ) {\displaystyle {\vec {R}}^{*}(\nu )} that minimizes the exponent of the original path integral:
To minimize the expression, employ calculus of variations and obtain the Euler–Lagrange equation :
We set R ≡ R ∗ {\displaystyle R\equiv R^{*}} .
To determine the appropriate function f ( R → ) {\displaystyle f({\vec {R}})} , consider a sphere of radius R {\displaystyle R} , thickness d R {\displaystyle dR} and profile 4 π R 2 {\displaystyle 4\pi R^{2}} centered around the origin of the polymer. The average number of monomers in this shell should equal 4 π R 2 ( 4 / 3 ) π r 3 f ( R ) d R {\displaystyle \textstyle {\frac {4\pi R^{2}}{(4/3)\pi r^{3}}}f(R)dR} .
On the other hand, the same average should also equal d ν = ( d R d ν ) − 1 {\displaystyle \textstyle d\nu =\left({\frac {dR}{d\nu }}\right)^{-1}} (Remember that ν {\displaystyle \nu } was defined as a parametrization factor with values 0 ≤ ν ≤ N {\displaystyle 0\leq \nu \leq N} ). This equality results in:
We find S [ R → ( ν ) ] {\displaystyle S[{\vec {R}}(\nu )]} can now be written as:
We again use the calculus of variations to arrive at:
Note that we now have an ODE for R ( ν ) {\displaystyle R(\nu )} without any f ( R → ∗ ) {\displaystyle f({\vec {R}}^{*})} dependence.
Although quite horrendous to look at, this equation has a fairly simple solution:
We arrived at the important conclusion that for a polymer with excluded volume the end to end distance grows with N like:
R ≅ ( 3 π ( 4 / 3 ) π r 3 L 2 ) − 1 / 5 N 3 / 5 {\displaystyle R\cong \left({\frac {3\pi }{(4/3)\pi r^{3}L^{2}}}\right)^{-1/5}N^{3/5}} , a first departure from the ideal model result: R ∼ N {\displaystyle R\sim {\sqrt {N}}} .
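The exponent 3/5 can also be recovered by a crude numerical Flory-type argument: for each N, minimize a free energy consisting of the elastic term 3R²/(2Nl²) plus an excluded volume repulsion proportional to N²/R³, then fit the slope of log R against log N. This is only an illustrative sketch consistent with the result quoted above; the prefactor v is arbitrary.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def flory_radius(N, l=1.0, v=0.1):
    """Minimize F(R) = 3 R^2 / (2 N l^2) + v N^2 / R^3 over R > 0."""
    f = lambda R: 3 * R**2 / (2 * N * l**2) + v * N**2 / R**3
    return minimize_scalar(f, bounds=(1e-3, 1e6), method="bounded").x

Ns = np.array([10.0**k for k in range(2, 7)])
Rs = np.array([flory_radius(N) for N in Ns])
slope = np.polyfit(np.log(Ns), np.log(Rs), 1)[0]
print(slope)   # ~ 0.6, i.e. R ~ N**(3/5)
```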
So far, the only polymer parameters incorporated into the calculation were the number of monomers N {\displaystyle N} , which was taken to infinity, and the constant bond length l {\displaystyle l} . This is usually sufficient, as that is the only way the local structure of the polymer affects the problem. To try to do a bit better than the "constant bond distance" approximation, let us examine the next most rudimentary approach: a more realistic description of the single bond length is a Gaussian distribution: [ 13 ]
So like before, we maintain the result: ⟨ r → 2 ⟩ = l 2 {\displaystyle \langle {\vec {r}}^{2}\rangle =l^{2}} . Note that although a bit more complex than before, ψ ( r → ) {\displaystyle \psi ({\vec {r}})} still has a single parameter - l {\displaystyle l} .
The conformational distribution function for our new bond vector distribution is:
Where we switched from the relative bond vector r → n {\displaystyle {\vec {r}}_{n}} to the absolute position vector difference: ( R → n − R → n − 1 ) {\displaystyle ({\vec {R}}_{n}-{\vec {R}}_{n-1})} .
This conformation is known as the Gaussian chain. The Gaussian approximation for ψ ( r → ) {\displaystyle \psi ({\vec {r}})} does not hold for a microscopic analysis of the polymer structure but will yield accurate results for large-scale properties.
An intuitive way to construe this model is as a mechanical model of beads successively connected by harmonic springs. The potential energy for such a model is given by:
At thermal equilibrium one can expect the Boltzmann distribution, which indeed recovers the result above for Ψ ( { r → n } ) {\displaystyle \Psi (\left\{{\vec {r}}_{n}\right\})} .
An important property of the Gaussian chain is self-similarity , meaning the distribution for R → n − R → m {\displaystyle {\vec {R}}_{n}-{\vec {R}}_{m}} between any two units is again Gaussian, depending only on l {\displaystyle l} and the unit-to-unit separation ( n − m ) {\displaystyle (n-m)} :
This immediately leads to < ( R → n − R → m ) 2 >= | n − m | l 2 {\displaystyle <({\vec {R}}_{n}-{\vec {R}}_{m})^{2}>=|n-m|l^{2}} .
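A quick numerical check of this self-similarity property, again only an illustrative sketch with arbitrary parameters: draw Gaussian bond vectors with ⟨r²⟩ = l², accumulate positions, and verify that ⟨(R_n − R_m)²⟩ ≈ |n − m| l².

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_chain(N, l=1.0):
    """Positions R_0 ... R_N of a Gaussian chain with <r^2> = l^2 per bond."""
    bonds = rng.normal(scale=l / np.sqrt(3.0), size=(N, 3))   # variance l^2/3 per axis
    return np.vstack([np.zeros(3), np.cumsum(bonds, axis=0)])

N, samples, n, m = 200, 5000, 150, 30
sq = np.mean([np.sum((c[n] - c[m]) ** 2)
              for c in (gaussian_chain(N) for _ in range(samples))])
print(sq, (n - m) * 1.0**2)   # the two numbers should be close (about 120)
```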
As was implicitly done in the section for spatial obstructions, we take the suffix n {\displaystyle n} to a continuous limit and replace R → n − R → n − 1 {\displaystyle {\vec {R}}_{n}-{\vec {R}}_{n-1}} by ∂ R → n / ∂ n {\displaystyle \partial {\vec {R}}_{n}/\partial n} . So now, our conformational distribution is expressed by:
The independent variable transformed from a vector into a function, meaning Ψ [ R → ( n ) ] {\displaystyle \Psi [{\vec {R}}(n)]} is now a functional . This formula is known as the Wiener distribution.
Assuming an external potential field U e ( R → ) {\displaystyle U_{e}({\vec {R}})} , the equilibrium conformational distribution described above will be modified by a Boltzmann factor:
An important tool in the study of a Gaussian chain conformational distribution is the Green function , defined by the path integral quotient:
The path integration is interpreted as a summation over all polymer curves R → ( n ) {\displaystyle {\vec {R}}(n)} that start from R → 0 = R → ′ {\displaystyle {\vec {R}}_{0}={\vec {R}}'} and terminate at R → N = R → {\displaystyle {\vec {R}}_{N}={\vec {R}}} .
For the simple zero-field case U e = 0 {\displaystyle U_{e}=0} , the Green function reduces back to:
In the more general case, G ( R → − R → ′ ; N ) {\displaystyle G({\vec {R}}-{\vec {R}}';N)} plays the role of a weight factor in the complete partition function for all possible polymer conformations:
There exists an important identity for the Green function that stems directly from its definition:
G ( R → , R → ′ ; N ) = ∫ d R → ″ G ( R → , R → ″ ; N − n ) G ( R → ″ , R → ′ ; n ) , ( 0 < n < N ) . {\displaystyle G({\vec {R}},{\vec {R}}';N)=\int d{\vec {R}}''G({\vec {R}},{\vec {R}}'';N-n)G({\vec {R}}'',{\vec {R}}';n),\quad (0<n<N).}
This equation has a clear physical significance, which might also serve to elucidate the concept of the path integral:
The product G ( R → , R → ″ ; N − n ) G ( R → ″ , R → ′ ; n ) {\displaystyle \textstyle G({\vec {R}},{\vec {R}}'';N-n)G({\vec {R}}'',{\vec {R}}';n)} expresses the weight factor for a chain which starts at R ′ {\displaystyle R'} , passes through R ″ {\displaystyle R''} in n {\displaystyle n} steps, and ends at R {\displaystyle R} after N {\displaystyle N} steps. The integration over all possible midpoints R ″ {\displaystyle R''} gives back the statistical weight for a chain starting at R ′ {\displaystyle R'} and terminating at R {\displaystyle R} . It should now be clear that the path integral is simply a sum over all possible literal paths the polymer can form between two fixed endpoints.
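This composition rule can be verified numerically for the free Gaussian chain. In one dimension the zero-field Green function is G(x − x′; N) = sqrt(3/(2πNl²)) exp(−3(x − x′)²/(2Nl²)), and the sketch below checks that integrating the product over the midpoint reproduces G(x − x′; N). It is only a consistency check with arbitrary numerical values, not part of the original derivation.

```python
import numpy as np

def G(dx, N, l=1.0):
    """Free one-dimensional Gaussian-chain Green function."""
    return np.sqrt(3.0 / (2.0 * np.pi * N * l**2)) * np.exp(-3.0 * dx**2 / (2.0 * N * l**2))

x, x0, N, n = 2.0, -1.0, 100.0, 37.0
xm = np.linspace(-200.0, 200.0, 40001)           # midpoints x''
lhs = G(x - x0, N)
rhs = np.trapz(G(x - xm, N - n) * G(xm - x0, n), xm)
print(lhs, rhs)                                   # the two values agree
```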
With the help of G ( R → , R → ′ ; N ) {\displaystyle G({\vec {R}},{\vec {R}}';N)} the average of any physical quantity A {\displaystyle A} can be calculated. Assuming A {\displaystyle \textstyle A} depends only on the position of the n {\displaystyle n} -th segment, then:
⟨ A ( R → n ) ⟩ = ∫ d R → N d R → n d R → 0 G ( R → N , R → n ; N − n ) G ( R → n , R → 0 ; n ) A ( R → n ) ∫ d R → N d R → 0 G ( R → N , R → 0 ; N ) {\displaystyle \left\langle A({\vec {R}}_{n})\right\rangle ={\frac {\displaystyle \int d{\vec {R}}_{N}~d{\vec {R}}_{n}~d{\vec {R}}_{0}~G({\vec {R}}_{N},{\vec {R}}_{n};N-n)G({\vec {R}}_{n},{\vec {R}}_{0};n)A({\vec {R}}_{n})}{\displaystyle \int d{\vec {R}}_{N}~d{\vec {R}}_{0}~G({\vec {R}}_{N},{\vec {R}}_{0};N)}}}
It stands to reason that A {\displaystyle A} may depend on more than one monomer. Assuming now that it depends on R → m {\displaystyle {\vec {R}}_{m}} as well as R → n {\displaystyle {\vec {R}}_{n}} , the average takes the form:
⟨ A ( R → n , R → m ) ⟩ = ∫ d R → N d R → n d R → m d R → 0 G ( R → N , R → n ; N − n ) G ( R → n , R → m ; n − m ) G ( R → m , R → 0 ; m ) A ( R → n , R → m ) ∫ d R → N d R → 0 G ( R → N , R → 0 ; N ) {\displaystyle \left\langle A({\vec {R}}_{n},{\vec {R}}_{m})\right\rangle ={\frac {\displaystyle \int d{\vec {R}}_{N}~d{\vec {R}}_{n}~d{\vec {R}}_{m}~d{\vec {R}}_{0}~G({\vec {R}}_{N},{\vec {R}}_{n};N-n)G({\vec {R}}_{n},{\vec {R}}_{m};n-m)G({\vec {R}}_{m},{\vec {R}}_{0};m)A({\vec {R}}_{n},{\vec {R}}_{m})}{\displaystyle \int d{\vec {R}}_{N}~d{\vec {R}}_{0}~G({\vec {R}}_{N},{\vec {R}}_{0};N)}}} (assuming n > m {\displaystyle n>m} )
With an obvious generalization for dependence on more monomers.
If one imposes the reasonable boundary conditions:
then with the help of a Taylor expansion for G ( R → , R → ′ ; N + Δ N ) {\displaystyle G({\vec {R}},{\vec {R}}';N+\Delta N)} , a differential equation for G {\displaystyle G} can be derived:
With the help of this equation the explicit form of G ( R → , R → ′ ; N ) {\displaystyle G({\vec {R}},{\vec {R}}';N)} is found for a variety of problems. Then, with a calculation of the partition function a host of statistical quantities can be extracted.
A different approach for finding the power dependence ⟨ R → 2 ⟩ ∝ N α {\displaystyle \left\langle {\vec {R}}^{2}\right\rangle \propto N^{\alpha }} caused by excluded volume effects is considered superior to the one previously presented. [ 6 ]
The field theory approach in polymer physics is based on an intimate relationship of polymer fluctuations and field fluctuations. The statistical mechanics of a many particle system can be described by a single fluctuating field. A particle in such an ensemble moves through space along a fluctuating orbit in a fashion that resembles a random polymer chain. The immediate conclusion to be drawn is that large groups of polymers may also be described by a single fluctuating field. As it turns out, the same can be said of a single polymer as well.
In analogy to the original path integral expression presented, the end to end distribution of the polymer now takes the form:
Our new path integrand consists of:
[ ∂ ∂ N − 1 2 M ∇ 2 + η ( R → ) ] P η ( N , L ) = δ ( 3 ) ( R → − R → ′ ) δ ( N ) {\displaystyle \left[{\frac {\partial }{\partial N}}-{\frac {1}{2M}}\nabla ^{2}+\eta ({\vec {R}})\right]P^{\eta }(N,L)=\delta ^{(3)}({\vec {R}}-{\vec {R}}')\delta (N)} with M {\displaystyle M} acting as an effective mass determined by the dimensionality and bond length.
Note that the inner integral is now also a path integral, so two spaces of function are integrated over - the polymer conformations - R → ( ν ) {\displaystyle {\vec {R}}(\nu )} and the scalar fields η ( R → ) {\displaystyle \eta ({\vec {R}})} .
These path integrals have a physical interpretation. The action A {\displaystyle {\mathcal {A}}} describes the orbit of a particle in a space dependent random potential η ( R → ) {\displaystyle \eta ({\vec {R}})} . The path integral over R → ( ν ) {\displaystyle {\vec {R}}(\nu )} yields the end to end distribution of the fluctuating polymer in this potential. The second path integral over η ( R → ) {\displaystyle \eta ({\vec {R}})} with the weight e − A [ η ] {\displaystyle e^{-{\mathcal {A}}[\eta ]}} accounts for the repulsive cloud of other chain elements. To avoid divergence, the η ( R → ) {\displaystyle \eta ({\vec {R}})} integration has to run along the imaginary field axis.
Such a field description for a fluctuating polymer has the important advantage that it establishes a connection with the theory of critical phenomena in field theory.
To find a solution for Φ ( R → , N ) {\displaystyle \Phi ({\vec {R}},N)} , one usually employs a Laplace transform and considers a correlation function similar to the statistical average ⟨ A ( R → n , R → m ) ⟩ {\displaystyle \left\langle A({\vec {R}}_{n},{\vec {R}}_{m})\right\rangle } formerly described, with the Green function substituted by a fluctuating complex field. In the common limit of large polymers ( N ≫ 1 ), the solutions for the end to end vector distribution correspond to the well developed regime studied in the quantum field theoretic approach to critical phenomena in many body systems. [ 14 ] [ 15 ]
Another simplifying assumption was taken for granted in the treatment presented thus far: all models described a single polymer. Obviously, a more physically realistic description will have to account for the possibility of interactions between polymers. In essence, this is an extension of the excluded volume problem.
To see this pictorially, one can imagine a snapshot of a concentrated polymer solution . Excluded volume correlations now take place not only within one single chain; an increasing number of contact points with other chains at increasing polymer concentration yields additional excluded volume. These additional contacts can have substantial effects on the statistical behavior of the individual polymer.
A distinction must be made between two different length scales. [ 16 ] One regime will be given by small end to end vector scales R 0 < ξ {\displaystyle R_{0}<\xi } . At these scales the chain piece experiences only correlations from itself, i.e., the classical self-avoiding behavior. For larger scales R 0 > ξ {\displaystyle R_{0}>\xi } self-avoiding correlations do not play a significant role and the chain statistics resemble a Gaussian chain. The critical value ξ {\displaystyle \xi } must be a function of the concentration. Intuitively, one significant concentration can already be found. This concentration characterizes the overlap between the chains. If the polymers just marginally overlap, one chain is occupied in its own volume. This gives:
C ∗ = N / R 0 3 ∼ N / N 3 σ = N 1 − 3 σ {\displaystyle C^{*}=N/R_{0}^{3}\sim N/N^{3\sigma }=N^{1-3\sigma }} Where we used R 0 ∼ N σ {\displaystyle R_{0}\sim N^{\sigma }}
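As a quick worked example, inserting the excluded-volume exponent σ = 3 / 5 {\displaystyle \sigma =3/5} obtained earlier gives:

```latex
C^{*}\sim N^{1-3\sigma}=N^{1-9/5}=N^{-4/5}.
```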
This is an important result: one immediately sees that for large chain lengths N, the overlap concentration is very small. The self-avoiding walk previously described is changed, and therefore the partition function is no longer ruled by the single-polymer excluded volume paths, but by the remaining density fluctuations, which are determined by the overall concentration of the polymer solution. In the limit of very large concentrations, imagined as an almost completely filled lattice , the density fluctuations become less and less important.
To begin with, let us generalize the path integral formulation to many chains.
The generalization for the partition function calculation is very simple and all that has to be done is to take into account the interaction between all the chain segments:
Z = ∫ ∏ α = 1 n p D R → α ( ν ) exp { − β H ( [ R → α ( ν ) ] ) } {\displaystyle Z=\int \prod _{\alpha =1}^{n_{p}}{\mathcal {D}}{\vec {R}}_{\alpha }(\nu )\exp\{-\beta {\mathcal {H}}([{\vec {R}}_{\alpha }(\nu )])\}}
Where the Boltzmann-weighted energy of a configuration is defined as:
β H ( [ R → α ( ν ) ] ) = 3 2 l 2 ∑ α = 1 n p ∫ 0 N α ( ∂ R → α ∂ ν ) 2 d ν + 1 2 σ ∑ α , β = 1 n p ∫ 0 N α d ν ∫ 0 N β d ν ′ δ ( R → α ( ν ) − R → β ( ν ′ ) ) {\displaystyle \displaystyle \beta {\mathcal {H}}([{\vec {R}}_{\alpha }(\nu )])={\frac {3}{2l^{2}}}\sum _{\alpha =1}^{n_{p}}\int _{0}^{N_{\alpha }}\left({\frac {\partial {\vec {R}}_{\alpha }}{\partial \nu }}\right)^{2}d\nu +{\frac {1}{2}}\sigma \sum _{\alpha ,\beta =1}^{n_{p}}\int _{0}^{N_{\alpha }}d\nu \int _{0}^{N_{\beta }}d\nu '\delta ({\vec {R}}_{\alpha }(\nu )-{\vec {R}}_{\beta }(\nu '))}
With n p {\displaystyle n_{p}} denoting the number of polymers.
This is generally not simple and the partition function cannot be computed exactly.
One simplification is to assume monodispersity , which means that all chains have the same length, or, mathematically: N α = N β ∀ α , β {\displaystyle N_{\alpha }=N_{\beta }\quad \forall \ \alpha ,\beta } .
Another problem is that the partition function contains too many degrees of freedom. The number of chains n p {\displaystyle n_{p}} involved can be very large and every chain has internal degrees of freedom, since they are assumed to be totally flexible. For this reason, it is convenient to introduce collective variables, which in this case is the polymer segment density:
ρ ( x → ) = 1 V ∑ α = 1 n p ∫ 0 N d ν δ ( x → − R → α ( ν ) ) . {\displaystyle \rho ({\vec {x}})={\frac {1}{V}}\sum _{\alpha =1}^{n_{p}}\int _{0}^{N}d\nu \delta ({\vec {x}}-{\vec {R}}_{\alpha }(\nu )).} with V {\displaystyle V} the total solution volume.
ρ ( x → ) {\displaystyle \rho ({\vec {x}})} can be viewed as a microscopic density operator whose value defines the density at an arbitrary point x → {\displaystyle {\vec {x}}} .
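As an illustration of this collective variable (purely a toy sketch with arbitrary parameters, not the RPA calculation itself, and leaving out the overall 1/V normalization in the definition above), one can estimate a binned segment number density from a set of sampled Gaussian chains:

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_chain(N, l=1.0):
    bonds = rng.normal(scale=l / np.sqrt(3.0), size=(N, 3))
    return np.vstack([np.zeros(3), np.cumsum(bonds, axis=0)])

def segment_density(n_p=50, N=100, box=40.0, bins=20):
    """Binned estimate of the segment number density for n_p chains in a cubic box.

    Chains are started at random points and wrapped periodically into the box;
    the histogram plays the role of the delta functions in the definition above.
    """
    edges = np.linspace(0.0, box, bins + 1)
    counts = np.zeros((bins, bins, bins))
    for _ in range(n_p):
        start = rng.uniform(0.0, box, size=3)
        pos = (gaussian_chain(N) + start) % box
        h, _ = np.histogramdd(pos, bins=(edges, edges, edges))
        counts += h
    volume_per_cell = (box / bins) ** 3
    return counts / volume_per_cell      # segments per unit volume in each cell

rho = segment_density()
print(rho.mean())   # ~ n_p * (N + 1) / box**3, the average segment concentration
```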
The transformation H ( [ R → α ( ν ) ] ) → H ( [ ρ ( x → ) ] ) {\displaystyle {\mathcal {H}}([{\vec {R}}_{\alpha }(\nu )])\rightarrow {\mathcal {H}}([\rho ({\vec {x}})])} is less trivial than one might imagine and cannot be carried out exactly. The final result corresponds to the so-called random phase approximation (RPA), which has been frequently used in solid-state physics . To explicitly calculate the partition function using the segment density, one has to switch to reciprocal space , change variables and only then execute the integration. For a detailed derivation, see [ 13 ] [ 17 ] . With the partition function obtained, a variety of physical quantities can be extracted as previously described.
Path integration is the method thought to be used by animals for dead reckoning .
Charles Darwin first postulated an inertially-based navigation system in animals in 1873. [ 1 ] Studies beginning in the middle of the 20th century confirmed that animals could return directly to a starting point, such as a nest, in the absence of vision and having taken a circuitous outwards journey. This shows that they can use cues to track distance and direction in order to estimate their position, and hence how to get home. This process was named path integration to capture the concept of continuous integration of movement cues over the journey. Manipulation of inertial cues confirmed that at least one of these movement (or idiothetic ) cues is information from the vestibular organs , which detect movement in the three dimensions . Other cues probably include proprioception (information from muscles and joints about limb position), motor efference (information from the motor system telling the rest of the brain what movements were commanded and executed), and optic flow (information from the visual system signaling how fast the visual world is moving past the eyes). Together, these sources of information can tell the animal which direction it is moving, at what speed, and for how long. In addition, sensitivity to the Earth's magnetic field for underground animals (e.g., mole rat ) can give path integration. [ 2 ]
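The computational content of path integration is simply the running integration of self-motion cues. Below is a minimal, purely illustrative sketch (not a model of any particular species or brain circuit, and with made-up inputs): accumulate displacement from heading and speed signals at each time step; the negative of the accumulated vector then points back home.

```python
import numpy as np

def path_integrate(headings, speeds, dt=1.0):
    """Accumulate position from per-step heading (radians) and speed cues.

    Returns the estimated position and the home vector (pointing back to the start).
    """
    steps = np.column_stack([np.cos(headings), np.sin(headings)]) * (np.asarray(speeds) * dt)[:, None]
    position = steps.sum(axis=0)
    return position, -position

# A circuitous outward journey: the home vector still points straight back to the start.
rng = np.random.default_rng(3)
headings = rng.uniform(0.0, 2.0 * np.pi, size=500)
speeds = rng.uniform(0.5, 1.5, size=500)
pos, home = path_integrate(headings, speeds)
print(pos, home)
```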
Studies in arthropods , most notably in the Sahara desert ant ( Cataglyphis bicolor ), reveal the existence of highly effective path integration mechanisms that depend on determination of directional heading (by polarized light or sun position) and distance computations (by monitoring leg movement or optical flow). [ 3 ]
In mammals, three important discoveries shed light on this.
The first, in the early 1970s, is that neurons in the hippocampal formation , called place cells , respond to the position of the animal.
The second, in the early 1990s, is that neurons in neighboring regions (including anterior thalamus and post- subiculum ), called head direction cells , respond to the head direction of the animal. This enables a much more fine-grained study of path integration since it is possible to manipulate movement information and see how place and head direction cells respond (a much simpler procedure than training an animal, which is very slow).
The third finding was that neurons in the dorso-medial entorhinal cortex , which feeds information to the place cells in the hippocampus, fire in a metrically regular way across the whole surface of a given environment. The activity patterns of these grid cells look very much like a hexagonally organized sheet of graph paper , and suggest a possible metric system that place cells can use to compute distances. Whether place and grid cells actually compute a path integration signal remains to be seen, but computational models exist suggesting this is plausible. Certainly, brain damage to these regions seems to impair the ability of animals to path integrate.
David Redish states that "The carefully controlled experiments of Mittelstaedt and Mittelstaedt (1980) and Etienne (1987) have demonstrated conclusively that this ability [path integration in mammals] is a consequence of integrating internal cues from vestibular signals and motor efferent copy". [ 4 ] | https://en.wikipedia.org/wiki/Path_integration |
In theoretical computer science , in particular in term rewriting , a path ordering is a well-founded strict total order (>) on the set of all terms such that
where ( . >) is a user-given total precedence order on the set of all function symbols .
Intuitively, a term f (...) is bigger than any term g (...) built from terms s i smaller than f (...) using a lower-precedence root symbol g .
In particular, by structural induction , a term f (...) is bigger than any term containing only symbols smaller than f .
A path ordering is often used as reduction ordering in term rewriting, in particular in the Knuth–Bendix completion algorithm .
As an example, a term rewriting system for " multiplying out " mathematical expressions could contain a rule x *( y + z ) → ( x * y ) + ( x * z ). In order to prove termination , a reduction ordering (>) must be found with respect to which the term x *( y + z ) is greater than the term ( x * y )+( x * z ). This is not trivial, since the former term contains both fewer function symbols and fewer variables than the latter. However, setting the precedence (*) . > (+), a path ordering can be used, since both x *( y + z ) > x * y and x *( y + z ) > x * z is easy to achieve.
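The comparison in this example can be mechanized. Below is a rough sketch of a multiset path ordering check in Python, written only to illustrate the idea (it is not an optimized or fully general implementation); terms are tuples whose first element is the function symbol, variables are plain strings, and the precedence is a user-supplied dictionary.

```python
def is_var(t):
    return isinstance(t, str)

def occurs(x, t):
    """Does variable x occur in term t?"""
    return t == x if is_var(t) else any(occurs(x, a) for a in t[1:])

def gt(s, t, prec):
    """s > t in a (simplified) multiset path ordering with precedence dict prec."""
    if s == t or is_var(s):
        return False
    if is_var(t):                       # s > variable t iff t occurs strictly inside s
        return occurs(t, s)
    f, s_args = s[0], s[1:]
    g, t_args = t[0], t[1:]
    if any(si == t or gt(si, t, prec) for si in s_args):   # some argument of s dominates t
        return True
    if prec[f] > prec[g] and all(gt(s, tj, prec) for tj in t_args):
        return True
    if f == g and multiset_gt(list(s_args), list(t_args), prec):
        return True
    return False

def multiset_gt(ms, ns, prec):
    """Multiset extension: cancel equal elements, then every leftover n must be dominated."""
    for m in list(ms):
        if m in ns:
            ms.remove(m)
            ns.remove(m)
    return bool(ms) and all(any(gt(m, n, prec) for m in ms) for n in ns)

prec = {'*': 2, '+': 1}
lhs = ('*', 'x', ('+', 'y', 'z'))                                # x*(y+z)
print(gt(lhs, ('*', 'x', 'y'), prec))                            # True
print(gt(lhs, ('*', 'x', 'z'), prec))                            # True
print(gt(lhs, ('+', ('*', 'x', 'y'), ('*', 'x', 'z')), prec))    # True: the rule decreases the term
```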
There may also be systems for certain general recursive functions , for example a system for the Ackermann function may contain the rule A( a + , b + ) → A( a , A( a + , b )), [ 1 ] where b + denotes the successor of b .
Given two terms s and t , with root symbols f and g , respectively, their relation is decided by first comparing their root symbols.
The latter variations include:
Dershowitz, Okada (1988) list more variants, and relate them to Ackermann 's system of ordinal notations . In particular, an upper bound given on the order types of recursive path orderings with n function symbols is φ( n ,0), using Veblen's function for large countable ordinals. [ 7 ]
The multiset path ordering (>) can be defined as follows: [ 9 ]
where
More generally, an order functional is a function O mapping an ordering to another one, and satisfying the following properties: [ 11 ]
The multiset extension, mapping (>) above to (>>) above is one example of an order functional: (>>)= O (>).
Another order functional is the lexicographic extension, leading to the lexicographic path ordering . | https://en.wikipedia.org/wiki/Path_ordering_(term_rewriting) |
Pathatrix is a high-volume recirculating immunomagnetic capture system developed by Thermo Fisher Scientific (with supplier parts by Life Technologies ) [ 1 ] for the detection of pathogens in food and environmental samples.
Pathatrix and its Pathatrix Recirculating Immunomagnetic Separation System (RIMS) was used in 2006 to detect the E. coli O157:H7 strain in contaminated spinach using a polymerase chain reaction (PCR). The Pathatrix system is used by regulatory agencies and food companies around the world as a reliable method for detecting pathogens in food.
Unlike other detection methods, Pathatrix allows the entire pre-enriched sample or large pooled samples to be recirculated over antibody-coated paramagnetic beads. It can specifically isolate pathogens directly from food samples and in conjunction with quantitative PCR can provide results within hours. It is also used to improve the performance of other rapid methods such as PCR , lateral flow , ELISA and chromogenic media by reducing or eliminating the need for lengthy pre-enrichment and/or selective enrichment steps. [ 1 ] The Pathatrix is useful in pathogen labs that would be running food samples and looking for foodborne diseases .
The Pathatrix is a rapid test method and Pathatrix pooling allows the screening of large numbers of food samples in a highly cost-effective way for specific pathogens such as E. coli O157, Salmonella or Listeria monocytogenes . [ 1 ]
The Pathatrix will selectively bind and purify the target organism from a comprehensive range of complex food matrices (including raw ground beef , chocolate, peanut butter , leafy greens , spinach, tomatoes). The Pathatrix is a microbial detection system that allows for the entire sample to be analyzed. | https://en.wikipedia.org/wiki/Pathatrix |
PathoPhenoDB is a biological database . [ 1 ] The database connects pathogens to their phenotypes using multiple databases such as NCBI, Human Disease Ontology [ 2 ] Human Phenotype Ontology , [ 3 ] Mammalian Phenotype Ontology, [ 4 ] PubChem , SIDER [ 5 ] and CARD . [ 6 ] Pathogen-disease associations were gathered mainly through the CDC and the List of Infectious Diseases page on Wikipedia. The manner by which they assigned taxonomy was semi-automatic. When mapped against NCBI Taxonomy, if the pathogen was not an exact match, it was then mapped to the parent class. PathoPhenoDB employs NPMI [ 7 ] in order to filter pairs based on their co-occurrence statistics.
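Normalized pointwise mutual information (NPMI) is a co-occurrence statistic that rescales PMI to the range [−1, 1]. The snippet below shows the standard formula on made-up counts; it only illustrates the kind of co-occurrence filtering described, not PathoPhenoDB's actual pipeline or thresholds.

```python
import math

def npmi(n_xy, n_x, n_y, n_total):
    """Normalized pointwise mutual information of events x and y from raw counts.

    NPMI = PMI / (-log p(x, y)), with PMI = log( p(x, y) / (p(x) p(y)) ).
    Ranges from -1 (never together) through 0 (independent) to 1 (always together).
    """
    p_xy = n_xy / n_total
    p_x, p_y = n_x / n_total, n_y / n_total
    pmi = math.log(p_xy / (p_x * p_y))
    return pmi / (-math.log(p_xy))

# Hypothetical counts: a pathogen and a disease mentioned together in 40 of 10,000 articles.
print(npmi(n_xy=40, n_x=120, n_y=200, n_total=10_000))   # values near 1 suggest a strong association
```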
Pathogen avoidance (also parasite avoidance or pathogen disgust ) refers to the theory that the disgust response, in humans, is an adaptive system that guides behavior to avoid infection caused by parasites such as viruses , bacteria , fungi , protozoa , helminth worms , arthropods and social parasites . [ 1 ] [ 2 ] [ 3 ] Pathogen avoidance is a psychological mechanism associated with the behavioral immune system . Pathogen avoidance has been discussed as one of the three domains of disgust which also include sexual and moral disgust. [ 4 ]
In nature, controlling or avoiding pathogens is an essential fitness strategy because disease-causing agents are ever-present. [ 5 ] Pathogens reproduce rapidly at the expense of their hosts' fitness, which creates a coevolutionary arms race between pathogen transmission and host avoidance. [ 6 ] [ 7 ] For a pathogen to move to a new host, it must exploit regions of the body that serve as points of contact between current and future hosts such as the mouth, the skin, the anus and the genitals. [ 4 ] To avoid the cost of infection, organisms require counteradaptations to prevent pathogen transmission, by defending entry points such as the mouth and skin and avoiding other individuals' exit points and the substances exiting these points such as feces and sneeze droplets. [ 4 ] Pathogen avoidance provides the first line of defense by physically avoiding conspecifics , other species, objects or locations that could increase vulnerability to pathogens. [ 4 ]
The pathogen avoidance theory of disgust predicts that behavior that reduces contact with pathogens will have been under strong selection throughout the evolution of free-living organisms and should be prevalent throughout the Animalia kingdom. [ 8 ] Compared to the alternative, facing the infectious threat, avoidance likely provides a reduction in exposure to pathogens and in energetic costs associated with activation of the physiological immune response . [ 9 ] These behaviors are found throughout the animal literature, particularly amongst social animals. [ 2 ]
In humans, the disgust responses are the primary mechanism for avoiding infection through behavior triggered by sensory cues. [ 10 ] [ 1 ] Tybur argues that pathogen disgust requires two psychological mechanisms: detection systems that recognize input cues associated with the presence of pathogens and integration systems that weigh cue-based pathogen threats with other fitness relevant factors and generate withdrawal or avoidance behaviors appropriately. [ 4 ]
The genetic underpinnings of these neural mechanisms are to date, not well understood. [ 10 ] There is some evidence to suggest that humans are capable of detecting visual and olfactory sickness cues before overt cues for the disgust response are produced. [ 11 ]
Pathogens are typically too small to be directly observed and so require the presence of observable cues that tend to co-occur with them. [ 2 ] These inputs take the form of recognizable cues.
Tybur proposed a model of how an information-processing system might be structured. In this model, perceptual systems (vision, olfaction, etc.) monitor the environment for cues to pathogens. [ 4 ] Then, a mechanism integrates cues from the different perceptual systems and estimates a pathogen index, an internal estimate of the probability that pathogens are present based on the reliability and detection of cues. Finally, context-dependent avoidance can only occur if additional information is taken as input: other mechanisms must exist that function to trade off pathogen presence against other fitness-impacting dimensions across various contexts. [ 12 ] The expected value of contact is a downstream index that integrates other indices relevant to the costs and benefits of contact, which then regulates approach versus avoidance in an adaptive manner. This model is consistent with several empirical findings of how additional variables such as sexual value, nutrient status, kinship status, hormonal status and immune function also influence responses to pathogen cues. [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ]
Signal detection errors are prevalent in the pathogen avoidance system. There are two types of errors, depending on whether pathogens are actually present: a false alarm, where a pathogen avoidance response is deployed needlessly, and a miss, where a pathogen avoidance response is not deployed in the presence of infection risk. [ 12 ] The cost of not mounting an avoidance response in the presence of infection risk is assumed to be greater, suggesting that selection may favor greater sensitivity to pathogen cues at the expense of specificity. [ 12 ] This is thought to explain the law of contagion, wherein objects in contact with an infectious cue are themselves treated as infectious. [ 18 ] [ 19 ]
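A toy sketch of this kind of trade-off (purely illustrative, not a published model or parameterization, with made-up cue weights): combine cue evidence into a pathogen probability, then avoid whenever the expected cost of contact exceeds its expected benefit, with the cost of a miss set much higher than the cost of a false alarm.

```python
def pathogen_index(cues):
    """Combine independent cue likelihood ratios into a probability that pathogens are present.

    `cues` maps cue names to (present, likelihood_ratio); the values are invented for illustration.
    """
    odds = 0.05 / 0.95                      # assumed low prior odds of contamination
    for present, likelihood_ratio in cues.values():
        if present:
            odds *= likelihood_ratio
    return odds / (1.0 + odds)

def decide(cues, benefit_of_contact, cost_of_infection):
    """Avoid when the expected infection cost outweighs the benefit of approaching."""
    p = pathogen_index(cues)
    return "avoid" if p * cost_of_infection > benefit_of_contact else "approach"

cues = {"odor": (True, 8.0), "discoloration": (True, 4.0), "visible_lesions": (False, 15.0)}
# The asymmetry between a miss and a false alarm is captured by a large infection cost,
# which biases the system toward avoidance (the sensitivity-over-specificity bias).
print(decide(cues, benefit_of_contact=1.0, cost_of_infection=20.0))
```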
Hosts and parasites are under reciprocal evolutionary selection for hosts to acquire adaptations to prevent pathogen transmission and for pathogens to acquire traits to evade host defenses; this is known as host-parasite coevolution . [ 20 ]
Many parasitic species manipulate the behavior of their hosts in order to increase the probability of transmission and completion of a parasite's lifecycle; these are sometimes referred to as behavior-altering parasites . This is a widespread adaptive strategy that increases fitness benefits for the parasite. [ 21 ] Parasites can affect host behavior in multiple ways by altering host activity, the host's microenvironment or both. [ 22 ] A comparison across host and parasite taxa revealed that vertebrates that were infected were more likely to have impaired reactions to predators as a result of manipulation, while infection in invertebrates led to an increase in the host coming into contact with predators. [ 22 ]
Women consistently demonstrate higher disgust sensitivity than men. [ 23 ] Evidence suggests that women respond more sensitively to disease threats than men. [ 23 ] [ 24 ] [ 25 ] This is hypothesized to be consistent with the enhanced evolutionary role in women for protecting their offspring. [ 23 ]
Sexual behavior with another individual, such as intercourse, is a major source of pathogenic risk, particularly for bacterial or viral infection. [ 26 ] Research has found a negative relationship between sexual arousal and disgust, indicating that when sexual arousal increases, disgust responses decrease. [ 13 ] Additional evidence points to variation in pathogen avoidance traits and their relationship with sexual behavior. Individuals with high trait-level pathogen avoidance are less motivated to have sex with multiple partners. [ 27 ] [ 28 ] [ 29 ] [ 30 ] This suggests that individuals with a more active behavioral immune system might perceive the costs of sexual activity with multiple partners as higher than those with a less active behavioral immune system. [ 31 ]
Distinct properties of parasite transmission of aquatic and terrestrial ecosystems lead to differences in the avoidance behaviors in these environments, however, the mechanisms are quite similar. [ 32 ] For example, marine parasites are estimated to spread at a rate two times faster than terrestrial counterparts due to a combination of the increased viscosity and density of seawater and the movement of water through tides and currents. [ 33 ]
Researchers have suggested that elements of a conservative political orientation function to reduce individual exposure to infectious agents. [ 34 ] [ 35 ] These studies found that the relationship between pathogen avoidance and social conservatism was statistically robust. [ 34 ] Multiple mechanisms have been proposed as pathogen-neutralizing aspects of conservatism such as in-group favoritism , [ 34 ] cultural evolution favoring pathogen-neutralizing traditions and rituals, [ 36 ] and advocating for tradition-adherence within a community. [ 37 ] There is criticism of this association. Tybur argues that the relationship between social conservatism and pathogen avoidance is explained by sexual strategies associated with conservatism , such as orientation towards monogamous sexual strategies. [ 30 ] Another study, suggests that a generalized response to social resources is a more plausible mechanism underlying in-group favoritism than adaptations to pathogen stress. [ 38 ]
As parasite avoidance is a selective pressure imposed on all living animals, there are commonalities in strategies, mechanisms and consequences of pathogen avoidance behavior across species. [ 1 ]
Asian elephants ( Elephas maximus ) use branches to deter biting flies from areas of the body with thinner skin or that cannot be easily reached. [ 39 ] [ 40 ]
Rats use their saliva, which possesses bactericidal properties, [ 41 ] to protect themselves and potential mating partners from genital pathogens by licking their genitalia after copulation. [ 39 ] [ 42 ] Wood rats ( Neotoma fuscipes ) exhibit a unique behavior of placing bay leaves ( Umbellularia californica ) in or near their nest to prevent flea infestations. [ 5 ] [ 43 ] Canids will defecate and urinate away from the proximity of their dens to protect against oro-faecally transmitted parasites. [ 39 ] Newborns, who cannot exit the den, will have their fresh excreta consumed by their mothers; as parasitic ova take several days to hatch, this prevents infection. [ 39 ]
Mice avoid sick conspecifics. The detection of cues associated with disease is mediated by an olfactory subsystem, the vomeronasal organ. [ 44 ]
Bonobos rely on visual, tactile and olfactory cues to determine contamination risk when presented with contaminated food items versus the uncontaminated control group. [ 45 ] Mandrills engage in allo-grooming practices in which they avoid members of the same species with parasitic infection and rely on the smell of feces of conspecifics infected with parasites to discriminate those individuals. [ 46 ] Evidence has shown that both chimpanzees and Japanese macaques ( Macaca fuscata ) engage in food washing to remove food soiled with bodily fluids and dirt as a contaminant avoidance behavior strategy. [ 47 ] [ 48 ] [ 49 ] [ 50 ]
Birds engage in body maintenance, nest maintenance, avoidance of parasitized prey, migration and toleration as ectoparasite avoidance behavior. [ 51 ] These anti-parasite behaviors are central to bird hygiene. For example, birds preen to straighten and clean feathers but this also is used as a method to remove ectoparasites in their plumage. [ 52 ]
Social lobsters engage in specialized den selection by preferentially choosing dens with uninfected lobsters over dens with lobsters infected with the PaV1 virus. [ 53 ]
Bees have several steps to avoid parasitic invasion of a colony: avoidance of parasite contact, recognition of parasites and subsequent rejection, and the avoidance of social parasite exploitation. [ 54 ] Within the colony, parasite avoidance measures include having several queens , nest construction that prevents invasion, [ 55 ] [ 56 ] chemical cues, and coordinated defense. [ 54 ] In the event of parasitic invasion of a colony, bees resort to hygienic behavior as a last-resort defense against parasite infection, in which infected, dying and already dead bodies are removed from the nest. [ 57 ] [ 58 ] [ 59 ]
The most comprehensive data on avoidance behaviors has been generated for C. elegans . [ 10 ] They protect themselves from unfavorable effects of pathogenic bacteria by avoiding lawns on which Microbacterium nematophilum is found. [ 60 ] Evidence suggests that C. elegans relies on its olfactory system for pathogen avoidance, [ 61 ] by avoiding odors that mimic those infected by pathogenic bacterium. [ 62 ] Genetic analysis has revealed three mechanisms involved in avoidance behavior: learning of pathogen avoidance based on G-protein signaling in chemosensory neurons, [ 63 ] learning of pathogen avoidance behavior through serotonin signaling pathways, [ 62 ] physical avoidance and reduced oral uptake of pathogens. [ 64 ]
A study has suggested that the four pillars of human medicine: quarantine , medication , immunization and nursing or caring are extensions of behavioral defenses against pathogens seen in animals. [ 5 ] Hart argues that more complex applications of pathogen avoidance behaviors seen in medicine can be attributed to advanced linguistic and cognitive capabilities and higher rates of sickness in humans compared to animals. [ 5 ] [ 65 ] | https://en.wikipedia.org/wiki/Pathogen_avoidance |
Pathogen reduction using riboflavin and UV light is a method by which infectious pathogens in blood for transfusion are inactivated by adding riboflavin and irradiating with UV light . [ 1 ] [ 2 ] [ 3 ] This method reduces the infectious levels of disease-causing agents that may be found in donated blood components, while still maintaining good quality blood components for transfusion. This type of approach to increase blood safety is also known as “pathogen inactivation” in the industry.
Despite measures that are in place in the developed world to ensure the safety of blood products for transfusion, a risk of disease transmission still exists. Consequently, the development of pathogen inactivation/reduction technologies for blood products has been an ongoing effort in the field of transfusion medicine. A new procedure for the treatment of individual units of single-donor (apheresis) or whole blood–derived, pooled, platelets has recently been introduced. This technology uses riboflavin and light for the treatment of platelets and plasma.
Riboflavin and UV is only one of several ways that have been developed for photodynamic disinfection of blood products . [ 4 ] There are also light-independent methods of pathogen reduction in blood products.
This pathogen reduction process involves adding riboflavin (vitamin B2) to the blood component, which is then placed into an illuminator where it is exposed to UV light for about five to ten minutes. Exposure to UV light activates riboflavin and when it is associated with nucleic acids ( DNA and RNA ), riboflavin causes a chemical alteration to functional groups of the nucleic acids thereby making pathogens unable to replicate. [ 1 ] [ 5 ] [ 6 ] In this way the process prevents viruses, bacteria, parasites and white blood cells, from replicating and causing disease. [ 7 ] [ 8 ]
This method using riboflavin and UV light renders pathogens harmless by using a non-mutagenic, non-toxic method. Riboflavin and its photoproducts are already present in the human body and do not need to be removed from blood products prior to transfusion. [ 1 ]
The riboflavin and UV light method for pathogen reduction of platelets and plasma is in routine use in multiple countries throughout Europe. [ 11 ] [ 12 ] [ 13 ] [ 14 ] This same process is currently in development for the treatment of whole blood, resulting in pathogen reduction of the three components (RBCs, platelets and plasma). | https://en.wikipedia.org/wiki/Pathogen_reduction_using_riboflavin_and_UV_light |
In pathology , pathogenesis is the process by which a disease or disorder develops. It can include factors which contribute not only to the onset of the disease or disorder, but also to its progression and maintenance. [ 1 ] The word comes from Ancient Greek πάθος (pathos) ' suffering, disease ' and γένεσις (genesis) ' creation ' .
Types of pathogenesis include microbial infection , inflammation , malignancy and tissue breakdown . For example, bacterial pathogenesis is the process by which bacteria cause infectious illness. [ citation needed ]
Most diseases are caused by multiple processes. For example, certain cancers arise from dysfunction of the immune system ( skin tumors and lymphoma after a renal transplant , which requires immunosuppression ). Streptococcus pneumoniae is spread through contact with respiratory secretions , such as saliva , mucus , or cough droplets from an infected person, and colonizes the upper respiratory tract, where it begins to multiply. [ 2 ] [ 3 ] [ 4 ]
The pathogenic mechanisms of a disease (or condition) are set in motion by the underlying causes, which if controlled would allow the disease to be prevented . [ 5 ] Often, a potential cause is identified by epidemiological observations before a pathological link can be drawn between the cause and the disease. The pathological perspective can be directly integrated into an epidemiological approach in the interdisciplinary field of molecular pathological epidemiology . [ 6 ] Molecular pathological epidemiology can help to assess pathogenesis and causality by means of linking a potential risk factor to molecular pathologic signatures of a disease. [ 7 ] Thus, the molecular pathological epidemiology paradigm can advance the area of causal inference . [ 8 ] | https://en.wikipedia.org/wiki/Pathogenesis |
Pathogenic fungi are fungi that cause disease in humans or other organisms . Although fungi are eukaryotic , many pathogenic fungi are microorganisms . [ 1 ] Approximately 300 fungi are known to be pathogenic to humans; [ 2 ] their study is called " medical mycology ". Fungal infections are estimated to kill more people than either tuberculosis or malaria —about two million people per year. [ 3 ]
In 2022 the World Health Organization (WHO) published a list of fungal pathogens which should be a priority for public health action. [ 4 ]
Markedly more fungi are known to be pathogenic to plant life than those of the animal kingdom . [ 5 ] The study of fungi and other organisms pathogenic to plants is called plant pathology .
According to the World Health Organization (WHO) in 2022 pathogens of particular concern are: [ 4 ]
Candida species cause infections in individuals with deficient immune systems. Candida species tend to be the culprit of most fungal infections and can cause both systemic and superficial infection. [ 6 ] Th1-type cell-mediated immunity (CMI) is required for clearance of a fungal infection. Candida albicans is a kind of diploid yeast that commonly occurs among the human gut microflora . C. albicans is an opportunistic pathogen in humans. Abnormal over-growth of this fungus can occur, particularly in immunocompromised individuals. [ 7 ] C. albicans has a parasexual cycle that appears to be stimulated by environmental stress. [ 8 ]
C. auris , first described in 2009, is resistant to many frontline antifungal drugs, disinfectants, and heat, which makes it extremely difficult to eradicate. Like many fungal pathogens it mostly affects immunocompromised people; if in the blood or other organs and tissues, mortality is about 50%. [ 3 ]
Other species of Candida may be pathogenic as well, including Candida stellatoidea , C. tropicalis , C. pseudotropicalis , C. krusei , C. parapsilosis , and C. guilliermondii . [ 9 ]
The most common pathogenic species are Aspergillus fumigatus and Aspergillus flavus . Aspergillus flavus produces aflatoxin which is both a toxin and a carcinogen and which can potentially contaminate foods such as nuts. Aspergillus fumigatus and Aspergillus clavatus can cause allergic disease. Some Aspergillus species cause disease on grain crops, especially maize , and synthesize mycotoxins including aflatoxin . Aspergillosis is the group of diseases caused by Aspergillus . The symptoms include fever, cough, chest pain or breathlessness. Usually, only patients with weakened immune systems or with other lung conditions are susceptible. [ 1 ]
The spores of Aspergillus fumigatus are ubiquitous in the atmosphere. A. fumigatus is an opportunistic pathogen. It can cause potentially lethal invasive infection in immunocompromised individuals. [ 10 ] A. fumigatus has a fully functional sexual cycle that produces cleistothecia and ascospores . [ citation needed ]
Cryptococcus neoformans can cause a severe form of meningitis and meningo-encephalitis in patients with HIV infection and AIDS . The majority of Cryptococcus species live in the soil and do not cause disease in humans. Cryptococcus neoformans is the major human and animal pathogen. Papiliotrema laurentii and Naganishia albida , both formerly referred to Cryptococcus , have been known to occasionally cause moderate-to-severe disease in human patients with compromised immunity. Cryptococcus gattii is endemic to tropical parts of the continent of Africa and Australia and can cause disease in non-immunocompromised people. [ 1 ]
Infecting C. neoformans cells are usually phagocytosed by alveolar macrophages in the lung. [ 11 ] The invading C. neoformans cells may be killed by the release of oxidative and nitrosative molecules by these macrophages. [ 12 ] However some C. neoformans cells may survive within the macrophages. [ 11 ] The ability of the pathogen to survive within the macrophages probably determines latency of the disease, dissemination and resistance to antifungal agents. In order to survive in the hostile intracellular environment of the macrophage, one of the responses of C. neoformans is to upregulate genes employed in responses to oxidative stress . [ 11 ]
The haploid nuclei of C. neoformans can undergo nuclear fusion ( karyogamy ) to become diploid. These diploid nuclei may then undergo meiosis , including recombination , resulting in the formation of haploid basidiospores that are able to disperse. [ 13 ] Meiosis may facilitate repair of C. neoformans DNA in response to macrophage challenge. [ 13 ] [ 14 ]
Histoplasma capsulatum can cause histoplasmosis in humans, dogs and cats. The fungus is most prevalent in the Americas, India and southeastern Asia. It is endemic in certain areas of the United States . Infection is usually due to inhaling contaminated air.
Pneumocystis jirovecii (or Pneumocystis carinii) can cause a form of pneumonia in people with weakened immune systems , such as premature children, patients on immunosuppressive treatment, the elderly and AIDS patients. [ 15 ]
Stachybotrys chartarum or "black mold" can cause respiratory damage and severe headaches. It frequently occurs in houses and in regions that are chronically damp. [ 16 ]
Mammalian endothermy and homeothermy are potent nonspecific defenses against most fungi. [ 17 ] A comparative genomic study found that in opportunistic fungi there are few if any specialised virulence traits consistently linked to opportunistic pathogenicity of fungi in humans apart from the ability to grow at 37 °C. [ 18 ]
The skin , respiratory tract , gastrointestinal tract , and genitourinary tract are common bodily regions of fungal infection.
Studies have shown that hosts with higher levels of immune response cells such as monocytes / macrophages , dendritic cells , and invariant natural killer (iNK) T-cells exhibited greater control of fungal growth and protection against systemic infection. Pattern recognition receptors (PRRs) play an important role in inducing an immune response by recognizing specific fungal pathogens and initiating an immune response.
In the case of mucosal candidiasis , the cells that produce cytokine IL-17 are extremely important in maintaining innate immunity . [ 19 ]
A comprehensive comparison of distribution of opportunistic pathogens and stress-tolerant fungi in the fungal tree of life showed that polyextremotolerance and opportunistic pathogenicity consistently appear in the same fungal orders and that the co-occurrence of opportunism and extremotolerance (e.g. osmotolerance and psychrotolerance ) is statistically significant. This suggests that some adaptations to stressful environments may also promote fungal survival during the infection. [ 18 ] | https://en.wikipedia.org/wiki/Pathogenic_fungus |
On Earth, frozen environments such as permafrost and glaciers are known for their ability to preserve items, as they are too cold for ordinary decomposition to take place. This makes them a valuable source of archeological artefacts and prehistoric fossils , yet it also means that there are certain risks once ancient organic matter is finally subject to thaw. The best-studied risk is that of decomposition of such organic matter releasing a substantial quantity of carbon dioxide and methane , and thus acting as a notable climate change feedback . Yet, some scientists have also raised concerns about the possibility that some metabolically dormant bacteria and protists , as well as viruses, which are always metabolically inactive , may both survive the thaw and either threaten humans directly, or affect some of the animal or plant species important for human wellbeing.
As of 2023, there has been at least one recorded reemergence of anthrax , a pathogen long-known for its ability to hibernate in soils. There have also been several cases when truly novel microorganisms discovered in the frozen environments were successfully revived by researchers, or were found live in a recently thawed environment. So far, most only affect amoebas , and none have been known to pose a risk to humans or to crops . Of the already-studied pathogens, at least one anthrax outbreak has been connected to decades-old infected carrion thaw; yet, samples of influenza and smallpox pathogens have failed to survive the thaw even in laboratory conditions. Some researchers have also raised alarm about the potential of horizontal gene transfer between ancient and modern bacteria, and the risk it could exacerbate the challenge of antibiotic resistance . At the same time, other scientists consider these concerns overblown, and argue that ancient microorganisms are unlikely to make a difference today.
Johan Hultin made multiple attempts during the 20th century to culture the 1918 influenza virus he found in the frozen corpses of pandemic victims at Brevig Mission in Alaska . Every attempt failed, which suggested that the influenza virus is incapable of surviving the thaw after being frozen. In the 1990s, other scientists tried to revive pneumonia -causing bacteria and the smallpox virus, yet all of those attempts were unsuccessful as well. [ 2 ]
A group of researchers was able to extract potentially viable microscopic fungi , as well as the RNA of tomato mosaic virus , from Greenland ice cores up to 140,000 years old. [ 3 ] [ 4 ]
It was estimated that between 10¹⁷ and 10²¹ microorganisms, including fungi, bacteria and viruses , were already released every year due to ice melt, often directly into the ocean . According to researchers behind this estimate, only viruses with high abundance, the ability to be transported through ice, and the ability to resume disease cycles after the thaw would be of any concern.
In particular, caliciviruses of the Vesivirus genus were hypothesized as the most likely to spread from ancient ice, due to their high abundance and their use of ocean animals as hosts, where the migratory nature of many species of fish and birds could potentially enable a high transmission rate. Caliciviruses are poorly adapted to humans, and the only known infections were of marine biologists who worked closely with infected seals . However, Enteroviruses (a group which includes polioviruses , echoviruses and Coxsackie viruses ) and even influenza A were also considered less likely but still plausible candidates. [ 5 ]
In the 1960s, the United States Army Corps of Engineers dug out the Fox tunnel in Alaska to provide a test ground for better understanding permafrost before the construction of the Trans-Alaska Pipeline System . By 2005, scientists revisiting that tunnel had discovered frozen cells of Carnobacterium pleistocenium , with an estimated age of 32,000 years. Melting the ice revived them, resulting in the first documented case of an organism "coming back to life" from ancient ice. [ 6 ] None of the bacteria in the Carnobacterium genus are known to be pathogenic in humans, although some are known for spoiling chilled food products, and one species may cause disease in fish. [ 7 ]
A paper by two Russian scientists in Global Health Action , a journal published by Umeå University in Sweden , warned of the risk that the old burial grounds of cattle which had died of anthrax in the early 20th century may thaw and lead to the re-emergence of the viable pathogen. The authors noted that at the time, there were about 13,885 cattle burial grounds in Northern Russia , a substantial fraction of which did not meet sanitary standards, and some had their maps or other records missing. [ 8 ]
A previously unknown plant virus was revived from a frozen caribou feces deposit which was only 700 years old. It was named "ancient caribou feces associated virus" (aCFV) by its discoverers. The scientists also introduced this virus into the tissues of Nicotiana benthamiana , a common model species for plant pathogens. aCFV replicated successfully, yet was unable to cause more than an asymptomatic infection. According to the researchers, this either suggests a large genetic distance between the original host species of aCFV and more modern plants, or that N. benthamiana was simply a suboptimal host for this species. [ 9 ]
Also in 2014, two ~30,000-year-old giant virus species, Pithovirus sibericum [ 10 ] and Mollivirus sibericum, [ 11 ] were discovered in the Siberian permafrost and were found to have retained their infectivity. Like the other giant viruses with large genomes , they are larger in size than most bacteria and pose no risk to humans, as they infect other microorganisms like Acanthamoeba , a genus of amoebas. [ 11 ]
An anthrax outbreak occurred in the Yamal Peninsula region in Northern Russia . It was thought to be linked to the carcass of an infected reindeer, which had died 75 years earlier and thawed after a heatwave . Over 2,000 reindeer were infected and the disease spread to humans, hospitalizing dozens and killing a child before the outbreak was contained. [ 12 ]
The same team of French researchers behind the 2014 revival of two giant viruses also managed to revive 8 more ancient amoeba-infecting viral species. Four of these species were from the pandoravirus , cedratvirus (sometimes classified as a subgroup of pithovirus), megavirus and pacmanvirus (part of Asfarviridae ) families, which had not previously been revived from the permafrost. In addition, five more species from these families were found in already thawed permafrost, with no way to tell their age. The oldest revived virus was a 48,500-year-old Pandoravirus yedoma . [ 13 ] [ 14 ]
Scientists are split on whether revived microorganisms from the permafrost can pose a significant threat to humans. Jean-Michel Claverie, who led the most successful attempts to revive such "zombie viruses", believes that the public health threat from them is underestimated, and that while his research focused on amoeba-infecting viruses, this decision was in part motivated by the desire to avoid viral spillover as well as by convenience, and "one can reasonably infer" that other viral species would also remain infectious. [ 13 ] [ 14 ] Another professor, Birgitta Evengård, argued that permafrost thaw would eventually uncover microorganisms older than the human species, and to which there would be no preexisting immunity. In the same interview, Claverie even suggested that ancient microorganisms might have caused or contributed to the extinction of Neanderthals or mammoths, and that those may still be preserved in the permafrost. [ 15 ] On the other hand, University of British Columbia virologist Curtis Suttle argued that "people already inhale thousands of viruses every day, and swallow billions whenever they swim in the sea". In his view, the odds of a frozen virus replicating and then circulating to a sufficient extent to threaten humans "stretches scientific rationality to the breaking point". [ 16 ]
While some point to the 2016 Yamal Peninsula outbreak as an example of dangers associated with the thaw, [ 12 ] others argue that anthrax is not a pathogen which can spread contagiously between humans, and that it has been known for its ability to remain dormant in the soil since the Middle Ages , without requiring the cold to do so. [ 2 ] Some scientists have argued that Hultin's inability to revive thawed influenza virus, as well as other researchers' failure to revive pneumonia -causing bacteria or smallpox viruses show that pathogens adapted to warm-blooded hosts cannot survive being frozen for a prolonged period of time. [ 2 ] [ 17 ] However, many of the amoeba-infecting viruses revived in Claverie's 2023 research were taken from a ~27,000-year-old site with "a large amount of mammoth wool ", and one species, Pacmanvirus lupus , was found in the intestine of an equally old Siberian wolf carcass. [ 13 ]
There is some agreement that revived bacteria would be less dangerous than the revived viruses, since they would still be affected by broad-spectrum antibiotics and would not require wholly new treatments. [ 13 ] However, they would not be completely vulnerable either, due to the discovery of ancient antibiotic resistance genes in permafrost samples. Antibiotics to which permafrost bacteria have displayed at least some resistance include chloramphenicol , streptomycin , kanamycin , gentamicin , tetracycline , spectinomycin and neomycin . [ 18 ] Some scientists consider horizontal gene transfer of novel antibiotic resistance sequences from otherwise harmless ancient bacteria into modern pathogens to be a far more realistic threat than a revival of an ancient pathogen. [ 19 ] At the same time, other studies show that resistance levels in ancient bacteria to modern antibiotics remain lower than in the contemporary bacteria from the active (thawed) layer above them, [ 1 ] suggesting that this risk is "no greater" than in any other soil. [ 17 ]
According to a 2023 interview with Marion Koopmans , the head of the Netherlands ' Versatile Emerging infectious disease Observatory (VEO), precautions taken by the researchers studying potentially risky sites in Greenland include not starting new digs and only analyzing the locations which were already going to be studied by archeologists , wearing protective gear while in the field, and operating under high BSL standards in the lab. If a place was found to harbour a potentially dangerous microorganism, they have the authority to advise the Naalakkersuisut to shut down access to the area. [ 20 ] | https://en.wikipedia.org/wiki/Pathogenic_microorganisms_in_frozen_environments |
Pathogenicity islands ( PAIs ), as termed in 1990, are a distinct class of genomic islands acquired by microorganisms through horizontal gene transfer . [ 1 ] [ 2 ] Pathogenicity islands are found in both animal and plant pathogens. [ 2 ] Additionally, PAIs are found in both gram-positive and gram-negative bacteria . [ 2 ] They are transferred through horizontal gene transfer events such as transfer by a plasmid , phage , or conjugative transposon . [ 3 ] Although the general makeup of pathogenicity islands (PAIs) might vary throughout bacterial pathogen strains, all PAIs share characteristics common to genomic islands, which include virulence genes, functional mobility elements, and areas of homology to tRNA genes and direct repeats. [ 2 ] [ 4 ] PAIs thus enable microorganisms to induce disease and also contribute to microorganisms' ability to evolve. The spread of antibiotic resistance and, more generally, the conversion of non-pathogenic strains in natural environments to strains that infect animal and plant hosts with disease are two examples of the evolutionary and ecological changes brought about by the transmission and acquisition of PAIs among bacterial species. [ 5 ] Their impact on bacterial evolution is impossible to overlook, since if a PAI is acquired and stably incorporated, it can irreversibly change the bacterial genome . [ 2 ] [ 3 ]
One species of bacteria may have more than one PAI. For example, Salmonella has at least five. [ 6 ] An analogous genomic structure in rhizobia is termed a symbiosis island .
Pathogenicity islands (PAIs) are gene clusters incorporated in the genome , chromosomally or extrachromosomally, of pathogenic organisms, but are usually absent from nonpathogenic organisms of the same or closely related species. [ 2 ] [ 7 ] [ 8 ] They may be located on a bacterial chromosome or may be transferred within a plasmid or can be found in bacteriophage genomes. [ 2 ] Every genomic island has the following characteristics: a GC- content that differs from the surrounding DNA sequence, a connection with tRNA genes, the presence of repeats on both ends (flanking), and the capacity to recombine, which is usually shown by the presence of an integrase . [ 5 ] The GC-content and codon usage of pathogenicity islands often differ from those of the rest of the genome, potentially aiding in their detection within a given DNA sequence, unless the donor and recipient of the PAI have similar GC-content. [ 2 ]
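To make the GC-content signal concrete, the following Python sketch scans a sequence with a sliding window and reports windows whose GC fraction deviates strongly from the genome-wide average. It is only an illustration of the idea described above: the window size, step, and z-score cutoff are arbitrary choices rather than parameters of any published island-detection tool, and the toy genome is invented; real analyses also weigh codon usage, tRNA proximity, and the presence of mobility genes.

def gc_content(seq):
    # Fraction of G and C bases in a DNA string.
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def flag_gc_anomalies(genome, window=5000, step=1000, z_cutoff=2.0):
    # Slide a fixed-size window along the genome and keep windows whose GC
    # fraction deviates from the genome-wide mean by more than z_cutoff SDs.
    spans = [(i, i + window, gc_content(genome[i:i + window]))
             for i in range(0, len(genome) - window + 1, step)]
    values = [gc for _, _, gc in spans]
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5 or 1e-9
    return [(s, e, gc) for s, e, gc in spans if abs(gc - mean) / sd > z_cutoff]

# Toy example: an AT-rich background with one GC-rich insert standing in for
# a horizontally acquired island.
genome = "AT" * 20000 + "GC" * 3000 + "AT" * 20000
for start, end, gc in flag_gc_anomalies(genome):
    print(f"candidate island at {start}-{end}, GC fraction {gc:.2f}")

Run on the toy genome, the sketch flags only the windows covering the GC-rich insert, mirroring how an unusually high or low GC region can point to horizontally acquired DNA.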
The most basic kind of mobile genetic element is an insertion sequence (IS) , which usually has just one or two open reading frames that encode genes to make transposition easier. [ 5 ] Sections inside the PAI may be rearranged or deleted with the use of IS components. [ 2 ] These changes encourage adaptation and aid in the generation of alternative strains. [ 5 ] PAIs also contain transposons, which are more sophisticated forms of IS elements. The majority are surrounded by brief terminal inverted repeats that serve as homologous recombination sites, enhancing a PAI's stability. [ 5 ] Bacteriophage integrases , also found on pathogenicity islands (PAIs), are enzymes produced by bacteriophages to enable site-specific recombination between two recognition sequences , serving as another form of mobility element that enables PAI insertion into host DNA. [ 5 ] PAIs are often associated with tRNA genes, which serve as target sites for this integration event. [ 2 ] Given that integration may result in tRNA truncation, it is probable that only non-essential tRNA loci found in multiple locations, or those possessing wobble capacity (the ability of a 5' base of a tRNA anticodon to mispair with the third base of an mRNA codon), can become common integration sites. [ 2 ] They can be transferred as a single unit to new bacterial cells, thus conferring virulence to formerly benign strains. [ 7 ]
Pathogenicity islands carry genes encoding one or more virulence factors, including, but not limited to, adhesins , secretion systems (type III and IV secretion system), toxins , invasins , modulins , effectors , superantigens , iron uptake systems, o-antigen synthesis, serum resistance, immunoglobulin A proteases, apoptosis , capsule synthesis, and plant tumorigenesis via Agrobacterium tumefaciens . [ 2 ] Type III and type IV secretion systems, which are both expressed in Gram-negative bacteria, are the secretion systems most frequently linked to PAIs. [ 5 ] The bacterial membranes contain the type III secretion system (T3SS), which functions essentially as a molecular syringe. The needle-like apparatus secretes effectors, which go from the bacterial cell to the host cell via the tip of the apparatus, creating a hole in the membrane of the host cell. [ 5 ]
There are various combinations of regulation involving pathogenicity islands. The first combination is that the pathogenicity island contains the genes to regulate the virulence genes encoded on the PAI. [ 2 ] The second combination is that the pathogenicity island contains the genes to regulate genes located outside of the pathogenicity island. [ 2 ] Additionally, regulatory genes outside of the PAI may regulate virulence genes in the pathogenicity island. [ 2 ] Regulation genes typically encoded on PAIs include AraC-like proteins and two-component response regulators. [ 2 ]
PAIs can be considered unstable DNA regions as they are susceptible to deletions or mobilization. [ 2 ] This may be due to the structure of PAIs, with direct repeats, insertion sequences and association with tRNA that enables deletion and mobilization at higher frequencies. [ 3 ] Additionally, deletions of pathogenicity islands inserted in the genome can result in disrupting tRNA and subsequently affect the metabolism of the cell. [ 7 ] | https://en.wikipedia.org/wiki/Pathogenicity_island |
Pathogenomics is a field which uses high-throughput screening technology and bioinformatics to study encoded microbe resistance, as well as virulence factors (VFs), which enable a microorganism to infect a host and possibly cause disease. [ 1 ] [ 2 ] [ 3 ] [ 4 ] This includes studying genomes of pathogens which cannot be cultured outside of a host. [ 5 ] In the past, researchers and medical professionals found it difficult to study and understand pathogenic traits of infectious organisms. [ 6 ] With newer technology, pathogen genomes can be identified and sequenced in a much shorter time and at a lower cost, [ 7 ] [ 8 ] thus improving the ability to diagnose, treat, and even predict and prevent pathogenic infections and disease. [ 9 ] It has also allowed researchers to better understand genome evolution events - gene loss, gain, duplication, rearrangement - and how those events impact pathogen resistance and ability to cause disease. [ 8 ] This influx of information has created a need for bioinformatics tools and databases to analyze and make the vast amounts of data accessible to researchers, [ 10 ] [ 11 ] and it has raised ethical questions about the wisdom of reconstructing previously extinct and deadly pathogens in order to better understand virulence. [ 12 ]
During the earlier times when genomics was being studied, scientists found it challenging to sequence genetic information. [ 13 ] The field began to explode in 1977 when Fred Sanger , PhD, along with his colleagues, sequenced the DNA-based genome of a bacteriophage , using a method now known as the Sanger Method . [ 14 ] [ 15 ] [ 16 ] The Sanger Method for sequencing DNA exponentially advanced molecular biology and directly led to the ability to sequence genomes of other organisms, including the complete human genome. [ 14 ] [ 15 ]
The Haemophilus influenzae genome was one of the first organism genomes sequenced, in 1995, by J. Craig Venter and Hamilton Smith using whole genome shotgun sequencing. [ 17 ] [ 15 ] Since then, newer and more efficient high-throughput sequencing methods, such as Next Generation Genomic Sequencing (NGS) and Single-Cell Genomic Sequencing, have been developed. [ 15 ] While the Sanger method is able to sequence one DNA fragment at a time, NGS technology can sequence thousands of sequences at a time. [ 18 ] With the ability to rapidly sequence DNA, new insights developed, such as the discovery that since prokaryotic genomes are more diverse than originally thought, it is necessary to sequence multiple strains in a species rather than only a few. [ 19 ] E. coli was an example of why this is important, with genes encoding virulence factors in two strains of the species differing by at least thirty percent. [ 19 ] Such knowledge, along with more thorough study of genome gain, loss, and change, is giving researchers valuable insight into how pathogens interact in host environments and how they are able to infect hosts and cause disease. [ 19 ] [ 13 ]
With this high influx of new information, there has arisen a higher demand for bioinformatics so scientists can properly analyze the new data. In response, software and other tools have been developed for this purpose. [ 10 ] [ 20 ] Also, as of 2008, the amount of stored sequences was doubling every 18 months, making urgent the need for better ways to organize data and aid research. [ 21 ] In response, many publicly accessible databases and other resources have been created, including the NCBI pathogen detection program, the Pathosystems Resource Integration Centre (PATRIC), [ 22 ] Pathogenwatch, [ 23 ] the Virulence Factor Database (VFDB) of pathogenic bacteria, [ 24 ] [ 3 ] [ 21 ] and the Victors database of virulence factors in human and animal pathogens. [ 25 ] As of 2022, the most sequenced pathogens were Salmonella enterica and E. coli - Shigella. [ 10 ] The sequencing technologies, the bioinformatics tools, the databases, statistics related to pathogen genomes and the applications in forensics, epidemiology, clinical practice and food safety have been extensively reviewed. [ 10 ]
Pathogens may be prokaryotic ( archaea or bacteria ), single-celled eukarya or viruses . Prokaryotic genomes have typically been easier to sequence due to smaller genome size compared to Eukarya. Due to this, there is a bias toward reporting pathogenic bacterial behavior. Regardless of this bias in reporting, many of the dynamic genomic events are similar across all the types of pathogen organisms. Genomic evolution occurs via gene gain, gene loss, and genome rearrangement, and these "events" are observed in multiple pathogen genomes, with some bacterial pathogens experiencing all three. [ 13 ] Pathogenomics does not focus exclusively on understanding pathogen-host interactions , however. Insight into individual or cooperative pathogen behavior provides knowledge about the development or inheritance of pathogen virulence factors. [ 13 ] Through a deeper understanding of the small sub-units that cause infection, it may be possible to develop novel therapeutics that are efficient and cost-effective. [ 26 ]
Dynamic genomes with high plasticity are necessary to allow pathogens, especially bacteria, to survive in changing environments. [ 19 ] With the assistance of high throughput sequencing methods and in silico technologies, it is possible to detect, compare and catalogue many of these dynamic genomic events. Genomic diversity is important when detecting and treating a pathogen since these events can change the function and structure of the pathogen. [ 27 ] [ 28 ] There is a need to analyze more than a single genome sequence of a pathogen species to understand pathogen mechanisms. Comparative genomics is a methodology which allows scientists to compare the genomes of different species and strains. [ 29 ] There are several examples of successful comparative genomics studies, among them the analysis of Listeria [ 30 ] and Escherichia coli . [ 31 ] Some studies have attempted to address the difference between pathogenic and non-pathogenic microbes. This inquiry proves to be difficult, however, since a single bacterial species can have many strains, and the genomic content of each of these strains varies. [ 31 ]
Varying microbe strains and genomic content are caused by different forces, including three specific evolutionary events which have an impact on pathogen resistance and ability to cause disease: gene gain, gene loss, and genome rearrangement. [ 13 ]
Gene loss occurs when genes are deleted. The reason why this occurs is still not fully understood, [ 32 ] though it most likely involves adaptation to a new environment or ecological niche. [ 33 ] [ 34 ] Some researchers believe gene loss may actually increase fitness and survival among pathogens. [ 32 ] In a new environment, some genes may become unnecessary for survival, and so mutations are eventually "allowed" on those genes until they become inactive " pseudogenes ." [ 33 ] These pseudogenes are observed in organisms such as Shigella flexneri , Salmonella enterica , [ 35 ] and Yersinia pestis . [ 33 ] Over time, the pseudogenes are deleted, and the organisms become fully dependent on their host as either endosymbionts or obligate intracellular pathogens , as is seen in Buchnera , Mycobacterium leprae , and Chlamydia trachomatis . [ 33 ] These deleted genes are also called anti-virulence genes (AVGs) since it is thought they may have prevented the organism from becoming pathogenic. [ 33 ] In order to be more virulent, infect a host and remain alive, the pathogen had to get rid of those AVGs. [ 33 ] The reverse process can happen as well, as was seen during analysis of Listeria strains, which showed that a reduced genome size led to a non-pathogenic Listeria strain from a pathogenic strain. [ 30 ] Systems have been developed to detect these pseudogenes/AVGs in a genome sequence. [ 8 ]
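As a rough illustration of how such detection can work, the Python sketch below flags an in-frame stop codon that appears before the end of a coding sequence, one crude signal of a possible pseudogene. The sequences are invented and the check is deliberately naive; it is not the method of any particular published pseudogene- or AVG-detection system, which would also consider frameshifts, truncation relative to intact orthologues, and missing regulatory regions.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def has_premature_stop(coding_seq):
    # Walk the reading frame and report a stop codon occurring before the
    # final codon, a crude hint that the gene may be an inactive pseudogene.
    codons = [coding_seq[i:i + 3] for i in range(0, len(coding_seq) - 2, 3)]
    return any(codon in STOP_CODONS for codon in codons[:-1])

# Invented toy sequences: the second copy carries an internal TAA stop.
intact   = "ATGGCTGCTGCTGCTTAA"
degraded = "ATGGCTTAAGCTGCTTAA"
for name, seq in (("intact", intact), ("degraded", degraded)):
    print(name, "premature stop:", has_premature_stop(seq))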
One of the key forces driving gene gain is thought to be horizontal (lateral) gene transfer (LGT). [ 36 ] It is of particular interest in microbial studies because these mobile genetic elements may introduce virulence factors into a new genome. [ 37 ] A comparative study conducted by Gill et al. in 2005 postulated that LGT may have been the cause for pathogen variations between Staphylococcus epidermidis and Staphylococcus aureus . [ 38 ] There still, however, remains skepticism about the frequency of LGT, its identification, and its impact. [ 39 ] New and improved methodologies have been engaged, especially in the study of phylogenetics , to validate the presence and effect of LGT. [ 40 ] Gene gain and gene duplication events are balanced by gene loss, such that despite their dynamic nature, the genome of a bacterial species remains approximately the same size. [ 41 ]
Mobile genetic insertion sequences can play a role in genome rearrangement activities. [ 42 ] Pathogens that do not live in an isolated environment have been found to contain a large number of insertion sequence elements and various repetitive segments of DNA. [ 19 ] The combination of these two genetic elements is thought to help mediate homologous recombination . There are pathogens, such as Burkholderia mallei [ 43 ] and Burkholderia pseudomallei , [ 44 ] which have been shown to exhibit genome-wide rearrangements due to insertion sequences and repetitive DNA segments. [ 19 ] At this time, no studies demonstrate genome-wide rearrangement events directly giving rise to pathogenic behavior in a microbe. This does not mean it is not possible. Genome-wide rearrangements do, however, contribute to the plasticity of the bacterial genome, which may prime the conditions for other factors to introduce, or lose, virulence factors. [ 19 ]
Single Nucleotide Polymorphisms , or SNPs, allow for a wide array of genetic variation among humans as well as pathogens. They allow researchers to estimate a variety of factors: the effects of environmental toxins, how different treatment methods affect the body, and what causes someone's predisposition to illnesses. [ 45 ] SNPs play a key role in understanding how and why mutations occur. SNPs also allow scientists to map genomes and analyze genetic information. [ 45 ]
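As an illustration of the basic comparison underlying SNP identification, the short Python sketch below lists the positions at which two already-aligned sequences differ by a single base. The sequences are invented, and real SNP calling works from many sequencing reads, alignment and base-call quality, and population frequencies rather than a naive string comparison.

def naive_snps(reference, sample):
    # Assumes both sequences are the same length and already aligned; returns
    # (position, reference_base, sample_base) for every single-base mismatch.
    return [(i, r, s)
            for i, (r, s) in enumerate(zip(reference, sample))
            if r != s and r != "-" and s != "-"]

reference = "ATGGCGTACGTTAGC"
sample    = "ATGGCGTACATTAGC"
for pos, ref_base, alt_base in naive_snps(reference, sample):
    print(f"SNP at position {pos}: {ref_base} -> {alt_base}")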
The most recent definition of a bacterial species comes from the pre-genomic era. In 1987, it was proposed that bacterial strains showing >70% DNA·DNA re-association and sharing characteristic phenotypic traits should be considered to be strains of the same species. [ 46 ] The diversity within pathogen genomes makes it difficult to identify the total number of genes that are associated within all strains of a pathogen species. [ 46 ] It has been thought that the total number of genes associated with a single pathogen species may be unlimited, [ 46 ] although some groups are attempting to derive a more empirical value. [ 47 ] For this reason, it was necessary to introduce the concept of pan-genomes and core genomes. [ 48 ] Pan-genome and core genome literature also tends to have a bias towards reporting on prokaryotic pathogenic organisms. Caution may need to be exercised when extending the definition of a pan-genome or a core-genome to the other pathogenic organisms because there is no formal evidence of the properties of these pan-genomes. [ citation needed ]
A core genome is the set of genes found across all strains of a pathogen species. [ 46 ] A pan-genome is the entire gene pool for that pathogen species, and includes genes that are not shared by all strains. [ 46 ] Pan-genomes may be open or closed depending on whether comparative analysis of multiple strains reveals no new genes (closed) or many new genes (open) compared to the core genome for that pathogen species. [ 13 ] In the open pan-genome, genes may be further characterized as dispensable or strain specific. Dispensable genes are those found in more than one strain, but not in all strains, of a pathogen species. [ 48 ] Strain specific genes are those found only in one strain of a pathogen species. [ 48 ] The differences in pan-genomes are reflections of the life style of the organism. For example, Streptococcus agalactiae , which exists in diverse biological niches, has a broader pan-genome when compared with the more environmentally isolated Bacillus anthracis . [ 19 ] Comparative genomics approaches are also being used to understand more about the pan-genome. [ 49 ] Recent discoveries show that the number of new species continues to grow; with an estimated 10³¹ bacteriophages on the planet infecting 10²⁴ others per second, the continuous flow of genetic material being exchanged is difficult to imagine. [ 46 ]
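The set arithmetic behind these definitions can be sketched in a few lines of Python. The strain names and gene families below are invented for illustration; a real analysis would start from orthologue clustering of annotated genomes rather than hand-written sets.

# Invented per-strain gene content for three hypothetical strains.
strains = {
    "strain_A": {"geneA", "geneB", "geneC", "geneD"},
    "strain_B": {"geneA", "geneB", "geneC", "geneE"},
    "strain_C": {"geneA", "geneB", "geneF"},
}

pan_genome  = set().union(*strains.values())        # every gene seen in any strain
core_genome = set.intersection(*strains.values())   # genes shared by all strains

# Dispensable genes occur in more than one strain but not all of them;
# strain-specific genes occur in exactly one strain.
counts = {g: sum(g in genes for genes in strains.values()) for g in pan_genome}
dispensable     = {g for g, n in counts.items() if 1 < n < len(strains)}
strain_specific = {g for g, n in counts.items() if n == 1}

print("pan-genome:     ", sorted(pan_genome))
print("core genome:    ", sorted(core_genome))
print("dispensable:    ", sorted(dispensable))
print("strain-specific:", sorted(strain_specific))

In this toy example the core genome is {geneA, geneB}, geneC is dispensable, and the remaining genes are strain specific; whether adding further strains keeps enlarging the pan-genome is what distinguishes an open pan-genome from a closed one.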
Multiple genetic elements of human-affecting pathogens contribute to the transfer of virulence factors: plasmids , pathogenicity islands , prophages , bacteriophages, transposons, and integrative and conjugative elements. [ 13 ] [ 50 ] Pathogenicity islands and their detection are the focus of several bioinformatics efforts involved in pathogenomics. [ 51 ] [ 52 ] It is a common belief that "environmental bacterial strains" lack the capacity to harm or do damage to humans. However, recent studies show that bacteria from aquatic environments have acquired pathogenic strains through evolution. This gives the bacteria a wider range of genetic traits and can pose a potential threat to humans through greater resistance to antibiotics. [ 50 ]
Microbe-host interactions tend to overshadow the consideration of microbe-microbe interactions. Microbe-microbe interactions though can lead to chronic states of infirmity that are difficult to understand and treat. [ 9 ]
Biofilms are an example of microbe-microbe interactions and are thought to be associated with up to 80% of human infections. [ 53 ] Recently it has been shown that there are specific genes and cell surface proteins involved in the formation of biofilm. [ 54 ] These genes and also surface proteins may be characterized through in silico methods to form an expression profile of biofilm-interacting bacteria. [ 9 ] This expression profile may be used in subsequent analysis of other microbes to predict biofilm microbe behaviour, or to understand how to dismantle biofilm formation. [ 9 ]
Pathogens have the ability to adapt and manipulate host cells, taking full advantage of a host cell's cellular processes and mechanisms. [ 9 ]
A microbe may be influenced by hosts to either adapt to its new environment or learn to evade it. Insight into these behaviours will provide beneficial knowledge for potential therapeutics. The most detailed outline of host-microbe interaction initiatives is given by the Pathogenomics European Research Agenda. [ 9 ] Its report emphasizes the following features:
The diverse community within the gut has been heralded as vital for human health. There are a number of projects under way to better understand the ecosystems of the gut. [ 58 ] The sequence of commensal Escherichia coli strain SE11, for example, has already been determined from the faecal matter of a healthy human and promises to be the first of many studies. [ 59 ] Through genomic analysis and subsequent protein analysis, the beneficial properties of commensal flora will be investigated in hopes of understanding how to build a better therapeutic. [ 60 ]
The "eco-evo" perspective on pathogen-host interactions emphasizes the influence of ecology and the environment on pathogen evolution. [ 13 ] Dynamic genomic factors such as gene loss, gene gain and genome rearrangement are all strongly influenced by changes in the ecological niche where a particular microbial strain resides. Microbes may switch between being pathogenic and non-pathogenic due to changing environments. [ 30 ] This was demonstrated during studies of the plague, Yersinia pestis , which apparently evolved from a mild gastrointestinal pathogen to a very highly pathogenic microbe through dynamic genomic events. [ 61 ] In order for colonization to occur, there must be changes in biochemical makeup to aid survival in a variety of environments. This is most likely due to a mechanism allowing the cell to sense changes within the environment, thus influencing change in gene expression. [ 62 ] Understanding how these strain changes occur from being low or non-pathogenic to being highly pathogenic and vice versa may aid in developing novel therapeutics for microbial infections. [ 13 ]
Human health has greatly improved and the mortality rate has declined substantially since the Second World War because of improved hygiene due to changing public health regulations, as well as more readily available vaccines and antibiotics. [ 63 ] Pathogenomics will allow scientists to expand what they know about pathogenic and non-pathogenic microbes, thus allowing for new and improved vaccines. [ 63 ] Pathogenomics also has wider implications, including preventing bioterrorism. [ 63 ]
Reverse vaccinology is relatively new. While research is still being conducted, there have been breakthroughs with pathogens such as Streptococcus and the meningococcus ( Neisseria meningitidis ). [ 64 ] Traditional methods of vaccine production, such as biochemical and serological approaches, are laborious and unreliable, and they require the pathogens to be cultured in vitro to be effective. [ 65 ] New advances in genomic development help predict nearly all variations of pathogens, thus making advances for vaccines. [ 65 ] Protein-based vaccines are being developed to combat resistant pathogens such as Staphylococcus and Chlamydia . [ 64 ]
In 2005, the sequencing of the 1918 Spanish influenza virus was completed. Accompanied by phylogenetic analysis, it was possible to supply a detailed account of the virus' evolution and behavior, in particular its adaptation to humans. [ 66 ] Following the sequencing of the Spanish influenza, the pathogen was also reconstructed. When inserted into mice, the pathogen proved to be incredibly deadly. [ 67 ] [ 12 ] The 2001 anthrax attacks shed light on the possibility of bioterrorism as being more of a real than imagined threat. Bioterrorism was anticipated in the Iraq war, with soldiers being inoculated against a smallpox attack. [ 68 ] Using technologies and insight gained from reconstruction of the Spanish influenza, it may be possible to prevent future deadly, deliberately started outbreaks of disease. There is a strong ethical concern, however, as to whether the resurrection of old viruses is necessary and whether it does more harm than good. [ 12 ] [ 69 ] The best avenue for countering such threats is coordinating with organizations which provide immunizations. The increased awareness and participation would greatly decrease the effectiveness of a potential epidemic. An addition to this measure would be to monitor natural water reservoirs as a basis to prevent an attack or outbreak. Overall, communication between labs and large organizations, such as the Global Outbreak Alert and Response Network (GOARN), can lead to early detection and prevent outbreaks. [ 63 ] | https://en.wikipedia.org/wiki/Pathogenomics |
In mathematics , when a mathematical phenomenon runs counter to some intuition, then the phenomenon is sometimes called pathological . On the other hand, if a phenomenon does not run counter to intuition, it is sometimes called well-behaved or nice . These terms are sometimes useful in mathematical research and teaching, but there is no strict mathematical definition of pathological or well-behaved. [ 1 ]
A classic example of a pathology is the Weierstrass function , a function that is continuous everywhere but differentiable nowhere. [ 1 ] The sum of a differentiable function and the Weierstrass function is again continuous but nowhere differentiable; so there are at least as many such functions as differentiable functions. In fact, using the Baire category theorem , one can show that continuous functions are generically nowhere differentiable. [ 2 ]
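For concreteness, a standard form of the Weierstrass function, with Weierstrass's original sufficient conditions on the parameters, is

W(x) = \sum_{n=0}^{\infty} a^{n} \cos\!\left(b^{n} \pi x\right), \qquad 0 < a < 1, \quad b \text{ a positive odd integer}, \quad ab > 1 + \tfrac{3\pi}{2}.

Under these conditions the series converges uniformly, so W is continuous everywhere, yet it is differentiable at no point; Hardy later showed that the weaker conditions 0 < a < 1, b > 1 and ab ≥ 1 already suffice for nowhere-differentiability.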
Such examples were deemed pathological when they were first discovered. To quote Henri Poincaré : [ 3 ]
Logic sometimes breeds monsters. For half a century there has been springing up a host of weird functions, which seem to strive to have as little resemblance as possible to honest functions that are of some use. No more continuity, or else continuity but no derivatives, etc. More than this, from the point of view of logic, it is these strange functions that are the most general; those that are met without being looked for no longer appear as more than a particular case, and they have only quite a little corner left them.
Formerly, when a new function was invented, it was in view of some practical end. To-day they are invented on purpose to show our ancestors' reasonings at fault, and we shall never get anything more than that out of them.
If logic were the teacher's only guide, he would have to begin with the most general, that is to say, with the most weird, functions. He would have to set the beginner to wrestle with this collection of monstrosities. If you don't do so, the logicians might say, you will only reach exactness by stages.
Since Poincaré, nowhere differentiable functions have been shown to appear in basic physical and biological processes such as Brownian motion and in applications such as the Black-Scholes model in finance.
Counterexamples in Analysis is a whole book of such counterexamples. [ 4 ]
Another example of a pathological function is du Bois-Reymond's continuous function , which cannot be represented as a Fourier series . [ 5 ]
One famous counterexample in topology is the Alexander horned sphere , showing that topologically embedding the sphere S² in R³ may fail to separate the space cleanly. As a counterexample, it motivated mathematicians to define the tameness property, which suppresses the kind of wild behavior exhibited by the horned sphere, wild knot , and other similar examples. [ 6 ]
Like many other pathologies, the horned sphere in a sense plays on infinitely fine, recursively generated structure, which in the limit violates ordinary intuition. In this case, the topology of an ever-descending chain of interlocking loops of continuous pieces of the sphere in the limit fully reflects that of the common sphere, and one would expect the outside of it, after an embedding, to work the same. Yet it does not: it fails to be simply connected .
For the underlying theory, see Jordan–Schönflies theorem .
Counterexamples in Topology is a whole book of such counterexamples. [ 7 ]
Mathematicians (and those in related sciences) very frequently speak of whether a mathematical object—a function , a set , a space of one sort or another—is "well-behaved" . While the term has no fixed formal definition, it generally refers to the quality of satisfying a list of prevailing conditions, which might be dependent on context, mathematical interests, fashion, and taste. To ensure that an object is "well-behaved", mathematicians introduce further axioms to narrow down the domain of study. This has the benefit of making analysis easier, but produces a loss of generality of any conclusions reached.
In both pure and applied mathematics (e.g., optimization , numerical integration , mathematical physics ), well-behaved also means not violating any assumptions needed to successfully apply whatever analysis is being discussed.
The opposite case is usually labeled "pathological". It is not unusual to have situations in which most cases (in terms of cardinality or measure ) are pathological, but the pathological cases will not arise in practice—unless constructed deliberately.
The term "well-behaved" is generally applied in an absolute sense—either something is well-behaved or it is not. For example:
Unusually, the term could also be applied in a comparative sense:
Pathological examples often have some undesirable or unusual properties that make it difficult to contain or explain within a theory. Such pathological behaviors often prompt new investigation and research, which leads to new theory and more general results. Some important historical examples of this are:
At the time of their discovery, each of these was considered highly pathological; today, each has been assimilated into modern mathematical theory. These examples prompt their observers to correct their beliefs or intuitions, and in some cases necessitate a reassessment of foundational definitions and concepts. Over the course of history, they have led to more correct, more precise, and more powerful mathematics. For example, the Dirichlet function is Lebesgue integrable, and convolution with test functions is used to approximate any locally integrable function by smooth functions. [ Note 1 ]
Whether a behavior is pathological is by definition subject to personal intuition. Pathologies depend on context, training, and experience, and what is pathological to one researcher may very well be standard behavior to another.
Pathological examples can show the importance of the assumptions in a theorem. For example, in statistics , the Cauchy distribution does not satisfy the central limit theorem , even though its symmetric bell-shape appears similar to many distributions which do; it fails the theorem's requirement of a mean and standard deviation which exist and are finite.
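A small simulation makes the failure concrete: sample means of standard Cauchy draws never settle down as the sample grows, whereas means of draws from a finite-variance distribution do. The Python sketch below uses only the standard library, generating standard Cauchy variates as tan(π(U − 1/2)) for uniform U; the seed and sample sizes are arbitrary choices.

import math
import random

random.seed(0)

def cauchy_draws(n):
    # Standard Cauchy variates via the inverse-CDF (probability integral) method.
    return [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

for n in (100, 10_000, 1_000_000):
    cauchy_mean = sum(cauchy_draws(n)) / n
    normal_mean = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
    print(f"n = {n:>9,}: Cauchy sample mean {cauchy_mean:+10.3f}, "
          f"Normal sample mean {normal_mean:+8.4f}")

Because the mean of n independent standard Cauchy variables is itself standard Cauchy, the first column stays as erratic at a million draws as at a hundred, while the Normal column shrinks toward zero roughly like 1/sqrt(n).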
Some of the best-known paradoxes , such as Banach–Tarski paradox and Hausdorff paradox , are based on the existence of non-measurable sets . Mathematicians, unless they take the minority position of denying the axiom of choice , are in general resigned to living with such sets. [ citation needed ]
In computer science , pathological has a slightly different sense with regard to the study of algorithms . Here, an input (or set of inputs) is said to be pathological if it causes atypical behavior from the algorithm, such as a violation of its average case complexity , or even its correctness. For example, hash tables generally have pathological inputs: sets of keys that collide on hash values. Quicksort normally has O(n log n) time complexity, but deteriorates to O(n²) when it is given input that triggers suboptimal behavior.
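The quicksort case can be made concrete with a deliberately naive implementation that always takes the first element as the pivot; already-sorted input is then a pathological case that drives the comparison count toward quadratic growth. The sketch below is illustrative only (the input size and pivot rule are arbitrary), and production quicksorts avoid the problem with randomised or median-of-three pivot selection.

import random

def quicksort(items, counter):
    # Naive quicksort using the first element as pivot; counter[0] accumulates
    # the number of elements compared against a pivot during partitioning.
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    counter[0] += len(rest)
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left, counter) + [pivot] + quicksort(right, counter)

n = 500
inputs = {
    "shuffled input": random.sample(range(n), n),
    "sorted input (pathological)": list(range(n)),
}
for label, data in inputs.items():
    counter = [0]
    quicksort(data, counter)
    print(f"{label}: {counter[0]} comparisons for n = {n}")

On the shuffled input the count stays near n log n, while the sorted input forces about n(n-1)/2 comparisons, the quadratic behaviour described above.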
The term is often used pejoratively, as a way of dismissing such inputs as being specially designed to break a routine that is otherwise sound in practice (compare with Byzantine ). On the other hand, awareness of pathological inputs is important, as they can be exploited to mount a denial-of-service attack on a computer system. Also, the term in this sense is a matter of subjective judgment as with its other senses. Given enough run time, a sufficiently large and diverse user community (or other factors), an input which may be dismissed as pathological could in fact occur (as seen in the first test flight of the Ariane 5 ).
A similar but distinct phenomenon is that of exceptional objects (and exceptional isomorphisms ), which occurs when there are a "small" number of exceptions to a general pattern (such as a finite set of exceptions to an otherwise infinite rule). By contrast, in cases of pathology, often most or almost all instances of a phenomenon are pathological (e.g., almost all real numbers are irrational).
Subjectively, exceptional objects (such as the icosahedron or sporadic simple groups ) are generally considered "beautiful", unexpected examples of a theory, while pathological phenomena are often considered "ugly", as the name implies. Accordingly, theories are usually expanded to include exceptional objects. For example, the exceptional Lie algebras are included in the theory of semisimple Lie algebras : the axioms are seen as good, the exceptional objects as unexpected but valid.
By contrast, pathological examples are instead taken to point out a shortcoming in the axioms, requiring stronger axioms to rule them out. For example, requiring tameness of an embedding of a sphere in the Schönflies problem . In general, one may study the more general theory, including the pathologies, which may provide its own simplifications (the real numbers have properties very different from the rationals, and likewise continuous maps have very different properties from smooth ones), but also the narrower theory, from which the original examples were drawn.
This article incorporates material from pathological on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Pathological_(mathematics) |
A pathologists' assistant ( PA ) is a physician extender whose expertise lies in gross examination of surgical specimens as well as performing forensic , medicolegal , and hospital autopsies . [ 1 ]
In the United States , the profession is only licensed in two states: Nevada and New York . In other states, the scope of PAs falls under CLIA high complexity testing, which requires an associate's degree. [ 2 ]
PAs work under the indirect or direct supervision of a board certified anatomical pathologist , who ultimately renders a diagnosis based on the PA's detailed gross examination and/or tissue submission for microscopic evaluation. Requirements to become a pathologists' assistant include graduation from a National Accrediting Agency for Clinical Laboratory Sciences (NAACLS) [ 3 ] accredited education program and successfully passing the American Society for Clinical Pathology (ASCP) certification exam , which is not legally required in most states. The credentialing is a certification from the ASCP. Some states such as Nevada and New York require a license. All pathologists' assistants are allied health workers who need to be CLIA 88 compliant to perform these high complexity tasks with indirect/direct supervision. [ 1 ] With ongoing changes in health care , a growing elderly population, and a decreasing number of pathology residents , the PA is in high demand due to their high level of training and contribution to the overall efficiency of the pathology laboratory . [ 1 ]
In addition to the major responsibilities outlined above, a pathologists' assistant may also perform the following tasks (for a complete list, refer to Article III, Section B of the AAPA Bylaws [ 4 ] ):
While many PAs are employed in hospitals , they may also gain employment in private pathology laboratories/groups, medical examiner 's offices, morgues , government or reference laboratories, or universities , and may be self-employed and provide contract work. [ 1 ] According to a study published in Autonomic Pathology , PAs perform gross examinations on 56.5% of the total number of specimens submitted industry-wide, with a majority being biopsies . [ citation needed ]
The idea of physician extenders was conceived in 1966 by physician-educator Eugene A. Stead at Duke University , where the first physician assistant program was established. Three years later, also at Duke, Chairman of Pathology Dr. Thomas Kinney established the first pathologists’ assistant program. [ 1 ] To date, fourteen accredited programs have been established across the United States and Canada . [ 1 ]
Programs can apply to be NAACLS accredited. [ 5 ] Attending an accredited program is currently the only route to certification by the ASCP-BOC. [ 6 ] PathA programs collectively graduate approximately 118 students a year. As of 2010, just over 1,400 pathologists’ assistants are in practice. [ 1 ] The programs vary in details, but are two-year programs at the masters level, and include didactic and clinical exposure. The didactic year commonly includes an education in clinical anatomy , neuroscience , physiology , histology , pathology, pathologists’ assistant (clinical correlation)- specific courses, medical terminology , and inter-professional classes. Students then are placed in a clinical setting in affiliated hospitals and medical examiner's offices to learn prosection and autopsy techniques hands-on. [ 7 ]
Universities granting pathology assistant degrees include:
As of 9/1/2017, the programs above have the following status with the National Accrediting Agency for Clinical Laboratory Sciences: [ 8 ]
|*| Accredited
|**| Serious Applicant Status
|***| Submitted documentation to become accredited
Pathologists' assistants have been employed in pathology labs for over 40 years. Formal training programs slowly appeared (there were four nationwide in the late 1990s). NAACLS began accrediting PathA programs in the late 1990s, and programs then slowly continued their transitions from bachelor's to master's programs as their number increased. Prior to ASCP certification, which came about in 2005, the AAPA had a fellowship status that program-trained pathologists' assistants, or on-the-job trained (OJT) pathologists' assistants who had completed specific coursework and three years of active employment, could join only by passing a rigorous exam that parallels the current ASCP certification exam . The OJT route was eliminated at the end of 2007. The professional association uniting PAs is the American Association of Pathologists' Assistants . Part of its duties as an association is to provide continuing medical education (CME) credits in order to keep members current on advances and procedures in the field; these must be completed every three years in order to maintain ASCP certification.
The 2020 novel The Grave Below features a pathologists' assistant as a prominent character. | https://en.wikipedia.org/wiki/Pathologists'_assistant |
Pathology is the study of disease . [ 1 ] The word pathology also refers to the study of disease in general, incorporating a wide range of biology research fields and medical practices. However, when used in the context of modern medical treatment, the term is often used in a narrower fashion to refer to processes and tests that fall within the contemporary medical field of "general pathology", an area that includes a number of distinct but inter-related medical specialties that diagnose disease, mostly through analysis of tissue and human cell samples. Idiomatically, "a pathology" may also refer to the predicted or actual progression of particular diseases (as in the statement "the many different forms of cancer have diverse pathologies", in which case a more proper choice of word would be " pathophysiologies "). The suffix pathy is sometimes used to indicate a state of disease in cases of both physical ailment (as in cardiomyopathy ) and psychological conditions (such as psychopathy ). [ 2 ] A physician practicing pathology is called a pathologist .
As a field of general inquiry and research, pathology addresses components of disease: cause, mechanisms of development ( pathogenesis ), structural alterations of cells (morphologic changes), and the consequences of changes (clinical manifestations). [ 3 ] In common medical practice, general pathology is mostly concerned with analyzing known clinical abnormalities that are markers or precursors for both infectious and non-infectious disease, and is conducted by experts in one of two major specialties, anatomical pathology and clinical pathology . [ 4 ] Further divisions in specialty exist on the basis of the involved sample types (comparing, for example, cytopathology , hematopathology , and histopathology ), organs (as in renal pathology ), and physiological systems ( oral pathology ), as well as on the basis of the focus of the examination (as with forensic pathology ).
Pathology is a significant field in modern medical diagnosis and medical research .
The Latin term pathology derives from the Ancient Greek roots pathos ( πάθος ), meaning "experience" or "suffering", and -logia ( -λογία ), meaning "study of". The term is of early 16th-century origin, and became increasingly popularized after the 1530s. [ 5 ]
The study of pathology, including the detailed examination of the body, including dissection and inquiry into specific maladies, dates back to antiquity. Rudimentary understanding of many conditions was present in most early societies and is attested to in the records of the earliest historical societies , including those of the Middle East , India , and China . [ 6 ] By the Hellenic period of ancient Greece , a concerted causal study of disease was underway (see Medicine in ancient Greece ), with many notable early physicians (such as Hippocrates , for whom the modern Hippocratic Oath is named) having developed methods of diagnosis and prognosis for a number of diseases. The medical practices of the Romans and those of the Byzantines continued from these Greek roots, but, as with many areas of scientific inquiry, growth in understanding of medicine stagnated somewhat after the Classical Era , but continued to slowly develop throughout numerous cultures. Notably, many advances were made in the medieval era of Islam (see Medicine in medieval Islam ), during which numerous texts of complex pathologies were developed, also based on the Greek tradition. [ 7 ] Even so, growth in complex understanding of disease mostly languished until knowledge and experimentation again began to proliferate in the Renaissance , Enlightenment , and Baroque eras, following the resurgence of the empirical method at new centers of scholarship. By the 17th century, the study of rudimentary microscopy was underway and examination of tissues had led British Royal Society member Robert Hooke to coin the word " cell ", setting the stage for later germ theory . [ citation needed ]
Modern pathology began to develop as a distinct field of inquiry during the 19th Century through natural philosophers and physicians that studied disease and the informal study of what they termed "pathological anatomy" or "morbid anatomy". However, pathology as a formal area of specialty was not fully developed until the late 19th and early 20th centuries, with the advent of detailed study of microbiology . In the 19th century, physicians had begun to understand that disease-causing pathogens, or "germs" (a catch-all for disease-causing, or pathogenic, microbes, such as bacteria , viruses , fungi , amoebae , molds , protists , and prions ) existed and were capable of reproduction and multiplication, replacing earlier beliefs in humors or even spiritual agents, that had dominated for much of the previous 1,500 years in European medicine. With the new understanding of causative agents, physicians began to compare the characteristics of one germ's symptoms as they developed within an affected individual to another germ's characteristics and symptoms. This approach led to the foundational understanding that diseases are able to replicate themselves, and that they can have many profound and varied effects on the human host. To determine causes of diseases, medical experts used the most common and widely accepted assumptions or symptoms of their times, a general principle of approach that persists in modern medicine. [ 8 ] [ 9 ]
Modern medicine was particularly advanced by further developments of the microscope to analyze tissues, to which Rudolf Virchow gave a significant contribution, leading to a slew of research developments.
By the late 1920s to early 1930s pathology was deemed a medical specialty. [ 10 ] Combined with developments in the understanding of general physiology , by the beginning of the 20th century, the study of pathology had begun to split into a number of distinct fields, resulting in the development of a large number of modern specialties within pathology and related disciplines of diagnostic medicine . [ 11 ]
The modern practice of pathology is divided into a number of subdisciplines within the distinct but deeply interconnected aims of biological research and medical practice . Biomedical research into disease incorporates the work of a vast variety of life science specialists, whereas, in most parts of the world, to be licensed to practice pathology as a medical specialty, one has to complete medical school and secure a license to practice medicine. Structurally, the study of disease is divided into many different fields that study or diagnose markers for disease using methods and technologies particular to specific scales, organs , and tissue types.
Anatomical pathology ( Commonwealth ) or anatomic pathology ( United States ) is a medical specialty that is concerned with the diagnosis of disease based on the gross , microscopic , chemical, immunologic and molecular examination of organs, tissues, and whole bodies (as in a general examination or an autopsy ). Anatomical pathology is itself divided into subfields, the main divisions being surgical pathology , cytopathology , and forensic pathology . Anatomical pathology is one of two main divisions of the medical practice of pathology, the other being clinical pathology, the diagnosis of disease through the laboratory analysis of bodily fluids and tissues. Sometimes, pathologists practice both anatomical and clinical pathology, a combination known as general pathology. [ 4 ]
Cytopathology (sometimes referred to as "cytology") is a branch of pathology that studies and diagnoses diseases on the cellular level. It is usually used to aid in the diagnosis of cancer, but also helps in the diagnosis of certain infectious diseases and other inflammatory conditions as well as thyroid lesions, diseases involving sterile body cavities (peritoneal, pleural, and cerebrospinal), and a wide range of other body sites. Cytopathology is generally used on samples of free cells or tissue fragments (in contrast to histopathology, which studies whole tissues) and cytopathologic tests are sometimes called smear tests because the samples may be smeared across a glass microscope slide for subsequent staining and microscopic examination. However, cytology samples may be prepared in other ways, including cytocentrifugation . [ 12 ]
Dermatopathology is a subspecialty of anatomic pathology that focuses on the skin and the rest of the integumentary system as an organ. It is unique in that there are two paths a physician can take to obtain the specialization. All general pathologists and general dermatologists train in the pathology of the skin, so the term dermatopathologist denotes either of these who has reached a certain level of accreditation and experience; in the US, either a general pathologist or a dermatologist [ 13 ] can undergo a 1- to 2-year fellowship in the field of dermatopathology. The completion of this fellowship allows one to take a subspecialty board examination and become a board-certified dermatopathologist. Dermatologists are able to recognize most skin diseases based on their appearances, anatomic distributions, and behavior. Sometimes, however, those criteria do not lead to a conclusive diagnosis, and a skin biopsy is taken to be examined under the microscope using usual histological tests. In some cases, additional specialized testing needs to be performed on biopsies, including immunofluorescence, immunohistochemistry, electron microscopy, flow cytometry, and molecular-pathologic analysis. [ 14 ] One of the greatest challenges of dermatopathology is its scope. More than 1,500 different disorders of the skin exist, including cutaneous eruptions ("rashes") and neoplasms. Therefore, dermatopathologists must maintain a broad base of knowledge in clinical dermatology and be familiar with several other specialty areas in medicine. [ 15 ]
Forensic pathology focuses on determining the cause of death by post-mortem examination of a corpse or partial remains. An autopsy is typically performed by a coroner or medical examiner, often during criminal investigations; in this role, coroners and medical examiners are also frequently asked to confirm the identity of a corpse. The requirements for becoming a licensed practitioner of forensic pathology vary from country to country (and even within a given nation [ 16 ] ), but typically the minimal requirement is a medical doctorate with a specialty in general or anatomical pathology and subsequent study in forensic medicine. The methods forensic scientists use to determine death include examination of tissue specimens to identify the presence or absence of natural disease and other microscopic findings, interpretation of toxicology on body tissues and fluids to determine the chemical cause of overdoses, poisonings, or other cases involving toxic agents, and examination of physical trauma. Forensic pathology is a major component of the trans-disciplinary field of forensic science. [ citation needed ]
Histopathology refers to the microscopic examination of various forms of human tissue . Specifically, in clinical medicine, histopathology refers to the examination of a biopsy or surgical specimen by a pathologist, after the specimen has been processed and histological sections have been placed onto glass slides. [ 17 ] This contrasts with the methods of cytopathology, which uses free cells or tissue fragments. Histopathological examination of tissues starts with surgery , biopsy , or autopsy. The tissue is removed from the body of an organism and then placed in a fixative that stabilizes the tissues to prevent decay. The most common fixative is formalin , although frozen section fixing is also common. [ 18 ] To see the tissue under a microscope, the sections are stained with one or more pigments. The aim of staining is to reveal cellular components; counterstains are used to provide contrast. Histochemistry refers to the science of using chemical reactions between laboratory chemicals and components within tissue. The histological slides are then interpreted diagnostically and the resulting pathology report describes the histological findings and the opinion of the pathologist. In the case of cancer, this represents the tissue diagnosis required for most treatment protocols.
Neuropathology is the study of disease of nervous system tissue, usually in the form of either surgical biopsies or sometimes whole brains in the case of autopsy. Neuropathology is a subspecialty of anatomic pathology, neurology, and neurosurgery. In many English-speaking countries, neuropathology is considered a subfield of anatomical pathology. A physician who specializes in neuropathology, usually by completing a fellowship after a residency in anatomical or general pathology, is called a neuropathologist. In day-to-day clinical practice, a neuropathologist generates diagnoses for patients. If a disease of the nervous system is suspected, and the diagnosis cannot be made by less invasive methods, a biopsy of nervous tissue is taken from the brain or spinal cord to aid in diagnosis. Biopsy is usually requested after a mass is detected by medical imaging. With autopsies, the principal work of the neuropathologist is to help in the post-mortem diagnosis of various conditions that affect the central nervous system. Biopsies can also be taken from the skin. Epidermal nerve fiber density testing (ENFD) is a more recently developed neuropathology test in which a punch skin biopsy is taken to identify small fiber neuropathies by analyzing the nerve fibers of the skin. This test is becoming available in select labs as well as many universities; because it is less invasive, it is replacing the traditional nerve biopsy test. [ citation needed ]
Pulmonary pathology is a subspecialty of anatomic (and especially surgical) pathology that deals with the diagnosis and characterization of neoplastic and non-neoplastic diseases of the lungs and thoracic pleura. Diagnostic specimens are often obtained via bronchoscopic transbronchial biopsy, CT-guided percutaneous biopsy, or video-assisted thoracic surgery. These tests may be necessary to distinguish among infectious, inflammatory, and fibrotic conditions. [ citation needed ]
Renal pathology is a subspecialty of anatomic pathology that deals with the diagnosis and characterization of disease of the kidneys . In a medical setting, renal pathologists work closely with nephrologists and transplant surgeons , who typically obtain diagnostic specimens via percutaneous renal biopsy. The renal pathologist must synthesize findings from traditional microscope histology, electron microscopy , and immunofluorescence to obtain a definitive diagnosis. Medical renal diseases may affect the glomerulus , the tubules and interstitium , the vessels, or a combination of these compartments.
Surgical pathology is one of the primary areas of practice for most anatomical pathologists. Surgical pathology involves the gross and microscopic examination of surgical specimens, as well as biopsies submitted by surgeons and non-surgeons such as general internists , medical subspecialists , dermatologists , and interventional radiologists . Often an excised tissue sample is the best and most definitive evidence of disease (or lack thereof) in cases where tissue is surgically removed from a patient. These determinations are usually accomplished by a combination of gross (i.e., macroscopic) and histologic (i.e., microscopic) examination of the tissue, and may involve evaluations of molecular properties of the tissue by immunohistochemistry or other laboratory tests. [ citation needed ]
There are two major types of specimens submitted for surgical pathology analysis: biopsies and surgical resections. A biopsy is a small piece of tissue removed primarily for surgical pathology analysis, most often in order to render a definitive diagnosis. Types of biopsies include core biopsies, which are obtained through the use of large-bore needles, sometimes under the guidance of radiological techniques such as ultrasound , CT scan , or magnetic resonance imaging . Incisional biopsies are obtained through diagnostic surgical procedures that remove part of a suspicious lesion , whereas excisional biopsies remove the entire lesion, and are similar to therapeutic surgical resections. Excisional biopsies of skin lesions and gastrointestinal polyps are very common. The pathologist's interpretation of a biopsy is critical to establishing the diagnosis of a benign or malignant tumor, and can differentiate between different types and grades of cancer, as well as determining the activity of specific molecular pathways in the tumor. Surgical resection specimens are obtained by the therapeutic surgical removal of an entire diseased area or organ (and occasionally multiple organs). These procedures are often intended as definitive surgical treatment of a disease in which the diagnosis is already known or strongly suspected, but pathological analysis of these specimens remains important in confirming the previous diagnosis. [ citation needed ]
Clinical pathology is a medical specialty that is concerned with the diagnosis of disease based on the laboratory analysis of bodily fluids such as blood and urine , as well as tissues, using the tools of chemistry , clinical microbiology , hematology and molecular pathology. Clinical pathologists work in close collaboration with medical technologists , hospital administrations, and referring physicians. Clinical pathologists learn to administer a number of visual and microscopic tests and an especially large variety of tests of the biophysical properties of tissue samples involving automated analysers and cultures . Sometimes the general term "laboratory medicine specialist" is used to refer to those working in clinical pathology, including medical doctors, Ph.D.s and doctors of pharmacology. [ 19 ] Immunopathology , the study of an organism's immune response to infection, is sometimes considered to fall within the domain of clinical pathology. [ 20 ]
Hematopathology is the study of diseases of blood cells (including constituents such as white blood cells, red blood cells, and platelets) and of the tissues and organs comprising the hematopoietic system. The term hematopoietic system refers to tissues and organs that produce and/or primarily host hematopoietic cells and includes bone marrow, the lymph nodes, thymus, spleen, and other lymphoid tissues. In the United States, hematopathology is a board-certified subspecialty (licensed under the American Board of Pathology) practiced by those physicians who have completed a general pathology residency (anatomic, clinical, or combined) and an additional year of fellowship training in hematopathology. The hematopathologist reviews biopsies of lymph nodes, bone marrows, and other tissues involved by an infiltrate of cells of the hematopoietic system. In addition, the hematopathologist may be in charge of flow cytometric and/or molecular hematopathology studies. [ citation needed ]
Molecular pathology is focused upon the study and diagnosis of disease through the examination of molecules within organs, tissues, or bodily fluids. [ 21 ] Molecular pathology is multidisciplinary by nature and shares some aspects of practice with both anatomic pathology and clinical pathology, molecular biology, biochemistry, proteomics, and genetics. It is often applied in a context that is as much scientific as directly medical and encompasses the development of molecular and genetic approaches to the diagnosis and classification of human diseases, the design and validation of predictive biomarkers for treatment response and disease progression, and the susceptibility of individuals of different genetic constitution to particular disorders. The crossover between molecular pathology and epidemiology is represented by a related field, "molecular pathological epidemiology". [ 22 ] Molecular pathology is commonly used in the diagnosis of cancer and infectious diseases, including cancers such as melanoma, brainstem glioma, and other brain tumors. [ 23 ] Techniques are numerous but include quantitative polymerase chain reaction (qPCR), multiplex PCR, DNA microarray, in situ hybridization, DNA sequencing, antibody-based immunofluorescence tissue assays, molecular profiling of pathogens, and analysis of bacterial genes for antimicrobial resistance. [ 24 ] These techniques are based on analyzing samples of DNA and RNA. Molecular pathology is also widely used in gene therapy and disease diagnosis. [ 25 ]
Oral and Maxillofacial Pathology is one of nine dental specialties recognized by the American Dental Association , and is sometimes considered a specialty of both dentistry and pathology. [ 26 ] Oral Pathologists must complete three years of post doctoral training in an accredited program and subsequently obtain diplomate status from the American Board of Oral and Maxillofacial Pathology. The specialty focuses on the diagnosis, clinical management and investigation of diseases that affect the oral cavity and surrounding maxillofacial structures including but not limited to odontogenic , infectious, epithelial , salivary gland , bone and soft tissue pathologies. It also significantly intersects with the field of dental pathology . Although concerned with a broad variety of diseases of the oral cavity, they have roles distinct from otorhinolaryngologists ("ear, nose, and throat" specialists), and speech pathologists , the latter of which helps diagnose many neurological or neuromuscular conditions relevant to speech phonology or swallowing . Owing to the availability of the oral cavity to non-invasive examination, many conditions in the study of oral disease can be diagnosed, or at least suspected, from gross examination, but biopsies, cell smears, and other tissue analysis remain important diagnostic tools in oral pathology. [ citation needed ]
Becoming a pathologist generally requires specialty training after medical school, but individual nations vary somewhat in the medical licensing required of pathologists. In the United States, pathologists are physicians (D.O. or M.D.) who have completed a four-year undergraduate program, four years of medical school training, and three to four years of postgraduate training in the form of a pathology residency. Training may be within two primary specialties, as recognized by the American Board of Pathology: anatomical pathology and clinical pathology, each of which requires separate board certification. The American Osteopathic Board of Pathology also recognizes four primary specialties: anatomic pathology, dermatopathology, forensic pathology, and laboratory medicine. Pathologists may pursue specialized fellowship training within one or more subspecialties of either anatomical or clinical pathology. Some of these subspecialties permit additional board certification, while others do not. [ 27 ]
In the United Kingdom, pathologists are physicians licensed by the UK General Medical Council . The training to become a pathologist is under the oversight of the Royal College of Pathologists . After four to six years of undergraduate medical study, trainees proceed to a two-year foundation program. Full-time training in histopathology currently lasts between five and five and a half years and includes specialist training in surgical pathology, cytopathology, and autopsy pathology. It is also possible to take a Royal College of Pathologists diploma in forensic pathology, dermatopathology, or cytopathology, recognising additional specialist training and expertise and to get specialist accreditation in forensic pathology, pediatric pathology , and neuropathology. All postgraduate medical training and education in the UK is overseen by the General Medical Council.
In France, pathology is separated into two distinct specialties: anatomical pathology and clinical pathology. Residencies for both last four years. Residency in anatomical pathology is open to physicians only, while clinical pathology is open to both physicians and pharmacists. At the end of the second year of clinical pathology residency, residents can choose between general clinical pathology and a specialization in one of the disciplines, but they cannot practice anatomical pathology, nor can anatomical pathology residents practice clinical pathology. [ 20 ] [ 28 ]
Though separate fields in terms of medical practice, a number of areas of inquiry in medicine and medical science either overlap greatly with general pathology, work in tandem with it, or contribute significantly to the understanding of the pathology of a given disease or its course in an individual. As a significant portion of all general pathology practice is concerned with cancer, the practice of oncology makes extensive use of both anatomical and clinical pathology in diagnosis and treatment. [ 29 ] In particular, biopsy, resection, and blood tests are all examples of pathology work that is essential for the diagnosis of many kinds of cancer and for the staging of cancerous masses. In a similar fashion, the tissue and blood analysis techniques of general pathology are of central significance to the investigation of serious infectious disease and as such inform significantly upon the fields of epidemiology, etiology, immunology, and parasitology. General pathology methods are of great importance to biomedical research into disease, wherein they are sometimes referred to as "experimental" or "investigative" pathology. [ citation needed ]
Medical imaging is the generation of visual representations of the interior of a body for clinical analysis and medical intervention. Medical imaging reveals details of internal physiology that help medical professionals plan appropriate treatments for tissue infection and trauma. Medical imaging is also central in supplying the biometric data necessary to establish baseline features of anatomy and physiology so as to increase the accuracy with which early or fine-detail abnormalities are detected. These diagnostic techniques are often performed in combination with general pathology procedures and are themselves often essential to developing new understanding of the pathogenesis of a given disease and tracking the progress of disease in specific medical cases. Examples of important subdivisions in medical imaging include radiology (which uses the imaging technologies of X-ray radiography), magnetic resonance imaging, medical ultrasonography (or ultrasound), endoscopy, elastography, tactile imaging, thermography, medical photography, nuclear medicine, and functional imaging techniques such as positron emission tomography. Though they do not strictly relay images, readings from diagnostic tests involving electroencephalography, magnetoencephalography, and electrocardiography often give hints as to the state and function of certain tissues in the brain and heart respectively.
Pathology informatics is a subfield of health informatics concerned with the use of information technology in pathology. It encompasses pathology laboratory operations, data analysis, and the interpretation of pathology-related information.
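As a purely illustrative sketch of the kind of data handling this field encompasses, the short Python example below defines a hypothetical pathology report record and a trivial analysis step; the field names, class, and function are assumptions made for illustration only, not an actual laboratory information system schema or standard.

    from dataclasses import dataclass
    from collections import Counter

    @dataclass
    class PathologyReport:
        # Hypothetical, simplified record of a laboratory result; real laboratory
        # information systems use far richer, standardized schemas.
        specimen_id: str
        specimen_type: str   # e.g. "biopsy", "blood"
        diagnosis: str

    def diagnosis_counts(reports: list[PathologyReport]) -> Counter:
        # A toy "data analysis" step: tally diagnoses across a batch of reports.
        return Counter(r.diagnosis for r in reports)

    reports = [
        PathologyReport("S-001", "biopsy", "benign nevus"),
        PathologyReport("S-002", "biopsy", "melanoma"),
        PathologyReport("S-003", "blood", "melanoma"),
    ]
    print(diagnosis_counts(reports))  # Counter({'melanoma': 2, 'benign nevus': 1})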
Psychopathology is the study of mental illness, particularly of severe disorders. Informed heavily by both psychology and neurology, its purpose is to classify mental illness, elucidate its underlying causes, and guide clinical psychiatric treatment accordingly. Although diagnosis and classification of mental norms and disorders is largely the purview of psychiatry—the results of which are guidelines such as the Diagnostic and Statistical Manual of Mental Disorders, which attempt to classify mental disease mostly on behavioural evidence, though not without controversy [ 30 ] [ 31 ] [ 32 ] —the field is also heavily, and increasingly, informed by neuroscience and the other biological cognitive sciences. Mental or social disorders or behaviours seen as generally unhealthy or excessive in a given individual, to the point where they cause harm or severe disruption to the person's lifestyle, are often called "pathological" (e.g., pathological gambling or pathological lying).
Although the vast majority of lab work and research in pathology concerns the development of disease in humans, pathology is of significance throughout the biological sciences. Two main catch-all fields exist to represent most complex organisms capable of serving as host to a pathogen or other form of disease: veterinary pathology (concerned with all non-human species of kingdom of Animalia ) and phytopathology , which studies disease in plants.
Veterinary pathology covers a vast array of species, but with a significantly smaller number of practitioners, so understanding of disease in non-human animals, especially as regards veterinary practice , varies considerably by species. Nevertheless, significant amounts of pathology research are conducted on animals, for two primary reasons: 1) The origins of diseases are typically zoonotic in nature, and many infectious pathogens have animal vectors and, as such, understanding the mechanisms of action for these pathogens in non-human hosts is essential to the understanding and application of epidemiology and 2) those animals that share physiological and genetic traits with humans can be used as surrogates for the study of the disease and potential treatments [ 33 ] as well as the effects of various synthetic products. For this reason, as well as their roles as livestock and companion animals , mammals generally have the largest body of research in veterinary pathology. Animal testing remains a controversial practice, even in cases where it is used to research treatment for human disease. [ 34 ] As in human medical pathology, the practice of veterinary pathology is customarily divided into the two main fields of anatomical and clinical pathology.
Although the pathogens and their mechanics differ greatly from those of animals, plants are subject to a wide variety of diseases, including those caused by fungi, oomycetes, bacteria, viruses, viroids, virus-like organisms, phytoplasmas, protozoa, nematodes, and parasitic plants. Damage caused by insects, mites, vertebrates, and other small herbivores is not considered a part of the domain of plant pathology. The field is connected to plant disease epidemiology and is especially concerned with the horticulture of species that are of high importance to the human diet or other human utility. | https://en.wikipedia.org/wiki/Pathology |
Pathophysiology (or physiopathology ) is a branch of study, at the intersection of pathology and physiology , concerning disordered physiological processes that cause, result from, or are otherwise associated with a disease or injury . Pathology is the medical discipline that describes conditions typically observed during a disease state, whereas physiology is the biological discipline that describes processes or mechanisms operating within an organism . Pathology describes the abnormal or undesired condition (symptoms of a disease), whereas pathophysiology seeks to explain the functional changes that are occurring within an individual due to a disease or pathologic state. [ 1 ]
The term pathophysiology comes from the Ancient Greek πάθος ( pathos ) and φυσιολογία ( physiologia ).
The origins of pathophysiology as a distinct field date back to the late 18th century. The first known lectures on the subject were delivered by Professor August Friedrich Hecker [ de ] at the University of Erfurt in 1790, and in 1791, he published the first textbook on pathophysiology, Grundriss der Physiologia pathologica , [ 2 ] spanning 770 pages. [ 3 ] Hecker also established the first academic journal in the field, Magazin für die pathologische Anatomie und Physiologie , in 1796. [ 4 ] The French physician Jean François Fernel had earlier suggested in 1542 that a distinct branch of physiology should study the functions of diseased organisms, an idea further developed by Jean Varandal [ de ] in 1617, who first coined the term "pathologic physiology" in a medical text. [ 4 ]
In Germany in the 1830s, Johannes Müller led the establishment of physiology research autonomous from medical research. In 1843, the Berlin Physical Society was founded in part to purge biology and medicine of vitalism , and in 1847 Hermann von Helmholtz , who joined the Society in 1845, published the paper "On the conservation of energy", highly influential to reduce physiology's research foundation to physical sciences. In the late 1850s, German anatomical pathologist Rudolf Virchow , a former student of Müller, directed focus to the cell, establishing cytology as the focus of physiological research. He also recognized pathophysiology as a distinct discipline, arguing that it should rely on clinical observation and experimentation rather than purely anatomical pathology. [ 4 ] Virchow’s influence extended to his student Julius Cohnheim , who pioneered experimental pathology and the usage of intravital microscopy , further advancing the study of pathophysiology. [ 4 ]
By 1863, motivated by Louis Pasteur's report on fermentation to butyric acid, fellow Frenchman Casimir Davaine identified a microorganism as the crucial causal agent of the cattle disease anthrax, but because it routinely vanished from the blood, other scientists inferred that it was a mere byproduct of putrefaction. [ 5 ] In 1876, upon Ferdinand Cohn's report of a tiny spore stage of a bacterial species, the fellow German Robert Koch isolated Davaine's bacterides in pure culture—a pivotal step that would establish bacteriology as a distinct discipline—identified a spore stage, applied Jakob Henle's postulates, and confirmed Davaine's conclusion, a major feat for experimental pathology. Pasteur and colleagues followed up with ecological investigations confirming the microorganism's role in the natural environment via spores in soil.
Also, as to sepsis , Davaine had injected rabbits with a highly diluted, tiny amount of putrid blood, duplicated disease, and used the term ferment of putrefaction , but it was unclear whether this referred as did Pasteur's term ferment to a microorganism or, as it did for many others, to a chemical. [ 6 ] In 1878, Koch published Aetiology of Traumatic Infective Diseases , unlike any previous work, where in 80 pages Koch, as noted by a historian, "was able to show, in a manner practically conclusive, that a number of diseases, differing clinically, anatomically, and in aetiology , can be produced experimentally by the injection of putrid materials into animals." [ 6 ] Koch used bacteriology and the new staining methods with aniline dyes to identify particular microorganisms for each. [ 6 ] Germ theory of disease crystallized the concept of cause—presumably identifiable by scientific investigation. [ 7 ]
The American physician William Welch trained in German pathology from 1876 to 1878, including under Cohnheim , and opened America's first scientific laboratory —a pathology laboratory— at Bellevue Hospital in New York City in 1878. [ 8 ] Welch's course drew enrollment from students at other medical schools, which responded by opening their own pathology laboratories. [ 8 ] Once appointed by Daniel Coit Gilman , upon advice by John Shaw Billings , as founding dean of the medical school of the newly forming Johns Hopkins University that Gilman, as its first president, was planning, Welch traveled again to Germany for training in Koch's bacteriology in 1883. [ 8 ] Welch returned to America but moved to Baltimore, eager to overhaul American medicine, while blending Virchow's anatomical pathology, Cohnheim's experimental pathology, and Koch's bacteriology. [ 9 ] Hopkins medical school, led by the "Four Horsemen" —Welch, William Osler , Howard Kelly , and William Halsted — opened at last in 1893 as America's first medical school devoted to teaching German scientific medicine, so called. [ 8 ]
The first biomedical institutes, Pasteur Institute and Berlin Institute for Infectious Diseases , whose first directors were Pasteur and Koch , were founded in 1888 and 1891, respectively. America's first biomedical institute, The Rockefeller Institute for Medical Research , was founded in 1901 with Welch, nicknamed "dean of American medicine", as its scientific director, who appointed his former Hopkins student Simon Flexner as director of pathology and bacteriology laboratories. By way of World War I and World War II , Rockefeller Institute became the globe's leader in biomedical research. [ citation needed ]
The 1918 influenza pandemic triggered a frenzied search for its cause, although most deaths were via lobar pneumonia, already attributed to pneumococcal invasion. In London, Fred Griffith, a pathologist with the Ministry of Health, reported in 1928 pneumococcal transformation from virulent to avirulent and between antigenic types—nearly a switch in species—challenging pneumonia's specific causation. [ 10 ] [ 11 ] The laboratory of Rockefeller Institute's Oswald Avery, America's leading pneumococcal expert, was so troubled by the report that its members refused to attempt repetition. [ 12 ]
When Avery was away on summer vacation, Martin Dawson , British-Canadian, convinced that anything from England must be correct, repeated Griffith's results, then achieved transformation in vitro , too, opening it to precise investigation. [ 12 ] Having returned, Avery kept a photo of Griffith on his desk while his researchers followed the trail. In 1944, Avery, Colin MacLeod , and Maclyn McCarty reported the transformation factor as DNA , widely doubted amid estimations that something must act with it. [ 13 ] At the time of Griffith's report, it was unrecognized that bacteria even had genes. [ 14 ]
The first genetics, Mendelian genetics, began in 1900, yet inheritance of Mendelian traits was localized to chromosomes by 1903, thus becoming chromosomal genetics. Biochemistry emerged in the same decade. [ 15 ] In the 1940s, most scientists viewed the cell as a "sack of chemicals"—a membrane containing only loose molecules in chaotic motion—with the only distinct cell structures being chromosomes, which bacteria lack as such. [ 15 ] Chromosomal DNA was presumed too simple, so genes were sought in chromosomal proteins. Yet in 1953, American biologist James Watson, British physicist Francis Crick, and British chemist Rosalind Franklin inferred DNA's molecular structure—a double helix—and conjectured that it spelled a code. In the early 1960s, Crick helped crack a genetic code in DNA, thus establishing molecular genetics.
In the late 1930s, Rockefeller Foundation had spearheaded and funded the molecular biology research program —seeking fundamental explanation of organisms and life— led largely by physicist Max Delbrück at Caltech and Vanderbilt University . [ 16 ] Yet the reality of organelles in cells was controversial amid unclear visualization with conventional light microscopy . [ 15 ] Around 1940, largely via cancer research at Rockefeller Institute, cell biology emerged as a new discipline filling the vast gap between cytology and biochemistry by applying new technology — ultracentrifuge and electron microscope — to identify and deconstruct cell structures, functions, and mechanisms. [ 15 ] The two new sciences interlaced, cell and molecular biology . [ 15 ]
Mindful of Griffith and Avery, Joshua Lederberg confirmed bacterial conjugation—reported decades earlier but controversial—and was awarded the 1958 Nobel Prize in Physiology or Medicine. [ 17 ] At Cold Spring Harbor Laboratory in Long Island, New York, Delbrück and Salvador Luria led the Phage Group—hosting Watson—discovering details of cell physiology by tracking changes to bacteria upon infection with their viruses, a process termed transduction. Lederberg led the opening of a genetics department at Stanford University's medical school and facilitated greater communication between biologists and medical departments. [ 17 ]
In the 1950s, research on rheumatic fever, a complication of streptococcal infections, revealed that it was mediated by the host's own immune response, stirring investigation by the pathologist Lewis Thomas that led to the identification of enzymes, released by macrophages of the innate immune system, that degrade host tissue. [ 18 ] In the late 1970s, as president of Memorial Sloan–Kettering Cancer Center, Thomas collaborated with Lederberg, soon to become president of Rockefeller University, to redirect the funding focus of the US National Institutes of Health toward basic research into the mechanisms operating during disease processes, of which medical scientists at the time were all but wholly ignorant, as biologists had scarcely taken interest in disease mechanisms. [ 19 ] Thomas became a patron saint for American basic researchers. [ 20 ]
The pathophysiology of Parkinson's disease is death of dopaminergic neurons as a result of changes in biological activity in the brain with respect to Parkinson's disease (PD). There are several proposed mechanisms for neuronal death in PD; however, not all of them are well understood. Five proposed major mechanisms for neuronal death in Parkinson's Disease include protein aggregation in Lewy bodies , disruption of autophagy , changes in cell metabolism or mitochondrial function, neuroinflammation , and blood–brain barrier (BBB) breakdown resulting in vascular leakiness. [ 21 ]
The pathophysiology of heart failure is a reduction in the efficiency of the heart muscle, through damage or overloading. As such, it can be caused by a wide number of conditions, including myocardial infarction (in which the heart muscle is starved of oxygen and dies), hypertension (which increases the force of contraction needed to pump blood) and amyloidosis (in which misfolded proteins are deposited in the heart muscle, causing it to stiffen). Over time these increases in workload will produce changes to the heart itself.
The pathophysiology of multiple sclerosis is that of an inflammatory demyelinating disease of the CNS in which activated immune cells invade the central nervous system and cause inflammation, neurodegeneration, and tissue damage. The underlying condition that produces this behaviour is currently unknown. Current research in neuropathology, neuroimmunology, neurobiology, and neuroimaging, together with clinical neurology, provides support for the notion that MS is not a single disease but rather a spectrum. [ 22 ]
The pathophysiology of hypertension is that of a chronic disease characterized by elevation of blood pressure . Hypertension can be classified by cause as either essential (also known as primary or idiopathic ) or secondary . About 90–95% of hypertension is essential hypertension. [ 23 ] [ 24 ] [ 25 ] [ 26 ]
The pathophysiology of HIV/AIDS involves, upon acquisition of the virus, viral replication inside and killing of T helper cells, which are required for almost all adaptive immune responses. There is an initial period of influenza-like illness, and then a latent, asymptomatic phase. When the CD4 lymphocyte count falls below 200 cells per µL of blood, the HIV host has progressed to AIDS, [ 27 ] a condition characterized by deficiency in cell-mediated immunity and the resulting increased susceptibility to opportunistic infections and certain forms of cancer.
The pathophysiology of spider bites is due to the effect of the venom. A spider envenomation occurs whenever a spider injects venom into the skin. Not all spider bites inject venom (a bite without venom is known as a dry bite), and the amount of venom injected can vary based on the type of spider and the circumstances of the encounter. The mechanical injury from a spider bite is not a serious concern for humans.
The pathophysiology of obesity encompasses many possible mechanisms involved in its development and maintenance. [ 28 ] [ 29 ]
This field of research had been almost unapproached until the leptin gene was discovered in 1994 by J. M. Friedman's laboratory. [ 30 ] These investigators postulated that leptin was a satiety factor. In the ob/ob mouse, mutations in the leptin gene resulted in the obese phenotype, opening the possibility of leptin therapy for human obesity. However, soon thereafter J. F. Caro's laboratory could not detect any mutations in the leptin gene in humans with obesity. On the contrary, leptin expression was increased, suggesting the possibility of leptin resistance in human obesity. [ 31 ] | https://en.wikipedia.org/wiki/Pathophysiology |
A pathovar is a bacterial strain or set of strains with the same or similar characteristics that is differentiated at the infrasubspecific level from other strains of the same species or subspecies on the basis of distinctive pathogenicity to one or more plant hosts.
Pathovars are named as a ternary or quaternary addition to the species binomial name. For example, the bacterium that causes citrus canker, Xanthomonas axonopodis, has several pathovars with different host ranges; X. axonopodis pv. citri is one of them. The abbreviation 'pv.' means pathovar.
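As a rough illustration of the naming convention described above, the following minimal Python sketch assembles a ternary pathovar designation from its genus, species epithet, and pathovar parts; the helper function is hypothetical and not part of any nomenclature standard or library.

    def format_pathovar(genus: str, species: str, pathovar: str, abbreviate_genus: bool = False) -> str:
        # Build a ternary name of the form "Genus species pv. pathovar".
        # The 'pv.' infix marks the infrasubspecific pathovar epithet.
        genus_part = genus[0].upper() + "." if abbreviate_genus else genus.capitalize()
        return f"{genus_part} {species.lower()} pv. {pathovar.lower()}"

    # Example: the citrus canker pathogen mentioned above.
    print(format_pathovar("Xanthomonas", "axonopodis", "citri"))        # Xanthomonas axonopodis pv. citri
    print(format_pathovar("Xanthomonas", "axonopodis", "citri", True))  # X. axonopodis pv. citri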
The type strains of pathovars are pathotypes, which are distinguished from the types (holotype, neotype, etc.) of the species to which the pathovar belongs. [ 1 ] | https://en.wikipedia.org/wiki/Pathovar |
Pathway Commons is a database of biological pathways and interactions. [ 1 ] | https://en.wikipedia.org/wiki/Pathway_Commons |