The optical transfer function (OTF) of an optical system such as a camera, microscope, human eye, or projector is a scale-dependent description of its imaging contrast. Its magnitude is the image contrast of the harmonic intensity pattern, $1 + \cos(2\pi \nu \cdot x)$, as a function of the spatial frequency, $\nu$, while its complex argument indicates a phase shift in the periodic pattern. The optical transfer function is used by optical engineers to describe how the optics project light from the object or scene onto a photographic film, detector array, retina, screen, or simply the next item in the optical transmission chain.
Formally, the optical transfer function is defined as the Fourier transform of the point spread function (PSF, that is, the impulse response of the optics, the image of a point source). As a Fourier transform, the OTF is generally complex-valued; however, it is real-valued in the common case of a PSF that is symmetric about its center. In practice, the imaging contrast, as given by the magnitude or modulus of the optical transfer function, is of primary importance. This derived function is commonly referred to as the modulation transfer function (MTF).
The image on the right shows the optical transfer functions for two different optical systems in panels (a) and (d). The former corresponds to the ideal, diffraction-limited imaging system with a circular pupil. Its transfer function decreases gradually with spatial frequency until it reaches the diffraction limit, in this case at 500 cycles per millimeter, or a period of 2 μm. Since periodic features as small as this period are captured by this imaging system, it could be said that its resolution is 2 μm. Panel (d) shows an optical system that is out of focus. This leads to a sharp reduction in contrast compared to the diffraction-limited imaging system. It can be seen that the contrast is zero around 250 cycles/mm, or periods of 4 μm. This explains why the images for the out-of-focus system (e,f) are more blurry than those of the diffraction-limited system (b,c). Note that although the out-of-focus system has very low contrast at spatial frequencies around 250 cycles/mm, the contrast at spatial frequencies near the diffraction limit of 500 cycles/mm is diffraction-limited. Close observation of the image in panel (f) shows that the image of the large spoke densities near the center of the spoke target is relatively sharp.
== Definition and related concepts ==
Since the optical transfer function (OTF) is defined as the Fourier transform of the point-spread function (PSF), it is generally speaking a complex-valued function of spatial frequency. The projection of a specific periodic pattern is represented by a complex number whose absolute value and complex argument are proportional to the relative contrast and translation of the projected pattern, respectively.
Often the contrast reduction is of most interest and the translation of the pattern can be ignored. The relative contrast is given by the absolute value of the optical transfer function, a function commonly referred to as the modulation transfer function (MTF). Its values indicate how much of the object's contrast is captured in the image as a function of spatial frequency. The MTF tends to decrease with increasing spatial frequency from 1 to 0 (at the diffraction limit); however, the function is often not monotonic. On the other hand, when the pattern translation is also important, the complex argument of the optical transfer function can be depicted as a second real-valued function, commonly referred to as the phase transfer function (PhTF). The complex-valued optical transfer function can be seen as a combination of these two real-valued functions:
$$\mathrm{OTF}(\nu) = \mathrm{MTF}(\nu)\, e^{i\,\mathrm{PhTF}(\nu)}$$
where
$$\mathrm{MTF}(\nu) = \left|\mathrm{OTF}(\nu)\right|,$$
$$\mathrm{PhTF}(\nu) = \arg(\mathrm{OTF}(\nu)),$$
and
$\arg(\cdot)$ represents the complex argument function, while $\nu$ is the spatial frequency of the periodic pattern. In general, $\nu$ is a vector with a spatial frequency for each dimension, i.e., it also indicates the direction of the periodic pattern.
The impulse response of a well-focused optical system is a three-dimensional intensity distribution with a maximum at the focal plane, and could thus be measured by recording a stack of images while displacing the detector axially. Consequently, the three-dimensional optical transfer function can be defined as the three-dimensional Fourier transform of the impulse response. Although typically only a one-dimensional, or sometimes a two-dimensional, section is used, the three-dimensional optical transfer function can improve the understanding of microscopes such as the structured illumination microscope.
True to the definition of a transfer function, $\mathrm{OTF}(0) = \mathrm{MTF}(0)$ should indicate the fraction of light that was detected from the point source object. However, typically the contrast relative to the total amount of detected light is most important. It is thus common practice to normalize the optical transfer function to the detected intensity, hence $\mathrm{MTF}(0) \equiv 1$.
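As a concrete illustration of the definitions above, the following minimal NumPy sketch computes the OTF of a sampled PSF by discrete Fourier transform, normalizes it so that MTF(0) = 1, and splits it into MTF and PhTF. The Gaussian PSF, its off-center position, and the sampling grid are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Illustrative 1D PSF: a Gaussian, deliberately off-center so that the
# translation shows up as a phase ramp in the PhTF (assumed example).
x = np.linspace(-50.0, 50.0, 1024)           # sample positions (micrometers)
psf = np.exp(-((x - 2.0) ** 2) / (2 * 4.0 ** 2))

otf = np.fft.fft(psf)                        # OTF = Fourier transform of the PSF
otf /= otf[0]                                # normalize to detected intensity: MTF(0) == 1

mtf = np.abs(otf)                            # modulation transfer function
phtf = np.angle(otf)                         # phase transfer function
nu = np.fft.fftfreq(x.size, d=x[1] - x[0])   # spatial frequencies (cycles/µm)

# By construction, otf == mtf * exp(1j * phtf) holds at every frequency.
```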
Generally, the optical transfer function depends on factors such as the spectrum and polarization of the emitted light and the position of the point source. For example, the image contrast and resolution are typically optimal at the center of the image, and deteriorate toward the edges of the field-of-view. When significant variation occurs, the optical transfer function may be calculated for a set of representative positions or colors.
Sometimes it is more practical to define the transfer functions based on a binary black-white stripe pattern. The transfer function for an equal-width black-white periodic pattern is referred to as the contrast transfer function (CTF).
== Examples ==
=== Ideal lens system ===
A perfect lens system will provide a high-contrast projection without shifting the periodic pattern, hence the optical transfer function is identical to the modulation transfer function. Typically the contrast will reduce gradually towards zero at a point defined by the resolution of the optics. For example, a perfect, non-aberrated, f/4 optical imaging system used at the visible wavelength of 500 nm would have the optical transfer function depicted in the right-hand figure.
It can be read from the plot that the contrast gradually reduces and reaches zero at the spatial frequency of 500 cycles per millimeter; in other words, the optical resolution of the image projection is 1/500th of a millimeter, or 2 micrometers. Correspondingly, for this particular imaging device, the spokes become more and more blurred towards the center until they merge into a gray, unresolved disc.
Note that sometimes the optical transfer function is given in units of the object or sample space, observation angle, film width, or normalized to the theoretical maximum. Conversion between units is typically a matter of a multiplication or division. For example, a microscope typically magnifies everything 10 to 100-fold, and a reflex camera will generally demagnify objects at a distance of 5 meters by a factor of 100 to 200.
The resolution of a digital imaging device is not only limited by the optics, but also by the number of pixels, in particular by their separation distance. As explained by the Nyquist–Shannon sampling theorem, to match the optical resolution of the given example, the pixels of each color channel should be separated by 1 micrometer, half the period of 500 cycles per millimeter. A higher number of pixels on the same sensor size will not allow the resolution of finer detail. On the other hand, when the pixel spacing is larger than 1 micrometer, the resolution will be limited by the separation between pixels; moreover, aliasing may lead to a further reduction of the image fidelity.
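In equation form, the sampling requirement above follows directly from the Nyquist criterion; for a cut-off frequency $\nu_{\max}$ of 500 cycles per millimeter, the pixel pitch $\Delta x$ must satisfy

$$\Delta x \le \frac{1}{2\nu_{\max}} = \frac{1}{2 \times 500\ \text{mm}^{-1}} = 1\ \mu\text{m}.$$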
=== Imperfect lens system ===
An imperfect, aberrated imaging system could possess the optical transfer function depicted in the following figure.
As with the ideal lens system, the contrast reaches zero at the spatial frequency of 500 cycles per millimeter. However, at lower spatial frequencies the contrast is considerably lower than that of the perfect system in the previous example. In fact, the contrast becomes zero on several occasions even for spatial frequencies lower than 500 cycles per millimeter. This explains the gray circular bands in the spoke image shown in the above figure. In between the gray bands, the spokes appear to invert from black to white and vice versa; this is referred to as contrast inversion, which is directly related to a sign reversal in the real part of the optical transfer function, and manifests itself as a shift by half a period for some periodic patterns.
While it could be argued that the resolution of both the ideal and the imperfect system is 2 μm, or 500 LP/mm, it is clear that the images of the latter example are less sharp. A definition of resolution that is more in line with the perceived quality would instead use the spatial frequency at which the first zero occurs: 100 LP/mm, corresponding to a period of 10 μm. Definitions of resolution, even for perfect imaging systems, vary widely. A more complete, unambiguous picture is provided by the optical transfer function.
=== Optical system with a non-rotational symmetric aberration ===
Optical systems, and in particular optical aberrations, are not always rotationally symmetric. Periodic patterns that have a different orientation can thus be imaged with different contrast even if their periodicity is the same. Optical transfer functions and modulation transfer functions are thus generally two-dimensional functions. The following figures show the two-dimensional equivalent of the ideal and the imperfect system discussed earlier, for an optical system with trefoil, a non-rotational-symmetric aberration.
Optical transfer functions are not always real-valued. Periodic patterns can be shifted by any amount, depending on the aberration in the system. This is generally the case with non-rotational-symmetric aberrations. The hue of the colors of the surface plots in the above figure indicates the phase. It can be seen that, while for the rotationally symmetric aberrations the phase is either 0 or π and thus the transfer function is real-valued, for the non-rotational-symmetric aberration the transfer function has an imaginary component and the phase varies continuously.
=== Practical example – high-definition video system ===
While optical resolution, as commonly used with reference to camera systems, describes only the number of pixels in an image, and hence the potential to show fine detail, the transfer function describes the ability of adjacent pixels to change from black to white in response to patterns of varying spatial frequency, and hence the actual capability to show fine detail, whether with full or reduced contrast. An image reproduced with an optical transfer function that 'rolls off' at high spatial frequencies will appear 'blurred' in everyday language.
Taking the example of a current high-definition (HD) video system with 1920 by 1080 pixels, the Nyquist theorem states that it should be possible, in a perfect system, to resolve fully (with true black-to-white transitions) a total of 1920 black and white alternating lines combined, otherwise referred to as a spatial frequency of 1920/2 = 960 line pairs per picture width, or 960 cycles per picture width. (Definitions in terms of cycles per unit angle or per mm are also possible but generally less clear when dealing with cameras and more appropriate to telescopes and the like.) In practice, this is far from the case, and spatial frequencies that approach the Nyquist rate will generally be reproduced with decreasing amplitude, so that fine detail, though it can be seen, is greatly reduced in contrast. This gives rise to the interesting observation that, for example, a standard-definition television picture derived from a film scanner that uses oversampling, as described later, may appear sharper than a high-definition picture shot on a camera with a poor modulation transfer function. The two pictures show an interesting difference that is often missed: the former has full contrast on detail up to a certain point but then no really fine detail, while the latter does contain finer detail, but with such reduced contrast as to appear inferior overall.
== The three-dimensional optical transfer function ==
Although one typically thinks of an image as planar, or two-dimensional, the imaging system will produce a three-dimensional intensity distribution in image space that in principle can be measured. For example, a two-dimensional sensor could be translated to capture a three-dimensional intensity distribution. The image of a point source is also a three-dimensional (3D) intensity distribution which can be represented by a 3D point-spread function. As an example, the figure on the right shows the 3D point-spread function in object space of a wide-field microscope (a) alongside that of a confocal microscope (c). Although the same microscope objective with a numerical aperture of 1.49 is used, it is clear that the confocal point spread function is more compact both in the lateral dimensions (x,y) and the axial dimension (z). One could rightly conclude that the resolution of a confocal microscope is superior to that of a wide-field microscope in all three dimensions.
A three-dimensional optical transfer function can be calculated as the three-dimensional Fourier transform of the 3D point-spread function. Its color-coded magnitude is plotted in panels (b) and (d), corresponding to the point-spread functions shown in panels (a) and (c), respectively. The transfer function of the wide-field microscope has a support that is half of that of the confocal microscope in all three dimensions, confirming the previously noted lower resolution of the wide-field microscope. Note that along the z-axis, for x = y = 0, the transfer function is zero everywhere except at the origin. This missing cone is a well-known problem that prevents optical sectioning using a wide-field microscope.
The two-dimensional optical transfer function at the focal plane can be calculated by integration of the 3D optical transfer function along the z-axis. Although the 3D transfer function of the wide-field microscope (b) is zero on the z-axis for z ≠ 0, its integral, the 2D optical transfer function, reaches a maximum at x = y = 0. This is only possible because the 3D optical transfer function diverges at the origin x = y = z = 0. The function values along the z-axis of the 3D optical transfer function correspond to the Dirac delta function.
== Calculation ==
Most optical design software has functionality to compute the optical or modulation transfer function of a lens design. Ideal systems such as in the examples here are readily calculated numerically using software such as Julia, GNU Octave or Matlab, and in some specific cases even analytically. The optical transfer function can be calculated following two approaches:
as the Fourier transform of the incoherent point spread function, or
as the auto-correlation of the pupil function of the optical system
Mathematically both approaches are equivalent. Numeric calculations are typically most efficiently done via the Fourier transform; however, analytic calculation may be more tractable using the auto-correlation approach.
=== Example ===
==== Ideal lens system with circular aperture ====
===== Auto-correlation of the pupil function =====
Since the optical transfer function is the Fourier transform of the point spread function, and the point spread function is the squared absolute value of the inverse Fourier transform of the pupil function, the optical transfer function can also be calculated directly from the pupil function. From the convolution theorem it can be seen that the optical transfer function is in fact the autocorrelation of the pupil function.
The pupil function of an ideal optical system with a circular aperture is a disk of unit radius. The optical transfer function of such a system can thus be calculated geometrically from the intersecting area between two identical disks at a distance of $2\nu$, where $\nu$ is the spatial frequency normalized to the highest transmitted frequency. In general the optical transfer function is normalized to a maximum value of one for $\nu = 0$, so the resulting area should be divided by $\pi$.
The intersecting area can be calculated as the sum of the areas of two identical circular segments: $\theta/2 - \sin(\theta)/2$, where $\theta$ is the circle segment angle. By substituting $|\nu| = \cos(\theta/2)$, and using the equalities $\sin(\theta)/2 = \sin(\theta/2)\cos(\theta/2)$ and $1 = \nu^2 + \sin(\arccos(|\nu|))^2$, the equation for the area can be rewritten as $\arccos(|\nu|) - |\nu|\sqrt{1 - \nu^2}$. Hence the normalized optical transfer function is given by:

$$\operatorname{OTF}(\nu) = \frac{2}{\pi}\left(\arccos(|\nu|) - |\nu|\sqrt{1 - \nu^2}\right).$$
A more detailed discussion can be found in the cited references.
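The closed-form expression above translates directly into code. The following minimal sketch evaluates the diffraction-limited OTF of a circular aperture at normalized spatial frequencies; the clipping at $|\nu| = 1$ reflects that no contrast is transmitted beyond the cut-off. The function name is illustrative.

```python
import numpy as np

def otf_circular(nu):
    """Diffraction-limited OTF of a circular aperture.

    nu : spatial frequency normalized to the cut-off frequency,
         so the OTF is 1 at nu = 0 and 0 for |nu| >= 1.
    """
    nu = np.clip(np.abs(nu), 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu ** 2))

# Example: contrast at half the cut-off frequency.
print(otf_circular(0.5))   # ~0.39
```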
=== Numerical evaluation ===
The one-dimensional optical transfer function can be calculated as the discrete Fourier transform of the line spread function. This data is then graphed against spatial frequency. In this case, a sixth-order polynomial is fitted to the MTF-versus-spatial-frequency curve to show the trend. The 50% cutoff frequency is determined to yield the corresponding spatial frequency. Thus, the approximate position of best focus of the unit under test is determined from this data.
The Fourier transform of the line spread function (LSF), given by the following equations, cannot generally be determined analytically:
$$\operatorname{MTF} = \mathcal{F}\left[\operatorname{LSF}\right] \qquad\qquad \operatorname{MTF} = \int f(x)\, e^{-i 2\pi x s}\, dx$$
Therefore, the Fourier transform is numerically approximated using the discrete Fourier transform $\mathcal{DFT}$.
$$\operatorname{MTF} = \mathcal{DFT}[\operatorname{LSF}] = Y_k = \sum_{n=0}^{N-1} y_n\, e^{-ik\frac{2\pi}{N}n} \qquad k \in [0, N-1]$$
where

$Y_k$ = the $k$th value of the MTF
$N$ = the number of data points
$n$ = the index
$k$ = the $k$th term of the MTF, i.e., the frequency index
$y_n$ = the value of the LSF at the $n$th pixel position
$i = \sqrt{-1}$
$e^{\pm ia} = \cos(a) \pm i\sin(a)$
$$\operatorname{MTF} = \mathcal{DFT}[\operatorname{LSF}] = Y_k = \sum_{n=0}^{N-1} y_n \left[\cos\left(k\frac{2\pi}{N}n\right) - i\sin\left(k\frac{2\pi}{N}n\right)\right] \qquad k \in [0, N-1]$$
The MTF is then plotted against spatial frequency and all relevant data concerning this test can be determined from that graph.
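In practice the sums above are evaluated with a fast Fourier transform rather than written out term by term. The following is a minimal sketch under the assumption that the LSF has already been sampled into a one-dimensional array; the function name and arguments are illustrative.

```python
import numpy as np

def mtf_from_lsf(lsf, dx):
    """Approximate the MTF as the magnitude of the DFT of a sampled LSF.

    lsf : 1D array of line-spread-function samples
    dx  : pixel spacing (e.g., in millimeters)
    """
    Y = np.fft.fft(lsf)                     # Y_k = sum_n y_n exp(-ik 2*pi*n / N)
    mtf = np.abs(Y) / np.abs(Y[0])          # normalize so that MTF(0) == 1
    freqs = np.fft.fftfreq(len(lsf), d=dx)  # cycles per unit of dx
    n = len(lsf) // 2
    return freqs[:n], mtf[:n]               # keep the non-negative frequencies
```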
=== The vectorial transfer function ===
At high numerical apertures such as those found in microscopy, it is important to consider the vectorial nature of the fields that carry light. By decomposing the waves into three independent components corresponding to the Cartesian axes, a point spread function can be calculated for each component and combined into a vectorial point spread function. Similarly, a vectorial optical transfer function can be determined, as shown in the cited references.
== Measurement ==
The optical transfer function is not only useful for the design of optical systems, it is also valuable for characterizing manufactured systems.
=== Starting from the point spread function ===
The optical transfer function is defined as the Fourier transform of the impulse response of the optical system, also called the point spread function. The optical transfer function is thus readily obtained by first acquiring the image of a point source, and applying the two-dimensional discrete Fourier transform to the sampled image. Such a point source can, for example, be a bright light behind a screen with a pinhole, a fluorescent or metallic microsphere, or simply a dot painted on a screen. Calculation of the optical transfer function via the point spread function is versatile as it can fully characterize optics with spatially varying and chromatic aberrations by repeating the procedure for various positions and wavelength spectra of the point source.
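A minimal sketch of this procedure, assuming `psf_img` is a background-subtracted camera image of the point source (the name and the normalization by total detected intensity are illustrative, following the conventions used earlier in this article):

```python
import numpy as np

def otf_from_psf_image(psf_img):
    """2D OTF from a sampled point-spread-function image.

    psf_img : 2D array, background-subtracted image of a point source.
    """
    otf = np.fft.fft2(psf_img)
    otf /= otf[0, 0]                 # normalize: MTF at zero frequency == 1
    return np.fft.fftshift(otf)      # put the zero-frequency term in the center

# The MTF is then the magnitude: mtf = np.abs(otf_from_psf_image(psf_img))
```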
=== Using extended test objects for spatially invariant optics ===
When the aberrations can be assumed to be spatially invariant, alternative patterns can be used to determine the optical transfer function such as lines and edges. The corresponding transfer functions are referred to as the line-spread function and the edge-spread function, respectively. Such extended objects illuminate more pixels in the image, and can improve the measurement accuracy due to the larger signal-to-noise ratio. The optical transfer function is in this case calculated as the two-dimensional discrete Fourier transform of the image and divided by that of the extended object. Typically either a line or a black-white edge is used.
==== The line-spread function ====
The two-dimensional Fourier transform of a line through the origin is a line orthogonal to it and through the origin. The divisor is thus zero for all but a single dimension; by consequence, the optical transfer function can only be determined for a single dimension using a single line-spread function (LSF). If necessary, the two-dimensional optical transfer function can be determined by repeating the measurement with lines at various angles.
The line spread function can be found using two different methods. It can be found directly from an ideal line approximation provided by a slit test target or it can be derived from the edge spread function, discussed in the next sub section.
==== Edge-spread function ====
The two-dimensional Fourier transform of an edge is also only non-zero on a single line, orthogonal to the edge. This function is sometimes referred to as the edge spread function (ESF). However, the values on this line are inversely proportional to the distance from the origin. Although the measurement images obtained with this technique illuminate a large area of the camera, this mainly benefits the accuracy at low spatial frequencies. As with the line spread function, each measurement only determines a single axis of the optical transfer function, so repeated measurements are necessary if the optical system cannot be assumed to be rotationally symmetric.
As shown in the right hand figure, an operator defines a box area encompassing the edge of a knife-edge test target image back-illuminated by a black body. The box area is defined to be approximately 10% of the total frame area. The image pixel data is translated into a two-dimensional array (pixel intensity and pixel position). The amplitude (pixel intensity) of each line within the array is normalized and averaged. This yields the edge spread function.
$$\operatorname{ESF} = \frac{X - \mu}{\sigma} \qquad\qquad \sigma = \sqrt{\frac{\sum_{i=0}^{n-1}(x_i - \mu)^2}{n}} \qquad\qquad \mu = \frac{\sum_{i=0}^{n-1} x_i}{n}$$
where

ESF = the output array of normalized pixel intensity data
$X$ = the input array of pixel intensity data
$x_i$ = the $i$th element of $X$
$\mu$ = the average value of the pixel intensity data
$\sigma$ = the standard deviation of the pixel intensity data
$n$ = the number of pixels used in the average
The line spread function is identical to the first derivative of the edge spread function, which is differentiated using numerical methods. In case it is more practical to measure the edge spread function, one can determine the line spread function as follows:
$$\operatorname{LSF} = \frac{d}{dx}\operatorname{ESF}(x)$$
Typically the ESF is only known at discrete points, so the LSF is numerically approximated using the finite difference:
$$\operatorname{LSF} = \frac{d}{dx}\operatorname{ESF}(x) \approx \frac{\Delta\operatorname{ESF}}{\Delta x}$$
$$\operatorname{LSF} \approx \frac{\operatorname{ESF}_{i+1} - \operatorname{ESF}_{i-1}}{2(x_{i+1} - x_{i})}$$
where:

$i$ = the index, $i = 1, 2, \dots, n-1$
$x_i$ = the position of the $i$th pixel
$\operatorname{ESF}_i$ = the ESF value at the $i$th pixel
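Putting the pieces of this section together, the following sketch runs the whole chain: normalize the averaged edge profile into the ESF, differentiate it numerically into the LSF, and take the DFT to obtain the MTF. The array `edge_profile` (the averaged intensity profile across the knife edge) and the function name are illustrative assumptions.

```python
import numpy as np

def mtf_from_edge(edge_profile, dx):
    """Knife-edge MTF measurement sketch.

    edge_profile : 1D array, averaged intensity profile across the edge
    dx           : pixel spacing
    """
    # Normalize the edge-spread function: ESF = (X - mu) / sigma
    esf = (edge_profile - edge_profile.mean()) / edge_profile.std()

    # LSF is the first derivative of the ESF (central differences inside).
    lsf = np.gradient(esf, dx)

    # MTF is the magnitude of the DFT of the LSF, normalized to 1 at DC.
    Y = np.fft.fft(lsf)
    mtf = np.abs(Y) / np.abs(Y[0])
    freqs = np.fft.fftfreq(len(lsf), d=dx)
    n = len(lsf) // 2
    return freqs[:n], mtf[:n]
```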
==== Using a grid of black and white lines ====
Although 'sharpness' is often judged on grid patterns of alternate black and white lines, it should strictly be measured using a sine-wave variation from black to white (a blurred version of the usual pattern). Where a square wave pattern is used (simple black and white lines) not only is there more risk of aliasing, but account must be taken of the fact that the fundamental component of a square wave is higher than the amplitude of the square wave itself (the harmonic components reduce the peak amplitude). A square wave test chart will therefore show optimistic results (better resolution of high spatial frequencies than is actually achieved). The square wave result is sometimes referred to as the 'contrast transfer function' (CTF).
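The factor referred to here comes from the Fourier series of a square wave of amplitude $A$, whose fundamental component has amplitude $4A/\pi$, roughly 27% larger than the square wave itself:

$$f(x) = \frac{4A}{\pi}\left(\sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \cdots\right).$$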
== Factors affecting MTF in typical camera systems ==
In practice, many factors result in considerable blurring of a reproduced image, such that patterns with spatial frequency just below the Nyquist rate may not even be visible, and the finest patterns that do appear can be 'washed out' as shades of grey, not black and white. A major factor is usually the impossibility of making the perfect 'brick wall' optical filter (often realized as a 'phase plate' or a lens with specific blurring properties in digital cameras and video camcorders). Such a filter is necessary to reduce aliasing by eliminating spatial frequencies above the Nyquist rate of the display.
=== Oversampling and downconversion to maintain the optical transfer function ===
The only way in practice to approach the theoretical sharpness possible in a digital imaging system such as a camera is to use more pixels in the camera sensor than samples in the final image, and to 'downconvert' or 'interpolate' using special digital processing which cuts off high frequencies above the Nyquist rate to avoid aliasing whilst maintaining a reasonably flat MTF up to that frequency. This approach was first taken in the 1970s when flying-spot scanners, and later CCD line scanners, were developed which sampled more pixels than were needed and then downconverted; this is why movies have always looked sharper on television than other material shot with a video camera. The only theoretically correct way to interpolate or downconvert is by use of a steep low-pass spatial filter, realized by convolution with a two-dimensional sin(x)/x weighting function, which requires powerful processing. In practice, various mathematical approximations to this are used to reduce the processing requirement. These approximations are now implemented widely in video editing systems and in image processing programs such as Photoshop.
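As a rough indication of what such an approximation looks like, the following one-dimensional sketch applies a windowed-sinc low-pass filter before decimation; the filter length, the Hamming window, and the integer decimation factor are illustrative assumptions, and real video pipelines use more elaborate two-dimensional variants.

```python
import numpy as np

def downsample_sinc(signal, factor, taps=64):
    """Downsample by an integer factor with a windowed-sinc anti-alias filter."""
    n = np.arange(-taps, taps + 1)
    # Ideal low-pass with cut-off at the new Nyquist rate, Hamming-windowed.
    h = np.sinc(n / factor) / factor
    h *= np.hamming(len(n))
    h /= h.sum()                       # unity gain at DC
    filtered = np.convolve(signal, h, mode="same")
    return filtered[::factor]
```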
Just as standard-definition video with a high-contrast MTF is only possible with oversampling, so HD television with full theoretical sharpness is only possible by starting with a camera that has a significantly higher resolution, followed by digital filtering. With movies now being shot in 4K and even 8K video for the cinema, we can expect to see the best pictures on HDTV only from movies or material shot at the higher standard. However much we raise the number of pixels used in cameras, this will always remain true in the absence of a perfect optical spatial filter. Similarly, a 5-megapixel image obtained from a 5-megapixel still camera can never be sharper than a 5-megapixel image obtained after down-conversion from an equal-quality 10-megapixel still camera. Because of the problem of maintaining a high-contrast MTF, broadcasters like the BBC long considered maintaining standard-definition television, but improving its quality by shooting and viewing with many more pixels (though as previously mentioned, such a system, though impressive, does ultimately lack the very fine detail which, though attenuated, enhances the effect of true HD viewing).
Another factor in digital cameras and camcorders is lens resolution. A lens may be said to 'resolve' 1920 horizontal lines, but this does not mean that it does so with full modulation from black to white. The 'modulation transfer function' (just a term for the magnitude of the optical transfer function with phase ignored) gives the true measure of lens performance, and is represented by a graph of amplitude against spatial frequency.
Lens aperture diffraction also limits MTF. Whilst reducing the aperture of a lens usually reduces aberrations and hence improves the flatness of the MTF, there is an optimum aperture for any lens and image sensor size beyond which smaller apertures reduce resolution because of diffraction, which spreads light across the image sensor. This was hardly a problem in the days of plate cameras and even 35 mm film, but has become an insurmountable limitation with the very small format sensors used in some digital cameras and especially video cameras. First-generation HD consumer camcorders used 1/4-inch sensors, for which apertures smaller than about f/4 begin to limit resolution. Even professional video cameras mostly use 2/3-inch sensors, prohibiting the use of apertures around f/16 that would have been considered normal for film formats. Certain cameras (such as the Pentax K10D) feature an "MTF autoexposure" mode, where the choice of aperture is optimized for maximum sharpness. Typically this means somewhere in the middle of the aperture range.
=== Trend to large-format DSLRs and improved MTF potential ===
There has recently been a shift towards the use of large image format digital single-lens reflex cameras driven by the need for low-light sensitivity and narrow depth of field effects. This has led to such cameras becoming preferred by some film and television program makers over even professional HD video cameras, because of their 'filmic' potential. In theory, the use of cameras with 16- and 21-megapixel sensors offers the possibility of almost perfect sharpness by downconversion within the camera, with digital filtering to eliminate aliasing. Such cameras produce very impressive results, and appear to be leading the way in video production towards large-format downconversion with digital filtering becoming the standard approach to the realization of a flat MTF with true freedom from aliasing.
== Digital inversion of the OTF ==
Due to optical effects the contrast may be suboptimal and approach zero before the Nyquist frequency of the display is reached. The optical contrast reduction can be partially reversed by digitally amplifying spatial frequencies selectively before display or further processing. Although more advanced digital image restoration procedures exist, the Wiener deconvolution algorithm is often used for its simplicity and efficiency. Since this technique multiplies the spatial spectral components of the image, it also amplifies noise and errors due to, e.g., aliasing. It is therefore only effective on good-quality recordings with a sufficiently high signal-to-noise ratio.
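A minimal frequency-domain sketch of the Wiener deconvolution mentioned above; `nsr`, the assumed-known noise-to-signal power ratio, is the regularization term that keeps the filter from amplifying noise where the OTF is weak. The function name and default value are illustrative.

```python
import numpy as np

def wiener_deconvolve(image, otf, nsr=0.01):
    """Wiener deconvolution of a blurred image, given the system OTF.

    image : 2D array, the recorded (blurred, noisy) image
    otf   : 2D array, the OTF sampled on the same (unshifted) frequency grid
    nsr   : scalar noise-to-signal power ratio (regularization, assumed known)
    """
    wiener_filter = np.conj(otf) / (np.abs(otf) ** 2 + nsr)
    restored = np.fft.ifft2(np.fft.fft2(image) * wiener_filter)
    return np.real(restored)
```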
== Limitations ==
In general, the point spread function, the image of a point source, also depends on factors such as the wavelength (color) and field angle (lateral point source position). When such variation is sufficiently gradual, the optical system could be characterized by a set of optical transfer functions. However, when the image of the point source changes abruptly, the optical transfer function does not describe the optical system accurately. Inaccuracies can often be mitigated by a collection of optical transfer functions at well-chosen wavelengths or field positions. However, a more complex characterization may be necessary for some imaging systems such as the light field camera.
== See also ==
Bokeh
Gamma correction
Minimum resolvable contrast
Minimum resolvable temperature difference
Optical resolution
Signal-to-noise ratio
Signal transfer function
Strehl ratio
Transfer function
Wavefront coding
== References ==
== External links ==
"Modulation transfer function", by Glenn D. Boreman on SPIE Optipedia.
"How to Measure MTF and other Properties of Lenses", by Optikos Corporation. | Wikipedia/Modulation_transfer_function |
Holography is a technique that enables a wavefront to be recorded and later reconstructed. It is best known as a method of generating three-dimensional images, and has a wide range of other uses, including data storage, microscopy, and interferometry. In principle, it is possible to make a hologram for any type of wave.
A hologram is a recording of an interference pattern that can reproduce a 3D light field using diffraction. In general usage, a hologram is a recording of any type of wavefront in the form of an interference pattern. It can be created by capturing light from a real scene, or it can be generated by a computer, in which case it is known as a computer-generated hologram, which can show virtual objects or scenes. Optical holography needs laser light to record the light field. The reproduced light field can generate an image that has the depth and parallax of the original scene. A hologram is usually unintelligible when viewed under diffuse ambient light. When suitably lit, the interference pattern diffracts the light into an accurate reproduction of the original light field, and the objects that were in it exhibit visual depth cues such as parallax and perspective that change realistically with the different angles of viewing. That is, the view of the image from different angles shows the subject viewed from similar angles.
A hologram is traditionally generated by overlaying a second wavefront, known as the reference beam, onto a wavefront of interest. This generates an interference pattern, which is then captured on a physical medium. When the recorded interference pattern is later illuminated by the second wavefront, it is diffracted to recreate the original wavefront. The 3D image from a hologram can often be viewed with non-laser light. However, in common practice, major image quality compromises are made to remove the need for laser illumination to view the hologram.
A computer-generated hologram is created by digitally modeling and combining two wavefronts to generate an interference pattern image. This image can then be printed onto a mask or film and illuminated with an appropriate light source to reconstruct the desired wavefront. Alternatively, the interference pattern image can be directly displayed on a dynamic holographic display.
Holographic portraiture often resorts to a non-holographic intermediate imaging procedure, to avoid the dangerous high-powered pulsed lasers which would be needed to optically "freeze" moving subjects as perfectly as the extremely motion-intolerant holographic recording process requires. Early holography required high-power and expensive lasers. Currently, mass-produced low-cost laser diodes, such as those found on DVD recorders and used in other common applications, can be used to make holograms. They have made holography much more accessible to low-budget researchers, artists, and dedicated hobbyists.
Most holograms produced are of static objects, but systems for displaying changing scenes on dynamic holographic displays are now being developed.
The word holography comes from the Greek words ὅλος (holos; "whole") and γραφή (graphē; "writing" or "drawing").
== History ==
The Hungarian-British physicist Dennis Gabor invented holography in 1948 while he was looking for a way to improve image resolution in electron microscopes. Gabor's work was built on pioneering work in the field of X-ray microscopy by other scientists including Mieczysław Wolfke in 1920 and William Lawrence Bragg in 1939. The formulation of holography was an unexpected result of Gabor's research into improving electron microscopes at the British Thomson-Houston Company (BTH) in Rugby, England, and the company filed a patent in December 1947 (patent GB685286). The technique as originally invented is still used in electron microscopy, where it is known as electron holography. Gabor was awarded the Nobel Prize in Physics in 1971 "for his invention and development of the holographic method".
Optical holography did not really advance until the development of the laser in 1960. The development of the laser enabled the first practical optical holograms that recorded 3D objects to be made in 1962 by Yuri Denisyuk in the Soviet Union and by Emmett Leith and Juris Upatnieks at the University of Michigan, US.
Early optical holograms used silver halide photographic emulsions as the recording medium. They were not very efficient as the produced diffraction grating absorbed much of the incident light. Various methods of converting the variation in transmission to a variation in refractive index (known as "bleaching") were developed which enabled much more efficient holograms to be produced.
A major advance in the field of holography was made by Stephen Benton, who invented a way to create holograms that can be viewed with natural light instead of lasers. These are called rainbow holograms.
== Basics of holography ==
Holography is a technique for recording and reconstructing light fields.
A light field is generally the result of a light source scattered off objects. Holography can be thought of as somewhat similar to sound recording, whereby a sound field created by vibrating matter, like musical instruments or vocal cords, is encoded in such a way that it can be reproduced later, without the presence of the original vibrating matter. However, it is even more similar to Ambisonic sound recording, in which any listening angle of a sound field can be reproduced in the reproduction.
=== Laser ===
In laser holography, the hologram is recorded using a source of laser light, which is very pure in its color and orderly in its composition. Various setups may be used, and several types of holograms can be made, but all involve the interaction of light coming from different directions and producing a microscopic interference pattern which a plate, film, or other medium photographically records.
In one common arrangement, the laser beam is split into two, one known as the object beam and the other as the reference beam. The object beam is expanded by passing it through a lens and used to illuminate the subject. The recording medium is located where this light, after being reflected or scattered by the subject, will strike it. The edges of the medium will ultimately serve as a window through which the subject is seen, so its location is chosen with that in mind. The reference beam is expanded and made to shine directly on the medium, where it interacts with the light coming from the subject to create the desired interference pattern.
Like conventional photography, holography requires an appropriate exposure time to correctly affect the recording medium. Unlike conventional photography, during the exposure the light source, the optical elements, the recording medium, and the subject must all remain motionless relative to each other, to within about a quarter of the wavelength of the light, or the interference pattern will be blurred and the hologram spoiled. With living subjects and some unstable materials, that is only possible if a very intense and extremely brief pulse of laser light is used, a hazardous procedure which is rarely done outside of scientific and industrial laboratory settings. Exposures lasting several seconds to several minutes, using a much lower-powered continuously operating laser, are typical.
=== Apparatus ===
A hologram can be made by shining part of the light beam directly into the recording medium, and the other part onto the object in such a way that some of the scattered light falls onto the recording medium. A more flexible arrangement for recording a hologram requires the laser beam to be aimed through a series of elements that change it in different ways. The first element is a beam splitter that divides the beam into two identical beams, each aimed in different directions:
One beam (known as the 'illumination' or 'object beam') is spread using lenses and directed onto the scene using mirrors. Some of the light scattered (reflected) from the scene then falls onto the recording medium.
The second beam (known as the 'reference beam') is also spread through the use of lenses, but is directed so that it does not come in contact with the scene, and instead travels directly onto the recording medium.
Several different materials can be used as the recording medium. One of the most common is a film very similar to photographic film (silver halide photographic emulsion), but with much smaller light-reactive grains (preferably with diameters less than 20 nm), making it capable of the much higher resolution that holograms require. A layer of this recording medium (e.g., silver halide) is attached to a transparent substrate, which is commonly glass, but may also be plastic.
=== Process ===
When the two laser beams reach the recording medium, their light waves intersect and interfere with each other. It is this interference pattern that is imprinted on the recording medium. The pattern itself is seemingly random, as it represents the way in which the scene's light interfered with the original light source – but not the original light source itself. The interference pattern can be considered an encoded version of the scene, requiring a particular key – the original light source – in order to view its contents.
This missing key is provided later by shining a laser, identical to the one used to record the hologram, onto the developed film. When this beam illuminates the hologram, it is diffracted by the hologram's surface pattern. This produces a light field identical to the one originally produced by the scene and scattered onto the hologram.
=== Comparison with photography ===
Holography may be better understood via an examination of its differences from ordinary photography:
A hologram represents a recording of information regarding the light that came from the original scene as scattered in a range of directions rather than from only one direction, as in a photograph. This allows the scene to be viewed from a range of different angles, as if it were still present.
A photograph can be recorded using normal light sources (sunlight or electric lighting) whereas a laser is required to record a hologram.
A lens is required in photography to record the image, whereas in holography, the light from the object is scattered directly onto the recording medium.
A holographic recording requires a second light beam (the reference beam) to be directed onto the recording medium.
A photograph can be viewed in a wide range of lighting conditions, whereas holograms can only be viewed with very specific forms of illumination.
When a photograph is cut in half, each piece shows half of the scene. When a hologram is cut in half, the whole scene can still be seen in each piece. This is because, whereas each point in a photograph only represents light scattered from a single point in the scene, each point on a holographic recording includes information about light scattered from every point in the scene. It can be thought of as viewing a street outside a house through a large window, then through a smaller window. One can see all of the same things through the smaller window (by moving the head to change the viewing angle), but the viewer can see more at once through the large window.
A photographic stereogram is a two-dimensional representation that can produce a three-dimensional effect but only from one point of view, whereas the reproduced viewing range of a hologram adds many more depth perception cues that were present in the original scene. These cues are recognized by the human brain and translated into the same perception of a three-dimensional image as when the original scene might have been viewed.
A photograph clearly maps out the light field of the original scene. The developed hologram's surface consists of a very fine, seemingly random pattern, which appears to bear no relationship to the scene it recorded.
== Physics of holography ==
For a better understanding of the process, it is necessary to understand interference and diffraction. Interference occurs when one or more wavefronts are superimposed. Diffraction occurs when a wavefront encounters an object. The process of producing a holographic reconstruction is explained below purely in terms of interference and diffraction. It is somewhat simplified but is accurate enough to give an understanding of how the holographic process works.
For those unfamiliar with these concepts, it is worthwhile to read those articles before reading further in this article.
=== Plane wavefronts ===
A diffraction grating is a structure with a repeating pattern. A simple example is a metal plate with slits cut at regular intervals. A light wave that is incident on a grating is split into several waves; the direction of these diffracted waves is determined by the grating spacing and the wavelength of the light.
A simple hologram can be made by superimposing two plane waves from the same light source on a holographic recording medium. The two waves interfere, giving a straight-line fringe pattern whose intensity varies sinusoidally across the medium. The spacing of the fringe pattern is determined by the angle between the two waves, and by the wavelength of the light.
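For two plane waves of wavelength $\lambda$ intersecting at a mutual angle $\theta$, the fringe spacing $\Lambda$ is given by the standard interference result (stated here as a supplement to the text):

$$\Lambda = \frac{\lambda}{2\sin(\theta/2)}.$$

Larger angles between the beams thus produce finer fringes, which is one reason holographic recording media must resolve far finer detail than photographic film.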
The recorded light pattern is a diffraction grating. When it is illuminated by only one of the waves used to create it, it can be shown that one of the diffracted waves emerges at the same angle at which the second wave was originally incident, so that the second wave has been 'reconstructed'. Thus, the recorded light pattern is a holographic recording as defined above.
=== Point sources ===
If the recording medium is illuminated with a point source and a normally incident plane wave, the resulting pattern is a sinusoidal zone plate, which acts as a negative Fresnel lens whose focal length is equal to the separation of the point source and the recording plane.
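In the paraxial approximation (an assumption added here for illustration), the recorded intensity at radius $r$ for a point source at distance $d$ interfering with a normally incident plane wave of wavelength $\lambda$ varies as

$$I(r) \propto 1 + \cos\!\left(\frac{\pi r^2}{\lambda d}\right),$$

which is the sinusoidal zone-plate pattern whose focal length equals $d$, as stated above.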
When a plane wave-front illuminates a negative lens, it is expanded into a wave that appears to diverge from the focal point of the lens. Thus, when the recorded pattern is illuminated with the original plane wave, some of the light is diffracted into a diverging beam equivalent to the original spherical wave; a holographic recording of the point source has been created.
When the plane wave is incident at a non-normal angle at the time of recording, the pattern formed is more complex, but still acts as a negative lens if it is illuminated at the original angle.
=== Complex objects ===
To record a hologram of a complex object, a laser beam is first split into two beams of light. One beam illuminates the object, which then scatters light onto the recording medium. According to diffraction theory, each point in the object acts as a point source of light so the recording medium can be considered to be illuminated by a set of point sources located at varying distances from the medium.
The second (reference) beam illuminates the recording medium directly. Each point source wave interferes with the reference beam, giving rise to its own sinusoidal zone plate in the recording medium. The resulting pattern is the sum of all these 'zone plates', which combine to produce a random (speckle) pattern as in the photograph above.
When the hologram is illuminated by the original reference beam, each of the individual zone plates reconstructs the object wave that produced it, and these individual wavefronts are combined to reconstruct the whole of the object beam. The viewer perceives a wavefront that is identical with the wavefront scattered from the object onto the recording medium, so that it appears that the object is still in place even if it has been removed.
== Applications ==
=== Art ===
Early on, artists saw the potential of holography as a medium and gained access to science laboratories to create their work. Holographic art is often the result of collaborations between scientists and artists, although some holographers would regard themselves as both an artist and a scientist.
Salvador Dalí claimed to have been the first to employ holography artistically. He was certainly the first and best-known surrealist to do so, but the 1972 New York exhibit of Dalí holograms had been preceded by the holographic art exhibition that was held at the Cranbrook Academy of Art in Michigan in 1968 and by the one at the Finch College gallery in New York in 1970, which attracted national media attention. In Great Britain, Margaret Benyon began using holography as an artistic medium in the late 1960s and had a solo exhibition at the University of Nottingham art gallery in 1969. This was followed in 1970 by a solo show at the Lisson Gallery in London, which was billed as the "first London expo of holograms and stereoscopic paintings".
During the 1970s, a number of art studios and schools were established, each with their particular approach to holography. Notably, there was the San Francisco School of Holography established by Lloyd Cross, The Museum of Holography in New York founded by Rosemary (Posy) H. Jackson, the Royal College of Art in London and the Lake Forest College Symposiums organised by Tung Jeong. None of these studios still exist; however, there is the Center for the Holographic Arts in New York and the HOLOcenter in Seoul, which offers artists a place to create and exhibit work.
During the 1980s, many artists who worked with holography helped the diffusion of this so-called "new medium" in the art world, such as Harriet Casdin-Silver of the United States, Dieter Jung of Germany, and Moysés Baumstein of Brazil, each one searching for a proper "language" to use with the three-dimensional work, avoiding the simple holographic reproduction of a sculpture or object. For instance, in Brazil, many concrete poets (Augusto de Campos, Décio Pignatari, Julio Plaza and José Wagner Garcia, associated with Moysés Baumstein) found in holography a way to express themselves and to renew concrete poetry.
A small but active group of artists still integrate holographic elements into their work. Some are associated with novel holographic techniques; for example, artist Matt Brand employed computational mirror design to eliminate image distortion from specular holography.
The MIT Museum and Jonathan Ross both have extensive collections of holography and on-line catalogues of art holograms.
=== Data storage ===
Holographic data storage is a technique that can store information at high density inside crystals or photopolymers. The ability to store large amounts of information in some kind of medium is of great importance, as many electronic products incorporate storage devices. As current storage techniques such as Blu-ray Disc reach the limit of possible data density (due to the diffraction-limited size of the writing beams), holographic storage has the potential to become the next generation of popular storage media. The advantage of this type of data storage is that the volume of the recording media is used instead of just the surface.
Currently available SLMs can produce about 1000 different images a second at 1024×1024-bit resolution which would result in about one-gigabit-per-second writing speed.
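The quoted figure follows from simple arithmetic:

$$1024 \times 1024\ \text{bits/page} \times 1000\ \text{pages/s} \approx 1.05 \times 10^{9}\ \text{bits/s} \approx 1\ \text{Gbit/s}.$$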
In 2005, companies such as Optware and Maxell produced a 120 mm disc that uses a holographic layer to store data to a potential 3.9 TB, a format called Holographic Versatile Disc. As of September 2014, no commercial product has been released.
Another company, InPhase Technologies, was developing a competing format, but went bankrupt in 2011 and all its assets were sold to Akonia Holographics, LLC.
While many holographic data storage models have used "page-based" storage, where each recorded hologram holds a large amount of data, more recent research into using submicrometre-sized "microholograms" has resulted in several potential 3D optical data storage solutions. While this approach to data storage can not attain the high data rates of page-based storage, the tolerances, technological hurdles, and cost of producing a commercial product are significantly lower.
=== Dynamic holography ===
In static holography, recording, developing and reconstructing occur sequentially, and a permanent hologram is produced.
There also exist holographic materials that do not need the developing process and can record a hologram in a very short time. This allows one to use holography to perform some simple operations in an all-optical way. Examples of applications of such real-time holograms include phase-conjugate mirrors ("time-reversal" of light), optical cache memories, image processing (pattern recognition of time-varying images), and optical computing.
The amount of processed information can be very high (terabits/s), since the operation is performed in parallel on a whole image. This compensates for the fact that the recording time, which is on the order of a microsecond, is still very long compared to the processing time of an electronic computer. The optical processing performed by a dynamic hologram is also much less flexible than electronic processing. On the one hand, the operation always has to be performed on the whole image, and on the other hand, the operation a hologram can perform is basically either a multiplication or a phase conjugation. In optics, addition and Fourier transform are already easily performed in linear materials, the latter simply by a lens. This enables some applications, such as a device that compares images in an optical way.
The search for novel nonlinear optical materials for dynamic holography is an active area of research. The most common materials are photorefractive crystals, but holograms have also been generated in semiconductors and semiconductor heterostructures (such as quantum wells), atomic vapors and gases, plasmas, and even liquids.
A particularly promising application is optical phase conjugation. It allows the removal of the wavefront distortions a light beam receives when passing through an aberrating medium, by sending it back through the same aberrating medium with a conjugated phase. This is useful, for example, in free-space optical communications to compensate for atmospheric turbulence (the phenomenon that gives rise to the twinkling of starlight).
=== Hobbyist use ===
Since the beginning of holography, many holographers have explored its uses and displayed them to the public.
In 1971, Lloyd Cross opened the San Francisco School of Holography and taught amateurs how to make holograms using only a small (typically 5 mW) helium-neon laser and inexpensive home-made equipment. Holography had been supposed to require a very expensive metal optical table set-up to lock all the involved elements down in place and damp any vibrations that could blur the interference fringes and ruin the hologram. Cross's home-brew alternative was a sandbox made of a cinder block retaining wall on a plywood base, supported on stacks of old tires to isolate it from ground vibrations, and filled with sand that had been washed to remove dust. The laser was securely mounted atop the cinder block wall. The mirrors and simple lenses needed for directing, splitting and expanding the laser beam were affixed to short lengths of PVC pipe, which were stuck into the sand at the desired locations. The subject and the photographic plate holder were similarly supported within the sandbox. The holographer turned off the room light, blocked the laser beam near its source using a small relay-controlled shutter, loaded a plate into the holder in the dark, left the room, waited a few minutes to let everything settle, then made the exposure by remotely operating the laser shutter.
In 1979, Jason Sapan opened the Holographic Studios in New York City. Since then, the studio has produced many holograms for artists as well as companies. Sapan has been described as the "last professional holographer of New York".
Many of these holographers would go on to produce art holograms. In 1983, Fred Unterseher, a co-founder of the San Francisco School of Holography and a well-known holographic artist, published the Holography Handbook, an easy-to-read guide to making holograms at home. This brought in a new wave of holographers and provided simple methods for using the then-available AGFA silver halide recording materials.
In 2000, Frank DeFreitas published the Shoebox Holography Book and introduced the use of inexpensive laser pointers to countless hobbyists. For many years, it had been assumed that certain characteristics of semiconductor laser diodes made them virtually useless for creating holograms, but when they were eventually put to the test of practical experiment, it was found that not only was this untrue, but that some actually provided a coherence length much greater than that of traditional helium-neon gas lasers. This was a very important development for amateurs, as the price of red laser diodes had dropped from hundreds of dollars in the early 1980s to about $5 once they had entered the mass market as components in CD players from the mid-1980s onwards and, later, in DVD players. Now, there are thousands of amateur holographers worldwide.
By late 2000, holography kits with inexpensive laser pointer diodes entered the mainstream consumer market. These kits enabled students, teachers, and hobbyists to make several kinds of holograms without specialized equipment, and became popular gift items by 2005. The introduction of holography kits with self-developing plates in 2003 made it possible for hobbyists to create holograms without the bother of wet chemical processing.
In 2006, a large number of surplus holography-quality green lasers (Coherent C315) became available and put dichromated gelatin (DCG) holography within the reach of the amateur holographer. The holography community was surprised by the sensitivity of DCG to green light; it had been assumed that this sensitivity would be uselessly slight or non-existent. Jeff Blyth responded with the G307 formulation of DCG to increase the speed and sensitivity to these new lasers.
Kodak and Agfa, the former major suppliers of holography-quality silver halide plates and films, are no longer in the market. While other manufacturers have helped fill the void, many amateurs are now making their own materials. The favorite formulations are dichromated gelatin, Methylene-Blue-sensitised dichromated gelatin, and diffusion method silver halide preparations. Jeff Blyth has published very accurate methods for making these in a small lab or garage.
A small group of amateurs are even constructing their own pulsed lasers to make holograms of living subjects and other unsteady or moving objects.
=== Holographic interferometry ===
Holographic interferometry (HI) is a technique that enables static and dynamic displacements of objects with optically rough surfaces to be measured to optical interferometric precision (i.e. to fractions of a wavelength of light). It can also be used to detect optical-path-length variations in transparent media, which enables, for example, fluid flow to be visualized and analyzed. It can also be used to generate contours representing the form of the surface or the isodose regions in radiation dosimetry.
It has been widely used to measure stress, strain, and vibration in engineering structures.
=== Interferometric microscopy ===
The hologram keeps the information on the amplitude and phase of the field. Several holograms may keep information about the same distribution of light, emitted in various directions. The numerical analysis of such holograms allows one to emulate a large numerical aperture, which, in turn, enables enhancement of the resolution of optical microscopy. The corresponding technique is called interferometric microscopy. Recent achievements of interferometric microscopy allow one to approach the quarter-wavelength limit of resolution.
=== Sensors or biosensors ===
The hologram is made with a modified material that interacts with certain molecules, generating a change in the fringe periodicity or refractive index and therefore in the color of the holographic reflection.
=== Security ===
Holograms are commonly used for security, as they are replicated from a master hologram that requires expensive, specialized and technologically advanced equipment, and are thus difficult to forge. They are used widely in many currencies, such as the Brazilian 20, 50, and 100-reais notes; British 5, 10, 20 and 50-pound notes; South Korean 5000, 10,000, and 50,000-won notes; Japanese 5000 and 10,000 yen notes, Indian 50, 100, 500, and 2000 rupee notes; and all the currently-circulating banknotes of the Canadian dollar, Croatian kuna, Danish krone, and Euro. They can also be found in credit and bank cards as well as passports, ID cards, books, food packaging, DVDs, and sports equipment. Such holograms come in a variety of forms, from adhesive strips that are laminated on packaging for fast-moving consumer goods to holographic tags on electronic products. They often contain textual or pictorial elements to protect identities and separate genuine articles from counterfeits.
Holographic scanners are in use in post offices, larger shipping firms, and automated conveyor systems to determine the three-dimensional size of a package. They are often used in tandem with checkweighers to allow automated pre-packing of given volumes, such as a truck or pallet for bulk shipment of goods.
Holograms produced in elastomers can be used as stress-strain reporters owing to the material's elasticity and compressibility; the applied pressure and force are correlated with the reflected wavelength and therefore with the color. Holography can also be used effectively for radiation dosimetry.
==== High security registration plates ====
High-security holograms can be used on license plates for vehicles such as cars and motorcycles. As of April 2019, holographic license plates are required on vehicles in parts of India to aid in identification and security, especially in cases of car theft. Such number plates hold electronic data of vehicles, and have a unique ID number and a sticker to indicate authenticity.
=== Extended reality (XR) ===
In March 2022, a real-time holographic communication solution was developed by Mária Virčíková and Matúš Kirchmayer, creating the world's first holographic-presence app requiring only a smartphone camera. Their company, MATSUKO, patented the single-camera technology, enabling users to transmit and interact as realistic 3D holograms in XR environments, supported by 5G networks and mixed-reality glasses.
Further advancements, including a spatial computing holographic meeting experience MATSUKO developed with Telefónica and NVIDIA, were unveiled and demonstrated at Mobile World Congress (MWC) 2024 in February 2024. This iteration leveraged 5G, edge computing, and AI to enhance realism with eye contact and facial expression tracking, supporting devices like Apple Vision Pro and Meta Quest.
== Holography using other types of waves ==
In principle, it is possible to make a hologram for any wave.
Electron holography is the application of holography techniques to electron waves rather than light waves. Electron holography was invented by Dennis Gabor to improve the resolution and avoid the aberrations of the transmission electron microscope. Today it is commonly used to study electric and magnetic fields in thin films, as magnetic and electric fields can shift the phase of the interfering wave passing through the sample. The principle of electron holography can also be applied to interference lithography.
Acoustic holography enables sound maps of an object to be generated. Measurements of the acoustic field are made at many points close to the object. These measurements are digitally processed to produce the "images" of the object.
Atomic holography has evolved out of the development of the basic elements of atom optics. With Fresnel diffraction lenses and atomic mirrors, atomic holography follows as a natural step in the development of the physics (and applications) of atomic beams. Recent developments, including atomic mirrors and especially ridged mirrors, have provided the tools necessary for the creation of atomic holograms, although such holograms have not yet been commercialized.
Neutron beam holography has been used to see the inside of solid objects.
Holograms with x-rays are generated by using synchrotrons or x-ray free-electron lasers as radiation sources and pixelated detectors such as CCDs as the recording medium. The reconstruction is then retrieved via computation. Due to the shorter wavelength of x-rays compared to visible light, this approach allows imaging objects with higher spatial resolution. As free-electron lasers can provide intense and coherent ultrashort x-ray pulses in the femtosecond range, x-ray holography has been used to capture ultrafast dynamic processes.
== False holograms ==
There are many optical effects that are commonly confused with holography, such as the effects produced by lenticular printing, the Pepper's ghost illusion (or modern variants such as the Musion Eyeliner), tomography, and volumetric displays. Such illusions have been called "fauxlography".
The Pepper's ghost technique, being the easiest of these methods to implement, is most prevalent in 3D displays that claim to be (or are referred to as) "holographic". While the original illusion, used in theater, involved actual physical objects and persons located offstage, modern variants replace the source object with a digital screen, which displays imagery generated with 3D computer graphics to provide the necessary depth cues. The reflection, which seems to float in mid-air, is still flat, however, and thus less realistic than if an actual 3D object were being reflected.
Examples of this digital version of Pepper's ghost illusion include the Gorillaz performances in the 2005 MTV Europe Music Awards and the 48th Grammy Awards; and Tupac Shakur's virtual performance at Coachella Valley Music and Arts Festival in 2012, rapping alongside Snoop Dogg during his set with Dr. Dre. Digital avatars of the Swedish supergroup ABBA were displayed on stage in May 2022. The ABBA performance used technology that was an updated version of Pepper's Ghost created by Industrial Light & Magic. American rock group KISS unveiled similar digital avatars in December 2023 to tour in their place at the conclusion of the End of the Road World Tour using the same Pepper's Ghost technology as the ABBA avatars.
An even simpler illusion can be created by rear-projecting realistic images into semi-transparent screens. The rear projection is necessary because otherwise the semi-transparency of the screen would allow the background to be illuminated by the projection, which would break the illusion.
Crypton Future Media, a music software company that produced Hatsune Miku, one of many Vocaloid singing synthesizer applications, has produced concerts that have Miku, along with other Crypton Vocaloids, performing on stage as "holographic" characters. These concerts use rear projection onto a semi-transparent DILAD screen to achieve their "holographic" effect.
In 2011, in Beijing, apparel company Burberry produced the "Burberry Prorsum Autumn/Winter 2011 Hologram Runway Show", which included life-size 2-D projections of models. The company's own video shows several centered and off-center shots of the main 2-dimensional projection screen, the latter revealing the flatness of the virtual models. The claim that holography was used was nonetheless reported as fact in the trade media.
In Madrid, on 10 April 2015, a public visual presentation called "Hologramas por la Libertad" (Holograms for Liberty), featuring a ghostly virtual crowd of demonstrators, was used to protest a new Spanish law that prohibits citizens from demonstrating in public places. Although widely called a "hologram protest" in news reports, no actual holography was involved – it was yet another technologically updated variant of the Pepper's ghost illusion.
Holography is distinct from specular holography which is a technique for making three-dimensional images by controlling the motion of specularities on a two-dimensional surface. It works by reflectively or refractively manipulating bundles of light rays, not by using interference and diffraction.
== Tactile holograms ==
== In fiction ==
Holography has been widely referred to in movies, novels, and TV, usually in science fiction, starting in the late 1970s. Science fiction writers absorbed the urban legends surrounding holography that had been spread by overly enthusiastic scientists and entrepreneurs trying to market the idea. This had the effect of giving the public overly high expectations of the capability of holography, due to the unrealistic depictions of it in most fiction, where holograms are fully three-dimensional computer projections that are sometimes tactile through the use of force fields. Examples of this type of depiction include the hologram of Princess Leia in Star Wars, Arnold Rimmer from Red Dwarf, who was later converted to "hard light" to make him solid, and the Holodeck and Emergency Medical Hologram from Star Trek.
Holography has served as an inspiration for many video games with science fiction elements. In many titles, fictional holographic technology reflects real-life misrepresentations of the potential military use of holograms, such as the "mirage tanks" in Command & Conquer: Red Alert 2 that can disguise themselves as trees. Player characters are able to use holographic decoys in games such as Halo: Reach and Crysis 2 to confuse and distract the enemy. The StarCraft ghost agent Nova has access to a "holo decoy" as one of her three primary abilities in Heroes of the Storm.
Fictional depictions of holograms have, however, inspired technological advances in other fields, such as augmented reality, that promise to fulfill the fictional depictions of holograms by other means.
== See also ==
== References ==
== Bibliography ==
== Further reading ==
== External links ==
"Dennis Gabor – Autobiography", 30 September 2004, Nobelprize.org
"Holography, 1948-1971 Nobel Lecture", 11 December 1971, by Dennis Gabor
"How Holograms Work", How Stuff Works, by Tracy V. Wilson, 30 August 2023
"Holography" by The Strange Theory of Light, QED
"Making Real Holograms!!!!!!" at YouTube by The Thought Emporium, 19 November 2020
"How are holograms possible?" at YouTube by Grant Sanderson, 3Blue1Brown, 5 October 2024 | Wikipedia/Holographic |
The Sellmeier equation is an empirical relationship between refractive index and wavelength for a particular transparent medium. The equation is used to determine the dispersion of light in the medium.
It was first proposed in 1872 by Wilhelm Sellmeier and was a development of the work of Augustin Cauchy on Cauchy's equation for modelling dispersion.
== Description ==
In its original and the most general form, the Sellmeier equation is given as
{\displaystyle n^{2}(\lambda )=1+\sum _{i}{\frac {B_{i}\lambda ^{2}}{\lambda ^{2}-C_{i}}},}
where n is the refractive index, λ is the wavelength, and Bi and Ci are experimentally determined Sellmeier coefficients. These coefficients are usually quoted for λ in micrometres. Note that this λ is the vacuum wavelength, not that in the material itself, which is λ/n. A different form of the equation is sometimes used for certain types of materials, e.g. crystals.
Each term of the sum represents an absorption resonance of strength Bi at a wavelength √Ci. For example, the coefficients for BK7 below correspond to two absorption resonances in the ultraviolet and one in the mid-infrared region. Analytically, this process is based on approximating the underlying optical resonances as Dirac delta functions, followed by the application of the Kramers-Kronig relations. This results in real and imaginary parts of the refractive index which are physically sensible. However, close to each absorption peak, the equation gives non-physical values of n2 = ±∞, and in these wavelength regions a more precise model of dispersion, such as Helmholtz's, must be used.
If all terms are specified for a material, at long wavelengths far from the absorption peaks the value of n tends to
{\displaystyle n\approx {\sqrt {1+\sum _{i}B_{i}}}\approx {\sqrt {\varepsilon _{r}}},}
where εr is the relative permittivity of the medium.
For characterization of glasses the equation consisting of three terms is commonly used:
{\displaystyle n^{2}(\lambda )=1+{\frac {B_{1}\lambda ^{2}}{\lambda ^{2}-C_{1}}}+{\frac {B_{2}\lambda ^{2}}{\lambda ^{2}-C_{2}}}+{\frac {B_{3}\lambda ^{2}}{\lambda ^{2}-C_{3}}}.}
As an example, the coefficients for a common borosilicate crown glass known as BK7 are shown below:
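A minimal Python sketch of evaluating the three-term equation follows; the coefficient values used are the ones commonly tabulated for Schott BK7 (with λ in micrometres), and the function name is illustrative rather than taken from any library:

import math

# Sellmeier coefficients commonly tabulated for Schott BK7 (lambda in um).
BK7_B = (1.03961212, 0.231792344, 1.01046945)
BK7_C = (0.00600069867, 0.0200179144, 0.0103560653)  # um^2

def sellmeier_n(wavelength_um, B=BK7_B, C=BK7_C):
    """Refractive index n(lambda) from the three-term Sellmeier equation."""
    lam2 = wavelength_um ** 2
    n2 = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))
    return math.sqrt(n2)

print(sellmeier_n(0.5876))  # helium d-line; prints ~1.5168 for BK7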
For common optical glasses, the refractive index calculated with the three-term Sellmeier equation deviates from the actual refractive index by less than 5×10−6 over the wavelength range from 365 nm to 2.3 μm, which is of the order of the homogeneity of a glass sample. Additional terms are sometimes added to make the calculation even more precise.
Sometimes the Sellmeier equation is used in two-term form:
{\displaystyle n^{2}(\lambda )=A+{\frac {B_{1}\lambda ^{2}}{\lambda ^{2}-C_{1}}}+{\frac {B_{2}\lambda ^{2}}{\lambda ^{2}-C_{2}}}.}
Here the coefficient A is an approximation of the short-wavelength (e.g., ultraviolet) absorption contributions to the refractive index at longer wavelengths. Other variants of the Sellmeier equation exist that can account for a material's refractive index change due to temperature, pressure, and other parameters.
== Derivation ==
Analytically, the Sellmeier equation models the refractive index as due to a series of optical resonances within the bulk material. Its derivation from the Kramers-Kronig relations requires a few assumptions about the material, from which any deviations will affect the model's accuracy:
There exist a number of resonances, and the final refractive index can be calculated from the sum of the contributions from all resonances.
All optical resonances are at wavelengths far away from the wavelengths of interest, where the model is applied.
At these resonant frequencies, the imaginary component of the susceptibility ({\displaystyle \chi _{i}}) can be modeled as a delta function.
From the last assumption, the imaginary part of the electric susceptibility becomes:
{\displaystyle \chi _{i}(\omega )=\sum _{i}A_{i}\delta (\omega -\omega _{i})}
The real part of the refractive index comes from applying the Kramers-Kronig relations to the imaginary part:
{\displaystyle n^{2}=1+\chi _{r}(\omega )=1+{\frac {2}{\pi }}\int _{0}^{\infty }{\frac {\omega \chi _{i}(\omega )}{\omega ^{2}-\Omega ^{2}}}d\omega }
Plugging in the first equation above for the imaginary component:
{\displaystyle n^{2}=1+{\frac {2}{\pi }}\int _{0}^{\infty }\sum _{i}A_{i}\delta (\omega -\omega _{i}){\frac {\omega }{\omega ^{2}-\Omega ^{2}}}d\omega }
The order of summation and integration can be swapped. When evaluated, this gives the following, where {\displaystyle H} is the Heaviside function:
{\displaystyle n^{2}=1+{\frac {2}{\pi }}\sum _{i}A_{i}\int _{0}^{\infty }\delta (\omega -\omega _{i}){\frac {\omega }{\omega ^{2}-\Omega ^{2}}}d\omega =1+{\frac {2}{\pi }}\sum _{i}A_{i}{\frac {\omega _{i}H(\omega _{i})}{\omega _{i}^{2}-\Omega ^{2}}}}
Since the domain is assumed to be far from any resonances (assumption 2 above), {\displaystyle H(\omega _{i})} evaluates to 1 and a familiar form of the Sellmeier equation is obtained:
{\displaystyle n^{2}=1+{\frac {2}{\pi }}\sum _{i}A_{i}{\frac {\omega _{i}}{\omega _{i}^{2}-\Omega ^{2}}}}
By rearranging terms, the constants {\displaystyle B_{i}} and {\displaystyle C_{i}} can be substituted into the equation above to give the Sellmeier equation.
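A sketch of this last step (the intermediate notation here is an assumption, since the derivation above leaves it implicit): writing {\displaystyle \Omega =2\pi c/\lambda } and {\displaystyle \omega _{i}=2\pi c/{\sqrt {C_{i}}}}, each term rearranges as
{\displaystyle {\frac {2}{\pi }}\,{\frac {A_{i}\omega _{i}}{\omega _{i}^{2}-\Omega ^{2}}}={\frac {2A_{i}}{\pi \omega _{i}}}\,{\frac {\lambda ^{2}}{\lambda ^{2}-C_{i}}},}
so that the Sellmeier coefficients are identified as {\displaystyle B_{i}=2A_{i}/(\pi \omega _{i})} and {\displaystyle C_{i}=(2\pi c/\omega _{i})^{2}}.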
== Coefficients ==
== See also ==
Cauchy's equation
== References ==
== External links ==
RefractiveIndex.INFO Refractive index database featuring Sellmeier coefficients for many hundreds of materials.
A browser-based calculator giving refractive index from Sellmeier coefficients.
Annalen der Physik - free Access, digitized by the French national library
Sellmeier coefficients for 356 glasses from Ohara, Hoya, and Schott | Wikipedia/Sellmeier_equation |
The calculation of glass properties (glass modeling) is used to predict glass properties of interest or glass behavior under certain conditions (e.g., during production) without experimental investigation, based on past data and experience, with the intention to save time, material, financial, and environmental resources, or to gain scientific insight. It was first practised at the end of the 19th century by A. Winkelmann and O. Schott. The combination of several glass models together with other relevant functions can be used for optimization and six sigma procedures. In the form of statistical analysis glass modeling can aid with accreditation of new data, experimental procedures, and measurement institutions (glass laboratories).
== History ==
Historically, the calculation of glass properties is directly related to the founding of glass science. At the end of the 19th century the physicist Ernst Abbe developed equations that allow calculating the design of optimized optical microscopes in Jena, Germany, stimulated by co-operation with the optical workshop of Carl Zeiss. Before Ernst Abbe's time the building of microscopes was mainly a work of art and experienced craftsmanship, resulting in very expensive optical microscopes with variable quality. Now Ernst Abbe knew exactly how to construct an excellent microscope, but unfortunately, the required lenses and prisms with specific ratios of refractive index and dispersion did not exist. Ernst Abbe was not able to find answers to his needs from glass artists and engineers; glass making was not based on science at this time.
In 1879 the young glass engineer Otto Schott sent Abbe glass samples with a special composition (lithium silicate glass) that he had prepared himself and that he hoped would show special optical properties. Measurements by Ernst Abbe showed that Schott's glass samples did not have the desired properties, and they were also not as homogeneous as desired. Nevertheless, Ernst Abbe invited Otto Schott to work on the problem further and to evaluate all possible glass components systematically. Finally, Schott succeeded in producing homogeneous glass samples, and he invented borosilicate glass with the optical properties Abbe needed. These inventions gave rise to the well-known companies Zeiss and Schott Glass (see also Timeline of microscope technology). Systematic glass research was born. In 1908, Eugene Sullivan founded glass research in the United States as well (Corning, New York).
At the beginning of glass research it was most important to know the relation between the glass composition and its properties. For this purpose Otto Schott introduced the additivity principle in several publications for the calculation of glass properties. This principle implies that the relation between the glass composition and a specific property is linear in all glass component concentrations, assuming an ideal mixture, with Ci and bi representing specific glass component concentrations and related coefficients, respectively, in the equation below. The additivity principle is a simplification and is only valid within narrow composition ranges, as seen in the displayed diagrams for the refractive index and the viscosity. Nevertheless, the application of the additivity principle led the way to many of Schott's inventions, including optical glasses, glasses with low thermal expansion for cooking and laboratory ware (Duran), and glasses with reduced freezing point depression for mercury thermometers. Subsequently, English and Gehlhoff et al. published similar additive glass property calculation models. Schott's additivity principle is still widely in use today in glass research and technology.
Additivity Principle:
{\displaystyle {\mbox{Glass Property}}=b_{0}+\sum _{i=1}^{n}b_{i}C_{i}}
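A minimal Python sketch of the additivity principle follows; the intercept and component coefficients below are hypothetical placeholders for illustration, not measured values:

# Hypothetical additive model: property = b0 + sum(b_i * C_i),
# with concentrations C_i in percent and SiO2 as the base glass.
b0 = 1.458  # hypothetical intercept for the base glass (placeholder)
b = {"Na2O": 0.0020, "CaO": 0.0035, "Al2O3": 0.0015}  # hypothetical b_i

def additive_property(composition):
    """Evaluate the additivity principle for a composition dict (in %)."""
    return b0 + sum(b[comp] * conc for comp, conc in composition.items())

print(additive_property({"Na2O": 15.0, "CaO": 10.0, "Al2O3": 1.0}))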
== Global models ==
Schott and many scientists and engineers afterwards applied the additivity principle to experimental data measured in their own laboratory within sufficiently narrow composition ranges (local glass models). This is most convenient because disagreements between laboratories and non-linear glass component interactions do not need to be considered. In the course of several decades of systematic glass research, thousands of glass compositions were studied, resulting in millions of published glass properties, collected in glass databases. This huge pool of experimental data was not investigated as a whole until Bottinga, Kucuk, Priven, Choudhary, Mazurin, and Fluegel published their global glass models, using various approaches. In contrast to the models by Schott, the global models consider many independent data sources, making the model estimates more reliable. In addition, global models can reveal and quantify non-additive influences of certain glass component combinations on the properties, such as the mixed-alkali effect as seen in the adjacent diagram, or the boron anomaly. Global models also reflect interesting developments of glass property measurement accuracy, e.g., a decreasing accuracy of experimental data in modern scientific literature for some glass properties, shown in the diagram. They can be used for accreditation of new data, experimental procedures, and measurement institutions (glass laboratories). In the following sections (except melting enthalpy), empirical modeling techniques are presented, which seem to be a successful way of handling huge amounts of experimental data. The resulting models are applied in contemporary engineering and research for the calculation of glass properties.
Non-empirical (deductive) glass models exist. They are often not created to obtain reliable glass property predictions in the first place (except melting enthalpy), but to establish relations among several properties (e.g. atomic radius, atomic mass, chemical bond strength and angles, chemical valency, heat capacity) to gain scientific insight. In the future, the investigation of property relations in deductive models may ultimately lead to reliable predictions for all desired properties, provided the property relations are well understood and all required experimental data are available.
== Methods ==
Glass properties and glass behavior during production can be calculated through statistical analysis of glass databases such as GE-SYSTEM, SciGlass, and Interglad, sometimes combined with the finite element method. For estimating the melting enthalpy, thermodynamic databases are used.
=== Linear regression ===
If the desired glass property is not related to crystallization (e.g., liquidus temperature) or phase separation, linear regression can be applied using common polynomial functions up to the third degree. Below is an example equation of the second degree. The C-values are the glass component concentrations like Na2O or CaO in percent or other fractions, the b-values are coefficients, and n is the total number of glass components. The glass main component silica (SiO2) is excluded in the equation below because of over-parametrization due to the constraint that all components sum up to 100%. Many terms in the equation below can be neglected based on correlation and significance analysis. Systematic errors such as seen in the picture are quantified by dummy variables. Further details and examples are available in an online tutorial by Fluegel.
{\displaystyle {\mbox{Glass Property}}=b_{0}+\sum _{i=1}^{n}\left(b_{i}C_{i}+\sum _{k=i}^{n}b_{ik}C_{i}C_{k}\right)}
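A minimal sketch of fitting such a second-degree model by least squares follows; the component columns and data values are placeholders, and numpy's lstsq is one standard way to estimate the coefficients:

import numpy as np

# Rows: glass component concentrations in percent (SiO2 excluded),
# e.g. columns Na2O and CaO; y: the measured property (placeholder values).
X = np.array([[15.0, 10.0], [12.0, 14.0], [18.0, 8.0], [10.0, 12.0],
              [16.0, 11.0], [13.0, 9.0], [11.0, 15.0], [17.0, 13.0]])
y = np.array([1.520, 1.525, 1.515, 1.518, 1.521, 1.516, 1.526, 1.524])

# Design matrix: intercept b0, linear terms b_i*C_i, and second-degree
# terms b_ik*C_i*C_k with k >= i, as in the equation above.
rows = []
for c in X:
    quad = [c[i] * c[k] for i in range(len(c)) for k in range(i, len(c))]
    rows.append([1.0, *c, *quad])
A = np.array(rows)

coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)  # b0, b1, b2, b11, b12, b22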
=== Non-linear regression ===
The liquidus temperature has been modeled by non-linear regression using neural networks and disconnected peak functions. The disconnected peak functions approach is based on the observation that within one primary crystalline phase field linear regression can be applied, and that at eutectic points sudden changes occur.
=== Glass melting enthalpy ===
The glass melting enthalpy reflects the amount of energy needed to convert the mix of raw materials (batch) to a glass melt. It depends on the batch and glass compositions, on the efficiency of the furnace and heat regeneration systems, the average residence time of the glass in the furnace, and many other factors. A pioneering article about the subject was written by Carl Kröger in 1953.
=== Finite element method ===
For modeling of the glass flow in a glass melting furnace the finite element method is applied commercially, based on data or models for viscosity, density, thermal conductivity, heat capacity, absorption spectra, and other relevant properties of the glass melt. The finite element method may also be applied to glass forming processes.
=== Optimization ===
It is often required to optimize several glass properties simultaneously, including production costs.
This can be performed, e.g., by simplex search, or in a spreadsheet as follows:
Listing of the desired properties;
Entering of models for the reliable calculation of properties based on the glass composition, including a formula for estimating the production costs;
Calculation of the squares of the differences (errors) between desired and calculated properties;
Reduction of the sum of square errors using the Solver option in Microsoft Excel with the glass components as variables. Other software (e.g. Microcal Origin) can also be used to perform these optimizations.
It is possible to weight the desired properties differently. Basic information about the principle can be found in an article by Huff et al. The combination of several glass models together with further relevant technological and financial functions can be used in six sigma optimization.
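The same spreadsheet procedure can be sketched in Python with scipy in place of the Solver; the property models, targets, and bounds below are hypothetical placeholders:

from scipy.optimize import minimize

def refractive_index(x):   # x = [Na2O %, CaO %]; hypothetical additive model
    return 1.46 + 0.002 * x[0] + 0.003 * x[1]

def cost_per_kg(x):        # hypothetical raw-material cost model
    return 0.50 + 0.010 * x[0] + 0.005 * x[1]

TARGETS = (1.52, 0.60)     # desired property value and cost (placeholders)

def sum_of_squared_errors(x):
    # Squares of the differences between desired and calculated values.
    return ((refractive_index(x) - TARGETS[0]) ** 2
            + (cost_per_kg(x) - TARGETS[1]) ** 2)

result = minimize(sum_of_squared_errors, x0=[10.0, 10.0],
                  bounds=[(0.0, 25.0), (0.0, 20.0)])
print(result.x, result.fun)  # optimized composition and residual error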
== See also ==
Glass batch calculation
== References == | Wikipedia/Glass_model |
In optics, encircled energy is a measure of the concentration of energy in an image, or in a projected laser beam at a given range. For example, if a single star is brought to its sharpest focus by a lens, giving the smallest image possible with that given lens (called a point spread function or PSF), calculation of the encircled energy of the resulting image gives the distribution of energy in that PSF.
Encircled energy is calculated by first determining the total energy of the PSF over the full image plane, then determining the centroid of the PSF. Circles of increasing radius are then created at that centroid and the PSF energy within each circle is calculated and divided by the total energy. As the circle increases in radius, more of the PSF energy is enclosed, until the circle is sufficiently large to completely contain all the PSF energy. The encircled energy curve thus ranges from zero to one.
A typical criterion for encircled energy (EE) is the radius of the PSF at which either 50% or 80% of the energy is encircled. This is a linear dimension, typically in micrometers. When divided by the lens or mirror focal length, this gives the angular size of the PSF, typically expressed in arc-seconds when specifying astronomical optical system performance.
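A minimal Python sketch of this procedure follows, assuming the PSF is supplied as a background-subtracted 2-D array fully contained in the frame (the function names are illustrative):

import numpy as np

def encircled_energy(psf):
    """Return sorted radii (pixels) and the cumulative energy fraction."""
    total = psf.sum()
    ys, xs = np.indices(psf.shape)
    cy = (ys * psf).sum() / total             # centroid row
    cx = (xs * psf).sum() / total             # centroid column
    r = np.hypot(ys - cy, xs - cx).ravel()    # pixel distances from centroid
    order = np.argsort(r)
    ee = np.cumsum(psf.ravel()[order]) / total
    return r[order], ee                       # ee rises from ~0 to 1

def ee_radius(radii, ee, fraction=0.5):
    """Smallest radius enclosing a given energy fraction (e.g. 0.5 or 0.8)."""
    return radii[np.searchsorted(ee, fraction)]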
Encircled energy is also used to quantify the spreading of a laser beam at a given distance. All laser beams spread due to the necessarily limited aperture of the optical system projecting the beam. As in star-image PSFs, the linear spreading of the beam expressed as encircled energy is divided by the projection distance to give the angular spreading.
An alternative to encircled energy is ensquared energy, typically used when quantifying image sharpness for digital imaging cameras using pixels.
== See also ==
Point spread function
Airy disc
== References ==
Smith, Warren J., Modern Optical Engineering, 3rd ed., pp. 383–385. New York: McGraw-Hill, Inc., 2000. ISBN 0-07-136360-2 | Wikipedia/Encircled_energy |
Orthopedic surgery or orthopedics (alternative spelling orthopaedics) is the branch of surgery concerned with conditions involving the musculoskeletal system. Orthopedic surgeons use both surgical and nonsurgical means to treat musculoskeletal trauma, spine diseases, sports injuries, degenerative diseases, infections, tumors and congenital disorders.
== Etymology ==
Nicholas Andry coined the word in French as orthopédie, derived from the Ancient Greek words ὀρθός orthos ("correct", "straight") and παιδίον paidion ("child"), and published Orthopedie (translated as Orthopædia: Or the Art of Correcting and Preventing Deformities in Children) in 1741. The word was assimilated into English as orthopædics; the ligature æ was common in that era for ae in Greek- and Latin-based words. As the name implies, the discipline was initially developed with attention to children, but the correction of spinal and bone deformities in all stages of life eventually became the cornerstone of orthopedic practice.
=== Differences in spelling ===
As with many words derived with the "æ" ligature, simplification to either "ae" or just "e" is common, especially in North America. In the US, the majority of college, university, and residency programmes, and even the American Academy of Orthopaedic Surgeons, still use the spelling with the digraph ae, though hospitals usually use the shortened form. Elsewhere, usage is not uniform; in Canada, both spellings are acceptable; "orthopaedics" is the normal spelling in the UK in line with other fields which retain "ae".
== History ==
=== Early orthopedics ===
Many developments in orthopedic surgery have resulted from experiences during wartime. On the battlefields of the Middle Ages, the injured were treated with bandages soaked in horses' blood, which dried to form a stiff, if unsanitary, splint.
Originally, the term orthopedics meant the correcting of musculoskeletal deformities in children. Nicolas Andry, a professor of medicine at the University of Paris, coined the term in the first textbook written on the subject in 1741. He advocated the use of exercise, manipulation, and splinting to treat deformities in children. His book was directed towards parents, and while some topics would be familiar to orthopedists today, it also included 'excessive sweating of the palms' and freckles.
Jean-André Venel established the first orthopedic institute in 1780, which was the first hospital dedicated to the treatment of children's skeletal deformities. He developed the club-foot shoe for children born with foot deformities and various methods to treat curvature of the spine.
Advances made in surgical technique during the 18th century, such as John Hunter's research on tendon healing and Percival Pott's work on spinal deformity steadily increased the range of new methods available for effective treatment. Robert Chessher, a pioneering British orthopedist, invented the double-inclined plane, used to treat lower-body bone fractures, in 1790. Antonius Mathijsen, a Dutch military surgeon, invented the plaster of Paris cast in 1851. Until the 1890s, though, orthopedics was still a study limited to the correction of deformity in children. One of the first surgical procedures developed was percutaneous tenotomy. This involved cutting a tendon, originally the Achilles tendon, to help treat deformities alongside bracing and exercises. In the late 1800s and first decades of the 1900s, significant controversy arose about whether orthopedics should include surgical procedures at all.
=== Modern orthopedics ===
Examples of people who aided the development of modern orthopedic surgery were Hugh Owen Thomas, a surgeon from Wales, and his nephew, Robert Jones. Thomas became interested in orthopedics and bone-setting at a young age, and after establishing his own practice, went on to expand the field into the general treatment of fracture and other musculoskeletal problems. He advocated enforced rest as the best remedy for fractures and tuberculosis, and created the so-called "Thomas splint" to stabilize a fractured femur and prevent infection. He is also responsible for numerous other medical innovations that all carry his name: Thomas's collar to treat tuberculosis of the cervical spine, Thomas's maneuver, an orthopedic investigation for fracture of the hip joint, the Thomas test, a method of detecting hip deformity by having the patient lie flat in bed, and Thomas's wrench for reducing fractures, as well as a so-called "osteoclast" implement to break and reset bones.
Thomas's work was not fully appreciated in his own lifetime. Only during the First World War did his techniques come to be used for injured soldiers on the battlefield. His nephew, Sir Robert Jones, had already made great advances in orthopedics in his position as surgeon-superintendent for the construction of the Manchester Ship Canal in 1888. He was responsible for the injured among the 20,000 workers, and he organized the first comprehensive accident service in the world, dividing the 36-mile site into three sections, and establishing a hospital and a string of first-aid posts in each section. He had the medical personnel trained in fracture management. He personally managed 3,000 cases and performed 300 operations in his own hospital. This position enabled him to learn new techniques and improve the standard of fracture management. Physicians from around the world came to Jones' clinic to learn his techniques. Along with Alfred Tubby, Jones founded the British Orthopedic Society in 1894.
During the First World War, Jones served as a Territorial Army surgeon. He observed that treatment of fractures, both at the front and in hospitals at home, was inadequate, and his efforts led to the introduction of military orthopedic hospitals. He was appointed Inspector of Military Orthopedics, with responsibility for 30,000 beds. The hospital in Ducane Road, Hammersmith, became the model for both British and American military orthopedic hospitals. His advocacy of the use of the Thomas splint for the initial treatment of femoral fractures reduced mortality of open fractures of the femur from 87% to less than 8% in the period from 1916 to 1918.
The use of intramedullary rods to treat fractures of the femur and tibia was pioneered by Gerhard Küntscher of Germany. This made a noticeable difference to the speed of recovery of injured German soldiers during World War II and led to more widespread adoption of intramedullary fixation of fractures in the rest of the world. Traction was the standard method of treating thigh bone fractures until the late 1970s, though, when the Harborview Medical Center group in Seattle popularized intramedullary fixation without opening up the fracture.
The modern total hip replacement was pioneered by Sir John Charnley, an expert in tribology, at Wrightington Hospital in England in the 1960s. He found that joint surfaces could be replaced by implants cemented to the bone. His design consisted of a stainless steel, one-piece femoral stem and head, and a polyethylene acetabular component, both of which were fixed to the bone using PMMA (acrylic) bone cement. For over two decades, the Charnley low-friction arthroplasty and its derivative designs were the most-used systems in the world. This formed the basis for all modern hip implants.
The Exeter hip replacement system (with a slightly different stem geometry) was developed at the same time. Since Charnley, improvements have been continuous in the design and technique of joint replacement (arthroplasty) with many contributors, including W. H. Harris, the son of R. I. Harris, whose team at Harvard pioneered uncemented arthroplasty techniques with the bone bonding directly to the implant.
Knee replacements, using similar technology, were started by McIntosh in rheumatoid arthritis patients and later by Gunston and Marmor for osteoarthritis in the 1970s, developed by John Insall in New York using a fixed bearing system, and by Frederick Buechel and Michael Pappas using a mobile bearing system.
External fixation of fractures was refined by American surgeons during the Vietnam War, but a major contribution was made by Gavril Abramovich Ilizarov in the USSR. He was sent, without much orthopedic training, to look after injured Russian soldiers in Siberia in the 1950s. With no equipment, he was confronted with crippling conditions of unhealed, infected, and misaligned fractures. With the help of the local bicycle shop, he devised ring external fixators tensioned like the spokes of a bicycle. With this equipment, he achieved healing, realignment, and lengthening to a degree unheard of elsewhere. His Ilizarov apparatus is still used today as one of the distraction osteogenesis methods.
Modern orthopedic surgery and musculoskeletal research have sought to make surgery less invasive and to make implanted components better and more durable. On the other hand, since the emergence of the opioid epidemic, orthopedic surgeons have been identified as one of the highest prescribers of opioid medications. Decreasing prescription of opioids while still providing adequate pain control is a development in orthopedic surgery.
== Training ==
In the United States, orthopedic surgeons have typically completed four years of undergraduate education and four years of medical school and earned either a Doctor of Medicine (MD) or Doctor of Osteopathic Medicine (DO) degree. Subsequently, these medical school graduates undergo residency training in orthopedic surgery. The five-year residency is a categorical orthopedic surgery training.
Selection for residency training in orthopedic surgery is very competitive. Roughly 700 physicians complete orthopedic residency training per year in the United States. About 10% of current orthopedic surgery residents are women; about 20% are members of minority groups. There are around 20,400 actively practicing orthopedic surgeons and residents in the United States. According to the latest Occupational Outlook Handbook (2011–2012) published by the United States Department of Labor, 3–4% of all practicing physicians are orthopedic surgeons.
Many orthopedic surgeons elect to do further training, or fellowships, after completing their residency training. Fellowship training in an orthopedic sub-specialty is typically one year in duration (sometimes two) and sometimes has a research component involved with the clinical and operative training. Examples of orthopedic subspecialty training in the United States are:
Foot and ankle surgery
Hand and upper extremities
Hip and knee surgery
Orthopedic oncologist
Orthopedic trauma
Osseointegration
Pediatric orthopedics
Shoulder and elbow
Spine surgery
Surgical sports medicine
Total joint reconstruction (arthroplasty)
These specialized areas of medicine are not exclusive to orthopedic surgery. For example, hand surgery is practiced by some plastic surgeons, and spine surgery is practiced by most neurosurgeons. Additionally, foot and ankle surgery is also practiced by doctors of podiatric medicine (DPM) in the United States. Some family practice physicians practice sports medicine, but their scope of practice is nonoperative.
After completion of specialty residency or registrar training, an orthopedic surgeon is then eligible for board certification by the American Board of Medical Specialties or the American Osteopathic Association Bureau of Osteopathic Specialists. Certification by the American Board of Orthopedic Surgery or the American Osteopathic Board of Orthopedic Surgery means that the orthopedic surgeon has met the specified educational, evaluation, and examination requirements of the board. The process requires successful completion of a standardized written examination followed by an oral examination focused on the surgeon's clinical and surgical performance over a 6-month period. In Canada, the certifying organization is the Royal College of Physicians and Surgeons of Canada; in Australia and New Zealand, it is the Royal Australasian College of Surgeons.
In the United States, specialists in hand surgery and orthopedic sports medicine may obtain a certificate of added qualifications in addition to their board primary certification by successfully completing a separate standardized examination. No additional certification process exists for the other subspecialties.
== Practice ==
According to applications for board certification from 1999 to 2003, the top 25 most common procedures (in order) performed by orthopedic surgeons are:
Knee arthroscopy and meniscectomy
Shoulder arthroscopy and decompression
Carpal tunnel release
Knee arthroscopy and chondroplasty
Removal of support implant
Knee arthroscopy and anterior cruciate ligament reconstruction
Knee replacement
Repair of femoral neck fracture
Repair of trochanteric fracture
Debridement of skin/muscle/bone/fracture
Knee arthroscopy repair of both menisci
Hip replacement
Shoulder arthroscopy/distal clavicle excision
Repair of rotator cuff tendon
Repair fracture of radius/ulna
Laminectomy
Repair of ankle fracture (bimalleolar type)
Shoulder arthroscopy and debridement
Lumbar spinal fusion
Repair fracture of the distal part of radius
Low back intervertebral disc surgery
Incise finger tendon sheath
Repair of ankle fracture (fibula)
Repair of femoral shaft fracture
Repair of trochanteric fracture
A typical schedule for a practicing orthopedic surgeon involves 50–55 hours of work per week divided among clinic, surgery, various administrative duties, and possibly teaching and/or research if in an academic setting. According to the American Association of Medical Colleges, in 2021 the average work week of an orthopedic surgeon was 57 hours. This may be a low estimate, however, as research derived from a 2013 survey of orthopedic surgeons who self-identified as "highly successful" owing to their prominent positions in the field indicated average work weeks of 70 hours or more.
== Arthroscopy ==
The use of arthroscopic techniques has been particularly important for injured patients. Arthroscopy was pioneered in the early 1950s by Masaki Watanabe of Japan to perform minimally invasive cartilage surgery and reconstructions of torn ligaments. Arthroscopy allows patients to recover from the surgery in a matter of days, rather than the weeks to months required by conventional, "open" surgery; it is a very popular technique. Knee arthroscopy is one of the most common operations performed by orthopedic surgeons today, and is often combined with meniscectomy or chondroplasty. The majority of upper-extremity outpatient orthopedic procedures are now performed arthroscopically.
== Arthroplasty ==
Arthroplasty is an orthopedic surgery where the articular surface of a musculoskeletal joint is replaced, remodeled, or realigned by osteotomy or some other procedure. It is an elective procedure that is done to relieve pain and restore function to the joint after damage by arthritis (rheumasurgery) or some other type of trauma. As well as the standard total knee replacement surgery, the unicompartmental knee replacement, in which only one weight-bearing surface of an arthritic knee is replaced, may be performed, but it bears a significant risk of revision surgery. Joint replacements are used for other joints, most commonly the hip or shoulder.
A post-surgical concern with joint replacements is wear of the bearing surfaces of components. This can lead to damage to the surrounding bone and contribute to eventual failure of the implant. The plastic chosen is usually ultra-high-molecular-weight polyethylene, which can also be altered in ways that may improve wear characteristics. The risk of revision surgery has also been shown to be associated with surgeon volume.
== Epidemiology ==
Between 2001 and 2016, the prevalence of musculoskeletal procedures drastically increased in the U.S., from 17.9% to 24.2% of all operating-room (OR) procedures performed during hospital stays.
In a study of hospitalizations in the United States in 2012, spine and joint procedures were common among all age groups except infants. Spinal fusion was one of the five most common OR procedures performed in every age group except infants younger than 1 year and adults 85 years and older. Laminectomy was common among adults aged 18–84 years. Knee arthroplasty and hip replacement were in the top five OR procedures for adults aged 45 years and older.
== See also ==
Bone grafting
Index of trauma and orthopaedics articles
List of orthopedic implants
Orthopaedic physician's assistant
Orthotics
Outline of trauma and orthopedics
== References ==
== External links ==
Media related to Orthopedics at Wikimedia Commons | Wikipedia/Orthopedic_surgery |
Emission theory, also called emitter theory or ballistic theory of light, was a competing theory for the special theory of relativity, explaining the results of the Michelson–Morley experiment of 1887. Emission theories obey the principle of relativity by having no preferred frame for light transmission, but say that light is emitted at speed "c" relative to its source instead of applying the invariance postulate. Thus, emitter theory combines electrodynamics and mechanics with a simple Newtonian theory. Although there are still proponents of this theory outside the scientific mainstream, this theory is considered to be conclusively discredited by most scientists.
== History ==
The name most often associated with emission theory is Isaac Newton. In his corpuscular theory, Newton visualized light "corpuscles" being thrown off from hot bodies at a nominal speed of c with respect to the emitting object and obeying the usual laws of Newtonian mechanics; we then expect light to be moving towards us with a speed that is offset by the speed of the distant emitter (c ± v).
In the 20th century, special relativity was created by Albert Einstein to solve the apparent conflict between electrodynamics and the principle of relativity. The theory's geometrical simplicity was persuasive, and the majority of scientists accepted relativity by 1911. However, a few scientists rejected the second basic postulate of relativity: the constancy of the speed of light in all inertial frames. So different types of emission theories were proposed where the speed of light depends on the velocity of the source, and the Galilean transformation is used instead of the Lorentz transformation. All of them can explain the negative outcome of the Michelson–Morley experiment, since the speed of light is constant with respect to the interferometer in all frames of reference. Some of those theories were:
Light retains throughout its whole path the component of velocity which it obtained from its original moving source, and after reflection light spreads out in spherical form around a center which moves with the same velocity as the original source. (Proposed by Walter Ritz in 1908). This model was considered to be the most complete emission theory. (Actually, Ritz was modeling Maxwell–Lorentz electrodynamics. In a later paper Ritz said that the emission particles in his theory should suffer interactions with charges along their path and thus waves (produced by them) would not retain their original emission velocities indefinitely.)
The excited portion of a reflecting mirror acts as a new source of light and the reflected light has the same velocity c with respect to the mirror as has original light with respect to its source. (Proposed by Richard Chase Tolman in 1910, although he was a supporter of special relativity).
Light reflected from a mirror acquires a component of velocity equal to the velocity of the mirror image of the original source (Proposed by Oscar M. Stewart in 1911).
A modification of the Ritz–Tolman theory was introduced by J. G. Fox (1965). He argued that the extinction theorem (i.e., the regeneration of light within the traversed medium) must be considered. In air, the extinction distance would be only 0.2 cm, that is, after traversing this distance the speed of light would be constant with respect to the medium, not to the initial light source. (Fox himself was, however, a supporter of special relativity.)
Albert Einstein is supposed to have worked on his own emission theory before abandoning it in favor of his special theory of relativity. Many years later R.S. Shankland reports Einstein as saying that Ritz's theory had been "very bad" in places and that he himself had eventually discarded emission theory because he could think of no form of differential equations that described it, since it leads to the waves of light becoming "all mixed up".
== Refutations of emission theory ==
The following scheme was introduced by de Sitter to test emission theories:
{\displaystyle c'=c\pm kv}
where c is the speed of light, v that of the source, c' the resultant speed of light, and k a constant denoting the extent of source dependence which can attain values between 0 and 1. According to special relativity and the stationary aether, k=0, while emission theories allow values up to 1. Numerous terrestrial experiments have been performed, over very short distances, where no "light dragging" or extinction effects could come into play, and again the results confirm that light speed is independent of the speed of the source, conclusively ruling out emission theories.
=== Astronomical sources ===
In 1910 Daniel Frost Comstock and in 1913 Willem de Sitter wrote that for the case of a double-star system seen edge-on, light from the approaching star might be expected to travel faster than light from its receding companion, and overtake it. If the distance were great enough for an approaching star's "fast" signal to catch up with and overtake the "slow" light that it had emitted earlier when it was receding, then the image of the star system should appear completely scrambled. De Sitter argued that none of the star systems he had studied showed this extreme optical behavior, and this was considered the death knell for Ritzian theory and emission theory in general, with
{\displaystyle k<2\times 10^{-3}}.
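A back-of-the-envelope sketch of the overtaking argument, with illustrative numbers (a k = 1 ballistic theory and a relatively nearby binary):

c = 3.0e8    # speed of light, m/s
v = 3.0e4    # orbital speed of the star, m/s (illustrative)
L = 1.0e18   # distance to the binary, m (roughly 100 light-years)
k = 1.0      # full source dependence, as in a naive emission theory

t_recede   = L / (c - k * v)   # flight time of light emitted while receding
t_approach = L / (c + k * v)   # flight time of light emitted while approaching
delta_t = t_recede - t_approach  # approximately 2*k*v*L/c**2
print(delta_t / 86400, "days")   # ~7.7 days; the image scrambles once this
                                 # becomes comparable to the orbital period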
The effect of extinction on de Sitter's experiment has been considered in detail by Fox, and it arguably undermines the cogency of de Sitter type evidence based on binary stars. However, similar observations have been made more recently in the x-ray spectrum by Brecher (1977), which have a long enough extinction distance that it should not affect the results. The observations confirm that the speed of light is independent of the speed of the source, with
{\displaystyle k<2\times 10^{-9}}.
Hans Thirring argued in 1924 that an atom accelerated by thermal collisions in the sun during the emission process emits light rays having different velocities at their start and end points. One end of the light ray would then overtake the preceding parts, and consequently the distance between the ends would be elongated by up to 500 km before they reach Earth, so that the mere existence of sharp spectral lines in the sun's radiation disproves the ballistic model.
=== Terrestrial sources ===
Such experiments include that of Sadeh (1963), who used a time-of-flight technique to measure velocity differences of photons traveling in opposite directions, which were produced by positron annihilation. Another experiment was conducted by Alväger et al. (1963), who compared the time of flight of gamma rays from moving and resting sources. Both experiments found no difference, in accordance with relativity.
Filippas and Fox (1964) did not consider Sadeh (1963) and Alväger (1963) to have sufficiently controlled for the effects of extinction. So they conducted an experiment using a setup specifically designed to account for extinction. Data collected from various detector-target distances were consistent with there being no dependence of the speed of light on the velocity of the source, and were inconsistent with modeled behavior assuming c ± v both with and without extinction.
Continuing their previous investigations, Alväger et al. (1964) observed π0-mesons which decay into photons at 99.9% light speed. The experiment showed that the photons didn't attain the velocity of their sources and still traveled at the speed of light, with
{\displaystyle k=(-3\pm 13)\times 10^{-5}}. Investigation of the media crossed by the photons showed that the extinction shift was not sufficient to distort the result significantly.
Also measurements of neutrino speed have been conducted. Mesons travelling nearly at light speed were used as sources. Since neutrinos only participate in the electroweak interaction, extinction plays no role. Terrestrial measurements provided upper limits of
{\displaystyle k\leq 10^{-6}}.
=== Interferometry ===
The Sagnac effect demonstrates that one beam on a rotating platform covers less distance than the other beam, which creates the shift in the interference pattern. Georges Sagnac's original experiment has been shown to suffer from extinction effects, but since then the Sagnac effect has also been shown to occur in vacuum, where extinction plays no role.
The predictions of Ritz's version of emission theory were consistent with almost all terrestrial interferometric tests save those involving the propagation of light in moving media, and Ritz did not consider the difficulties presented by tests such as the Fizeau experiment to be insurmountable. Tolman, however, noted that a Michelson–Morley experiment using an extraterrestrial light source could provide a decisive test of the Ritz hypothesis. In 1924, Rudolf Tomaschek performed a modified Michelson–Morley experiment using starlight, while Dayton Miller used sunlight. Both experiments were inconsistent with the Ritz hypothesis.
Babcock and Bergman (1964) placed rotating glass plates between the mirrors of a common-path interferometer set up in a static Sagnac configuration. If the glass plates behave as new sources of light so that the total speed of light emerging from their surfaces is c + v, a shift in the interference pattern would be expected. However, there was no such effect which again confirms special relativity, and which again demonstrates the source independence of light speed. This experiment was executed in vacuum, thus extinction effects should play no role.
Albert Abraham Michelson (1913) and Quirino Majorana (1918/9) conducted interferometer experiments with resting sources and moving mirrors (and vice versa), and showed that there is no source dependence of light speed in air. Michelson's arrangement was designed to distinguish between three possible interactions of moving mirrors with light: (1) "the light corpuscles are reflected as projectiles from an elastic wall", (2) "the mirror surface acts as a new source", (3) "the velocity of light is independent of the velocity of the source". His results were consistent with source independence of light speed. Majorana analyzed the light from moving sources and mirrors using an unequal arm Michelson interferometer that was extremely sensitive to wavelength changes. Emission theory asserts that Doppler shifting of light from a moving source represents a frequency shift with no shift in wavelength. Instead, Majorana detected wavelength changes inconsistent with emission theory.
Beckmann and Mandics (1965) repeated the Michelson (1913) and Majorana (1918) moving mirror experiments in high vacuum, finding k to be less than 0.09. Although the vacuum employed was insufficient to definitively rule out extinction as the reason for their negative results, it was sufficient to make extinction highly unlikely. Light from the moving mirror passed through a Lloyd interferometer, part of the beam traveling a direct path to the photographic film, part reflecting off the Lloyd mirror. The experiment compared the speed of light hypothetically traveling at c + v from the moving mirrors, versus reflected light hypothetically traveling at c from the Lloyd mirror.
=== Other refutations ===
Emission theories use the Galilean transformation, according to which time coordinates are invariant when changing frames ("absolute time"). Thus the Ives–Stilwell experiment, which confirms relativistic time dilation, also refutes the emission theory of light. As shown by Howard Percy Robertson, the complete Lorentz transformation can be derived when the Ives–Stilwell experiment is considered together with the Michelson–Morley experiment and the Kennedy–Thorndike experiment.
Furthermore, quantum electrodynamics places the propagation of light in an entirely different, but still relativistic, context, which is completely incompatible with any theory that postulates a speed of light that is affected by the speed of the source.
== See also ==
History of special relativity
Tests of special relativity
== References ==
Isaac Newton, Philosophiæ Naturalis Principia Mathematica
Isaac Newton, Opticks
== External links ==
de Sitter (1913) papers on binary stars as evidence against Ritz's emission theory. | Wikipedia/Emission_theory_(relativity) |
In mathematics, a collocation method is a method for the numerical solution of ordinary differential equations, partial differential equations and integral equations. The idea is to choose a finite-dimensional space of candidate solutions (usually polynomials up to a certain degree) and a number of points in the domain (called collocation points), and to select that solution which satisfies the given equation at the collocation points.
== Ordinary differential equations ==
Suppose that the ordinary differential equation
{\displaystyle y'(t)=f(t,y(t)),\quad y(t_{0})=y_{0},}
is to be solved over the interval {\displaystyle [t_{0},t_{0}+h]}. Choose {\displaystyle c_{k}} from 0 ≤ c1 < c2 < ... < cn ≤ 1.
The corresponding (polynomial) collocation method approximates the solution y by the polynomial p of degree n which satisfies the initial condition
{\displaystyle p(t_{0})=y_{0}}, and the differential equation
{\displaystyle p'(t_{k})=f(t_{k},p(t_{k}))}
at all collocation points {\displaystyle t_{k}=t_{0}+c_{k}h} for k = 1, ..., n. This gives n + 1 conditions, which matches the n + 1 parameters needed to specify a polynomial of degree n.
All these collocation methods are in fact implicit Runge–Kutta methods. The coefficients ck in the Butcher tableau of a Runge–Kutta method are the collocation points. However, not all implicit Runge–Kutta methods are collocation methods.
=== Example: The trapezoidal rule ===
Pick, as an example, the two collocation points c1 = 0 and c2 = 1 (so n = 2). The collocation conditions are
{\displaystyle p(t_{0})=y_{0},\,}
{\displaystyle p'(t_{0})=f(t_{0},p(t_{0})),\,}
{\displaystyle p'(t_{0}+h)=f(t_{0}+h,p(t_{0}+h)).\,}
There are three conditions, so p should be a polynomial of degree 2. Write p in the form
{\displaystyle p(t)=\alpha (t-t_{0})^{2}+\beta (t-t_{0})+\gamma \,}
to simplify the computations. Then the collocation conditions can be solved to give the coefficients
{\displaystyle {\begin{aligned}\alpha &={\frac {1}{2h}}{\Big (}f(t_{0}+h,p(t_{0}+h))-f(t_{0},p(t_{0})){\Big )},\\\beta &=f(t_{0},p(t_{0})),\\\gamma &=y_{0}.\end{aligned}}}
The collocation method is now given (implicitly) by
{\displaystyle y_{1}=p(t_{0}+h)=y_{0}+{\frac {1}{2}}h{\Big (}f(t_{0}+h,y_{1})+f(t_{0},y_{0}){\Big )},\,}
where y1 = p(t0 + h) is the approximate solution at t = t1 = t0 + h.
This method is known as the "trapezoidal rule" for differential equations. Indeed, this method can also be derived by rewriting the differential equation as
{\displaystyle y(t)=y(t_{0})+\int _{t_{0}}^{t}f(\tau ,y(\tau ))\,{\textrm {d}}\tau ,\,}
and approximating the integral on the right-hand side by the trapezoidal rule for integrals.
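As a concrete illustration, the implicit relation above can be solved numerically for y1 at each step. The following sketch applies one trapezoidal collocation step to the test problem y' = −y, y(0) = 1; the test equation, step size, and the use of a generic root finder are illustrative assumptions, not part of the method itself.

```python
import numpy as np
from scipy.optimize import fsolve

def f(t, y):
    return -y  # hypothetical test problem y' = -y, exact solution exp(-t)

t0, y0, h = 0.0, 1.0, 0.1
# Solve the implicit relation y1 = y0 + (h/2) * (f(t0 + h, y1) + f(t0, y0)) for y1.
residual = lambda y1: y0 + 0.5 * h * (f(t0 + h, y1) + f(t0, y0)) - y1
y1 = fsolve(residual, y0)[0]
print(y1, np.exp(-h))  # the collocation step vs. the exact solution
```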
=== Other examples ===
The Gauss–Legendre methods use the points of Gauss–Legendre quadrature as collocation points. The Gauss–Legendre method based on s points has order 2s. All Gauss–Legendre methods are A-stable.
In fact, one can show that the order of a collocation method corresponds to the order of the interpolatory quadrature rule based on the collocation points.
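For instance, the collocation points of the s-stage Gauss–Legendre method are the roots of the Legendre polynomial of degree s, shifted from [−1, 1] to [0, 1]. A quick check with NumPy (the choice s = 2 is an illustrative assumption):

```python
import numpy as np

s = 2
nodes, weights = np.polynomial.legendre.leggauss(s)  # roots/weights on [-1, 1]
c = 0.5 * (nodes + 1.0)                              # shift to [0, 1]
print(c)  # [0.21132487, 0.78867513], i.e. 1/2 -/+ sqrt(3)/6
```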
== Orthogonal collocation method ==
In the direct collocation method, we are essentially performing variational calculus with the finite-dimensional subspace of piecewise linear functions (as in the trapezoidal rule), or cubic functions, or other piecewise polynomial functions. In the orthogonal collocation method, we instead use the finite-dimensional subspace spanned by the first N vectors in some orthogonal polynomial basis, such as the Legendre polynomials.
== Notes ==
== References ==
Ascher, Uri M.; Petzold, Linda R. (1998), Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations, Philadelphia: Society for Industrial and Applied Mathematics, ISBN 978-0-89871-412-8.
Hairer, Ernst; Nørsett, Syvert Paul; Wanner, Gerhard (1993), Solving ordinary differential equations I: Nonstiff problems, Berlin, New York: Springer-Verlag, ISBN 978-3-540-56670-0.
Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, Bibcode:1996fcna.book.....I, ISBN 978-0-521-55655-2.
Wang, Yingwei; Chen, Suqin; Wu, Xionghua (2009), "A rational spectral collocation method for solving a class of parameterized singular perturbation problems", Journal of Computational and Applied Mathematics, 233 (10): 2652–2660, doi:10.1016/j.cam.2009.11.011. | Wikipedia/Collocation_method |
The moving particle semi-implicit (MPS) method is a computational method for the simulation of incompressible free surface flows. It is a macroscopic, deterministic particle method (Lagrangian mesh-free method) developed by Koshizuka and Oka (1996).
== Method ==
The MPS method is used to solve the Navier-Stokes equations in a Lagrangian framework. A fractional step method is applied which consists of splitting each time step into two steps of prediction and correction. The fluid is represented with particles, and the motion of each particle is calculated based on the interactions with the neighboring particles by means of a kernel function. The MPS method is similar to the SPH (smoothed-particle hydrodynamics) method (Gingold and Monaghan, 1977; Lucy, 1977) in that both methods provide approximations to the strong form of the partial differential equations (PDEs) on the basis of integral interpolants. However, the MPS method applies simplified differential operator models based solely on a local weighted averaging process, without taking the gradient of a kernel function. In addition, the solution process of the MPS method differs from that of the original SPH method, as the solutions to the PDEs are obtained through a semi-implicit prediction-correction process rather than the fully explicit one in the original SPH method.
== Applications ==
Over the past years, the MPS method has been applied in a wide range of engineering applications including Nuclear Engineering (e.g. Koshizuka et al., 1999; Koshizuka and Oka, 2001; Xie et al., 2005), Coastal Engineering (e.g. Gotoh et al., 2005; Gotoh and Sakai, 2006), Environmental Hydraulics (e.g. Shakibaeinia and Jin, 2009; Nabian and Farhadi, 2016), Ocean Engineering (Shibata and Koshizuka, 2007; Sueyoshi et al., 2008; Zuo et al. 2022), Structural Engineering (e.g. Chikazawa et al., 2001), Mechanical Engineering (e.g. Heo et al., 2002; Sun et al., 2009), Bioengineering (e.g. Tsubota et al., 2006) and Chemical Engineering (e.g. Sun et al., 2009; Xu and Jin, 2018).
== Improvements ==
Improved versions of the MPS method have been proposed for enhancement of numerical stability (e.g. Koshizuka et al., 1998; Zhang et al., 2005; Ataie-Ashtiani and Farhadi, 2006; Shakibaeinia and Jin, 2009; Jandaghian and Shakibaeinia, 2020; Cheng et al. 2021), momentum conservation (e.g. Hamiltonian MPS by Suzuki et al., 2007; Corrected MPS by Khayyer and Gotoh, 2008; Enhanced MPS by Jandaghian and Shakibaeinia, 2020), mechanical energy conservation (e.g. Hamiltonian MPS by Suzuki et al., 2007), pressure calculation (e.g. Khayyer and Gotoh, 2009; Kondo and Koshizuka, 2010; Khayyer and Gotoh, 2010; Xu and Jin, 2019), and for simulation of multiphase and granular flows (Nabian and Farhadi, 2016; Xu and Jin, 2021; Xu and Li, 2022).
== References ==
R.A. Gingold and J.J. Monaghan, "Smoothed particle hydrodynamics: theory and application to non-spherical stars," Mon. Not. R. Astron. Soc., Vol 181, pp. 375–89, 1977.
L.B. Lucy, "A numerical approach to the testing of the fission hypothesis," Astron. J., Vol 82, pp. 1013–1024, 1977.
S. Koshizuka and Y. Oka, "Moving particle semi-implicit method for fragmentation of incompressible fluid," Nuclear Science and Engineering, Vol 123, pp. 421–434, 1996.
S. Koshizuka, A. Nobe and Y. Oka, "Numerical Analysis of Breaking Waves Using the Moving Particle Semi-implicit Method," Int. J. Numer. Meth. Fluid, Vol 26, pp. 751–769, 1998.
S. Koshizuka, H. Ikeda and Y. Oka, "Numerical analysis of fragmentation mechanisms in vapor explosions," Nuclear Engineering and Design, Vol 189, pp. 423–433, 1999.
Y. Chikazawa, S. Koshizuka, and Y. Oka, "A particle method for elastic and visco-plastic structures and fluid-structure interactions," Comput. Mech. 27, pp. 97–106, 2001.
S. Koshizuka, S. and Y. Oka, "Application of Moving Particle Semi-implicit Method to Nuclear Reactor Safety," Comput. Fluid Dyn. J., Vol 9, pp. 366–375, 2001.
S. Heo, S. Koshizuka and Y. Oka, "Numerical analysis of boiling on high heat-flux and high subcooling condition using MPS-MAFL," International Journal of Heat and Mass Transfer, Vol 45, pp. 2633–2642, 2002.
H. Gotoh, H. Ikari, T. Memita and T. Sakai, "Lagrangian particle method for simulation of wave overtopping on a vertical seawall," Coast. Eng. J., Vol 47, No 2–3, pp. 157–181, 2005.
H. Xie, S. Koshizuka and Y. Oka, "Simulation of drop deposition process in annular mist flow using three-dimensional particle method," Nuclear Engineering and Design, Vol 235, pp. 1687–1697, 2005.
S. Zhang, K. Morita, K. Fukuda and N. Shirakawa, "An improved MPS method for numerical simulations of convective heat transfer problems," Int. J. Numer. Meth. Fluid, 51, 31–47, 2005.
B. Ataie-Ashtiani and L. Farhadi, "A stable moving particle semi-implicit method for free surface flows," Fluid Dynamics Research 38, pp. 241–256, 2006.
H. Gotoh and T. Sakai, "Key issues in the particle method for computation of wave breaking," Coastal Engineering, Vol 53, No 2–3, pp. 171–179, 2006.
K. Tsubota, S. Wada, H. Kamada, Y. Kitagawa, R. Lima and T. Yamaguchi, "A Particle Method for Blood Flow Simulation – Application to Flowing Red Blood Cells and Platelets–," Journal of the Earth Simulator, Vol 5, pp. 2–7, 2006.
K. Shibata and S. Koshizuka, "Numerical analysis of shipping water impact on a deck using a particle method," Ocean Engineering, Vol 34, pp. 585–593, 2007.
Y. Suzuki, S. Koshizuka, Y. Oka, "Hamiltonian moving-particle semi-implicit (HMPS) method for incompressible fluid flows," Computer Methods in Applied Mechanics and Engineering, Vol 196, pp. 2876-2894, 2007.
A. Khayyer and H. Gotoh, "Development of CMPS method for accurate water-surface tracking in breaking waves," Coast. Eng. J., Vol 50, No 2, pp. 179–207, 2008.
M. Sueyoshi, M. Kashiwagi and S. Naito, "Numerical simulation of wave-induced nonlinear motions of a two-dimensional floating body by the moving particle semi-implicit method," Journal of Marine Science and Technology, Vol 13, pp. 85–94, 2008.
A. Khayyer and H. Gotoh, "Modified Moving Particle Semi-implicit methods for the prediction of 2D wave impact pressure," Coastal Engineering, Vol 56, pp. 419–440, 2009.
A. Shakibaeinia and Y.C. Jin "A weakly compressible MPS method for simulation open-boundary free-surface flow." Int. J. Numer. Methods Fluids, 63 (10):1208–1232 (Published Online: 7 Aug 2009 doi:10.1002/fld.2132).
A. Shakibaeinia and Y.C. Jin "Lagrangian Modeling of flow over spillways using moving particle semi-implicit method." Proc. 33rd IAHR Congress, Vancouver, Canada, 2009, 1809–1816.
Z. Sun, G. Xi and X. Chen, "A numerical study of stir mixing of liquids with particle method," Chemical Engineering Science, Vol 64, pp. 341–350, 2009.
Z. Sun, G. Xi and X. Chen, "Mechanism study of deformation and mass transfer for binary droplet collisions with particle method," Phys. Fluids, Vol 21, 032106, 2009.
A. Khayyer and H. Gotoh, "A higher order Laplacian model for enhancement and stabilization of pressure calculation by the MPS method," Applied Ocean Research, Vol 32, pp. 124-131, 2010.
A. Shakibaeinia and Y.C. Jin "A mesh-free particle model for simulation of mobile-bed dam break." Advances in Water Resources, 34 (6):794–807 doi:10.1016/j.advwatres.2011.04.011.
A. Shakibaeinia and Y.C. Jin "A MPS Based Mesh-free Particle Method for Open Channel flow." Journal of Hydraulic Engineering ASCE. 137(11): 1375–1384. 2011.
M. Kondo and S. Koshizuka, "Improvement of stability in moving particle semi-implicit method", Int. J. Numer. Meth. Fluid, Vol. 65, pp. 638-654, 2011.
A. Shakibaeinia and Y.C. Jin "MPS Mesh-Free Particle Method for Multiphase Flows." Computer methods in Applied Mechanics and Engineering. 229–232: 13–26. 2012.
K.S. Kim, M.H. Kim and J.C. Park, "Development of MPS (Moving Particle Simulation) method for Multi-liquid-layer Sloshing," Journal of Mathematical Problems in Engineering, Vol 2014, doi:10.1155/2014/350165
M.A. Nabian and L. Farhadi, "Multiphase Mesh-Free Particle Method for Simulating Granular Flows and Sediment Transport," Journal of Hydraulic Engineering, 2016.
T. Xu, Y. C. Jin, Simulation the convective mixing of CO2 in geological formations with a meshless model. Chemical Engineering Science, 192, 187-198, 2018.
T. Xu, Y. C. Jin, Improvement of a projection-based particle method in free-surface flows by improved Laplacian model and stabilization techniques. Computers & Fluids, 191, 104235, 2019.
M. Jandaghian and A. Shakibaeinia "An enhanced weakly-compressible MPS method for free-surface flows," Computer Methods in Applied Mechanics and Engineering, vol. 360, p. 112771, 2020/03/01/ 2020, doi: https://doi.org/10.1016/j.cma.2019.112771.
L. Y. Cheng, R. A. Amaro Jr., E. H. Favero, "Improving stability of moving particle semi-implicit method by source terms based on time-scale correction of particle-level impulses," Engineering Analysis with Boundary Elements, Vol. 131, pp. 118-145, 2021.
T. Xu, Y. C. Jin, Two-dimensional continuum modelling granular column collapse by non-local peridynamics in a mesh-free method with rheology. Journal of Fluid Mechanics, 917, A51, 2021.
T. Xu, S. S. Li, Development of a non-local partial Peridynamic explicit mesh-free incompressible method and its validation for simulating dry dense granular flows. Acta Geotechnica, 1-20, 2022.
J. Zuo, T. Xu, D. Z. Zhu, H. Gu, Impact pressure of dam-break waves on a vertical wall with various downstream conditions by an explicit mesh-free method. Ocean Engineering, 256, 111569, 2022.
== External links ==
Laboratory of Professor Seiichi Koshizuka at the University of Tokyo, Japan
Laboratory of Professor Hitoshi Gotoh at Kyoto University, Japan
MPS-RYUJIN by Fuji Technical Research | Wikipedia/Moving_particle_semi-implicit_method |
Pseudo-spectral methods, also known as discrete variable representation (DVR) methods, are a class of numerical methods used in applied mathematics and scientific computing for the solution of partial differential equations. They are closely related to spectral methods, but complement the basis by an additional pseudo-spectral basis, which allows representation of functions on a quadrature grid. This simplifies the evaluation of certain operators, and can considerably speed up the calculation when using fast algorithms such as the fast Fourier transform.
== Motivation with a concrete example ==
Take the initial-value problem
{\displaystyle i{\frac {\partial }{\partial t}}\psi (x,t)={\Bigl [}-{\frac {\partial ^{2}}{\partial x^{2}}}+V(x){\Bigr ]}\psi (x,t),\qquad \qquad \psi (t_{0})=\psi _{0}}
with periodic conditions
{\displaystyle \psi (x+1,t)=\psi (x,t)}. This specific example is the Schrödinger equation for a particle in a potential V(x), but the structure is more general. In many practical partial differential equations, one has a term that involves derivatives (such as a kinetic energy contribution), and a multiplication with a function (for example, a potential).
In the spectral method, the solution ψ is expanded in a suitable set of basis functions, for example plane waves,
{\displaystyle \psi (x,t)={\frac {1}{\sqrt {2\pi }}}\sum _{n}c_{n}(t)e^{2\pi inx}.}
Insertion and equating identical coefficients yields a set of ordinary differential equations for the coefficients,
{\displaystyle i{\frac {d}{dt}}c_{n}(t)=(2\pi n)^{2}c_{n}+\sum _{k}V_{n-k}c_{k},}
where the elements {\displaystyle V_{n-k}} are calculated through the explicit Fourier transform
{\displaystyle V_{n-k}=\int _{0}^{1}V(x)\ e^{2\pi i(k-n)x}dx.}
The solution would then be obtained by truncating the expansion to N basis functions, and finding a solution for the {\displaystyle c_{n}(t)}. In general, this is done by numerical methods, such as Runge–Kutta methods. For the numerical solutions, the right-hand side of the ordinary differential equation has to be evaluated repeatedly at different time steps. At this point, the spectral method has a major problem with the potential term V(x).
In the spectral representation, the multiplication with the function V(x) transforms into a vector-matrix multiplication, which scales as {\displaystyle N^{2}}. Also, the matrix elements {\displaystyle V_{n-k}} need to be evaluated explicitly before the differential equation for the coefficients can be solved, which requires an additional step.
In the pseudo-spectral method, this term is evaluated differently. Given the coefficients {\displaystyle c_{n}(t)}, an inverse discrete Fourier transform yields the value of the function ψ at discrete grid points {\displaystyle x_{j}=j/N} (on the unit period). At these grid points, the function is then multiplied, {\displaystyle \psi '(x_{j},t)=V(x_{j})\psi (x_{j},t)}, and the result Fourier-transformed back. This yields a new set of coefficients {\displaystyle c'_{n}(t)} that are used instead of the matrix product {\displaystyle \sum _{k}V_{n-k}c_{k}(t)}.
It can be shown that both methods have similar accuracy. However, the pseudo-spectral method allows the use of a fast Fourier transform, which scales as {\displaystyle O(N\ln N)}, and is therefore significantly more efficient than the matrix multiplication. Also, the function V(x) can be used directly without evaluating any additional integrals.
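A minimal sketch of this grid-based evaluation, using NumPy's FFT with the plane-wave conventions above; the potential, the grid size, and the coefficient vector are illustrative assumptions.

```python
import numpy as np

N = 256
x = np.arange(N) / N                  # equally spaced grid on the period [0, 1)
V = np.cos(2 * np.pi * x)             # hypothetical potential V(x)

c = np.zeros(N, dtype=complex)        # some set of Fourier coefficients c_n
c[1] = 1.0
psi = np.fft.ifft(c) * N              # inverse DFT: values psi(x_j) on the grid
c_new = np.fft.fft(V * psi) / N       # multiply on the grid, transform back
# c_new now plays the role of the convolution sum over V_{n-k} c_k,
# obtained at O(N log N) cost instead of O(N^2).
```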
== Technical discussion ==
In a more abstract way, the pseudo-spectral method deals with the multiplication of two functions V(x) and f(x) as part of a partial differential equation. To simplify the notation, the time-dependence is dropped. Conceptually, it consists of three steps:
{\displaystyle f(x)} and {\displaystyle {\tilde {f}}(x)=V(x)f(x)} are expanded in a finite set of basis functions (this is the spectral method).
For a given set of basis functions, a quadrature is sought that converts scalar products of these basis functions into a weighted sum over grid points.
The product is calculated by multiplying {\displaystyle V,f} at each grid point.
=== Expansion in a basis ===
The functions {\displaystyle f,{\tilde {f}}} can be expanded in a finite basis {\displaystyle \{\phi _{n}\}_{n=0,\ldots ,N}} as
{\displaystyle f(x)=\sum _{n=0}^{N}c_{n}\phi _{n}(x)}
{\displaystyle {\tilde {f}}(x)=\sum _{n=0}^{N}{\tilde {c}}_{n}\phi _{n}(x)}
For simplicity, let the basis be orthogonal and normalized, {\displaystyle \langle \phi _{n},\phi _{m}\rangle =\delta _{nm}}, using the inner product {\displaystyle \langle f,g\rangle =\int _{a}^{b}f(x){\overline {g(x)}}dx} with appropriate boundaries a, b. The coefficients are then obtained by
{\displaystyle c_{n}=\langle f,\phi _{n}\rangle }
{\displaystyle {\tilde {c}}_{n}=\langle {\tilde {f}},\phi _{n}\rangle }
A bit of calculus yields then
{\displaystyle {\tilde {c}}_{n}=\sum _{m=0}^{N}V_{n-m}c_{m}}
with {\displaystyle V_{n-m}=\langle V\phi _{m},\phi _{n}\rangle }. This forms the basis of the spectral method. To distinguish the basis of the {\displaystyle \phi _{n}} from the quadrature basis, the expansion is sometimes called Finite Basis Representation (FBR).
=== Quadrature ===
For a given basis {\displaystyle \{\phi _{n}\}} and number {\displaystyle N+1} of basis functions, one can try to find a quadrature, i.e., a set of {\displaystyle N+1} points and weights such that
{\displaystyle \langle \phi _{n},\phi _{m}\rangle =\sum _{i=0}^{N}w_{i}\phi _{n}(x_{i}){\overline {\phi _{m}(x_{i})}}\qquad \qquad n,m=0,\ldots ,N}
Special examples are the Gaussian quadrature for polynomials and the Discrete Fourier Transform for plane waves. It should be stressed that the grid points and weights, {\displaystyle x_{i},w_{i}}, are a function of the basis and the number N.
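As a numerical illustration of such a quadrature, the sketch below checks the property above for weighted, normalized Hermite polynomials with the Gauss–Hermite rule; the basis size is an illustrative assumption.

```python
import math
import numpy as np
from numpy.polynomial import hermite as H

N = 6
xq, wq = H.hermgauss(N + 1)  # N + 1 Gauss-Hermite points and weights
# weighted basis phi_n = sqrt(w(x)) P_n(x) with w(x) = exp(-x^2) and
# P_n = H_n / sqrt(2^n n! sqrt(pi)); hermgauss absorbs the weight w(x).
phi = np.array([H.hermval(xq, [0] * n + [1])
                / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
                for n in range(N + 1)])
G = (phi * wq) @ phi.T       # quadrature approximation of <phi_n, phi_m>
print(np.allclose(G, np.eye(N + 1)))  # True: the rule reproduces delta_nm exactly
```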
The quadrature allows an alternative numerical representation of the functions {\displaystyle f(x),{\tilde {f}}(x)} through their values at the grid points. This representation is sometimes denoted Discrete Variable Representation (DVR), and is completely equivalent to the expansion in the basis.
{\displaystyle f(x_{i})=\sum _{n=0}^{N}c_{n}\phi _{n}(x_{i})}
{\displaystyle c_{n}=\langle f,\phi _{n}\rangle =\sum _{i=0}^{N}w_{i}f(x_{i}){\overline {\phi _{n}(x_{i})}}}
=== Multiplication ===
The multiplication with the function V(x) is then done at each grid point,
{\displaystyle {\tilde {f}}(x_{i})=V(x_{i})f(x_{i}).}
This generally introduces an additional approximation. To see this, we can calculate one of the coefficients {\displaystyle {\tilde {c}}_{n}}:
{\displaystyle {\tilde {c}}_{n}=\langle {\tilde {f}},\phi _{n}\rangle =\sum _{i}w_{i}{\tilde {f}}(x_{i}){\overline {\phi _{n}(x_{i})}}=\sum _{i}w_{i}V(x_{i})f(x_{i}){\overline {\phi _{n}(x_{i})}}}
However, using the spectral method, the same coefficient would be {\displaystyle {\tilde {c}}_{n}=\langle Vf,\phi _{n}\rangle }. The pseudo-spectral method thus introduces the additional approximation
{\displaystyle \langle Vf,\phi _{n}\rangle \approx \sum _{i}w_{i}V(x_{i})f(x_{i}){\overline {\phi _{n}(x_{i})}}.}
If the product {\displaystyle Vf} can be represented with the given finite set of basis functions, the above equation is exact due to the chosen quadrature.
== Special pseudospectral schemes ==
=== The Fourier method ===
If periodic boundary conditions with period {\displaystyle [0,L]} are imposed on the system, the basis functions can be generated by plane waves,
{\displaystyle \phi _{n}(x)={\frac {1}{\sqrt {L}}}e^{-\imath k_{n}x}}
with {\displaystyle k_{n}=(-1)^{n}\lceil n/2\rceil 2\pi /L}, where {\displaystyle \lceil \cdot \rceil } is the ceiling function.
The quadrature for a cut-off at {\displaystyle n_{\text{max}}=N} is given by the discrete Fourier transformation. The grid points are equally spaced, {\displaystyle x_{i}=i\Delta x}, with spacing {\displaystyle \Delta x=L/(N+1)}, and the constant weights are {\displaystyle w_{i}=\Delta x}.
For the discussion of the error, note that the product of two plane waves is again a plane wave, {\displaystyle \phi _{a}\phi _{b}\propto \phi _{c}} with {\displaystyle c\leq a+b}
. Thus, qualitatively, if the functions f(x) and V(x) can be represented sufficiently accurately with {\displaystyle N_{f},N_{V}} basis functions, the pseudo-spectral method gives accurate results if {\displaystyle N_{f}+N_{V}} basis functions are used.
An expansion in plane waves often converges slowly, requiring many basis functions. However, the transformation between the basis expansion and the grid representation can be done using a fast Fourier transform, which scales favorably as {\displaystyle N\ln N}. As a consequence, plane waves are one of the most common expansions encountered with pseudo-spectral methods.
=== Polynomials ===
Another common expansion is into classical polynomials. Here, the Gaussian quadrature is used, which states that one can always find weights {\displaystyle w_{i}} and points {\displaystyle x_{i}} such that
{\displaystyle \int _{a}^{b}w(x)p(x)dx=\sum _{i=0}^{N}w_{i}p(x_{i})}
holds for any polynomial p(x) of degree {\displaystyle 2N+1} or less. Typically, the weight function w(x) and ranges a, b are chosen for a specific problem, and lead to one of the different forms of the quadrature. To apply this to the pseudo-spectral method, we choose basis functions {\displaystyle \phi _{n}(x)={\sqrt {w(x)}}P_{n}(x)}
with {\displaystyle P_{n}} being a polynomial of degree n with the property
{\displaystyle \int _{a}^{b}w(x)P_{n}(x)P_{m}(x)dx=\delta _{mn}.}
Under these conditions, the {\displaystyle \phi _{n}} form an orthonormal basis with respect to the scalar product {\displaystyle \langle f,g\rangle =\int _{a}^{b}f(x){\overline {g(x)}}dx}. This basis, together with the quadrature points, can then be used for the pseudo-spectral method.
For the discussion of the error, note that if f is well represented by {\displaystyle N_{f}} basis functions and V is well represented by a polynomial of degree {\displaystyle N_{V}}, their product can be expanded in the first {\displaystyle N_{f}+N_{V}} basis functions, and the pseudo-spectral method will give accurate results for that many basis functions.
Such polynomials occur naturally in several standard problems. For example, the quantum harmonic oscillator is ideally expanded in Hermite polynomials, and Jacobi polynomials can be used to define the associated Legendre functions typically appearing in rotational problems.
== Notes ==
== References ==
Orszag, Steven A. (1969). "Numerical Methods for the Simulation of Turbulence". Physics of Fluids. 12 (12): II-250. doi:10.1063/1.1692445.
Gottlieb, David; Orszag, Steven A. (1989). Numerical analysis of spectral methods : theory and applications (5. print. ed.). Philadelphia, Pa.: Society for Industrial and Applied Mathematics. ISBN 978-0898710236.
Hesthaven, Jan S.; Gottlieb, Sigal; Gottlieb, David (2007). Spectral methods for time-dependent problems (1. publ. ed.). Cambridge [u.a.]: Cambridge Univ. Press. ISBN 9780521792110.
Jie Shen, Tao Tang and Li-Lian Wang (2011) "Spectral Methods: Algorithms, Analysis and Applications" (Springer Series in Computational Mathematics, V. 41, Springer), ISBN 354071040X.
Trefethen, Lloyd N. (2000). Spectral methods in MATLAB (3rd. repr. ed.). Philadelphia, Pa: SIAM. ISBN 978-0-89871-465-4.
Fornberg, Bengt (1996). A Practical Guide to Pseudospectral Methods. Cambridge: Cambridge University Press. ISBN 9780511626357.
Boyd, John P. (2001). Chebyshev and Fourier spectral methods (2nd ed., rev. ed.). Mineola, N.Y.: Dover Publications. ISBN 978-0486411835.
Funaro, Daniele (1992). Polynomial approximation of differential equations. Berlin: Springer-Verlag. ISBN 978-3-540-46783-0.
de Frutos, Javier; Novo, Julia (January 2000). "A Spectral Element Method for the Navier--Stokes Equations with Improved Accuracy". SIAM Journal on Numerical Analysis. 38 (3): 799–819. doi:10.1137/S0036142999351984.
Claudio, Canuto; M. Yousuff, Hussaini; Alfio, Quarteroni; Thomas A., Zang (2006). Spectral methods fundamentals in single domains. Berlin: Springer-Verlag. ISBN 978-3-540-30726-6.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 20.7. Spectral Methods". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. | Wikipedia/Pseudo-spectral_method |
In numerical analysis, the balancing domain decomposition method (BDD) is an iterative method to find the solution of a symmetric positive definite system of linear algebraic equations arising from the finite element method. In each iteration, it combines the solution of local problems on non-overlapping subdomains with a coarse problem created from the subdomain nullspaces. BDD requires only the solution of subdomain problems rather than access to the matrices of those problems, so it is applicable to situations where only the solution operators are available, such as in oil reservoir simulation by mixed finite elements. In its original formulation, BDD performs well only for 2nd order problems, such as elasticity in 2D and 3D. For 4th order problems, such as plate bending, it needs to be modified by adding to the coarse problem special basis functions that enforce continuity of the solution at subdomain corners, which, however, makes it more expensive. The BDDC method uses the same corner basis functions, but in an additive rather than multiplicative fashion. The dual counterpart to BDD is FETI, which enforces the equality of the solution between the subdomains by Lagrange multipliers. The base versions of BDD and FETI are not mathematically equivalent, though a special version of FETI designed to be robust for hard problems has the same eigenvalues and thus essentially the same performance as BDD.
The operator of the system solved by BDD is the same as obtained by eliminating the unknowns in the interiors of the subdomains, thus reducing the problem to the Schur complement on the subdomain interface. Since the BDD preconditioner involves the solution of Neumann problems on all subdomains, it is a member of the Neumann–Neumann class of methods, so named because they solve a Neumann problem on both sides of the interface between subdomains.
In the simplest case, the coarse space of BDD consists of functions constant on each subdomain and averaged on the interfaces. More generally, on each subdomain, the coarse space needs to only contain the nullspace of the problem as a subspace.
== References ==
== External links ==
BDD reference implementation at mgnet.org
Domain Decomposition – Theory, publications, methods, algorithms. Archived 2011-07-10 at the Wayback Machine | Wikipedia/Balancing_domain_decomposition_method |
In mathematics, the additive Schwarz method, named after Hermann Schwarz, solves a boundary value problem for a partial differential equation approximately by splitting it into boundary value problems on smaller domains and adding the results.
== Overview ==
Partial differential equations (PDEs) are used in all sciences to model phenomena. For the purpose of exposition, we give an example physical problem and the accompanying boundary value problem (BVP). Even if the reader is unfamiliar with the notation, the purpose is merely to show what a BVP looks like when written down.
(Model problem) The heat distribution in a square metal plate such that the left edge is kept at 1 degree, and the other edges are kept at 0 degrees, after letting it sit for a long period of time, satisfies the following boundary value problem:
fxx(x,y) + fyy(x,y) = 0
f(0,y) = 1; f(x,0) = f(x,1) = f(1,y) = 0
where f is the unknown function, fxx and fyy denote the second partial derivatives with respect to x and y, respectively.
Here, the domain is the square [0,1] × [0,1].
This particular problem can be solved exactly on paper, so there is no need for a computer. However, this is an exceptional case, and most BVPs cannot be solved exactly. The only possibility is to use a computer to find an approximate solution.
=== Solving on a computer ===
A typical way of doing this is to sample f at regular intervals in the square [0,1] × [0,1]. For instance, we could take 8 samples in the x direction at x = 0.1, 0.2, ..., 0.8 and 0.9, and 8 samples in the y direction at similar coordinates. We would then have 64 samples of the square, at places like (0.2,0.8) and (0.6,0.6). The goal of the computer program would be to calculate the value of f at those 64 points, which seems easier than finding an abstract function of the square.
There are some difficulties, for instance it is not possible to calculate fxx(0.5,0.5) knowing f at only 64 points in the square. To overcome this, one uses some sort of numerical approximation of the derivatives, see for instance the finite element method or finite differences. We ignore these difficulties and concentrate on another aspect of the problem.
=== Solving linear problems ===
Whichever method we choose to solve this problem, we will need to solve a large linear system of equations. The reader may recall linear systems of equations from high school; they look like this:
2a + 5b = 12 (*)
6a − 3b = −3
This is a system of 2 equations in 2 unknowns (a and b). If we solve the BVP above in the manner suggested, we will need to solve a system of 64 equations in 64 unknowns. This is not a hard problem for modern computers, but if we use a larger number of samples, even modern computers cannot solve the BVP very efficiently.
=== Domain decomposition ===
Which brings us to domain decomposition methods. If we split the domain [0,1] × [0,1] into two subdomains [0,0.5] × [0,1] and [0.5,1] × [0,1], each has only half of the sample points. So we can try to solve a version of our model problem on each subdomain, but this time each subdomain has only 32 sample points. Finally, given the solutions on each subdomain, we can attempt to reconcile them to obtain a solution of the original problem on [0,1] × [0,1].
==== Size of the problems ====
In terms of the linear systems, we're trying to split the system of 64 equations in 64 unknowns into two systems of 32 equations in 32 unknowns. This would be a clear gain, for the following reason. Looking back at system (*), we see that there are 6 important pieces of information. They are the coefficients of a and b (2,5 on the first line and 6,−3 on the second line), and the right hand side (which we write as 12,−3). On the other hand, if we take two "systems" of 1 equation in 1 unknown, it might look like this:
System 1: 2a = 12
System 2: -3b = −3
We see that this system has only 4 important pieces of information. This means that a computer program will have an easier time solving two 1×1 systems than solving a single 2×2 system, because the pair of 1×1 systems are simpler than the single 2×2 system. While the 64×64 and 32×32 systems are too large to illustrate here, we could say by analogy that the 64×64 system has 4160 pieces of information, while the 32×32 systems each have 1056, or roughly a quarter of the 64×64 system.
==== Domain decomposition algorithm ====
Unfortunately, for technical reasons it is usually not possible to split our grid of 64 points (a 64×64 system of linear equations) into two grids of 32 points (two 32×32 systems of linear equations) and obtain an answer to the 64×64 system. Instead, the following algorithm is what actually happens:
1) Begin with an approximate solution of the 64×64 system.
2) From the 64×64 system, create two 32×32 systems to improve the approximate solution.
3) Solve the two 32×32 systems.
4) Put the two 32×32 solutions "together" to improve the approximate solution to the 64×64 system.
5) If the solution isn't very good yet, repeat from 2.
There are two ways in which this can be better than solving the base 64×64 system. First, if the number of repetitions of the algorithm is small, solving two 32×32 systems may be more efficient than solving a 64×64 system. Second, the two 32×32 systems need not be solved on the same computer, so this algorithm can be run in parallel to use the power of multiple computers.
In fact, solving two 32×32 systems instead of a 64×64 system on a single computer (without using parallelism) is unlikely to be efficient. However, if we use more than two subdomains, the picture can change. For instance, we could use four 16×16 problems, and there's a chance that solving these will be better than solving a single 64×64 problem even if the domain decomposition algorithm needs to iterate a few times.
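The following toy sketch runs a damped additive Schwarz iteration in the spirit of the algorithm above, on the 1D analogue of the model problem (−u″ = 0 on (0, 1), u(0) = 1, u(1) = 0, central differences). The grid size, the two overlapping index sets, the damping factor, and the number of sweeps are illustrative assumptions.

```python
import numpy as np

N = 63                                   # interior grid points
h = 1.0 / (N + 1)
A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2
b = np.zeros(N)
b[0] = 1.0 / h**2                        # carries the boundary value u(0) = 1

dom1 = np.arange(0, 40)                  # two overlapping subdomains
dom2 = np.arange(24, N)

u = np.zeros(N)                          # initial approximate solution
for sweep in range(50):
    r = b - A @ u                        # global residual
    du = np.zeros(N)
    for d in (dom1, dom2):               # independent (parallelizable) solves
        du[d] += np.linalg.solve(A[np.ix_(d, d)], r[d])
    u += 0.5 * du                        # damped additive update of the corrections

x = np.arange(1, N + 1) * h
print(np.max(np.abs(u - (1.0 - x))))     # error vs. the exact solution 1 - x shrinks
```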
== A technical example ==
Here we assume that the reader is familiar with partial differential equations.
We will be solving the partial differential equation
uxx + uyy = f (**)
We impose boundedness at infinity.
We decompose the domain R² into two overlapping subdomains H1 = (− ∞,1] × R and H2 = [0,+ ∞) × R. In each subdomain, we will be solving a BVP of the form:
u( j )xx + u( j )yy = f in Hj
u( j )(xj,y) = g(y)
where x1 = 1 and x2 = 0, taking boundedness at infinity as the other boundary condition. We denote the solution u( j ) of the above problem by S(f,g). Note that S is linear in the pair (f, g).
The Schwarz algorithm proceeds as follows:
Start with approximate solutions u( 1 )0 and u( 2 )0 of the PDE in subdomains H1 and H2 respectively. Initialize k to 0.
Calculate u( j )k + 1 = S(f,u(3 − j)k(xj)) with j = 1,2.
Increase k by one and repeat 2 until sufficient precision is achieved.
== See also ==
Domain decomposition method
Schwarz alternating method
== References ==
Barry Smith, Petter Bjørstad, William Gropp, Domain Decomposition, Parallel Multilevel Methods for Elliptic Partial Differential Equations, Cambridge University Press 1996
Andrea Toselli and Olof Widlund, Domain Decomposition Methods - Algorithms and Theory, Springer Series in Computational Mathematics, Vol. 34, 2004
== External links ==
The official Domain Decomposition Methods page | Wikipedia/Additive_Schwarz_method |
The Lax–Wendroff method, named after Peter Lax and Burton Wendroff, is a numerical method for the solution of hyperbolic partial differential equations, based on finite differences. It is second-order accurate in both space and time. This method is an example of explicit time integration where the function that defines the governing equation is evaluated at the current time.
== Definition ==
Suppose one has an equation of the following form:
{\displaystyle {\frac {\partial u(x,t)}{\partial t}}+{\frac {\partial f(u(x,t))}{\partial x}}=0}
where x and t are independent variables, and the initial state u(x, 0) is given.
=== Linear case ===
In the linear case, where f(u) = Au, and A is a constant,
{\displaystyle u_{i}^{n+1}=u_{i}^{n}-{\frac {\Delta t}{2\Delta x}}A\left[u_{i+1}^{n}-u_{i-1}^{n}\right]+{\frac {\Delta t^{2}}{2\Delta x^{2}}}A^{2}\left[u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n}\right].}
Here n refers to the t dimension and i refers to the x dimension.
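A minimal sketch of this linear scheme for the advection equation u_t + a u_x = 0 (so f(u) = au and A = a) on a periodic grid; the wave speed, grid, CFL number, and initial profile are illustrative assumptions.

```python
import numpy as np

a, N = 1.0, 200
dx = 1.0 / N
dt = 0.8 * dx / abs(a)                 # CFL number 0.8 < 1 for stability
x = np.arange(N) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)    # initial condition

c = a * dt / dx
for _ in range(int(1.0 / dt)):         # advect for roughly one period
    up = np.roll(u, -1)                # u_{i+1} with periodic wrap-around
    um = np.roll(u, 1)                 # u_{i-1}
    u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)
```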
This linear scheme can be extended to the general non-linear case in different ways. One of them is letting
{\displaystyle A(u)=f'(u)={\frac {\partial f}{\partial u}}}
=== Non-linear case ===
The conservative form of Lax-Wendroff for a general non-linear equation is then:
{\displaystyle u_{i}^{n+1}=u_{i}^{n}-{\frac {\Delta t}{2\Delta x}}\left[f(u_{i+1}^{n})-f(u_{i-1}^{n})\right]+{\frac {\Delta t^{2}}{2\Delta x^{2}}}\left[A_{i+1/2}\left(f(u_{i+1}^{n})-f(u_{i}^{n})\right)-A_{i-1/2}\left(f(u_{i}^{n})-f(u_{i-1}^{n})\right)\right].}
where {\displaystyle A_{i\pm 1/2}} is the Jacobian matrix evaluated at {\textstyle {\frac {1}{2}}(u_{i}^{n}+u_{i\pm 1}^{n})}.
== Jacobian free methods ==
To avoid the Jacobian evaluation, use a two-step procedure.
=== Richtmyer method ===
What follows is the Richtmyer two-step Lax–Wendroff method. The first step in the Richtmyer two-step Lax–Wendroff method calculates values for f(u(x, t)) at half time steps, tn + 1/2 and half grid points, xi + 1/2. In the second step values at tn + 1 are calculated using the data for tn and tn + 1/2.
First (Lax) steps:
{\displaystyle u_{i+1/2}^{n+1/2}={\frac {1}{2}}(u_{i+1}^{n}+u_{i}^{n})-{\frac {\Delta t}{2\,\Delta x}}(f(u_{i+1}^{n})-f(u_{i}^{n})),}
{\displaystyle u_{i-1/2}^{n+1/2}={\frac {1}{2}}(u_{i}^{n}+u_{i-1}^{n})-{\frac {\Delta t}{2\,\Delta x}}(f(u_{i}^{n})-f(u_{i-1}^{n})).}
Second step:
{\displaystyle u_{i}^{n+1}=u_{i}^{n}-{\frac {\Delta t}{\Delta x}}\left[f(u_{i+1/2}^{n+1/2})-f(u_{i-1/2}^{n+1/2})\right].}
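A sketch of the Richtmyer two-step method applied to the inviscid Burgers' equation, f(u) = u²/2, on a periodic grid; the grid, the CFL factor, and the initial data are illustrative assumptions.

```python
import numpy as np

def f(u):
    return 0.5 * u ** 2                     # Burgers' flux

N = 400
dx = 2.0 / N
x = np.arange(N) * dx
u = 1.0 + 0.5 * np.sin(np.pi * x)           # smooth periodic initial condition
t, T = 0.0, 0.3

while t < T:
    dt = 0.8 * dx / np.max(np.abs(u))       # CFL-limited time step
    up = np.roll(u, -1)                     # u_{i+1}
    # first (Lax) step: provisional values at the half points i + 1/2
    u_half = 0.5 * (up + u) - 0.5 * dt / dx * (f(up) - f(u))
    # second step: conservative update with the half-step fluxes
    u = u - dt / dx * (f(u_half) - f(np.roll(u_half, 1)))
    t += dt
```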
=== MacCormack method ===
Another method of this same type was proposed by MacCormack. MacCormack's method uses first forward differencing and then backward differencing:
First step:
{\displaystyle u_{i}^{*}=u_{i}^{n}-{\frac {\Delta t}{\Delta x}}(f(u_{i+1}^{n})-f(u_{i}^{n})).}
Second step:
{\displaystyle u_{i}^{n+1}={\frac {1}{2}}(u_{i}^{n}+u_{i}^{*})-{\frac {\Delta t}{2\Delta x}}\left[f(u_{i}^{*})-f(u_{i-1}^{*})\right].}
Alternatively,
First step:
{\displaystyle u_{i}^{*}=u_{i}^{n}-{\frac {\Delta t}{\Delta x}}(f(u_{i}^{n})-f(u_{i-1}^{n})).}
Second step:
{\displaystyle u_{i}^{n+1}={\frac {1}{2}}(u_{i}^{n}+u_{i}^{*})-{\frac {\Delta t}{2\Delta x}}\left[f(u_{i+1}^{*})-f(u_{i}^{*})\right].}
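In the same setting as the Richtmyer sketch above, MacCormack's predictor-corrector update can be written as a single reusable step (periodic grid assumed):

```python
import numpy as np

def maccormack_step(u, f, dt, dx):
    # predictor: forward difference
    u_star = u - dt / dx * (f(np.roll(u, -1)) - f(u))
    # corrector: backward difference applied to the predicted values
    return 0.5 * (u + u_star) - 0.5 * dt / dx * (f(u_star) - f(np.roll(u_star, 1)))
```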
== References ==
Michael J. Thompson, An Introduction to Astrophysical Fluid Dynamics, Imperial College Press, London, 2006.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 20.1. Flux Conservative Initial Value Problems". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. p. 1040. ISBN 978-0-521-88068-8. | Wikipedia/Lax–Wendroff_method |
The analytic element method (AEM) is a numerical method used for the solution of partial differential equations. It was initially developed by O.D.L. Strack at the University of Minnesota. It is similar in nature to the boundary element method (BEM), as it does not rely upon the discretization of volumes or areas in the modeled system; only internal and external boundaries are discretized. One of the primary distinctions between AEM and BEMs is that the boundary integrals are calculated analytically. Although originally developed to model groundwater flow, AEM has subsequently been applied to other fields of study including studies of heat flow and conduction, periodic waves, and deformation by force.
== Mathematical basis ==
The basic premise of the analytic element method is that, for linear differential equations, elementary solutions may be superimposed to obtain more complex solutions. A suite of 2D and 3D analytic solutions ("elements") are available for different governing equations. These elements typically correspond to a discontinuity in the dependent variable or its gradient along a geometric boundary (e.g., point, line, ellipse, circle, sphere, etc.). This discontinuity has a specific functional form (usually a polynomial in 2D) and may be manipulated to satisfy Dirichlet, Neumann, or Robin (mixed) boundary conditions. Each analytic solution is infinite in space and/or time.
Commonly each analytic solution contains degrees of freedom (coefficients) that may be calculated to meet prescribed boundary conditions along the element's border. To obtain a global solution (i.e., the correct element coefficients), a system of equations is solved such that the boundary conditions are satisfied along all of the elements (using collocation, least-squares minimization, or a similar approach). Notably, the global solution provides a spatially continuous description of the dependent variable everywhere in the infinite domain, and the governing equation is satisfied everywhere exactly except along the border of the element, where the governing equation is not strictly applicable due to discontinuity.
The ability to superpose numerous elements in a single solution means that analytical solutions can be realized for arbitrarily complex boundary conditions. That is, models that have complex geometries, straight or curved boundaries, multiple boundaries, transient boundary conditions, multiple aquifer layers, piecewise varying properties, and continuously varying properties can be solved. Elements can be implemented using far-field expansions such that models containing many thousands of elements can be solved efficiently to high precision.
The analytic element method has been applied to problems of groundwater flow governed by a variety of linear partial differential equations including the Laplace equation, the Poisson equation, the modified Helmholtz equation, the heat equation, and the biharmonic equation. Often these equations are solved using complex variables, which enables the use of mathematical techniques available in complex variable theory. A useful technique for solving complex problems is conformal mapping, which maps the boundary of a geometry, e.g. an ellipse, onto the boundary of the unit circle, where the solution is known.
In the analytic element method the discharge potential and stream function, or combined the complex potential, are used. This potential links the physical properties of the groundwater system, the hydraulic head or flow boundaries, to a mathematical representation of a potential. This mathematical representation can be used to calculate the potential in terms of position and thus also solve groundwater flow problems. Elements are developed by solving the boundary conditions for either of these two properties, hydraulic head or flow boundary, which results in analytical solutions capable of dealing with numerous boundary conditions.
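A minimal sketch of this superposition for steady 2D flow: the complex potential of a uniform background flow plus two well elements (point sinks). The discharges, positions, and uniform-flow rate are illustrative assumptions.

```python
import numpy as np

wells = [(-50.0 + 0.0j, 400.0),   # (location z_w in the complex plane, discharge Q)
         (80.0 + 30.0j, 250.0)]
Q0 = 0.5                          # uniform flow in the +x direction

def omega(z):
    """Complex potential: real part = discharge potential, imaginary part = stream function."""
    w = -Q0 * z                                  # uniform-flow element
    for zw, Q in wells:
        w += Q / (2.0 * np.pi) * np.log(z - zw)  # well (point-sink) element
    return w

z = 10.0 + 5.0j
print(omega(z).real, omega(z).imag)  # potential and stream function at z
```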
== Comparison to other methods ==
As mentioned, the analytic element method does not rely on the discretization of volume or area in the model, as in the finite element or finite difference methods. Thus, it can model complex problems with an error on the order of machine precision. This is illustrated in a study that modeled a highly heterogeneous, isotropic aquifer by including 100,000 spherical heterogeneities with random conductivity and tracing 40,000 particles. The analytic element method can efficiently be used as verification or as a screening tool in larger projects, as it can quickly and accurately calculate the groundwater flow for many complex problems.
In contrast to other commonly used groundwater modeling methods, e.g. the finite element or finite difference method, the AEM does not discretize the model domain into cells. This gives the advantage that the model is valid for any given point in the model domain. However, it also means that the domain is not as easily divided into regions of, e.g., different hydraulic conductivity as when modeling with a cell grid; one solution to this problem is to include subdomains in the AEM model. There also exist solutions for implementing vertically varying properties or structures in an aquifer in an AEM model.
== See also ==
Boundary element method
Conformal mapping
Superposition principle
== References ==
== Further reading ==
Haitjema, H. M. (1995). Analytic element modeling of groundwater flow (PDF). San Diego, CA: Academic Press. ISBN 978-0-12-316550-3.
Strack, O. D. L. (1989). Groundwater Mechanics. Englewood Cliffs, NJ: Prentice Hall.
Fitts, C. R. (2012). Groundwater Science (2nd ed.). San Diego, CA: Elsevier/Academic Press. ISBN 9780123847058.
== External links ==
Analytic elements community wiki
Fitts Geolsolutions, AnAqSim (analytic aquifer simulator) and AnAqSimEDU (free) web site | Wikipedia/Analytic_element_method |
In the numerical solution of differential equations, WENO (weighted essentially non-oscillatory) methods are classes of high-resolution schemes. WENO methods are used in the numerical solution of hyperbolic partial differential equations. These methods were developed from ENO methods (essentially non-oscillatory). The first WENO scheme was developed by Liu, Osher and Chan in 1994. In 1996, Guang-Shan Jiang and Chi-Wang Shu developed a new WENO scheme called WENO-JS. Nowadays, there are many WENO methods.
== See also ==
High-resolution scheme
ENO methods
== References ==
== Further reading ==
Shu, Chi-Wang (1998). "Essentially non-oscillatory and weighted essentially non-oscillatory schemes for hyperbolic conservation laws". Advanced Numerical Approximation of Nonlinear Hyperbolic Equations. Lecture Notes in Mathematics. Vol. 1697. pp. 325–432. CiteSeerX 10.1.1.127.895. doi:10.1007/BFb0096355. ISBN 978-3-540-64977-9.
Shu, Chi-Wang (2009). "High Order Weighted Essentially Nonoscillatory Schemes for Convection Dominated Problems". SIAM Review. 51: 82–126. Bibcode:2009SIAMR..51...82S. doi:10.1137/070679065. | Wikipedia/WENO_methods |
The method of lines (MOL, NMOL, NUMOL) is a technique for solving partial differential equations (PDEs) in which all but one dimension is discretized. By reducing a PDE to a single continuous dimension, the method of lines allows solutions to be computed via methods and software developed for the numerical integration of ordinary differential equations (ODEs) and differential-algebraic systems of equations (DAEs). Many integration routines have been developed over the years in many different programming languages, and some have been published as open source resources.
The method of lines most often refers to the construction or analysis of numerical methods for partial differential equations that proceeds by first discretizing the spatial derivatives only and leaving the time variable continuous. This leads to a system of ordinary differential equations to which a numerical method for initial value ordinary differential equations can be applied. The method of lines in this context dates back to at least the early 1960s. Many papers discussing the accuracy and stability of the method of lines for various types of partial differential equations have appeared since.
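A minimal sketch of this semi-discretization for the 1D heat equation u_t = u_xx on (0, 1) with homogeneous Dirichlet boundaries: central differences in space, then an off-the-shelf stiff ODE integrator in time. The grid size and initial profile are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 100
x = np.linspace(0.0, 1.0, N + 2)        # grid including the boundary nodes
dx = x[1] - x[0]
u0 = np.sin(np.pi * x[1:-1])            # initial data at the interior nodes

def rhs(t, u):
    # discrete Laplacian with u = 0 at both boundaries
    upad = np.concatenate(([0.0], u, [0.0]))
    return (upad[2:] - 2.0 * upad[1:-1] + upad[:-2]) / dx**2

sol = solve_ivp(rhs, (0.0, 0.1), u0, method="BDF")  # stiff system: implicit integrator
```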
== Application to elliptic equations ==
MOL requires that the PDE problem is well-posed as an initial value (Cauchy) problem in at least one dimension, because ODE and DAE integrators are initial value problem (IVP) solvers. Thus it cannot be used directly on purely elliptic partial differential equations, such as Laplace's equation. However, MOL has been used to solve Laplace's equation by using the method of false transients. In this method, a time derivative of the dependent variable is added to Laplace's equation. Finite differences are then used to approximate the spatial derivatives, and the resulting system of equations is solved by MOL. It is also possible to solve elliptic problems by a semi-analytical method of lines. In this method, the discretization process results in a set of ODEs that are solved by exploiting properties of the associated exponential matrix.
Recently, to overcome the stability issues associated with the method of false transients, a perturbation approach was proposed which was found to be more robust than standard method of false transients for a wide range of elliptic PDEs.
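A minimal sketch of the false-transient idea for Laplace's equation on the unit square follows (the grid size, boundary data, and stopping tolerance are illustrative choices, not prescribed values):

```python
import numpy as np

# Method of false transients: add a pseudo-time derivative, u_t = u_xx + u_yy,
# and march explicitly until the solution stops changing; the steady state
# satisfies Laplace's equation.
N = 41
u = np.zeros((N, N))
u[-1, :] = 1.0               # u = 1 on the top edge, 0 on the other edges
h = 1.0 / (N - 1)
dt = 0.25 * h**2             # explicit stability limit for 2-D diffusion

for step in range(200000):
    lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
           - 4.0 * u[1:-1, 1:-1]) / h**2
    du = dt * lap
    u[1:-1, 1:-1] += du
    if np.abs(du).max() < 1e-10:    # steady state reached
        break

print(step, u[N//2, N//2])   # centre value tends to ~0.25 for this data
```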
== References ==
== External links ==
False Transient Method of Lines - sample code
The Numerical Method of Lines
In mathematics, Neumann–Neumann methods are domain decomposition preconditioners, named so because they solve a Neumann problem on each subdomain on both sides of the interface between the subdomains. As with all domain decomposition methods, Neumann–Neumann methods require the solution of a coarse problem to provide global communication, so that the number of iterations does not grow with the number of subdomains. The balancing domain decomposition is a Neumann–Neumann method with a special kind of coarse problem.
More specifically, consider a domain Ω, on which we wish to solve the Poisson equation
{\displaystyle -\Delta u=f,\qquad u|_{\partial \Omega }=0}
for some function f. Split the domain into two non-overlapping subdomains Ω1 and Ω2 with common boundary Γ and let u1 and u2 be the values of u in each subdomain. At the interface between the two subdomains, the two solutions must satisfy the matching conditions
{\displaystyle u_{1}=u_{2},\qquad \partial _{n_{1}}u_{1}=\partial _{n_{2}}u_{2}}
where {\textstyle n_{i}} is the unit normal vector to Γ in each subdomain.
An iterative method with iterations k=0,1,... for the approximation of each ui (i=1,2) that satisfies the matching conditions is to first solve the Dirichlet problems
{\displaystyle -\Delta u_{i}^{(k)}=f_{i}\;{\text{in}}\;\Omega _{i},\qquad u_{i}^{(k)}|_{\partial \Omega }=0,\quad u_{i}^{(k)}|_{\Gamma }=\lambda ^{(k)}}
for some function λ(k) on Γ, where λ(0) is any inexpensive initial guess. We then solve the two Neumann problems
{\displaystyle -\Delta \psi _{i}^{(k)}=0\;{\text{in}}\;\Omega _{i},\qquad \psi _{i}^{(k)}|_{\partial \Omega }=0,\quad \partial _{n_{i}}\psi _{i}^{(k)}|_{\Gamma }=\omega (\partial _{n_{1}}u_{1}^{(k)}+\partial _{n_{2}}u_{2}^{(k)}).}
We then obtain the next iterate by setting
{\displaystyle \lambda ^{(k+1)}=\lambda ^{(k)}-\omega (\theta _{1}\psi _{1}^{(k)}+\theta _{2}\psi _{2}^{(k)})\;{\text{on}}\;\Gamma }
for some parameters ω, θ1 and θ2.
This procedure can be viewed as a Richardson iteration for the iterative solution of the equations arising from the Schur complement method.
This continuous iteration can be discretized by the finite element method and then solved—in parallel—on a computer. The extension to more subdomains is straightforward, but using this method as stated as a preconditioner for the Schur complement system is not scalable with the number of subdomains; hence the need for a global coarse solve.
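A 1-D sketch of the iteration above, for −u″ = f on (0,1) with the interface at x = 1/2, may look as follows. It assumes θ1 = θ2 = 1, approximates the interface fluxes by one-sided differences, and exploits the fact that in 1-D the Neumann corrector problems (−ψ″ = 0 with a prescribed interface flux and zero outer boundary value) are linear functions with closed-form interface values:

```python
import numpy as np

f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution u = sin(pi x)
n = 50                                       # cells per subdomain
h = 0.5 / n

def dirichlet_solve(a, ua, ub):
    """Solve -u'' = f on (a, a + 1/2) with u(a) = ua, u(a + 1/2) = ub."""
    x = a + h * np.arange(1, n)
    A = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    b = f(x)
    b[0] += ua / h**2
    b[-1] += ub / h**2
    return np.concatenate(([ua], np.linalg.solve(A, b), [ub]))

lam, omega = 0.0, 0.25       # interface guess lambda^(0) and relaxation
for k in range(20):
    u1 = dirichlet_solve(0.0, 0.0, lam)
    u2 = dirichlet_solve(0.5, lam, 0.0)
    # flux jump r = dn1 u1 + dn2 u2 across Gamma (one-sided differences)
    r = (u1[-1] - u1[-2]) / h + (u2[0] - u2[1]) / h
    # Neumann correctors evaluated at Gamma: psi_1 = psi_2 = r / 2 in 1-D
    lam -= omega * (r / 2.0 + r / 2.0)

# interface error versus the exact u; limited by the one-sided fluxes
print(abs(lam - 1.0))
```

With ω = 1/4 this symmetric two-subdomain case converges in essentially one sweep, which illustrates why the method is attractive as a preconditioner.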
== See also ==
Neumann–Dirichlet method
== References ==
The stretched grid method (SGM) is a numerical technique for finding approximate solutions of various mathematical and engineering problems that can be related to an elastic grid behavior.
In particular, meteorologists use the stretched grid method for weather prediction and engineers use the stretched grid method to design tents and other tensile structures.
== FEM and BEM mesh refinement ==
In recent decades the finite element and boundary element methods (FEM and BEM) have become a mainstay for industrial engineering design and analysis. Increasingly larger and more complex designs are being simulated using the FEM or BEM. However, some problems of FEM and BEM engineering analysis are still on the cutting edge. The first problem is the reliability of engineering analysis, which strongly depends upon the quality of the initial data generated at the pre-processing stage. Automatic element mesh generation techniques at this stage have become commonly used tools for the analysis of complex real-world models. With FEM and BEM increasing in popularity comes the incentive to improve automatic meshing algorithms. However, all of these algorithms can create distorted and even unusable grid elements. Several techniques exist which can take an existing mesh and improve its quality. For instance, smoothing (also referred to as mesh refinement) is one such method, which repositions nodes so as to minimize element distortion. The stretched grid method (SGM) allows pseudo-regular meshes to be obtained very easily and quickly in a one-step solution.
Let one assume that there is an arbitrary triangle grid embedded into a plane polygonal single-coherent contour and produced by an automeshing procedure (see fig. 1). It may be assumed further that the grid, considered as a physical nodal system, is distorted by a number of distortions. It is supposed that the total potential energy of this system is proportional to the length of some {\displaystyle \ n}-dimensional vector with all network segments as its components.
Thus, the potential energy takes the following form
{\displaystyle \Pi =D\sum _{j=1}^{n}{R_{j}}^{2}}
where {\displaystyle \ n} is the total number of segments in the network, {\displaystyle \ R_{j}} is the length of segment number {\displaystyle \ j}, and {\displaystyle \ D} is an arbitrary constant.
The length of segment number {\displaystyle \ j} may be expressed by two nodal co-ordinates as
{\displaystyle \ R={\sqrt {(X_{12}-X_{11})^{2}+(X_{22}-X_{21})^{2}}}}
It may also be supposed that the co-ordinate vector {\displaystyle \{\ X\}} of all nodes is associated with the non-distorted network and the co-ordinate vector {\displaystyle \{\ X'\}} with the distorted network. The expression for vector {\displaystyle \{\ X\}} may be written as
{\displaystyle \{\ X\}=\{\ X'\}+\{\Delta \ X\}}
The vector {\displaystyle \{\ X\}} is determined by minimization of the quadratic form {\displaystyle \ \Pi } with respect to the incremental vector {\displaystyle \{\Delta \ X\}}, i.e.
{\displaystyle {\frac {\partial \Pi }{\partial \Delta X_{kl}}}=0}
where {\displaystyle \ l} is the number of an interior node of the area and {\displaystyle \ k} is the number of the co-ordinate.
After all transformations we may write the following two independent systems of linear algebraic equations
{\displaystyle [\ A]\{\Delta X_{1}\}=\{\ B_{1}\}}
{\displaystyle [\ A]\{\Delta X_{2}\}=\{\ B_{2}\}}
where {\displaystyle [\ A]} is a symmetric banded matrix similar to the global stiffness matrix of an FEM assemblage, {\displaystyle \{\Delta \ X_{1}\}} and {\displaystyle \{\Delta \ X_{2}\}} are the incremental vectors of the co-ordinates of all nodes at axes 1 and 2, and {\displaystyle \{\ B_{1}\}} and {\displaystyle \{\ B_{2}\}} are the right-hand-side vectors combined from the co-ordinates of all nodes in axes 1 and 2.
Solving both systems, with all boundary nodes held fixed, yields new interior node positions corresponding to a non-distorted mesh with pseudo-regular elements. For example, Fig. 2 presents a rectangular area covered by a triangular mesh. The initial automatic mesh possesses some degenerate triangles (left mesh). The final mesh (right mesh) produced by the SGM procedure is pseudo-regular, without any distorted elements.
As the above systems are linear, the procedure reduces to a single-step solution. Moreover, each final interior node position satisfies the requirement of being the arithmetic mean of the co-ordinates of the nodes surrounding it, and it meets the Delaunay criteria as well. Therefore, the SGM has all the positive features of Laplacian and other smoothing approaches, but it is much simpler and more reliable because of the integer-valued representation of the final matrices. Finally, the SGM described above is applicable not only to 2D meshes but also to 3D meshes consisting of any uniform cells, as well as to mixed or transient meshes. A sketch of this one-step solve is given below.
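The sketch assembles the graph Laplacian implied by minimizing Π, pins the boundary nodes, and solves for both coordinate axes at once; the input arrays (node coordinates, edge list, boundary flags) are hypothetical placeholders for a real mesh:

```python
import numpy as np

def sgm_smooth(xy, edges, is_boundary):
    """One-step SGM-style smoothing of a 2-D mesh.

    xy          : (n, 2) array of node coordinates
    edges       : list of (i, j) node-index pairs, one per mesh segment
    is_boundary : length-n boolean sequence marking fixed nodes
    """
    n = len(xy)
    A = np.zeros((n, n))
    for i, j in edges:                 # assemble the graph Laplacian
        A[i, i] += 1.0; A[j, j] += 1.0
        A[i, j] -= 1.0; A[j, i] -= 1.0
    b = np.zeros((n, 2))
    for i in range(n):                 # pin the boundary nodes in place
        if is_boundary[i]:
            A[i, :] = 0.0
            A[i, i] = 1.0
            b[i] = xy[i]
    # Each interior row now enforces x_i = mean of its neighbours,
    # for both coordinate axes simultaneously.
    return np.linalg.solve(A, b)
```

In the spirit of Fig. 2, one such solve is enough to place every interior node at the arithmetic mean of its neighbours.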
== Minimum surface problem solution ==
Mathematically, a surface embedded in a non-plane closed curve is called minimal if its area is minimal amongst all surfaces passing through this curve. The best-known minimal surface example is a soap film bounded by a wire frame. Usually, to create a minimal surface, a fictitious constitutive law which maintains a constant prestress, independent of any changes in strain, is used. An alternative approximate approach to the minimum surface problem is based on SGM. This formulation allows one to minimize a surface embedded into non-plane and plane closed contours.
The idea is to approximate a surface part embedded into a 3D non-plane contour by an arbitrary triangle grid. To converge such a triangle grid to a grid with minimum area, one should solve the same two systems described above. Increments of the third nodal co-ordinates are determined additionally by a similar system at axis 3 in the following way
{\displaystyle [\ A]\{\Delta X_{3}\}=\{\ B_{3}\}}
Solving all three systems simultaneously, one obtains a new grid that approximates the minimal surface embedded into the non-plane closed curve, because of the minimum of the function {\displaystyle \ \Pi } with parameter {\displaystyle \ j=1,2,3}.
As an example, the surface of a catenoid calculated by the approach described above is presented in Fig. 3. The radii of the rings and the height of the catenoid are equal to 1.0. The numerical area of the catenoidal surface determined by SGM is 2.9967189 (the exact value is 2.992).
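The catenoid test can be reproduced in the same spirit. The sketch below (mesh resolution and ring placement are illustrative choices) pins two unit rings, builds a triangulated cylinder between them, and solves the three coordinate systems simultaneously; the interior rings are pulled inside r = 1, forming the catenoid-like neck:

```python
import numpy as np

nv, na = 21, 40                       # rings and nodes per ring
theta = 2*np.pi*np.arange(na)/na
z = np.linspace(-0.5, 0.5, nv)
idx = lambda i, j: i*na + j % na      # periodic in the angular direction

edges = []
for i in range(nv):
    for j in range(na):
        edges.append((idx(i, j), idx(i, j+1)))          # angular edge
        if i + 1 < nv:
            edges.append((idx(i, j), idx(i+1, j)))      # axial edge
            edges.append((idx(i, j), idx(i+1, j+1)))    # diagonal (triangles)

n = nv*na
A = np.zeros((n, n)); b = np.zeros((n, 3))
for p, q in edges:                    # graph Laplacian of the segment energy
    A[p, p] += 1.0; A[q, q] += 1.0; A[p, q] -= 1.0; A[q, p] -= 1.0
for j in range(na):                   # pin the two boundary rings
    for i in (0, nv - 1):
        k = idx(i, j)
        A[k] = 0.0; A[k, k] = 1.0
        b[k] = (np.cos(theta[j]), np.sin(theta[j]), z[i])

X = np.linalg.solve(A, b)             # all three coordinate systems at once
print(X.reshape(nv, na, 3)[nv//2, 0]) # mid-ring node pulled inside r = 1
```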
== Tensile fabric structures form finding ==
For structural analysis, the configuration of the structure is generally known a priori. This is not the case for tensile structures such as tension fabric structures. Since the membrane in a tension structure possesses no flexural stiffness, its form or configuration depends upon initial prestressing and the loads to which it is subjected. Thus, the load-bearing behaviour and the shape of the membrane cannot be separated and cannot be generally described by simple geometric models only. The membrane shape, the loads on the structure and the internal stresses interact in a non-linear manner to satisfy the equilibrium equations.
The preliminary design of tension structures involves the determination of an initial configuration referred to as form finding. In addition to satisfying the equilibrium conditions, the initial configuration must accommodate both architectural (aesthetics) and structural (strength and stability) requirements. Further, the requirements of space and clearance should be met, the membrane principal stresses must be tensile to avoid wrinkling, and the radii of the double-curved surface should be small enough to resist out-of-plane loads and to ensure structural stability. Several variations on form finding approaches based on FEM have been developed to assist engineers in the design of tension fabric structures. All of them are based on the same assumption as that used for analysing the behaviour of tension structures under various loads. However, as noted by some researchers, it may sometimes be preferable to use the so-called 'minimal surfaces' in the design of tension structures.
The physical meaning of SGM is the convergence of the energy of an arbitrary grid structure embedded in a rigid (or elastic) 3D contour to a minimum, which is equivalent to minimizing the sum of distances between arbitrary pairs of grid nodes. This allows the minimum surface energy problem to be replaced by finding the minimum of the total energy of the grid structure, which provides a much simpler final system of algebraic equations than the usual FEM formulation. The generalized formulation of SGM presupposes the possibility of applying a set of outer forces and rigid or elastic constraints to the grid structure nodes, which allows the modelling of various outer effects. For such an SGM formulation we may obtain the following expression
{\displaystyle \Pi =\sum _{j=1}^{n}D_{j}R_{j}^{2}+\sum _{i=1}^{3}\left(\sum _{k=1}^{m}C_{ik}\Delta X_{ik}^{2}-\sum _{k=1}^{m}P_{ik}\Delta X_{ik}\right)}
where {\displaystyle \ n} is the total number of grid segments, {\displaystyle \ m} is the total number of nodes, {\displaystyle \ R_{j}} is the length of segment number {\displaystyle \ j}, {\displaystyle \ D_{j}} is the stiffness of segment number {\displaystyle \ j}, {\displaystyle \ \Delta X_{ik}} is the coordinate increment of node {\displaystyle \ k} at axis {\displaystyle \ i}, {\displaystyle \ C_{ik}} is the stiffness of an elastic constraint at node {\displaystyle \ k} at axis {\displaystyle \ i}, and {\displaystyle \ P_{ik}} is the outer force at node {\displaystyle \ k} at axis {\displaystyle \ i}.
== Unfolding problem and cutting pattern generation ==
Once a satisfactory shape has been found, a cutting pattern may be generated. Tension structures are highly varied in their size, curvature and material stiffness. Cutting pattern approximation is strongly related to each of these factors. It is essential for a cutting pattern generation method to minimize possible approximation errors and to produce reliable plane cloth data.
The objective is to develop the shapes described by these data, as close as possible to the ideal doubly curved strips. In general, cutting pattern generation involves two steps. First, the global surface of a tension structure is divided into individual cloths. The corresponding cutting pattern at the second step can be found by simply taking each cloth strip and unfolding it on a planar area. In the case of an ideal doubly curved membrane surface, the sub-surfaces cannot simply be unfolded; they must be flattened. SGM has been used for the flattening problem solution.
The cutting pattern generation problem is actually subdivided into two independent formulations: the generation of a distortion-free plane form by unfolding each cloth strip, and the flattening of double-curved surfaces that cannot simply be unfolded. Studying the problem carefully, one can notice that from the standpoint of differential geometry both formulations are the same. We may consider it as an isometric mapping of a surface onto a plane area, which is simultaneously a conformal mapping and an equiareal mapping, because of the invariance of angles between any curves and the invariance of any pieces of area. In the case of a single-curved surface that can be unfolded precisely, equiareal mapping allows one to obtain a cutting pattern for the fabric structure without any distortions. Surfaces of the second type can be equiareally mapped only approximately, with some distortions of linear surface elements limited by the fabric properties. Let us assume that two surfaces are parameterized so that their first quadratic forms may be written as follows
{\displaystyle I_{1}=E_{1}(u,v)\operatorname {d} u^{2}+2F_{1}(u,v)\operatorname {d} u\operatorname {d} v+G_{1}(u,v)\operatorname {d} v^{2}}
{\displaystyle I_{2}=E_{2}(u,v)\operatorname {d} u^{2}+2F_{2}(u,v)\operatorname {d} u\operatorname {d} v+G_{2}(u,v)\operatorname {d} v^{2}}
The condition of conformal mapping for two surfaces, as formulated in differential geometry, requires that
{\displaystyle {\sqrt {I_{2}}}=\lambda {\sqrt {I_{1}}}}
where {\displaystyle \ \lambda } is the ratio of the surface distortion due to conformal mapping.
It is known that the first quadratic form reflects the distance between two surface points {\displaystyle \ (u,v)} and {\displaystyle \ (u+\operatorname {d} u,v+\operatorname {d} v)}. When the {\displaystyle \ \lambda }-ratio is close to 1, the above equation converges to the condition of isometric mapping and to equiareal mapping, respectively, because of the invariance of angles between any curves and the invariance of any pieces of area. Remembering that the first stage of form finding is based on a triangular mesh of the surface, and using the method of weighted residuals for the description of the isometric and equiareal mapping of the minimum surface onto a plane area, we may write the following function, defined by the sum of integrals along the segments of curved triangles
{\displaystyle \Pi =D\sum _{j=1}^{n}\oint _{S_{j}}w_{j}\left(\lambda {\sqrt {I_{1}}}-{\sqrt {I_{2}}}\right)^{2}\operatorname {d} s}
where {\displaystyle \ n} is the total number of grid cells, {\displaystyle \ w_{j}} are weight ratios, {\displaystyle \ \Pi } is the total mapping residual, and {\displaystyle \ D} is a constant that does not influence the final result and may be used as a scale ratio.
Considering further the weight ratios {\displaystyle \ w_{j}=1}, we may transform the equation into an approximate finite sum that is a combination of linear distances between nodes of the surface grid, and write the basic condition of equiareal surface mapping as the minimum of the following non-linear function
{\displaystyle \Pi =D\sum _{j=1}^{n}\oint _{S_{j}}w_{j}\left(\lambda R_{j}-L_{j}\right)^{2}\operatorname {d} s}
where {\displaystyle \ R_{j}} is the initial length of linear segment number {\displaystyle \ j}, {\displaystyle \ L_{j}} is the final length of segment number {\displaystyle \ j}, and {\displaystyle \ \lambda } is the distortion ratio, close to 1, which may be different for each segment.
The initial and final lengths of segment number {\displaystyle \ j} may be expressed as usual by two nodal co-ordinates as
{\displaystyle R={\sqrt {(X_{12}-X_{11})^{2}+(X_{22}-X_{21})^{2}+(X_{32}-X_{31})^{2}}}}
{\displaystyle L={\sqrt {(x_{12}-x_{11})^{2}+(x_{22}-x_{21})^{2}}}}
where {\displaystyle \ X_{ik}} are the co-ordinates of the nodes of the initial segment and {\displaystyle \ x_{ik}} are the co-ordinates of the nodes of the final segment.
According to the initial assumption, we can write {\displaystyle \ x_{32}=x_{31}=0} for the plane surface mapping. The expression for the vectors {\displaystyle \{\ x\}} and {\displaystyle \{\ X\}} in terms of co-ordinate increments may be written as
{\displaystyle \{\ x\}=\{\ X\}+\{\Delta \ X\}}
The vector {\displaystyle \{\Delta \ X\}} is defined as previously by
{\displaystyle {\frac {\partial \Pi }{\partial \Delta X_{kl}}}=0}
After transformations we may write the following two independent systems of non-linear algebraic equations
{\displaystyle [\ A]\{\Delta X_{1}\}=\{\ B_{1}\}+\{\Delta P_{1}\}}
{\displaystyle [\ A]\{\Delta X_{2}\}=\{\ B_{2}\}+\{\Delta P_{2}\}}
where all parts of the system can be expressed as previously, and {\displaystyle \{\ \Delta P_{1}\}} and {\displaystyle \{\ \Delta P_{2}\}} are vectors of pseudo-stresses at axes 1 and 2 of the following form
{\displaystyle \{\Delta P_{lt}\}=-\left\{\sum _{j=1}^{N}\lambda {\frac {R_{m}}{L_{m}}}(x_{lm}-x_{lt})\right\}}
where {\displaystyle \ N} is the total number of nodes surrounding node number {\displaystyle \ t} and {\displaystyle \ l} is the number of the global axis.
The above approach is another form of SGM and allows one to obtain two independent systems of non-linear algebraic equations that can be solved by any standard iteration procedure. The smaller the Gaussian curvature of the surface, the higher the accuracy of the plane mapping. As a rule, the plane mapping allows one to obtain a pattern with linear dimensions 1–2% less than the corresponding spatial lines of the final surface. That is why it is necessary to provide appropriate margins while patterning.
A typical sample of a cutout (also called a gore segment or a patch) is presented in Figs. 9, 10 and 11.
== See also ==
Mesh generation
Types of mesh
== References ==
== External links ==
K3-Tent system for tensile fabric structures formfinding and cutting patterning
Kubantent corp
In mathematics, in the area of numerical analysis, Galerkin methods are a family of methods for converting a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions. They are named after the Soviet mathematician Boris Galerkin.
Often when referring to a Galerkin method, one also gives the name along with typical assumptions and approximation methods used:
Ritz–Galerkin method (after Walther Ritz) typically assumes symmetric and positive-definite bilinear form in the weak formulation, where the differential equation for a physical system can be formulated via minimization of a quadratic function representing the system energy and the approximate solution is a linear combination of the given set of the basis functions.
Bubnov–Galerkin method (after Ivan Bubnov) does not require the bilinear form to be symmetric and substitutes the energy minimization with orthogonality constraints determined by the same basis functions that are used to approximate the solution. In an operator formulation of the differential equation, Bubnov–Galerkin method can be viewed as applying an orthogonal projection to the operator.
Petrov–Galerkin method (after Georgii I. Petrov) allows using basis functions for orthogonality constraints (called test basis functions) that are different from the basis functions used to approximate the solution. Petrov–Galerkin method can be viewed as an extension of Bubnov–Galerkin method, applying a projection that is not necessarily orthogonal in the operator formulation of the differential equation.
Examples of Galerkin methods are:
the Galerkin method of weighted residuals, the most common method of calculating the global stiffness matrix in the finite element method,
the boundary element method for solving integral equations,
Krylov subspace methods.
== Linear equation in a Hilbert space ==
=== Weak formulation of a linear equation ===
Let us introduce Galerkin's method with an abstract problem posed as a weak formulation on a Hilbert space {\displaystyle V}, namely, find {\displaystyle u\in V} such that for all {\displaystyle v\in V}:
{\displaystyle a(u,v)=f(v).}
Here, {\displaystyle a(\cdot ,\cdot )} is a bilinear form (the exact requirements on {\displaystyle a(\cdot ,\cdot )} will be specified later) and {\displaystyle f} is a bounded linear functional on {\displaystyle V}.
=== Galerkin dimension reduction ===
Choose a subspace {\displaystyle V_{n}\subset V} of dimension n and solve the projected problem: find {\displaystyle u_{n}\in V_{n}} such that for all {\displaystyle v_{n}\in V_{n}}:
{\displaystyle a(u_{n},v_{n})=f(v_{n}).}
We call this the Galerkin equation. Notice that the equation has remained unchanged and only the spaces have changed.
Reducing the problem to a finite-dimensional vector subspace allows us to numerically compute {\displaystyle u_{n}} as a finite linear combination of the basis vectors in {\displaystyle V_{n}}.
=== Galerkin orthogonality ===
The key property of the Galerkin approach is that the error is orthogonal to the chosen subspaces. Since {\displaystyle V_{n}\subset V}, we can use {\displaystyle v_{n}} as a test vector in the original equation. Subtracting the two, we get the Galerkin orthogonality relation for the error {\displaystyle \epsilon _{n}=u-u_{n}}, which is the difference between the solution of the original problem, {\displaystyle u}, and the solution of the Galerkin equation, {\displaystyle u_{n}}:
{\displaystyle a(\epsilon _{n},v_{n})=a(u,v_{n})-a(u_{n},v_{n})=f(v_{n})-f(v_{n})=0.}
=== Matrix form of Galerkin's equation ===
Since the aim of Galerkin's method is the production of a linear system of equations, we build its matrix form, which can be used to compute the solution algorithmically.
Let {\displaystyle e_{1},e_{2},\ldots ,e_{n}} be a basis for {\displaystyle V_{n}}. Then, it is sufficient to use these in turn for testing the Galerkin equation, i.e.: find {\displaystyle u_{n}\in V_{n}} such that
{\displaystyle a(u_{n},e_{i})=f(e_{i})\quad i=1,\ldots ,n.}
We expand {\displaystyle u_{n}} with respect to this basis,
{\displaystyle u_{n}=\sum _{j=1}^{n}u_{j}e_{j}}
and insert it into the equation above, to obtain
{\displaystyle a\left(\sum _{j=1}^{n}u_{j}e_{j},e_{i}\right)=\sum _{j=1}^{n}u_{j}a(e_{j},e_{i})=f(e_{i})\quad i=1,\ldots ,n.}
This previous equation is actually a linear system of equations {\displaystyle Au=f}, where
{\displaystyle A_{ij}=a(e_{j},e_{i}),\quad f_{i}=f(e_{i}).}
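For a concrete instance, the sketch below assembles this linear system for −u″ = f on (0,1) with zero boundary values, using piecewise-linear hat functions, for which a(u,v) = ∫ u′v′ dx gives the familiar tridiagonal stiffness matrix; the load integral is approximated by midpoint quadrature, and the problem data are illustrative:

```python
import numpy as np

n = 20                                   # interior basis functions
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)           # nodes of the hat functions

# Stiffness matrix A_ij = a(e_j, e_i) = integral of e_j' e_i'
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

f = lambda t: np.pi**2 * np.sin(np.pi * t)
F = h * f(x)                             # midpoint approximation of ∫ f e_i

u = np.linalg.solve(A, F)                # coefficients of u_n in the basis
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small: u_n ≈ sin(pi x)
```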
==== Symmetry of the matrix ====
Due to the definition of the matrix entries, the matrix of the Galerkin equation is symmetric if and only if the bilinear form {\displaystyle a(\cdot ,\cdot )} is symmetric.
== Analysis of Galerkin methods ==
Here, we will restrict ourselves to symmetric bilinear forms, that is,
{\displaystyle a(u,v)=a(v,u).}
While this is not really a restriction of Galerkin methods, the application of the standard theory becomes much simpler. Furthermore, a Petrov–Galerkin method may be required in the nonsymmetric case.
The analysis of these methods proceeds in two steps. First, we will show that the Galerkin equation is a well-posed problem in the sense of Hadamard and therefore admits a unique solution. In the second step, we study the quality of approximation of the Galerkin solution {\displaystyle u_{n}}.
The analysis will mostly rest on two properties of the bilinear form, namely
Boundedness: for all {\displaystyle u,v\in V} holds {\displaystyle a(u,v)\leq C\|u\|\,\|v\|} for some constant {\displaystyle C>0}
Ellipticity: for all {\displaystyle u\in V} holds {\displaystyle a(u,u)\geq c\|u\|^{2}} for some constant {\displaystyle c>0.}
By the Lax–Milgram theorem (see weak formulation), these two conditions imply well-posedness of the original problem in weak formulation. All norms in the following sections will be norms for which the above inequalities hold (such norms are often called energy norms).
=== Well-posedness of the Galerkin equation ===
Since {\displaystyle V_{n}\subset V}, boundedness and ellipticity of the bilinear form apply to {\displaystyle V_{n}}. Therefore, the well-posedness of the Galerkin problem is actually inherited from the well-posedness of the original problem.
=== Quasi-best approximation (Céa's lemma) ===
The error {\displaystyle u-u_{n}} between the original and the Galerkin solution admits the estimate
{\displaystyle \|u-u_{n}\|\leq {\frac {C}{c}}\inf _{v_{n}\in V_{n}}\|u-v_{n}\|.}
This means that, up to the constant {\displaystyle C/c}, the Galerkin solution {\displaystyle u_{n}} is as close to the original solution {\displaystyle u} as any other vector in {\displaystyle V_{n}}. In particular, it will be sufficient to study approximation by spaces {\displaystyle V_{n}}, completely forgetting about the equation being solved.
==== Proof ====
Since the proof is very simple and the basic principle behind all Galerkin methods, we include it here: by ellipticity and boundedness of the bilinear form (inequalities) and Galerkin orthogonality (equals sign in the middle), we have for arbitrary {\displaystyle v_{n}\in V_{n}}:
{\displaystyle c\|u-u_{n}\|^{2}\leq a(u-u_{n},u-u_{n})=a(u-u_{n},u-v_{n})\leq C\|u-u_{n}\|\,\|u-v_{n}\|.}
Dividing by {\displaystyle c\|u-u_{n}\|} and taking the infimum over all possible {\displaystyle v_{n}} yields the lemma.
=== Galerkin's best approximation property in the energy norm ===
For simplicity of presentation in the section above we have assumed that the bilinear form {\displaystyle a(u,v)} is symmetric and positive-definite, which implies that it is a scalar product and the expression {\displaystyle \|u\|_{a}={\sqrt {a(u,u)}}} is actually a valid vector norm, called the energy norm. Under these assumptions one can easily prove in addition Galerkin's best approximation property in the energy norm.
Using Galerkin a-orthogonality and the Cauchy–Schwarz inequality for the energy norm, we obtain
{\displaystyle \|u-u_{n}\|_{a}^{2}=a(u-u_{n},u-u_{n})=a(u-u_{n},u-v_{n})\leq \|u-u_{n}\|_{a}\,\|u-v_{n}\|_{a}.}
Dividing by {\displaystyle \|u-u_{n}\|_{a}} and taking the infimum over all possible {\displaystyle v_{n}\in V_{n}} proves that the Galerkin approximation {\displaystyle u_{n}\in V_{n}} is the best approximation in the energy norm within the subspace {\displaystyle V_{n}\subset V}, i.e. {\displaystyle u_{n}} is nothing but the orthogonal projection, with respect to the scalar product {\displaystyle a(u,v)}, of the solution {\displaystyle u} onto the subspace {\displaystyle V_{n}}.
== Galerkin method for stepped structures ==
I. Elishakoff, M. Amato, A. Marzani, P.A. Arvan, and J.N. Reddy studied the application of the Galerkin method to stepped structures. They showed that generalized functions, namely the unit-step function, Dirac's delta function, and the doublet function, are needed for obtaining accurate results.
== History ==
The approach is usually credited to Boris Galerkin. The method was explained to the Western reader by Hencky and Duncan, among others. Its convergence was studied by Mikhlin and Leipholz. Its coincidence with the Fourier method was illustrated by Elishakoff et al. Its equivalence to Ritz's method for conservative problems was shown by Singer. Gander and Wanner showed how the Ritz and Galerkin methods led to the modern finite element method. One hundred years of the method's development was discussed by Repin. Elishakoff, Kaplunov and Kaplunov show that Galerkin's method was not developed by Ritz, contrary to Timoshenko's statements.
== See also ==
Ritz method
== References ==
== External links ==
"Galerkin method", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Galerkin Method from MathWorld
Prony analysis (Prony's method) was developed by Gaspard Riche de Prony in 1795. However, practical use of the method awaited the digital computer. Similar to the Fourier transform, Prony's method extracts valuable information from a uniformly sampled signal and builds a series of damped complex exponentials or damped sinusoids. This allows the estimation of frequency, amplitude, phase and damping components of a signal.
== The method ==
Let {\displaystyle f(t)} be a signal consisting of {\displaystyle N} evenly spaced samples. Prony's method fits a function
{\displaystyle {\hat {f}}(t)=\sum _{i=1}^{N}A_{i}e^{\sigma _{i}t}\cos(\omega _{i}t+\phi _{i})}
to the observed {\displaystyle f(t)}.
. After some manipulation utilizing Euler's formula, the following result is obtained, which allows more direct computation of terms:
{\displaystyle {\begin{aligned}{\hat {f}}(t)=\sum _{i=1}^{N}{\tfrac {1}{2}}A_{i}\left(e^{j\phi _{i}}e^{\lambda _{i}^{+}t}+e^{-j\phi _{i}}e^{\lambda _{i}^{-}t}\right),\end{aligned}}}
where {\displaystyle \lambda _{i}^{\pm }=\sigma _{i}\pm j\omega _{i}} are the eigenvalues of the system, {\displaystyle \sigma _{i}=-\omega _{0,i}\xi _{i}} are the damping components, {\displaystyle \omega _{i}=\omega _{0,i}{\sqrt {1-\xi _{i}^{2}}}} are the angular-frequency components, {\displaystyle \phi _{i}} are the phase components, {\displaystyle A_{i}} are the amplitude components of the series, and {\displaystyle j} is the imaginary unit ({\displaystyle j^{2}=-1}).
== Representations ==
Prony's method is essentially a decomposition of a signal with {\displaystyle M} complex exponentials via the following process: regularly sample {\displaystyle {\hat {f}}(t)} so that the {\displaystyle n}-th of {\displaystyle N} samples may be written as
{\displaystyle F_{n}={\hat {f}}(\Delta _{t}n)=\sum _{m=1}^{M}\mathrm {B} _{m}e^{\lambda _{m}\Delta _{t}n},\quad n=0,\dots ,N-1.}
If {\displaystyle {\hat {f}}(t)} happens to consist of damped sinusoids, then there will be pairs of complex exponentials such that
{\displaystyle {\begin{aligned}\mathrm {B} _{i}^{(1)}&={\tfrac {1}{2}}A_{i}e^{\phi _{i}j},\\\mathrm {B} _{i}^{(2)}&={\tfrac {1}{2}}A_{i}e^{-\phi _{i}j},\\\lambda _{i}^{(1)}&=\sigma _{i}+j\omega _{i},\\\lambda _{i}^{(2)}&=\sigma _{i}-j\omega _{i},\end{aligned}}}
where
{\displaystyle {\begin{aligned}\mathrm {B} _{i}^{(1)}e^{\lambda _{i}^{(1)}t}+\mathrm {B} _{i}^{(2)}e^{\lambda _{i}^{(2)}t}&={\tfrac {1}{2}}A_{i}e^{\phi _{i}j}e^{(\sigma _{i}+j\omega _{i})t}+{\tfrac {1}{2}}A_{i}e^{-\phi _{i}j}e^{(\sigma _{i}-j\omega _{i})t}\\&=A_{i}e^{\sigma _{i}t}\cos(\omega _{i}t+\phi _{i}).\end{aligned}}}
Because the summation of complex exponentials is the homogeneous solution to a linear difference equation, the following difference equation will exist:
{\displaystyle {\hat {f}}(\Delta _{t}n)=\sum _{m=1}^{M}{\hat {f}}[\Delta _{t}(n-m)]P_{m},\quad n=M,\dots ,N-1.}
The key to Prony's method is that the coefficients in the difference equation are related to the following polynomial:
{\displaystyle z^{M}-P_{1}z^{M-1}-\dots -P_{M}=\prod _{m=1}^{M}\left(z-e^{\lambda _{m}}\right).}
These facts lead to the following three steps within Prony's method:
1) Construct and solve the matrix equation for the {\displaystyle P_{m}} values:
{\displaystyle {\begin{bmatrix}F_{M}\\\vdots \\F_{N-1}\end{bmatrix}}={\begin{bmatrix}F_{M-1}&\dots &F_{0}\\\vdots &\ddots &\vdots \\F_{N-2}&\dots &F_{N-M-1}\end{bmatrix}}{\begin{bmatrix}P_{1}\\\vdots \\P_{M}\end{bmatrix}}.}
Note that if {\displaystyle N\neq 2M}, a generalized matrix inverse may be needed to find the values {\displaystyle P_{m}}.
2) After finding the {\displaystyle P_{m}} values, find the roots (numerically if necessary) of the polynomial
{\displaystyle z^{M}-P_{1}z^{M-1}-\dots -P_{M}.}
The {\displaystyle m}-th root of this polynomial will be equal to {\displaystyle e^{\lambda _{m}}}.
3) With the {\displaystyle e^{\lambda _{m}}} values, the {\displaystyle F_{n}} values are part of a system of linear equations that may be used to solve for the {\displaystyle \mathrm {B} _{m}} values:
{\displaystyle {\begin{bmatrix}F_{k_{1}}\\\vdots \\F_{k_{M}}\end{bmatrix}}={\begin{bmatrix}(e^{\lambda _{1}})^{k_{1}}&\dots &(e^{\lambda _{M}})^{k_{1}}\\\vdots &\ddots &\vdots \\(e^{\lambda _{1}})^{k_{M}}&\dots &(e^{\lambda _{M}})^{k_{M}}\end{bmatrix}}{\begin{bmatrix}\mathrm {B} _{1}\\\vdots \\\mathrm {B} _{M}\end{bmatrix}},}
where {\displaystyle M} unique values {\displaystyle k_{i}} are used. It is possible to use a generalized matrix inverse if more than {\displaystyle M} samples are used.
Note that solving for {\displaystyle \lambda _{m}} will yield ambiguities, since only {\displaystyle e^{\lambda _{m}}} was solved for, and {\displaystyle e^{\lambda _{m}}=e^{\lambda _{m}\,+\,q2\pi j}} for an integer {\displaystyle q}. This leads to the same Nyquist sampling criteria that discrete Fourier transforms are subject to:
{\displaystyle \left|\operatorname {Im} (\lambda _{m})\right|=\left|\omega _{m}\right|<{\frac {\pi }{\Delta _{t}}}.}
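The three steps translate almost directly into code. The sketch below (signal parameters and sample count are illustrative; it assumes noise-free data with M = 2, i.e. a single damped sinusoid) recovers the damping, frequency, amplitude and phase:

```python
import numpy as np

dt, N, M = 0.01, 40, 2
n = np.arange(N)
F = 1.5 * np.exp(-0.3 * n * dt) * np.cos(2*np.pi*5 * n * dt + 0.4)

# Step 1: solve the linear prediction system for P_1..P_M
H = np.column_stack([F[M-1-m : N-1-m] for m in range(M)])
P, *_ = np.linalg.lstsq(H, F[M:N], rcond=None)

# Step 2: roots of z^M - P_1 z^(M-1) - ... - P_M give exp(lambda_m)
roots = np.roots(np.concatenate(([1.0], -P)))
lam = np.log(roots) / dt              # lambda = sigma +/- j*omega

# Step 3: Vandermonde solve for the complex amplitudes B_m
V = np.vander(roots, N, increasing=True).T   # V[n, m] = roots[m]**n
B, *_ = np.linalg.lstsq(V, F.astype(complex), rcond=None)

print(lam)          # expect sigma ≈ -0.3, omega ≈ ±2*pi*5
print(2*np.abs(B))  # amplitude ≈ 1.5; np.angle(B) gives the phase ±0.4
```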
== See also ==
Generalized pencil-of-function method
Computation of Prony decomposition using Autoregression analysis
Application of Prony decomposition in Time-frequency analysis
== Notes ==
== References ==
The material point method (MPM) is a numerical technique used to simulate the behavior of solids, liquids, gases, and any other continuum material. In particular, it is a robust spatial discretization method for simulating multi-phase (solid-fluid-gas) interactions. In the MPM, a continuum body is described by a number of small Lagrangian elements referred to as 'material points'. These material points are surrounded by a background mesh/grid that is used to calculate terms such as the deformation gradient. Unlike mesh-based methods such as the finite element method, finite volume method or finite difference method, the MPM is not mesh-based and is instead categorized as a meshless/meshfree or continuum-based particle method, examples of which are smoothed particle hydrodynamics and peridynamics. Despite the presence of a background mesh, the MPM does not encounter the drawbacks of mesh-based methods (tangling under high deformation, advection errors, etc.), which makes it a promising and powerful tool in computational mechanics.
The MPM was originally proposed, as an extension of a similar method known as FLIP (a further extension of a method called PIC) to computational solid dynamics, in the early 1990s by Professors Deborah L. Sulsky, Zhen Chen and Howard L. Schreyer at the University of New Mexico. After this initial development, the MPM has been further developed at national laboratories as well as at the University of New Mexico, Oregon State University, the University of Utah and more institutions across the US and the world. Recently the number of institutions researching the MPM has been growing, with added popularity and awareness coming from various sources such as the MPM's use in the Disney film Frozen.
== The algorithm ==
An MPM simulation consists of the following stages:
(Prior to the time integration phase)
Initialization of grid and material points.
A geometry is discretized into a collection of material points, each with its own material properties and initial conditions (velocity, stress, temperature, etc.)
The grid, being used only to provide a place for gradient calculations, is normally made to cover an area large enough to contain the expected extent of the computational domain needed for the simulation.
(During the time integration phase - explicit formulation)
Material point quantities are extrapolated to grid nodes.
Material point mass ({\textstyle m_{mp}}), momenta ({\textstyle {\vec {P_{mp}}}}), stresses ({\displaystyle {\boldsymbol {\bar {\bar {\sigma }}}}_{mp}}), and external forces ({\displaystyle {\vec {b}}}) are extrapolated to the nodes at the corners of the cells within which the material points reside. This is most commonly done using standard linear shape functions ({\textstyle N_{nd-mp}}), the same as used in FEM.
The grid uses the material point values to create the masses ({\textstyle M_{node}}), velocities ({\textstyle {\vec {V_{node}}}}), and internal and external force vectors ({\textstyle {\vec {F_{node}^{\mathsf {internal}}}}}, {\textstyle {\vec {F_{node}^{\mathsf {external}}}}}) for the nodes:
{\displaystyle M_{node}=\sum _{mp}m_{mp}~~N_{mp-nd}}
{\displaystyle {\vec {V_{node}}}={1 \over M_{node}}~~\sum _{mp}{\vec {P_{mp}}}~~N_{mp-nd}}
{\displaystyle {\vec {F_{node}^{\mathsf {internal}}}}=\sum _{mp}~~{\bar {\bar {\sigma }}}_{mp}~~\nabla N_{mp-nd}}
{\displaystyle {\vec {F_{node}^{\mathsf {external}}}}=\sum _{mp}{\vec {b}}~~N_{mp-nd}}
Equations of motion are solved on the grid.
Newton's second law is solved to obtain the nodal acceleration ({\displaystyle {\vec {A_{node}}}}):
{\displaystyle {\vec {A_{node}}}={{\vec {F_{node}^{\mathsf {external}}}}+{\vec {F_{node}^{\mathsf {internal}}}} \over M_{node}}}
New nodal velocities are found ({\displaystyle {\tilde {\vec {V_{node}}}}}):
{\displaystyle {\tilde {\vec {V_{node}}}}={\vec {V_{node}}}+{\vec {A_{node}}}\,\mathrm {d} t}
Derivative terms are extrapolated back to material points.
The material point acceleration ({\displaystyle {\vec {a_{mp}}}}) and the deformation gradient ({\displaystyle {\mathcal {\bar {\bar {F_{mp}}}}}}) (or the strain rate ({\displaystyle {\bar {\bar {{\dot {\varepsilon }}_{mp}}}}}), depending on the strain theory used) are extrapolated from the surrounding nodes using shape functions similar to before ({\displaystyle N_{nd-mp}}):
{\displaystyle {\vec {a_{mp}}}=\sum _{nd}{\vec {A_{node}}}~~N_{nd-mp}}
{\displaystyle {\bar {\bar {{\dot {\varepsilon }}_{mp}}}}=\sum _{nd}~{1 \over 2}~~\left[{\vec {V_{node}}}\nabla N_{nd-mp}+({\vec {V_{node}}}\nabla N_{nd-mp})^{T}\right]}
Variables on the material points (positions, velocities, strains, stresses, etc.) are then updated with these rates, depending on the integration scheme of choice and a suitable constitutive model.
Resetting of grid.
Now that the material points are fully updated at the next time step, the grid is reset to allow for the next time step to begin.
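A minimal 1-D sketch of one explicit MPM cycle for a linear-elastic bar follows; the grid size, material constants and particle layout are illustrative choices, and the boundary handling is deliberately simple:

```python
import numpy as np

L, ncell = 1.0, 20
h = L / ncell
xn = np.linspace(0.0, L, ncell + 1)          # grid nodes
E, rho, dt = 100.0, 1.0, 1e-4                # modulus, density, time step

xp = np.arange(0.25 * h, 0.5 * L, 0.5 * h)   # 2 particles/cell, left half
mp = np.full_like(xp, rho * 0.5 * h)         # particle masses
vp = np.full_like(xp, 0.1)                   # initial velocity (a push)
sp = np.zeros_like(xp)                       # particle stresses
Vp = np.full_like(xp, 0.5 * h)               # particle volumes

def shape(x):
    """Cell index and the two linear shape-function weights per particle."""
    i = np.minimum((x / h).astype(int), ncell - 1)
    xi = (x - xn[i]) / h
    return i, 1.0 - xi, xi

for step in range(100):
    i, w0, w1 = shape(xp)
    mg = np.zeros(ncell + 1); pg = np.zeros(ncell + 1); fg = np.zeros(ncell + 1)
    # 1) particle-to-grid extrapolation: mass, momentum, internal force
    np.add.at(mg, i, w0 * mp);      np.add.at(mg, i + 1, w1 * mp)
    np.add.at(pg, i, w0 * mp * vp); np.add.at(pg, i + 1, w1 * mp * vp)
    np.add.at(fg, i, sp * Vp / h);  np.add.at(fg, i + 1, -sp * Vp / h)
    # 2) grid equations of motion (left end held fixed)
    mg = np.maximum(mg, 1e-12)               # guard empty nodes
    ag = fg / mg
    vg = pg / mg + ag * dt
    ag[0] = vg[0] = 0.0
    # 3) grid-to-particle: update velocity, position, strain rate, stress
    vp += dt * (w0 * ag[i] + w1 * ag[i + 1])
    xp += dt * (w0 * vg[i] + w1 * vg[i + 1])
    sp += dt * E * (vg[i + 1] - vg[i]) / h   # linear elasticity
    # 4) grid quantities are re-zeroed at the top of the next cycle

print(xp[0], vp[0])
```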
== History of PIC/MPM ==
The PIC was originally conceived to solve problems in fluid dynamics, and developed by Harlow at Los Alamos National Laboratory in 1957. One of the first PIC codes was the Fluid-Implicit Particle (FLIP) program, which was created by Brackbill in 1986 and has been constantly in development ever since. Until the 1990s, the PIC method was used principally in fluid dynamics.
Motivated by the need for better simulating penetration problems in solid dynamics, Sulsky, Chen and Schreyer started in 1993 to reformulate the PIC and develop the MPM, with funding from Sandia National Laboratories. The original MPM was then further extended by Bardenhagen et al. to include frictional contact, which enabled the simulation of granular flow, and by Nairn to include explicit cracks and crack propagation (known as CRAMP).
Recently, an MPM implementation based on a micro-polar Cosserat continuum has been used to simulate high-shear granular flow, such as silo discharge. MPM's uses were further extended into Geotechnical engineering with the recent development of a quasi-static, implicit MPM solver which provides numerically stable analyses of large-deformation problems in Soil mechanics.
Annual workshops on the use of MPM are held at various locations in the United States. The Fifth MPM Workshop was held at Oregon State University, in Corvallis, OR, on April 2 and 3, 2009.
== Applications of PIC/MPM ==
The uses of the PIC or MPM method can be divided into two broad categories: firstly, there are many applications involving fluid dynamics, plasma physics, magnetohydrodynamics, and multiphase applications. The second category of applications comprises problems in solid mechanics.
=== Fluid dynamics and multiphase simulations ===
The PIC method has been used to simulate a wide range of fluid-solid interactions, including sea ice dynamics, penetration of biological soft tissues, fragmentation of gas-filled canisters, dispersion of atmospheric pollutants, multiscale simulations coupling molecular dynamics with MPM, and fluid-membrane interactions. In addition, the PIC-based FLIP code has been applied in magnetohydrodynamics and plasma processing tools, and simulations in astrophysics and free-surface flow.
As a result of a joint effort between UCLA's mathematics department and Walt Disney Animation Studios, MPM was successfully used to simulate snow in the 2013 animated film Frozen.
=== Solid mechanics ===
MPM has also been used extensively in solid mechanics, to simulate impact, penetration, collision and rebound, as well as crack propagation. MPM has also become a widely used method within the field of soil mechanics: it has been used to simulate granular flow, quickness test of sensitive clays, landslides, silo discharge, pile driving, fall-cone test, bucket filling, and material failure; and to model soil stress distribution, compaction, and hardening. It is now being used in wood mechanics problems such as simulations of transverse compression on the cellular level including cell wall contact. The work also received the George Marra Award for paper of the year from the Society of Wood Science and Technology.
== Classification of PIC/MPM codes ==
=== MPM in the context of numerical methods ===
One subset of numerical methods are Meshfree methods, which are defined as methods for which "a predefined mesh is not necessary, at least in field variable interpolation". Ideally, a meshfree method does not make use of a mesh "throughout the process of solving the problem governed by partial differential equations, on a given arbitrary domain, subject to all kinds of boundary conditions," although existing methods are not ideal and fail in at least one of these respects. Meshless methods, which are also sometimes called particle methods, share a "common feature that the history of state variables is traced at points (particles) which are not connected with any element mesh, the distortion of which is a source of numerical difficulties." As can be seen by these varying interpretations, some scientists consider MPM to be a meshless method, while others do not. All agree, however, that MPM is a particle method.
The Arbitrary Lagrangian Eulerian (ALE) methods form another subset of numerical methods which includes MPM. Purely Lagrangian methods employ a framework in which a space is discretised into initial subvolumes, whose flowpaths are then charted over time. Purely Eulerian methods, on the other hand, employ a framework in which the motion of material is described relative to a mesh that remains fixed in space throughout the calculation. As the name indicates, ALE methods combine Lagrangian and Eulerian frames of reference.
=== Subclassification of MPM/PIC ===
PIC methods may be based on either the strong form collocation or a weak form discretisation of the underlying partial differential equation (PDE). Those based on the strong form are properly referred to as finite-volume PIC methods. Those based on the weak form discretisation of PDEs may be called either PIC or MPM.
MPM solvers can model problems in one, two, or three spatial dimensions, and can also model axisymmetric problems. MPM can be implemented to solve either quasi-static or dynamic equations of motion, depending on the type of problem that is to be modeled. Several versions of MPM include Generalized Interpolation Material Point Method ;Convected Particle Domain Interpolation Method; Convected Particle Least Squares Interpolation Method.
The time-integration used for MPM may be either explicit or implicit. The advantage to implicit integration is guaranteed stability, even for large timesteps. On the other hand, explicit integration runs much faster and is easier to implement.
== Advantages ==
=== Compared to FEM ===
Unlike FEM, MPM does not require periodical remeshing steps and remapping of state variables, and is therefore better suited to the modeling of large material deformations. In MPM, particles and not the mesh points store all the information on the state of the calculation. Therefore, no numerical error results from the mesh returning to its original position after each calculation cycle, and no remeshing algorithm is required.
The particle basis of MPM allows it to treat crack propagation and other discontinuities better than FEM, which is known to impose the mesh orientation on crack propagation in a material. Also, particle methods are better at handling history-dependent constitutive models.
=== Compared to pure particle methods ===
Because in MPM nodes remain fixed on a regular grid, the calculation of gradients is trivial.
In simulations with two or more phases it is rather easy to detect contact between entities, as particles can interact via the grid with other particles in the same body, with other solid bodies, and with fluids.
== Disadvantages of MPM ==
MPM is more expensive in terms of storage than other methods, as MPM makes use of mesh as well as particle data. MPM is more computationally expensive than FEM, as the grid must be reset at the end of each MPM calculation step and reinitialised at the beginning of the following step. Spurious oscillation may occur as particles cross the boundaries of the mesh in MPM, although this effect can be minimized by using generalized interpolation methods (GIMP). In MPM as in FEM, the size and orientation of the mesh can impact the results of a calculation: for example, in MPM, strain localisation is known to be particularly sensitive to mesh refinement.
One stability problem in MPM that does not occur in FEM is the cell-crossing errors and null-space errors because the number of integration points (material points) does not remain constant in a cell.
== Notes ==
== External links ==
Center for Simulation of Accidental Fires and Explosions – MPM code available
NairnMPM – open source
MPM3D - open source (MPM3D-F90) and free trial version (MPM3D)
Taichi - Physically Based Computer Graphics Library – open source MPM code available
Anura3D open source – software for geotechnical problems and soil-water-structure interactions by Anura3D MPM Research Community
In mathematics, the fictitious domain method is a method to find the solution of a partial differential equation on a complicated domain {\displaystyle D}, by substituting a given problem posed on the domain {\displaystyle D} with a new problem posed on a simple domain {\displaystyle \Omega } containing {\displaystyle D}.
== General formulation ==
Assume that in some area {\displaystyle D\subset \mathbb {R} ^{n}} we want to find a solution {\displaystyle u(x)} of the equation:
{\displaystyle Lu=-\phi (x),\quad x=(x_{1},x_{2},\dots ,x_{n})\in D}
with boundary conditions:
{\displaystyle lu=g(x),\quad x\in \partial D}
The basic idea of the fictitious domain method is to substitute a given problem posed on a domain {\displaystyle D} with a new problem posed on a simply shaped domain {\displaystyle \Omega } containing {\displaystyle D} ({\displaystyle D\subset \Omega }). For example, we can choose an n-dimensional parallelotope as {\displaystyle \Omega }.
The problem in the extended domain {\displaystyle \Omega } for the new solution {\displaystyle u_{\epsilon }(x)} is:
{\displaystyle L_{\epsilon }u_{\epsilon }=-\phi ^{\epsilon }(x),\quad x=(x_{1},x_{2},\dots ,x_{n})\in \Omega }
{\displaystyle l_{\epsilon }u_{\epsilon }=g^{\epsilon }(x),\quad x\in \partial \Omega }
It is necessary to pose the problem in the extended area so that the following condition is fulfilled:
{\displaystyle u_{\epsilon }(x){\xrightarrow[{\epsilon \rightarrow 0}]{}}u(x),\quad x\in D}
== Simple example, 1-dimensional problem ==
{\displaystyle {\frac {d^{2}u}{dx^{2}}}=-2,\quad 0<x<1\quad (1)}
{\displaystyle u(0)=0,\quad u(1)=0}
=== Prolongation by leading coefficients ===
Let {\displaystyle u_{\epsilon }(x)} be the solution of the problem:
{\displaystyle {\frac {d}{dx}}k^{\epsilon }(x){\frac {du_{\epsilon }}{dx}}=-\phi ^{\epsilon }(x),\quad 0<x<2\quad (2)}
The discontinuous coefficient {\displaystyle k^{\epsilon }(x)} and the right-hand side of the previous equation are obtained from the expressions:
{\displaystyle k^{\epsilon }(x)={\begin{cases}1,&0<x<1\\{\frac {1}{\epsilon ^{2}}},&1<x<2\end{cases}}}
{\displaystyle \phi ^{\epsilon }(x)={\begin{cases}2,&0<x<1\\2c_{0},&1<x<2\end{cases}}\quad (3)}
Boundary conditions:
{\displaystyle u_{\epsilon }(0)=0,\quad u_{\epsilon }(2)=0}
Connection conditions at the point {\displaystyle x=1}:
{\displaystyle [u_{\epsilon }]=0,\ \left[k^{\epsilon }(x){\frac {du_{\epsilon }}{dx}}\right]=0}
where {\displaystyle [\cdot ]} means:
{\displaystyle [p(x)]=p(x+0)-p(x-0)}
Equation (1) has an analytical solution, therefore we can easily obtain the error:
{\displaystyle u(x)-u_{\epsilon }(x)=O(\epsilon ^{2}),\quad 0<x<1}
=== Prolongation by lower-order coefficients ===
Let {\displaystyle u_{\epsilon }(x)} be the solution of the problem:
{\displaystyle {\frac {d^{2}u_{\epsilon }}{dx^{2}}}-c^{\epsilon }(x)u_{\epsilon }=-\phi ^{\epsilon }(x),\quad 0<x<2\quad (4)}
where {\displaystyle \phi ^{\epsilon }(x)} is taken the same as in (3), and the expression for {\displaystyle c^{\epsilon }(x)} is
{\displaystyle c^{\epsilon }(x)={\begin{cases}0,&0<x<1\\{\frac {1}{\epsilon ^{2}}},&1<x<2.\end{cases}}}
The boundary conditions for equation (4) are the same as for (2).
Connection conditions at the point {\displaystyle x=1}:
{\displaystyle [u_{\epsilon }]=0,\ \left[{\frac {du_{\epsilon }}{dx}}\right]=0}
Error:
{\displaystyle u(x)-u_{\epsilon }(x)=O(\epsilon ),\quad 0<x<1}
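A sketch of this lower-order prolongation, solved by finite differences (taking the arbitrary constant c0 = 0 and illustrative values of ε and the grid size), shows the O(ε) behaviour numerically against the exact solution u = x(1 − x) of equation (1):

```python
import numpy as np

def fictitious_solve(eps, n=1000):
    """Solve u'' - c_eps(x) u = -phi_eps(x) on (0, 2), u(0) = u(2) = 0."""
    h = 2.0 / n
    x = h * np.arange(1, n)                   # interior grid points
    c = np.where(x < 1.0, 0.0, 1.0 / eps**2)  # penalty only on (1, 2)
    phi = np.where(x < 1.0, 2.0, 0.0)         # c0 = 0 chosen in eqn (3)
    main = -2.0 / h**2 - c
    off = np.ones(n - 2) / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return x, np.linalg.solve(A, -phi)

for eps in (0.1, 0.05, 0.025):
    x, u = fictitious_solve(eps)
    mask = x < 1.0
    err = np.max(np.abs(u[mask] - x[mask] * (1.0 - x[mask])))
    print(eps, err)     # the error shrinks roughly in proportion to eps
```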
== Literature ==
P.N. Vabishchevich, The Method of Fictitious Domains in Problems of Mathematical Physics, Izdatelstvo Moskovskogo Universiteta, Moskva, 1991.
Smagulov S. Fictitious Domain Method for Navier–Stokes equation, Preprint CC SA USSR, 68, 1979.
Bugrov A.N., Smagulov S. Fictitious Domain Method for Navier–Stokes equation, Mathematical model of fluid flow, Novosibirsk, 1978, p. 79–90
Smoothed finite element methods (S-FEM) are a particular class of numerical simulation algorithms for the simulation of physical phenomena. They were developed by combining meshfree methods with the finite element method. S-FEM is applicable to solid mechanics as well as fluid dynamics problems, although so far it has mainly been applied to the former.
== Description ==
The essential idea in the S-FEM is to use a finite element mesh (in particular a triangular mesh) to construct numerical models of good performance. This is achieved by modifying the compatible strain field, or by constructing a strain field using only the displacements, in the hope that a Galerkin model using the modified/constructed strain field can deliver some good properties. Such a modification/construction can be performed within elements, but more often beyond the elements (meshfree concepts): it brings in information from the neighboring elements. Naturally, the strain field has to satisfy certain conditions, and the standard Galerkin weak form needs to be modified accordingly to ensure stability and convergence. A comprehensive review of S-FEM covering both methodology and applications can be found in "Smoothed Finite Element Methods (S-FEM): An Overview and Recent Developments".
== History ==
The development of S-FEM started from the works on meshfree methods, where the so-called weakened weak (W2) formulation based on the G space theory was developed. The W2 formulation offers possibilities for formulating various (uniformly) "soft" models that work well with triangular meshes. Because triangular meshes can be generated automatically, re-meshing, and hence automation in modeling and simulation, becomes much easier. In addition, W2 models can be made soft enough (in a uniform fashion) to produce upper-bound solutions (for force-driven problems). Together with stiff models (such as the fully compatible FEM models), one can conveniently bound the solution from both sides. This allows easy error estimation for generally complicated problems, as long as a triangular mesh can be generated. Typical W2 models are the smoothed point interpolation methods (S-PIM). The S-PIM can be node-based (known as NS-PIM or LC-PIM), edge-based (ES-PIM), or cell-based (CS-PIM). The NS-PIM was developed using the so-called SCNI technique. It was then discovered that the NS-PIM is capable of producing upper-bound solutions and is free of volumetric locking. The ES-PIM is found to be superior in accuracy, and the CS-PIM behaves in between the NS-PIM and ES-PIM. Moreover, W2 formulations allow the use of polynomial and radial basis functions in the creation of shape functions (they accommodate discontinuous displacement functions, as long as these are in a G1 space), which opens further room for future developments.
The S-FEM is largely the linear version of the S-PIM, retaining most of the properties of the S-PIM while being much simpler. It also has the variations NS-FEM, ES-FEM and CS-FEM. The major properties of the S-PIM can also be found in the S-FEM.
== List of S-FEM models ==
Node-based Smoothed FEM (NS-FEM)
Edge-based Smoothed FEM (ES-FEM)
Face-based Smoothed FEM (FS-FEM)
Cell-based Smoothed FEM (CS-FEM)
Node/Edge-based Smoothed FEM (NS/ES-FEM)
Alpha FEM method (Alpha FEM)
Beta FEM method (Beta FEM)
== Applications ==
S-FEM has been applied to solve the following physical problems:
Mechanics for solid structures and piezoelectrics;
Fracture mechanics and crack propagation;
Nonlinear and contact problems;
Stochastic analysis;
Heat transfer;
Structural acoustics;
Adaptive analysis;
Limit analysis;
Crystal plasticity modeling.
== Basic Formulation of S-FEM ==
The fundamental problem addressed by S-FEM is typically the solution of Poisson's equation with Dirichlet boundary conditions, given as follows:
Δu + f = 0 in Ω, u = g on ΓD
where Ω is the domain and Γ is its boundary, here consisting entirely of the Dirichlet part (ΓD = Γ). Here, u: Ω→R is the trial solution, f: Ω→R is a given function, and g represents the Dirichlet boundary data.
S-FEM involves discretizing the domain Ω using finite element meshes, which can be global or local. The global mesh represents the entire domain, while the local mesh is used to discretize regions requiring high resolution within the global domain. The local domain is assumed to be included in the global domain (ΩL⊆ΩG).
=== Weak Formulation ===
The weak form of the problem is derived by multiplying the equation by suitable test functions and integrating over the domain. In S-FEM, the weak form is expressed as follows: Given f and g, find u∈U such that for all w∈V,
aΩ(w,u)=LΩ(w)
where aΩ is a bilinear form, and LΩ is a linear functional.
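As a concrete point of reference for this weak form, the following minimal Python/NumPy sketch assembles and solves the standard (non-smoothed) Galerkin discretization of the 1D Poisson problem −u″ = f on (0,1) with u(0) = u(1) = 0; the function name and the midpoint quadrature are illustrative assumptions, not part of any S-FEM code. In S-FEM proper, the element gradients entering the bilinear form would be replaced by smoothed gradients constructed from neighboring elements.

import numpy as np

def galerkin_poisson_1d(f, n):
    # Standard Galerkin FEM for -u'' = f on (0,1), u(0) = u(1) = 0,
    # with n linear elements; a plain-FEM sketch of a(w,u) = L(w).
    h = 1.0 / n
    nodes = np.linspace(0.0, 1.0, n + 1)
    K = np.zeros((n + 1, n + 1))
    F = np.zeros(n + 1)
    for e in range(n):
        i, j = e, e + 1
        K[np.ix_([i, j], [i, j])] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        xm = 0.5 * (nodes[i] + nodes[j])      # midpoint quadrature for L(w)
        F[[i, j]] += f(xm) * h / 2.0
    u = np.zeros(n + 1)                        # impose the Dirichlet data u = 0
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
    return nodes, u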
=== S-FEM Formulation ===
In S-FEM, the trial solution u and test functions w are defined separately for the global (ΩG) and local (ΩL) domains. The trial solution spaces UG, UL and test function spaces VG, VL are defined accordingly. The weak form in the S-FEM formulation becomes:
aΩ′(w,u)=LΩ′(w)
where aΩ′(⋅,⋅) and LΩ′(⋅) are modified bilinear forms and linear functionals, respectively, to accommodate the S-FEM approach.
=== Challenges ===
One of the primary challenges of S-FEM is the difficulty in exact integration of the submatrices representing the relationship between global and local meshes (KGL and KLG). Additionally, the matrix K can become singular, posing numerical challenges in solving the resulting linear algebraic equations.
These challenges and potential solutions are discussed in detail in the literature, aiming to improve the efficiency and accuracy of S-FEM for various applications.
== B-spline S-FEM (BFSEM) ==
S-FEM can reasonably model an analytical domain by superimposing meshes with different spatial resolutions; it has intrinsic advantages of local high accuracy, low computation time, and a simple meshing procedure. However, it has disadvantages such as the accuracy of numerical integration and matrix singularity. Although several additional techniques have been proposed to mitigate these limitations, they are computationally expensive or ad hoc, and detract from the method's strengths. These issues can be addressed by incorporating cubic B-spline functions with C² continuity across element boundaries as the global basis functions. To avoid matrix singularity, different basis functions are applied to different meshes. In a recent study, Lagrange basis functions were used as local basis functions. With this method, the numerical integration can be calculated with sufficient accuracy without any of the additional techniques used in conventional S-FEM. Furthermore, the method avoids matrix singularity and is superior to conventional methods in terms of convergence when solving linear equations. Therefore, it has the potential to reduce computation time while maintaining accuracy comparable to conventional S-FEM.
== See also ==
Finite element method
Meshfree methods
Weakened weak form
Loubignac iteration
== References ==
== External links ==
| Wikipedia/Smoothed_finite_element_method |
High-resolution schemes are used in the numerical solution of partial differential equations where high accuracy is required in the presence of shocks or discontinuities. They have the following properties:
Second- or higher-order spatial accuracy is obtained in smooth parts of the solution.
Solutions are free from spurious oscillations or wiggles.
High accuracy is obtained around shocks and discontinuities.
The number of mesh points containing the wave is small compared with a first-order scheme with similar accuracy.
General methods are often not adequate for accurate resolution of steep gradient phenomena; they usually introduce non-physical effects such as smearing of the solution or spurious oscillations. Since publication of Godunov's order barrier theorem, which proved that linear methods cannot provide non-oscillatory solutions higher than first order (Godunov 1954, Godunov 1959), these difficulties have attracted much attention and a number of techniques have been developed that largely overcome these problems. To avoid spurious or non-physical oscillations where shocks are present, schemes that exhibit a Total Variation Diminishing (TVD) characteristic are especially attractive. Two techniques that are proving to be particularly effective are MUSCL (Monotone Upstream-Centered Schemes for Conservation Laws), a flux/slope limiter method (van Leer 1979, Hirsch 1991, Anderson, Tannehill & Pletcher 2016, Laney 1998, Toro 1999) and the WENO (Weighted Essentially Non-Oscillatory) method (Shu 1998, Shu 2009). Both methods are usually referred to as high resolution schemes (see diagram).
MUSCL methods are generally second-order accurate in smooth regions (although they can be formulated for higher orders) and provide good resolution, monotonic solutions around discontinuities. They are straightforward to implement and are computationally efficient.
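To make the flux/slope-limiter idea concrete, here is a minimal Python/NumPy sketch (the function names and the periodic boundaries are illustrative assumptions) of one MUSCL-type update for the linear advection equation u_t + a u_x = 0 with a > 0, using minmod-limited slopes; production MUSCL codes normally combine such a reconstruction with a TVD Runge–Kutta or Hancock-type time integrator rather than plain forward Euler.

import numpy as np

def minmod(a, b):
    # Minmod limiter: zero at extrema, otherwise the smaller slope.
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, a, dt, dx):
    # One slope-limited step for u_t + a u_x = 0 (a > 0), periodic grid.
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited cell slopes
    uL = u + 0.5 * s                  # reconstructed value at each right face
    flux = a * uL                     # upwind flux at face i+1/2
    return u - dt / dx * (flux - np.roll(flux, 1))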
For problems comprising both shocks and complex smooth solution structure, WENO schemes can provide higher accuracy than second-order schemes along with good resolution around discontinuities. Most applications tend to use a fifth order accurate WENO scheme, whilst higher order schemes can be used where the problem demands improved accuracy in smooth regions.
The method of holistic discretisation systematically analyses subgrid-scale dynamics to algebraically construct closures for numerical discretisations that are both accurate to any specified order of error in smooth regions and automatically adapt to cater for rapid grid variations through the algebraic learning of subgrid structures (Roberts 2003). A web service can analyse any submitted PDE in a supported class.
== See also ==
Godunov's theorem
Sergei K. Godunov
Total variation diminishing
Shock capturing method
== References == | Wikipedia/High-resolution_scheme |
In numerical linear algebra, the alternating-direction implicit (ADI) method is an iterative method used to solve Sylvester matrix equations. It is a popular method for solving the large matrix equations that arise in systems theory and control, and can be formulated to construct solutions in a memory-efficient, factored form. It is also used to numerically solve parabolic and elliptic partial differential equations, and is a classic method used for modeling heat conduction and solving the diffusion equation in two or more dimensions. It is an example of an operator splitting method.
The method was developed at Humble Oil in the mid-1950s by Jim Douglas Jr, Henry Rachford, and Don Peaceman.
== ADI for matrix equations ==
=== The method ===
The ADI method is a two-step iteration process that alternately updates the column and row spaces of an approximate solution to {\displaystyle AX-XB=C}. One ADI iteration consists of the following steps:
1. Solve for {\displaystyle X^{(j+1/2)}}, where {\displaystyle \left(A-\beta _{j+1}I\right)X^{(j+1/2)}=X^{(j)}\left(B-\beta _{j+1}I\right)+C.}
2. Solve for {\displaystyle X^{(j+1)}}, where {\displaystyle X^{(j+1)}\left(B-\alpha _{j+1}I\right)=\left(A-\alpha _{j+1}I\right)X^{(j+1/2)}-C.}
The numbers {\displaystyle (\alpha _{j+1},\beta _{j+1})} are called shift parameters, and convergence depends strongly on the choice of these parameters. To perform {\displaystyle K} iterations of ADI, an initial guess {\displaystyle X^{(0)}} is required, as well as {\displaystyle K} shift parameters, {\displaystyle \{(\alpha _{j},\beta _{j})\}_{j=1}^{K}}.
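In code, each ADI iteration is simply a pair of shifted linear solves. The following Python/NumPy sketch implements the two steps above for small dense matrices; the function name is an illustrative assumption, and in practice the solves would exploit the sparsity or structure of A and B.

import numpy as np

def adi_sylvester(A, B, C, shifts, X0=None):
    # ADI iterations for AX - XB = C; `shifts` is a list of (alpha, beta)
    # pairs, whose choice governs convergence (see the discussion below).
    m, n = C.shape
    X = np.zeros((m, n)) if X0 is None else X0
    Im, In = np.eye(m), np.eye(n)
    for alpha, beta in shifts:
        # Step 1: (A - beta I) X^(j+1/2) = X^(j) (B - beta I) + C.
        X = np.linalg.solve(A - beta * Im, X @ (B - beta * In) + C)
        # Step 2: X^(j+1) (B - alpha I) = (A - alpha I) X^(j+1/2) - C,
        # solved by transposing so the unknown sits on the left.
        X = np.linalg.solve((B - alpha * In).T, ((A - alpha * Im) @ X - C).T).T
    return X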
=== When to use ADI ===
If {\displaystyle A\in \mathbb {C} ^{m\times m}} and {\displaystyle B\in \mathbb {C} ^{n\times n}}, then {\displaystyle AX-XB=C} can be solved directly in {\displaystyle {\mathcal {O}}(m^{3}+n^{3})} using the Bartels–Stewart method. It is therefore only beneficial to use ADI when matrix-vector multiplication and linear solves involving {\displaystyle A} and {\displaystyle B} can be applied cheaply.
The equation {\displaystyle AX-XB=C} has a unique solution if and only if {\displaystyle \sigma (A)\cap \sigma (B)=\emptyset }, where {\displaystyle \sigma (M)} is the spectrum of {\displaystyle M}. However, the ADI method performs especially well when {\displaystyle \sigma (A)} and {\displaystyle \sigma (B)} are well-separated, and {\displaystyle A} and {\displaystyle B} are normal matrices. These assumptions are met, for example, by the Lyapunov equation {\displaystyle AX+XA^{*}=C} when {\displaystyle A} is positive definite. Under these assumptions, near-optimal shift parameters are known for several choices of {\displaystyle A} and {\displaystyle B}. Additionally, a priori error bounds can be computed, thereby eliminating the need to monitor the residual error in implementation.
The ADI method can still be applied when the above assumptions are not met. The use of suboptimal shift parameters may adversely affect convergence, and convergence is also affected by the non-normality of {\displaystyle A} or {\displaystyle B} (sometimes advantageously). Krylov subspace methods, such as the Rational Krylov Subspace Method, are observed to typically converge more rapidly than ADI in this setting, and this has led to the development of hybrid ADI-projection methods.
=== Shift-parameter selection and the ADI error equation ===
The problem of finding good shift parameters is nontrivial. This problem can be understood by examining the ADI error equation. After {\displaystyle K} iterations, the error is given by
{\displaystyle X-X^{(K)}=\prod _{j=1}^{K}{\frac {(A-\alpha _{j}I)}{(A-\beta _{j}I)}}\left(X-X^{(0)}\right)\prod _{j=1}^{K}{\frac {(B-\beta _{j}I)}{(B-\alpha _{j}I)}}.}
Choosing {\displaystyle X^{(0)}=0} results in the following bound on the relative error:
{\displaystyle {\frac {\left\|X-X^{(K)}\right\|_{2}}{\|X\|_{2}}}\leq \|r_{K}(A)\|_{2}\|r_{K}(B)^{-1}\|_{2},\quad r_{K}(M)=\prod _{j=1}^{K}{\frac {(M-\alpha _{j}I)}{(M-\beta _{j}I)}}.}
where {\displaystyle \|\cdot \|_{2}} is the operator norm. The ideal set of shift parameters {\displaystyle \{(\alpha _{j},\beta _{j})\}_{j=1}^{K}} defines a rational function {\displaystyle r_{K}} that minimizes the quantity {\displaystyle \|r_{K}(A)\|_{2}\|r_{K}(B)^{-1}\|_{2}}. If {\displaystyle A} and {\displaystyle B} are normal matrices and have eigendecompositions {\displaystyle A=V_{A}\Lambda _{A}V_{A}^{*}} and {\displaystyle B=V_{B}\Lambda _{B}V_{B}^{*}}, then {\displaystyle \|r_{K}(A)\|_{2}\|r_{K}(B)^{-1}\|_{2}=\|r_{K}(\Lambda _{A})\|_{2}\|r_{K}(\Lambda _{B})^{-1}\|_{2}}.
==== Near-optimal shift parameters ====
Near-optimal shift parameters are known in certain cases, such as when {\displaystyle \Lambda _{A}\subset [a,b]} and {\displaystyle \Lambda _{B}\subset [c,d]}, where {\displaystyle [a,b]} and {\displaystyle [c,d]} are disjoint intervals on the real line. The Lyapunov equation {\displaystyle AX+XA^{*}=C}, for example, satisfies these assumptions when {\displaystyle A} is positive definite. In this case, the shift parameters can be expressed in closed form using elliptic integrals, and can easily be computed numerically.
More generally, if closed, disjoint sets {\displaystyle E} and {\displaystyle F}, where {\displaystyle \Lambda _{A}\subset E} and {\displaystyle \Lambda _{B}\subset F}, are known, the optimal shift-parameter selection problem is approximately solved by finding an extremal rational function that attains the value
{\displaystyle Z_{K}(E,F):=\inf _{r}{\frac {\sup _{z\in E}|r(z)|}{\inf _{z\in F}|r(z)|}},}
where the infimum is taken over all rational functions of degree {\displaystyle (K,K)}. This approximation problem is related to several results in potential theory, and was solved by Zolotarev in 1877 for {\displaystyle E=[a,b]} and {\displaystyle F=-E.} The solution is also known when {\displaystyle E} and {\displaystyle F} are disjoint disks in the complex plane.
==== Heuristic shift-parameter strategies ====
When less is known about {\displaystyle \sigma (A)} and {\displaystyle \sigma (B)}, or when {\displaystyle A} or {\displaystyle B} are non-normal matrices, it may not be possible to find near-optimal shift parameters. In this setting, a variety of strategies for generating good shift parameters can be used. These include strategies based on asymptotic results in potential theory, using the Ritz values of the matrices {\displaystyle A}, {\displaystyle A^{-1}}, {\displaystyle B}, and {\displaystyle B^{-1}} to formulate a greedy approach, and cyclic methods, where the same small collection of shift parameters is reused until a convergence tolerance is met. When the same shift parameter is used at every iteration, ADI is equivalent to an algorithm called Smith's method.
=== Factored ADI ===
In many applications, {\displaystyle A} and {\displaystyle B} are very large, sparse matrices, and {\displaystyle C} can be factored as {\displaystyle C=C_{1}C_{2}^{*}}, where {\displaystyle C_{1}\in \mathbb {C} ^{m\times r},C_{2}\in \mathbb {C} ^{n\times r}}, with {\displaystyle r=1,2}. In such a setting, it may not be feasible to store the potentially dense matrix {\displaystyle X} explicitly. A variant of ADI, called factored ADI, can be used to compute {\displaystyle ZY^{*}}, where {\displaystyle X\approx ZY^{*}}. The effectiveness of factored ADI depends on whether {\displaystyle X} is well-approximated by a low-rank matrix. This is known to be true under various assumptions about {\displaystyle A} and {\displaystyle B}.
== ADI for parabolic equations ==
Historically, the ADI method was developed to solve the 2D diffusion equation on a square domain using finite differences. Unlike ADI for matrix equations, ADI for parabolic equations does not require the selection of shift parameters, since the shift appearing in each iteration is determined by parameters such as the timestep, diffusion coefficient, and grid spacing. The connection to ADI on matrix equations can be observed when one considers the action of the ADI iteration on the system at steady state.
=== Example: 2D diffusion equation ===
The traditional method for solving the heat conduction equation numerically is the Crank–Nicolson method. This method results in a very complicated set of equations in multiple dimensions, which are costly to solve. The advantage of the ADI method is that the equations that have to be solved in each step have a simpler structure and can be solved efficiently with the tridiagonal matrix algorithm.
Consider the linear diffusion equation in two dimensions,
{\displaystyle {\partial u \over \partial t}=\left({\partial ^{2}u \over \partial x^{2}}+{\partial ^{2}u \over \partial y^{2}}\right)=(u_{xx}+u_{yy})}
The implicit Crank–Nicolson method produces the following finite difference equation:
{\displaystyle {u_{ij}^{n+1}-u_{ij}^{n} \over \Delta t}={1 \over 2(\Delta x)^{2}}\left(\delta _{x}^{2}+\delta _{y}^{2}\right)\left(u_{ij}^{n+1}+u_{ij}^{n}\right)}
where {\displaystyle \Delta x=\Delta y}, and {\displaystyle \delta _{p}^{2}} is the central second difference operator for the p-th coordinate,
{\displaystyle \delta _{p}^{2}u_{ij}=u_{ij+e_{p}}-2u_{ij}+u_{ij-e_{p}}}
with {\displaystyle e_{p}=(1,0)} or {\displaystyle (0,1)} for {\displaystyle p=x} or {\displaystyle y} respectively (and {\displaystyle ij} a shorthand for the lattice point {\displaystyle (i,j)}).
After performing a stability analysis, it can be shown that this method will be stable for any {\displaystyle \Delta t}.
A disadvantage of the Crank–Nicolson method is that the matrix in the above equation is banded with a band width that is generally quite large. This makes direct solution of the system of linear equations quite costly (although efficient approximate solutions exist, for example use of the conjugate gradient method preconditioned with incomplete Cholesky factorization).
The idea behind the ADI method is to split the finite difference equations into two, one with the x-derivative taken implicitly and the next with the y-derivative taken implicitly,
{\displaystyle {u_{ij}^{n+1/2}-u_{ij}^{n} \over \Delta t/2}={\left(\delta _{x}^{2}u_{ij}^{n+1/2}+\delta _{y}^{2}u_{ij}^{n}\right) \over \Delta x^{2}}}
{\displaystyle {u_{ij}^{n+1}-u_{ij}^{n+1/2} \over \Delta t/2}={\left(\delta _{x}^{2}u_{ij}^{n+1/2}+\delta _{y}^{2}u_{ij}^{n+1}\right) \over \Delta y^{2}}}
The system of equations involved is symmetric and tridiagonal (banded with bandwidth 3), and is typically solved using the tridiagonal matrix algorithm.
It can be shown that this method is unconditionally stable and second order in time and space. There are more refined ADI methods such as the methods of Douglas, or the f-factor method which can be used for three or more dimensions.
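A minimal Python/SciPy sketch of this splitting is given below; it assumes a square array of interior nodes, unit diffusion coefficient, Δx = Δy, and homogeneous Dirichlet boundaries held at zero, with illustrative helper names. Each half step reduces to a family of independent tridiagonal solves, handled here by scipy.linalg.solve_banded.

import numpy as np
from scipy.linalg import solve_banded

def d2(v, axis):
    # Central second difference along `axis`, zero Dirichlet values outside.
    pad = [(1, 1) if a == axis else (0, 0) for a in range(v.ndim)]
    p = np.pad(v, pad)
    lo = [slice(None)] * v.ndim; lo[axis] = slice(0, -2)
    hi = [slice(None)] * v.ndim; hi[axis] = slice(2, None)
    return p[tuple(hi)] - 2.0 * v + p[tuple(lo)]

def adi_step(u, dt, dx):
    # One Peaceman–Rachford ADI step for u_t = u_xx + u_yy on an (n, n)
    # array of interior nodes (axis 0 is x, axis 1 is y).
    n = u.shape[0]
    r = dt / (2.0 * dx**2)
    ab = np.zeros((3, n))            # banded storage of the tridiagonal matrix
    ab[0, 1:] = -r                   # superdiagonal
    ab[1, :] = 1.0 + 2.0 * r         # main diagonal
    ab[2, :-1] = -r                  # subdiagonal
    u_half = solve_banded((1, 1), ab, u + r * d2(u, axis=1))   # implicit in x
    return solve_banded((1, 1), ab, (u_half + r * d2(u_half, axis=0)).T).T  # implicit in y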
=== Generalizations ===
The usage of the ADI method as an operator splitting scheme can be generalized. That is, we may consider general evolution equations
{\displaystyle {\dot {u}}=F_{1}u+F_{2}u,}
where {\displaystyle F_{1}} and {\displaystyle F_{2}} are (possibly nonlinear) operators defined on a Banach space. In the diffusion example above we have {\displaystyle F_{1}={\partial ^{2} \over \partial x^{2}}} and {\displaystyle F_{2}={\partial ^{2} \over \partial y^{2}}}.
== Fundamental ADI (FADI) ==
=== Simplification of ADI to FADI ===
It is possible to simplify the conventional ADI method into the fundamental ADI (FADI) method, which has similar operators on the left-hand sides of its update equations while being operator-free on the right-hand sides. This may be regarded as the fundamental (basic) scheme of the ADI method, with no operators left to reduce on the right-hand sides, unlike most traditional implicit methods, which usually have operators on both sides of the equations. The FADI method leads to simpler, more concise and efficient update equations without degrading the accuracy of the conventional ADI method.
=== Relations to other implicit methods ===
Many classical implicit methods by Peaceman-Rachford, Douglas-Gunn, D'Yakonov, Beam-Warming, Crank-Nicolson, etc., may be simplified to fundamental implicit schemes with operator-free right-hand sides. In their fundamental forms, the FADI method of second-order temporal accuracy can be related closely to the fundamental locally one-dimensional (FLOD) method, which can be upgraded to second-order temporal accuracy, such as for three-dimensional Maxwell's equations in computational electromagnetics. For two- and three-dimensional heat conduction and diffusion equations, both FADI and FLOD methods may be implemented in simpler, more efficient and stable manner compared to their conventional methods.
== Further reading ==
Usadi, Adam; Dawson, Clint (March 2006). "50 Years of ADI Methods: Celebrating the Contributions of Jim Douglas, Don Peaceman, and Henry Rachford". SIAM News. 39 (2). Retrieved March 28, 2025 – via ResearchGate.net. (Provides a review of the ADI methods and variants over the years.)
== References == | Wikipedia/Alternating_direction_implicit_method |
In computational fluid dynamics, the MacCormack method (/məˈkɔːrmæk ˈmɛθəd/) is a widely used discretization scheme for the numerical solution of hyperbolic partial differential equations. This second-order finite difference method was introduced by Robert W. MacCormack in 1969. The MacCormack method is elegant and easy to understand and program.
== The algorithm ==
The MacCormack method is designed to solve hyperbolic partial differential equations of the form
{\displaystyle {\frac {\partial u}{\partial t}}+{\frac {\partial f(u)}{\partial x}}=0}
To update this equation one timestep {\displaystyle \Delta t} on a grid with spacing {\displaystyle \Delta x} at grid cell {\displaystyle i}, the MacCormack method uses a "predictor step" and a "corrector step", given below
{\displaystyle {\begin{aligned}&u_{i}^{p}=u_{i}^{n}-{\frac {\Delta t}{\Delta x}}\left(f_{i+1}^{n}-f_{i}^{n}\right)\\&u_{i}^{n+1}={\frac {1}{2}}(u_{i}^{n}+u_{i}^{p})-{\frac {\Delta t}{2\Delta x}}(f_{i}^{p}-f_{i-1}^{p})\end{aligned}}}
== Linear Example ==
To illustrate the algorithm, consider the following first order hyperbolic equation
{\displaystyle {\frac {\partial u}{\partial t}}+a{\frac {\partial u}{\partial x}}=0.}
The application of the MacCormack method to the above equation proceeds in two steps: a predictor step followed by a corrector step.
Predictor step: In the predictor step, a "provisional" value of {\displaystyle u} at time level {\displaystyle n+1} (denoted by {\displaystyle u_{i}^{p}}) is estimated as follows
{\displaystyle u_{i}^{p}=u_{i}^{n}-a{\frac {\Delta t}{\Delta x}}\left(u_{i+1}^{n}-u_{i}^{n}\right)}
The above equation is obtained by replacing the spatial and temporal derivatives in the first-order hyperbolic equation with forward differences.
Corrector step: In the corrector step, the predicted value {\displaystyle u_{i}^{p}} is corrected according to the equation
{\displaystyle u_{i}^{n+1}=u_{i}^{n+1/2}-a{\frac {\Delta t}{2\Delta x}}\left(u_{i}^{p}-u_{i-1}^{p}\right)}
Note that the corrector step uses a backward finite difference approximation for the spatial derivative. The time-step used in the corrector step is {\displaystyle \Delta t/2}, in contrast to the {\displaystyle \Delta t} used in the predictor step.
Replacing the {\displaystyle u_{i}^{n+1/2}} term by the temporal average
{\displaystyle u_{i}^{n+1/2}={\frac {u_{i}^{n}+u_{i}^{p}}{2}}}
gives the corrector step as
{\displaystyle u_{i}^{n+1}={\frac {u_{i}^{n}+u_{i}^{p}}{2}}-a{\frac {\Delta t}{2\Delta x}}\left(u_{i}^{p}-u_{i-1}^{p}\right)}
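Putting the predictor and corrector together, a minimal Python/NumPy sketch of this linear example reads as follows; the periodic boundary handling via np.roll and the function name are illustrative assumptions.

import numpy as np

def maccormack_advection(u, a, dt, dx, steps):
    # MacCormack scheme for u_t + a u_x = 0 on a periodic grid.
    c = a * dt / dx                      # Courant number; keep |c| <= 1
    for _ in range(steps):
        up = u - c * (np.roll(u, -1) - u)                      # predictor (forward)
        u = 0.5 * (u + up) - 0.5 * c * (up - np.roll(up, 1))   # corrector (backward)
    return u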
=== Some remarks ===
The MacCormack method is well suited for nonlinear equations (inviscid Burgers equation, Euler equations, etc.). The order of differencing can be reversed for the time step (i.e., forward/backward followed by backward/forward). For nonlinear equations, this procedure provides the best results. For linear equations, the MacCormack scheme is equivalent to the Lax–Wendroff method.
Unlike the first-order upwind scheme, the MacCormack method does not introduce diffusive errors in the solution. However, it is known to introduce dispersive errors (Gibbs phenomenon) in regions where the gradient is high.
== See also ==
Lax–Wendroff method
Upwind scheme
Hyperbolic partial differential equations
== References == | Wikipedia/MacCormack_method |
The Lax–Friedrichs method, named after Peter Lax and Kurt O. Friedrichs, is a numerical method for the solution of hyperbolic partial differential equations based on finite differences. The method can be described as the FTCS (forward in time, centered in space) scheme with a numerical dissipation term of 1/2. One can view the Lax–Friedrichs method as an alternative to Godunov's scheme, where one avoids solving a Riemann problem at each cell interface, at the expense of adding artificial viscosity.
== Illustration for a Linear Problem ==
Consider a one-dimensional, linear hyperbolic partial differential equation for {\displaystyle u(x,t)} of the form:
{\displaystyle u_{t}+au_{x}=0}
on the domain {\displaystyle b\leq x\leq c,\;0\leq t\leq d} with initial condition {\displaystyle u(x,0)=u_{0}(x)} and the boundary conditions
{\displaystyle {\begin{aligned}u(b,t)&=u_{b}(t)\\u(c,t)&=u_{c}(t).\end{aligned}}}
If one discretizes the domain {\displaystyle (b,c)\times (0,d)} to a grid with equally spaced points with a spacing of {\displaystyle \Delta x} in the {\displaystyle x}-direction and {\displaystyle \Delta t} in the {\displaystyle t}-direction, we introduce an approximation {\displaystyle {\tilde {u}}} of {\displaystyle u}:
{\displaystyle u_{i}^{n}={\tilde {u}}(x_{i},t^{n})~~{\text{ with }}~~{\begin{array}{l}x_{i}=b+i\,\Delta x,\\t^{n}=n\,\Delta t\end{array}}~~{\text{ for }}~~{\begin{array}{l}i=0,\ldots ,N,\\n=0,\ldots ,M,\end{array}}}
where {\displaystyle N={\frac {c-b}{\Delta x}},\,M={\frac {d}{\Delta t}}} are integers representing the number of grid intervals. Then the Lax–Friedrichs method to approximate the partial differential equation is given by:
{\displaystyle {\frac {u_{i}^{n+1}-{\frac {1}{2}}(u_{i+1}^{n}+u_{i-1}^{n})}{\Delta t}}+a{\frac {u_{i+1}^{n}-u_{i-1}^{n}}{2\,\Delta x}}=0}
Or, rewriting this to solve for the unknown {\displaystyle u_{i}^{n+1}}:
{\displaystyle u_{i}^{n+1}={\frac {1}{2}}\left(u_{i+1}^{n}+u_{i-1}^{n}\right)-a{\frac {\Delta t}{2\,\Delta x}}\left(u_{i+1}^{n}-u_{i-1}^{n}\right)}
where the initial values and boundary nodes are taken from
{\displaystyle {\begin{aligned}u_{i}^{0}&=u_{0}(x_{i})\\u_{0}^{n}&=u_{b}(t^{n})\\u_{N}^{n}&=u_{c}(t^{n}).\end{aligned}}}
== Extensions to Nonlinear Problems ==
A nonlinear hyperbolic conservation law is defined through a flux function {\displaystyle f}:
{\displaystyle u_{t}+(f(u))_{x}=0.}
In the case of {\displaystyle f(u)=au}, we end up with a scalar linear problem. Note that in general, {\displaystyle u} is a vector with {\displaystyle m} equations in it.
The generalization of the Lax–Friedrichs method to nonlinear systems takes the form
{\displaystyle u_{i}^{n+1}={\frac {1}{2}}\left(u_{i+1}^{n}+u_{i-1}^{n}\right)-{\frac {\Delta t}{2\,\Delta x}}\left(f(u_{i+1}^{n})-f(u_{i-1}^{n})\right).}
This method is conservative and first-order accurate, hence quite dissipative. It can, however, be used as a building block for building high-order numerical schemes for solving hyperbolic partial differential equations, much like Euler time steps can be used as a building block for creating high-order numerical integrators for ordinary differential equations.
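A minimal Python/NumPy sketch of this nonlinear update is given below; the periodic boundaries and function names are illustrative assumptions, and the time step must respect the stability condition discussed later.

import numpy as np

def lax_friedrichs(u, flux, dt, dx, steps):
    # Lax–Friedrichs update for u_t + f(u)_x = 0 on a periodic grid;
    # e.g. flux = lambda u: 0.5 * u**2 for the inviscid Burgers equation.
    r = dt / (2.0 * dx)
    for _ in range(steps):
        up, um = np.roll(u, -1), np.roll(u, 1)   # u_{i+1}, u_{i-1}
        u = 0.5 * (up + um) - r * (flux(up) - flux(um))
    return u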
We note that this method can be written in conservation form:
{\displaystyle u_{i}^{n+1}=u_{i}^{n}-{\frac {\Delta t}{\Delta x}}\left({\hat {f}}_{i+1/2}^{n}-{\hat {f}}_{i-1/2}^{n}\right),}
where
{\displaystyle {\hat {f}}_{i-1/2}^{n}={\frac {1}{2}}\left(f_{i-1}+f_{i}\right)-{\frac {\Delta x}{2\Delta t}}\left(u_{i}^{n}-u_{i-1}^{n}\right).}
Without the extra terms {\displaystyle u_{i}^{n}} and {\displaystyle u_{i-1}^{n}} in the discrete flux {\displaystyle {\hat {f}}_{i-1/2}^{n}}, one ends up with the FTCS scheme, which is well known to be unconditionally unstable for hyperbolic problems.
== Stability and accuracy ==
This method is explicit and first-order accurate in time and first-order accurate in space ({\displaystyle O(\Delta t)+O(\Delta x^{2}/\Delta t)}), provided {\displaystyle u_{0}(x),\,u_{b}(t),\,u_{c}(t)} are sufficiently smooth functions. Under these conditions, the method is stable if and only if the following condition is satisfied:
{\displaystyle \left|a{\frac {\Delta t}{\Delta x}}\right|\leq 1.}
(A von Neumann stability analysis can show the necessity of this stability condition.) The Lax–Friedrichs method is classified as having second-order dissipation and third order dispersion. For functions that have discontinuities, the scheme displays strong dissipation and dispersion; see figures at right.
== References ==
DuChateau, Paul; Zachmann, David (2002), Applied Partial Differential Equations, New York: Dover Publications, ISBN 978-0-486-41976-3
Press, William H; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007), "Section 20.1.2. Lax Method", Numerical Recipes: The Art of Scientific Computing (3rd ed.), New York: Cambridge University Press, ISBN 978-0-521-88068-8 | Wikipedia/Lax–Friedrichs_method |
The method of moments (MoM), also known as the moment method and method of weighted residuals, is a numerical method in computational electromagnetics. It is used in computer programs that simulate the interaction of electromagnetic fields such as radio waves with matter, for example antenna simulation programs like NEC that calculate the radiation pattern of an antenna. Generally being a frequency-domain method, it involves the projection of an integral equation into a system of linear equations by the application of appropriate boundary conditions. This is done by using discrete meshes as in finite difference and finite element methods, often for the surface. The solutions are represented with the linear combination of pre-defined basis functions; generally, the coefficients of these basis functions are the sought unknowns. Green's functions and Galerkin method play a central role in the method of moments.
For many applications, the method of moments is identical to the boundary element method. It is one of the most common methods in microwave and antenna engineering.
== History ==
Development of the boundary element method and other similar methods for different engineering applications is associated with the advent of digital computing in the 1960s. Prior to this, variational methods had been applied to engineering problems at microwave frequencies by the time of World War II. While Julian Schwinger and Nathan Marcuvitz respectively compiled these works into lecture notes and textbooks, Victor Rumsey formulated these methods into the "reaction concept" in 1954. The concept was later shown to be equivalent to the Galerkin method. In the late 1950s, an early version of the method of moments was introduced by Yuen Lo at a course on mathematical methods in electromagnetic theory at the University of Illinois.
In the 1960s, early research work on the method was published by Kenneth Mei, Jean van Bladel and Jack Richmond. In the same decade, the systematic theory for the method of moments in electromagnetics was largely formalized by Roger Harrington. While the term "the method of moments" was coined earlier by Leonid Kantorovich and Gleb Akilov for analogous numerical applications, Harrington adapted the term for the electromagnetic formulation. Harrington published the seminal textbook Field Computation by Moment Methods on the moment method in 1968. The development of the method and its applications in radar and antenna engineering attracted interest, and MoM research was subsequently supported by the United States government. The method was further popularized by the introduction of generalized antenna modeling codes such as the Numerical Electromagnetics Code, which was released into the public domain by the United States government in the late 1980s. In the 1990s, the introduction of fast multipole and multilevel fast multipole methods enabled efficient MoM solutions to problems with millions of unknowns.
Being one of the most common simulation techniques in RF and microwave engineering, the method of moments forms the basis of many commercial design software packages, such as FEKO. Many non-commercial and public-domain codes of different levels of sophistication are also available. In addition to its use in electrical engineering, the method of moments has been applied to light scattering and plasmonic problems.
== Background ==
=== Basic concepts ===
An inhomogeneous integral equation can be expressed as:
{\displaystyle L(f)=g}
where L denotes a linear operator, g denotes the known forcing function and f denotes the unknown function. f can be approximated by a finite number of basis functions ({\displaystyle f_{n}}):
{\displaystyle f\approx \sum _{n}^{N}a_{n}f_{n}.}
By linearity, substitution of this expression into the equation yields:
{\displaystyle \sum _{n}^{N}a_{n}L(f_{n})\approx g.}
We can also define a residual for this expression, which denotes the difference between the actual and the approximate solution:
{\displaystyle R=\sum _{n}^{N}a_{n}L(f_{n})-g}
The aim of the method of moments is to minimize this residual, which can be done by using appropriate weighting or testing functions, hence the name method of weighted residuals. After the determination of a suitable inner product for the problem, the expression then becomes:
{\displaystyle \sum _{n}^{N}a_{n}\langle w_{m},L(f_{n})\rangle \approx \langle w_{m},g\rangle }
Thus, the expression can be represented in the matrix form:
{\displaystyle \left[\ell _{mn}\right]\left[\alpha _{m}\right]=[g_{n}]}
The resulting matrix is often referred to as the impedance matrix. The coefficients of the basis functions can be obtained by inverting the matrix. For large matrices with a large number of unknowns, iterative methods such as the conjugate gradient method can be used for acceleration. The actual field distributions can be obtained from the coefficients and the associated integrals. The interactions between the basis functions in MoM are ensured by the Green's function of the system.
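The steps above can be condensed into a short sketch. The following Python/NumPy code discretizes a generic first-kind integral equation ∫ K(x,x′) f(x′) dx′ = g(x) with pulse basis functions and point matching (collocation); it is schematic rather than an electromagnetic solver: the kernel is a placeholder, a one-point quadrature is used per source segment, and the careful treatment of singular kernel diagonals needed in real MoM codes is omitted.

import numpy as np

def mom_pulse_collocation(kernel, g, a, b, N):
    # Pulse basis + point matching for the equation K f = g on (a, b).
    h = (b - a) / N
    xc = a + h * (np.arange(N) + 0.5)         # segment centers = match points
    Z = kernel(xc[:, None], xc[None, :]) * h  # "impedance" matrix, 1-pt quadrature
    return xc, np.linalg.solve(Z, g(xc))      # basis-function coefficients

For instance, a regularized placeholder kernel such as kernel = lambda x, xp: 1.0 / np.sqrt((x - xp)**2 + 1e-3) avoids the singular diagonal in this toy setting.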
=== Basis and testing functions ===
Different basis functions can be chosen to model the expected behavior of the unknown function in the domain; these functions can either be subsectional or global. The choice of the Dirac delta function as basis function is known as point-matching or collocation. This corresponds to enforcing the boundary conditions on {\displaystyle N} discrete points and is often used to obtain approximate solutions when the inner product operation is cumbersome to perform. Other subsectional basis functions include pulse, piecewise triangular, piecewise sinusoidal and rooftop functions. Triangular patches, introduced by S. Rao, D. Wilton and A. Glisson in 1982, are known as RWG basis functions and are widely used in MoM. Characteristic basis functions were also introduced to accelerate computation and reduce the matrix equation.
The testing and basis functions are often chosen to be the same; this is known as the Galerkin method. Depending on the application and studied structure, the testing and basis functions should be chosen appropriately to ensure convergence and accuracy, as well as to prevent possible high order algebraic singularities.
== Integral equations ==
Depending on the application and sought variables, different integral or integro-differential equations are used in MoM. Radiation and scattering by thin wire structures, such as many types of antennas, can be modeled by specialized equations. For surface problems, common integral equation formulations include electric field integral equation (EFIE), magnetic field integral equation (MFIE) and mixed-potential integral equation (MPIE).
=== Thin-wire equations ===
As many antenna structures can be approximated as wires, thin wire equations are of interest in MoM applications. Two commonly used thin-wire equations are Pocklington and Hallén integro-differential equations. Pocklington's equation precedes the computational techniques, having been introduced in 1897 by Henry Cabourn Pocklington. For a linear wire that is centered on the origin and aligned with the z-axis, the equation can be written as:
{\displaystyle \int _{-l/2}^{l/2}I_{z}(z')\left[\left({\frac {d^{2}}{dz^{2}}}+\beta ^{2}\right)G(z,z')\right]\,dz'=-j\omega \varepsilon E_{z}^{\text{inc}}(p=a)}
where {\displaystyle l} and {\displaystyle a} denote the total length and thickness, respectively, and {\displaystyle G(z,z')} is the Green's function for free space. The equation can be generalized to different excitation schemes, including magnetic frills.
The Hallén integral equation, published by E. Hallén in 1938, can be given as:
{\displaystyle \left({\frac {d^{2}}{dz^{2}}}+\beta ^{2}\right)\int _{-l/2}^{l/2}I_{z}(z')G(z,z')\,dz'=-j\omega \varepsilon E_{z}^{\text{inc}}(p=a)}
This equation, despite being better behaved than Pocklington's equation, is generally restricted to delta-gap voltage excitations at the antenna feed point, which can be represented as an impressed electric field.
=== Electric field integral equation (EFIE) ===
The general form of the electric field integral equation (EFIE) can be written as:
{\displaystyle {\hat {\mathbf {n} }}\times \mathbf {E} ^{\text{inc}}(\mathbf {r} )={\hat {\mathbf {n} }}\times \int _{S}\left[\eta jk\,\mathbf {J} (\mathbf {r} ')G(\mathbf {r} ,\mathbf {r} ')+{\frac {\eta }{jk}}\left\{{\boldsymbol {\nabla }}_{s}'\cdot \mathbf {J} (\mathbf {r} ')\right\}{\boldsymbol {\nabla }}'G(\mathbf {r} ,\mathbf {r} ')\right]\,dS'}
where {\displaystyle \mathbf {E} ^{\text{inc}}} is the incident or impressed electric field, {\displaystyle G(\mathbf {r} ,\mathbf {r} ')} is the Green's function for the Helmholtz equation, and {\displaystyle \eta } represents the wave impedance. The boundary conditions are met at a defined PEC surface. EFIE is a Fredholm integral equation of the first kind.
=== Magnetic field integral equation (MFIE) ===
Another commonly used integral equation in MoM is the magnetic field integral equation (MFIE), which can be written as:
{\displaystyle -{\frac {1}{2}}\mathbf {J} (r)+{\hat {\mathbf {n} }}\times \oint _{S}\mathbf {J} (r')\times {\boldsymbol {\nabla }}'G(r,r')\,dS'={\hat {\mathbf {n} }}\times \mathbf {H} _{\text{inc}}(r)}
MFIE is often formulated to be a Fredholm integral equation of the second kind and is generally well-posed. Nevertheless, the formulation necessitates the use of closed surfaces, which limits its applications.
=== Other formulations ===
Many different surface and volume integral formulations for MoM exist. In many cases, EFIEs are converted to mixed-potential integral equations (MPIE) through the use of the Lorenz gauge condition; this aims to reduce the orders of singularities through the use of magnetic vector and electric scalar potentials. In order to bypass the internal resonance problem in dielectric scattering calculations, combined-field integral equation (CFIE) and Poggio–Miller–Chang–Harrington–Wu–Tsai (PMCHWT) formulations are also used. Another approach, the volumetric integral equation, necessitates the discretization of the volume elements and is often computationally expensive.
MoM can also be integrated with physical optics theory and finite element method.
== Green's functions ==
An appropriate Green's function for the studied structure must be known to formulate MoM matrices: the automatic incorporation of the radiation condition into the Green's function makes MoM particularly useful for radiation and scattering problems. Even though the Green's function can be derived in closed form for very simple cases, more complex structures necessitate numerical derivation of these functions.
Full-wave analysis of planarly-stratified structures in particular, such as microstrips or patch antennas, necessitates the derivation of Green's functions that are peculiar to these geometries. This can be achieved in two different ways. In the first, known as the spectral-domain approach, the inner products and convolution operation for MoM matrix entries are evaluated in the Fourier space with analytically-derived spectral-domain Green's functions through Parseval's theorem. The other approach is based on the use of spatial-domain Green's functions. This involves the inverse Hankel transform of the spectral-domain Green's function, which is defined on the Sommerfeld integration path. Nevertheless, this integral cannot be evaluated analytically, and its numerical evaluation is often computationally expensive due to the oscillatory kernels and the slowly-converging nature of the integral. Common approaches for evaluating these integrals include tail-extrapolation approaches such as the weighted-averages method.
Other approaches include the approximation of the integral kernel. Following the extraction of quasi-static and surface pole components, these integrals can be approximated as closed-form complex exponentials through Prony's method or generalized pencil-of-function method; thus, the spatial Green's functions can be derived through the use of appropriate identities such as Sommerfeld identity. This method is known in the computational electromagnetics literature as the discrete complex image method (DCIM), since the Green's function is effectively approximated with a discrete number of image dipoles that are located within a complex distance from the origin. The associated Green's functions are referred as closed-form Green's functions. The method has also been extended for cylindrically-layered structures.
The rational-function fitting method, as well as its combinations with DCIM, can also be used to approximate closed-form Green's functions. Alternatively, the closed-form Green's function can be evaluated through the method of steepest descent. For periodic structures such as phased arrays and frequency selective surfaces, series acceleration methods such as Kummer's transformation and Ewald summation are often used to accelerate the computation of the periodic Green's function.
== See also ==
Boundary element method
Characteristic mode analysis
Discrete dipole approximation
Fast multipole method
Finite element method
Multilevel fast multipole method
== Notes ==
== References ==
Bibliography
Balanis, Constantine A. (2012). Advanced Engineering Electromagnetics (2 ed.). Wiley. ISBN 978-0-470-58948-9.
Chew, W. C.; Michielssen, E.; Song, J. M.; Jin, J. M., eds. (2001). Fast and Efficient Algorithms in Computational Electromagnetics. Artech House. ISBN 9781580531528.
Davidson, David B. (2005). Computational Electromagnetics for RF and Microwave Engineering. Cambridge University Press. ISBN 978-0-521-83859-7.
Gibson, Walton C. (2021). The Method of Moments in Electromagnetics (3rd ed.). Chapman & Hall. ISBN 9780367365066.
Harrington, Roger F. (1993). Field Computation by Moment Methods. IEEE Press. ISBN 9780470544631.
Kinayman, Noyan; Aksun, M. I. (2005). Modern Microwave Circuits. Norwood: Artech House. ISBN 9781844073832. | Wikipedia/Method_of_moments_(electromagnetics) |
The Level-set method (LSM) is a conceptual framework for using level sets as a tool for numerical analysis of surfaces and shapes. LSM can perform numerical computations involving curves and surfaces on a fixed Cartesian grid without having to parameterize these objects. LSM makes it easier to perform computations on shapes with sharp corners and shapes that change topology (such as by splitting in two or developing holes). These characteristics make LSM effective for modeling objects that vary in time, such as an airbag inflating or a drop of oil floating in water.
== Overview ==
The figure on the right illustrates several ideas about LSM. In the upper left corner is a bounded region with a well-behaved boundary. Below it, the red surface is the graph of a level set function {\displaystyle \varphi } determining this shape, and the flat blue region represents the X-Y plane. The boundary of the shape is then the zero-level set of {\displaystyle \varphi }, while the shape itself is the set of points in the plane for which {\displaystyle \varphi } is positive (interior of the shape) or zero (at the boundary).
In the top row, the shape's topology changes as it is split in two. It is challenging to describe this transformation numerically by parameterizing the boundary of the shape and following its evolution: an algorithm would have to detect the moment the shape splits in two and then construct parameterizations for the two newly obtained curves. On the bottom row, however, the plane at which the level set function is sampled is simply translated upwards, and the shape's change in topology is captured automatically. It is thus less challenging to work with a shape through its level-set function than with the shape directly, where a method would need to consider all the possible deformations the shape might undergo.
Thus, in two dimensions, the level-set method amounts to representing a closed curve {\displaystyle \Gamma } (such as the shape boundary in our example) using an auxiliary function {\displaystyle \varphi }, called the level-set function. The curve {\displaystyle \Gamma } is represented as the zero-level set of {\displaystyle \varphi } by
{\displaystyle \Gamma =\{(x,y)\mid \varphi (x,y)=0\},}
and the level-set method manipulates {\displaystyle \Gamma } implicitly through the function {\displaystyle \varphi }. This function {\displaystyle \varphi } is assumed to take positive values inside the region delimited by the curve {\displaystyle \Gamma } and negative values outside.
== The level-set equation ==
If the curve {\displaystyle \Gamma } moves in the normal direction with a speed {\displaystyle v}, then by the chain rule and implicit differentiation, it can be determined that the level-set function {\displaystyle \varphi } satisfies the level-set equation
{\displaystyle {\frac {\partial \varphi }{\partial t}}=v|\nabla \varphi |.}
Here, {\displaystyle |\cdot |} is the Euclidean norm (denoted customarily by single bars in partial differential equations), and {\displaystyle t} is time. This is a partial differential equation, in particular a Hamilton–Jacobi equation, and can be solved numerically, for example, by using finite differences on a Cartesian grid.
However, the numerical solution of the level-set equation may require advanced techniques. Simple finite difference methods fail quickly. Upwinding methods such as the Godunov method are considered better; even so, the level-set method does not guarantee preservation of the volume and shape of the level set in an advection field that conserves shape and size, for example a uniform or rotational velocity field. Instead, the shape of the level set may become distorted, and the level set may disappear over a few time steps. Therefore, high-order finite difference schemes, such as high-order essentially non-oscillatory (ENO) schemes, are often required, and even then the feasibility of long-term simulations is questionable. More advanced methods have been developed to overcome this, for example combinations of the level-set method with marker particles advected by the velocity field.
== Example ==
Consider a unit circle in {\textstyle \mathbb {R} ^{2}}, shrinking in on itself at a constant rate, i.e. each point on the boundary of the circle moves along its inward-pointing normal at some fixed speed. The circle will shrink and eventually collapse down to a point. If an initial distance field is constructed (i.e. a function whose value is the signed Euclidean distance to the boundary, positive interior, negative exterior) on the initial circle, the normalized gradient of this field will be the circle normal.
If the field has a constant value subtracted from it in time, the zero level (which was the initial boundary) of the new fields will also be circular and will similarly collapse to a point; this is effectively the temporal integration of the eikonal equation with a fixed front velocity.
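The shrinking circle can be reproduced in a few lines. The Python/NumPy sketch below initializes φ as a signed distance field (positive inside, matching the convention above) and advances φ_t = v|∇φ| with a first-order Godunov-type upwind gradient for v < 0; the grid size, time step, and function name are illustrative choices, and the wraparound differences at the array edges are harmless here because the front stays in the interior.

import numpy as np

def shrink_circle(n=101, radius=0.8, v=-1.0, dt=0.005, steps=60):
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    phi = radius - np.sqrt(X**2 + Y**2)     # signed distance, positive inside
    h = x[1] - x[0]
    for _ in range(steps):
        dxm = (phi - np.roll(phi, 1, axis=1)) / h   # one-sided differences
        dxp = (np.roll(phi, -1, axis=1) - phi) / h
        dym = (phi - np.roll(phi, 1, axis=0)) / h
        dyp = (np.roll(phi, -1, axis=0) - phi) / h
        # Godunov upwind choice of |grad phi| for an inward-moving front (v < 0).
        grad = np.sqrt(np.maximum(dxm, 0.0)**2 + np.minimum(dxp, 0.0)**2 +
                       np.maximum(dym, 0.0)**2 + np.minimum(dyp, 0.0)**2)
        phi += dt * v * grad                 # zero-level set moves inward
    return phi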
== Applications ==
In mathematical modeling of combustion, LSM is used to describe the instantaneous flame surface, known as the G equation.
Level-set data structures have been developed to facilitate the use of the level-set method in computer applications.
Computational fluid dynamics
Trajectory planning
Optimization
Image processing
Computational biophysics
Discrete complex dynamics (visualization of the parameter plane and the dynamic plane)
== History ==
The level-set method was developed in 1979 by Alain Dervieux, and subsequently popularized by Stanley Osher and James Sethian. It has since become popular in many disciplines, such as image processing, computer graphics, computational geometry, optimization, computational fluid dynamics, and computational biology.
== See also ==
== References ==
== External links ==
See Ronald Fedkiw's academic web page for many pictures and animations showing how the level-set method can be used to model real-life phenomena.
Multivac is a C++ library for front tracking in 2D with level-set methods.
James Sethian's web page on level-set method.
Stanley Osher's homepage.
The Level Set Method. MIT 16.920J / 2.097J / 6.339J. Numerical Methods for Partial Differential Equations by Per-Olof Persson. March 8, 2005
Lecture 11: The Level Set Method: MIT 18.086. Mathematical Methods for Engineers II by Gilbert Strang | Wikipedia/Level-set_method |
Generalized pencil-of-function method (GPOF), also known as matrix pencil method, is a signal processing technique for estimating a signal or extracting information with complex exponentials. Being similar to Prony and original pencil-of-function methods, it is generally preferred to those for its robustness and computational efficiency.
The method was originally developed by Yingbo Hua and Tapan Sarkar for estimating the behaviour of electromagnetic systems by its transient response, building on Sarkar's past work on the original pencil-of-function method. The method has a plethora of applications in electrical engineering, particularly related to problems in computational electromagnetics, microwave engineering and antenna theory.
== Method ==
=== Mathematical basis ===
A transient electromagnetic signal can be represented as:
{\displaystyle y(t)=x(t)+n(t)\approx \sum _{i=1}^{M}R_{i}e^{s_{i}t}+n(t);0\leq t\leq T,}
where {\displaystyle y(t)} is the observed time-domain signal, {\displaystyle n(t)} is the signal noise, {\displaystyle x(t)} is the actual signal, {\displaystyle R_{i}} are the residues, and {\displaystyle s_{i}} are the poles of the system, defined as {\displaystyle s_{i}=-\alpha _{i}+j\omega _{i}} (so that {\displaystyle z_{i}=e^{(-\alpha _{i}+j\omega _{i})T_{s}}} by the identities of the Z-transform); {\displaystyle \alpha _{i}} are the damping factors and {\displaystyle \omega _{i}} are the angular frequencies.
The same sequence, sampled by a period of {\displaystyle T_{s}}, can be written as the following:
{\displaystyle y[kT_{s}]=x[kT_{s}]+n[kT_{s}]\approx \sum _{i=1}^{M}R_{i}z_{i}^{k}+n[kT_{s}];k=0,...,N-1;i=1,2,...,M}
The generalized pencil-of-function method estimates the optimal {\displaystyle M} and the {\displaystyle z_{i}}'s.
=== Noise-free analysis ===
For the noiseless case, two {\displaystyle (N-L)\times L} matrices, {\displaystyle Y_{1}} and {\displaystyle Y_{2}}, are produced:
{\displaystyle [Y_{1}]={\begin{bmatrix}x(0)&x(1)&\cdots &x(L-1)\\x(1)&x(2)&\cdots &x(L)\\\vdots &\vdots &\ddots &\vdots \\x(N-L-1)&x(N-L)&\cdots &x(N-2)\end{bmatrix}}_{(N-L)\times L};}
{\displaystyle [Y_{2}]={\begin{bmatrix}x(1)&x(2)&\cdots &x(L)\\x(2)&x(3)&\cdots &x(L+1)\\\vdots &\vdots &\ddots &\vdots \\x(N-L)&x(N-L+1)&\cdots &x(N-1)\end{bmatrix}}_{(N-L)\times L}}
where {\displaystyle L} is defined as the pencil parameter. {\displaystyle Y_{1}} and {\displaystyle Y_{2}} can be decomposed into the following matrices:
[
Y
1
]
=
[
Z
1
]
[
B
]
[
Z
2
]
{\displaystyle [Y_{1}]=[Z_{1}][B][Z_{2}]}
[
Y
2
]
=
[
Z
1
]
[
B
]
[
Z
0
]
[
Z
2
]
{\displaystyle [Y_{2}]=[Z_{1}][B][Z_{0}][Z_{2}]}
where
{\displaystyle [Z_{1}]={\begin{bmatrix}1&1&\cdots &1\\z_{1}&z_{2}&\cdots &z_{M}\\\vdots &\vdots &\ddots &\vdots \\z_{1}^{(N-L-1)}&z_{2}^{(N-L-1)}&\cdots &z_{M}^{(N-L-1)}\end{bmatrix}}_{(N-L)\times M};}
{\displaystyle [Z_{2}]={\begin{bmatrix}1&z_{1}&\cdots &z_{1}^{L-1}\\1&z_{2}&\cdots &z_{2}^{L-1}\\\vdots &\vdots &\ddots &\vdots \\1&z_{M}&\cdots &z_{M}^{L-1}\end{bmatrix}}_{M\times L}}
{\textstyle [Z_{0}]} and {\textstyle [B]} are {\textstyle M\times M} diagonal matrices with sequentially-placed {\textstyle z_{i}} and {\textstyle R_{i}} values, respectively.
If {\textstyle M\leq L\leq N-M}, the generalized eigenvalues of the matrix pencil
{\displaystyle [Y_{2}]-\lambda [Y_{1}]=[Z_{1}][B]([Z_{0}]-\lambda [I])[Z_{2}]}
yield the poles of the system, which are {\displaystyle \lambda =z_{i}}. Then, the generalized eigenvectors {\displaystyle p_{i}} can be obtained by the following identities:
{\displaystyle [Y_{1}]^{+}[Y_{1}]p_{i}=p_{i};\quad i=1,\ldots ,M}
{\displaystyle [Y_{1}]^{+}[Y_{2}]p_{i}=z_{i}p_{i};\quad i=1,\ldots ,M}
where {\displaystyle {}^{+}} denotes the Moore–Penrose inverse, also known as the pseudo-inverse. Singular value decomposition can be employed to compute the pseudo-inverse.
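For illustration, a minimal Python/NumPy sketch of this noise-free procedure (the signal x, the model order M and the pencil parameter L are hypothetical inputs; the eigenvalue step follows the pseudo-inverse identity above):

import numpy as np

def matrix_pencil_poles(x, M, L):
    # Build the two (N-L) x L shifted data matrices Y1 and Y2
    N = len(x)
    Y1 = np.array([x[i:i + L] for i in range(N - L)])
    Y2 = np.array([x[i + 1:i + L + 1] for i in range(N - L)])
    # The poles z_i appear among the eigenvalues of pinv(Y1) @ Y2
    lam = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    # Keep the M eigenvalues of largest magnitude; the rest are spurious zeros
    return lam[np.argsort(-np.abs(lam))[:M]]

# Example: two damped complex exponentials sampled with Ts = 0.01
Ts, k = 0.01, np.arange(100)
z1 = np.exp((-1.0 + 2j * np.pi * 10) * Ts)
z2 = np.exp((-0.5 + 2j * np.pi * 25) * Ts)
x = 1.0 * z1 ** k + 0.5 * z2 ** k
print(matrix_pencil_poles(x, M=2, L=40))   # recovers z1 and z2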
=== Noise filtering ===
If noise is present in the system, {\textstyle [Y_{1}]} and {\textstyle [Y_{2}]} are combined in a general data matrix, {\textstyle [Y]}:
{\displaystyle [Y]={\begin{bmatrix}y(0)&y(1)&\cdots &y(L)\\y(1)&y(2)&\cdots &y(L+1)\\\vdots &\vdots &\ddots &\vdots \\y(N-L-1)&y(N-L)&\cdots &y(N-1)\end{bmatrix}}_{(N-L)\times (L+1)}}
where {\displaystyle y} is the noisy data. For efficient filtering, L is chosen between {\textstyle {\frac {N}{3}}} and {\textstyle {\frac {N}{2}}}. A singular value decomposition on {\textstyle [Y]} yields:
{\displaystyle [Y]=[U][\Sigma ][V]^{H}}
In this decomposition, {\textstyle [U]} and {\textstyle [V]} are unitary matrices whose columns are the eigenvectors of {\textstyle [Y][Y]^{H}} and {\textstyle [Y]^{H}[Y]}, respectively, and {\textstyle [\Sigma ]} is a diagonal matrix holding the singular values of {\textstyle [Y]}. The superscript {\textstyle H} denotes the conjugate transpose.
Then the parameter {\textstyle M} is chosen for filtering. Singular values after {\textstyle M}, which are below the filtering threshold, are set to zero; for an arbitrary singular value {\textstyle \sigma _{c}}, the threshold is given by the following formula:
{\displaystyle {\frac {\sigma _{c}}{\sigma _{max}}}=10^{-p},}
where {\textstyle \sigma _{max}} and p are the maximum singular value and the number of significant decimal digits, respectively. For data accurate up to p significant digits, singular values below {\textstyle 10^{-p}} are considered noise.
{\textstyle [V_{1}']} and {\textstyle [V_{2}']} are obtained by removing the last and the first row and column of the filtered matrix {\textstyle [V']}, respectively; {\textstyle M} columns of {\textstyle [\Sigma ]} represent {\textstyle [\Sigma ']}. The filtered {\textstyle [Y_{1}]} and {\textstyle [Y_{2}]} matrices are obtained as:
{\displaystyle [Y_{1}]=[U][\Sigma '][V_{1}']^{H}}
{\displaystyle [Y_{2}]=[U][\Sigma '][V_{2}']^{H}}
Prefiltering can be used to combat noise and enhance the signal-to-noise ratio (SNR). The band-pass matrix pencil (BPMP) method is a modification of the GPOF method that incorporates FIR or IIR band-pass filters. GPOF can handle SNRs down to roughly 25 dB; for GPOF, as well as for BPMP, the variance of the estimates approximately reaches the Cramér–Rao bound.
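A sketch of this filtering step in Python/NumPy (illustrative; the samples y, the model order M and the pencil parameter L are hypothetical inputs, and the truncation simply keeps the M dominant singular values rather than applying the 10^-p test):

import numpy as np

def filtered_pencil_matrices(y, M, L):
    # Combined (N-L) x (L+1) data matrix built from the noisy samples
    N = len(y)
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)
    Vp = Vh.conj().T[:, :M]          # filtered [V']: M dominant right singular vectors
    V1 = Vp[:-1, :]                  # remove the last row  -> [V1']
    V2 = Vp[1:, :]                   # remove the first row -> [V2']
    S = np.diag(s[:M])               # filtered [Sigma']
    Y1 = U[:, :M] @ S @ V1.conj().T
    Y2 = U[:, :M] @ S @ V2.conj().T
    return Y1, Y2                    # feed these to the eigenvalue step as before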
=== Calculation of residues ===
Residues of the complex poles are obtained through the least squares problem:
{\displaystyle {\begin{bmatrix}y(0)\\y(1)\\\vdots \\y(N-1)\end{bmatrix}}={\begin{bmatrix}1&1&\cdots &1\\z_{1}&z_{2}&\cdots &z_{M}\\\vdots &\vdots &\ddots &\vdots \\z_{1}^{N-1}&z_{2}^{N-1}&\cdots &z_{M}^{N-1}\end{bmatrix}}{\begin{bmatrix}R_{1}\\R_{2}\\\vdots \\R_{M}\end{bmatrix}}}
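In Python/NumPy this least-squares step reduces to one solve over a Vandermonde matrix (a sketch; y are the samples and z the poles estimated by the pencil step):

import numpy as np

def pencil_residues(y, z):
    # Vandermonde system: column i holds z_i**k for k = 0 .. N-1
    V = np.vander(z, len(y), increasing=True).T
    R, *_ = np.linalg.lstsq(V, y, rcond=None)
    return R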
== Applications ==
The method is generally used for the closed-form evaluation of Sommerfeld integrals in the discrete complex image method for method-of-moments applications, where the spectral Green's function is approximated as a sum of complex exponentials. Additionally, the method is used in antenna analysis, S-parameter estimation in microwave integrated circuits, wave propagation analysis, moving target indication, radar signal processing, and series acceleration in electromagnetic problems.
== See also ==
Estimation of signal parameters via rotational invariance techniques
Generalized eigenvalue problem
Matrix pencil
MUSIC (algorithm)
Prony's method
== References == | Wikipedia/Generalized_pencil-of-function_method |
Integrable algorithms are numerical algorithms that rely on basic ideas from the mathematical theory of integrable systems.
== Background ==
The theory of integrable systems has advanced in step with numerical analysis. For example, the discovery of solitons came from numerical experiments on the KdV equation by Norman Zabusky and Martin David Kruskal. Today, various relations between numerical analysis and integrable systems have been found (the Toda lattice and numerical linear algebra, discrete soliton equations and series acceleration), and studies applying integrable systems to numerical computation are rapidly advancing.
== Integrable difference schemes ==
Generally, it is hard to accurately compute the solutions of nonlinear differential equations due to their non-linearity. To overcome this difficulty, R. Hirota constructed discrete versions of integrable systems, guided by the principle of preserving the mathematical structures of integrable systems in their discrete analogues.
At the same time, Mark J. Ablowitz and others not only constructed discrete soliton equations with discrete Lax pairs but also compared numerical results between integrable difference schemes and ordinary methods. As a result of their experiments, they found that integrable difference schemes can improve accuracy in some cases.
== References ==
== See also ==
Soliton
Integrable system | Wikipedia/Integrable_algorithm |
In numerical analysis, the Schur complement method, named after Issai Schur, is the basic and earliest version of the non-overlapping domain decomposition method, also called iterative substructuring. A finite element problem is split into non-overlapping subdomains, and the unknowns in the interiors of the subdomains are eliminated. The remaining Schur complement system on the unknowns associated with the subdomain interfaces is solved by the conjugate gradient method.
== The method and implementation ==
Suppose we want to solve the Poisson equation
{\displaystyle -\Delta u=f,\qquad u|_{\partial \Omega }=0}
on some domain Ω. When we discretize this problem we get an N-dimensional linear system AU = F. The Schur complement method splits up the linear system into sub-problems. To do so, divide Ω into two subdomains Ω1, Ω2 which share an interface Γ. Let U1, U2 and UΓ be the degrees of freedom associated with each subdomain and with the interface. We can then write the linear system as
{\displaystyle \left[{\begin{matrix}A_{11}&0&A_{1\Gamma }\\0&A_{22}&A_{2\Gamma }\\A_{\Gamma 1}&A_{\Gamma 2}&A_{\Gamma \Gamma }\end{matrix}}\right]\left[{\begin{matrix}U_{1}\\U_{2}\\U_{\Gamma }\end{matrix}}\right]=\left[{\begin{matrix}F_{1}\\F_{2}\\F_{\Gamma }\end{matrix}}\right],}
where F1, F2 and FΓ are the components of the load vector in each region.
The Schur complement method proceeds by noting that we can find the values on the interface by solving the smaller system
{\displaystyle \Sigma U_{\Gamma }=F_{\Gamma }-A_{\Gamma 1}A_{11}^{-1}F_{1}-A_{\Gamma 2}A_{22}^{-1}F_{2},}
for the interface values UΓ, where we define the Schur complement matrix
{\displaystyle \Sigma =A_{\Gamma \Gamma }-A_{\Gamma 1}A_{11}^{-1}A_{1\Gamma }-A_{\Gamma 2}A_{22}^{-1}A_{2\Gamma }.}
The important thing to note is that the computation of any quantities involving {\displaystyle A_{11}^{-1}} or {\displaystyle A_{22}^{-1}} involves solving decoupled Dirichlet problems on each domain, and these can be done in parallel. Consequently, we need not store the Schur complement matrix explicitly; it is sufficient to know how to multiply a vector by it.
Once we know the values on the interface, we can find the interior values using the two relations
{\displaystyle A_{11}U_{1}=F_{1}-A_{1\Gamma }U_{\Gamma },\qquad A_{22}U_{2}=F_{2}-A_{2\Gamma }U_{\Gamma },}
which can both be done in parallel.
The multiplication of a vector by the Schur complement is a discrete version of the Poincaré–Steklov operator, also called the Dirichlet to Neumann mapping.
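A matrix-free sketch of the whole procedure in Python/SciPy (illustrative; the sparse blocks A11, A22, A1G, A2G, AG1, AG2, AGG and the load vectors are assumed to come from a finite element assembly):

import scipy.sparse.linalg as spla

def schur_solve(A11, A22, A1G, A2G, AG1, AG2, AGG, F1, F2, FG):
    # Decoupled Dirichlet solvers on the subdomains (factorized once, parallelizable)
    solve1 = spla.factorized(A11.tocsc())
    solve2 = spla.factorized(A22.tocsc())
    n = FG.shape[0]

    def sigma_matvec(uG):
        # Sigma uG = AGG uG - AG1 A11^{-1} A1G uG - AG2 A22^{-1} A2G uG
        return AGG @ uG - AG1 @ solve1(A1G @ uG) - AG2 @ solve2(A2G @ uG)

    # Sigma is never formed explicitly; CG only needs matrix-vector products
    Sigma = spla.LinearOperator((n, n), matvec=sigma_matvec)
    rhs = FG - AG1 @ solve1(F1) - AG2 @ solve2(F2)
    uG, info = spla.cg(Sigma, rhs)
    # Back-substitute for the interior unknowns (again parallel over subdomains)
    u1 = solve1(F1 - A1G @ uG)
    u2 = solve2(F2 - A2G @ uG)
    return u1, u2, uG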
== Advantages ==
There are two benefits of this method. First, the elimination of the interior unknowns on the subdomains, that is, the solution of the Dirichlet problems, can be done in parallel. Second, passing to the Schur complement reduces the condition number and thus tends to decrease the number of iterations. For second-order problems, such as the Laplace equation or linear elasticity, the matrix of the system has a condition number of the order {\displaystyle 1/h^{2}}, where h is the characteristic element size. The Schur complement, however, has a condition number only of the order {\displaystyle 1/h}.
For performance, the Schur complement method is combined with preconditioning, at least with a diagonal preconditioner. The Neumann–Neumann method and the Neumann–Dirichlet method are the Schur complement method with particular kinds of preconditioners. When fast subdomain solvers are available, especially on low-cost parallel computers, the Schur complement method is relatively efficient.
== References == | Wikipedia/Schur_complement_method |
The finite-difference frequency-domain (FDFD) method is a numerical solution method for problems usually in electromagnetism and sometimes in acoustics, based on finite-difference approximations of the derivative operators in the differential equation being solved.
While "FDFD" is a generic term describing all frequency-domain finite-difference methods, the title seems to mostly describe the method as applied to scattering problems. The method shares many similarities to the finite-difference time-domain (FDTD) method, so much so that the literature on FDTD can be directly applied. The method works by transforming Maxwell's equations (or other partial differential equation) for sources and fields at a constant frequency into matrix form
A
x
=
b
{\displaystyle Ax=b}
. The matrix A is derived from the wave equation operator, the column vector x contains the field components, and the column vector b describes the source. The method is capable of incorporating anisotropic materials, but off-diagonal components of the tensor require special treatment.
Strictly speaking, there are at least two categories of "frequency-domain" problems in electromagnetism. One is to find the response to a current density J with a constant frequency ω, i.e. of the form {\displaystyle \mathbf {J} (\mathbf {x} )e^{i\omega t}}, or a similar time-harmonic source. This frequency-domain response problem leads to an {\displaystyle Ax=b} system of linear equations as described above. An early description of a frequency-domain response finite-difference method for scattering problems was published by Christ and Hartnagel (1987). Another is to find the normal modes of a structure (e.g. a waveguide) in the absence of sources: in this case the frequency ω is itself a variable, and one obtains an eigenproblem {\displaystyle Ax=\lambda x} (usually, the eigenvalue λ is {\displaystyle \omega ^{2}}). An early description of a finite-difference method for electromagnetic eigenproblems was published by Albani and Bernardi (1974).
== Implementing the method ==
FDFD is typically implemented on a Yee grid, because it offers the following benefits: (1) it implicitly satisfies the zero-divergence conditions and thus avoids spurious solutions, (2) it naturally handles physical boundary conditions, and (3) it provides a very elegant and compact way of approximating the curl equations with finite differences.
Much of the literature on finite-difference time-domain (FDTD) methods applies to FDFD, particularly topics on how to represent materials and devices on a Yee grid.
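As a one-dimensional illustration of the Ax = b formulation (a sketch, not a full Yee-grid implementation; the grid size, wavelength and source location are arbitrary choices, and absorbing boundaries such as PML are omitted):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Grid and physical parameters (illustrative values)
N, D = 400, 10.0                 # interior points, domain length
dx = D / (N + 1)
k0 = 2 * np.pi / 1.0             # free-space wavenumber for wavelength 1.0

# Second-difference approximation of d^2/dx^2 with zero (Dirichlet) boundaries
D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / dx**2

# Frequency-domain wave operator A = D2 + k0^2 I, so that A e = b
A = (D2 + k0**2 * sp.identity(N)).astype(complex).tocsc()

# Time-harmonic point source in the middle of the grid
b = np.zeros(N, dtype=complex)
b[N // 2] = 1.0 / dx

e = spla.spsolve(A, b)           # one sparse linear solve; no time stepping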
== Comparison with FDTD and FEM ==
The FDFD method is very similar to the finite element method (FEM), though there are some major differences. Unlike the FDTD method, there are no time steps that must be computed sequentially, thus making FDFD easier to implement. This might also lead one to imagine that FDFD is less computationally expensive; however, this is not necessarily the case. The FDFD method requires solving a sparse linear system, which even for simple problems can be 20,000 by 20,000 elements or larger, with over a million unknowns. In this respect, the FDFD method is similar to the FEM, which is also usually implemented in the frequency domain. There are efficient numerical solvers available so that matrix inversion, an extremely computationally expensive process, can be avoided. Additionally, model order reduction techniques can be employed to reduce problem size.
FDFD, and FDTD for that matter, does not lend itself well to complex geometries or multiscale structures, as the Yee grid is restricted mostly to rectangular structures. This can be circumvented by either using a very fine grid mesh (which increases computational cost), or by approximating the effects with surface boundary conditions. Non-uniform gridding can lead to spurious charges at the interface boundary, as the zero-divergence conditions are not maintained when the grid is not uniform along an interface boundary. E and H field continuity can be maintained to circumvent this problem by enforcing weak continuity across the interface using basis functions, as is done in FEM. Perfectly matched layer (PML) boundary conditions can also be used to truncate the grid, and avoid meshing empty space.
== Susceptance element equivalent circuit ==
The FDFD equations can be rearranged in such a way as to describe a second order equivalent circuit, where nodal voltages represent the E field components and branch currents represent the H field components. This equivalent circuit representation can be extremely useful, as techniques from circuit theory can be used to analyze or simplify the problem and can be used as a spice-like tool for three-dimensional electromagnetic simulation. This susceptance element equivalent circuit (SEEC) model has the advantages of a reduced number of unknowns, only having to solve for E field components, and second order model order reduction techniques can be employed.
== Applications ==
The FDFD method has been used to provide full wave simulation for modeling interconnects for various applications in electronic packaging. FDFD has also been used for various scattering problems at optical frequencies.
== See also ==
Finite-difference time-domain method
Finite element method
== References == | Wikipedia/Finite-difference_frequency-domain_method |
In scientific computation and simulation, the method of fundamental solutions (MFS) is a technique for solving partial differential equations based on using the fundamental solution as a basis function. The MFS was developed to overcome the major drawbacks of the boundary element method (BEM), which also uses the fundamental solution to satisfy the governing equation. Consequently, both the MFS and the BEM are boundary-discretization numerical techniques: they reduce the computational complexity by one dimension and have a particular edge over domain-type numerical techniques, such as the finite element and finite volume methods, for the solution of infinite-domain, thin-walled-structure, and inverse problems.
In contrast to the BEM, the MFS avoids the numerical integration of the singular fundamental solution and is an inherently meshfree method. The method is compromised, however, by requiring a controversial fictitious boundary outside the physical domain to circumvent the singularity of the fundamental solution, which has seriously restricted its applicability to real-world problems. Nevertheless, the MFS has been found to be very competitive in some application areas, such as infinite-domain problems.
The MFS is also known by different names in the literature, including the charge simulation method, the superposition method, the desingularized method, the indirect boundary element method and the virtual boundary element method.
== MFS formulation ==
Consider a partial differential equation governing a certain type of problem:
{\displaystyle Lu=f\left(x,y\right),\ \ \left(x,y\right)\in \Omega ,}
{\displaystyle u=g\left(x,y\right),\ \ \left(x,y\right)\in \partial \Omega _{D},}
{\displaystyle {\frac {\partial u}{\partial n}}=h\left(x,y\right),\ \ \left(x,y\right)\in \partial \Omega _{N},}
where {\displaystyle L} is the partial differential operator, {\displaystyle \Omega } represents the computational domain, and {\displaystyle \partial \Omega _{D}} and {\displaystyle \partial \Omega _{N}} denote the Dirichlet and Neumann boundaries, respectively, with {\displaystyle \partial \Omega _{D}\cup \partial \Omega _{N}=\partial \Omega } and {\displaystyle \partial \Omega _{D}\cap \partial \Omega _{N}=\varnothing }.
The MFS employs the fundamental solution of the operator as its basis function to represent the approximation of the unknown function u as follows:
{\displaystyle {{u}^{*}}\left(x,y\right)=\sum \limits _{i=1}^{N}\alpha _{i}\phi \left(r_{i}\right)}
where {\displaystyle r_{i}=\left\|\left(x,y\right)-\left(sx_{i},sy_{i}\right)\right\|} denotes the Euclidean distance between collocation points {\displaystyle \left(x,y\right)} and source points {\displaystyle \left(sx_{i},sy_{i}\right)}, {\displaystyle \phi \left(\cdot \right)} is the fundamental solution which satisfies {\displaystyle L\phi =\delta }, where {\displaystyle \delta } denotes the Dirac delta function, and {\displaystyle {{\alpha }_{i}}} are the unknown coefficients.
With the source points located outside the physical domain, the MFS avoids the singularity of the fundamental solution. Substituting the approximation into the boundary conditions yields the following matrix equation:
{\displaystyle \left[{\begin{matrix}\phi \left(\left.r_{j}\right|_{x_{i},y_{i}}\right)\\{\frac {\partial \phi \left(\left.r_{j}\right|_{x_{k},y_{k}}\right)}{\partial n}}\\\end{matrix}}\right]\ \cdot \ \alpha =\left({\begin{matrix}g\left(x_{i},y_{i}\right)\\h\left(x_{k},y_{k}\right)\\\end{matrix}}\right),}
where {\displaystyle \left(x_{i},y_{i}\right)} and {\displaystyle \left(x_{k},y_{k}\right)} denote the collocation points on the Dirichlet and Neumann boundaries, respectively. The unknown coefficients {\displaystyle \alpha _{i}} can be determined uniquely from the above algebraic equation, after which the numerical solution can be evaluated at any location in the physical domain.
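A small Python/NumPy sketch for Laplace's equation on the unit disk with Dirichlet data (the source-circle radius R and the point count N are hypothetical choices; the 2-D fundamental solution -ln(r)/2π is standard, and since MFS collocation matrices are typically ill-conditioned a least-squares solver is used):

import numpy as np

N, R = 64, 2.0                                 # sources on a circle of radius R > 1
t = 2 * np.pi * np.arange(N) / N
bx, by = np.cos(t), np.sin(t)                  # collocation points on the unit circle
sx, sy = R * np.cos(t), R * np.sin(t)          # source points on the fictitious boundary

def phi(x, y, px, py):
    # Fundamental solution of the 2-D Laplace operator
    r = np.hypot(x - px, y - py)
    return -np.log(r) / (2 * np.pi)

# Collocation matrix and Dirichlet data g(x, y) = x (harmonic, so well representable)
A = phi(bx[:, None], by[:, None], sx[None, :], sy[None, :])
g = bx
alpha, *_ = np.linalg.lstsq(A, g, rcond=None)

# Evaluate the MFS approximation anywhere inside the domain
u = phi(0.3, 0.4, sx, sy) @ alpha              # should be close to 0.3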
== History and recent developments ==
The ideas behind the MFS were developed primarily by V. D. Kupradze and M. A. Alexidze in the late 1950s and early 1960s. However, the method was first proposed as a computational technique much later by R. Mathon and R. L. Johnston in the late 1970s, followed by a number of papers by Mathon, Johnston and Graeme Fairweather with applications. The MFS then gradually became a useful tool for the solution of a large variety of physical and engineering problems.
In the 1990s, M. A. Golberg and C. S. Chen extended the MFS to deal with inhomogeneous equations and time-dependent problems, greatly expanding its applicability. Later developments indicated that the MFS can be used to solve partial differential equations with variable coefficients. The MFS has proved particularly effective for certain classes of problems such as inverse, unbounded domain, and free-boundary problems.
Some techniques have been developed to cure the fictitious boundary problem in the MFS, such as the boundary knot method, singular boundary method, and regularized meshless method.
== See also ==
Radial basis function
Boundary element method
Boundary knot method
Boundary particle method
Singular boundary method
Regularized meshless method
== References ==
== External links ==
International Center for Numerical Simulation Software in Engineering & Sciences | Wikipedia/Method_of_fundamental_solutions |
In mathematics, the Schwarz alternating method or alternating process is an iterative method introduced in 1869–1870 by Hermann Schwarz in the theory of conformal mapping. Given two overlapping regions in the complex plane in each of which the Dirichlet problem could be solved, Schwarz described an iterative method for solving the Dirichlet problem in their union, provided their intersection was suitably well behaved. This was one of several constructive techniques of conformal mapping developed by Schwarz as a contribution to the problem of uniformization, posed by Riemann in the 1850s and first resolved rigorously by Koebe and Poincaré in 1907. It furnished a scheme for uniformizing the union of two regions knowing how to uniformize each of them separately, provided their intersection was topologically a disk or an annulus. From 1870 onwards Carl Neumann also contributed to this theory.
In the 1950s Schwarz's method was generalized in the theory of partial differential equations to an iterative method for finding the solution of an elliptic boundary value problem on a domain which is the union of two overlapping subdomains. It involves solving the boundary value problem on each of the two subdomains in turn, taking always the last values of the approximate solution as the next boundary conditions. It is used in numerical analysis, under the name multiplicative Schwarz method (in opposition to additive Schwarz method) as a domain decomposition method.
== History ==
It was first formulated by H. A. Schwarz and served as a theoretical tool: its convergence for general second order elliptic partial differential equations was first proved much later, in 1951, by Solomon Mikhlin.
== The algorithm ==
The original problem considered by Schwarz was a Dirichlet problem (with Laplace's equation) on a domain consisting of a circle and a partially overlapping square. To solve the Dirichlet problem on one of the two subdomains (the square or the circle), the value of the solution must be known on the border: since a part of the border is contained in the other subdomain, the Dirichlet problem must be solved jointly on the two subdomains. An iterative algorithm is introduced:
Make a first guess of the solution on the circle's boundary part that is contained in the square
Solve the Dirichlet problem on the circle
Use the solution in (2) to approximate the solution on the square's boundary
Solve the Dirichlet problem on the square
Use the solution in (4) to approximate the solution on the circle's boundary, then go to step (2).
At convergence, the solution on the overlap is the same when computed on the square or on the circle.
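A one-dimensional analogue of these steps in Python/NumPy (a sketch under simplifying assumptions: -u″ = 1 on [0, 1] with zero boundary values, two overlapping subintervals, and a fixed number of alternations):

import numpy as np

n, f = 101, 1.0                      # global grid points on [0, 1], constant source
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.zeros(n)                      # initial guess (also the zero boundary data)

a, b = 40, 60                        # overlap: subdomain 1 = [0, x[b]], subdomain 2 = [x[a], 1]

def dirichlet_solve(u, lo, hi):
    # Solve -u'' = f on x[lo..hi] with boundary values u[lo], u[hi] (tridiagonal system)
    m = hi - lo - 1                  # interior unknowns
    A = np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1) - np.diag(np.ones(m - 1), -1)
    rhs = f * h**2 * np.ones(m)
    rhs[0] += u[lo]
    rhs[-1] += u[hi]
    u[lo + 1:hi] = np.linalg.solve(A, rhs)

for _ in range(20):                  # alternate until the overlap values agree
    dirichlet_solve(u, 0, b)         # subdomain 1, boundary value taken from subdomain 2
    dirichlet_solve(u, a, n - 1)     # subdomain 2, boundary value taken from subdomain 1

# Exact solution is x (1 - x) / 2; check: np.max(np.abs(u - x * (1 - x) / 2))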
== Optimized Schwarz methods ==
The convergence speed depends on the size of the overlap between the subdomains and on the transmission conditions (the boundary conditions used on the interfaces between the subdomains). It is possible to increase the convergence speed of the Schwarz methods by choosing adapted transmission conditions; these methods are then called optimized Schwarz methods.
== See also ==
Uniformization theorem
Schwarzian derivative
Schwarz triangle map
Schwarz reflection principle
Additive Schwarz method
== Notes ==
== References ==
Original papers
Schwarz, H.A. (1869), "Über einige Abbildungsaufgaben", J. Reine Angew. Math., 1869 (70): 105–120, doi:10.1515/crll.1869.70.105, S2CID 121291546
Schwarz, H.A. (1870a), "Über die Integration der partiellen Differentialgleichung ∂2u/∂x2 + ∂2u/∂y2 = 0 unter vorgeschriebenen Grenz- und Unstetigkeitbedingungen", Monatsberichte der Königlichen Akademie der Wissenschaft zu Berlin: 767–795
Schwarz, H. A. (1870b), "Über einen Grenzübergang durch alternierendes Verfahren", Vierteljahrsschrift der Naturforschenden Gesellschaft in Zürich, 15: 272–286, JFM 02.0214.02
Neumann, Carl (1870), "Zur Theorie des Potentiales", Math. Ann., 2 (3): 514, doi:10.1007/bf01448242, S2CID 122015888
Neumann, Carl (1877), Untersuchungen über das logarithmische und Newton'sche Potential, Teubner
Neumann, Carl (1884), Vorlesungen über Riemann's Theorie der abelschen Integrale (2nd ed.), Teubner
Conformal mapping and harmonic functions
Nevanlinna, Rolf (1939), "Über das alternierende Verfahren von Schwarz", J. Reine Angew. Math., 1939 (180): 121–128, doi:10.1515/crll.1939.180.121, S2CID 199546268
Nevanlinna, Rolf (1939), "Bemerkungen zum alternierenden Verfahren", Monatshefte für Mathematik und Physik, 48: 500–508, doi:10.1007/bf01696203, S2CID 123260734
Nevanlinna, Rolf (1953), Uniformisierung, Die Grundlehren der Mathematischen Wissenschaften in Einzeldarstellungen mit besonderer Berücksichtigung der Anwendungsgebiete, vol. 64, Springer
Sario, Leo (1953), "Alternating method on arbitrary Riemann surfaces", Pacific J. Math., 3 (3): 631–645, doi:10.2140/pjm.1953.3.631
Morgenstern, Dietrich (1956), "Begründung des alternierenden Verfahrens durch Orthogonalprojektion", Z. Angew. Math. Mech., 36 (7–8): 255–256, Bibcode:1956ZaMM...36..255M, doi:10.1002/zamm.19560360711, hdl:10338.dmlcz/100409
Cohn, Harvey (1980), Conformal mapping on Riemann surfaces, Dover, pp. 242–262, ISBN 0-486-64025-6, Chapter 12, Alternating Procedures
Garnett, John B.; Marshall, Donald E. (2005), Harmonic Measure, Cambridge University Press, ISBN 1139443097
Freitag, Eberhard (2011), Complex analysis. 2. Riemann surfaces, several complex variables, abelian functions, higher modular functions, Springer, ISBN 978-3-642-20553-8
de Saint-Gervais, Henri Paul (2016), Uniformization of Riemann Surfaces: revisiting a hundred-year-old theorem, Heritage of European Mathematics, translated by Robert G. Burns, European Mathematical Society, doi:10.4171/145, ISBN 978-3-03719-145-3, translation of French text
Chorlay, Renaud (2007), L'émergence du couple local-global dans les théories géométriques, de Bernhard Riemann à la théorie des faisceaux (PDF), pp. 123–134 (cited in de Saint-Gervais)
Bottazzini, Umberto; Gray, Jeremy (2013), Hidden Harmony—Geometric Fantasies: The Rise of Complex Function Theory, Sources and Studies in the History of Mathematics and Physical Sciences, Springer, ISBN 978-1461457251
PDEs and numerical analysis
Mikhlin, S.G. (1951), "On the Schwarz algorithm", Doklady Akademii Nauk SSSR, n. Ser. (in Russian), 77: 569–571, MR 0041329, Zbl 0054.04204
== External links ==
Solomentsev, E.D. (2001) [1994], "Schwarz alternating method", Encyclopedia of Mathematics, EMS Press | Wikipedia/Schwarz_alternating_method |
In numerical analysis, a multigrid method (MG method) is an algorithm for solving differential equations using a hierarchy of discretizations. They are an example of a class of techniques called multiresolution methods, very useful in problems exhibiting multiple scales of behavior. For example, many basic relaxation methods exhibit different rates of convergence for short- and long-wavelength components, suggesting these different scales be treated differently, as in a Fourier analysis approach to multigrid. MG methods can be used as solvers as well as preconditioners.
The main idea of multigrid is to accelerate the convergence of a basic iterative method (known as relaxation, which generally reduces short-wavelength error) by a global correction of the fine grid solution approximation from time to time, accomplished by solving a coarse problem. The coarse problem, while cheaper to solve, is similar to the fine grid problem in that it also has short- and long-wavelength errors. It can also be solved by a combination of relaxation and appeal to still coarser grids. This recursive process is repeated until a grid is reached where the cost of direct solution there is negligible compared to the cost of one relaxation sweep on the fine grid. This multigrid cycle typically reduces all error components by a fixed amount bounded well below one, independent of the fine grid mesh size. The typical application for multigrid is in the numerical solution of elliptic partial differential equations in two or more dimensions.
Multigrid methods can be applied in combination with any of the common discretization techniques. For example, the finite element method may be recast as a multigrid method. In these cases, multigrid methods are among the fastest solution techniques known today. In contrast to other methods, multigrid methods are general in that they can treat arbitrary regions and boundary conditions. They do not depend on the separability of the equations or other special properties of the equation. They have also been widely used for more-complicated non-symmetric and nonlinear systems of equations, like the Lamé equations of elasticity or the Navier-Stokes equations.
== Algorithm ==
There are many variations of multigrid algorithms, but the common features are that a hierarchy of discretizations (grids) is considered. The important steps are:
Smoothing – reducing high frequency errors, for example using a few iterations of the Gauss–Seidel method.
Residual Computation – computing residual error after the smoothing operation(s).
Restriction – downsampling the residual error to a coarser grid.
Interpolation or prolongation – interpolating a correction computed on a coarser grid into a finer grid.
Correction – Adding prolongated coarser grid solution onto the finer grid.
There are many choices of multigrid methods with varying trade-offs between the speed of a single iteration and the rate of convergence per iteration. The three main types are the V-Cycle, F-Cycle, and W-Cycle. These differ in which and how many coarse-grain cycles are performed per fine iteration. The V-Cycle algorithm executes one coarse-grain V-Cycle. The F-Cycle does a coarse-grain V-Cycle followed by a coarse-grain F-Cycle, while each W-Cycle performs two coarse-grain W-Cycles per iteration. For a discrete 2D problem, an F-Cycle takes 83% more time to compute than a V-Cycle iteration, while a W-Cycle iteration takes 125% more. If the problem is set up in a 3D domain, then an F-Cycle iteration and a W-Cycle iteration take about 64% and 75% more time, respectively, than a V-Cycle iteration, ignoring overheads. Typically, the W-Cycle produces similar convergence to the F-Cycle. However, in cases of convection-diffusion problems with high Péclet numbers, the W-Cycle can show superiority in its rate of convergence per iteration over the F-Cycle. The choice of smoothing operators is extremely diverse, including Krylov subspace methods, which can themselves be preconditioned.
Any geometric multigrid cycle iteration is performed on a hierarchy of grids and hence it can be coded using recursion. Since the function calls itself with smaller sized (coarser) parameters, the coarsest grid is where the recursion stops. In cases where the system has a high condition number, the correction procedure is modified such that only a fraction of the prolongated coarser grid solution is added onto the finer grid.
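A compact recursive V-cycle sketch in Python/NumPy for the 1-D Poisson problem -u″ = f (illustrative choices: weighted-Jacobi smoothing, full-weighting restriction, linear interpolation; the number of grid intervals is assumed to be a power of two):

import numpy as np

def v_cycle(u, f, h, n_smooth=3):
    # One V-cycle for -u'' = f on a uniform grid including both boundary points.
    def smooth(u):
        for _ in range(n_smooth):   # weighted Jacobi, damps high-frequency error
            u[1:-1] += (2.0 / 3.0) * (0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1]) - u[1:-1])
        return u

    u = smooth(u)
    if len(u) <= 3:                 # coarsest grid: smoothing acts as the direct solve
        return u
    # Residual r = f - A u for the operator (A u)_i = (2 u_i - u_{i-1} - u_{i+1}) / h^2
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    # Full-weighting restriction of the residual to the coarse grid
    rc = np.zeros((len(u) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-3:2] + 2 * r[2:-2:2] + r[3:-1:2])
    # Recursive coarse-grid correction (solves the error equation -e'' = r)
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, n_smooth)
    # Linear interpolation (prolongation) of the correction, then post-smoothing
    e = np.zeros_like(u)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e)

# Typical use: repeat u = v_cycle(u, f, h) until the residual is small enough.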
== Computational cost ==
This approach has the advantage over other methods that it often scales linearly with the number of discrete nodes used. In other words, it can solve these problems to a given accuracy in a number of operations that is proportional to the number of unknowns.
Assume that one has a differential equation which can be solved approximately (with a given accuracy) on a grid {\displaystyle i} with a given grid point density {\displaystyle N_{i}}. Assume furthermore that a solution on any grid {\displaystyle N_{i}} may be obtained with a given effort {\displaystyle W_{i}=\rho KN_{i}} from a solution on a coarser grid {\displaystyle i+1}. Here, {\displaystyle \rho =N_{i+1}/N_{i}<1} is the ratio of grid points on "neighboring" grids and is assumed to be constant throughout the grid hierarchy, and {\displaystyle K} is some constant modeling the effort of computing the result for one grid point.
The following recurrence relation is then obtained for the effort of obtaining the solution on grid {\displaystyle k}:
{\displaystyle W_{k}=W_{k+1}+\rho KN_{k}}
And in particular, we find for the finest grid {\displaystyle N_{1}} that
{\displaystyle W_{1}=W_{2}+\rho KN_{1}}
Combining these two expressions (and using {\displaystyle N_{k}=\rho ^{k-1}N_{1}}) gives
{\displaystyle W_{1}=KN_{1}\sum _{p=0}^{n}\rho ^{p}}
Using the geometric series, we then find (for finite {\displaystyle n})
{\displaystyle W_{1}<KN_{1}{\frac {1}{1-\rho }},}
that is, a solution may be obtained in {\displaystyle O(N)} time. One exception should be mentioned: W-cycle multigrid applied to a 1D problem results in {\displaystyle O(N\log N)} complexity.
== Multigrid preconditioning ==
A multigrid method with an intentionally reduced tolerance can be used as an efficient preconditioner for an external iterative solver. The solution may still be obtained in {\displaystyle O(N)} time, as in the case where the multigrid method is used as a solver. Multigrid preconditioning is used in practice even for linear systems, typically with one cycle per iteration, e.g., in Hypre. Its main advantage versus a purely multigrid solver is particularly clear for nonlinear problems, e.g., eigenvalue problems.
If the matrix of the original equation or an eigenvalue problem is symmetric positive definite (SPD), the preconditioner is commonly constructed to be SPD as well, so that the standard conjugate gradient (CG) iterative methods can still be used. Such imposed SPD constraints may complicate the construction of the preconditioner, e.g., requiring coordinated pre- and post-smoothing. However, preconditioned steepest descent and flexible CG methods for SPD linear systems and LOBPCG for symmetric eigenvalue problems are all shown to be robust if the preconditioner is not SPD.
== Bramble–Pasciak–Xu preconditioner ==
Originally described in Xu's Ph.D. thesis and later published in Bramble–Pasciak–Xu, the BPX-preconditioner is one of the two major multigrid approaches (the other being the classic multigrid algorithm such as the V-cycle) for solving large-scale algebraic systems that arise from the discretization of models in science and engineering described by partial differential equations. In view of the subspace correction framework, the BPX preconditioner is a parallel subspace correction method, whereas the classic V-cycle is a successive subspace correction method. The BPX-preconditioner is known to be naturally more parallel and in some applications more robust than the classic V-cycle multigrid method. The method has been widely used by researchers and practitioners since 1990.
== Generalized multigrid methods ==
Multigrid methods can be generalized in many different ways. They can be applied naturally in a time-stepping solution of parabolic partial differential equations, or they can be applied directly to time-dependent partial differential equations. Research on multilevel techniques for hyperbolic partial differential equations is underway. Multigrid methods can also be applied to integral equations, or for problems in statistical physics.
Another set of multiresolution methods is based upon wavelets. These wavelet methods can be combined with multigrid methods. For example, one use of wavelets is to reformulate the finite element approach in terms of a multilevel method.
Adaptive multigrid exhibits adaptive mesh refinement, that is, it adjusts the grid as the computation proceeds, in a manner dependent upon the computation itself. The idea is to increase resolution of the grid only in regions of the solution where it is needed.
== Algebraic multigrid (AMG) ==
Practically important extensions of multigrid methods include techniques where no partial differential equation nor geometrical problem background is used to construct the multilevel hierarchy. Such algebraic multigrid methods (AMG) construct their hierarchy of operators directly from the system matrix. In classical AMG, the levels of the hierarchy are simply subsets of unknowns without any geometric interpretation. (More generally, coarse grid unknowns can be particular linear combinations of fine grid unknowns.) Thus, AMG methods become black-box solvers for certain classes of sparse matrices. AMG is regarded as advantageous mainly where geometric multigrid is too difficult to apply, but is often used simply because it avoids the coding necessary for a true multigrid implementation. While classical AMG was developed first, a related algebraic method is known as smoothed aggregation (SA).
In an overview paper by Jinchao Xu and Ludmil Zikatanov, the "algebraic multigrid" methods are understood from an abstract point of view. They developed a unified framework and existing algebraic multigrid methods can be derived coherently. Abstract theory about how to construct optimal coarse space as well as quasi-optimal spaces was derived. Also, they proved that, under appropriate assumptions, the abstract two-level AMG method converges uniformly with respect to the size of the linear system, the coefficient variation, and the anisotropy. Their abstract framework covers most existing AMG methods, such as classical AMG, energy-minimization AMG, unsmoothed and smoothed aggregation AMG, and spectral AMGe.
== Multigrid in time methods ==
Multigrid methods have also been adopted for the solution of initial value problems.
Of particular interest here are parallel-in-time multigrid methods:
in contrast to classical Runge–Kutta or linear multistep methods, they can offer concurrency in the temporal direction.
The well known Parareal parallel-in-time integration method can also be reformulated as a two-level multigrid in time.
== Multigrid for nearly singular problems ==
Nearly singular problems arise in a number of important physical and engineering applications. A simple but important example of a nearly singular problem is found in the displacement formulation of linear elasticity for nearly incompressible materials. Typically, the major problem in solving such nearly singular systems boils down to treating the nearly singular operator given by {\displaystyle A+\varepsilon M} robustly with respect to the positive but small parameter {\displaystyle \varepsilon }. Here {\displaystyle A} is a symmetric semidefinite operator with a large null space, while {\displaystyle M} is a symmetric positive definite operator. There have been many works attempting to design a robust and fast multigrid method for such nearly singular problems. A general design principle for achieving a convergence rate independent of the parameters (e.g., the mesh size and physical parameters such as the Poisson's ratio that appear in the nearly singular operator) is the following: on each grid, the space decomposition on which the smoothing is based must be constructed so that the null space of the singular part of the nearly singular operator is included in the sum of the local null spaces, i.e., the intersections of that null space with the local spaces of the decomposition.
== Notes ==
== References ==
G. P. Astrachancev (1971), An iterative method of solving elliptic net problems. USSR Comp. Math. Math. Phys. 11, 171–182.
N. S. Bakhvalov (1966), On the convergence of a relaxation method with natural constraints on the elliptic operator. USSR Comp. Math. Math. Phys. 6, 101–13.
Achi Brandt (April 1977), "Multi-Level Adaptive Solutions to Boundary-Value Problems", Mathematics of Computation, 31: 333–90.
William L. Briggs, Van Emden Henson, and Steve F. McCormick (2000), A Multigrid Tutorial (2nd ed.), Philadelphia: Society for Industrial and Applied Mathematics, ISBN 0-89871-462-1.
R. P. Fedorenko (1961), A relaxation method for solving elliptic difference equations. USSR Comput. Math. Math. Phys. 1, p. 1092.
R. P. Fedorenko (1964), The speed of convergence of one iterative process. USSR Comput. Math. Math. Phys. 4, p. 227.
Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Section 20.6. Multigrid Methods for Boundary Value Problems". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
== External links ==
Links to AMG presentations | Wikipedia/Multigrid_method |
In numerical mathematics, the gradient discretisation method (GDM) is a framework which contains classical and recent numerical schemes for diffusion problems of various kinds: linear or non-linear, steady-state or time-dependent. The schemes may be conforming or non-conforming, and may rely on very general polygonal or polyhedral meshes (or may even be meshless).
Some core properties are required to prove the convergence of a GDM. These core properties enable complete proofs of convergence of the GDM for elliptic and parabolic problems, linear or non-linear. For linear problems, stationary or transient, error estimates can be established based on three indicators specific to the GDM (the quantities {\displaystyle C_{D}}, {\displaystyle S_{D}} and {\displaystyle W_{D}}, see below). For non-linear problems, the proofs are based on compactness techniques and do not require any non-physical strong regularity assumption on the solution or the model data. Non-linear models for which such convergence proofs of the GDM have been carried out comprise: the Stefan problem modelling a melting material, two-phase flows in porous media, the Richards equation of underground water flow, and the fully non-linear Leray–Lions equations.
Any scheme entering the GDM framework is then known to converge on all these problems. This applies in particular to conforming Finite Elements, Mixed Finite Elements, nonconforming Finite Elements, and, in the case of more recent schemes, the Discontinuous Galerkin method, the Hybrid Mixed Mimetic method, the Nodal Mimetic Finite Difference method, some Discrete Duality Finite Volume schemes, and some Multi-Point Flux Approximation schemes.
== The example of a linear diffusion problem ==
Consider Poisson's equation in a bounded open domain {\displaystyle \Omega \subset \mathbb {R} ^{d}}, with homogeneous Dirichlet boundary condition:
{\displaystyle -\Delta {\overline {u}}=f,\qquad {\overline {u}}|_{\partial \Omega }=0,\qquad (1)}
where {\displaystyle f\in L^{2}(\Omega )}. The usual sense of weak solution to this model is: find {\displaystyle {\overline {u}}\in H_{0}^{1}(\Omega )} such that
{\displaystyle \int _{\Omega }\nabla {\overline {u}}\cdot \nabla {\overline {v}}\,dx=\int _{\Omega }f{\overline {v}}\,dx\quad {\text{for all }}{\overline {v}}\in H_{0}^{1}(\Omega ).\qquad (2)}
In a nutshell, the GDM for such a model consists in selecting a finite-dimensional space and two reconstruction operators (one for the functions, one for the gradients) and substituting these discrete elements in lieu of the continuous elements in (2). More precisely, the GDM starts by defining a Gradient Discretization (GD), which is a triplet {\displaystyle D=(X_{D,0},\Pi _{D},\nabla _{D})}, where:
the set of discrete unknowns {\displaystyle X_{D,0}} is a finite-dimensional real vector space,
the function reconstruction {\displaystyle \Pi _{D}~:~X_{D,0}\to L^{2}(\Omega )} is a linear mapping that reconstructs, from an element of {\displaystyle X_{D,0}}, a function over {\displaystyle \Omega },
the gradient reconstruction {\displaystyle \nabla _{D}~:~X_{D,0}\to L^{2}(\Omega )^{d}} is a linear mapping which reconstructs, from an element of {\displaystyle X_{D,0}}, a "gradient" (vector-valued function) over {\displaystyle \Omega }. This gradient reconstruction must be chosen such that {\displaystyle \Vert \nabla _{D}\cdot \Vert _{L^{2}(\Omega )^{d}}} is a norm on {\displaystyle X_{D,0}}.
The related Gradient Scheme for the approximation of (2) is given by: find {\displaystyle u\in X_{D,0}} such that
{\displaystyle \int _{\Omega }\nabla _{D}u\cdot \nabla _{D}v\,dx=\int _{\Omega }f\,\Pi _{D}v\,dx\quad {\text{for all }}v\in X_{D,0}.\qquad (3)}
The GDM is then in this case a nonconforming method for the approximation of (2), which includes the nonconforming finite element method. Note that the reciprocal is not true, in the sense that the GDM framework includes methods such that the function {\displaystyle \nabla _{D}u} cannot be computed from the function {\displaystyle \Pi _{D}u}.
The following error estimate, inspired by G. Strang's second lemma, holds:
{\displaystyle \Vert \nabla {\overline {u}}-\nabla _{D}u\Vert _{L^{2}(\Omega )^{d}}\leq W_{D}(\nabla {\overline {u}})+2S_{D}({\overline {u}})\qquad (4)}
and
{\displaystyle \Vert {\overline {u}}-\Pi _{D}u\Vert _{L^{2}(\Omega )}\leq C_{D}W_{D}(\nabla {\overline {u}})+(C_{D}+1)S_{D}({\overline {u}}),\qquad (5)}
defining:
{\displaystyle C_{D}=\max _{v\in X_{D,0}\setminus \{0\}}{\frac {\Vert \Pi _{D}v\Vert _{L^{2}(\Omega )}}{\Vert \nabla _{D}v\Vert _{L^{2}(\Omega )^{d}}}},\qquad (6)}
which measures the coercivity (discrete Poincaré constant),
{\displaystyle S_{D}(\varphi )=\min _{v\in X_{D,0}}\left(\Vert \Pi _{D}v-\varphi \Vert _{L^{2}(\Omega )}+\Vert \nabla _{D}v-\nabla \varphi \Vert _{L^{2}(\Omega )^{d}}\right),\qquad (7)}
which measures the interpolation error,
{\displaystyle W_{D}(\varphi )=\max _{v\in X_{D,0}\setminus \{0\}}{\frac {\left|\int _{\Omega }\left(\nabla _{D}v\cdot \varphi +\Pi _{D}v\operatorname {div} \varphi \right)\,dx\right|}{\Vert \nabla _{D}v\Vert _{L^{2}(\Omega )^{d}}}},\qquad (8)}
which measures the defect of conformity. Note that upper and lower bounds of the approximation error can be derived from (4) and (5) in terms of {\displaystyle S_{D}} and {\displaystyle W_{D}}.
Then the core properties which are necessary and sufficient for the convergence of the method are, for a family of GDs, the coercivity, GD-consistency and limit-conformity properties, as defined in the next section. More generally, these three core properties are sufficient to prove the convergence of the GDM for linear problems and for some nonlinear problems like the {\displaystyle p}-Laplace problem. For nonlinear problems such as nonlinear diffusion or degenerate parabolic problems, two other core properties may be required; they are added in the next section.
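As a concrete instance, the following Python/NumPy sketch realizes the gradient scheme (3) on (0, 1), where X_{D,0} holds interior nodal values, Π_D is the piecewise linear interpolant and ∇_D the resulting piecewise constant gradient (a conforming choice, so this reproduces the P1 finite element method; the mesh and the right-hand side f = 1 are arbitrary illustrative choices):

import numpy as np

n = 64                                  # number of mesh cells on (0, 1)
x = np.linspace(0.0, 1.0, n + 1)
h = np.diff(x)                          # cell sizes

def grad_D(u):
    # Gradient reconstruction: piecewise constant slope on each cell,
    # with the homogeneous Dirichlet values appended at both ends
    v = np.concatenate(([0.0], u, [0.0]))
    return np.diff(v) / h

basis = np.eye(n - 1)                   # nodal basis of X_{D,0}
G = np.array([grad_D(e) for e in basis])
A = (G * h) @ G.T                       # A_ij = integral of grad_D e_i . grad_D e_j
b = 0.5 * (h[:-1] + h[1:])              # integral of f * Pi_D e_i with f = 1
u = np.linalg.solve(A, b)               # nodal values of the continuous solution x (1 - x) / 2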
== The core properties allowing for the convergence of a GDM ==
Let {\displaystyle (D_{m})_{m\in \mathbb {N} }} be a family of GDs, defined as above (generally associated with a sequence of regular meshes whose size tends to 0).
=== Coercivity ===
The sequence {\displaystyle (C_{D_{m}})_{m\in \mathbb {N} }} (defined by (6)) remains bounded.
=== GD-consistency ===
For all {\displaystyle \varphi \in H_{0}^{1}(\Omega )}, {\displaystyle \lim _{m\to \infty }S_{D_{m}}(\varphi )=0} (defined by (7)).
=== Limit-conformity ===
For all {\displaystyle \varphi \in H_{\operatorname {div} }(\Omega )}, {\displaystyle \lim _{m\to \infty }W_{D_{m}}(\varphi )=0} (defined by (8)).
This property implies the coercivity property.
=== Compactness (needed for some nonlinear problems) ===
For every sequence {\displaystyle (u_{m})_{m\in \mathbb {N} }} such that {\displaystyle u_{m}\in X_{D_{m},0}} for all {\displaystyle m\in \mathbb {N} } and {\displaystyle (\Vert u_{m}\Vert _{D_{m}})_{m\in \mathbb {N} }} is bounded, the sequence {\displaystyle (\Pi _{D_{m}}u_{m})_{m\in \mathbb {N} }} is relatively compact in {\displaystyle L^{2}(\Omega )} (this property implies the coercivity property).
=== Piecewise constant reconstruction (needed for some nonlinear problems) ===
Let {\displaystyle D=(X_{D,0},\Pi _{D},\nabla _{D})} be a gradient discretisation as defined above. The operator {\displaystyle \Pi _{D}} is a piecewise constant reconstruction if there exists a basis {\displaystyle (e_{i})_{i\in B}} of {\displaystyle X_{D,0}} and a family of disjoint subsets {\displaystyle (\Omega _{i})_{i\in B}} of {\displaystyle \Omega } such that {\textstyle \Pi _{D}u=\sum _{i\in B}u_{i}\chi _{\Omega _{i}}} for all {\textstyle u=\sum _{i\in B}u_{i}e_{i}\in X_{D,0}}, where {\displaystyle \chi _{\Omega _{i}}} is the characteristic function of {\displaystyle \Omega _{i}}.
== Some non-linear problems with complete convergence proofs of the GDM ==
We review some problems for which the GDM can be proved to converge when the above core properties are satisfied.
=== Nonlinear stationary diffusion problems ===
{\displaystyle -\operatorname {div} (\Lambda ({\overline {u}})\nabla {\overline {u}})=f}
In this case, the GDM converges under the coercivity, GD-consistency, limit-conformity and compactness properties.
=== p-Laplace problem for p > 1 ===
{\displaystyle -\operatorname {div} \left(|\nabla {\overline {u}}|^{p-2}\nabla {\overline {u}}\right)=f}
In this case, the core properties must be written replacing {\displaystyle L^{2}(\Omega )} by {\displaystyle L^{p}(\Omega )}, {\displaystyle H_{0}^{1}(\Omega )} by {\displaystyle W_{0}^{1,p}(\Omega )} and {\displaystyle H_{\operatorname {div} }(\Omega )} by {\displaystyle W_{\operatorname {div} }^{p'}(\Omega )} with {\textstyle {\frac {1}{p}}+{\frac {1}{p'}}=1}, and the GDM converges only under the coercivity, GD-consistency and limit-conformity properties.
=== Linear and nonlinear heat equation ===
{\displaystyle \partial _{t}{\overline {u}}-\operatorname {div} (\Lambda ({\overline {u}})\nabla {\overline {u}})=f}
In this case, the GDM converges under the coercivity, GD-consistency (adapted to space-time problems), limit-conformity and compactness (for the nonlinear case) properties.
=== Degenerate parabolic problems ===
Assume that {\displaystyle \beta } and {\displaystyle \zeta } are nondecreasing Lipschitz continuous functions:
{\displaystyle \partial _{t}\beta ({\overline {u}})-\Delta \zeta ({\overline {u}})=f}
Note that, for this problem, the piecewise constant reconstruction property is needed, in addition to the coercivity, GD-consistency (adapted to space-time problems), limit-conformity and compactness properties.
== Review of some numerical methods which are GDM ==
All the methods below satisfy the first four core properties of GDM (coercivity, GD-consistency, limit-conformity, compactness), and in some cases the fifth one (piecewise constant reconstruction).
=== Galerkin methods and conforming finite element methods ===
Let {\displaystyle V_{h}\subset H_{0}^{1}(\Omega )} be spanned by the finite basis {\displaystyle (\psi _{i})_{i\in I}}. The Galerkin method in {\displaystyle V_{h}} is identical to the GDM where one defines
{\displaystyle X_{D,0}=\{u=(u_{i})_{i\in I}\}=\mathbb {R} ^{I},}
{\displaystyle \Pi _{D}u=\sum _{i\in I}u_{i}\psi _{i}}
{\displaystyle \nabla _{D}u=\sum _{i\in I}u_{i}\nabla \psi _{i}.}
In this case, {\displaystyle C_{D}} is the constant involved in the continuous Poincaré inequality, and, for all {\displaystyle \varphi \in H_{\operatorname {div} }(\Omega )}, {\displaystyle W_{D}(\varphi )=0} (defined by (8)). Then (4) and (5) are implied by Céa's lemma.
The "mass-lumped"
P
1
{\displaystyle P^{1}}
finite element case enters the framework of the GDM, replacing
Π
D
u
{\displaystyle \Pi _{D}u}
by
Π
~
D
u
=
∑
i
∈
I
u
i
χ
Ω
i
{\textstyle {\widetilde {\Pi }}_{D}u=\sum _{i\in I}u_{i}\chi _{\Omega _{i}}}
, where
Ω
i
{\displaystyle \Omega _{i}}
is a dual cell centred on the vertex indexed by
i
∈
I
{\displaystyle i\in I}
. Using mass lumping allows to get the piecewise constant reconstruction property.
=== Nonconforming finite element ===
On a mesh {\displaystyle T} which is a conforming set of simplices of {\displaystyle \mathbb {R} ^{d}}, the nonconforming {\displaystyle P^{1}} finite elements are defined by the basis {\displaystyle (\psi _{i})_{i\in I}} of the functions which are affine in any {\displaystyle K\in T}, and whose value at the centre of gravity of one given face of the mesh is 1 and 0 at all the others (these finite elements are used in [Crouzeix et al] for the approximation of the Stokes and Navier-Stokes equations). Then the method enters the GDM framework with the same definition as in the case of the Galerkin method, except for the fact that {\displaystyle \nabla \psi _{i}} must be understood as the "broken gradient" of {\displaystyle \psi _{i}}, in the sense that it is the piecewise constant function equal in each simplex to the gradient of the affine function in the simplex.
=== Mixed finite element ===
The mixed finite element method consists in defining two discrete spaces, one for the approximation of {\displaystyle \nabla {\overline {u}}} and another one for {\displaystyle {\overline {u}}}. It suffices to use the discrete relations between these approximations to define a GDM. Using the low-degree Raviart–Thomas basis functions allows one to obtain the piecewise constant reconstruction property.
=== Discontinuous Galerkin method ===
The Discontinuous Galerkin method consists in approximating problems by a piecewise polynomial function, without requirements on the jumps from one element to the next. It is plugged into the GDM framework by including in the discrete gradient a jump term, acting as the regularization of the gradient in the distribution sense.
=== Mimetic finite difference method and nodal mimetic finite difference method ===
This family of methods was introduced by [Brezzi et al] and completed in [Lipnikov et al]. It allows the approximation of elliptic problems using a large class of polyhedral meshes. The proof that it enters the GDM framework is done in [Droniou et al].
== See also ==
Finite element method
== References ==
== External links ==
The Gradient Discretisation Method by Jérôme Droniou, Robert Eymard, Thierry Gallouët, Cindy Guichard and Raphaèle Herbin | Wikipedia/Gradient_discretisation_method |
In numerical analysis, mortar methods are discretization methods for partial differential equations which use separate finite element discretizations on nonoverlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously chosen to preserve the accuracy of the solution. Mortar discretizations lend themselves naturally to solution by iterative domain decomposition methods such as FETI and balancing domain decomposition. In engineering practice in the finite element method, continuity of solutions between non-matching subdomains is implemented by multiple-point constraints.
Similar to penalty methods, mortar methods are explicit in their nature, i.e. they require the contacting surfaces to be defined. This is in contrast to fully implicit methods, such as the third medium contact method, where contacting surfaces do not need to be defined.
== References == | Wikipedia/Mortar_methods |
In fluid dynamics, slosh refers to the movement of liquid inside another object (which is, typically, also undergoing motion).
Strictly speaking, the liquid must have a free surface to constitute a slosh dynamics problem, where the dynamics of the liquid can interact with the container to alter the system dynamics significantly. Important examples include propellant slosh in spacecraft tanks and rockets (especially upper stages), and the free surface effect (cargo slosh) in ships and trucks transporting liquids (for example oil and gasoline).
However, it has become common to refer to liquid motion in a completely filled tank, i.e. without a free surface, as "fuel slosh".
Such motion is characterized by "inertial waves" and can be an important effect in spinning spacecraft dynamics. Extensive mathematical and empirical relationships have been derived to describe liquid slosh. These types of analyses are typically undertaken using computational fluid dynamics and finite element methods to solve the fluid-structure interaction problem, especially if the solid container is flexible. Relevant fluid dynamics non-dimensional parameters include the Bond number, the Weber number, and the Reynolds number.
Slosh is an important effect for spacecraft, ships, some land vehicles and some aircraft. Slosh was a factor in the Falcon 1 second test flight anomaly, and has been implicated in various other spacecraft anomalies, including a near-disaster with the Near Earth Asteroid Rendezvous (NEAR Shoemaker) satellite.
== Spacecraft effects ==
Liquid slosh in microgravity is relevant to spacecraft, most commonly Earth-orbiting satellites, and must take account of liquid surface tension which can alter the shape (and thus the eigenvalues) of the liquid slug. Typically, a large fraction of the mass of a satellite is liquid propellant at/near Beginning of Life (BOL), and slosh can adversely affect satellite performance in a number of ways. For example, propellant slosh can introduce uncertainty in spacecraft attitude (pointing) which is often called jitter. Similar phenomena can cause pogo oscillation and can result in structural failure of a space vehicle.
Another example is problematic interaction with the spacecraft's Attitude Control System (ACS), especially for spinning satellites which can suffer resonance between slosh and nutation, or adverse changes to the rotational inertia. Because of these types of risk, in the 1960s the National Aeronautics and Space Administration (NASA) extensively studied liquid slosh in spacecraft tanks, and in the 1990s NASA undertook the Middeck 0-Gravity Dynamics Experiment on the Space Shuttle. The European Space Agency has advanced these investigations with the launch of SLOSHSAT. Most spinning spacecraft since 1980 have been tested at the Applied Dynamics Laboratories drop tower using sub-scale models. Extensive contributions have also been made by the Southwest Research Institute, but research is widespread in academia and industry.
Research is continuing into slosh effects on in-space propellant depots. In October 2009, the United States Air Force and United Launch Alliance (ULA) performed an experimental on-orbit demonstration on a modified Centaur upper stage on the DMSP-18 satellite launch in order to improve "understanding of propellant settling and slosh"; the light weight of DMSP-18 left 12,000 pounds (5,400 kg) of remaining LO2 and LH2 propellant, 28% of Centaur's capacity, for the on-orbit tests. The post-spacecraft mission extension ran 2.4 hours before the planned deorbit burn was executed.
NASA's Launch Services Program is working on two ongoing slosh fluid dynamics experiments with partners: CRYOTE and SPHERES-Slosh. ULA has planned additional small-scale demonstrations of cryogenic fluid management with project CRYOTE in 2012–2014, leading to a ULA large-scale cryo-sat propellant depot test under the NASA flagship technology demonstrations program in 2015. SPHERES-Slosh, with the Florida Institute of Technology and the Massachusetts Institute of Technology, will examine how liquids move around inside containers in microgravity using the SPHERES testbed on the International Space Station.
== Sloshing in road tank vehicles ==
Liquid sloshing strongly and adversely influences the directional dynamics and safety performance of highway tank vehicles. Hydrodynamic forces and moments arising from liquid cargo oscillations in the tank under steering and/or braking maneuvers reduce the stability limit and controllability of partially filled tank vehicles. Anti-slosh devices such as baffles are widely used to limit the adverse effect of liquid slosh on the directional performance and stability of tank vehicles. Since tankers usually carry dangerous liquids such as ammonia, gasoline and fuel oils, the stability of partially filled liquid cargo vehicles is very important. Optimizations and sloshing-reduction techniques for tanks of elliptical, rectangular, modified-oval and generic shape have been studied at different filling levels using numerical, analytical and analog analyses. Most of these studies concentrate on the effect of baffles on sloshing, while the influence of the tank cross-section is largely ignored.
The Bloodhound LSR 1,000 mph project car utilizes a liquid-fuelled rocket that requires a specially-baffled oxidizer tank to prevent directional instability, rocket thrust variations and even oxidizer tank damage.
== Practical effects ==
Sloshing or shifting cargo, water ballast, or other liquid (e.g., from leaks or fire fighting) can cause disastrous capsizing in ships due to free surface effect; this can also affect trucks and aircraft.
The effect of slosh is used to limit the bounce of a roller hockey ball. Water slosh can significantly reduce the rebound height of a ball but some amounts of liquid seem to lead to a resonance effect. Many of the balls for roller hockey commonly available contain water to reduce the bounce height.
== See also ==
Seiche, a phenomenon affecting lakes and other constrained bodies of water
Splash (fluid mechanics), other free surface phenomena
Succussion splash, audible medical sign
== References ==
=== Other references ===
Meserole, J. S.; Fortini, A. (December 1987). "Slosh dynamics in a toroidal tank". Journal of Spacecraft and Rockets. 24 (6): 523–531. Bibcode:1987JSpRo..24..523M. doi:10.2514/3.25948.
(in English) NASA (1969), Slosh suppression Archived 2013-01-08 at the Wayback Machine, May 1969, PDF, 36p
(in English) NASA (1966), Dynamic behavior of liquids in moving containers with applications to propellants in space vehicle fuel tanks, Jan 1, 1966, PDF, 464 p | Wikipedia/Slosh_dynamics |
In mathematics, numerical analysis, and numerical partial differential equations, domain decomposition methods solve a boundary value problem by splitting it into smaller boundary value problems on subdomains and iterating to coordinate the solution between adjacent subdomains. A coarse problem with one or few unknowns per subdomain is used to further coordinate the solution between the subdomains globally. The problems on the subdomains are independent, which makes domain decomposition methods suitable for parallel computing. Domain decomposition methods are typically used as preconditioners for Krylov space iterative methods, such as the conjugate gradient method, GMRES, and LOBPCG.
In overlapping domain decomposition methods, the subdomains overlap by more than the interface. Overlapping domain decomposition methods include the Schwarz alternating method and the additive Schwarz method. Many domain decomposition methods can be written and analyzed as a special case of the abstract additive Schwarz method.
In non-overlapping methods, the subdomains intersect only on their interface. In primal methods, such as Balancing domain decomposition and BDDC, the continuity of the solution across subdomain interface is enforced by representing the value of the solution on all neighboring subdomains by the same unknown. In dual methods, such as FETI, the continuity of the solution across the subdomain interface is enforced by Lagrange multipliers. The FETI-DP method is hybrid between a dual and a primal method.
Non-overlapping domain decomposition methods are also called iterative substructuring methods.
Mortar methods are discretization methods for partial differential equations, which use separate discretization on nonoverlapping subdomains. The meshes on the subdomains do not match on the interface, and the equality of the solution is enforced by Lagrange multipliers, judiciously chosen to preserve the accuracy of the solution. In the engineering practice in the finite element method, continuity of solutions between non-matching subdomains is implemented by multiple-point constraints.
Finite element simulations of moderate-size models require solving linear systems with millions of unknowns. Several hours per time step is an average sequential run time, so parallel computing is a necessity. Domain decomposition methods embody a large potential for parallelization of the finite element method and serve as a basis for distributed, parallel computations.
== Example 1: 1D Linear BVP ==
{\displaystyle u''(x)-u(x)=0}
{\displaystyle u(0)=0,u(1)=1}
The exact solution is:
{\displaystyle u(x)={\frac {e^{x}-e^{-x}}{e^{1}-e^{-1}}}}
Subdivide the domain into two subdomains, one from {\displaystyle \left[0,{\frac {1}{2}}\right]} and another from {\displaystyle \left[{\frac {1}{2}},1\right]}. In the left subdomain define the interpolating function {\displaystyle v_{1}(x)} and in the right define {\displaystyle v_{2}(x)}. At the interface between these two subdomains the following interface conditions shall be imposed:
{\displaystyle v_{1}\left({\frac {1}{2}}\right)=v_{2}\left({\frac {1}{2}}\right)}
{\displaystyle v_{1}'\left({\frac {1}{2}}\right)=v_{2}'\left({\frac {1}{2}}\right)}
Let the interpolating functions be defined as:
{\displaystyle v_{1}(x)=\sum _{n=0}^{N}u_{n}T_{n}(y_{1}(x))}
{\displaystyle v_{2}(x)=\sum _{n=0}^{N}u_{n+N}T_{n}(y_{2}(x))}
{\displaystyle y_{1}(x)=4x-1}
{\displaystyle y_{2}(x)=4x-3}
where {\displaystyle T_{n}(y)} is the nth cardinal function of the Chebyshev polynomials of the first kind with input argument y.
If N=4 then the following approximation is obtained by this scheme:
{\displaystyle u_{1}=0.06236,\;u_{2}=0.21495,\;u_{3}=0.37428,\;u_{4}=0.44341,\;u_{5}=0.51492,\;u_{6}=0.69972,\;u_{7}=0.90645}
These values were obtained with a short MATLAB script.
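The original MATLAB listing is not reproduced in this text. As a substitute, here is a minimal Python sketch of the same two-subdomain Chebyshev collocation scheme; the cheb helper is the standard Chebyshev differentiation matrix construction (as in Trefethen's Spectral Methods in MATLAB), and all names are illustrative rather than taken from the original script.

import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix and Gauss-Lobatto nodes on [-1, 1]
    x = np.cos(np.pi * np.arange(N + 1) / N)   # x[0] = 1, x[N] = -1
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 4
D, y = cheb(N)
D = 4.0 * D                  # y1 = 4x - 1 and y2 = 4x - 3 both give d/dx = 4 d/dy
L = D @ D - np.eye(N + 1)    # discrete operator for u'' - u
n = N + 1

A = np.zeros((2 * n, 2 * n))
b = np.zeros(2 * n)
A[1:N, :n] = L[1:N]                        # collocate u'' - u = 0 inside [0, 1/2]
A[n + 1:2 * n - 1, n:] = L[1:N]            # collocate u'' - u = 0 inside [1/2, 1]
A[0, N] = 1.0                              # u(0) = 0  (y = -1 on the left subdomain)
A[N, n] = 1.0; b[N] = 1.0                  # u(1) = 1  (y = +1 on the right subdomain)
A[n, 0] = 1.0; A[n, n + N] = -1.0          # v1(1/2) = v2(1/2)
A[2 * n - 1, :n] = D[0]
A[2 * n - 1, n:] = -D[N]                   # v1'(1/2) = v2'(1/2)

u = np.linalg.solve(A, b)
xs = np.hstack([(y + 1) / 4, (y + 3) / 4])  # physical grid points (x = 1/2 appears twice)
for xi, ui in sorted(zip(xs, u)):
    print(f"x = {xi:.4f}   u = {ui:.5f}")

The printed interior nodal values should agree with u1, ..., u7 above, and with the exact solution sinh(x)/sinh(1), to the digits shown.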
== Related Books ==
Barry Smith, Petter Bjørstad, and William Gropp: Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations, Cambridge Univ. Press, ISBN 0-521-49589-X (1996).
== See also ==
Multigrid method
== External links ==
The official Domain Decomposition Methods page
"Domain Decomposition - Numerical Simulations page". Archived from the original on 2021-01-26. | Wikipedia/Domain_decomposition_methods |
Bessel functions, named after Friedrich Bessel who was the first to systematically study them in 1824, are canonical solutions y(x) of Bessel's differential equation
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}+\left(x^{2}-\alpha ^{2}\right)y=0}
for an arbitrary complex number {\displaystyle \alpha }, which represents the order of the Bessel function. Although {\displaystyle \alpha } and {\displaystyle -\alpha } produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of {\displaystyle \alpha }.
The most important cases are when {\displaystyle \alpha } is an integer or half-integer. Bessel functions for integer {\displaystyle \alpha } are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer {\displaystyle \alpha } are obtained when solving the Helmholtz equation in spherical coordinates.
== Applications ==
Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates. Bessel functions are therefore especially important for many problems of wave propagation and static potentials. In solving problems in cylindrical coordinate systems, one obtains Bessel functions of integer order (α = n); in spherical problems, one obtains half-integer orders (α = n + 1/2). For example:
Electromagnetic waves in a cylindrical waveguide
Pressure amplitudes of inviscid rotational flows
Heat conduction in a cylindrical object
Modes of vibration of a thin circular or annular acoustic membrane (such as a drumhead or other membranophone) or thicker plates such as sheet metal (see Kirchhoff–Love plate theory, Mindlin–Reissner plate theory)
Diffusion problems on a lattice
Solutions to the Schrödinger equation in spherical and cylindrical coordinates for a free particle
Position space representation of the Feynman propagator in quantum field theory
Solving for patterns of acoustical radiation
Frequency-dependent friction in circular pipelines
Dynamics of floating bodies
Angular resolution
Diffraction from helical objects, including DNA
Probability density function of product of two normally distributed random variables
Analysis of the surface waves generated by microtremors, in geophysics and seismology.
Bessel functions also appear in other problems, such as signal processing (e.g., see FM audio synthesis, Kaiser window, or Bessel filter).
== Definitions ==
Because this is a linear differential equation, solutions can be scaled to any amplitude. The amplitudes chosen for the functions originate from the early work in which the functions appeared as solutions to definite integrals rather than solutions to differential equations. Because the differential equation is second-order, there must be two linearly independent solutions: one of the first kind and one of the second kind. Depending upon the circumstances, however, various formulations of these solutions are convenient. Different variations are summarized in the table below and described in the following sections. The subscript n is typically used in place of {\displaystyle \alpha } when {\displaystyle \alpha } is known to be an integer.
Bessel functions of the second kind and the spherical Bessel functions of the second kind are sometimes denoted by Nn and nn, respectively, rather than Yn and yn.
=== Bessel functions of the first kind: Jα ===
Bessel functions of the first kind, denoted as Jα(x), are solutions of Bessel's differential equation. For integer or positive α, Bessel functions of the first kind are finite at the origin (x = 0); while for negative non-integer α, Bessel functions of the first kind diverge as x approaches zero. It is possible to define the function by {\displaystyle x^{\alpha }} times a Maclaurin series (note that α need not be an integer, and non-integer powers are not permitted in a Taylor series), which can be found by applying the Frobenius method to Bessel's equation:
{\displaystyle J_{\alpha }(x)=\sum _{m=0}^{\infty }{\frac {(-1)^{m}}{m!\,\Gamma (m+\alpha +1)}}{\left({\frac {x}{2}}\right)}^{2m+\alpha },}
where Γ(z) is the gamma function, a shifted generalization of the factorial function to non-integer values. Some earlier authors define the Bessel function of the first kind differently, essentially without the division by {\displaystyle 2} in {\displaystyle x/2}; this definition is not used in this article. The Bessel function of the first kind is an entire function if α is an integer, otherwise it is a multivalued function with singularity at zero. The graphs of Bessel functions look roughly like oscillating sine or cosine functions that decay proportionally to {\displaystyle x^{-{1}/{2}}} (see also their asymptotic forms below), although their roots are not generally periodic, except asymptotically for large x. (The series indicates that −J1(x) is the derivative of J0(x), much like −sin x is the derivative of cos x; more generally, the derivative of Jn(x) can be expressed in terms of Jn ± 1(x) by the identities below.)
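As a quick illustration of this ascending series, the partial sums converge rapidly for moderate x. A minimal Python sketch, assuming SciPy is available for the reference value:

import math
from scipy.special import jv  # reference implementation

def J_series(alpha, x, terms=30):
    # partial sum of the ascending series for J_alpha(x)
    return sum((-1) ** m / (math.factorial(m) * math.gamma(m + alpha + 1))
               * (x / 2) ** (2 * m + alpha) for m in range(terms))

print(J_series(0.5, 1.0))  # ~0.671397
print(jv(0.5, 1.0))        # same value from SciPy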
For non-integer α, the functions Jα(x) and J−α(x) are linearly independent, and are therefore the two solutions of the differential equation. On the other hand, for integer order n, the following relationship is valid (the gamma function has simple poles at each of the non-positive integers):
{\displaystyle J_{-n}(x)=(-1)^{n}J_{n}(x).}
This means that the two solutions are no longer linearly independent. In this case, the second linearly independent solution is then found to be the Bessel function of the second kind, as discussed below.
==== Bessel's integrals ====
Another definition of the Bessel function, for integer values of n, is possible using an integral representation:
{\displaystyle J_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(n\tau -x\sin \tau )\,d\tau ={\frac {1}{\pi }}\operatorname {Re} \left(\int _{0}^{\pi }e^{i(n\tau -x\sin \tau )}\,d\tau \right),}
which is also called the Hansen–Bessel formula.
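This integral representation is easy to check numerically by adaptive quadrature; the following sketch (not part of the original article) compares it with SciPy's jv:

import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def J_int(n, x):
    # Hansen-Bessel integral evaluated by adaptive quadrature
    val, _ = quad(lambda tau: np.cos(n * tau - x * np.sin(tau)), 0.0, np.pi)
    return val / np.pi

print(J_int(2, 3.0))  # ~0.486091
print(jv(2, 3.0))     # same value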
This was the approach that Bessel used, and from this definition he derived several properties of the function. The definition may be extended to non-integer orders by one of Schläfli's integrals, for Re(x) > 0:
{\displaystyle J_{\alpha }(x)={\frac {1}{\pi }}\int _{0}^{\pi }\cos(\alpha \tau -x\sin \tau )\,d\tau -{\frac {\sin(\alpha \pi )}{\pi }}\int _{0}^{\infty }e^{-x\sinh t-\alpha t}\,dt.}
==== Relation to hypergeometric series ====
The Bessel functions can be expressed in terms of the generalized hypergeometric series as
{\displaystyle J_{\alpha }(x)={\frac {\left({\frac {x}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\;_{0}F_{1}\left(\alpha +1;-{\frac {x^{2}}{4}}\right).}
This expression is related to the development of Bessel functions in terms of the Bessel–Clifford function.
==== Relation to Laguerre polynomials ====
In terms of the Laguerre polynomials Lk and arbitrarily chosen parameter t, the Bessel function can be expressed as
{\displaystyle {\frac {J_{\alpha }(x)}{\left({\frac {x}{2}}\right)^{\alpha }}}={\frac {e^{-t}}{\Gamma (\alpha +1)}}\sum _{k=0}^{\infty }{\frac {L_{k}^{(\alpha )}\left({\frac {x^{2}}{4t}}\right)}{\binom {k+\alpha }{k}}}{\frac {t^{k}}{k!}}.}
=== Bessel functions of the second kind: Yα ===
The Bessel functions of the second kind, denoted by Yα(x), occasionally denoted instead by Nα(x), are solutions of the Bessel differential equation that have a singularity at the origin (x = 0) and are multivalued. These are sometimes called Weber functions, as they were introduced by H. M. Weber (1873), and also Neumann functions after Carl Neumann.
For non-integer α, Yα(x) is related to Jα(x) by
{\displaystyle Y_{\alpha }(x)={\frac {J_{\alpha }(x)\cos(\alpha \pi )-J_{-\alpha }(x)}{\sin(\alpha \pi )}}.}
In the case of integer order n, the function is defined by taking the limit as a non-integer α tends to n:
{\displaystyle Y_{n}(x)=\lim _{\alpha \to n}Y_{\alpha }(x).}
If n is a nonnegative integer, we have the series
{\displaystyle Y_{n}(z)=-{\frac {\left({\frac {z}{2}}\right)^{-n}}{\pi }}\sum _{k=0}^{n-1}{\frac {(n-k-1)!}{k!}}\left({\frac {z^{2}}{4}}\right)^{k}+{\frac {2}{\pi }}J_{n}(z)\ln {\frac {z}{2}}-{\frac {\left({\frac {z}{2}}\right)^{n}}{\pi }}\sum _{k=0}^{\infty }(\psi (k+1)+\psi (n+k+1)){\frac {\left(-{\frac {z^{2}}{4}}\right)^{k}}{k!(n+k)!}}}
where {\displaystyle \psi (z)} is the digamma function, the logarithmic derivative of the gamma function.
There is also a corresponding integral formula (for Re(x) > 0):
{\displaystyle Y_{n}(x)={\frac {1}{\pi }}\int _{0}^{\pi }\sin(x\sin \theta -n\theta )\,d\theta -{\frac {1}{\pi }}\int _{0}^{\infty }\left(e^{nt}+(-1)^{n}e^{-nt}\right)e^{-x\sinh t}\,dt.}
In the case where n = 0, with {\displaystyle \gamma } being the Euler–Mascheroni constant:
{\displaystyle Y_{0}\left(x\right)={\frac {4}{\pi ^{2}}}\int _{0}^{{\frac {1}{2}}\pi }\cos \left(x\cos \theta \right)\left(\gamma +\ln \left(2x\sin ^{2}\theta \right)\right)\,d\theta .}
Yα(x) is necessary as the second linearly independent solution of Bessel's equation when α is an integer. But Yα(x) has more meaning than that: it can be considered as a "natural" partner of Jα(x). See also the subsection on Hankel functions below.
When α is an integer, moreover, as was similarly the case for the functions of the first kind, the following relationship is valid:
{\displaystyle Y_{-n}(x)=(-1)^{n}Y_{n}(x).}
Both Jα(x) and Yα(x) are holomorphic functions of x on the complex plane cut along the negative real axis. When α is an integer, the Bessel functions J are entire functions of x. If x is held fixed at a non-zero value, then the Bessel functions are entire functions of α.
The Bessel function of the second kind for integer α is an example of the second kind of solution in Fuchs's theorem.
=== Hankel functions: H(1)α, H(2)α ===
Another important formulation of the two linearly independent solutions to Bessel's equation are the Hankel functions of the first and second kind, H(1)α(x) and H(2)α(x), defined as
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&=J_{\alpha }(x)+iY_{\alpha }(x),\\[5pt]H_{\alpha }^{(2)}(x)&=J_{\alpha }(x)-iY_{\alpha }(x),\end{aligned}}}
where i is the imaginary unit. These linear combinations are also known as Bessel functions of the third kind; they are two linearly independent solutions of Bessel's differential equation. They are named after Hermann Hankel.
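These defining linear combinations can be checked directly with SciPy, which exposes all four functions; a minimal sketch:

from scipy.special import jv, yv, hankel1, hankel2

alpha, x = 1.5, 2.0
print(hankel1(alpha, x), jv(alpha, x) + 1j * yv(alpha, x))  # equal
print(hankel2(alpha, x), jv(alpha, x) - 1j * yv(alpha, x))  # equal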
These forms of linear combination satisfy numerous simple-looking properties, like asymptotic formulae or integral representations. Here, "simple" means an appearance of a factor of the form e^{i f(x)}. For real {\displaystyle x>0} where {\displaystyle J_{\alpha }(x)}, {\displaystyle Y_{\alpha }(x)} are real-valued, the Bessel functions of the first and second kind are the real and imaginary parts, respectively, of the first Hankel function and the real and negative imaginary parts of the second Hankel function. Thus, the above formulae are analogs of Euler's formula, substituting H(1)α(x), H(2)α(x) for {\displaystyle e^{\pm ix}} and {\displaystyle J_{\alpha }(x)}, {\displaystyle Y_{\alpha }(x)} for {\displaystyle \cos(x)}, {\displaystyle \sin(x)}, as explicitly shown in the asymptotic expansion.
The Hankel functions are used to express outward- and inward-propagating cylindrical-wave solutions of the cylindrical wave equation, respectively (or vice versa, depending on the sign convention for the frequency).
Using the previous relationships, they can be expressed as
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {J_{-\alpha }(x)-e^{-\alpha \pi i}J_{\alpha }(x)}{i\sin \alpha \pi }},\\[5pt]H_{\alpha }^{(2)}(x)&={\frac {J_{-\alpha }(x)-e^{\alpha \pi i}J_{\alpha }(x)}{-i\sin \alpha \pi }}.\end{aligned}}}
If α is an integer, the limit has to be calculated. The following relationships are valid, whether α is an integer or not:
{\displaystyle {\begin{aligned}H_{-\alpha }^{(1)}(x)&=e^{\alpha \pi i}H_{\alpha }^{(1)}(x),\\[6mu]H_{-\alpha }^{(2)}(x)&=e^{-\alpha \pi i}H_{\alpha }^{(2)}(x).\end{aligned}}}
In particular, if α = m + 1/2 with m a nonnegative integer, the above relations imply directly that
{\displaystyle {\begin{aligned}J_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m+1}Y_{m+{\frac {1}{2}}}(x),\\[5pt]Y_{-(m+{\frac {1}{2}})}(x)&=(-1)^{m}J_{m+{\frac {1}{2}}}(x).\end{aligned}}}
These are useful in developing the spherical Bessel functions (see below).
The Hankel functions admit the following integral representations for Re(x) > 0:
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(x)&={\frac {1}{\pi i}}\int _{-\infty }^{+\infty +\pi i}e^{x\sinh t-\alpha t}\,dt,\\[5pt]H_{\alpha }^{(2)}(x)&=-{\frac {1}{\pi i}}\int _{-\infty }^{+\infty -\pi i}e^{x\sinh t-\alpha t}\,dt,\end{aligned}}}
where the integration limits indicate integration along a contour that can be chosen as follows: from −∞ to 0 along the negative real axis, from 0 to ±πi along the imaginary axis, and from ±πi to +∞ ± πi along a contour parallel to the real axis.
=== Modified Bessel functions: Iα, Kα ===
The Bessel functions are valid even for complex arguments x, and an important special case is that of a purely imaginary argument. In this case, the solutions to the Bessel equation are called the modified Bessel functions (or occasionally the hyperbolic Bessel functions) of the first and second kind and are defined as
{\displaystyle {\begin{aligned}I_{\alpha }(x)&=i^{-\alpha }J_{\alpha }(ix)=\sum _{m=0}^{\infty }{\frac {1}{m!\,\Gamma (m+\alpha +1)}}\left({\frac {x}{2}}\right)^{2m+\alpha },\\[5pt]K_{\alpha }(x)&={\frac {\pi }{2}}{\frac {I_{-\alpha }(x)-I_{\alpha }(x)}{\sin \alpha \pi }},\end{aligned}}}
when α is not an integer; when α is an integer, the limit is used. These are chosen to be real-valued for real and positive arguments x. The series expansion for Iα(x) is thus similar to that for Jα(x), but without the alternating (−1)m factor.
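For non-integer order, the Kα definition above can be verified numerically against SciPy's iv and kv; an illustrative sketch:

import numpy as np
from scipy.special import iv, kv

alpha, x = 0.3, 1.7
lhs = kv(alpha, x)
rhs = np.pi / 2 * (iv(-alpha, x) - iv(alpha, x)) / np.sin(alpha * np.pi)
print(lhs, rhs)  # agree to machine precision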
{\displaystyle K_{\alpha }} can be expressed in terms of Hankel functions:
{\displaystyle K_{\alpha }(x)={\begin{cases}{\frac {\pi }{2}}i^{\alpha +1}H_{\alpha }^{(1)}(ix)&-\pi <\arg x\leq {\frac {\pi }{2}}\\{\frac {\pi }{2}}(-i)^{\alpha +1}H_{\alpha }^{(2)}(-ix)&-{\frac {\pi }{2}}<\arg x\leq \pi \end{cases}}}
Using these two formulae, the result for {\displaystyle J_{\alpha }^{2}(z)+Y_{\alpha }^{2}(z)}, commonly known as Nicholson's integral or Nicholson's formula, can be obtained:
{\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8}{\pi ^{2}}}\int _{0}^{\infty }\cosh(2\alpha t)K_{0}(2x\sinh t)\,dt,}
given that the condition Re(x) > 0 is met. It can also be shown that
{\displaystyle J_{\alpha }^{2}(x)+Y_{\alpha }^{2}(x)={\frac {8\cos(\alpha \pi )}{\pi ^{2}}}\int _{0}^{\infty }K_{2\alpha }(2x\sinh t)\,dt,}
only when |Re(α)| < 1/2 and Re(x) ≥ 0 but not when x = 0.
The Bessel functions of the first and second kind can be expressed in terms of the modified Bessel functions (these relations are valid if −π < arg z ≤ π/2):
{\displaystyle {\begin{aligned}J_{\alpha }(iz)&=e^{\frac {\alpha \pi i}{2}}I_{\alpha }(z),\\[1ex]Y_{\alpha }(iz)&=e^{\frac {(\alpha +1)\pi i}{2}}I_{\alpha }(z)-{\tfrac {2}{\pi }}e^{-{\frac {\alpha \pi i}{2}}}K_{\alpha }(z).\end{aligned}}}
Iα(x) and Kα(x) are the two linearly independent solutions to the modified Bessel's equation:
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+x{\frac {dy}{dx}}-\left(x^{2}+\alpha ^{2}\right)y=0.}
Unlike the ordinary Bessel functions, which are oscillating as functions of a real argument, Iα and Kα are exponentially growing and decaying functions respectively. Like the ordinary Bessel function Jα, the function Iα goes to zero at x = 0 for α > 0 and is finite at x = 0 for α = 0. Analogously, Kα diverges at x = 0, with the singularity being of logarithmic type for K0, and {\displaystyle {\tfrac {1}{2}}\Gamma (|\alpha |)(2/x)^{|\alpha |}} otherwise.
Two integral formulas for the modified Bessel functions are (for Re(x) > 0):
{\displaystyle {\begin{aligned}I_{\alpha }(x)&={\frac {1}{\pi }}\int _{0}^{\pi }e^{x\cos \theta }\cos \alpha \theta \,d\theta -{\frac {\sin \alpha \pi }{\pi }}\int _{0}^{\infty }e^{-x\cosh t-\alpha t}\,dt,\\[5pt]K_{\alpha }(x)&=\int _{0}^{\infty }e^{-x\cosh t}\cosh \alpha t\,dt.\end{aligned}}}
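The Kα integral lends itself to direct quadrature, since the integrand decays double-exponentially; a sketch comparing it with SciPy's kv:

import numpy as np
from scipy.integrate import quad
from scipy.special import kv

alpha, x = 0.5, 2.0
val, _ = quad(lambda t: np.exp(-x * np.cosh(t)) * np.cosh(alpha * t), 0.0, 40.0)
print(val)            # ~0.119938
print(kv(alpha, x))   # same value; K_{1/2}(2) = sqrt(pi/4) * exp(-2)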
Bessel functions can be described as Fourier transforms of powers of quadratic functions. For example (for Re(ω) > 0):
{\displaystyle 2\,K_{0}(\omega )=\int _{-\infty }^{\infty }{\frac {e^{i\omega t}}{\sqrt {t^{2}+1}}}\,dt.}
It can be proven by showing equality to the above integral definition for K0. This is done by integrating along a closed curve in the first quadrant of the complex plane.
Modified Bessel functions of the second kind may be represented with Bassett's integral
{\displaystyle K_{n}(xz)={\frac {\Gamma \left(n+{\frac {1}{2}}\right)(2z)^{n}}{{\sqrt {\pi }}x^{n}}}\int _{0}^{\infty }{\frac {\cos(xt)\,dt}{(t^{2}+z^{2})^{n+{\frac {1}{2}}}}}.}
Modified Bessel functions K1/3 and K2/3 can be represented in terms of rapidly convergent integrals
{\displaystyle {\begin{aligned}K_{\frac {1}{3}}(\xi )&={\sqrt {3}}\int _{0}^{\infty }\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx,\\[5pt]K_{\frac {2}{3}}(\xi )&={\frac {1}{\sqrt {3}}}\int _{0}^{\infty }{\frac {3+2x^{2}}{\sqrt {1+{\frac {x^{2}}{3}}}}}\exp \left(-\xi \left(1+{\frac {4x^{2}}{3}}\right){\sqrt {1+{\frac {x^{2}}{3}}}}\right)\,dx.\end{aligned}}}
The modified Bessel function {\displaystyle K_{\frac {1}{2}}(\xi )=(2\xi /\pi )^{-1/2}\exp(-\xi )} is useful to represent the Laplace distribution as an exponential-scale mixture of normal distributions.
The modified Bessel function of the second kind has also been called by the following names (now rare):
Basset function after Alfred Barnard Basset
Modified Bessel function of the third kind
Modified Hankel function
Macdonald function after Hector Munro Macdonald
=== Spherical Bessel functions: jn, yn ===
When solving the Helmholtz equation in spherical coordinates by separation of variables, the radial equation has the form
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+2x{\frac {dy}{dx}}+\left(x^{2}-n(n+1)\right)y=0.}
The two linearly independent solutions to this equation are called the spherical Bessel functions jn and yn, and are related to the ordinary Bessel functions Jn and Yn by
{\displaystyle {\begin{aligned}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x),\\y_{n}(x)&={\sqrt {\frac {\pi }{2x}}}Y_{n+{\frac {1}{2}}}(x)=(-1)^{n+1}{\sqrt {\frac {\pi }{2x}}}J_{-n-{\frac {1}{2}}}(x).\end{aligned}}}
yn is also denoted nn or ηn; some authors call these functions the spherical Neumann functions.
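SciPy implements the spherical functions directly, so the half-integer relation above can be checked in a couple of lines; a sketch:

import numpy as np
from scipy.special import spherical_jn, spherical_yn, jv, yv

n, x = 2, 5.0
print(spherical_jn(n, x), np.sqrt(np.pi / (2 * x)) * jv(n + 0.5, x))  # equal
print(spherical_yn(n, x), np.sqrt(np.pi / (2 * x)) * yv(n + 0.5, x))  # equal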
From the relations to the ordinary Bessel functions it is directly seen that:
{\displaystyle {\begin{aligned}j_{n}(x)&=(-1)^{n}y_{-n-1}(x)\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)\end{aligned}}}
The spherical Bessel functions can also be written as (Rayleigh's formulas)
{\displaystyle {\begin{aligned}j_{n}(x)&=(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\sin x}{x}},\\y_{n}(x)&=-(-x)^{n}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{n}{\frac {\cos x}{x}}.\end{aligned}}}
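Rayleigh's formulas are mechanical enough to generate symbolically. A short SymPy sketch (names illustrative; the simplified output may differ from the forms below by algebraic rearrangement):

import sympy as sp

x = sp.symbols('x', positive=True)

def rayleigh_j(n):
    # j_n(x) = (-x)^n (x^{-1} d/dx)^n (sin x / x)
    f = sp.sin(x) / x
    for _ in range(n):
        f = f.diff(x) / x
    return sp.simplify((-x) ** n * f)

print(rayleigh_j(1))  # sin(x)/x**2 - cos(x)/x, up to rearrangement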
The zeroth spherical Bessel function j0(x) is also known as the (unnormalized) sinc function. The first few spherical Bessel functions are:
{\displaystyle {\begin{aligned}j_{0}(x)&={\frac {\sin x}{x}},\\j_{1}(x)&={\frac {\sin x}{x^{2}}}-{\frac {\cos x}{x}},\\j_{2}(x)&=\left({\frac {3}{x^{2}}}-1\right){\frac {\sin x}{x}}-{\frac {3\cos x}{x^{2}}},\\j_{3}(x)&=\left({\frac {15}{x^{3}}}-{\frac {6}{x}}\right){\frac {\sin x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\cos x}{x}}\end{aligned}}}
and
{\displaystyle {\begin{aligned}y_{0}(x)&=-j_{-1}(x)=-{\frac {\cos x}{x}},\\y_{1}(x)&=j_{-2}(x)=-{\frac {\cos x}{x^{2}}}-{\frac {\sin x}{x}},\\y_{2}(x)&=-j_{-3}(x)=\left(-{\frac {3}{x^{2}}}+1\right){\frac {\cos x}{x}}-{\frac {3\sin x}{x^{2}}},\\y_{3}(x)&=j_{-4}(x)=\left(-{\frac {15}{x^{3}}}+{\frac {6}{x}}\right){\frac {\cos x}{x}}-\left({\frac {15}{x^{2}}}-1\right){\frac {\sin x}{x}}.\end{aligned}}}
The first few non-zero roots of j0(x) = sin x/x are exactly the positive integer multiples of π; the roots of the higher-order spherical Bessel functions must be found numerically.
==== Generating function ====
The spherical Bessel functions have the generating functions
{\displaystyle {\begin{aligned}{\frac {1}{z}}\cos \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}j_{n-1}(z),\\{\frac {1}{z}}\sin \left({\sqrt {z^{2}-2zt}}\right)&=\sum _{n=0}^{\infty }{\frac {t^{n}}{n!}}y_{n-1}(z).\end{aligned}}}
==== Finite series expansions ====
In contrast to the integer-order Bessel functions Jn(x), Yn(x), the spherical Bessel functions jn(x), yn(x) have a finite series expression:
{\displaystyle {\begin{alignedat}{2}j_{n}(x)&={\sqrt {\frac {\pi }{2x}}}J_{n+{\frac {1}{2}}}(x)=\\&={\frac {1}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r-n-1}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]\\&={\frac {1}{x}}\left[\sin \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}+\cos \left(x-{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\\y_{n}(x)&=(-1)^{n+1}j_{-n-1}(x)=(-1)^{n+1}{\frac {\pi }{2x}}J_{-\left(n+{\frac {1}{2}}\right)}(x)=\\&={\frac {(-1)^{n+1}}{2x}}\left[e^{ix}\sum _{r=0}^{n}{\frac {i^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}+e^{-ix}\sum _{r=0}^{n}{\frac {(-i)^{r+n}(n+r)!}{r!(n-r)!(2x)^{r}}}\right]=\\&={\frac {(-1)^{n+1}}{x}}\left[\cos \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n}{2}}\right]}{\frac {(-1)^{r}(n+2r)!}{(2r)!(n-2r)!(2x)^{2r}}}-\sin \left(x+{\frac {n\pi }{2}}\right)\sum _{r=0}^{\left[{\frac {n-1}{2}}\right]}{\frac {(-1)^{r}(n+2r+1)!}{(2r+1)!(n-2r-1)!(2x)^{2r+1}}}\right]\end{alignedat}}}
==== Differential relations ====
In the following, fn is any of jn, yn, h(1)n, h(2)n for n = 0, ±1, ±2, ...
{\displaystyle {\begin{aligned}\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{n+1}f_{n}(z)\right)&=z^{n-m+1}f_{n-m}(z),\\\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{-n}f_{n}(z)\right)&=(-1)^{m}z^{-n-m}f_{n+m}(z).\end{aligned}}}
=== Spherical Hankel functions: h(1)n, h(2)n ===
There are also spherical analogues of the Hankel functions:
{\displaystyle {\begin{aligned}h_{n}^{(1)}(x)&=j_{n}(x)+iy_{n}(x),\\h_{n}^{(2)}(x)&=j_{n}(x)-iy_{n}(x).\end{aligned}}}
There are simple closed-form expressions for the Bessel functions of half-integer order in terms of the standard trigonometric functions, and therefore for the spherical Bessel functions. In particular, for non-negative integers n:
{\displaystyle h_{n}^{(1)}(x)=(-i)^{n+1}{\frac {e^{ix}}{x}}\sum _{m=0}^{n}{\frac {i^{m}}{m!\,(2x)^{m}}}{\frac {(n+m)!}{(n-m)!}},}
and h(2)n is the complex-conjugate of this (for real x). It follows, for example, that j0(x) = sin x/x and y0(x) = −cos x/x, and so on.
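The closed form for h(1)n can be checked against SciPy's spherical functions; the helper below is an illustrative sketch:

import math
import cmath
from scipy.special import spherical_jn, spherical_yn

def h1_closed(n, x):
    # finite-sum closed form for h_n^(1)(x) given above
    s = sum(1j ** m / (math.factorial(m) * (2 * x) ** m)
            * math.factorial(n + m) / math.factorial(n - m)
            for m in range(n + 1))
    return (-1j) ** (n + 1) * cmath.exp(1j * x) / x * s

n, x = 3, 2.5
print(h1_closed(n, x))
print(spherical_jn(n, x) + 1j * spherical_yn(n, x))  # same value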
The spherical Hankel functions appear in problems involving spherical wave propagation, for example in the multipole expansion of the electromagnetic field.
=== Riccati–Bessel functions: Sn, Cn, ξn, ζn ===
Riccati–Bessel functions only slightly differ from spherical Bessel functions:
{\displaystyle {\begin{aligned}S_{n}(x)&=xj_{n}(x)={\sqrt {\frac {\pi x}{2}}}J_{n+{\frac {1}{2}}}(x)\\C_{n}(x)&=-xy_{n}(x)=-{\sqrt {\frac {\pi x}{2}}}Y_{n+{\frac {1}{2}}}(x)\\\xi _{n}(x)&=xh_{n}^{(1)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(1)}(x)=S_{n}(x)-iC_{n}(x)\\\zeta _{n}(x)&=xh_{n}^{(2)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(2)}(x)=S_{n}(x)+iC_{n}(x)\end{aligned}}}
They satisfy the differential equation
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+\left(x^{2}-n(n+1)\right)y=0.}
For example, this kind of differential equation appears in quantum mechanics when solving the radial component of the Schrödinger equation with a hypothetical cylindrical infinite potential barrier. This differential equation, and the Riccati–Bessel solutions, also arise in the problem of scattering of electromagnetic waves by a sphere, known as Mie scattering after the first published solution by Mie (1908). See e.g., Du (2004) for recent developments and references.
Following Debye (1909), the notation ψn, χn is sometimes used instead of Sn, Cn.
== Asymptotic forms ==
The Bessel functions have the following asymptotic forms. For small arguments {\displaystyle 0<z\ll {\sqrt {\alpha +1}}}, one obtains, when {\displaystyle \alpha } is not a negative integer:
{\displaystyle J_{\alpha }(z)\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha }.}
When α is a negative integer, we have
{\displaystyle J_{\alpha }(z)\sim {\frac {(-1)^{\alpha }}{(-\alpha )!}}\left({\frac {2}{z}}\right)^{\alpha }.}
For the Bessel function of the second kind we have three cases:
{\displaystyle Y_{\alpha }(z)\sim {\begin{cases}{\dfrac {2}{\pi }}\left(\ln \left({\dfrac {z}{2}}\right)+\gamma \right)&{\text{if }}\alpha =0\\[1ex]-{\dfrac {\Gamma (\alpha )}{\pi }}\left({\dfrac {2}{z}}\right)^{\alpha }+{\dfrac {1}{\Gamma (\alpha +1)}}\left({\dfrac {z}{2}}\right)^{\alpha }\cot(\alpha \pi )&{\text{if }}\alpha {\text{ is a positive integer (one term dominates unless }}\alpha {\text{ is imaginary)}},\\[1ex]-{\dfrac {(-1)^{\alpha }\Gamma (-\alpha )}{\pi }}\left({\dfrac {z}{2}}\right)^{\alpha }&{\text{if }}\alpha {\text{ is a negative integer,}}\end{cases}}}
where γ is the Euler–Mascheroni constant (0.5772...).
For large real arguments z ≫ |α² − 1/4|, one cannot write a true asymptotic form for Bessel functions of the first and second kind (unless α is half-integer) because they have zeros all the way out to infinity, which would have to be matched exactly by any asymptotic expansion. However, for a given value of arg z one can write an equation containing a term of order |z|^{−1}:
{\displaystyle {\begin{aligned}J_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\cos \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi ,\\Y_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\sin \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi .\end{aligned}}}
(For α = 1/2, the last terms in these formulas drop out completely; see the spherical Bessel functions above.)
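Numerically the leading large-argument form is already good at moderate z; for example, a sketch using SciPy for the exact value:

import numpy as np
from scipy.special import jv

alpha, z = 1.0, 50.0
exact = jv(alpha, z)
leading = np.sqrt(2 / (np.pi * z)) * np.cos(z - alpha * np.pi / 2 - np.pi / 4)
print(exact, leading)  # differ by O(1/z), i.e. about a percent here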
The asymptotic forms for the Hankel functions are:
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<2\pi ,\\H_{\alpha }^{(2)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-2\pi <\arg z<\pi .\end{aligned}}}
These can be extended to other values of arg z using equations relating {\displaystyle H_{\alpha }^{(1)}(ze^{im\pi })} and {\displaystyle H_{\alpha }^{(2)}(ze^{im\pi })} to {\displaystyle H_{\alpha }^{(1)}(z)} and {\displaystyle H_{\alpha }^{(2)}(z)}.
It is interesting that although the Bessel function of the first kind is the average of the two Hankel functions, Jα(z) is not asymptotic to the average of these two asymptotic forms when z is negative (because one or the other will not be correct there, depending on the arg z used). But the asymptotic forms for the Hankel functions permit us to write asymptotic forms for the Bessel functions of first and second kinds for complex (non-real) z so long as |z| goes to infinity at a constant phase angle arg z (using the square root having positive real part):
{\displaystyle {\begin{aligned}J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi ,\\[1ex]Y_{\alpha }(z)&\sim -i{\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]Y_{\alpha }(z)&\sim i{\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi .\end{aligned}}}
For the modified Bessel functions, Hankel developed asymptotic expansions as well:
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {e^{z}}{\sqrt {2\pi z}}}\left(1-{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}-{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {\pi }{2}},\\K_{\alpha }(z)&\sim {\sqrt {\frac {\pi }{2z}}}e^{-z}\left(1+{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {3\pi }{2}}.\end{aligned}}}
There is also the asymptotic form (for large real {\displaystyle z})
{\displaystyle {\begin{aligned}I_{\alpha }(z)={\frac {1}{{\sqrt {2\pi z}}{\sqrt[{4}]{1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\exp \left(-\alpha \operatorname {arcsinh} \left({\frac {\alpha }{z}}\right)+z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}\right)\left(1+{\mathcal {O}}\left({\frac {1}{z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\right)\right).\end{aligned}}}
When α = 1/2, all the terms except the first vanish, and we have
{\displaystyle {\begin{aligned}I_{{1}/{2}}(z)&={\sqrt {\frac {2}{\pi }}}{\frac {\sinh(z)}{\sqrt {z}}}\sim {\frac {e^{z}}{\sqrt {2\pi z}}}&&{\text{for }}\left|\arg z\right|<{\tfrac {\pi }{2}},\\[1ex]K_{{1}/{2}}(z)&={\sqrt {\frac {\pi }{2}}}{\frac {e^{-z}}{\sqrt {z}}}.\end{aligned}}}
For small arguments {\displaystyle 0<|z|\ll {\sqrt {\alpha +1}}}, we have
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha },\\[1ex]K_{\alpha }(z)&\sim {\begin{cases}-\ln \left({\dfrac {z}{2}}\right)-\gamma &{\text{if }}\alpha =0\\[1ex]{\frac {\Gamma (\alpha )}{2}}\left({\dfrac {2}{z}}\right)^{\alpha }&{\text{if }}\alpha >0\end{cases}}\end{aligned}}}
== Properties ==
For integer order α = n, Jn is often defined via a Laurent series for a generating function:
{\displaystyle e^{{\frac {x}{2}}\left(t-{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }J_{n}(x)t^{n}}
an approach used by P. A. Hansen in 1843. (This can be generalized to non-integer order by contour integration or other methods.)
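The Laurent generating function converges fast enough that a truncated sum reproduces the left side to machine precision; a sketch:

import numpy as np
from scipy.special import jv

x, t = 1.3, 0.7
lhs = np.exp(x / 2 * (t - 1 / t))
rhs = sum(jv(n, x) * t ** n for n in range(-40, 41))
print(lhs, rhs)  # equal to machine precision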
Infinite series of Bessel functions in the form {\textstyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)} where {\textstyle \nu ,p\in \mathbb {Z} ,\ N\in \mathbb {Z} ^{+}} arise in many physical systems and are defined in closed form by the Sung series. For example, when N = 3: {\textstyle \sum _{\nu =-\infty }^{\infty }J_{3\nu +p}(x)={\frac {1}{3}}\left[1+2\cos {(x{\sqrt {3}}/2-2\pi p/3)}\right]}. More generally, the Sung series and the alternating Sung series are written as:
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {2\pi q/N}}e^{-i2\pi pq/N}}
{\displaystyle \sum _{\nu =-\infty }^{\infty }(-1)^{\nu }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {(2q+1)\pi /N}}e^{-i(2q+1)\pi p/N}}
A series expansion using Bessel functions (Kapteyn series) is
{\displaystyle {\frac {1}{1-z}}=1+2\sum _{n=1}^{\infty }J_{n}(nz).}
Another important relation for integer orders is the Jacobi–Anger expansion:
{\displaystyle e^{iz\cos \phi }=\sum _{n=-\infty }^{\infty }i^{n}J_{n}(z)e^{in\phi }}
and
{\displaystyle e^{\pm iz\sin \phi }=J_{0}(z)+2\sum _{n=1}^{\infty }J_{2n}(z)\cos(2n\phi )\pm 2i\sum _{n=0}^{\infty }J_{2n+1}(z)\sin((2n+1)\phi )}
which is used to expand a plane wave as a sum of cylindrical waves, or to find the Fourier series of a tone-modulated FM signal.
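A direct check of the Jacobi–Anger expansion with a truncated sum (a sketch; SciPy assumed):

import numpy as np
from scipy.special import jv

z, phi = 2.0, 0.9
lhs = np.exp(1j * z * np.cos(phi))
rhs = sum(1j ** n * jv(n, z) * np.exp(1j * n * phi) for n in range(-40, 41))
print(lhs, rhs)  # equal to machine precision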
More generally, a series
{\displaystyle f(z)=a_{0}^{\nu }J_{\nu }(z)+2\cdot \sum _{k=1}^{\infty }a_{k}^{\nu }J_{\nu +k}(z)}
is called Neumann expansion of f. The coefficients for ν = 0 have the explicit form
{\displaystyle a_{k}^{0}={\frac {1}{2\pi i}}\int _{|z|=c}f(z)O_{k}(z)\,dz}
where Ok is Neumann's polynomial.
Selected functions admit the special representation
{\displaystyle f(z)=\sum _{k=0}^{\infty }a_{k}^{\nu }J_{\nu +2k}(z)}
with
{\displaystyle a_{k}^{\nu }=2(\nu +2k)\int _{0}^{\infty }f(z){\frac {J_{\nu +2k}(z)}{z}}\,dz}
due to the orthogonality relation
{\displaystyle \int _{0}^{\infty }J_{\alpha }(z)J_{\beta }(z){\frac {dz}{z}}={\frac {2}{\pi }}{\frac {\sin \left({\frac {\pi }{2}}(\alpha -\beta )\right)}{\alpha ^{2}-\beta ^{2}}}}
More generally, if f has a branch-point near the origin of such a nature that
{\displaystyle f(z)=\sum _{k=0}a_{k}J_{\nu +k}(z)}
then
{\displaystyle {\mathcal {L}}\left\{\sum _{k=0}a_{k}J_{\nu +k}\right\}(s)={\frac {1}{\sqrt {1+s^{2}}}}\sum _{k=0}{\frac {a_{k}}{\left(s+{\sqrt {1+s^{2}}}\right)^{\nu +k}}}}
or
{\displaystyle \sum _{k=0}a_{k}\xi ^{\nu +k}={\frac {1+\xi ^{2}}{2\xi }}{\mathcal {L}}\{f\}\left({\frac {1-\xi ^{2}}{2\xi }}\right)}
where {\displaystyle {\mathcal {L}}\{f\}} is the Laplace transform of f.
Another way to define the Bessel functions is via the Poisson representation formula and the Mehler–Sonine formula:
{\displaystyle {\begin{aligned}J_{\nu }(z)&={\frac {\left({\frac {z}{2}}\right)^{\nu }}{\Gamma \left(\nu +{\frac {1}{2}}\right){\sqrt {\pi }}}}\int _{-1}^{1}e^{izs}\left(1-s^{2}\right)^{\nu -{\frac {1}{2}}}\,ds\\[5px]&={\frac {2}{{\left({\frac {z}{2}}\right)}^{\nu }\cdot {\sqrt {\pi }}\cdot \Gamma \left({\frac {1}{2}}-\nu \right)}}\int _{1}^{\infty }{\frac {\sin zu}{\left(u^{2}-1\right)^{\nu +{\frac {1}{2}}}}}\,du\end{aligned}}}
where ν > −1/2 and z ∈ C.
This formula is useful especially when working with Fourier transforms.
Because Bessel's equation becomes Hermitian (self-adjoint) if it is divided by x, the solutions must satisfy an orthogonality relationship for appropriate boundary conditions. In particular, it follows that:
{\displaystyle \int _{0}^{1}xJ_{\alpha }\left(xu_{\alpha ,m}\right)J_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[J_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}={\frac {\delta _{m,n}}{2}}\left[J_{\alpha }'\left(u_{\alpha ,m}\right)\right]^{2}}
where α > −1, δm,n is the Kronecker delta, and uα,m is the mth zero of Jα(x). This orthogonality relation can then be used to extract the coefficients in the Fourier–Bessel series, where a function is expanded in the basis of the functions Jα(x uα,m) for fixed α and varying m.
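SciPy's jn_zeros supplies the zeros uα,m for integer α, so the orthogonality relation can be verified by quadrature; a sketch:

import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

alpha = 1
u = jn_zeros(alpha, 3)   # first three positive zeros of J_1
for m in range(3):
    for k in range(3):
        val, _ = quad(lambda x: x * jv(alpha, u[m] * x) * jv(alpha, u[k] * x), 0.0, 1.0)
        print(m, k, round(val, 8))
# off-diagonal entries vanish; diagonal entries equal J_{alpha+1}(u_m)^2 / 2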
An analogous relationship for the spherical Bessel functions follows immediately:
{\displaystyle \int _{0}^{1}x^{2}j_{\alpha }\left(xu_{\alpha ,m}\right)j_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[j_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}}
If one defines a boxcar function of x that depends on a small parameter ε as:
{\displaystyle f_{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x-1}{\varepsilon }}\right)}
(where rect is the rectangle function) then the Hankel transform of it (of any given order α > −1/2), gε(k), approaches Jα(k) as ε approaches zero, for any given k. Conversely, the Hankel transform (of the same order) of gε(k) is fε(x):
{\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)g_{\varepsilon }(k)\,dk=f_{\varepsilon }(x)}
which is zero everywhere except near 1. As ε approaches zero, the right-hand side approaches δ(x − 1), where δ is the Dirac delta function. This admits the limit (in the distributional sense):
{\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)J_{\alpha }(k)\,dk=\delta (x-1)}
A change of variables then yields the closure equation:
{\displaystyle \int _{0}^{\infty }xJ_{\alpha }(ux)J_{\alpha }(vx)\,dx={\frac {1}{u}}\delta (u-v)}
for α > −1/2. The Hankel transform can express a fairly arbitrary function as an integral of Bessel functions of different scales. For the spherical Bessel functions the orthogonality relation is:
{\displaystyle \int _{0}^{\infty }x^{2}j_{\alpha }(ux)j_{\alpha }(vx)\,dx={\frac {\pi }{2uv}}\delta (u-v)}
for α > −1.
Another important property of Bessel's equations, which follows from Abel's identity, involves the Wronskian of the solutions:
{\displaystyle A_{\alpha }(x){\frac {dB_{\alpha }}{dx}}-{\frac {dA_{\alpha }}{dx}}B_{\alpha }(x)={\frac {C_{\alpha }}{x}}}
where Aα and Bα are any two solutions of Bessel's equation, and Cα is a constant independent of x (which depends on α and on the particular Bessel functions considered). In particular,
{\displaystyle J_{\alpha }(x){\frac {dY_{\alpha }}{dx}}-{\frac {dJ_{\alpha }}{dx}}Y_{\alpha }(x)={\frac {2}{\pi x}}}
and
{\displaystyle I_{\alpha }(x){\frac {dK_{\alpha }}{dx}}-{\frac {dI_{\alpha }}{dx}}K_{\alpha }(x)=-{\frac {1}{x}},}
for α > −1.
For α > −1, the even entire function of genus 1, {\displaystyle x^{-\alpha }J_{\alpha }(x)}, has only real zeros. Let
{\displaystyle 0<j_{\alpha ,1}<j_{\alpha ,2}<\cdots <j_{\alpha ,n}<\cdots }
be all its positive zeros, then
{\displaystyle J_{\alpha }(z)={\frac {\left({\frac {z}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{j_{\alpha ,n}^{2}}}\right)}
(There are a large number of other known integrals and identities that are not reproduced here, but which can be found in the references.)
=== Recurrence relations ===
The functions Jα, Yα, H(1)α, and H(2)α all satisfy the recurrence relations
{\displaystyle {\frac {2\alpha }{x}}Z_{\alpha }(x)=Z_{\alpha -1}(x)+Z_{\alpha +1}(x)}
and
{\displaystyle 2{\frac {dZ_{\alpha }(x)}{dx}}=Z_{\alpha -1}(x)-Z_{\alpha +1}(x),}
where Z denotes J, Y, H(1), or H(2). These two identities are often combined, e.g. added or subtracted, to yield various other relations. In this way, for example, one can compute Bessel functions of higher orders (or higher derivatives) given the values at lower orders (or lower derivatives). In particular, it follows that
{\displaystyle {\begin{aligned}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[x^{\alpha }Z_{\alpha }(x)\right]&=x^{\alpha -m}Z_{\alpha -m}(x),\\\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[{\frac {Z_{\alpha }(x)}{x^{\alpha }}}\right]&=(-1)^{m}{\frac {Z_{\alpha +m}(x)}{x^{\alpha +m}}}.\end{aligned}}}
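The basic recurrences above are straightforward to check numerically; a sketch with SciPy (the order α = 2.5 and point x = 4.0 are arbitrary, and Y and the Hankel functions satisfy the same identities):

```python
from scipy.special import jv, jvp

alpha, x = 2.5, 4.0

# Three-term recurrence: (2*alpha/x) J_a(x) = J_{a-1}(x) + J_{a+1}(x)
print(2 * alpha / x * jv(alpha, x), jv(alpha - 1, x) + jv(alpha + 1, x))

# Derivative relation: 2 J_a'(x) = J_{a-1}(x) - J_{a+1}(x)
print(2 * jvp(alpha, x), jv(alpha - 1, x) - jv(alpha + 1, x))
```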
Using the previous relations, one can arrive at similar relations for the spherical Bessel functions:
{\displaystyle {\frac {2\alpha +1}{x}}j_{\alpha }(x)=j_{\alpha -1}+j_{\alpha +1}}
and
{\displaystyle {\frac {dj_{\alpha }(x)}{dx}}=j_{\alpha -1}-{\frac {\alpha +1}{x}}j_{\alpha }}
Modified Bessel functions follow similar relations:
{\displaystyle e^{\left({\frac {x}{2}}\right)\left(t+{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }I_{n}(x)t^{n}}
and
{\displaystyle e^{z\cos \theta }=I_{0}(z)+2\sum _{n=1}^{\infty }I_{n}(z)\cos n\theta }
and
{\displaystyle {\frac {1}{2\pi }}\int _{0}^{2\pi }e^{z\cos(m\theta )+y\cos \theta }d\theta =I_{0}(z)I_{0}(y)+2\sum _{n=1}^{\infty }I_{n}(z)I_{mn}(y).}
The recurrence relation reads
{\displaystyle {\begin{aligned}C_{\alpha -1}(x)-C_{\alpha +1}(x)&={\frac {2\alpha }{x}}C_{\alpha }(x),\\[1ex]C_{\alpha -1}(x)+C_{\alpha +1}(x)&=2{\frac {d}{dx}}C_{\alpha }(x),\end{aligned}}}
where Cα denotes Iα or e^(αiπ)Kα. These recurrence relations are useful for discrete diffusion problems.
=== Transcendence ===
In 1929, Carl Ludwig Siegel proved that Jν(x), J'ν(x), and the logarithmic derivative J'ν(x)/Jν(x) are transcendental numbers when ν is rational and x is algebraic and nonzero. The same proof also implies that
{\displaystyle \Gamma (v+1)(2/x)^{v}J_{v}(x)}
is transcendental under the same assumptions.
=== Sums with Bessel functions ===
The product of two Bessel functions admits the following sums:
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{n-\nu }(y)=J_{n}(x+y),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(y)=J_{n}(y-x).}
From these equalities it follows that
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(x)=\delta _{n,0}}
and as a consequence
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }^{2}(x)=1.}
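Because Jν(x) decays rapidly once |ν| exceeds x, a modest truncation of these sums already reproduces the Kronecker delta and the normalization. A sketch (x = 3.7 is an arbitrary sample value):

```python
import numpy as np
from scipy.special import jv

x = 3.7
nu = np.arange(-40, 41)                     # ample truncation for x = 3.7

print(np.sum(jv(nu, x) * jv(nu + 2, x)))    # ~0   (delta_{n,0} with n = 2)
print(np.sum(jv(nu, x) ** 2))               # ~1.0
```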
These sums can be extended to include a term multiplier that is a polynomial function of the index. For example,
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,1}+\delta _{n,-1}\right),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }^{2}(x)=0,}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,-1}-\delta _{n,1}\right)+{\frac {x^{2}}{4}}\left(\delta _{n,-2}+2\delta _{n,0}+\delta _{n,2}\right),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }^{2}(x)={\frac {x^{2}}{2}}.}
== Multiplication theorem ==
The Bessel functions obey a multiplication theorem
{\displaystyle \lambda ^{-\nu }J_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(1-\lambda ^{2}\right)z}{2}}\right)^{n}J_{\nu +n}(z),}
where λ and ν may be taken as arbitrary complex numbers. For |λ² − 1| < 1, the above expression also holds if J is replaced by Y. The analogous identities for modified Bessel functions and |λ² − 1| < 1 are
{\displaystyle \lambda ^{-\nu }I_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}I_{\nu +n}(z)}
and
{\displaystyle \lambda ^{-\nu }K_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}K_{\nu +n}(z).}
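The series converges quickly when |λ² − 1| is small. A numerical sketch of the Jν version (the sample values λ = 0.8, ν = 1.5, z = 2.0 and the 30-term truncation are illustrative):

```python
import numpy as np
from scipy.special import jv, factorial

lam, nu, z = 0.8, 1.5, 2.0
n = np.arange(30)
series = np.sum(((1 - lam ** 2) * z / 2) ** n / factorial(n) * jv(nu + n, z))
print(lam ** (-nu) * jv(nu, lam * z), series)   # the two values agree closely
```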
== Zeros of the Bessel function ==
=== Bourget's hypothesis ===
Bessel himself originally proved that for nonnegative integers n, the equation Jn(x) = 0 has an infinite number of solutions in x. When the functions Jn(x) are plotted on the same graph, though, none of the zeros seem to coincide for different values of n except for the zero at x = 0. This phenomenon is known as Bourget's hypothesis after the 19th-century French mathematician who studied Bessel functions. Specifically it states that for any integers n ≥ 0 and m ≥ 1, the functions Jn(x) and Jn + m(x) have no common zeros other than the one at x = 0. The hypothesis was proved by Carl Ludwig Siegel in 1929.
=== Transcendence ===
Siegel proved in 1929 that when ν is rational, all nonzero roots of Jν(x) and J'ν(x) are transcendental, as are all the roots of Kν(x). It is also known that all roots of the higher derivatives
{\displaystyle J_{\nu }^{(n)}(x)}
for n ≤ 18 are transcendental, except for the special values
{\displaystyle J_{1}^{(3)}(\pm {\sqrt {3}})=0}
and
{\displaystyle J_{0}^{(4)}(\pm {\sqrt {3}})=0}.
=== Numerical approaches ===
For numerical studies about the zeros of the Bessel function, see Gil, Segura & Temme (2007), Kravanja et al. (1998) and Moler (2004).
=== Numerical values ===
The first zeros of J0 (i.e., j0,1, j0,2 and j0,3) occur at arguments of approximately 2.40483, 5.52008 and 8.65373, respectively.
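For comparison, these zeros are available directly from standard numerical libraries; a sketch using SciPy:

```python
from scipy.special import jn_zeros

print(jn_zeros(0, 3))   # [2.40482556 5.52007811 8.65372791]
```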
== History ==
=== Waves and elasticity problems ===
The first appearance of a Bessel function is in the work of Daniel Bernoulli in 1732, on the analysis of a vibrating string, a problem tackled earlier by his father Johann Bernoulli. Daniel considered a flexible chain suspended from a fixed point above and free at its lower end. The solution of the differential equation led to the introduction of a function now recognized as
{\displaystyle J_{0}(x)}
. Bernoulli also developed a method to find the zeros of the function.
In 1736, Leonhard Euler found a link between other functions (now known as Laguerre polynomials) and Bernoulli's solution. Euler also considered a non-uniform chain, which led to the introduction of functions now related to the modified Bessel functions
{\displaystyle I_{n}(x)}.
In the middle of the eighteenth century, Jean le Rond d'Alembert found a formula to solve the wave equation. By 1771 there was a dispute between Bernoulli, Euler, d'Alembert, and Joseph-Louis Lagrange on the nature of the solutions of vibrating strings.
Euler worked in 1778 on buckling, introducing the concept of Euler's critical load. To solve the problem he introduced the series for
{\displaystyle J_{\pm 1/3}(x)}
. Euler also worked out the solutions of vibrating 2D membranes in cylindrical coordinates in 1780. In order to solve his differential equation, he introduced a power series associated with
{\displaystyle J_{n}(x)}
, for integer n.
Toward the end of the 18th century, Lagrange, Pierre-Simon Laplace and Marc-Antoine Parseval also found equivalents to the Bessel functions. Parseval, for example, found an integral representation of
{\displaystyle J_{0}(x)}
using the cosine function.
At the beginning of the 1800s, Joseph Fourier used
{\displaystyle J_{0}(x)}
to solve the heat equation in a problem with cylindrical symmetry. Fourier won a prize of the French Academy of Sciences for this work in 1811, but most of the details, including his use of Fourier series, remained unpublished until 1822. Poisson, in rivalry with Fourier, extended Fourier's work in 1823, introducing new properties of Bessel functions, including Bessel functions of half-integer order (now known as spherical Bessel functions).
=== Astronomical problems ===
In 1770, Lagrange introduced the series expansion of Bessel functions to solve Kepler's equation, a transcendental equation in astronomy. Friedrich Wilhelm Bessel had seen Lagrange's solution but found it difficult to handle. In 1813, in a letter to Carl Friedrich Gauss, Bessel simplified the calculation using trigonometric functions. Bessel published his work in 1819, independently introducing the method of Fourier series, unaware of Fourier's work, which was published later.
In 1824, Bessel carried out a systematic investigation of the functions, which earned the functions his name. In older literature the functions were called cylindrical functions or even Bessel–Fourier functions.
== See also ==
== Notes ==
== References ==
== External links == | Wikipedia/Bessel_function_of_the_first_kind |
Sparrow's resolution limit is an estimate of the angular resolution limit of an optical instrument.
== Rayleigh criterion ==
When a star is observed with a telescope, the light is diffracted or spread apart into an Airy disk. The resolution limit is defined as the minimum angular separation between two stars that can still be perceived as separate by an observer. The angular diameter of the Airy disk is determined by the aperture of the instrument.
Rayleigh's resolution limit is reached when the two stars are separated by the theoretical radius of the first dark interval around the Airy disk, which is larger than the disk's apparent radius, so that a distinct dark gap appears between the two disks. Most astronomers say they can still distinguish two stars that are closer than Rayleigh's resolution limit. Sparrow's resolution limit is reached when the combined light from two overlapping and equally bright Airy disks is constant along the line between the central peaks of the two disks. However, at the Sparrow resolution limit the two Airy disks appear to be just touching at their edges, which according to Sparrow is due to a brightness-contrast response of the eye. The same reasoning applies to the resolution of two wavelengths in a spectroscope, where lines of emission or absorption have a diffraction-induced width analogous to the diameter of an Airy disk.
Sparrow's resolution limit is nearly equivalent to the theoretical diffraction limit of resolution, the wavelength of light divided by the aperture diameter, and about 20% smaller than the Rayleigh limit. For example, in a 200 mm (eight-inch) telescope, Rayleigh's resolution limit is 0.69 arc seconds, while Sparrow's resolution limit is 0.54 arc seconds.
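The quoted figures follow from the usual small-angle formulas. A sketch, assuming an observing wavelength of 550 nm and the commonly quoted coefficients of 1.22 λ/D for Rayleigh and approximately 0.95 λ/D for Sparrow:

```python
import math

wavelength = 550e-9                      # assumed observing wavelength, metres
aperture = 0.200                         # 200 mm telescope aperture, metres
to_arcsec = 180 / math.pi * 3600

print(1.22 * wavelength / aperture * to_arcsec)   # ~0.69 arcsec (Rayleigh)
print(0.95 * wavelength / aperture * to_arcsec)   # ~0.54 arcsec (Sparrow)
```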
== Dawes' limit ==
Sparrow's resolution limit was derived in 1916 from photographic experiments with simulated spectroscopic lines and is most commonly applied in spectroscopy, microscopy and photography. The Dawes resolution limit is more often used in visual double star astronomy.
== Sparrow criterion ==
The Sparrow criterion expresses the resolution limit in terms of the joint intensity curve when observing two very closely separated wavelengths of equal intensity.
They are considered resolved when the intensity at the midpoint between the peaks shows a minimum.
== References ==
| Wikipedia/Sparrow's_resolution_limit |
In graph drawing, the angular resolution of a drawing of a graph is the sharpest angle formed by any two edges that meet at a common vertex of the drawing.
== Properties ==
=== Relation to vertex degree ===
Formann et al. (1993) observed that every straight-line drawing of a graph with maximum degree d has angular resolution at most 2π/d: if v is a vertex of degree d, then the edges incident to v partition the space around v into d wedges with total angle 2π, and the smallest of these wedges must have an angle of at most 2π/d. More strongly, if a graph is d-regular, it must have angular resolution less than
{\displaystyle {\frac {\pi }{d-1}}}
, because this is the best resolution that can be achieved for a vertex on the convex hull of the drawing.
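Measuring the angular resolution of a given straight-line drawing is straightforward: at each vertex, sort the directions of the incident edges and take the smallest gap between consecutive directions. A minimal sketch (the function and data names are hypothetical):

```python
import math

def angular_resolution(pos, adj):
    """Smallest angle between consecutive edge directions over all vertices."""
    best = math.tau
    for v, neighbours in adj.items():
        if len(neighbours) < 2:
            continue
        angles = sorted(math.atan2(pos[u][1] - pos[v][1],
                                   pos[u][0] - pos[v][0]) for u in neighbours)
        gaps = [b - a for a, b in zip(angles, angles[1:])]
        gaps.append(math.tau - (angles[-1] - angles[0]))   # wrap-around wedge
        best = min(best, min(gaps))
    return best

# A degree-4 star drawn with equally spaced edges attains the 2*pi/d bound:
pos = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (-1, 0), 4: (0, -1)}
adj = {0: [1, 2, 3, 4]}
print(angular_resolution(pos, adj), math.tau / 4)   # both ~1.5708
```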
=== Relation to graph coloring ===
As Formann et al. (1993) showed, the largest possible angular resolution of a graph G is closely related to the chromatic number of the square G², the graph on the same vertex set in which pairs of vertices are connected by an edge whenever their distance in G is at most two. If G² can be colored with χ colors, then G may be drawn with angular resolution π/χ − ε, for any ε > 0, by assigning distinct colors to the vertices of a regular χ-gon and placing each vertex of G close to the polygon vertex with the same color. Using this construction, they showed that every graph with maximum degree d has a drawing with angular resolution proportional to 1/d². This bound is close to tight: they used the probabilistic method to prove the existence of graphs with maximum degree d whose drawings all have angular resolution
{\displaystyle O\left({\frac {\log d}{d^{2}}}\right)}
.
=== Existence of optimal drawings ===
Formann et al. (1993) provided an example showing that there exist graphs that do not have a drawing achieving the maximum possible angular resolution; instead, these graphs have a family of drawings whose angular resolutions tend towards some limiting value without reaching it. Specifically, they exhibited an 11-vertex graph that has drawings of angular resolution π/3 − ε for any ε > 0, but that does not have a drawing of angular resolution exactly π/3.
== Special classes of graphs ==
=== Trees ===
Every tree may be drawn in such a way that the edges are equally spaced around each vertex, a property known as perfect angular resolution. Moreover, if the edges may be freely permuted around each vertex, then such a drawing is possible, without crossings, with all edges unit length or higher, and with the entire drawing fitting within a bounding box of polynomial area. However, if the cyclic ordering of the edges around each vertex is already determined as part of the input to the problem, then achieving perfect angular resolution with no crossings may sometimes require exponential area.
=== Outerplanar graphs ===
Perfect angular resolution is not always possible for outerplanar graphs, because vertices on the convex hull of the drawing with degree greater than one cannot have their incident edges equally spaced around them. Nonetheless, every outerplanar graph of maximum degree d has an outerplanar drawing with angular resolution proportional to 1/d.
=== Planar graphs ===
For planar graphs with maximum degree d, the square-coloring technique of Formann et al. (1993) provides a drawing with angular resolution proportional to 1/d, because the square of a planar graph must have chromatic number proportional to d. More precisely, Wegner conjectured in 1977 that the chromatic number of the square of a planar graph is at most
{\displaystyle \max \left(d+5,{\frac {3d}{2}}+1\right)}
, and it is known that the chromatic number is at most
{\displaystyle {\frac {5d}{3}}+O(1)}
. However, the drawings resulting from this technique are generally not planar.
For some planar graphs, the optimal angular resolution of a planar straight-line drawing is O(1/d³), where d is the degree of the graph. Additionally, such a drawing may be forced to use very long edges, longer by an exponential factor than the shortest edges in the drawing.
Malitz & Papakostas (1994) used the circle packing theorem and ring lemma to show that every planar graph with maximum degree d has a planar drawing whose angular resolution is at worst an exponential function of d, independent of the number of vertices in the graph.
== Computational complexity ==
It is NP-hard to determine whether a given graph of maximum degree d has a drawing with angular resolution 2π/d, even in the special case that d = 4. However, for certain restricted classes of drawings, including drawings of trees in which extending the leaves to infinity produces a convex subdivision of the plane as well as drawings of planar graphs in which each bounded face is a centrally-symmetric polygon, a drawing of optimal angular resolution may be found in polynomial time.
== History ==
Angular resolution was first defined by Formann et al. (1993).
Although originally defined only for straight-line drawings of graphs, later authors have also investigated the angular resolution of drawings in which the edges are polygonal chains, circular arcs, or spline curves.
The angular resolution of a graph is closely related to its crossing resolution, the angle formed by crossings in a drawing of the graph. In particular, RAC drawing seeks to ensure that these angles are all right angles, the largest crossing angle possible.
== Notes ==
== References == | Wikipedia/Angular_resolution_(graph_drawing) |
Super-resolution microscopy is a series of techniques in optical microscopy that produce images with resolutions higher than the one imposed by the diffraction limit of light. Super-resolution imaging techniques rely on the near-field (photon-tunneling microscopy as well as those that use the Pendry Superlens and near field scanning optical microscopy) or on the far-field. Among techniques that rely on the latter are those that improve the resolution only modestly (up to about a factor of two) beyond the diffraction limit, such as confocal microscopy with closed pinhole or aided by computational methods such as deconvolution or detector-based pixel reassignment (e.g. re-scan microscopy, pixel reassignment), the 4Pi microscope, and structured-illumination microscopy technologies such as SIM and SMI.
There are two major groups of methods for super-resolution microscopy in the far-field that can improve the resolution by a much larger factor:
Deterministic super-resolution: the most commonly used emitters in biological microscopy, fluorophores, show a nonlinear response to excitation, which can be exploited to enhance resolution. Such methods include STED, GSD, RESOLFT and SSIM.
Stochastic super-resolution: the chemical complexity of many molecular light sources gives them a complex temporal behavior, which can be used to make several nearby fluorophores emit light at separate times and thereby become resolvable in time. These methods include super-resolution optical fluctuation imaging (SOFI) and all single-molecule localization methods (SMLM), such as SPDM, SPDMphymod, PALM, FPALM, STORM, and dSTORM.
On 8 October 2014, the Nobel Prize in Chemistry was awarded to Eric Betzig, W.E. Moerner and Stefan Hell for "the development of super-resolved fluorescence microscopy", which brings "optical microscopy into the nanodimension". The different modalities of super-resolution microscopy are increasingly being adopted by the biomedical research community, and these techniques are becoming indispensable tools to understanding biological function at the molecular level.
== History ==
By 1978, the first theoretical ideas had been developed to break the Abbe limit, which called for using a 4Pi microscope as a confocal laser-scanning fluorescence microscope where the light is focused from all sides to a common focus that is used to scan the object by 'point-by-point' excitation combined with 'point-by-point' detection.
However the publication from 1978 had drawn an improper physical conclusion (i.e. a point-like spot of light) and had completely missed the axial resolution increase as the actual benefit of adding the other side of the solid angle.
Some of the following information was gathered (with permission) from a chemistry blog's review of sub-diffraction microscopy techniques.
In 1986, a super-resolution optical microscope based on stimulated emission was patented by Okhonin.
== Super-resolution techniques ==
=== Photon tunneling microscopy (PTM) ===
=== Local enhancement / ANSOM / optical nano-antennas ===
=== Near-field optical random mapping (NORM) microscopy ===
Near-field optical random mapping (NORM) microscopy is a method of optical near-field acquisition by a far-field microscope through the observation of nanoparticles' Brownian motion in an immersion liquid.
NORM uses object surface scanning by stochastically moving nanoparticles. Through the microscope, nanoparticles look like symmetric round spots. The spot width is equivalent to the point spread function (~ 250 nm) and is defined by the microscope resolution. Lateral coordinates of the given particle can be evaluated with a precision much higher than the resolution of the microscope. By collecting the information from many frames one can map out the near field intensity distribution across the whole field of view of the microscope. In comparison with NSOM and ANSOM this method does not require any special equipment for tip positioning and has a large field of view and a depth of focus. Due to the large number of scanning "sensors" one can achieve image acquisition in a shorter time.
=== 4Pi ===
A 4Pi microscope is a laser-scanning fluorescence microscope with an improved axial resolution. The typical value of 500–700 nm can be improved to 100–150 nm, which corresponds to an almost spherical focal spot with 5–7 times less volume than that of standard confocal microscopy.
The improvement in resolution is achieved by using two opposing objective lenses, both of which are focused to the same geometric location. Also, the difference in optical path length through each of the two objective lenses is carefully minimized. By this, molecules residing in the common focal area of both objectives can be illuminated coherently from both sides, and the reflected or emitted light can be collected coherently, i.e. coherent superposition of emitted light on the detector is possible. The solid angle
{\displaystyle \Omega }
that is used for illumination and detection is increased and approaches the ideal case, where the sample is illuminated and detected from all sides simultaneously.
Up to now, the best quality in a 4Pi microscope has been reached in conjunction with STED microscopy in fixed cells and RESOLFT microscopy with switchable proteins in living cells.
=== Structured illumination microscopy (SIM) ===
Structured illumination microscopy (SIM) enhances spatial resolution by collecting information from frequency space outside the observable region. This process is done in reciprocal space: the Fourier transform (FT) of an SI image contains superimposed additional information from different areas of reciprocal space; with several frames where the illumination is shifted by some phase, it is possible to computationally separate and reconstruct the FT image, which has much more resolution information. The reverse FT returns the reconstructed image to a super-resolution image.
SIM could potentially replace electron microscopy as a tool for some medical diagnoses. These include diagnosis of kidney disorders, kidney cancer, and blood diseases.
Although the term "structured illumination microscopy" was coined by others in later years, Guerra (1995) first published results in which light patterned by a 50 nm pitch grating illuminated a second grating of pitch 50 nm, with the gratings rotated with respect to each other by the angular amount needed to achieve magnification. Although the illuminating wavelength was 650 nm, the 50 nm grating was easily resolved. This showed a nearly 5-fold improvement over the Abbe resolution limit of 232 nm that should have been the smallest obtained for the numerical aperture and wavelength used. In further development of this work, Guerra showed that super-resolved lateral topography is attained by phase-shifting the evanescent field. Several U.S. patents were issued to Guerra individually, or with colleagues, and assigned to the Polaroid Corporation. Licenses to this technology were procured by Dyer Energy Systems, Calimetrics Inc., and Nanoptek Corp. for use of this super-resolution technique in optical data storage and microscopy.
Images of cell nuclei and mitotic stages recorded with 3D-SIM.
=== Spatially modulated illumination (SMI) ===
One implementation of structured illumination is known as spatially modulated illumination (SMI). Like standard structured illumination, the SMI technique modifies the point spread function (PSF) of a microscope in a suitable manner. In this case however, "the optical resolution itself is not enhanced"; instead structured illumination is used to maximize the precision of distance measurements of fluorescent objects, to "enable size measurements at molecular dimensions of a few tens of nanometers".
The Vertico SMI microscope achieves structured illumination by using one or two opposing interfering laser beams along the axis. The object being imaged is then moved in high-precision steps through the wave field, or the wave field itself is moved relative to the object by phase shifts. This results in an improved axial size and distance resolution.
SMI can be combined with other super-resolution technologies, for instance with 3D LIMON or LSI-TIRF as a total internal reflection interferometer with laterally structured illumination (this last instrument and technique is essentially a phase-shifted photon tunneling microscope, which employs a total internal reflection light microscope with a phase-shifted evanescent field; Guerra, 1996). This SMI technique allows one to acquire light-optical images of autofluorophore distributions in sections from human eye tissue with previously unmatched optical resolution. Use of three different excitation wavelengths (488, 568, and 647 nm) enables one to gather spectral information about the autofluorescence signal. This has been used to examine human eye tissue affected by macular degeneration.
=== Biosensing ===
Biosensing is crucial for understanding the activities of cellular components in cell biology. Genetically encoded sensors have transformed this field and typically consist of two parts: the sensing domain, which detects cellular activity or interactions, and the reporting domain, which produces measurable signals. There are two main types of sensors: FRET-based sensors using two fluorophores for precise quantification but with some limitations, and single-fluorophore biosensors that are smaller, faster, and allow for multiplexed experiments, but may have challenges in obtaining absolute values and detecting response saturation. Various microscopy methods, including super-resolution optical fluctuation imaging, have been used to quantify and monitor biological activities in real time. Examples include calcium, pH, and voltage sensing. Greenwald et al. offer a more comprehensive overview of these applications.
== Deterministic functional techniques ==
REversible Saturable OpticaL Fluorescence Transitions (RESOLFT) microscopy is an optical microscopy technique with very high resolution that can image details in samples that cannot be imaged with conventional or confocal microscopy. Within RESOLFT the principles of STED microscopy and GSD microscopy are generalized. Also, there are techniques with other concepts than RESOLFT or SSIM, for example fluorescence microscopy using the optical AND-gate property of the nitrogen-vacancy center, or super-resolution by stimulated emission of thermal radiation (SETR), which uses the intrinsic super-linearities of black-body radiation and expands the concept of super-resolution beyond microscopy.
=== Stimulated emission depletion (STED) ===
Stimulated emission depletion microscopy (STED) uses two laser pulses: the excitation pulse for excitation of the fluorophores to their fluorescent state and the STED pulse for the de-excitation of fluorophores by means of stimulated emission. In practice, the excitation laser pulse is first applied, whereupon a STED pulse soon follows (STED without pulses using continuous-wave lasers is also used). Furthermore, the STED pulse is modified in such a way that it features a zero-intensity spot that coincides with the excitation focal spot. Due to the non-linear dependence of the stimulated emission rate on the intensity of the STED beam, all the fluorophores around the focal excitation spot will be in their off state (the ground state of the fluorophores). By scanning this focal spot, one retrieves the image. The full width at half maximum (FWHM) of the point spread function (PSF) of the excitation focal spot can theoretically be compressed to an arbitrary width by raising the intensity of the STED pulse, according to equation (1).
{\displaystyle \Delta r\approx {\frac {\Delta }{\sqrt {1+I_{\max }/I_{s}}}}}
(1)
where ∆r is the lateral resolution, ∆ is the FWHM of the diffraction limited PSF, Imax is the peak intensity of the STED laser, and
{\displaystyle I_{s}}
is the threshold intensity needed in order to achieve saturated emission depletion.
The main disadvantage of STED, which has prevented its widespread use, is that the machinery is complicated. On the one hand, the image acquisition speed is relatively slow for large fields of view because of the need to scan the sample in order to retrieve an image. On the other hand, it can be very fast for smaller fields of view: recordings of up to 80 frames per second have been shown. Due to a large Is value associated with STED, there is the need for a high-intensity excitation pulse, which may cause damage to the sample.
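Equation (1) in code form (a sketch; the diffraction-limited FWHM of 250 nm and the intensity ratios are illustrative):

```python
def sted_resolution(delta_nm, i_max, i_sat):
    """Effective lateral resolution under STED depletion, per equation (1)."""
    return delta_nm / (1 + i_max / i_sat) ** 0.5

for ratio in (0, 10, 100, 1000):                  # I_max / I_s
    print(ratio, sted_resolution(250, ratio, 1))  # 250, ~75, ~25, ~7.9 nm
```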
=== Ground state depletion (GSD) ===
Ground state depletion microscopy (GSD microscopy) uses the triplet state of a fluorophore as the off-state and the singlet state as the on-state, whereby an excitation laser is used to drive the fluorophores at the periphery of the singlet state molecule to the triplet state. This is much like STED, where the off-state is the ground state of fluorophores, which is why equation (1) also applies in this case. The
{\displaystyle I_{s}}
value is smaller than in STED, making super-resolution imaging possible at a much smaller laser intensity. Compared to STED, though, the fluorophores used in GSD are generally less photostable; and the saturation of the triplet state may be harder to realize.
=== Saturated structured illumination microscopy (SSIM) ===
Saturated structured-illumination microscopy (SSIM) exploits the nonlinear dependence of the emission rate of fluorophores on the intensity of the excitation laser. By applying a sinusoidal illumination pattern with a peak intensity close to that needed in order to saturate the fluorophores in their fluorescent state, one retrieves Moiré fringes. The fringes contain high order spatial information that may be extracted by computational techniques. Once the information is extracted, a super-resolution image is retrieved.
SSIM requires shifting the illumination pattern multiple times, effectively limiting the temporal resolution of the technique. In addition there is the need for very photostable fluorophores, due to the saturating conditions, which inflict radiation damage on the sample and restrict the possible applications for which SSIM may be used.
Examples of this microscopy are shown under section Structured illumination microscopy (SIM): images of cell nuclei and mitotic stages recorded with 3D-SIM Microscopy.
== Stochastic functional techniques ==
=== Localization microscopy ===
Single-molecule localization microscopy (SMLM) summarizes all microscopical techniques that achieve super-resolution by isolating emitters and fitting their images with the point spread function (PSF). Normally, the width of the point spread function (~ 250 nm) limits resolution. However, given an isolated emitter, one is able to determine its location with a precision only limited by its intensity according to equation (2).
{\displaystyle \Delta \mathrm {loc} \approx {\frac {\Delta }{\sqrt {N}}}}
(2)
where Δloc is the localization precision, Δ is the FWHM (full width at half maximum) of the PSF and N is the number of collected photons.
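Equation (2) in code form (a sketch; a 250 nm FWHM is a typical diffraction-limited value and the photon counts are illustrative):

```python
fwhm_nm = 250
for n_photons in (100, 1000, 10000):
    print(n_photons, fwhm_nm / n_photons ** 0.5)   # 25, ~7.9, 2.5 nm
```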
This fitting process can only be performed reliably for isolated emitters (see Deconvolution), and interesting biological samples are so densely labeled with emitters that fitting is impossible when all emitters are active at the same time. SMLM techniques solve this dilemma by activating only a sparse subset of emitters at the same time, localizing these few emitters very precisely, deactivating them and activating another subset.
Considering background and camera pixelation, and using a Gaussian approximation for the point spread function (the Airy disk) of a typical microscope, the theoretical localization precision was proposed by Thompson et al. and fine-tuned by Mortensen et al.:
{\displaystyle \sigma ^{2}={\frac {\sigma _{PSF}^{2}+a^{2}/12}{N_{sig}}}({\frac {16}{9}}+{\frac {8\pi N_{bg}(\sigma _{PSF}^{2}+a^{2}/12)}{N_{sig}a^{2}}})}
where
* σ is the Gaussian standard deviation of the center locations of the same molecule if measured multiple times (e.g. frames of a video). (unit m)
* σPSF is the Gaussian standard deviation of the point spread function, whose FWHM follows the Ernst Abbe equation d = λ/(2 N.A.). (unit m)
* a is the size of each image pixel. (unit m)
* Nsig is the photon counts of the total PSF over all pixels of interest. (unitless)
* Nbg is the average background photon count per pixel (dark counts already removed). It is approximated by σbg2, the square of the Gaussian standard deviation of the Poisson-distributed background noise of each pixel over time, or of all pixels with background noise only. The larger σbg2, the better the approximation (e.g. good for σbg2 > 10, excellent for σbg2 > 1000). (unitless)
* Resolution FWHM is ~2.355 times the Gaussian standard deviation.
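The formula translates directly into code. A sketch (the function name and sample values are illustrative; inputs in metres and photon counts):

```python
import math

def localization_sigma(sigma_psf, a, n_sig, n_bg):
    """Localization standard deviation per the refined formula above."""
    s2 = sigma_psf ** 2 + a ** 2 / 12
    return math.sqrt(s2 / n_sig * (16 / 9 + 8 * math.pi * n_bg * s2 / (n_sig * a ** 2)))

# e.g. sigma_PSF = 110 nm, 100 nm pixels, 1000 signal photons, 5 background photons/pixel
print(localization_sigma(110e-9, 100e-9, 1000, 5))   # ~5e-9 m, i.e. ~5 nm precision
```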
Generally, localization microscopy is performed with fluorophores. Suitable fluorophores (e.g. for STORM) reside in a non-fluorescent dark state for most of the time and are activated stochastically, typically with an excitation laser of low intensity. A readout laser stimulates fluorescence and bleaches or photoswitches the fluorophores back to a dark state, typically within 10–100 ms. In Points Accumulation for Imaging in Nanoscale Topography (PAINT), the fluorophores are nonfluorescent before binding and afterwards become fluorescent. The photons emitted during the fluorescent phase are collected with a camera and the resulting image of the fluorophore (which is distorted by the PSF) can be fitted with very high precision, even on the order of a few Angstroms. Repeating the process several thousand times ensures that all fluorophores can go through the bright state and are recorded. A computer then reconstructs a super-resolved image.
The desirable traits of fluorophores used for these methods, in order to maximize the resolution, are that they should be bright. That is, they should have a high extinction coefficient and a high quantum yield. They should also possess a high contrast ratio (ratio between the number of photons emitted in the light state and the number of photons emitted in the dark state). Also, a densely labeled sample is desirable, according to the Nyquist criteria.
The multitude of localization microscopy methods differ mostly in the type of fluorophores used.
==== Spectral precision distance microscopy (SPDM) ====
A single, tiny source of light can be located much better than the resolution of a microscope usually allows for: although the light will produce a blurry spot, computer algorithms can be used to accurately calculate the center of the blurry spot, taking into account the point spread function of the microscope, the noise properties of the detector, etc. However, this approach does not work when there are too many sources close to each other: the sources then all blur together.
Spectral precision distance microscopy (SPDM) is a family of localizing techniques in fluorescence microscopy which gets around the problem of there being many sources by measuring just a few sources at a time, so that each source is "optically isolated" from the others (i.e., separated by more than the microscope's resolution, typically ~200-250 nm). This "optical isolation" requires that the particles under examination have different spectral signatures, so that it is possible to look at light from just a few molecules at a time by using the appropriate light sources and filters. This achieves an effective optical resolution several times better than the conventional optical resolution that is represented by the half-width of the main maximum of the effective point image function.
The structural resolution achievable using SPDM can be expressed in terms of the smallest measurable distance between two punctiform particles of different spectral characteristics ("topological resolution"). Modeling has shown that under suitable conditions regarding the precision of localization, particle density, etc., the "topological resolution" corresponds to a "space frequency" that, in terms of the classical definition, is equivalent to a much improved optical resolution. Molecules can also be distinguished in even more subtle ways based on fluorescent lifetime and other techniques.
An important application is in genome research (study of the functional organization of the genome). Another important area of use is research into the structure of membranes.
===== SPDMphymod =====
Localization microscopy for many standard fluorescent dyes like GFP, Alexa dyes, and fluorescein molecules is possible if certain photo-physical conditions are present. With this so-called physically modifiable fluorophores (SPDMphymod) technology, a single laser wavelength of suitable intensity is sufficient for nanoimaging in contrast to other localization microscopy technologies that need two laser wavelengths when special photo-switchable/photo-activatable fluorescence molecules are used. A further example of the use of SPDMphymod is an analysis of Tobacco mosaic virus (TMV) particles or the study of virus–cell interaction.
SPDMphymod relies on ongoing singlet–triplet state transitions: a single molecule first enters a very long-lived reversible dark state (with a half-life of up to several seconds), from which it returns to a fluorescent state, emitting many photons for several milliseconds, before it falls into a very long-lived, so-called irreversible dark state. SPDMphymod microscopy uses fluorescent molecules that emit light at the same spectral frequency but with different spectral signatures based on their flashing characteristics. By combining two thousand images of the same cell, it is possible, using laser-optical precision measurements, to record localization images with significantly improved optical resolution.
Standard fluorescent dyes already successfully used with the SPDMphymod technology are GFP, RFP, YFP, Alexa 488, Alexa 568, Alexa 647, Cy2, Cy3, Atto 488 and fluorescein.
==== Cryogenic optical localization in 3D (COLD) ====
Cryogenic Optical Localization in 3D (COLD) is a method that allows localizing multiple fluorescent sites within a single small- to medium-sized biomolecule with Angstrom-scale resolution. The localization precision in this approach is enhanced because the slower photochemistry at low temperatures leads to a higher number of photons that can be emitted from each fluorophore before photobleaching. Consequently, cryogenic stochastic localization microscopy achieves the sub-molecular resolution required to resolve the 3D positions of several fluorophores attached to a small protein. By employing algorithms known from electron microscopy, the 2D projections of fluorophores are reconstructed into a 3D configuration. COLD brings fluorescence microscopy to its fundamental limit, depending on the size of the label. The method can also be combined with other structural biology techniques—such as X-ray crystallography, magnetic resonance spectroscopy, and electron microscopy—to provide valuable complementary information and specificity.
==== Binding-activated localization microscopy (BALM) ====
Binding-activated localization microscopy (BALM) is a general concept for single-molecule localization microscopy (SMLM): super-resolved imaging of DNA-binding dyes based on modifying the properties of DNA and a dye. By careful adjustment of the chemical environment, leading to local, reversible DNA melting and hybridization control over the fluorescence signal, DNA-binding dye molecules can be introduced. Intercalating and minor-groove binding DNA dyes can be used to register and optically isolate only a few DNA-binding dye signals at a time. DNA structure fluctuation-assisted BALM (fBALM) has been used to image nanoscale differences in nuclear architecture, with an anticipated structural resolution of approximately 50 nm. Recently, the significant enhancement of the fluorescence quantum yield of NIAD-4 upon binding to an amyloid was exploited for BALM imaging of amyloid fibrils and oligomers.
==== STORM, PALM, and FPALM ====
Stochastic optical reconstruction microscopy (STORM), photo activated localization microscopy (PALM), and fluorescence photo-activation localization microscopy (FPALM) are super-resolution imaging techniques that use sequential activation and time-resolved localization of photoswitchable fluorophores to create high resolution images. During imaging, only an optically resolvable subset of fluorophores is activated to a fluorescent state at any given moment, such that the position of each fluorophore can be determined with high precision by finding the centroid positions of the single-molecule images of a particular fluorophore. One subset of fluorophores is subsequently deactivated, and another subset is activated and imaged. Iteration of this process allows numerous fluorophores to be localized and a super-resolution image to be constructed from the image data.
These three methods were published independently over a short period of time, and their principles are identical. STORM was originally described using Cy5 and Cy3 dyes attached to nucleic acids or proteins, while PALM and FPALM were described using photoswitchable fluorescent proteins. In principle any photoswitchable fluorophore can be used, and STORM has been demonstrated with a variety of different probes and labeling strategies. Using stochastic photoswitching of single fluorophores, such as Cy5, STORM can be performed with a single red laser excitation source. The red laser both switches the Cy5 fluorophore to a dark state by formation of an adduct and subsequently returns the molecule to the fluorescent state. Many other dyes have been also used with STORM.
In addition to single fluorophores, dye-pairs consisting of an activator fluorophore (such as Alexa 405, Cy2, or Cy3) and a photoswitchable reporter dye (such as Cy5, Alexa 647, Cy5.5, or Cy7) can be used with STORM. In this scheme, the activator fluorophore, when excited near its absorption maximum, serves to reactivate the photoswitchable dye to the fluorescent state. Multicolor imaging has been performed by using different activation wavelengths to distinguish dye-pairs, depending on the activator fluorophore used, or using spectrally distinct photoswitchable fluorophores, either with or without activator fluorophores. Photoswitchable fluorescent proteins can be used as well. Highly specific labeling of biological structures with photoswitchable probes has been achieved with antibody staining, direct conjugation of proteins, and genetic encoding.
STORM has also been extended to three-dimensional imaging using optical astigmatism, in which the elliptical shape of the point spread function encodes the x, y, and z positions for samples up to several micrometers thick, and has been demonstrated in living cells. To date, the spatial resolution achieved by this technique is ~20 nm in the lateral dimensions and ~50 nm in the axial dimension; and the temporal resolution is as fast as 0.1–0.33s.
==== Points accumulation for imaging in nanoscale topography (PAINT) ====
Points accumulation for imaging in nanoscale topography (PAINT) is a single-molecule localization method that achieves stochastic single-molecule fluorescence by molecular adsorption/absorption and photobleaching/desorption. The first dye used was Nile red which is nonfluorescent in aqueous solution but fluorescent when inserted into a hydrophobic environment, such as micelles or living cell walls. Thus, the concentration of the dye is kept small, at the nanomolar level, so that the molecule's sorption rate to the diffraction-limited area is in the millisecond region. The stochastic binding of single-dye molecules (probes) to an immobilized target can be spatially and temporally resolved under a typical widefield fluorescence microscope. Each dye is photobleached to return the field to a dark state, so the next dye can bind and be observed. The advantage of this method, compared to other stochastic methods, is that in addition to obtaining the super-resolved image of the fixed target, it can measure the dynamic binding kinetics of the diffusing probe molecules, in solution, to the target.
Combining a 3D super-resolution technique (e.g., the double-helix point spread function developed in Moerner's group), photo-activated dyes, power-dependent active intermittency, and points accumulation for imaging in nanoscale topography, SPRAIPAINT (SPRAI = Super resolution by PoweR-dependent Active Intermittency) can super-resolve live-cell walls. PAINT works by maintaining a balance between the dye adsorption/absorption and photobleaching/desorption rates. This balance can be estimated with statistical principles. The adsorption or absorption rate of a dilute solute to a surface or interface in a gas or liquid solution can be calculated using Fick's laws of diffusion. The photobleaching/desorption rate can be measured for a given solution condition and illumination power density.
DNA-PAINT has been further extended to use regular dyes, where the dynamic binding and unbinding of a dye-labeled DNA probe to a fixed DNA origami is used to achieve stochastic single-molecule imaging. DNA-PAINT is no longer limited to environment-sensitive dyes and can measure both the adsorption and the desorption kinetics of the probes to the target. The method uses the camera blurring effect of moving dyes. When a regular dye is diffusing in the solution, its image on a typical CCD camera is blurred because of its relatively fast speed and the relatively long camera exposure time, contributing to the fluorescence background. However, when it binds to a fixed target, the dye stops moving; and clear input into the point spread function can be achieved.
The term for this method is mbPAINT ("mb" standing for motion blur). When a total internal reflection fluorescence microscope (TIRF) is used for imaging, the excitation depth is limited to ~100 nm from the substrate, which further reduces the fluorescence background from the blurred dyes near the substrate and the background in the bulk solution. Very bright dyes can be used for mbPAINT which gives typical single-frame spatial resolutions of ~20 nm and single-molecule kinetic temporal resolutions of ~20 ms under relatively mild photoexcitation intensities, which is useful in studying molecular separation of single proteins.
By using a secondary DNA strand that couples to the primary (antibody-conjugated) strand, the fluorescent label can be gently stripped, allowing multiplexed localization of 30 different proteins. This method, called SUM-PAINT, has been used to map the localization of synaptic proteins at 5 nm resolution, revealing differences in the architecture of excitatory, inhibitory and mixed synapses.
The temporal resolution has been further improved (20 times) using a rotational phase mask placed in the Fourier plane during data acquisition and resolving the distorted point spread function that contains temporal information. The method was named Super Temporal-Resolved Microscopy (STReM).
==== Label-free localization microscopy ====
Optical resolution of cellular structures in the range of about 50 nm can be achieved, even in label-free cells, using localization microscopy SPDM.
By using two different laser wavelengths, SPDM reveals cellular objects that are not detectable under conventional fluorescence wide-field imaging conditions, besides yielding a substantial resolution improvement of autofluorescent structures.
As a control, the positions of the detected objects in the localization image match those in the bright-field image.
Label-free superresolution microscopy has also been demonstrated using the fluctuations of a surface-enhanced Raman scattering signal on a highly uniform plasmonic metasurface.
==== Direct stochastical optical reconstruction microscopy (dSTORM) ====
dSTORM uses the photoswitching of a single fluorophore. In dSTORM, fluorophores are embedded in a reducing and oxidizing buffering system (ROXS) and fluorescence is excited. Sometimes, stochastically, a fluorophore will enter a triplet or some other dark state that is sensitive to the oxidation state of the buffer, from which it can be made to fluoresce again, so that single-molecule positions can be recorded. Development of the dSTORM method occurred at three independent laboratories at about the same time and was also called "reversible photobleaching microscopy" (RPM) and "ground state depletion microscopy followed by individual molecule return" (GSDIM), as well as the now generally accepted moniker dSTORM.
==== Software for localization microscopy ====
Localization microscopy depends heavily on software that can precisely fit the point spread function (PSF) to millions of images of active fluorophores within a few minutes. Since the classical analysis methods and software suites used in the natural sciences are too slow to computationally solve these problems, often taking hours of computation for processing data measured in minutes, specialised software programs have been developed. Many of these localization software packages are open-source; they are listed at SMLM Software Benchmark. Once molecule positions have been determined, the locations need to be displayed and several algorithms for display have been developed.
=== Random Illumination Microscopy (RIM) ===
Random Illumination Microscopy (RIM) is a super-resolution imaging technique that employs random or pseudo-random wide-field illuminations generated by a laser. This method enables the reconstruction of a high-resolution image from multiple low-resolution frames captured under varying, unknown illumination patterns, achieving resolutions down to 90 nanometers. RIM is particularly advantageous for imaging thick, living samples due to its minimal phototoxicity and robust z-sectioning capabilities. Additionally, its resistance to optical aberrations makes it a highly effective tool for biological research.
=== Super-resolution optical fluctuation imaging (SOFI) ===
It is possible to circumvent the need for PSF fitting inherent in single molecule localization microscopy (SMLM) by directly computing the temporal autocorrelation of pixels. This technique is called super-resolution optical fluctuation imaging (SOFI) and has been shown to be more precise than SMLM when the density of concurrently active fluorophores is very high.
=== Omnipresent Localization Microscopy (OLM) ===
Omnipresent localization microscopy (OLM) is an extension of single-molecule localization microscopy (SMLM) techniques that allows high-density single-molecule imaging with an incoherent light source (such as a mercury-arc lamp) and a conventional epifluorescence microscope setup. A short burst of deep-blue excitation (at 350–380 nm, instead of a 405 nm laser) enables a prolonged reactivation of molecules, for a resolution of 90 nm on test specimens. Finally, correlative STED and SMLM imaging can be performed on the same biological sample using a simple imaging medium, which can provide a basis for further enhanced resolution. These findings can democratize super-resolution imaging and help any scientist to generate high-density single-molecule images even on a limited budget.
=== Resolution Enhancement by Sequential Imaging (RESI) ===
Resolution enhancement by sequential imaging (RESI) is an extension of DNA-PAINT that can achieve theoretically unlimited resolution. Rather than using one label type to identify a given target species, copies of the same target are labeled with orthogonal DNA sequences. Upon sequential (i.e. separated) imaging, localization clouds that would overlap in conventional SMLM can be (1) resolved and (2) combined into a single "super" localization, the precision of which scales with the underlying number of localizations. As the number of achievable localizations in DNA-PAINT is unlimited, so is the theoretical resolution of RESI. Overlaying the RESI localizations from the underlying imaging rounds creates a composite, highly resolved image.
== Combination of techniques ==
=== 3D light microscopical nanosizing (LIMON) microscopy ===
Light MicrOscopical Nanosizing microscopy (3D LIMON) images, using the Vertico SMI microscope, are made possible by the combination of SMI and SPDM, whereby first the SMI, and then the SPDM, process is applied.
The SMI process determines the center of particles and their spread in the direction of the microscope axis. While the center of particles/molecules can be determined with a precision of 1–2 nm, the spread around this point can be determined down to an axial diameter of approximately 30–40 nm.
Subsequently, the lateral position of the individual particle/molecule is determined using SPDM, achieving a precision of a few nanometers.
As a biological application in the 3D dual-color mode, the spatial arrangements of Her2/neu and Her3 clusters were determined. The positions of the protein clusters in all three directions could be determined with an accuracy of about 25 nm.
=== Integrated correlative light and electron microscopy ===
Combining a super-resolution microscope with an electron microscope enables the visualization of contextual information, with the labelling provided by fluorescence markers. This overcomes the problem of the black backdrop that the researcher is left with when using only a light microscope. In an integrated system, the sample is measured by both microscopes simultaneously.
== Enhancing of techniques using neural networks ==
Recently, owing to advances in artificial-intelligence computing, deep neural networks such as generative adversarial networks (GANs) have been used for super-resolution enhancement of photographic images extracted from optical microscopes, enhancing resolution from 40x to 100x. Resolution increases from 20x with an optical microscope to 1500x, comparable to a scanning electron microscope, have been demonstrated via a neural lens. These techniques have applications in super-resolving images from positron-emission tomography and fluorescence microscopy.
== See also ==
Correlative light-electron microscopy
Deconvolution
Multifocal plane microscopy (MUM)
Photoactivatable probes
Photoactivated localization microscopy (PALM)
Stimulated emission depletion microscope (STED)
Super-resolution imaging
Video super resolution
== References ==
== Further reading ==
Marx V (December 2013) [26 November 2013]. "Is super-resolution microscopy right for you?". Technology Feature. Nature Methods (Paper "Nature Reprint Collection, Technology Features"). 10 (12): 1157–63. doi:10.1038/nmeth.2756. PMID 24296472. S2CID 1004998.
Cremer C, Masters BR (April 2013). "Resolution enhancement techniques in microscopy". The European Physical Journal H. 38 (3): 281–344. Bibcode:2013EPJH...38..281C. doi:10.1140/epjh/e2012-20060-1. | Wikipedia/Super-resolution_microscopy |
The applied element method (AEM) is a numerical analysis used in predicting the continuum and discrete behavior of structures. The modeling method in AEM adopts the concept of discrete cracking allowing it to automatically track structural collapse behavior passing through all stages of loading: elastic, crack initiation and propagation in tension-weak materials, reinforcement yield, element separation, element contact and collision, as well as collision with the ground and adjacent structures.
== History ==
Exploration of the approach employed in the applied element method began in 1995 at the University of Tokyo as part of Dr. Hatem Tagel-Din's research studies. The term "applied element method" itself, however, was first coined in 2000 in a paper called "Applied element method for structural analysis: Theory and application for linear materials". Since then AEM has been the subject of research by a number of academic institutions and the driving factor in real-world applications. Research has verified its accuracy for: elastic analysis; crack initiation and propagation; estimation of failure loads of reinforced concrete structures; reinforced concrete structures under cyclic loading; buckling and post-buckling behavior; nonlinear dynamic analysis of structures subjected to severe earthquakes; fault-rupture propagation; nonlinear behavior of brick structures; and the analysis of glass fiber reinforced polymer (GFRP) walls under blast loads.
== Technical discussion ==
In AEM, the structure is divided virtually and modeled as an assemblage of relatively small elements. The elements are then connected through a set of normal and shear springs located at contact points distributed along the element faces. Normal and shear springs are responsible for the transfer of normal and shear stresses from one element to the next.
=== Element generation and formulation ===
The modeling of objects in AEM is very similar to modeling objects in FEM. Each object is divided into a series of elements connected and forming a mesh. The main difference between AEM and FEM, however, is how the elements are joined together. In AEM the elements are connected by a series of non-linear springs representing the material behavior.
There are three types of springs used in AEM:
Matrix Springs: Matrix springs connect two elements together representing the main material properties of the object.
Reinforcing Bar Springs: Reinforcement springs are used to implicitly represent additional reinforcement bars running through the object without adding additional elements to the analysis.
Contact Springs: Contact Springs are generated when two elements collide with each other or the ground. When this occurs three springs are generated (Shear Y, Shear X and Normal).
=== Automatic element separation ===
When the average strain value at the element face reaches the separation strain, all springs at this face are removed and elements are no longer connected until a collision occurs, at which point they collide together as rigid bodies.
Separation strain represents the strain at which adjacent elements are totally separated at the connecting face. This parameter is not available in the elastic material model. For concrete, all springs between the adjacent faces including reinforcement bar springs are cut. If the elements meet again, they will behave as two different rigid bodies that have now contacted each other. For steel, the bars are cut if the stress point reaches ultimate stress or if the concrete reaches the separation strain.
=== Automatic element contact/collision ===
Contact or collision is detected without any user intervention. Elements are able to separate, contract and/or make contact with other elements. In AEM the three contact methods are corner-to-face, edge-to-edge, and corner-to-ground.
== Stiffness matrix ==
The spring stiffness in a 2D model can be calculated from the following equations:
{\displaystyle K_{n}={\frac {E\cdot T\cdot d}{a}}}
{\displaystyle K_{s}={\frac {G\cdot T\cdot d}{a}}}
where d is the distance between springs, T is the thickness of the element, a is the length of the representative area, E is the Young's modulus, and G is the shear modulus of the material. The above equations indicate that each spring represents the stiffness of an area (T·d) within the length of the studied material.
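As an illustration, the following minimal Python sketch evaluates these two formulas for one spring pair; the numeric inputs are made-up, roughly concrete-like values, not data from the AEM literature.

def spring_stiffness(E, G, T, d, a):
    """Normal and shear stiffness of one AEM spring pair.

    E: Young's modulus, G: shear modulus, T: element thickness,
    d: distance between springs, a: length of the representative area.
    Each spring represents the stiffness of the material area T*d."""
    K_n = E * T * d / a
    K_s = G * T * d / a
    return K_n, K_s

# Illustrative (assumed) values: E = 30 GPa, G = 12.5 GPa, a 0.2 m thick
# element with springs every 0.05 m over a 0.1 m representative length.
K_n, K_s = spring_stiffness(E=30e9, G=12.5e9, T=0.2, d=0.05, a=0.1)
print(K_n, K_s)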
To model reinforcement bars embedded in concrete, a spring is placed inside the element at the location of the bar; the area (T·d) is replaced by the actual cross section area of the reinforcement bar. Similar to modeling embedded steel sections, the area (T·d) may be replaced by the area of the steel section represented by the spring.
Although each element moves as a rigid body, its internal deformations are represented by the deformation of the springs around it. This means that the element shape does not change during the analysis, but the assembly of elements as a whole is deformable.
The two elements are assumed to be connected by only one pair of normal and shear springs. To have a general stiffness matrix, the locations of element and contact springs are assumed in a general position. The stiffness matrix components corresponding to each degree of freedom are determined by assuming a unit displacement in the studied direction and by determining forces at the centroid of each element. The 2D element stiffness matrix size is 6 × 6; the components of the upper left quarter of the stiffness matrix are shown below:
{\displaystyle {\begin{bmatrix}\sin ^{2}(\theta +\alpha )K_{n}&-K_{n}\sin(\theta +\alpha )\cos(\theta +\alpha )&\cos(\theta +\alpha )K_{s}L\sin(\alpha )\\+\cos ^{2}(\theta +\alpha )K_{s}&+K_{s}\sin(\theta +\alpha )\cos(\theta +\alpha )&-\sin(\theta +\alpha )K_{n}L\cos(\alpha )\\\\-K_{n}\sin(\theta +\alpha )\cos(\theta +\alpha )&\sin ^{2}(\theta +\alpha )K_{s}&\cos(\theta +\alpha )K_{n}L\cos(\alpha )\\+K_{s}\sin(\theta +\alpha )\cos(\theta +\alpha )&+\cos ^{2}(\theta +\alpha )K_{n}&+\sin(\theta +\alpha )K_{s}L\sin(\alpha )\\\\\cos(\theta +\alpha )K_{s}L\sin(\alpha )&\cos(\theta +\alpha )K_{n}L\cos(\alpha )&L^{2}\cos ^{2}(\alpha )K_{n}\\-\sin(\theta +\alpha )K_{n}L\cos(\alpha )&+\sin(\theta +\alpha )K_{s}L\sin(\alpha )&+L^{2}\sin ^{2}(\alpha )K_{s}\end{bmatrix}}}
The stiffness matrix depends on the contact spring stiffness and the spring location. The stiffness matrix is for only one pair of contact springs. However, the global stiffness matrix is determined by summing up the stiffness matrices of individual pairs of springs around each element. Consequently, the developed stiffness matrix has total effects from all pairs of springs, according to the stress situation around the element. This technique can be used in both load and displacement control cases. The 3D stiffness matrix may be deduced similarly.
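The summation over spring pairs can be illustrated with a minimal Python sketch; the spring stiffnesses, spring positions and angles below are arbitrary made-up values, and only the upper-left quarter shown above is assembled.

import numpy as np

def quarter_stiffness(Kn, Ks, L, theta, alpha):
    """Upper-left 3x3 quarter of the 6x6 element stiffness matrix
    for one pair of normal/shear contact springs (see matrix above)."""
    s, c = np.sin(theta + alpha), np.cos(theta + alpha)
    sa, ca = np.sin(alpha), np.cos(alpha)
    return np.array([
        [s*s*Kn + c*c*Ks,       (-Kn + Ks)*s*c,          c*Ks*L*sa - s*Kn*L*ca],
        [(-Kn + Ks)*s*c,         s*s*Ks + c*c*Kn,        c*Kn*L*ca + s*Ks*L*sa],
        [c*Ks*L*sa - s*Kn*L*ca,  c*Kn*L*ca + s*Ks*L*sa,  (L*ca)**2*Kn + (L*sa)**2*Ks],
    ])

# Sum the contributions of several spring pairs around the element
# (the angles theta here are arbitrary; in practice they follow the contact points).
K = sum(quarter_stiffness(Kn=1e9, Ks=4e8, L=0.1, theta=th, alpha=0.2)
        for th in np.linspace(0.0, 1.0, 10))
print(K)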
== Applications ==
The applied element method is currently being used in the following applications:
Structural vulnerability assessment
Progressive collapse
Blast analysis
Impact analysis
Seismic analysis
Forensic engineering
Performance based design
Demolition analysis
Glass performance analysis
Visual effects
== See also ==
Building implosion
Earthquake engineering
Extreme Loading for Structures
Failure analysis
Multidisciplinary design optimization
Physics engine
Progressive collapse
Shear modulus
Structural engineering
Young's modulus
== References ==
== Further reading ==
Applied Element Method
Extreme Loading for Structures - Applied Element Method
Multi-disciplinary design optimization (MDO) is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines. It is also known as multidisciplinary system design optimization (MSDO), and multidisciplinary design analysis and optimization (MDAO).
MDO allows designers to incorporate all relevant disciplines simultaneously. The optimum of the simultaneous problem is superior to the design found by optimizing each discipline sequentially, since it can exploit the interactions between the disciplines. However, including all disciplines simultaneously significantly increases the complexity of the problem.
These techniques have been used in a number of fields, including automobile design, naval architecture, electronics, architecture, computers, and electricity distribution. However, the largest number of applications have been in the field of aerospace engineering, such as aircraft and spacecraft design. For example, the proposed Boeing blended wing body (BWB) aircraft concept has used MDO extensively in the conceptual and preliminary design stages. The disciplines considered in the BWB design are aerodynamics, structural analysis, propulsion, control theory, and economics.
== History ==
Traditionally engineering has normally been performed by teams, each with expertise in a specific discipline, such as aerodynamics or structures. Each team would use its members' experience and judgement to develop a workable design, usually sequentially. For example, the aerodynamics experts would outline the shape of the body, and the structural experts would be expected to fit their design within the shape specified. The goals of the teams were generally performance-related, such as maximum speed, minimum drag, or minimum structural weight.
Between 1970 and 1990, two major developments in the aircraft industry changed the approach of aircraft design engineers to their design problems. The first was computer-aided design, which allowed designers to quickly modify and analyse their designs. The second was changes in the procurement policy of most airlines and military organizations, particularly the military of the United States, from a performance-centred approach to one that emphasized lifecycle cost issues. This led to an increased concentration on economic factors and the attributes known as the "ilities" including manufacturability, reliability, maintainability, etc.
Since 1990, the techniques have expanded to other industries. Globalization has resulted in more distributed, decentralized design teams. The high-performance personal computer has largely replaced the centralized supercomputer and the Internet and local area networks have facilitated sharing of design information. Disciplinary design software in many disciplines (such as OptiStruct or NASTRAN, a finite element analysis program for structural design) have become very mature. In addition, many optimization algorithms, in particular the population-based algorithms, have advanced significantly.
=== Origins in structural optimization ===
Whereas optimization methods are nearly as old as calculus, dating back to Isaac Newton, Leonhard Euler, Daniel Bernoulli, and Joseph Louis Lagrange, who used them to solve problems such as the shape of the catenary curve, numerical optimization reached prominence in the digital age. Its systematic application to structural design dates to its advocacy by Schmit in 1960. The success of structural optimization in the 1970s motivated the emergence of multidisciplinary design optimization (MDO) in the 1980s. Jaroslaw Sobieski championed decomposition methods specifically designed for MDO applications. The following synopsis focuses on optimization methods for MDO. First, the popular gradient-based methods used by the early structural optimization and MDO community are reviewed. Then those methods developed in the last dozen years are summarized.
=== Gradient-based methods ===
There were two schools of structural optimization practitioners using gradient-based methods during the 1960s and 1970s: optimality criteria and mathematical programming. The optimality criteria school derived recursive formulas based on the Karush–Kuhn–Tucker (KKT) necessary conditions for an optimal design. The KKT conditions were applied to classes of structural problems such as minimum weight design with constraints on stresses, displacements, buckling, or frequencies [Rozvany, Berke, Venkayya, Khot, et al.] to derive resizing expressions particular to each class. The mathematical programming school employed classical gradient-based methods to structural optimization problems. The method of usable feasible directions, Rosen's gradient projection (generalized reduced gradient) method, sequential unconstrained minimization techniques, sequential linear programming and eventually sequential quadratic programming methods were common choices. Schittkowski et al. reviewed the methods current by the early 1990s.
The gradient methods unique to the MDO community derive from the combination of optimality criteria with math programming, first recognized in the seminal work of Fleury and Schmit who constructed a framework of approximation concepts for structural optimization. They recognized that optimality criteria were so successful for stress and displacement constraints, because that approach amounted to solving the dual problem for Lagrange multipliers using linear Taylor series approximations in the reciprocal design space. In combination with other techniques to improve efficiency, such as constraint deletion, regionalization, and design variable linking, they succeeded in uniting the work of both schools. This approximation concepts based approach forms the basis of the optimization modules in modern structural design software.
Approximations for structural optimization were initiated by the reciprocal approximation of Schmit and Miura for stress and displacement response functions. Other intermediate variables were employed for plates. Combining linear and reciprocal variables, Starnes and Haftka developed a conservative approximation to improve buckling approximations. Fadel chose an appropriate intermediate design variable for each function based on a gradient matching condition for the previous point. Vanderplaats initiated a second generation of high quality approximations when he developed the force approximation as an intermediate response approximation to improve the approximation of stress constraints. Canfield developed a Rayleigh quotient approximation to improve the accuracy of eigenvalue approximations. Barthelemy and Haftka published a comprehensive review of approximations in 1993.
=== Non-gradient-based methods ===
In recent years, non-gradient-based evolutionary methods including genetic algorithms, simulated annealing, and ant colony algorithms came into existence. At present, many researchers are striving to arrive at a consensus regarding the best modes and methods for complex problems like impact damage, dynamic failure, and real-time analyses. For this purpose, researchers often employ multiobjective and multicriteria design methods.
=== Recent MDO methods ===
MDO practitioners have investigated optimization methods in several broad areas in the last dozen years. These include decomposition methods, approximation methods, evolutionary algorithms, memetic algorithms, response surface methodology, reliability-based optimization, and multi-objective optimization approaches.
The exploration of decomposition methods has continued in the last dozen years with the development and comparison of a number of approaches, classified variously as hierarchic and non hierarchic, or collaborative and non collaborative.
Approximation methods spanned a diverse set of approaches, including the development of approximations based on surrogate models (often referred to as metamodels), variable fidelity models, and trust region management strategies. The development of multipoint approximations blurred the distinction with response surface methods. Some of the most popular methods include Kriging and the moving least squares method.
Response surface methodology, developed extensively by the statistical community, received much attention in the MDO community in the last dozen years. A driving force for their use has been the development of massively parallel systems for high performance computing, which are naturally suited to distributing the function evaluations from multiple disciplines that are required for the construction of response surfaces. Distributed processing is particularly suited to the design process of complex systems in which analysis of different disciplines may be accomplished naturally on different computing platforms and even by different teams.
Evolutionary methods led the way in the exploration of non-gradient methods for MDO applications. They also have benefited from the availability of massively parallel high performance computers, since they inherently require many more function evaluations than gradient-based methods. Their primary benefit lies in their ability to handle discrete design variables and the potential to find globally optimal solutions.
Reliability-based optimization (RBO) is a growing area of interest in MDO. Like response surface methods and evolutionary algorithms, RBO benefits from parallel computation, because the numeric integration to calculate the probability of failure requires many function evaluations. One of the first approaches employed approximation concepts to integrate the probability of failure. The classical first-order reliability method (FORM) and second-order reliability method (SORM) are still popular. Professor Ramana Grandhi used appropriate normalized variables about the most probable point of failure, found by a two-point adaptive nonlinear approximation to improve the accuracy and efficiency. Southwest Research Institute has figured prominently in the development of RBO, implementing state-of-the-art reliability methods in commercial software. RBO has reached sufficient maturity to appear in commercial structural analysis programs like Altair's Optistruct and MSC's Nastran.
Utility-based probability maximization was developed in response to some logical concerns (e.g., Blau's Dilemma) with reliability-based design optimization. This approach focuses on maximizing the joint probability of both the objective function exceeding some value and of all the constraints being satisfied. When there is no objective function, utility-based probability maximization reduces to a probability-maximization problem. When there are no uncertainties in the constraints, it reduces to a constrained utility-maximization problem. (This second equivalence arises because the utility of a function can always be written as the probability of that function exceeding some random variable). Because it changes the constrained optimization problem associated with reliability-based optimization into an unconstrained optimization problem, it often leads to computationally more tractable problem formulations.
In the marketing field there is a huge literature about optimal design for multiattribute products and services, based on experimental analysis to estimate models of consumers' utility functions. These methods are known as Conjoint Analysis. Respondents are presented with alternative products, measuring preferences about the alternatives using a variety of scales and the utility function is estimated with different methods (varying from regression and surface response methods to choice models). The best design is formulated after estimating the model. The experimental design is usually optimized to minimize the variance of the estimators. These methods are widely used in practice.
== Problem formulation ==
Problem formulation is normally the most difficult part of the process. It is the selection of design variables, constraints, objectives, and models of the disciplines. A further consideration is the strength and breadth of the interdisciplinary coupling in the problem.
=== Design variables ===
A design variable is a specification that is controllable from the point of view of the designer. For instance, the thickness of a structural member can be considered a design variable. Another might be the choice of material. Design variables can be continuous (such as a wing span), discrete (such as the number of ribs in a wing), or Boolean (such as whether to build a monoplane or a biplane). Design problems with continuous variables are normally solved more easily.
Design variables are often bounded, that is, they often have maximum and minimum values. Depending on the solution method, these bounds can be treated as constraints or separately.
One of the important factors that needs to be accounted for is uncertainty. Uncertainty, often referred to as epistemic uncertainty, arises due to lack of knowledge or incomplete information. It is essentially an unknown quantity, but it may cause failure of the system.
=== Constraints ===
A constraint is a condition that must be satisfied in order for the design to be feasible. An example of a constraint in aircraft design is that the lift generated by a wing must be equal to the weight of the aircraft. In addition to physical laws, constraints can reflect resource limitations, user requirements, or bounds on the validity of the analysis models. Constraints can be used explicitly by the solution algorithm or can be incorporated into the objective using Lagrange multipliers.
=== Objectives ===
An objective is a numerical value that is to be maximized or minimized. For example, a designer may wish to maximize profit or minimize weight. Many solution methods work only with single objectives. When using these methods, the designer normally weights the various objectives and sums them to form a single objective. Other methods allow multiobjective optimization, such as the calculation of a Pareto front.
=== Models ===
The designer must also choose models to relate the constraints and the objectives to the design variables. These models are dependent on the discipline involved. They may be empirical models, such as a regression analysis of aircraft prices, theoretical models, such as from computational fluid dynamics, or reduced-order models of either of these. In choosing the models the designer must trade off fidelity with analysis time.
The multidisciplinary nature of most design problems complicates model choice and implementation. Often several iterations are necessary between the disciplines in order to find the values of the objectives and constraints. As an example, the aerodynamic loads on a wing affect the structural deformation of the wing. The structural deformation in turn changes the shape of the wing and the aerodynamic loads. Therefore, in analysing a wing, the aerodynamic and structural analyses must be run a number of times in turn until the loads and deformation converge.
=== Standard form ===
Once the design variables, constraints, objectives, and the relationships between them have been chosen, the problem can be expressed in the following form:
find {\displaystyle \mathbf {x} } that minimizes {\displaystyle J(\mathbf {x} )} subject to {\displaystyle \mathbf {g} (\mathbf {x} )\leq \mathbf {0} }, {\displaystyle \mathbf {h} (\mathbf {x} )=\mathbf {0} } and {\displaystyle \mathbf {x} _{lb}\leq \mathbf {x} \leq \mathbf {x} _{ub}},
where {\displaystyle J} is an objective, {\displaystyle \mathbf {x} } is a vector of design variables, {\displaystyle \mathbf {g} } is a vector of inequality constraints, {\displaystyle \mathbf {h} } is a vector of equality constraints, and {\displaystyle \mathbf {x} _{lb}} and {\displaystyle \mathbf {x} _{ub}} are vectors of lower and upper bounds on the design variables. Maximization problems can be converted to minimization problems by multiplying the objective by -1. Constraints can be reversed in a similar manner. Equality constraints can be replaced by two inequality constraints.
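As an illustration, the following minimal Python sketch expresses a toy problem in this standard form and solves it with SciPy's SLSQP solver. The objective and constraint are made-up for illustration, not a real multidisciplinary model; note that SciPy uses the opposite sign convention for inequality constraints (fun(x) >= 0).

import numpy as np
from scipy.optimize import minimize

# minimize J(x) = x0^2 + x1^2 subject to g(x) = 1 - x0 - x1 <= 0,
# with bounds 0 <= x <= 10 (toy problem for illustration only).
J = lambda x: x[0]**2 + x[1]**2
constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]  # SciPy: fun(x) >= 0

result = minimize(J, np.array([5.0, 5.0]), method="SLSQP",
                  bounds=[(0.0, 10.0), (0.0, 10.0)], constraints=constraints)
print(result.x)  # approximately [0.5, 0.5]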
== Problem solution ==
The problem is normally solved using appropriate techniques from the field of optimization. These include gradient-based algorithms, population-based algorithms, or others. Very simple problems can sometimes be expressed linearly; in that case the techniques of linear programming are applicable.
=== Gradient-based methods ===
Adjoint equation
Newton's method
Steepest descent
Conjugate gradient
Sequential quadratic programming
=== Gradient-free methods ===
Hooke-Jeeves pattern search
Nelder-Mead method
=== Population-based methods ===
Genetic algorithm
Memetic algorithm
Particle swarm optimization
Harmony search
ODMA
=== Other methods ===
Random search
Grid search
Simulated annealing
Direct search
IOSO (Indirect Optimization based on Self-Organization)
Most of these techniques require large numbers of evaluations of the objectives and the constraints. The disciplinary models are often very complex and can take significant amounts of time for a single evaluation. The solution can therefore be extremely time-consuming. Many of the optimization techniques are adaptable to parallel computing. Much current research is focused on methods of decreasing the required time.
Also, no existing solution method is guaranteed to find the global optimum of a general problem (see No free lunch in search and optimization). Gradient-based methods find local optima with high reliability but are normally unable to escape a local optimum. Stochastic methods, like simulated annealing and genetic algorithms, will find a good solution with high probability, but very little can be said about the mathematical properties of the solution. It is not guaranteed to even be a local optimum. These methods often find a different design each time they are run.
== See also ==
List of optimization software
ModeFRONTIER
pSeven
ModelCenter
OpenMDAO
== References ==
Avriel, M., Rijckaert, M.J. and Wilde, D.J. (eds.), Optimization and Design, Prentice-Hall, 1973.
Avriel, M. and Dembo, R.S. (eds.), Mathematical Programming Studies on Engineering Optimization, North-Holland, 1979.
Cramer, E.J., Dennis Jr., J.E., Frank, P.D., Lewis, R.M., and Shubin, G.R., Problem Formulation for Multidisciplinary Optimization, SIAM J. Optim., 4 (4): 754–776, 1994.
Deb, K. "Current trends in evolutionary multi-objective optimization", Int. J. Simul. Multi. Design Optim., 1 1 (2007) 1–8.
Lambe, A. B. and Martins, J. R. R. A. "Extensions to the design structure matrix for the description of multidisciplinary design, analysis, and optimization processes". Structural and Multidisciplinary Optimization, 46:273–284, August 2012. doi:10.1007/s00158-012-0763-y.
Siddall, J.N., Optimal Engineering Design, CRC, 1982.
Vanderplaats, G. N., Multidiscipline Design Optimization, Vanderplaatz R&D, Inc., 2007.
Viana, F.A.C., Simpson, T.W., Balabanov, V. and Toropov, V. "Metamodeling in multidisciplinary design optimization: How far have we really come?" AIAA Journal 52 (4) 670–690, 2014 (DOI: 10.2514/1.J052375)
== External links ==
Ansys optiSLang
In numerical analysis, a mixed finite element method is a variant of the finite element method in which extra fields to be solved for are introduced when posing the partial differential equation problem. Somewhat related is the hybrid finite element method. The extra fields may be constrained by using Lagrange multiplier fields. To be distinguished from the mixed finite element method, the more typical finite element methods that do not introduce such extra fields are also called irreducible or primal finite element methods. The mixed finite element method is efficient for some problems that would be numerically ill-posed if discretized by using the irreducible finite element method; one example of such problems is the computation of the stress and strain fields in an almost incompressible elastic body.
== Variations ==
=== Constraints ===
In constrained mixed methods, the Lagrange multiplier fields live inside the elements, usually enforcing the applicable partial differential equations. This results in a saddle point system having negative pivots and eigenvalues, rendering the system matrix indefinite, which complicates its solution. In sparse direct solvers, pivoting may be needed, where ultimately the resulting matrix has 2x2 blocks on the diagonal, rather than working towards a pure LL^H Cholesky decomposition for positive definite symmetric or Hermitian systems. Pivoting may result in unpredictable memory usage increases.[1] For iterative solvers, only GMRES based solvers work, rather than slightly "cheaper" MINRES based solvers.
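The indefiniteness can be seen on a tiny made-up example; the following minimal Python sketch builds a saddle-point (KKT) matrix from arbitrary illustrative blocks, shows that a negative eigenvalue appears, and shows that a plain Cholesky factorization consequently fails.

import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])        # primal block: symmetric positive definite
B = np.array([[1.0, 1.0]])        # one Lagrange-multiplier constraint row
K = np.block([[A, B.T],
              [B, np.zeros((1, 1))]])   # saddle-point (KKT) system matrix

print(np.linalg.eigvalsh(K))      # one eigenvalue is negative: K is indefinite
try:
    np.linalg.cholesky(K)         # a pure LL^T/LL^H factorization is impossible
except np.linalg.LinAlgError as err:
    print("Cholesky fails:", err)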
=== Hybrid Methods ===
In hybrid methods, the Lagrange multiplier fields act on the jumps of fields between elements, living on the boundaries of the elements and weakly enforcing continuity; continuity of the fields in the elements then no longer needs to be enforced through shared degrees of freedom between elements. Both mixing and hybridization can be applied simultaneously. These enforcements are "weak", i.e. they hold for the computed solutions, possibly only at some points or through matching moment integral conditions, rather than "strong", in which case the conditions are fulfilled directly by the type of solutions sought. Apart from the harmonics (usually semi-trivial local solutions to the homogeneous equations at zero loads), hybridization allows for static (Guyan) condensation of the discontinuous fields internal to the elements, reducing the number of degrees of freedom, and moreover reducing or eliminating the number of negative eigenvalues and pivots resulting from application of the mixed method.
== References ==
Unsteady flows are characterized as flows in which the properties of the fluid are time dependent. This is reflected in the governing equations by the presence of time derivatives of the properties.
To study the finite volume method for unsteady flow, we start from the governing equations.
== Governing Equation ==
The conservation equation for the transport of a scalar in unsteady flow has the general form
{\displaystyle {\frac {\partial \rho \phi }{\partial t}}+\operatorname {div} \left(\rho \phi \upsilon \right)=\operatorname {div} \left(\Gamma \operatorname {grad} \phi \right)+S_{\phi }}
where {\displaystyle \rho } is the density, {\displaystyle \phi } is the conserved form of the transported fluid property, {\displaystyle \Gamma } is the diffusion coefficient and {\displaystyle S_{\phi }} is the source term. The term {\displaystyle \operatorname {div} \left(\rho \phi \upsilon \right)} is the net rate of flow of {\displaystyle \phi } out of the fluid element (convection), {\displaystyle \operatorname {div} \left(\Gamma \operatorname {grad} \phi \right)} is the rate of increase of {\displaystyle \phi } due to diffusion, {\displaystyle S_{\phi }} is the rate of increase of {\displaystyle \phi } due to sources, and {\displaystyle {\frac {\partial \rho \phi }{\partial t}}} is the rate of increase of {\displaystyle \phi } of the fluid element (the transient term).
The first term of the equation reflects the unsteadiness of the flow and is absent in case of steady flows. The finite volume integration of the governing equation is carried out over a control volume and also over a finite time step ∆t.
{\displaystyle \int \limits _{cv}\!\!\!\int _{t}^{t+\Delta t}\left({\frac {\partial \rho \phi }{\partial t}}\,\mathrm {d} t\right)\,\mathrm {d} V+\int _{t}^{t+\Delta t}\!\!\!\int \limits _{A}\left(n\cdot {\rho \phi u}\,\mathrm {d} A\right)\,\mathrm {d} t=\int _{t}^{t+\Delta t}\!\!\!\int \limits _{A}\left(n\cdot \left(\Gamma \operatorname {grad} \phi \right)\,\mathrm {d} A\right)\,\mathrm {d} t+\int _{t}^{t+\Delta t}\!\!\!\int \limits _{cv}S_{\phi }\,\mathrm {d} V\,\mathrm {d} t}
The control volume integration of the steady part of the equation is similar to the integration of the steady-state governing equation. We need to focus on the integration of the unsteady component of the equation. To get a feel for the integration technique, we refer to the one-dimensional unsteady heat conduction equation.
{\displaystyle \rho c{\frac {\partial T}{\partial t}}={\frac {\partial }{\partial x}}\left(k{\frac {\partial T}{\partial x}}\right)+S}
{\displaystyle \int _{t}^{t+\Delta t}\!\!\!\int \limits _{cv}\rho c{\frac {\partial T}{\partial t}}\,\mathrm {d} V\,\mathrm {d} t=\int _{t}^{t+\Delta t}\!\!\!\int \limits _{cv}{\frac {\partial }{\partial x}}\left(k{\frac {\partial T}{\partial x}}\right)\,\mathrm {d} V\,\mathrm {d} t+\int _{t}^{t+\Delta t}\!\!\!\int \limits _{cv}S\,\mathrm {d} V\,\mathrm {d} t}
{\displaystyle \int _{e}^{w}\!\!\!\int _{t}^{t+\Delta t}\left(\rho c{\frac {\partial T}{\partial t}}\,\mathrm {d} t\right)\,\mathrm {d} V=\int _{t}^{t+\Delta t}\left[\left(kA{\frac {\partial T}{\partial x}}\right)_{e}-\left(kA{\frac {\partial T}{\partial x}}\right)_{w}\right]\,\mathrm {d} t+\int _{t}^{t+\Delta t}{\bar {S}}\Delta V\,\mathrm {d} t}
Now, assuming that the temperature at the node prevails over the entire control volume, the left-hand side of the equation can be written as
{\displaystyle \int \limits _{cv}\!\!\!\int _{t}^{t+\Delta t}\left(\rho c{\frac {\partial T}{\partial t}}\,\mathrm {d} t\right)\,\mathrm {d} V=\rho c\left(T_{P}-{T_{P}}^{0}\right)\Delta V}
By using a first-order backward differencing scheme, the discretised equation can be written as
{\displaystyle \rho c\left(T_{P}-{T_{P}}^{0}\right)\Delta V=\int _{t}^{t+\Delta t}\left[\left(K_{e}A{\frac {T_{E}-T_{P}}{\delta x_{PE}}}\right)-\left(K_{w}A{\frac {T_{P}-T_{W}}{\delta x_{WP}}}\right)\right]\,\mathrm {d} t+\int _{t}^{t+\Delta t}{\bar {S}}\Delta V\,\mathrm {d} t}
Now, to evaluate the right-hand side of the equation, we use a weighting parameter {\displaystyle \theta } between 0 and 1 and write the integration of {\displaystyle T_{P}} as
{\displaystyle I_{T}=\int _{t}^{t+\Delta t}T_{P}\,\mathrm {d} t=\left[\theta T_{P}+\left(1-\theta \right){T_{P}}^{0}\right]\Delta t}
Now, the exact form of the final discretised equation depends on the value of {\displaystyle \theta }. Since {\displaystyle \theta } ranges over {\displaystyle 0\leq \theta \leq 1}, the scheme used to calculate {\displaystyle T_{P}} depends on the value of {\displaystyle \theta }. Thus
{\displaystyle \rho c{\frac {\left(T_{P}-{T_{P}}^{0}\right)}{\Delta t}}\Delta x=\theta \left[\left(K_{e}{\frac {T_{E}-T_{P}}{\delta x_{PE}}}\right)-\left(K_{w}{\frac {T_{P}-T_{W}}{\delta x_{WP}}}\right)\right]+(1-\theta )\left[\left(K_{e}{\frac {{T_{E}}^{0}-{T_{P}}^{0}}{\delta x_{PE}}}\right)-\left(K_{w}{\frac {{T_{P}}^{0}-{T_{W}}^{0}}{\delta x_{WP}}}\right)\right]+{\bar {S}}\Delta x}
== Different Schemes ==
1. Explicit scheme: In the explicit scheme the source term is linearised as {\displaystyle b=S_{u}+{S_{P}}{T_{P}}^{0}}. We substitute {\displaystyle \theta =0} to get the explicit discretisation, i.e.:
{\displaystyle a_{P}T_{P}=a_{w}{T_{w}}^{0}+a_{e}{T_{e}}^{0}+\left[{a_{P}}^{0}-\left(a_{w}+a_{e}-S_{P}\right)\right]{T_{P}}^{0}+S_{u}}
where {\displaystyle a_{P}={a_{P}}^{0}}. One thing worth noting is that the right-hand side contains values at the old time step only, and hence the left-hand side can be calculated by marching forward in time. The scheme is based on backward differencing and its Taylor series truncation error is first order with respect to time. All coefficients need to be positive. For constant k and uniform grid spacing, {\displaystyle \delta x_{PE}=\delta x_{WP}=\Delta x}, this condition may be written as
{\displaystyle \rho c{\frac {\Delta x}{\Delta t}}>{\frac {2K}{\Delta x}}}
This inequality sets a stringent condition on the maximum time step that can be used and represents a serious limitation on the scheme. It becomes very expensive to improve the spatial accuracy, because the maximum possible time step needs to be reduced as the square of {\displaystyle \Delta x}.
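As an illustration, the following minimal Python sketch marches the explicit scheme on a one-dimensional rod while respecting this stability limit; the material data, grid and simplified boundary treatment are illustrative assumptions, not taken from a specific reference.

import numpy as np

L, n = 0.02, 20                     # rod length [m], number of control volumes
rho_c, k = 1.0e7, 10.0              # rho*c [J/(m^3 K)], conductivity [W/(m K)]
dx = L / n
dt_max = rho_c * dx**2 / (2.0 * k)  # maximum stable time step from the inequality
dt = 0.5 * dt_max                   # stay safely below the limit

T = np.full(n, 200.0)               # initial temperature field
T_left = 0.0                        # left end held cold; right end insulated (assumed)
for step in range(200):
    T_new = T.copy()
    for i in range(n):
        TW = T_left if i == 0 else T[i - 1]   # simplified boundary treatment
        TE = T[i] if i == n - 1 else T[i + 1]
        T_new[i] = T[i] + dt * k * (TE - 2.0 * T[i] + TW) / (rho_c * dx**2)
    T = T_new
print(T.round(1))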
2. Crank-Nicolson scheme: The Crank-Nicolson method results from setting {\displaystyle \theta ={\frac {1}{2}}}. The discretised unsteady heat conduction equation becomes
{\displaystyle a_{P}T_{P}=a_{E}\left[{\frac {T_{E}+{T_{E}}^{0}}{2}}\right]+a_{W}\left[{\frac {T_{W}+{T_{W}}^{0}}{2}}\right]+\left[{a_{P}}^{0}-{\frac {a_{E}}{2}}-{\frac {a_{W}}{2}}\right]{T_{P}}^{0}+b}
where
{\displaystyle a_{P}={\frac {a_{W}+a_{E}}{2}}+{a_{P}}^{0}-{\frac {S_{P}}{2}}}
Since more than one unknown value of T at the new time level is present in the equation, the method is implicit, and simultaneous equations for all node points need to be solved at each time step. Although schemes with {\displaystyle {\frac {1}{2}}\leq \theta \leq 1}, including the Crank-Nicolson scheme, are unconditionally stable for all values of the time step, it is more important to ensure that all coefficients are positive for physically realistic and bounded results. This is the case if the coefficient of {\displaystyle {T_{P}}^{0}} satisfies the following condition
{\displaystyle {a_{P}}^{0}>{\frac {a_{E}+a_{W}}{2}}}
which leads to
{\displaystyle \Delta t<\rho c{\frac {\Delta x^{2}}{K}}}
The Crank-Nicolson scheme is based on central differencing and hence is second-order accurate in time. The overall accuracy of a computation depends also on the spatial differencing practice, so the Crank-Nicolson scheme is normally used in conjunction with spatial central differencing.
3. Fully implicit scheme: When the value of {\displaystyle \theta } is set to 1 we get the fully implicit scheme. The discretised equation is:
{\displaystyle a_{P}T_{P}=a_{W}T_{W}+a_{E}T_{E}+{a_{P}}^{0}{T_{P}}^{0}+S_{u}}
where
{\displaystyle a_{P}={a_{P}}^{0}+a_{W}+a_{E}-S_{P}}
Both sides of the equation contain temperatures at the new time step, and a system of algebraic equations must be solved at each time level. The time marching procedure starts with a given initial field of temperatures {\displaystyle T^{0}}. The system of equations is solved after selecting time step {\displaystyle \Delta t}. Next, the solution {\displaystyle T} is assigned to {\displaystyle T^{0}} and the procedure is repeated to progress the solution by a further time step. It can be seen that all coefficients are positive, which makes the implicit scheme unconditionally stable for any size of time step. Since the accuracy of the scheme is only first-order in time, small time steps are needed to ensure the accuracy of results. The implicit method is recommended for general-purpose transient calculations because of its robustness and unconditional stability.
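A minimal Python sketch of the fully implicit time marching follows; grid, material data and the insulated-end boundary treatment are illustrative assumptions. The time step is deliberately chosen far above the explicit limit, which the implicit scheme tolerates without instability.

import numpy as np

n, dx, dt = 20, 0.001, 2.0          # 20 control volumes; dt >> explicit limit
rho_c, k = 1.0e7, 10.0
aE = aW = k / dx                    # neighbour coefficients (uniform grid)
aP0 = rho_c * dx / dt               # old-time-step coefficient

A = np.zeros((n, n))
for i in range(n):
    A[i, i] = aP0 + aE + aW         # a_P with S_P = 0 assumed
    if i > 0:
        A[i, i - 1] = -aW
    if i < n - 1:
        A[i, i + 1] = -aE
A[0, 0] -= aW                       # insulated ends (assumed): drop the missing
A[-1, -1] -= aE                     # neighbour conductances

T = np.full(n, 200.0)
T[n // 2] = 400.0                   # an initial hot spot
for step in range(50):
    b = aP0 * T                     # right-hand side a_P^0 T_P^0 (S_u = 0)
    T = np.linalg.solve(A, b)       # simultaneous equations at each time level
print(T.round(1))                   # the hot spot diffuses away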
== References ==
In computational modelling, multiphysics simulation (often shortened to simply "multiphysics") is defined as the simultaneous simulation of different aspects of a physical system or systems and the interactions among them. For example, simultaneous simulation of the physical stress on an object, the temperature distribution of the object and the thermal expansion which leads to the variation of the stress and temperature distributions would be considered a multiphysics simulation. Multiphysics simulation is related to multiscale simulation, which is the simultaneous simulation of a single process on either multiple time or distance scales.
As an interdisciplinary field, multiphysics simulation can span many science and engineering disciplines. Simulation methods frequently include numerical analysis, partial differential equations and tensor analysis.
== Multiphysics simulation process ==
The implementation of a multiphysics simulation follows a typical series of steps:
Identify the aspects of the system to be simulated, including physical processes, starting conditions, and the coupling or boundary conditions among these processes.
Create a discrete mathematical model of the system.
Numerically solve the model.
Process the resulting data.
== Mathematical models ==
Mathematical models used in multiphysics simulations are generally a set of coupled equations. The equations can be divided into three categories according to their nature and intended role: governing equations, auxiliary equations and boundary/initial conditions. A governing equation describes a major physical mechanism or process. Multiphysics simulations are numerically implemented with discretization methods such as the finite element method, finite difference method, or finite volume method.
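For example, the following minimal Python sketch shows the common partitioned (staggered) approach on a made-up one-degree-of-freedom thermo-mechanical toy model: the coupled equations are solved by iterating between the two physics until the exchanged quantities stop changing. All constants are illustrative assumptions.

alpha = 1.0e-5    # thermal expansion coefficient [1/K] (assumed)
E = 200.0e9       # Young's modulus [Pa] (assumed)
k_fb = 1.0e-12    # assumed feedback of stress on the heat source [K/Pa]
T_src = 350.0     # externally imposed temperature [K]

T, s = 300.0, 0.0
for iteration in range(50):
    T_new = T_src + k_fb * s                # "thermal" solve, with stress feedback
    s_new = E * alpha * (T_new - 300.0)     # "mechanical" solve: constrained rod
    if abs(T_new - T) < 1e-9 and abs(s_new - s) < 1e-3:
        break
    T, s = T_new, s_new
print(f"{iteration} iterations: T = {T:.4f} K, stress = {s / 1e6:.3f} MPa")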
== Challenges of multiphysics simulation ==
Generally speaking, multiphysics simulation is much harder than the simulation of the individual physical processes. The main additional issue is how to integrate the multiple aspects of the processes with proper handling of the interactions among them. Such issues become quite difficult when different types of numerical methods are used for the individual physical aspects, for example when a fluid-structure interaction problem is simulated with a typical Eulerian finite volume method for the flow and a Lagrangian finite element method for the structural dynamics.
== See also ==
Finite difference time-domain method
== References ==
Susan L. Graham, Marc Snir, and Cynthia A. Patterson (Editors), Getting Up to Speed: The Future of Supercomputing, Appendix D. The National Academies Press, Washington DC, 2004. ISBN 0-309-09502-6.
Paul Lethbridge, Multiphysics Analysis, p26, The Industrial Physicist, Dec 2004/Jan 2005, [1], Archived at: [2]
In numerical mathematics, the gradient discretisation method (GDM) is a framework which contains classical and recent numerical schemes for diffusion problems of various kinds: linear or non-linear, steady-state or time-dependent. The schemes may be conforming or non-conforming, and may rely on very general polygonal or polyhedral meshes (or may even be meshless).
Some core properties are required to prove the convergence of a GDM. These core properties enable complete proofs of convergence of the GDM for elliptic and parabolic problems, linear or non-linear. For linear problems, stationary or transient, error estimates can be established based on three indicators specific to the GDM (the quantities
{\displaystyle C_{D}}, {\displaystyle S_{D}} and {\displaystyle W_{D}}, see below). For non-linear problems, the proofs are based on compactness techniques and do not require any non-physical strong regularity assumption on the solution or the model data. Non-linear models for which such a convergence proof of the GDM has been carried out comprise: the Stefan problem which models a melting material, two-phase flows in porous media, the Richards equation of underground water flow, and the fully non-linear Leray–Lions equations.
Any scheme entering the GDM framework is then known to converge on all these problems. This applies in particular to conforming Finite Elements, Mixed Finite Elements, nonconforming Finite Elements, and, in the case of more recent schemes, the Discontinuous Galerkin method, the Hybrid Mixed Mimetic method, the Nodal Mimetic Finite Difference method, some Discrete Duality Finite Volume schemes, and some Multi-Point Flux Approximation schemes.
== The example of a linear diffusion problem ==
Consider Poisson's equation in a bounded open domain {\displaystyle \Omega \subset \mathbb {R} ^{d}}, with homogeneous Dirichlet boundary condition
{\displaystyle -\Delta {\overline {u}}=f,\qquad (1)}
where {\displaystyle f\in L^{2}(\Omega )}. The usual sense of weak solution to this model is:
{\displaystyle {\text{find }}{\overline {u}}\in H_{0}^{1}(\Omega ){\text{ such that, for all }}{\overline {v}}\in H_{0}^{1}(\Omega ),\ \int _{\Omega }\nabla {\overline {u}}(x)\cdot \nabla {\overline {v}}(x)\,\mathrm {d} x=\int _{\Omega }f(x){\overline {v}}(x)\,\mathrm {d} x.\qquad (2)}
In a nutshell, the GDM for such a model consists in selecting a finite-dimensional space and two reconstruction operators (one for the functions, one for the gradients) and to substitute these discrete elements in lieu of the continuous elements in (2). More precisely, the GDM starts by defining a Gradient Discretization (GD), which is a triplet
{\displaystyle D=(X_{D,0},\Pi _{D},\nabla _{D})}, where:
the set of discrete unknowns {\displaystyle X_{D,0}} is a finite dimensional real vector space,
the function reconstruction {\displaystyle \Pi _{D}~:~X_{D,0}\to L^{2}(\Omega )} is a linear mapping that reconstructs, from an element of {\displaystyle X_{D,0}}, a function over {\displaystyle \Omega },
the gradient reconstruction {\displaystyle \nabla _{D}~:~X_{D,0}\to L^{2}(\Omega )^{d}} is a linear mapping which reconstructs, from an element of {\displaystyle X_{D,0}}, a "gradient" (vector-valued function) over {\displaystyle \Omega }. This gradient reconstruction must be chosen such that {\displaystyle \Vert \nabla _{D}\cdot \Vert _{L^{2}(\Omega )^{d}}} is a norm on {\displaystyle X_{D,0}}.
The related Gradient Scheme for the approximation of (2) is given by: find {\displaystyle u\in X_{D,0}} such that
{\displaystyle \forall v\in X_{D,0},\ \int _{\Omega }\nabla _{D}u(x)\cdot \nabla _{D}v(x)\,\mathrm {d} x=\int _{\Omega }f(x)\Pi _{D}v(x)\,\mathrm {d} x.\qquad (3)}
The GDM is then in this case a nonconforming method for the approximation of (2), which includes the nonconforming finite element method. Note that the reciprocal is not true, in the sense that the GDM framework includes methods such that the function {\displaystyle \nabla _{D}u} cannot be computed from the function {\displaystyle \Pi _{D}u}.
The following error estimate, inspired by G. Strang's second lemma, holds
{\displaystyle \Vert \nabla {\overline {u}}-\nabla _{D}u\Vert _{L^{2}(\Omega )^{d}}\leq W_{D}(\nabla {\overline {u}})+2S_{D}({\overline {u}})\qquad (4)}
and
{\displaystyle \Vert {\overline {u}}-\Pi _{D}u\Vert _{L^{2}(\Omega )}\leq C_{D}W_{D}(\nabla {\overline {u}})+(C_{D}+2)S_{D}({\overline {u}}),\qquad (5)}
defining:
{\displaystyle C_{D}=\max _{v\in X_{D,0},\,v\neq 0}{\frac {\Vert \Pi _{D}v\Vert _{L^{2}(\Omega )}}{\Vert \nabla _{D}v\Vert _{L^{2}(\Omega )^{d}}}},\qquad (6)}
which measures the coercivity (discrete Poincaré constant),
{\displaystyle \forall \varphi \in H_{0}^{1}(\Omega ),\ S_{D}(\varphi )=\min _{v\in X_{D,0}}\left(\Vert \Pi _{D}v-\varphi \Vert _{L^{2}(\Omega )}+\Vert \nabla _{D}v-\nabla \varphi \Vert _{L^{2}(\Omega )^{d}}\right),\qquad (7)}
which measures the interpolation error,
{\displaystyle \forall \varphi \in H_{\operatorname {div} }(\Omega ),\ W_{D}(\varphi )=\max _{v\in X_{D,0},\,v\neq 0}{\frac {\left|\int _{\Omega }\left(\nabla _{D}v(x)\cdot \varphi (x)+\Pi _{D}v(x)\operatorname {div} \varphi (x)\right)\,\mathrm {d} x\right|}{\Vert \nabla _{D}v\Vert _{L^{2}(\Omega )^{d}}}},\qquad (8)}
which measures the defect of conformity.
Note that the following upper and lower bounds of the approximation error can be derived:
Then the core properties which are necessary and sufficient for the convergence of the method are, for a family of GDs, the coercivity, the GD-consistency and the limit-conformity properties, as defined in the next section. More generally, these three core properties are sufficient to prove the convergence of the GDM for linear problems and for some nonlinear problems like the {\displaystyle p}-Laplace problem. For nonlinear problems such as nonlinear diffusion or degenerate parabolic problems, two other core properties, defined in the next section, may be required.
== The core properties allowing for the convergence of a GDM ==
Let {\displaystyle (D_{m})_{m\in \mathbb {N} }} be a family of GDs, defined as above (generally associated with a sequence of regular meshes whose size tends to 0).
=== Coercivity ===
The sequence {\displaystyle (C_{D_{m}})_{m\in \mathbb {N} }} (defined by (6)) remains bounded.
=== GD-consistency ===
For all {\displaystyle \varphi \in H_{0}^{1}(\Omega )}, {\displaystyle \lim _{m\to \infty }S_{D_{m}}(\varphi )=0} (defined by (7)).
=== Limit-conformity ===
For all {\displaystyle \varphi \in H_{\operatorname {div} }(\Omega )}, {\displaystyle \lim _{m\to \infty }W_{D_{m}}(\varphi )=0} (defined by (8)).
This property implies the coercivity property.
=== Compactness (needed for some nonlinear problems) ===
For any sequence {\displaystyle (u_{m})_{m\in \mathbb {N} }} such that {\displaystyle u_{m}\in X_{D_{m},0}} for all {\displaystyle m\in \mathbb {N} } and {\displaystyle (\Vert u_{m}\Vert _{D_{m}})_{m\in \mathbb {N} }} is bounded, the sequence {\displaystyle (\Pi _{D_{m}}u_{m})_{m\in \mathbb {N} }} is relatively compact in {\displaystyle L^{2}(\Omega )} (this property implies the coercivity property).
=== Piecewise constant reconstruction (needed for some nonlinear problems) ===
Let {\displaystyle D=(X_{D,0},\Pi _{D},\nabla _{D})} be a gradient discretisation as defined above. The operator {\displaystyle \Pi _{D}} is a piecewise constant reconstruction if there exists a basis {\displaystyle (e_{i})_{i\in B}} of {\displaystyle X_{D,0}} and a family of disjoint subsets {\displaystyle (\Omega _{i})_{i\in B}} of {\displaystyle \Omega } such that {\textstyle \Pi _{D}u=\sum _{i\in B}u_{i}\chi _{\Omega _{i}}} for all {\textstyle u=\sum _{i\in B}u_{i}e_{i}\in X_{D,0}}, where {\displaystyle \chi _{\Omega _{i}}} is the characteristic function of {\displaystyle \Omega _{i}}.
== Some non-linear problems with complete convergence proofs of the GDM ==
We review some problems for which the GDM can be proved to converge when the above core properties are satisfied.
=== Nonlinear stationary diffusion problems ===
{\displaystyle -\operatorname {div} (\Lambda ({\overline {u}})\nabla {\overline {u}})=f}
In this case, the GDM converges under the coercivity, GD-consistency, limit-conformity and compactness properties.
=== p-Laplace problem for p > 1 ===
{\displaystyle -\operatorname {div} \left(|\nabla {\overline {u}}|^{p-2}\nabla {\overline {u}}\right)=f}
In this case, the core properties must be written with {\displaystyle L^{2}(\Omega )} replaced by {\displaystyle L^{p}(\Omega )}, {\displaystyle H_{0}^{1}(\Omega )} by {\displaystyle W_{0}^{1,p}(\Omega )} and {\displaystyle H_{\operatorname {div} }(\Omega )} by {\displaystyle W_{\operatorname {div} }^{p'}(\Omega )}, with {\textstyle {\frac {1}{p}}+{\frac {1}{p'}}=1}, and the GDM converges only under the coercivity, GD-consistency and limit-conformity properties.
=== Linear and nonlinear heat equation ===
{\displaystyle \partial _{t}{\overline {u}}-\operatorname {div} (\Lambda ({\overline {u}})\nabla {\overline {u}})=f}
In this case, the GDM converges under the coercivity, GD-consistency (adapted to space-time problems), limit-conformity and compactness (for the nonlinear case) properties.
=== Degenerate parabolic problems ===
Assume that {\displaystyle \beta } and {\displaystyle \zeta } are nondecreasing Lipschitz continuous functions:
{\displaystyle \partial _{t}\beta ({\overline {u}})-\Delta \zeta ({\overline {u}})=f}
Note that, for this problem, the piecewise constant reconstruction property is needed, in addition to the coercivity, GD-consistency (adapted to space-time problems), limit-conformity and compactness properties.
== Review of some numerical methods which are GDM ==
All the methods below satisfy the first four core properties of GDM (coercivity, GD-consistency, limit-conformity, compactness), and in some cases the fifth one (piecewise constant reconstruction).
=== Galerkin methods and conforming finite element methods ===
Let {\displaystyle V_{h}\subset H_{0}^{1}(\Omega )} be spanned by the finite basis {\displaystyle (\psi _{i})_{i\in I}}. The Galerkin method in {\displaystyle V_{h}} is identical to the GDM where one defines
{\displaystyle X_{D,0}=\{u=(u_{i})_{i\in I}\}=\mathbb {R} ^{I},}
{\displaystyle \Pi _{D}u=\sum _{i\in I}u_{i}\psi _{i}}
{\displaystyle \nabla _{D}u=\sum _{i\in I}u_{i}\nabla \psi _{i}.}
In this case, {\displaystyle C_{D}} is the constant involved in the continuous Poincaré inequality, and, for all {\displaystyle \varphi \in H_{\operatorname {div} }(\Omega )}, {\displaystyle W_{D}(\varphi )=0} (defined by (8)). Then (4) and (5) are implied by Céa's lemma.
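As a concrete illustration, the following minimal Python sketch sets up the P1 Galerkin gradient scheme for the Poisson problem on our own toy configuration: Ω = (0, 1), a uniform mesh, homogeneous Dirichlet conditions, and a right-hand side chosen so that the exact solution is sin(πx).

import numpy as np

n = 50                                  # number of interior nodes (discrete unknowns)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = lambda t: np.pi**2 * np.sin(np.pi * t)   # exact solution: sin(pi t)

# Stiffness matrix A[i, j] = integral of psi_i' psi_j' over (0, 1) for hat functions
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
b = f(x) * h                            # lumped quadrature of the load integral
u = np.linalg.solve(A, b)               # coefficients of u in X_{D,0}

# The function reconstruction Pi_D u at the nodes equals the coefficients u
print("max nodal error:", np.max(np.abs(u - np.sin(np.pi * x))))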
The "mass-lumped"
P
1
{\displaystyle P^{1}}
finite element case enters the framework of the GDM, replacing
Π
D
u
{\displaystyle \Pi _{D}u}
by
Π
~
D
u
=
∑
i
∈
I
u
i
χ
Ω
i
{\textstyle {\widetilde {\Pi }}_{D}u=\sum _{i\in I}u_{i}\chi _{\Omega _{i}}}
, where
Ω
i
{\displaystyle \Omega _{i}}
is a dual cell centred on the vertex indexed by
i
∈
I
{\displaystyle i\in I}
. Using mass lumping allows to get the piecewise constant reconstruction property.
=== Nonconforming finite element ===
On a mesh {\displaystyle T} which is a conforming set of simplices of {\displaystyle \mathbb {R} ^{d}}, the nonconforming {\displaystyle P^{1}} finite elements are defined by the basis {\displaystyle (\psi _{i})_{i\in I}} of the functions which are affine in any {\displaystyle K\in T}, and whose value at the centre of gravity of one given face of the mesh is 1 and 0 at all the others (these finite elements are used in [Crouzeix et al] for the approximation of the Stokes and Navier-Stokes equations). Then the method enters the GDM framework with the same definition as in the case of the Galerkin method, except for the fact that {\displaystyle \nabla \psi _{i}} must be understood as the "broken gradient" of {\displaystyle \psi _{i}}, in the sense that it is the piecewise constant function equal in each simplex to the gradient of the affine function in the simplex.
=== Mixed finite element ===
The mixed finite element method consists in defining two discrete spaces, one for the approximation of {\displaystyle \nabla {\overline {u}}} and another one for {\displaystyle {\overline {u}}}. It suffices to use the discrete relations between these approximations to define a GDM. Using the low-degree Raviart–Thomas basis functions allows one to obtain the piecewise constant reconstruction property.
=== Discontinuous Galerkin method ===
The discontinuous Galerkin method consists in approximating problems by a piecewise polynomial function, without requirements on the jumps from one element to the other. It is plugged into the GDM framework by including in the discrete gradient a jump term, acting as the regularization of the gradient in the distribution sense.
=== Mimetic finite difference method and nodal mimetic finite difference method ===
This family of methods was introduced by [Brezzi et al] and completed in [Lipnikov et al]. It allows the approximation of elliptic problems using a large class of polyhedral meshes. The proof that it enters the GDM framework is done in [Droniou et al].
== See also ==
Finite element method
== References ==
== External links ==
The Gradient Discretisation Method by Jérôme Droniou, Robert Eymard, Thierry Gallouët, Cindy Guichard and Raphaèle Herbin
A triangular function (also known as a triangle function, hat function, or tent function) is a function whose graph takes the shape of a triangle. Often this is an isosceles triangle of height 1 and base 2 in which case it is referred to as the triangular function. Triangular functions are useful in signal processing and communication systems engineering as representations of idealized signals, and the triangular function specifically as an integral transform kernel function from which more realistic signals can be derived, for example in kernel density estimation. It also has applications in pulse-code modulation as a pulse shape for transmitting digital signals and as a matched filter for receiving the signals. It is also used to define the triangular window sometimes called the Bartlett window.
== Definitions ==
The most common definition is as a piecewise function:
{\displaystyle {\begin{aligned}\operatorname {tri} (x)=\Lambda (x)\ &{\overset {\underset {\text{def}}{}}{=}}\ \max {\big (}1-|x|,0{\big )}\\&={\begin{cases}1-|x|,&|x|<1;\\0&{\text{otherwise}}.\\\end{cases}}\end{aligned}}}
Equivalently, it may be defined as the convolution of two identical unit rectangular functions:
{\displaystyle {\begin{aligned}\operatorname {tri} (x)&=\operatorname {rect} (x)*\operatorname {rect} (x)\\&=\int _{-\infty }^{\infty }\operatorname {rect} (x-\tau )\cdot \operatorname {rect} (\tau )\,d\tau .\\\end{aligned}}}
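As a quick numerical sanity check, the following minimal Python sketch implements tri and verifies the convolution identity on a sampled grid; the grid spacing is an arbitrary choice, and the residual is of the order of the discretisation error.

import numpy as np

def tri(x):
    return np.maximum(1.0 - np.abs(x), 0.0)

dx = 1e-3
t = np.arange(-2.0, 2.0, dx)
rect = (np.abs(t) < 0.5).astype(float)

# Discrete counterpart of the convolution integral: rect * rect, scaled by dx
conv = np.convolve(rect, rect, mode="same") * dx
print(np.max(np.abs(conv - tri(t))))   # small (about dx): discretisation error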
The triangular function can also be represented as the product of the rectangular and absolute value functions:
{\displaystyle \operatorname {tri} (x)=\operatorname {rect} (x/2){\big (}1-|x|{\big )}.}
Note that some authors instead define the triangle function to have a base of width 1 instead of width 2:
{\displaystyle {\begin{aligned}\operatorname {tri} (2x)=\Lambda (2x)\ &{\overset {\underset {\text{def}}{}}{=}}\ \max {\big (}1-2|x|,0{\big )}\\&={\begin{cases}1-2|x|,&|x|<{\tfrac {1}{2}};\\0&{\text{otherwise}}.\\\end{cases}}\end{aligned}}}
In its most general form a triangular function is any linear B-spline:
{\displaystyle \operatorname {tri} _{j}(x)={\begin{cases}(x-x_{j-1})/(x_{j}-x_{j-1}),&x_{j-1}\leq x<x_{j};\\(x_{j+1}-x)/(x_{j+1}-x_{j}),&x_{j}\leq x<x_{j+1};\\0&{\text{otherwise}}.\end{cases}}}
The definition given at the top is the special case
{\displaystyle \Lambda (x)=\operatorname {tri} _{j}(x),}
where {\displaystyle x_{j-1}=-1}, {\displaystyle x_{j}=0}, and {\displaystyle x_{j+1}=1}.
A linear B-spline is the same as a continuous piecewise linear function {\displaystyle f(x)}, and this general triangle function is useful to formally define {\displaystyle f(x)} as
{\displaystyle f(x)=\sum _{j}y_{j}\cdot \operatorname {tri} _{j}(x),}
where {\displaystyle x_{j}<x_{j+1}} for all integer {\displaystyle j}.
The piecewise linear function passes through every point {\displaystyle (x_{j},y_{j})}, that is, {\displaystyle f(x_{j})=y_{j}}.
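For illustration, a short sketch (Python with NumPy; the knots and values below are made up for the example) evaluates such a hat-function expansion and compares it with NumPy's built-in piecewise linear interpolation:

import numpy as np

def tri_j(x, knots, j):
    # general triangular (hat) function with peak 1 at knots[j]
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    rising = (knots[j - 1] <= x) & (x < knots[j])
    falling = (knots[j] <= x) & (x < knots[j + 1])
    y[rising] = (x[rising] - knots[j - 1]) / (knots[j] - knots[j - 1])
    y[falling] = (knots[j + 1] - x[falling]) / (knots[j + 1] - knots[j])
    return y

knots = np.array([-1.0, 0.0, 0.5, 2.0, 3.0])
values = np.array([0.0, 1.0, -1.0, 2.0, 0.0])  # zero at the two end knots

x = np.linspace(-1.0, 2.99, 400)
# f(x) = sum_j values[j] * tri_j(x); only the interior hats are needed here
# because the prescribed values at the two end knots are zero.
f = sum(values[j] * tri_j(x, knots, j) for j in range(1, len(knots) - 1))
assert np.allclose(f, np.interp(x, knots, values))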
== Scaling ==
For any parameter {\displaystyle a\neq 0}:
{\displaystyle {\begin{aligned}\operatorname {tri} \left({\tfrac {t}{a}}\right)&=\left({\tfrac {1}{\sqrt {a}}}\right)\operatorname {rect} \left({\tfrac {t}{a}}\right)*\left({\tfrac {1}{\sqrt {a}}}\right)\operatorname {rect} \left({\tfrac {t}{a}}\right)=\int _{-\infty }^{\infty }{\tfrac {1}{|a|}}\operatorname {rect} \left({\tfrac {\tau }{a}}\right)\cdot \operatorname {rect} \left({\tfrac {t-\tau }{a}}\right)\,d\tau \\&={\begin{cases}1-|t/a|,&|t|<|a|;\\0&{\text{otherwise}}.\end{cases}}\end{aligned}}}
== Fourier transform ==
The transform is easily determined using the convolution property of Fourier transforms and the Fourier transform of the rectangular function:
{\displaystyle {\begin{aligned}{\mathcal {F}}\{\operatorname {tri} (t)\}&={\mathcal {F}}\{\operatorname {rect} (t)*\operatorname {rect} (t)\}\\&={\mathcal {F}}\{\operatorname {rect} (t)\}\cdot {\mathcal {F}}\{\operatorname {rect} (t)\}\\&={\mathcal {F}}\{\operatorname {rect} (t)\}^{2}\\&=\mathrm {sinc} ^{2}(f),\end{aligned}}}
where {\displaystyle \operatorname {sinc} (x)=\sin(\pi x)/(\pi x)} is the normalized sinc function.
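This transform pair is easy to verify numerically. The sketch below (Python with NumPy; an illustration, not part of the article) integrates the transform directly for a few frequencies, using the ordinary-frequency convention exp(-2*pi*i*f*t) that matches the normalized sinc:

import numpy as np

t = np.linspace(-1.0, 1.0, 20001)        # tri is supported on [-1, 1]
tri = np.maximum(1.0 - np.abs(t), 0.0)

for f in (0.0, 0.5, 1.0, 2.5):
    # F{tri}(f) = integral of tri(t) * exp(-2*pi*i*f*t) dt
    F = np.trapz(tri * np.exp(-2j * np.pi * f * t), t)
    assert abs(F - np.sinc(f) ** 2) < 1e-6   # np.sinc is the normalized sinc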
For the general form, we have:
{\displaystyle {\begin{aligned}{\mathcal {F}}\{\operatorname {tri} \left({\tfrac {t}{a}}\right)\}&={\mathcal {F}}\{{\tfrac {1}{\sqrt {a}}}\operatorname {rect} \left({\tfrac {t}{a}}\right)*{\tfrac {1}{\sqrt {a}}}\operatorname {rect} \left({\tfrac {t}{a}}\right)\}\\&={\tfrac {1}{a}}\ {\mathcal {F}}\{\operatorname {rect} \left({\tfrac {t}{a}}\right)\}\cdot {\mathcal {F}}\{\operatorname {rect} \left({\tfrac {t}{a}}\right)\}\\&={\tfrac {1}{a}}\ {\mathcal {F}}\{\operatorname {rect} \left({\tfrac {t}{a}}\right)\}^{2}\\&={\tfrac {1}{a}}\ {a}^{2}\ \mathrm {sinc} ^{2}(a\cdot f)={a}\ \mathrm {sinc} ^{2}(a\cdot f).\end{aligned}}}
== See also ==
Källén function, also known as triangle function
Tent map
Triangular distribution
Triangle wave, a piecewise linear periodic function
Trigonometric functions
== References == | Wikipedia/Tent_function |
Céa's lemma is a lemma in mathematics. Introduced by Jean Céa in his Ph.D. dissertation, it is an important tool for proving error estimates for the finite element method applied to elliptic partial differential equations.
== Lemma statement ==
Let {\displaystyle V} be a real Hilbert space with the norm {\displaystyle \|\cdot \|.}
Let {\displaystyle a:V\times V\to \mathbb {R} } be a bilinear form with the properties
{\displaystyle |a(v,w)|\leq \gamma \|v\|\,\|w\|} for some constant {\displaystyle \gamma >0} and all {\displaystyle v,w} in {\displaystyle V} (continuity)
{\displaystyle a(v,v)\geq \alpha \|v\|^{2}} for some constant {\displaystyle \alpha >0} and all {\displaystyle v} in {\displaystyle V} (coercivity or {\displaystyle V}-ellipticity).
Let {\displaystyle L:V\to \mathbb {R} } be a bounded linear operator. Consider the problem of finding an element {\displaystyle u} in {\displaystyle V} such that
{\displaystyle a(u,v)=L(v)} for all {\displaystyle v} in {\displaystyle V.}
Consider the same problem on a finite-dimensional subspace {\displaystyle V_{h}} of {\displaystyle V,} so {\displaystyle u_{h}} in {\displaystyle V_{h}} satisfies
{\displaystyle a(u_{h},v)=L(v)} for all {\displaystyle v} in {\displaystyle V_{h}.}
By the Lax–Milgram theorem, each of these problems has exactly one solution. Céa's lemma states that
{\displaystyle \|u-u_{h}\|\leq {\frac {\gamma }{\alpha }}\|u-v\|}
for all {\displaystyle v} in {\displaystyle V_{h}.}
That is to say, the subspace solution {\displaystyle u_{h}} is "the best" approximation of {\displaystyle u} in {\displaystyle V_{h},} up to the constant {\displaystyle \gamma /\alpha .}
The proof is straightforward:
{\displaystyle \alpha \|u-u_{h}\|^{2}\leq a(u-u_{h},u-u_{h})=a(u-u_{h},u-v)+a(u-u_{h},v-u_{h})=a(u-u_{h},u-v)\leq \gamma \|u-u_{h}\|\|u-v\|}
for all {\displaystyle v} in {\displaystyle V_{h}.}
We used the {\displaystyle a}-orthogonality of {\displaystyle u-u_{h}} and {\displaystyle v-u_{h}\in V_{h}}:
{\displaystyle a(u-u_{h},v)=0,\ \forall \ v\in V_{h},}
which follows directly from {\displaystyle V_{h}\subset V}:
{\displaystyle a(u,v)=L(v)=a(u_{h},v)} for all {\displaystyle v} in {\displaystyle V_{h}}.
Note: Céa's lemma holds on complex Hilbert spaces also; one then uses a sesquilinear form {\displaystyle a(\cdot ,\cdot )} instead of a bilinear one. The coercivity assumption then becomes
{\displaystyle |a(v,v)|\geq \alpha \|v\|^{2}}
for all {\displaystyle v} in {\displaystyle V} (notice the absolute value sign around {\displaystyle a(v,v)}).
== Error estimate in the energy norm ==
In many applications, the bilinear form {\displaystyle a:V\times V\to \mathbb {R} } is symmetric, so
{\displaystyle a(v,w)=a(w,v)}
for all {\displaystyle v,w} in {\displaystyle V.} This, together with the above properties of this form, implies that {\displaystyle a(\cdot ,\cdot )} is an inner product on {\displaystyle V.}
The resulting norm
{\displaystyle \|v\|_{a}={\sqrt {a(v,v)}}}
is called the energy norm, since it corresponds to a physical energy in many problems. This norm is equivalent to the original norm {\displaystyle \|\cdot \|.}
Using the {\displaystyle a}-orthogonality of {\displaystyle u-u_{h}} and {\displaystyle V_{h}} and the Cauchy–Schwarz inequality, we obtain
{\displaystyle \|u-u_{h}\|_{a}^{2}=a(u-u_{h},u-u_{h})=a(u-u_{h},u-v)\leq \|u-u_{h}\|_{a}\cdot \|u-v\|_{a}}
for all {\displaystyle v} in {\displaystyle V_{h}}.
Hence, in the energy norm, the inequality in Céa's lemma becomes
{\displaystyle \|u-u_{h}\|_{a}\leq \|u-v\|_{a}}
for all {\displaystyle v} in {\displaystyle V_{h}} (notice that the constant {\displaystyle \gamma /\alpha } on the right-hand side is no longer present).
This states that the subspace solution {\displaystyle u_{h}} is the best approximation to the full-space solution {\displaystyle u} with respect to the energy norm. Geometrically, this means that {\displaystyle u_{h}} is the projection of the solution {\displaystyle u} onto the subspace {\displaystyle V_{h}} with respect to the inner product {\displaystyle a(\cdot ,\cdot )}.
Using this result, one can also derive a sharper estimate in the norm {\displaystyle \|\cdot \|}. Since
{\displaystyle \alpha \|u-u_{h}\|^{2}\leq a(u-u_{h},u-u_{h})=\|u-u_{h}\|_{a}^{2}\leq \|u-v\|_{a}^{2}\leq \gamma \|u-v\|^{2}}
for all {\displaystyle v} in {\displaystyle V_{h}}, it follows that
{\displaystyle \|u-u_{h}\|\leq {\sqrt {\frac {\gamma }{\alpha }}}\|u-v\|}
for all {\displaystyle v} in {\displaystyle V_{h}}.
== An application of Céa's lemma ==
We will apply Céa's lemma to estimate the error of calculating the solution to an elliptic differential equation by the finite element method.
Consider the problem of finding a function {\displaystyle u:[a,b]\to \mathbb {R} } satisfying the conditions
{\displaystyle {\begin{cases}-u''=f{\mbox{ in }}[a,b]\\u(a)=u(b)=0\end{cases}}}
where {\displaystyle f:[a,b]\to \mathbb {R} } is a given continuous function.
Physically, the solution {\displaystyle u} to this two-point boundary value problem represents the shape taken by a string under the influence of a force such that at every point {\displaystyle x} between {\displaystyle a} and {\displaystyle b} the force density is {\displaystyle f(x)\mathbf {e} } (where {\displaystyle \mathbf {e} } is a unit vector pointing vertically, while the endpoints of the string are on a horizontal line). For example, that force may be gravity, when {\displaystyle f} is a constant function (since the gravitational force is the same at all points).
Let the Hilbert space {\displaystyle V} be the Sobolev space {\displaystyle H_{0}^{1}(a,b),} which is the space of all square-integrable functions {\displaystyle v} defined on {\displaystyle [a,b]} that have a weak derivative on {\displaystyle [a,b]} with {\displaystyle v'} also being square integrable, and {\displaystyle v} satisfies the conditions {\displaystyle v(a)=v(b)=0.}
The inner product on this space is
{\displaystyle (v,w)=\int _{a}^{b}\!\left(v(x)w(x)+v'(x)w'(x)\right)\,dx}
for all {\displaystyle v} and {\displaystyle w} in {\displaystyle V.}
After multiplying the original boundary value problem by {\displaystyle v} in this space and performing an integration by parts, one obtains the equivalent problem
{\displaystyle a(u,v)=L(v)}
for all {\displaystyle v} in {\displaystyle V},
with
{\displaystyle a(u,v)=\int _{a}^{b}\!u'(x)v'(x)\,dx}
and
{\displaystyle L(v)=\int _{a}^{b}\!f(x)v(x)\,dx.}
It can be shown (using the Cauchy–Schwarz and Poincaré inequalities for continuity and coercivity, respectively) that the bilinear form {\displaystyle a(\cdot ,\cdot )} and the operator {\displaystyle L} satisfy the assumptions of Céa's lemma.
In order to determine a finite-dimensional subspace {\displaystyle V_{h}} of {\displaystyle V,} consider a partition
{\displaystyle a=x_{0}<x_{1}<\cdots <x_{n-1}<x_{n}=b}
of the interval {\displaystyle [a,b],} and let {\displaystyle V_{h}} be the space of all continuous functions that are affine on each subinterval in the partition (such functions are called piecewise-linear). In addition, assume that any function in {\displaystyle V_{h}} takes the value 0 at the endpoints of {\displaystyle [a,b].} It follows that {\displaystyle V_{h}} is a vector subspace of {\displaystyle V} whose dimension is {\displaystyle n-1} (the number of points in the partition that are not endpoints).
Let {\displaystyle u_{h}} be the solution to the subspace problem
{\displaystyle a(u_{h},v)=L(v)}
for all {\displaystyle v} in {\displaystyle V_{h},} so one can think of {\displaystyle u_{h}} as a piecewise-linear approximation to the exact solution {\displaystyle u.}
By Céa's lemma, there exists a constant {\displaystyle C>0} dependent only on the bilinear form {\displaystyle a(\cdot ,\cdot ),} such that
{\displaystyle \|u-u_{h}\|\leq C\|u-v\|}
for all {\displaystyle v} in {\displaystyle V_{h}.}
To explicitly calculate the error between {\displaystyle u} and {\displaystyle u_{h},} consider the function {\displaystyle \pi u} in {\displaystyle V_{h}} that has the same values as {\displaystyle u} at the nodes of the partition (so {\displaystyle \pi u} is obtained by linear interpolation on each interval {\displaystyle [x_{i},x_{i+1}]} from the values of {\displaystyle u} at the interval's endpoints). It can be shown using Taylor's theorem that there exists a constant {\displaystyle K} that depends only on the endpoints {\displaystyle a} and {\displaystyle b,} such that
{\displaystyle |u'(x)-(\pi u)'(x)|\leq Kh\|u''\|_{L^{2}(a,b)}}
for all {\displaystyle x} in {\displaystyle [a,b],} where {\displaystyle h} is the largest length of the subintervals {\displaystyle [x_{i},x_{i+1}]} in the partition, and the norm on the right-hand side is the L2 norm.
This inequality then yields an estimate for the error {\displaystyle \|u-\pi u\|.} Then, by substituting {\displaystyle v=\pi u} in Céa's lemma it follows that
{\displaystyle \|u-u_{h}\|\leq Ch\|u''\|_{L^{2}(a,b)},}
where {\displaystyle C} is a different constant from the above (it depends only on the bilinear form, which implicitly depends on the interval {\displaystyle [a,b]}).
This result is of fundamental importance, as it states that the finite element method can be used to approximately calculate the solution of our problem, and that the error in the computed solution decreases proportionately to the partition size {\displaystyle h.}
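To make this concrete, the following sketch (Python with NumPy; a hypothetical illustration, not part of the article) assembles the standard piecewise-linear finite element system for this boundary value problem with f(x) = pi^2 sin(pi x) on [0, 1], whose exact solution is u(x) = sin(pi x), and prints the nodal errors as the mesh is refined:

import numpy as np

def fem_solve(n, f, a=0.0, b=1.0):
    # Piecewise-linear FEM for -u'' = f with u(a) = u(b) = 0 on a uniform mesh.
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    # Stiffness matrix of the hat-function basis at the n-1 interior nodes:
    # a(phi_i, phi_j) is 2/h on the diagonal and -1/h for neighbouring nodes.
    K = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    # Load vector L(phi_i) = integral of f * phi_i, approximated by h * f(x_i).
    F = h * f(x[1:-1])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, F)
    return x, u

f = lambda x: np.pi ** 2 * np.sin(np.pi * x)   # exact solution: sin(pi*x)
for n in (10, 20, 40):
    x, u = fem_solve(n, f)
    print(n, np.max(np.abs(u - np.sin(np.pi * x))))

The printed errors shrink as the mesh is refined; Céa's lemma guarantees at least first-order convergence in the energy (H1) norm, and the nodal values in this one-dimensional example happen to converge even faster.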
Céa's lemma can be applied along the same lines to derive error estimates for finite element problems in higher dimensions (here the domain of {\displaystyle u} was one-dimensional), and while using higher-order polynomials for the subspace {\displaystyle V_{h}.}
== References ==
Céa, Jean (1964). Approximation variationnelle des problèmes aux limites (PDF) (PhD thesis). Annales de l'Institut Fourier 14. Vol. 2. pp. 345–444. Retrieved 2010-11-27. (Original work from J. Céa)
Johnson, Claes (1987). Numerical solution of partial differential equations by the finite element method. Cambridge University Press. ISBN 0-521-34514-6.
Monk, Peter (2003). Finite element methods for Maxwell's equations. Oxford University Press. ISBN 0-19-850888-3.
Roos, H.-G.; Stynes, M.; Tobiska, L. (1996). Numerical methods for singularly perturbed differential equations: convection-diffusion and flow problems. Berlin; New York: Springer-Verlag. ISBN 3-540-60718-8.
Eriksson, K.; Estep, D.; Hansbo, P.; Johnson, C. (1996). Computational differential equations. Cambridge; New York: Cambridge University Press. ISBN 0-521-56738-6.
Zeidler, Eberhard (1995). Applied functional analysis: applications to mathematical physics. New York: Springer-Verlag. ISBN 0-387-94442-7.
Brenner, Susanne C.; L. Ridgeway Scott (2002). The mathematical theory of finite element methods (2nd ed.). Springer. ISBN 0-387-95451-1. OCLC 48892839.
Ciarlet, Philippe G. (2002). The finite element method for elliptic problems ((SIAM Classics reprint) ed.). ISBN 0-89871-514-8. OCLC 48892573. | Wikipedia/Céa's_lemma |
In structural engineering, the direct stiffness method, also known as the matrix stiffness method, is a structural analysis technique particularly suited for computer-automated analysis of complex structures including the statically indeterminate type. It is a matrix method that makes use of the members' stiffness relations for computing member forces and displacements in structures. The direct stiffness method is the most common implementation of the finite element method (FEM). In applying the method, the system must be modeled as a set of simpler, idealized elements interconnected at the nodes. The material stiffness properties of these elements are then, through linear algebra, compiled into a single matrix equation which governs the behaviour of the entire idealized structure. The structure’s unknown displacements and forces can then be determined by solving this equation. The direct stiffness method forms the basis for most commercial and free source finite element software.
The direct stiffness method originated in the field of aerospace. Researchers looked at various approaches for analysis of complex airplane frames. These included elasticity theory, energy principles in structural mechanics, flexibility method and matrix stiffness method. It was through analysis of these methods that the direct stiffness method emerged as an efficient method ideally suited for computer implementation.
== History ==
Between 1934 and 1938 A. R. Collar and W. J. Duncan published the first papers with the representation and terminology for matrix systems that are used today. Aeroelastic research continued through World War II but publication restrictions from 1938 to 1947 make this work difficult to trace. The second major breakthrough in matrix structural analysis occurred through 1954 and 1955 when professor John H. Argyris systemized the concept of assembling elemental components of a structure into a system of equations. Finally, on November 6, 1959, M. J. Turner, head of Boeing’s Structural Dynamics Unit, published a paper outlining the direct stiffness method as an efficient model for computer implementation (Felippa 2001).
== Member stiffness relations ==
A typical member stiffness relation has the following general form (its form follows from the definitions below):
{\displaystyle \mathbf {Q} ^{m}=\mathbf {k} ^{m}\mathbf {q} ^{m}+\mathbf {Q} ^{om}\qquad \qquad \qquad \mathrm {(1)} }
where
m = member number.
{\displaystyle \mathbf {Q} ^{m}} = vector of member's characteristic forces, which are unknown internal forces.
{\displaystyle \mathbf {k} ^{m}} = member stiffness matrix which characterizes the member's resistance against deformations.
{\displaystyle \mathbf {q} ^{m}} = vector of member's characteristic displacements or deformations.
{\displaystyle \mathbf {Q} ^{om}} = vector of member's characteristic forces caused by external effects (such as known forces and temperature changes) applied to the member while {\displaystyle \mathbf {q} ^{m}=0}.
If {\displaystyle \mathbf {q} ^{m}} are member deformations rather than absolute displacements, then {\displaystyle \mathbf {Q} ^{m}} are independent member forces, and in such case (1) can be inverted to yield the so-called member flexibility matrix, which is used in the flexibility method.
== System stiffness relation ==
For a system with many members interconnected at points called nodes, the members' stiffness relations such as Eq.(1) can be integrated by making use of the following observations:
The member deformations {\displaystyle \mathbf {q} ^{m}} can be expressed in terms of system nodal displacements r in order to ensure compatibility between members. This implies that r will be the primary unknowns.
The member forces {\displaystyle \mathbf {Q} ^{m}} help to keep the nodes in equilibrium under the nodal forces R. This implies that the right-hand-side of (1) will be integrated into the right-hand-side of the following nodal equilibrium equations for the entire system:
{\displaystyle \mathbf {R} =\mathbf {K} \mathbf {r} +\mathbf {R} ^{o}\qquad \qquad \qquad \mathrm {(2)} }
where
{\displaystyle \mathbf {R} } = vector of nodal forces, representing external forces applied to the system's nodes.
{\displaystyle \mathbf {K} } = system stiffness matrix, which is established by assembling the members' stiffness matrices {\displaystyle \mathbf {k} ^{m}}.
{\displaystyle \mathbf {r} } = vector of system's nodal displacements that can define all possible deformed configurations of the system subject to arbitrary nodal forces R.
{\displaystyle \mathbf {R} ^{o}} = vector of equivalent nodal forces, representing all external effects other than the nodal forces which are already included in the preceding nodal force vector R. This vector is established by assembling the members' {\displaystyle \mathbf {Q} ^{om}}.
== Solution ==
The system stiffness matrix K is square since the vectors R and r have the same size. In addition, it is symmetric because {\displaystyle \mathbf {k} ^{m}} is symmetric. Once the supports' constraints are accounted for in (2), the nodal displacements are found by solving the system of linear equations (2), symbolically:
{\displaystyle \mathbf {r} =\mathbf {K} ^{-1}(\mathbf {R} -\mathbf {R} ^{o})\qquad \qquad \qquad \mathrm {(3)} }
Subsequently, the members' characteristic forces may be found from Eq.(1) where {\displaystyle \mathbf {q} ^{m}} can be found from r by compatibility consideration.
== The direct stiffness method ==
It is common to have Eq.(1) in a form where {\displaystyle \mathbf {q} ^{m}} and {\displaystyle \mathbf {Q} ^{om}} are, respectively, the member-end displacements and forces matching in direction with r and R. In such case, {\displaystyle \mathbf {K} } and {\displaystyle \mathbf {R} ^{o}} can be obtained by direct summation of the members' matrices {\displaystyle \mathbf {k} ^{m}} and {\displaystyle \mathbf {Q} ^{om}}. The method is then known as the direct stiffness method.
The advantages and disadvantages of the matrix stiffness method are compared and discussed in the flexibility method article.
== Example ==
=== Breakdown ===
The first step when using the direct stiffness method is to identify the individual elements which make up the structure.
Once the elements are identified, the structure is disconnected at the nodes, the points which connect the different elements together.
Each element is then analyzed individually to develop member stiffness equations. The forces and displacements are related through the element stiffness matrix which depends on the geometry and properties of the element.
A truss element can only transmit forces in compression or tension. This means that in two dimensions, each node has two degrees of freedom (DOF): horizontal and vertical displacement. The resulting equation contains a four by four stiffness matrix.
{\displaystyle {\begin{bmatrix}f_{x1}\\f_{y1}\\f_{x2}\\f_{y2}\\\end{bmatrix}}={\begin{bmatrix}k_{11}&k_{12}&k_{13}&k_{14}\\k_{21}&k_{22}&k_{23}&k_{24}\\k_{31}&k_{32}&k_{33}&k_{34}\\k_{41}&k_{42}&k_{43}&k_{44}\\\end{bmatrix}}{\begin{bmatrix}u_{x1}\\u_{y1}\\u_{x2}\\u_{y2}\\\end{bmatrix}}}
A frame element is able to withstand bending moments in addition to compression and tension. This results in three degrees of freedom: horizontal displacement, vertical displacement and in-plane rotation. The stiffness matrix in this case is six by six.
{\displaystyle {\begin{bmatrix}f_{x1}\\f_{y1}\\m_{z1}\\f_{x2}\\f_{y2}\\m_{z2}\\\end{bmatrix}}={\begin{bmatrix}k_{11}&k_{12}&k_{13}&k_{14}&k_{15}&k_{16}\\k_{21}&k_{22}&k_{23}&k_{24}&k_{25}&k_{26}\\k_{31}&k_{32}&k_{33}&k_{34}&k_{35}&k_{36}\\k_{41}&k_{42}&k_{43}&k_{44}&k_{45}&k_{46}\\k_{51}&k_{52}&k_{53}&k_{54}&k_{55}&k_{56}\\k_{61}&k_{62}&k_{63}&k_{64}&k_{65}&k_{66}\\\end{bmatrix}}{\begin{bmatrix}u_{x1}\\u_{y1}\\\theta _{z1}\\u_{x2}\\u_{y2}\\\theta _{z2}\\\end{bmatrix}}}
Other elements such as plates and shells can also be incorporated into the direct stiffness method and similar equations must be developed.
=== Assembly ===
Once the individual element stiffness relations have been developed they must be assembled into the original structure. The first step in this process is to convert the stiffness relations for the individual elements into a global system for the entire structure. In the case of a truss element, the global form of the stiffness method depends on the angle of the element with respect to the global coordinate system (This system is usually the traditional Cartesian coordinate system).
{\displaystyle {\begin{bmatrix}f_{x1}\\f_{y1}\\f_{x2}\\f_{y2}\\\end{bmatrix}}={\frac {EA}{L}}{\begin{bmatrix}c^{2}&sc&-c^{2}&-sc\\sc&s^{2}&-sc&-s^{2}\\-c^{2}&-sc&c^{2}&sc\\-sc&-s^{2}&sc&s^{2}\\\end{bmatrix}}{\begin{bmatrix}u_{x1}\\u_{y1}\\u_{x2}\\u_{y2}\\\end{bmatrix}}{\begin{array}{r }s=\sin \beta \\c=\cos \beta \\\end{array}}}
(for a truss element at angle β)
Equivalently,
{\displaystyle {\begin{bmatrix}f_{x1}\\f_{y1}\\\hline f_{x2}\\f_{y2}\end{bmatrix}}={\frac {EA}{L}}\left[{\begin{array}{c c|c c}c_{x}c_{x}&c_{x}c_{y}&-c_{x}c_{x}&-c_{x}c_{y}\\c_{y}c_{x}&c_{y}c_{y}&-c_{y}c_{x}&-c_{y}c_{y}\\\hline -c_{x}c_{x}&-c_{x}c_{y}&c_{x}c_{x}&c_{x}c_{y}\\-c_{y}c_{x}&-c_{y}c_{y}&c_{y}c_{x}&c_{y}c_{y}\\\end{array}}\right]{\begin{bmatrix}u_{x1}\\u_{y1}\\\hline u_{x2}\\u_{y2}\end{bmatrix}}}
where {\displaystyle c_{x}} and {\displaystyle c_{y}} are the direction cosines of the truss element (i.e., they are components of a unit vector aligned with the member). This form reveals how to generalize the element stiffness to 3-D space trusses by simply extending the pattern that is evident in this formulation.
After developing the element stiffness matrix in the global coordinate system, they must be merged into a single “master” or “global” stiffness matrix. When merging these matrices together there are two rules that must be followed: compatibility of displacements and force equilibrium at each node. These rules are upheld by relating the element nodal displacements to the global nodal displacements.
The global displacement and force vectors each contain one entry for each degree of freedom in the structure. The element stiffness matrices are merged by augmenting or expanding each matrix in conformation to the global displacement and load vectors.
{\displaystyle k^{(1)}={\frac {EA}{L}}{\begin{bmatrix}1&0&-1&0\\0&0&0&0\\-1&0&1&0\\0&0&0&0\\\end{bmatrix}}\rightarrow K^{(1)}={\frac {EA}{L}}{\begin{bmatrix}1&0&-1&0&0&0\\0&0&0&0&0&0\\-1&0&1&0&0&0\\0&0&0&0&0&0\\0&0&0&0&0&0\\0&0&0&0&0&0\\\end{bmatrix}}}
(for element (1) of the above structure)
Finally, the global stiffness matrix is constructed by adding the individual expanded element matrices together.
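As a minimal illustration of this assembly (a sketch in Python with NumPy, with assumed geometry and member properties rather than data from any particular example), the scatter-and-add of element matrices into the global stiffness matrix, followed by the solution step of Eq.(3), can be written as:

import numpy as np

def truss_k(E, A, xi, yi, xj, yj):
    # 4x4 truss element stiffness in global coordinates via direction cosines
    L = np.hypot(xj - xi, yj - yi)
    cx, cy = (xj - xi) / L, (yj - yi) / L
    block = np.array([[cx * cx, cx * cy],
                      [cy * cx, cy * cy]])
    return E * A / L * np.block([[block, -block], [-block, block]])

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # assumed coordinates
elements = [(0, 2), (1, 2)]     # two members meeting at node 2
E, A = 200e9, 1e-4              # assumed modulus (Pa) and area (m^2)

K = np.zeros((6, 6))            # two DOFs (u_x, u_y) per node
for i, j in elements:
    ke = truss_k(E, A, *nodes[i], *nodes[j])
    dofs = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
    K[np.ix_(dofs, dofs)] += ke  # direct addition of k_ij^e into K_kl

R = np.zeros(6)
R[5] = -1000.0                  # 1 kN downward load at node 2
free = [4, 5]                   # nodes 0 and 1 are pinned supports
r = np.linalg.solve(K[np.ix_(free, free)], R[free])
print(r)                        # displacements u_x2, u_y2 of the loaded node

Supports are handled here by simply deleting the rows and columns of the restrained degrees of freedom, which is one common way of accounting for the constraints before solving.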
=== Solution ===
Once the global stiffness matrix, displacement vector, and force vector have been constructed, the system can be expressed as a single matrix equation.
For each degree of freedom in the structure, either the displacement or the force is known.
After inserting the known value for each degree of freedom, the master stiffness equation is complete and ready to be evaluated. There are several different methods available for evaluating a matrix equation including but not limited to Cholesky decomposition and the brute force evaluation of systems of equations. If a structure isn’t properly restrained, the application of a force will cause it to move rigidly and additional support conditions must be added.
The method described in this section is meant as an overview of the direct stiffness method. Additional sources should be consulted for more details on the process as well as the assumptions about material properties inherent in the process.
== Applications ==
The direct stiffness method was developed specifically for effective and easy implementation in computer software to evaluate complicated structures that contain a large number of elements. Today, nearly every finite element solver available is based on the direct stiffness method. While each program utilizes the same process, many have been streamlined to reduce computation time and reduce the required memory. In order to achieve this, shortcuts have been developed.
One of the largest areas to utilize the direct stiffness method is the field of structural analysis where this method has been incorporated into modeling software. The software allows users to model a structure and, after the user defines the material properties of the elements, the program automatically generates element and global stiffness relationships. When various loading conditions are applied the software evaluates the structure and generates the deflections for the user.
== See also ==
Finite element method
Finite element method in structural mechanics
Structural analysis
Flexibility method
List of finite element software packages
== External links ==
Application of direct stiffness method to a 1-D Spring System
Matrix Structural Analysis
Animations of Stiffness Analysis Simulations
== References ==
Felippa, Carlos A. (2001), "A historical outline of matrix structural analysis: a play in three acts" (PDF), Computers & Structures, 79 (14): 1313–1324, doi:10.1016/S0045-7949(01)00025-6, ISSN 0045-7949, archived from the original (PDF) on 2007-06-29, retrieved 2005-10-05
Felippa, Carlos A. Introduction to Finite Element Method. Fall 2001. University of Colorado. 18 Sept. 2005
Robinson, John. Structural Matrix Analysis for the Engineer. New York: John Wiley & Sons, 1966
Rubinstein, Moshe F. Matrix Computer Analysis of Structures. New Jersey: Prentice-Hall, 1966
McGuire, W., Gallagher, R. H., and Ziemian, R. D. Matrix Structural Analysis, 2nd Ed. New York: John Wiley & Sons, 2000. | Wikipedia/Direct_stiffness_method |
A discrete element method (DEM), also called a distinct element method, is any of a family of numerical methods for computing the motion and effect of a large number of small particles. Though DEM is very closely related to molecular dynamics, the method is generally distinguished by its inclusion of rotational degrees-of-freedom as well as stateful contact, particle deformation and often complicated geometries (including polyhedra). With advances in computing power and numerical algorithms for nearest neighbor sorting, it has become possible to numerically simulate millions of particles on a single processor. Today DEM is becoming widely accepted as an effective method of addressing engineering problems in granular and discontinuous materials, especially in granular flows, powder mechanics, ice and rock mechanics. DEM has been extended into the Extended Discrete Element Method taking heat transfer, chemical reaction and coupling to CFD and FEM into account.
Discrete element methods are relatively computationally intensive, which limits either the length of a simulation or the number of particles. Several DEM codes, as do molecular dynamics codes, take advantage of parallel processing capabilities (shared or distributed systems) to scale up the number of particles or length of the simulation. An alternative to treating all particles separately is to average the physics across many particles and thereby treat the material as a continuum. In the case of solid-like granular behavior as in soil mechanics, the continuum approach usually treats the material as elastic or elasto-plastic and models it with the finite element method or a mesh free method. In the case of liquid-like or gas-like granular flow, the continuum approach may treat the material as a fluid and use computational fluid dynamics. Drawbacks to homogenization of the granular scale physics, however, are well-documented and should be considered carefully before attempting to use a continuum approach.
== The DEM family ==
The various branches of the DEM family are the distinct element method proposed by Peter A. Cundall and Otto D. L. Strack in 1979, the generalized discrete element method, the discontinuous deformation analysis (DDA) (Shi 1992) and the finite-discrete element method concurrently developed by several groups (e.g., Munjiza and Owen). The general method was originally developed by Cundall in 1971 for problems in rock mechanics.
Williams showed that DEM could be viewed as a generalized finite element method, allowing deformation and fracturing of particles. Its application to geomechanics problems is described in the book Numerical Methods in Rock Mechanics. The 1st, 2nd and 3rd International Conferences on Discrete Element Methods have been a common point for researchers to publish advances in the method and its applications. Journal articles reviewing the state of the art have been published by Williams and O'Connnor, Bicanic, and Bobet et al. (see below). A comprehensive treatment of the combined Finite Element-Discrete Element Method is contained in the book The Combined Finite-Discrete Element Method.
== Applications ==
The fundamental assumption of the method is that the material consists of separate, discrete particles. These particles may have different shapes and properties that influence inter-particle contact. Some examples are:
liquids and solutions, for instance of sugar or proteins;
bulk materials in storage silos, like cereal;
granular matter, like sand;
powders, like toner.
Blocky or jointed rock masses
Typical industries using DEM are:
Agriculture and food handling
Chemical
Detergents
Oil and gas
Mining
Mineral processing
Pharmaceutical industry
Powder metallurgy
== Outline of the method ==
A DEM-simulation is started by first generating a model, which results in spatially orienting all particles and assigning an initial velocity. The forces which act on each particle are computed from the initial data and the relevant physical laws and contact models. Generally, a simulation consists of three parts: the initialization, explicit time-stepping, and post-processing. The time-stepping usually requires a nearest neighbor sorting step to reduce the number of possible contact pairs and decrease the computational requirements; this is often only performed periodically.
The following forces may have to be considered in macroscopic simulations:
friction, when two particles touch each other;
contact plasticity, or recoil, when two particles collide;
gravity, the force of attraction between particles due to their mass, which is only relevant in astronomical simulations.
attractive potentials, such as cohesion, adhesion, liquid bridging, electrostatic attraction. Note that, because of the overhead from determining nearest neighbor pairs, exact resolution of long-range, compared with particle size, forces can increase computational cost or require specialized algorithms to resolve these interactions.
On a molecular level, we may consider:
the Coulomb force, the electrostatic attraction or repulsion of particles carrying electric charge;
Pauli repulsion, when two atoms approach each other closely;
van der Waals force.
All these forces are added up to find the total force acting on each particle. An integration method is employed to compute the change in the position and the velocity of each particle during a certain time step from Newton's laws of motion. Then, the new positions are used to compute the forces during the next step, and this loop is repeated until the simulation ends.
Typical integration methods used in a discrete element method are:
the Verlet algorithm,
velocity Verlet,
symplectic integrators,
the leapfrog method.
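A minimal sketch of this simulation loop (Python with NumPy; a toy model with a linear repulsive contact spring, assumed parameters, and a brute-force O(n^2) pair search in place of nearest-neighbor sorting) might look as follows:

import numpy as np

n, radius, mass = 12, 0.05, 1e-3        # assumed particle properties
k_contact, dt = 1e4, 1e-5               # assumed contact stiffness, time step
g = np.array([0.0, -9.81])

rng = np.random.default_rng(0)
pos = rng.uniform(0.2, 0.8, (n, 2))     # initial positions in a unit box
vel = np.zeros((n, 2))

def forces(pos):
    f = np.tile(mass * g, (n, 1))       # gravity on every particle
    for i in range(n):                  # brute-force pair search; real codes
        for j in range(i + 1, n):       # use nearest-neighbor sorting here
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = 2 * radius - dist
            if overlap > 0.0:           # linear repulsive contact spring
                fn = k_contact * overlap * d / dist
                f[i] -= fn
                f[j] += fn
    f[:, 1] -= k_contact * np.minimum(pos[:, 1] - radius, 0.0)  # rigid floor
    return f

acc = forces(pos) / mass                # velocity Verlet time stepping
for step in range(1000):
    pos += vel * dt + 0.5 * acc * dt ** 2
    new_acc = forces(pos) / mass
    vel += 0.5 * (acc + new_acc) * dt
    acc = new_acc

Production DEM codes replace the double loop with cell lists or other nearest-neighbor sorting, and add damping, friction and rotational degrees of freedom to the contact model.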
== Thermal DEM ==
The discrete element method is widely applied for the consideration of mechanical interactions in many-body problems, particularly granular materials. Among the various extensions to DEM, the consideration of heat flow is particularly useful. Generally speaking in Thermal DEM methods, the thermo-mechanical coupling is considered, whereby the thermal properties of an individual element are considered in order to model heat flow through a macroscopic granular or multi-element medium subject to a mechanical loading. Interparticle forces, computed as a part of classical DEM, are used to determine areas of true interparticle contact and thus model the conductive transfer of heat from one solid element to another. A further aspect that is considered in DEM is the gas phase conduction, radiation and convection of heat in the interparticle spaces. To facilitate this, properties of the inter-element gaseous phase need to be considered in terms of pressure, gas conductivity and the mean-free path of gas molecules.
== Long-range forces ==
When long-range forces (typically gravity or the Coulomb force) are taken into account, then the interaction between each pair of particles needs to be computed. Both the number of interactions and cost of computation increase quadratically with the number of particles. This is not acceptable for simulations with large number of particles. A possible way to avoid this problem is to combine some particles, which are far away from the particle under consideration, into one pseudoparticle. Consider as an example the interaction between a star and a distant galaxy: The error arising from combining all the stars in the distant galaxy into one point mass is negligible. So-called tree algorithms are used to decide which particles can be combined into one pseudoparticle. These algorithms arrange all particles in a tree, a quadtree in the two-dimensional case and an octree in the three-dimensional case.
However, simulations in molecular dynamics divide the space in which the simulation take place into cells. Particles leaving through one side of a cell are simply inserted at the other side (periodic boundary conditions); the same goes for the forces. The force is no longer taken into account after the so-called cut-off distance (usually half the length of a cell), so that a particle is not influenced by the mirror image of the same particle in the other side of the cell. One can now increase the number of particles by simply copying the cells.
Algorithms to deal with long-range force include:
Barnes–Hut simulation,
the fast multipole method.
== Combined finite-discrete element method ==
Following the work by Munjiza and Owen, the combined finite-discrete element method has been further developed to various irregular and deformable particles in many applications including pharmaceutical tableting, packaging and flow simulations, and impact analysis.
== Advantages and limitations ==
Advantages
DEM can be used to simulate a wide variety of granular flow and rock mechanics situations. Several research groups have independently developed simulation software that agrees well with experimental findings in a wide range of engineering applications, including adhesive powders, granular flow, and jointed rock masses.
DEM allows a more detailed study of the micro-dynamics of powder flows than is often possible using physical experiments. For example, the force networks formed in a granular media can be visualized using DEM. Such measurements are nearly impossible in experiments with small and many particles.
The general characteristics of force-transmitting contacts in granular assemblies under external loading environments agree with experimental studies using Photo-stress analysis (PSA).
Disadvantages
The maximum number of particles, and duration of a virtual simulation is limited by computational power. Typical flows contain billions of particles, but contemporary DEM simulations on large cluster computing resources have only recently been able to approach this scale for sufficiently long time (simulated time, not actual program execution time).
DEM is computationally demanding, which is the reason why it has not been so readily and widely adopted as continuum approaches in computational engineering sciences and industry. However, the actual program execution times can be reduced significantly when graphical processing units (GPUs) are utilized to conduct DEM simulations, due to the large number of computing cores on typical GPUs. In addition GPUs tend to be significantly more energy efficient than conventional computing clusters when conducting DEM simulations i.e. a DEM simulation solved on GPUs requires less energy than when it is solved on a conventional computing cluster.
== See also ==
Compaction simulation
Movable Cellular Automata
== References ==
== Bibliography ==
Book
Bicanic, Ninad (2004). "Discrete Element Methods". In Stein, Erwin; De Borst; Hughes, Thomas J.R. (eds.). Encyclopedia of Computational Mechanics. Vol. 1. Wiley. ISBN 978-0-470-84699-5.
Griebel, Michael; et al. (2003). Numerische Simulation in der Moleküldynamik. Berlin: Springer. ISBN 978-3-540-41856-6.
Williams, J. R.; Hocking, G.; Mustoe, G. G. W. (January 1985). "The Theoretical Basis of the Discrete Element Method". NUMETA 1985, Numerical Methods of Engineering, Theory and Applications. Rotterdam: A.A. Balkema.
Williams, J. R.; Pande, G.; Beer, J.R. (1990). Numerical Methods in Rock Mechanics. Chichester: Wiley. ISBN 978-0471920212.
Radjai, Farang; Dubois, Frédéric, eds. (2011). Discrete-element modeling of granular materials. London: Wiley-ISTE. ISBN 978-1-84821-260-2.
Pöschel, Thorsten; Schwager, Thoms (2005). Computational Granular Dynamics: Models and Algorithms. Berlin: Springer. ISBN 978-3-540-21485-4.
Periodical
Bobet, A.; Fakhimi, A.; Johnson, S.; Morris, J.; Tonon, F.; Yeung, M. Ronald (November 2009). "Numerical Models in Discontinuous Media: Review of Advances for Rock Mechanics Applications". Journal of Geotechnical and Geoenvironmental Engineering. 135 (11): 1547–1561. doi:10.1061/(ASCE)GT.1943-5606.0000133.
Cundall, P. A.; Strack, O. D. L. (March 1979). "A discrete numerical model for granular assemblies". Géotechnique. 29 (1): 47–65. doi:10.1680/geot.1979.29.1.47.
Kafashan, J.; Wiącek, J.; Abd Rahman, N.; Gan, J. (2019). "Two-dimensional particle shapes modelling for DEM simulations in engineering: a review". Granular Matter. 21 (3): 80. doi:10.1007/s10035-019-0935-1. S2CID 199383188.
Kawaguchi, T.; Tanaka, T.; Tsuji, Y. (May 1998). "Numerical simulation of two-dimensional fluidized beds using the discrete element method (comparison between the two- and three-dimensional models)". Powder Technology. 96 (2): 129–138. doi:10.1016/S0032-5910(97)03366-4. Archived from the original on 2007-09-30. Retrieved 2005-08-23.
Williams, J. R.; O'Connor, R. (December 1999). "Discrete element simulation and the contact problem". Archives of Computational Methods in Engineering. 6 (4): 279–304. CiteSeerX 10.1.1.49.9391. doi:10.1007/BF02818917. S2CID 16642399.
Zhu, H.P.; Zhou, Z.Y.; Yang, R.Y.; Yu, A.B. (July 2007). "Discrete particle simulation of particulate systems: Theoretical developments". Chemical Engineering Science. 62 (13): 3378–3396. Bibcode:2007ChEnS..62.3378Z. doi:10.1016/j.ces.2006.12.089.
Zhu, HP; Zhou, ZY; Yang, RY; Yu, AB (2008). "Discrete particle simulation of particulate systems: A review of major applications and findings". Chemical Engineering Science. 63 (23): 5728–5770. Bibcode:2008ChEnS..63.5728Z. doi:10.1016/j.ces.2008.08.006.
Proceedings
Shi, Gen-Hua (February 1992). "Discontinuous Deformation Analysis: A New Numerical Model For The Statics And Dynamics of Deformable Block Structures". Engineering Computations. 9 (2): 157–168. doi:10.1108/eb023855.
Williams, John R.; Pentland, Alex P. (February 1992). "Superquadrics and Modal Dynamics For Discrete Elements in Interactive Design". Engineering Computations. 9 (2): 115–127. doi:10.1108/eb023852.
Williams, John R.; Mustoe, Graham G. W., eds. (1993). Proceedings of the 2nd International Conference on Discrete Element Methods (DEM) (2nd ed.). Cambridge, MA: IESL Publications. ISBN 978-0-918062-88-8. | Wikipedia/Discrete_element_method |
The finite element method (FEM) is a powerful technique originally developed for the numerical solution of complex problems in structural mechanics, and it remains the method of choice for analyzing complex systems. In FEM, the structural system is modeled by a set of appropriate finite elements interconnected at discrete points called nodes. Elements may have physical properties such as thickness, coefficient of thermal expansion, density, Young's modulus, shear modulus and Poisson's ratio.
== History ==
The origin of the finite element method can be traced to the matrix analysis of structures, where the concept of a displacement or stiffness matrix approach was introduced. Finite element concepts were developed based on engineering methods in the 1950s. The finite element method obtained its real impetus in the 1960s and 1970s by John Argyris and co-workers at the University of Stuttgart; by Ray W. Clough at the University of California, Berkeley; by Olgierd Zienkiewicz and co-workers Ernest Hinton and Bruce Irons at the University of Swansea; by Philippe G. Ciarlet at the University of Paris; and at Cornell University by Richard Gallagher and co-workers. The original works such as those by Argyris and Clough became the foundation for today’s finite element structural analysis methods.
One-dimensional straight or curved elements with physical properties such as axial, bending, and torsional stiffnesses. This type of element is suitable for modeling cables, braces, trusses, beams, stiffeners, grids and frames. Straight elements usually have two nodes, one at each end, while curved elements require at least three nodes including the end-nodes. The elements are positioned at the centroidal axis of the actual members.
Two-dimensional elements that resist only in-plane forces by membrane action (plane stress, plane strain), and plates that resist transverse loads by transverse shear and bending action (plates and shells). They may have a variety of shapes such as flat or curved triangles and quadrilaterals. Nodes are usually placed at the element corners, and additional nodes can be placed along the element edges or even within the element for higher accuracy. The elements are positioned at the mid-surface of the actual layer thickness.
Torus-shaped elements for axisymmetric problems such as membranes, thick plates, shells, and solids. The cross-sections of the elements are similar to the previously described types: one-dimensional for thin plates and shells, and two-dimensional for solids, thick plates and shells.
Three-dimensional elements for modeling 3-D solids such as machine components, dams, embankments or soil masses. Common element shapes include tetrahedrals and hexahedrals. Nodes are placed at the vertexes and possibly on element faces or within the element.
=== Element interconnection and displacement ===
The elements are interconnected only at the exterior nodes, and altogether they should cover the entire domain as accurately as possible. Nodes will have nodal (vector) displacements or degrees of freedom which may include translations, rotations, and for special applications, higher order derivatives of displacements. When the nodes displace, they will drag the elements along in a certain manner dictated by the element formulation. In other words, displacements of any points in the element will be interpolated from the nodal displacements, and this is the main reason for the approximate nature of the solution.
== Practical considerations ==
From the application point of view, it is important to model the system such that:
Symmetry or anti-symmetry conditions are exploited in order to reduce the size of the model.
Displacement compatibility, including any required discontinuity, is ensured at the nodes, and preferably, along the element edges as well, particularly when adjacent elements are of different types, material or thickness. Compatibility of displacements of many nodes can usually be imposed via constraint relations.
Elements' behaviors must capture the dominant actions of the actual system, both locally and globally.
The element mesh should be sufficiently fine in order to produce acceptable accuracy. To assess accuracy, the mesh is refined until the important results show little change. For higher accuracy, the aspect ratio of the elements should be as close to unity as possible, and smaller elements are used over the parts of higher stress gradient.
Proper support constraints are imposed with special attention paid to nodes on symmetry axes.
Large scale commercial software packages often provide facilities for generating the mesh, and the graphical display of input and output, which greatly facilitate the verification of both input data and interpretation of the results.
== Theoretical overview of FEM-Displacement Formulation: From elements, to system, to solution ==
While the theory of FEM can be presented in different perspectives or emphases, its development for structural analysis follows the more traditional approach via the virtual work principle or the minimum total potential energy principle. The virtual work principle approach is more general as it is applicable to both linear and non-linear material behaviors. The virtual work method is an expression of conservation of energy: for conservative systems, the work added to the system by a set of applied forces is equal to the energy stored in the system in the form of strain energy of the structure's components.
The principle of virtual displacements for the structural system expresses the mathematical identity of external and internal virtual work:
In other words, the summation of the work done on the system by the set of external forces is equal to the work stored as strain energy in the elements that make up the system.
The virtual internal work in the right-hand-side of the above equation may be found by summing the virtual work done on the individual elements. The latter requires that force-displacement functions be used that describe the response for each individual element. Hence, the displacement of the structure is described by the response of individual (discrete) elements collectively. The equations are written only for the small domain of individual elements of the structure rather than a single equation that describes the response of the system as a whole (a continuum). The latter would result in an intractable problem, hence the utility of the finite element method. As shown in the subsequent sections, Eq.(1) leads to the following governing equilibrium equation for the system:
{\displaystyle \mathbf {R} =\mathbf {K} \mathbf {r} +\mathbf {R} ^{o}\qquad \qquad \qquad \mathrm {(2)} }
where
{\displaystyle \mathbf {R} } = vector of nodal forces, representing external forces applied to the system's nodes.
{\displaystyle \mathbf {K} } = system stiffness matrix, which is the collective effect of the individual elements' stiffness matrices {\displaystyle \mathbf {k} ^{e}}.
{\displaystyle \mathbf {r} } = vector of the system's nodal displacements.
{\displaystyle \mathbf {R} ^{o}} = vector of equivalent nodal forces, representing all external effects other than the nodal forces which are already included in the preceding nodal force vector R. These external effects may include distributed or concentrated surface forces, body forces, thermal effects, initial stresses and strains.
Once the supports' constraints are accounted for, the nodal displacements are found by solving the system of linear equations (2), symbolically:
{\displaystyle \mathbf {r} =\mathbf {K} ^{-1}(\mathbf {R} -\mathbf {R} ^{o})\qquad \qquad \qquad \mathrm {(3)} }
Subsequently, the strains and stresses in individual elements may be found as follows:
where
{\displaystyle \mathbf {q} } = vector of the element's nodal displacements, a subset of the system displacement vector r that pertains to the elements under consideration.
{\displaystyle \mathbf {B} } = strain-displacement matrix that transforms nodal displacements q to strains at any point in the element.
{\displaystyle \mathbf {E} } = elasticity matrix that transforms effective strains to stresses at any point in the element.
{\displaystyle \mathbf {\epsilon } ^{o}} = vector of initial strains in the elements.
{\displaystyle \mathbf {\sigma } ^{o}} = vector of initial stresses in the elements.
By applying the virtual work equation (1) to the system, we can establish the element matrices {\displaystyle \mathbf {B} } and {\displaystyle \mathbf {k} ^{e}}, as well as the technique of assembling the system matrices {\displaystyle \mathbf {R} ^{o}} and {\displaystyle \mathbf {K} }. Other matrices such as {\displaystyle \mathbf {\epsilon } ^{o}}, {\displaystyle \mathbf {\sigma } ^{o}}, {\displaystyle \mathbf {R} } and {\displaystyle \mathbf {E} } are known values and can be directly set up from data input.
== Interpolation or shape functions ==
Let {\displaystyle \mathbf {q} } be the vector of nodal displacements of a typical element. The displacements at any other point of the element may be found by the use of interpolation functions as, symbolically:
{\displaystyle \mathbf {u} =\mathbf {N} \mathbf {q} \qquad \qquad \qquad \mathrm {(6)} }
where
{\displaystyle \mathbf {u} } = vector of displacements at any point {x,y,z} of the element.
{\displaystyle \mathbf {N} } = matrix of shape functions serving as interpolation functions.
Equation (6) gives rise to other quantities of great interest:
Virtual displacements that are a function of virtual nodal displacements:
{\displaystyle \delta \ \mathbf {u} =\mathbf {N} \delta \ \mathbf {q} \qquad \qquad \qquad \mathrm {(6b)} }
Strains in the elements that result from displacements of the element's nodes:
{\displaystyle \mathbf {\epsilon } =\mathbf {D} \mathbf {N} \mathbf {q} \qquad \qquad \qquad \mathrm {(7)} }
where
{\displaystyle \mathbf {D} } = matrix of differential operators that convert displacements to strains using linear elasticity theory. Eq.(7) shows that matrix B in (4) is
{\displaystyle \mathbf {B} =\mathbf {D} \mathbf {N} \qquad \qquad \qquad \mathrm {(8)} }
Virtual strains consistent with element's virtual nodal displacements:
{\displaystyle \delta \ \mathbf {\epsilon } =\mathbf {B} \delta \ \mathbf {q} \qquad \qquad \qquad \mathrm {(9)} }
== Internal virtual work in a typical element ==
For a typical element of volume {\displaystyle V^{e}}, the internal virtual work due to virtual displacements is obtained by substitution of (5) and (9) into (1):
=== Element matrices ===
Primarily for the convenience of reference, the following matrices pertaining to a typical elements may now be defined:
Element stiffness matrix
{\displaystyle \mathbf {k} ^{e}=\int _{V^{e}}\mathbf {B} ^{T}\mathbf {E} \mathbf {B} \,dV^{e}}
Equivalent element load vector
{\displaystyle \mathbf {Q} ^{oe}=\int _{V^{e}}\mathbf {B} ^{T}\left(\mathbf {\sigma } ^{o}-\mathbf {E} \mathbf {\epsilon } ^{o}\right)dV^{e}}
These matrices are usually evaluated numerically using Gaussian quadrature for numerical integration.
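For instance (a sketch assuming a two-node bar element with constant modulus E and cross-section A; an illustration, not the article's own example), the element stiffness integral above can be evaluated with Gauss–Legendre quadrature on the reference interval:

import numpy as np

def bar_stiffness(E, A, L, n_gauss=2):
    # k^e for a two-node bar: shape functions N1 = (1-s)/2, N2 = (1+s)/2 on
    # the reference interval s in [-1, 1], so B = [-1/L, 1/L] is constant
    # and the Jacobian of the mapping x(s) is L/2.
    pts, wts = np.polynomial.legendre.leggauss(n_gauss)
    B = np.array([[-1.0 / L, 1.0 / L]])      # strain-displacement matrix
    k = np.zeros((2, 2))
    for s, w in zip(pts, wts):
        k += w * (E * A) * (B.T @ B) * (L / 2.0)
    return k

print(bar_stiffness(E=210e9, A=1e-4, L=2.0))
# expected: (E*A/L) * [[1, -1], [-1, 1]]; a single Gauss point already
# suffices here because the integrand is constant over the element.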
Their use simplifies (10) to the following:
=== Element virtual work in terms of system nodal displacements ===
Since the nodal displacement vector q is a subset of the system nodal displacements r (for compatibility with adjacent elements), we can replace q with r by expanding the size of the element matrices with new columns and rows of zeros:
where, for simplicity, we use the same symbols for the element matrices, which now have expanded size as well as suitably rearranged rows and columns.
== System virtual work ==
Summing the internal virtual work (14) for all elements gives the right-hand-side of (1):
{\displaystyle \delta \ \mathbf {r} ^{T}\left(\sum _{e}\mathbf {k} ^{e}\right)\mathbf {r} +\delta \ \mathbf {r} ^{T}\sum _{e}\mathbf {Q} ^{oe}\qquad \qquad \qquad \mathrm {(15)} }
Considering now the left-hand-side of (1), the system external virtual work consists of:
The work done by the nodal forces R:
{\displaystyle \delta \ \mathbf {r} ^{T}\mathbf {R} \qquad \qquad \qquad \mathrm {(16)} }
The work done by external forces {\displaystyle \mathbf {T} ^{e}} on the part {\displaystyle \mathbf {S} ^{e}} of the elements' edges or surfaces, and by the body forces {\displaystyle \mathbf {f} ^{e}}:
{\displaystyle \sum _{e}\int _{S^{e}}\delta \ \mathbf {u} ^{T}\mathbf {T} ^{e}\,dS^{e}+\sum _{e}\int _{V^{e}}\delta \ \mathbf {u} ^{T}\mathbf {f} ^{e}\,dV^{e}}
Substitution of (6b) gives:
{\displaystyle \delta \ \mathbf {q} ^{T}\sum _{e}\int _{S^{e}}\mathbf {N} ^{T}\mathbf {T} ^{e}\,dS^{e}+\delta \ \mathbf {q} ^{T}\sum _{e}\int _{V^{e}}\mathbf {N} ^{T}\mathbf {f} ^{e}\,dV^{e}}
or
{\displaystyle -\delta \ \mathbf {q} ^{T}\sum _{e}\left(\mathbf {Q} ^{te}+\mathbf {Q} ^{fe}\right)\qquad \mathrm {(17a)} }
where we have introduced additional element matrices defined below:
{\displaystyle \mathbf {Q} ^{te}=-\int _{S^{e}}\mathbf {N} ^{T}\mathbf {T} ^{e}\,dS^{e}\qquad \mathrm {(18a)} }
{\displaystyle \mathbf {Q} ^{fe}=-\int _{V^{e}}\mathbf {N} ^{T}\mathbf {f} ^{e}\,dV^{e}\qquad \mathrm {(18b)} }
Again, numerical integration is convenient for their evaluation. A similar replacement of q in (17a) with r gives, after rearranging and expanding the vectors {\displaystyle \mathbf {Q} ^{te},\mathbf {Q} ^{fe}}:
{\displaystyle -\delta \ \mathbf {r} ^{T}\sum _{e}\left(\mathbf {Q} ^{te}+\mathbf {Q} ^{fe}\right)\qquad \mathrm {(17b)} }
== Assembly of system matrices ==
Adding (16), (17b) and equating the sum to (15) gives:
{\displaystyle \delta \ \mathbf {r} ^{T}\mathbf {R} -\delta \ \mathbf {r} ^{T}\sum _{e}\left(\mathbf {Q} ^{te}+\mathbf {Q} ^{fe}\right)=\delta \ \mathbf {r} ^{T}\left(\sum _{e}\mathbf {k} ^{e}\right)\mathbf {r} +\delta \ \mathbf {r} ^{T}\sum _{e}\mathbf {Q} ^{oe}}
Since the virtual displacements {\displaystyle \delta \ \mathbf {r} } are arbitrary, the preceding equality reduces to:
{\displaystyle \mathbf {R} =\left(\sum _{e}\mathbf {k} ^{e}\right)\mathbf {r} +\sum _{e}\left(\mathbf {Q} ^{oe}+\mathbf {Q} ^{te}+\mathbf {Q} ^{fe}\right)}
Comparison with (2) shows that:
The system stiffness matrix is obtained by summing the elements' stiffness matrices:
{\displaystyle \mathbf {K} =\sum _{e}\mathbf {k} ^{e}}
The vector of equivalent nodal forces is obtained by summing the elements' load vectors:
{\displaystyle \mathbf {R} ^{o}=\sum _{e}\left(\mathbf {Q} ^{oe}+\mathbf {Q} ^{te}+\mathbf {Q} ^{fe}\right)}
In practice, the element matrices are neither expanded nor rearranged. Instead, the system stiffness matrix {\displaystyle \mathbf {K} } is assembled by adding individual coefficients {\displaystyle {k}_{ij}^{e}} to {\displaystyle {K}_{kl}}, where the subscripts ij, kl mean that the element's nodal displacements {\displaystyle {q}_{i}^{e},{q}_{j}^{e}} match respectively with the system's nodal displacements {\displaystyle {r}_{k},{r}_{l}}. Similarly, {\displaystyle \mathbf {R} ^{o}} is assembled by adding individual coefficients {\displaystyle {Q}_{i}^{e}} to {\displaystyle {R}_{k}^{o}} where {\displaystyle {q}_{i}^{e}} matches {\displaystyle {r}_{k}}. This direct addition of {\displaystyle {k}_{ij}^{e}} into {\displaystyle {K}_{kl}} gives the procedure the name Direct Stiffness Method.
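The bookkeeping behind this direct addition is commonly implemented with a connectivity (location) map from each element's local degrees of freedom to the system's. The sketch below illustrates the idea for a chain of two-node bar elements; the layout, stiffness values, and names are assumptions of the example, not from the text.

```python
import numpy as np

def assemble(n_dof, elements):
    """Direct stiffness assembly: scatter-add each element matrix k^e
    into the system matrix K via the element's dof map, i.e. add
    k_ij^e to K_kl where local dofs i, j map to system dofs k, l."""
    K = np.zeros((n_dof, n_dof))
    for dofs, ke in elements:
        for i, k_sys in enumerate(dofs):
            for j, l_sys in enumerate(dofs):
                K[k_sys, l_sys] += ke[i, j]
    return K

# Two identical bar elements in series sharing node 1 (3 system dofs).
ke = 5.0 * np.array([[1.0, -1.0], [-1.0, 1.0]])
K = assemble(3, [((0, 1), ke), ((1, 2), ke)])
print(K)   # [[ 5. -5.  0.] [-5. 10. -5.] [ 0. -5.  5.]]
```

Once {\displaystyle \mathbf {K} } and {\displaystyle \mathbf {R} ^{o}} are assembled and support constraints applied, the unknown nodal displacements follow from solving the linear system R = Kr + R^o for r.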
== See also ==
Finite element method
Flexibility method
Interval finite element
List of finite element software packages
Matrix stiffness method
Modal analysis using FEM
Structural analysis
Virtual work
== References == | Wikipedia/Finite_element_method_in_structural_mechanics |
Acoustics is a branch of physics that deals with the study of mechanical waves in gases, liquids, and solids including topics such as vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician while someone working in the field of acoustics technology may be called an acoustical engineer. The application of acoustics is present in almost all aspects of modern society with the most obvious being the audio and noise control industries.
Hearing is one of the most crucial means of survival in the animal world and speech is one of the most distinctive characteristics of human development and culture. Accordingly, the science of acoustics spreads across many facets of human society—music, medicine, architecture, industrial production, warfare and more. Likewise, animal species such as songbirds and frogs use sound and hearing as a key element of mating rituals or for marking territories. Art, craft, science and technology have provoked one another to advance the whole, as in many other fields of knowledge. Robert Bruce Lindsay's "Wheel of Acoustics" is a well-accepted overview of the various fields in acoustics.
== History ==
=== Etymology ===
The word "acoustic" is derived from the Greek word ἀκουστικός (akoustikos), meaning "of or for hearing, ready to hear" and that from ἀκουστός (akoustos), "heard, audible", which in turn derives from the verb ἀκούω (akouo), "I hear".
The Latin synonym is "sonic", after which the term sonics used to be a synonym for acoustics and later a branch of acoustics. Frequencies above and below the audible range are called "ultrasonic" and "infrasonic", respectively.
=== Early research in acoustics ===
In the 6th century BC, the ancient Greek philosopher Pythagoras wanted to know why some combinations of musical sounds seemed more beautiful than others, and he found answers in terms of numerical ratios representing the harmonic overtone series on a string. He is reputed to have observed that when the lengths of vibrating strings are expressible as ratios of integers (e.g. 2 to 3, 3 to 4), the tones produced will be harmonious, and the smaller the integers the more harmonious the sounds. For example, a string of a certain length would sound particularly harmonious with a string of twice the length (other factors being equal). In modern parlance, if a string sounds the note C when plucked, a string twice as long will sound a C an octave lower. In one system of musical tuning, the tones in between are then given by 16:9 for D, 8:5 for E, 3:2 for F, 4:3 for G, 6:5 for A, and 16:15 for B, in ascending order.
Aristotle (384–322 BC) understood that sound consisted of compressions and rarefactions of air which "falls upon and strikes the air which is next to it...", a very good expression of the nature of wave motion. On Things Heard, generally ascribed to Strato of Lampsacus, states that the pitch is related to the frequency of vibrations of the air and to the speed of sound.
In about 20 BC, the Roman architect and engineer Vitruvius wrote a treatise on the acoustic properties of theaters including discussion of interference, echoes, and reverberation—the beginnings of architectural acoustics. In Book V of his De architectura (The Ten Books of Architecture) Vitruvius describes sound as a wave comparable to a water wave extended to three dimensions, which, when interrupted by obstructions, would flow back and break up following waves. He described the ascending seats in ancient theaters as designed to prevent this deterioration of sound and also recommended bronze vessels (echea) of appropriate sizes be placed in theaters to resonate with the fourth, fifth and so on, up to the double octave, in order to resonate with the more desirable, harmonious notes.
During the Islamic golden age, Abū Rayhān al-Bīrūnī (973–1048) is believed to have postulated that the speed of sound was much slower than the speed of light.
The physical understanding of acoustical processes advanced rapidly during and after the Scientific Revolution. Galileo Galilei (1564–1642) and, independently, Marin Mersenne (1588–1648) discovered the complete laws of vibrating strings (completing what Pythagoras and the Pythagoreans had started 2000 years earlier). Galileo wrote "Waves are produced by the vibrations of a sonorous body, which spread through the air, bringing to the tympanum of the ear a stimulus which the mind interprets as sound", a remarkable statement that points to the beginnings of physiological and psychological acoustics. Experimental measurements of the speed of sound in air were carried out successfully between 1630 and 1680 by a number of investigators, prominently Mersenne. Inspired by Mersenne's Harmonie universelle (Universal Harmony) of 1634, the Rome-based Jesuit scholar Athanasius Kircher undertook research in acoustics. Kircher published two major books on acoustics: the Musurgia universalis (Universal Music-Making) in 1650 and the Phonurgia nova (New Sound-Making) in 1673. Meanwhile, Newton (1642–1727) derived the relationship for wave velocity in solids, a cornerstone of physical acoustics (Principia, 1687).
=== Age of Enlightenment and onward ===
Substantial progress in acoustics, resting on firmer mathematical and physical concepts, was made during the eighteenth century by Euler (1707–1783), Lagrange (1736–1813), and d'Alembert (1717–1783). During this era, continuum physics, or field theory, began to receive a definite mathematical structure. The wave equation emerged in a number of contexts, including the propagation of sound in air.
In the nineteenth century the major figures of mathematical acoustics were Helmholtz in Germany, who consolidated the field of physiological acoustics, and Lord Rayleigh in England, who combined the previous knowledge with his own copious contributions to the field in his monumental work The Theory of Sound (1877). Also in the 19th century, Wheatstone, Ohm, and Henry developed the analogy between electricity and acoustics.
The twentieth century saw a burgeoning of technological applications of the large body of scientific knowledge that was by then in place. The first such application was Sabine's groundbreaking work in architectural acoustics, and many others followed. Underwater acoustics was used for detecting submarines in the First World War. Sound recording and the telephone played important roles in a global transformation of society. Sound measurement and analysis reached new levels of accuracy and sophistication through the use of electronics and computing. The ultrasonic frequency range enabled wholly new kinds of application in medicine and industry. New kinds of transducers (generators and receivers of acoustic energy) were invented and put to use.
== Definition ==
Acoustics is defined by ANSI/ASA S1.1-2013 as "(a) Science of sound, including its production, transmission, and effects, including biological and psychological effects. (b) Those qualities of a room that, together, determine its character with respect to auditory effects."
The study of acoustics revolves around the generation, propagation and reception of mechanical waves and vibrations.
The steps shown in the above diagram can be found in any acoustical event or process. There are many kinds of cause, both natural and volitional. There are many kinds of transduction process that convert energy from some other form into sonic energy, producing a sound wave. There is one fundamental equation that describes sound wave propagation, the acoustic wave equation, but the phenomena that emerge from it are varied and often complex. The wave carries energy throughout the propagating medium. Eventually this energy is transduced again into other forms, in ways that again may be natural and/or volitionally contrived. The final effect may be purely physical or it may reach far into the biological or volitional domains. The five basic steps are found equally well whether we are talking about an earthquake, a submarine using sonar to locate its foe, or a band playing in a rock concert.
The central stage in the acoustical process is wave propagation. This falls within the domain of physical acoustics. In fluids, sound propagates primarily as a pressure wave. In solids, mechanical waves can take many forms including longitudinal waves, transverse waves and surface waves.
Acoustics looks first at the pressure levels and frequencies in the sound wave and how the wave interacts with the environment. This interaction can be described as diffraction, interference, or reflection, or a mix of the three. If several media are present, refraction can also occur. Transduction processes are also of special importance to acoustics.
== Fundamental concepts ==
=== Wave propagation: pressure levels ===
In fluids such as air and water, sound waves propagate as disturbances in the ambient pressure level. While this disturbance is usually small, it is still noticeable to the human ear. The smallest sound that a person can hear, known as the threshold of hearing, is nine orders of magnitude smaller than the ambient pressure. The loudness of these disturbances is related to the sound pressure level (SPL) which is measured on a logarithmic scale in decibels.
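As a worked illustration of that logarithmic scale (an example added here, not from the original text): sound pressure level is conventionally computed as 20·log10(p/p_ref), with the reference p_ref = 20 µPa in air chosen near the threshold of hearing.

```python
import math

P_REF = 20e-6  # reference RMS pressure in air: 20 micropascals

def spl_db(p_rms):
    """Sound pressure level in decibels for an RMS pressure in pascals."""
    return 20.0 * math.log10(p_rms / P_REF)

print(spl_db(20e-6))  #   0.0 dB, the nominal threshold of hearing
print(spl_db(0.02))   #  60.0 dB, roughly conversational speech
print(spl_db(20.0))   # 120.0 dB, approaching the threshold of pain
```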
=== Wave propagation: frequency ===
Physicists and acoustic engineers tend to discuss sound pressure levels in terms of frequencies, partly because this is how our ears interpret sound. What we experience as "higher pitched" or "lower pitched" sounds are pressure vibrations having a higher or lower number of cycles per second. In a common technique of acoustic measurement, acoustic signals are sampled in time, and then presented in more meaningful forms such as octave bands or time frequency plots. Both of these popular methods are used to analyze sound and better understand the acoustic phenomenon.
The entire spectrum can be divided into three sections: audio, ultrasonic, and infrasonic. The audio range falls between 20 Hz and 20,000 Hz. This range is important because its frequencies can be detected by the human ear. This range has a number of applications, including speech communication and music. The ultrasonic range refers to the very high frequencies: 20,000 Hz and higher. This range has shorter wavelengths which allow better resolution in imaging technologies. Medical applications such as ultrasonography and elastography rely on the ultrasonic frequency range. On the other end of the spectrum, the lowest frequencies are known as the infrasonic range. These frequencies can be used to study geological phenomena such as earthquakes.
Analytic instruments such as the spectrum analyzer facilitate visualization and measurement of acoustic signals and their properties. The spectrogram produced by such an instrument is a graphical display of the time varying pressure level and frequency profiles which give a specific acoustic signal its defining character.
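A minimal sketch of how such a time-frequency display can be computed in software, assuming SciPy's short-time Fourier routine is available; the test signal and parameters are invented for illustration:

```python
import numpy as np
from scipy import signal

fs = 8000                               # sample rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Invented test signal: a 440 Hz tone stepping up to 880 Hz after 1 s.
x = np.where(t < 1.0,
             np.sin(2 * np.pi * 440 * t),
             np.sin(2 * np.pi * 880 * t))

# Sxx[i, j] is the signal power near frequency freqs[i] at time times[j].
freqs, times, Sxx = signal.spectrogram(x, fs, nperseg=256)
peak = freqs[Sxx.argmax(axis=0)]        # dominant frequency per time slice
print(peak[times < 1.0].mean())         # ~440 Hz (nearest FFT bin)
print(peak[times > 1.0].mean())         # ~880 Hz
```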
=== Transduction in acoustics ===
A transducer is a device for converting one form of energy into another. In an electroacoustic context, this means converting sound energy into electrical energy (or vice versa). Electroacoustic transducers include loudspeakers, microphones, particle velocity sensors, hydrophones and sonar projectors. These devices convert a sound wave to or from an electric signal. The most widely used transduction principles are electromagnetism, electrostatics and piezoelectricity.
The transducers in most common loudspeakers (e.g. woofers and tweeters), are electromagnetic devices that generate waves using a suspended diaphragm driven by an electromagnetic voice coil, sending off pressure waves. Electret microphones and condenser microphones employ electrostatics—as the sound wave strikes the microphone's diaphragm, it moves and induces a voltage change. The ultrasonic systems used in medical ultrasonography employ piezoelectric transducers. These are made from special ceramics in which mechanical vibrations and electrical fields are interlinked through a property of the material itself.
== Acoustician ==
An acoustician is an expert in the science of sound.
=== Education ===
There are many types of acoustician, but they usually have a Bachelor's degree or higher qualification. Some possess a degree in acoustics, while others enter the discipline via studies in fields such as physics or engineering. Much work in acoustics requires a good grounding in mathematics and science. Many acoustic scientists work in research and development. Some conduct basic research to advance our knowledge of the perception (e.g. hearing, psychoacoustics or neurophysiology) of speech, music and noise. Other acoustic scientists advance understanding of how sound is affected as it moves through environments, e.g. underwater acoustics, architectural acoustics or structural acoustics. Other areas of work are listed under subdisciplines below. Acoustic scientists work in government, university and private industry laboratories. Many go on to work in acoustical engineering. Some positions, such as faculty (academic staff) posts, require a Doctor of Philosophy.
== Subdisciplines ==
=== Archaeoacoustics ===
Archaeoacoustics, also known as the archaeology of sound, is one of the few ways to experience the past with senses other than our eyes. Archaeoacoustics is studied by testing the acoustic properties of prehistoric sites, including caves. Iegor Reznikoff, a sound archaeologist, studies the acoustic properties of caves through natural sounds like humming and whistling. Archaeological theories of acoustics are focused around ritualistic purposes as well as echolocation in the caves. In archaeology, acoustic sounds and rituals directly correlate, as specific sounds were meant to bring ritual participants closer to a spiritual awakening. Parallels can also be drawn between cave wall paintings and the acoustic properties of the cave; they are both dynamic. Because archaeoacoustics is a fairly new archaeological subject, acoustic sound is still being tested in these prehistoric sites today.
=== Aeroacoustics ===
Aeroacoustics is the study of noise generated by air movement, for instance via turbulence, and the movement of sound through the fluid air. This knowledge was applied in the 1920s and '30s to detect aircraft before radar was invented and is applied in acoustical engineering to study how to quieten aircraft. Aeroacoustics is important for understanding how wind musical instruments work.
=== Acoustic signal processing ===
Acoustic signal processing is the electronic manipulation of acoustic signals. Applications include: active noise control; design for hearing aids or cochlear implants; echo cancellation; music information retrieval, and perceptual coding (e.g. MP3 or Opus).
=== Architectural acoustics ===
Architectural acoustics (also known as building acoustics) involves the scientific understanding of how to achieve good sound within a building. It typically involves the study of speech intelligibility, speech privacy, music quality, and vibration reduction in the built environment. Commonly studied environments are hospitals, classrooms, dwellings, performance venues, recording and broadcasting studios. Focus considerations include room acoustics, airborne and impact transmission in building structures, airborne and structure-borne noise control, noise control of building systems and electroacoustic systems.
=== Bioacoustics ===
Bioacoustics is the scientific study of the hearing and calls of animals, as well as how animals are affected by the acoustics and sounds of their habitat.
=== Electroacoustics ===
This subdiscipline is concerned with the recording, manipulation and reproduction of audio using electronics. This might include products such as mobile phones, large scale public address systems or virtual reality systems in research laboratories.
=== Environmental noise and soundscapes ===
Environmental acoustics is the study of noise and vibrations, and their impact on structures, objects, humans, and animals.
The main aim of these studies is to reduce levels of environmental noise and vibration. Typical work and research within environmental acoustics concerns the development of models used in simulations, measurement techniques, noise mitigation strategies, and the development of standards and regulations. Research work now also has a focus on the positive use of sound in urban environments: soundscapes and tranquility.
Examples of noise and vibration sources include railways, road traffic, aircraft, industrial equipment and recreational activities.
=== Musical acoustics ===
Musical acoustics is the study of the physics of acoustic instruments; the audio signal processing used in electronic music; the computer analysis of music and composition, and the perception and cognitive neuroscience of music.
=== Psychoacoustics ===
Many studies have been conducted to identify the relationship between acoustics and cognition, more commonly known as psychoacoustics, in which what one hears is a combination of perception and biological aspects. The information intercepted by the passage of sound waves through the ear is understood and interpreted through the brain, emphasizing the connection between the mind and acoustics. Psychological changes have been seen as brain waves slow down or speed up as a result of varying auditory stimuli, which can in turn affect the way one thinks, feels, or even behaves. This correlation can be viewed in normal, everyday situations in which listening to an upbeat or uptempo song can cause one's foot to start tapping or a slower song can leave one feeling calm and serene. In a deeper biological look at the phenomenon of psychoacoustics, it was discovered that the central nervous system is activated by basic acoustical characteristics of music. Observing how the central nervous system, which includes the brain and spine, is influenced by acoustics makes evident the pathway by which acoustics affects the mind, and essentially the body.
=== Speech ===
Acousticians study the production, processing and perception of speech. Speech recognition and speech synthesis are two important areas of speech processing using computers. The subject also overlaps with the disciplines of physics, physiology, psychology, and linguistics.
=== Structural Vibration and Dynamics ===
Structural acoustics is the study of motions and interactions of mechanical systems with their environments and the methods of their measurement, analysis, and control. There are several sub-disciplines found within this regime:
Modal Analysis
Material characterization
Structural health monitoring
Acoustic Metamaterials
Friction Acoustics
Applications might include: ground vibrations from railways; vibration isolation to reduce vibration in operating theatres; studying how vibration can damage health (vibration white finger); vibration control to protect a building from earthquakes, or measuring how structure-borne sound moves through buildings.
=== Ultrasonics ===
Ultrasonics deals with sounds at frequencies too high to be heard by humans. Specialisms include medical ultrasonics (including medical ultrasonography), sonochemistry, ultrasonic testing, material characterisation and underwater acoustics (sonar).
=== Underwater acoustics ===
Underwater acoustics is the scientific study of natural and man-made sounds underwater. Applications include sonar to locate submarines, underwater communication by whales, climate change monitoring by measuring sea temperatures acoustically, sonic weapons, and marine bioacoustics.
== Research ==
=== Professional societies ===
The Acoustical Society of America (ASA)
Australian Acoustical Society (AAS)
The European Acoustics Association (EAA)
Institute of Electrical and Electronics Engineers (IEEE)
Institute of Acoustics (IoA UK)
The Audio Engineering Society (AES)
American Society of Mechanical Engineers, Noise Control and Acoustics Division (ASME-NCAD)
International Commission for Acoustics (ICA)
American Institute of Aeronautics and Astronautics, Aeroacoustics (AIAA)
International Computer Music Association (ICMA)
=== Academic journals ===
Acoustics (MDPI)
Acoustics Today
Acta Acustica united with Acustica
Advances in Acoustics and Vibration
Applied Acoustics
Building Acoustics
IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control
Journal of the Acoustical Society of America (JASA)
Journal of the Acoustical Society of America, Express Letters (JASA-EL)
Journal of the Audio Engineering Society
Journal of Sound and Vibration (JSV)
Journal of Vibration and Acoustics American Society of Mechanical Engineers
Noise Control Engineering Journal
SAE International Journal of Vehicle Dynamics, Stability and NVH
Ultrasonics (journal)
Ultrasonics Sonochemistry
Wave Motion
=== Conferences ===
InterNoise
NoiseCon
Forum Acousticum
SAE Noise and Vibration Conference and Exhibition
== See also ==
== References ==
== Further reading ==
Attenborough K, Postema M (2008). A pocket-sized introduction to acoustics. Kingston upon Hull: University of Hull. doi:10.5281/zenodo.7504060. ISBN 978-90-812588-2-1.
Benade AH (1976). Fundamentals of Musical Acoustics. New York: Oxford University Press. ISBN 978-0-19-502030-4. OCLC 2270137.
Biryukov SV, Gulyaev YV, Krylov VV, Plessky VP (1995). Surface Acoustic Waves in Inhomogeneous Media. Heidelberg: Springer. ISBN 978-3-540-58460-5.
Crocker MJ, ed. (1997). Encyclopedia of Acoustics. Hoboken: Wiley. OCLC 441305164.
Falkovich G (2011). Fluid Mechanics, a short course for physicists. Cambridge: Cambridge University Press. ISBN 978-1-107-00575-4.
Fahy FJ, Gardonio P (2007). Sound and Structural Vibration: Radiation, Transmission and Response (2nd ed.). Amsterdam: Academic Press. ISBN 978-0-08-047110-5.
Junger MC, Feit D (1986). Sound, Structures and Their Interaction (2nd ed.). Cambridge: MIT Press. Archived from the original on 2014-06-05.
Kinsler LE (1999). Fundamentals of Acoustics (4th ed.). Hoboken: Wiley. ISBN 978-04718-4-789-2.
Mason WP, Thurston RN (1981). Physical Acoustics. Heidelberg: Springer. Archived from the original on 2013-12-25.
Morse PM, Ingard KU (1986). Theoretical Acoustics. Princeton: Princeton University Press. ISBN 0-691-08425-4.
Pierce AD (1989). Acoustics: An Introduction to its Physical Principles and Applications. Melville: Acoustical Society of America. ISBN 0-88318-612-8.
Raichel DR (2006). The Science and Applications of Acoustics (2nd ed.). Heidelberg: Springer. ISBN 0-387-30089-9.
Lord Rayleigh (1894). The Theory of Sound. New York: Dover. ISBN 978-0-8446-3028-1.
Skudrzyk E (1971). The Foundations of Acoustics: Basic Mathematics and Basic Acoustics. Heidelberg: Springer.
Stephens RW, Bate AE (1966). Acoustics and Vibrational Physics (2nd ed.). London: Edward Arnold.
Wilson CE (2006). Noise Control (Revised ed.). Malabar: Krieger. ISBN 978-1-57524-237-8. OCLC 59223706.
== External links ==
International Commission for Acoustics
European Acoustics Association
Acoustical Society of America
Institute of Noise Control Engineers
National Council of Acoustical Consultants
Institute of Acoustics (UK)
Australian Acoustical Society (AAS) | Wikipedia/acoustics |
Music therapy, an allied health profession, "is the clinical and evidence-based use of music interventions to accomplish individualized goals within a therapeutic relationship by a credentialed professional who has completed an approved music therapy program." It is also a vocation, involving a deep commitment to music and the desire to use it as a medium to help others. Although music therapy has only been established as a profession relatively recently, the connection between music and therapy is not new.
Music therapy is a broad field. Music therapists use music-based experiences to address client needs in one or more domains of human functioning: cognitive, academic, emotional/psychological, behavioral, communication, social, physiological (sensory, motor, pain, neurological and other physical systems), spiritual, and aesthetic. Music experiences are strategically designed to use the elements of music for therapeutic effects, including melody, harmony, key, mode, meter, rhythm, pitch/range, duration, timbre, form, texture, and instrumentation.
Some common music therapy practices include developmental work (communication, motor skills, etc.) with individuals with special needs, songwriting and listening in reminiscence, orientation work with the elderly, processing and relaxation work, and rhythmic entrainment for physical rehabilitation in stroke survivors. Music therapy is used in medical hospitals, cancer centers, schools, alcohol and drug recovery programs, psychiatric hospitals, nursing homes, and correctional facilities.
Music therapy is distinctive from musopathy, which relies on a more generic and non-cultural approach based on neural, physical, and other responses to the fundamental aspects of sound.
Music therapy might also incorporate practices from sound healing, also known as sound immersion or sound therapy, which focuses on sound rather than song. Sound healing describes the use of vibrations and frequencies for relaxation, meditation, and other claimed healing benefits. Unlike music therapy, sound healing is unregulated and an alternative therapy.
Music therapy aims to provide physical and mental benefit. Music therapists use their techniques to help their patients in many areas, ranging from stress relief before and after surgeries to neuropathologies such as Alzheimer's disease. Studies on people diagnosed with mental health disorders such as anxiety, depression, and schizophrenia have associated some improvements in mental health after music therapy. The National Institute for Health and Care Excellence (NICE) have claimed that music therapy is an effective method in helping people experiencing mental health issues, and more should be done to offer those in need this type of help.
== Uses ==
=== Children and adolescents ===
Music therapy may be suggested for adolescent populations to help manage disorders usually diagnosed in adolescence, such as mood/anxiety disorders and eating disorders, or inappropriate behaviors, including suicide attempts, withdrawal from family, social isolation from peers, aggression, running away, and substance abuse. Goals in treating adolescents with music therapy, especially for those at high risk, often include increased recognition and awareness of emotions and moods, improved decision-making skills, opportunities for creative self expression, decreased anxiety, increased self-confidence, improved self-esteem, and better listening skills.
There is some evidence that, when combined with other types of rehabilitation, music therapy may contribute to the success rate of sensorimotor, cognitive, and communicative rehabilitation. For children and adolescents with major depressive or anxiety disorders, there is moderate to low quality evidence that music therapy added to the standard treatment may reduce internalizing symptoms and may be more effective than treatment as usual (without music therapy).
==== Methods ====
Among adolescents, group meetings and individual sessions are the main methods for music therapy. Both methods may include listening to music, discussing concerning moods and emotions in or toward music, analyzing the meanings of specific songs, writing lyrics, composing or performing music, and musical improvisation.
Private individual sessions can provide personal attention and are most effective when using music preferred by the patient. Using music that adolescents can relate to or connect with can help adolescent patients view the therapist as safe and trustworthy, and to engage in therapy with less resistance. Music therapy conducted in groups allows adolescent individuals to feel a sense of belonging, express their opinions, learn how to socialize and verbalize appropriately with peers, improve compromising skills, and develop tolerance and empathy. Group sessions that emphasize cooperation and cohesion can be effective in working with adolescents.
Music therapy intervention programs typically include about 18 sessions of treatment. The achievement of a physical rehabilitation goal relies on the child's existing motivation and feelings towards music and their commitment to engage in meaningful, rewarding efforts. Regaining full functioning also depends on the prognosis of recovery, the condition of the client, and the environmental resources available. Both techniques use systematic processes where the therapists assist the client by using musical experiences and connections that collaborate as a dynamic force of change toward rehabilitation.
==== Assessment ====
Assessment includes obtaining a full medical history, the client's current non-musical functioning (social, physical/motor, emotional, etc.) and goals, musical functioning (ability to duplicate a melody or identify changes in rhythm, etc.), and a determination of the potential for music therapy to be effective in addressing the client's goals.
=== Premature infants ===
Premature infants are those born at 37 weeks after conception or earlier. They are subject to numerous health risks, such as abnormal breathing patterns, decreased body fat and muscle tissue, as well as feeding issues. The coordination for sucking and breathing is often not fully developed, making feeding a challenge. Offering music therapy to premature infants while they are in the neonatal intensive care unit (NICU) aims to mask unwanted auditory stimuli, stimulate infant development, and promote a calm environment for families. While there are no reported adverse effects from music therapy, the evidence supporting music therapy's beneficial effects for infants is weak, as many of the clinical trials that have been performed either had mixed results or were poorly designed. There is no strong evidence to suggest that music therapy improves an infant's oxygen saturation, sucking, or development when compared to usual care. There is some weaker evidence that music therapy may decrease an infant's heart rate. There is no evidence to indicate that music therapy reduces anxiety in parents of preterm infants in the NICU, nor information on which type of music therapy may be more beneficial or for how long.
=== Medical disorders ===
Music may both motivate and provide a sense of distraction. Rhythmic stimuli have been found to help with balance training for those with a brain injury.
Singing is a form of rehabilitation for neurological impairments. Neurological impairments following a brain injury can take the form of apraxia (loss of the ability to perform purposeful movements), dysarthria (muscle control disturbances due to damage of the central nervous system), aphasia (a defect in expression causing distorted speech), or impaired language comprehension. Patients with schizophrenia have been shown to have altered emotional responses to major and minor chords. Singing training has been found to improve lung function, speech clarity, and coordination of speech muscles, thus accelerating rehabilitation of such neurological impairments. For example, melodic intonation therapy is the practice of communicating with others by singing to enhance speech or increase speech production by promoting socialization and emotional expression.
==== Autism ====
Music may help people with autism hone their motor and attention skills as well as healthy neurodevelopment of socio-communication and interaction skills. Music therapy may also contribute to improved selective attention, speech production, and language processing and acquisition in people with autism.
Music therapy may benefit the family as a whole. Some family members of children with autism claim that music therapy sessions have allowed their child to interact more with the family and the world. Music therapy is also beneficial in that it gives children an outlet to use outside of the sessions. Some children after participating in music therapy may want to keep making music long after the sessions end.
==== Heart disease ====
Listening to music may improve heart rate, respiratory rate, and blood pressure in those with coronary heart disease (CHD).
==== Stroke ====
Music may be a useful tool in the recovery of motor skills.
==== Dementia ====
Like many of the other disorders mentioned, some of the most common significant effects of the disorder can be seen in social behaviors, leading to improvements in interaction, conversation, and other such skills. A study of over 330 subjects showed that music therapy produces highly significant improvements in social behaviors, overt behaviors like wandering and restlessness, reductions in agitated behaviors, and improvements to cognitive defects, measured with reality orientation and face recognition tests. The effectiveness of the treatment seems to be strongly dependent on the patient and the quality and length of treatment.
A meta-study examined the proposed neurological mechanisms behind music therapy's effects on these patients. Many authors suspect that music has a soothing effect on the patient by affecting how noise is perceived: music renders noise familiar, or buffers the patient from overwhelming or extraneous noise in their environment. Others suggest that music serves as a sort of mediator for social interactions, providing a vessel through which to interact with others without requiring much cognitive load.
==== Aphasia ====
Broca's aphasia, or non-fluent aphasia, is a language disorder caused by damage to Broca's area and surrounding regions in the left frontal lobe. Those with non-fluent aphasia are able to understand language fairly well, but they struggle with language production and syntax.
Neurologist Oliver Sacks studied neurological oddities in people, trying to understand how the brain works. He concluded that people with some type of frontal lobe damage often "produced not only severe difficulties with expressive language (aphasia) but a strange access of musicality with incessant whistling, singing and a passionate interest in music. For him, this was an example of normally suppressed brain functions being released by damage to others". Sacks had a genuine interest in trying to help people affected with neurological disorders and other phenomena associated with music, and how it can provide access to otherwise unreachable emotional states, revivify neurological avenues that have been frozen, evoke memories of earlier, lost events or states of being and attempts to bring those with neurological disorders back to a time when the world was much richer for them. He was a firm believer that music has the power to heal.
Melodic intonation therapy (MIT), developed in 1973 by neurological researchers Sparks, Helm and Albert, is a method used by music therapists and speech–language pathologists to help people with communication disorders caused by damage to the left hemisphere of the brain by engaging the singing abilities and possibly engaging language-capable regions in the undamaged right hemisphere.
While unable to speak fluently, patients with non-fluent aphasia are often able to sing words, phrases, and even sentences they cannot express otherwise. MIT harnesses the singing ability of patients with non-fluent aphasia as a means to improve their communication. Although its exact nature depends on the therapist, in general MIT relies on the use of intonation (the rising and falling of the voice) and rhythm (beat/speed) to train patients to produce phrases verbally. In MIT, common words and phrases are turned into melodic phrases, generally starting with two step sing-song patterns and eventually emulating typical speech intonation and rhythmic patterns. A therapist will usually begin by introducing an intonation to their patient through humming. They will accompany this humming with a rhythm produced by the tapping of the left hand. At the same time, the therapist will introduce a visual stimuli of the written phrase to be learned. The therapist then sings the phrase with the patient, and ideally the patient is eventually able to sing the phrase on their own. With much repetition and through a process of "inner-rehearsal" (practicing internally hearing one's voice singing), a patient may eventually be able to produce the phrase verbally without singing. As the patient advances in therapy, the procedure can be adapted to give them more autonomy and to teach them more complex phrases. Through the use of MIT, a non-fluent aphasic patient can be taught numerous phrases which aid them to communicate and function during daily life.
The mechanisms of this success are yet to be fully understood. It is commonly agreed that while speech is lateralized mostly to the left hemisphere (for right-handed and most left-handed individuals), some speech functionality is also distributed in the right hemisphere. MIT is thought to stimulate these right language areas through the activation of music processing areas also in the right hemisphere. Similarly, the rhythmic tapping of the left hand stimulates the right sensorimotor cortex to further engage the right hemisphere in language production. Overall, by stimulating the right hemisphere during language tasks, therapists hope to decrease dependence on the left hemisphere for language production.
While results are somewhat contradictory, studies have in fact found increased right hemispheric activation in non-fluent aphasic patients after MIT. This change in activation has been interpreted as evidence of decreased dependence on the left hemisphere. There is debate, however, as to whether changes in right hemispheric activation are part of the therapeutic process during/after MIT, or are simply a side effect of non-fluent aphasia. In hopes of making MIT more effective, researchers are continually studying the mechanisms of MIT and non-fluent aphasia.
==== Cancer ====
There is tentative evidence that music interventions led by a trained music therapist may have positive effects on psychological and physical outcomes in adults with cancer. The effectiveness of music therapy for children with cancer is not known.
=== Mental health ===
There is weak evidence to suggest that people with schizophrenia may benefit from the addition of music therapy to their other standard treatment regimen. Potential improvements include decreased aggression, fewer hallucinations and delusions, and better social functioning and quality of life for people with schizophrenia or schizophrenia-like disorders. In addition, moderate-to-low-quality evidence suggests that music therapy as an addition to standard care improves the global state and mental state (including negative and general symptoms). Further research using standardized music therapy programs and consistent monitoring protocols is necessary to understand the effectiveness of this approach for adults with schizophrenia. Music therapy may be a useful tool for helping treat people with post-traumatic stress disorder; however, more rigorous empirical study is required.
For adults with depressive symptoms, there is some weak evidence to suggest that music therapy may help reduce symptoms, with recreative music therapy and guided imagery and music appearing superior to other methods in reducing depressive symptoms.
In music therapy for adults, "music medicine" refers to listening to prerecorded music that is administered much like a medicine. Music therapy also uses receptive methods, such as music-assisted relaxation and imagery connected to the music.
There is some discussion on the process of change facilitated by musical activities on mental wellness. Scholars proposed a six-dimensional framework, which contains emotional, psychological, social, cognitive, behavioral and spiritual aspects. Through conducting interview sessions with mental health service users (with mood disorders, anxiety disorders, schizophrenia and other psychotic disorders), their study showed the relevance of the six-dimensional framework.
==== Impact on general mental health ====
Music therapy has been used to help bring improvements to mental health among people of all age groups. It has been used as far back as the 1830s. One example of a mental hospital that used music therapy to aid in the healing process of its patients is the Hanwell Lunatic Asylum, which provided "music and movement sessions and musical performances" as well as "group and individual music therapy for patients with serious mental illness or emotional problems." Two main categories of music therapy have been used in this context: analytic music therapy and Nordoff-Robbins music therapy. Analytic music therapy involves both words and music, while Nordoff-Robbins music therapy places great emphasis on assessing how clients react to music therapy and how the use of this type of therapy can be constantly altered and shifted to allow it to benefit the client the most.
=== Bereavement ===
The DSM-IV TR (Diagnostic and Statistical Manual of Mental Disorders) lists bereavement as a mental health diagnosis when the focus of clinical attention is related to the loss of a loved one and when symptoms of Major Depressive Disorder are present for up to two months. Music therapy models have been found to be successful in treating grief and bereavement (Rosner, Kruse & Hagl, 2010). In many countries, including the United States, music therapists do not diagnose; therefore, diagnosing a bereavement-related disorder would not be within their scope of practice.
==== Grief treatment for adolescents ====
Grief treatment is very valuable within the adolescent age group. Just as adults and the elderly struggle with grief from loss, relationship issues, job-related stress, and financial issues, adolescents also experience grief from disappointments that occur early in life, however different these disappointing life events may be. For example, many people of adolescent age experience life-altering events such as parental divorce, trauma from emotional or physical abuse, struggles within school, and loss. If this grief is not acted upon early through some kind of therapy, it can alter the entire course of an adolescent's life. One particular study on the impact of music therapy on grief management in adolescents used songwriting to allow these adolescents to express what they were feeling through lyrics and instrumentals. In the article Development of the Grief Process Scale through music therapy songwriting with bereaved adolescents, the results of the study demonstrate that in all of the treatment groups combined, the mean GPS (grief process scale) score decreased by 43%. The use of music therapy songwriting allowed these adolescents to become less overwhelmed with grief and better able to process it, as demonstrated by the decrease in mean GPS score.
=== Empirical evidence ===
Since 2017, providing evidence-based practice has become increasingly important, and music therapy has been continuously critiqued and regulated to provide that desired evidence-based practice. A number of research studies and meta-analyses have been conducted on, or included, music therapy, and all have found that music therapy has at least some promising effects, especially when used for the treatment of grief and bereavement. The AMTA has largely supported the advancement of music therapy through research that promotes evidence-based practice, defining evidence-based health care as "the conscientious use of current best evidence in making decisions about the care of individual patients or the delivery of health services", where current best evidence is up-to-date information from relevant, valid research about the effects of different forms of health care, the potential for harm from exposure to particular agents, the accuracy of diagnostic tests, and the predictive power of prognostic factors.
Both qualitative and quantitative studies have been completed, and both have provided evidence to support music therapy in the use of bereavement treatment. One study that evaluated a number of treatment approaches found that only music therapy had significant positive outcomes, where the others showed little improvement in participants (Rosner, Kruse & Hagl, 2010). Furthermore, a pilot study, which consisted of an experimental and control group, examined the effects of music therapy on mood and behaviors in the home and school communities. It was found that there was a significant change in grief symptoms and behaviors with the experimental group in the home, but conversely that there was no significant change in the experimental group in the school community, despite the fact that mean scores on the Depression Self-Rating Index and the Behavior Rating Index decreased (Hilliard, 2001). Yet another study, completed by Russel Hilliard (2007), looked at the effects of Orff-based music therapy and social work groups on childhood grief symptoms and behaviors. Using a control group that consisted of wait-listed clients, and employing the Behavior Rating Index for Children and the Bereavement Group Questionnaire for Parents and Guardians as measurement tools, it was found that children in the music therapy group showed significant improvement in grief symptoms and some improvement in behaviors compared to the control group, whereas the social work group also showed significant improvement in both grief and behaviors compared to the control group. The study concludes with support for music therapy as a medium for bereavement groups for children (Hilliard, 2007).
Though research on music therapy has been done and its use has been evaluated, a number of limitations remain in these studies, and further research should be completed before definitive conclusions are drawn, though the results of using music therapy in treatment have consistently been positive.
Music therapy practice is working together with clients, through music, to promote healthy change (Bruscia, 1998). The American Music Therapy Association (AMTA) has defined the practice of music therapy as "a behavioral science concerned with changing unhealthy behaviors and replacing them with more adaptive ones through the use of musical stimuli".
=== Interventions ===
Though music therapy practice employs a large number of intervention techniques, some of the most commonly used interventions include improvisation, therapeutic singing, therapeutic instrumental music playing, music-facilitated reminiscence and life review, songwriting, music-facilitated relaxation, and lyric analysis. While there has been no conclusive research done on the comparison of interventions (Jones, 2005; Silverman, 2008; Silverman & Marcionetti, 2004), the use of particular interventions is individualized to each client based upon thorough assessment of needs, and the effectiveness of treatment may not rely on the type of intervention (Silverman, 2009).
Improvisation in music therapy allows for clients to make up, or alter, music as they see fit. While improvisation is an intervention in a methodical practice, it does allow for some freedom of expression, which is what it is often used for. Improvisation has several other clinical goals as well, which can also be found on the Improvisation in music therapy page, such as: facilitating verbal and nonverbal communication, self-exploration, creating intimacy, teamwork, developing creativity, and improving cognitive skills. Building on these goals, Botello and Krout designed a cognitive behavioral application to assess and improve communication in couples. Further research is needed before the use of improvisation is conclusively proven to be effective in this application, but there were positive signs in this study of its use.
Singing or playing an instrument is often used to help clients express their thoughts and feelings in a more structured manner than improvisation and can also allow participation with only limited knowledge of music. Singing in a group can facilitate a sense of community and can also be used as group ritual to structure a theme of the group or of treatment (Krout, 2005).
Research that compares types of music therapy intervention has been inconclusive. Music Therapists use lyric analysis in a variety of ways, but typically lyric analysis is used to facilitate dialogue with clients based on the lyrics, which can then lead to discussion that addresses the goals of therapy.
== Types of music therapy ==
Two fundamental types of music therapy are receptive music therapy and active music therapy (also known as expressive music therapy). Active music therapy engages clients or patients in the act of making music, whereas receptive music therapy guides patients or clients in listening or responding to live or recorded music. Either or both can lead to verbal discussions, depending on client needs and the therapist's orientation.
=== Receptive ===
Receptive music therapy involves listening to recorded or live music of genres such as classical, rock, jazz, and/or country. In receptive music therapy, patients are the recipients of the music experience, meaning that they are actively listening and responding to the music rather than creating it. During music sessions, patients participate in song discussion and music relaxation, and are given the ability to listen to their preferred music genre. Receptive music therapy can improve mood, decrease stress, decrease pain, enhance relaxation, and decrease anxiety; this can help with coping skills. There is also evidence of biochemical changes (e.g., lowered cortisol levels).
=== Active ===
In active music therapy, patients engage in some form of music-making (e.g., vocalizing, rapping, chanting, singing, playing instruments, improvising, song writing, composing, or conducting). Researchers at Baylor Scott & White Health are studying the effect of harmonica playing on patients with COPD to determine if it helps improve lung function. Another example of active music therapy takes place in a nursing home in Japan: therapists teach the elderly how to play easy-to-use instruments so they can overcome physical difficulties.
== Models and approaches ==
Music therapist Kenneth Bruscia stated: "A model is a comprehensive approach to assessment, treatment, and evaluation that includes theoretical principles, clinical indications and contraindications, goals, methodological guidelines and specifications, and the characteristic use of certain procedural sequences and techniques." In the literature, the terms model, orientation, or approach might be encountered and may have slightly different meanings. Regardless, music therapists use both psychology models and models specific to music therapy. The theories these models are based on include beliefs about human needs, causes of distress, and how humans grow or heal.
Models developed specifically for music therapy include analytical music therapy, Benenzon, the Bonny Method of Guided Imagery and Music (GIM), community music therapy, Nordoff-Robbins music therapy (creative music therapy), neurologic music therapy, and vocal psychotherapy.
Psychological orientations used in music therapy include psychodynamic, cognitive behavioral, humanistic, existential, and the biomedical model.
=== The Bonny Method of Guided Imagery and Music ===
To be trained in this method, students are required to be healthcare professionals. Some courses are only open to music therapists and mental health professionals.
Music educator and therapist Helen Lindquist Bonny (1921–2010) developed an approach influenced by humanistic and transpersonal psychological views, known as the Bonny Method of guided imagery in music (BGIM or GIM). Guided imagery refers to a technique used in natural and alternative medicine that involves using mental imagery to help with the physiological and psychological ailments of patients.
The practitioner often suggests a relaxing and focusing image, and through the use of imagination and discussion, they aim to find constructive solutions to manage their problems. Bonny applied this psychotherapeutic method to the field of music therapy by using music as the means of guiding the patient to a higher state of consciousness where healing and constructive self-awareness can take place. Music is considered a "co-therapist" because of its importance. GIM with children can be used in one-on-one or group settings, and involves relaxation techniques, identification and sharing of personal feeling states, and improvisation to discover the self, and foster growth. The choice of music is carefully selected for the client based on their musical preferences and the goals of the session. The piece is usually classical, and it must reflect the age and attention abilities of the child in length and genre. A full explanation of the exercises must be offered at their level of understanding.
=== Nordoff-Robbins ===
Paul Nordoff, a Juilliard School graduate and Professor of Music, was a pianist and composer who, upon seeing disabled children respond so positively to music, gave up his academic career to further investigate the possibility of music as a means for therapy. Clive Robbins, a special educator, partnered with Nordoff for more than 17 years in the exploration and research of music's effects on disabled children—first in the UK, and then in the United States in the 1950s and 60s. Their pilot projects included placements at care units for autistic children and child psychiatry departments, where they put programs in place for children with mental disorders, emotional disturbances, developmental delays, and other handicaps. Their success at establishing a means of communication and relationship with children with cognitive impairments at the University of Pennsylvania gave rise to the National Institutes of Health's first grant of this nature, and the 5-year study "Music therapy project for psychotic children under seven at the day care unit" involved research, publication, training and treatment. Several publications, including Therapy in Music for Handicapped Children, Creative Music Therapy, Music Therapy in Special Education, as well as instrumental and song books for children, were released during this time. Nordoff and Robbins's success became known globally in the mental health community, and they were invited to share their findings and offer training on an international tour that lasted several years. Funds were granted to support the founding of the Nordoff Robbins Music Therapy Centre in Great Britain in 1974, where a one-year graduate program for students was implemented. In the early eighties, a center was opened in Australia, and various programs and institutes for music therapy were founded in Germany and other countries. In the United States, the Nordoff-Robbins Center for Music Therapy was established at New York University in 1989.
Today, Nordoff-Robbins is a theoretical model and approach within music therapy. The Nordoff-Robbins approach, based on the belief that everyone is capable of finding meaning in and benefiting from musical experience, is now practiced by hundreds of therapists internationally. This approach focuses on treatment through the creation of music by therapist and client together. The therapist uses various techniques so that even the lowest-functioning individuals can actively participate.
=== Orff ===
Gertrude Orff developed Orff Music Therapy at the Kinderzentrum München. Both the clinical setting of social pediatrics and the Orff Schulwerk ("schoolwork") approach in music education, developed by German composer Carl Orff, influence this method, which is used with children with developmental problems, delays, and disabilities. Theodor Hellbrügge developed the field of social pediatrics in Germany after the Second World War. He understood that medicine alone could not meet the complex needs of developmentally disabled children, and he consulted psychologists, occupational therapists, and other mental healthcare professionals whose knowledge and skills could aid in the diagnosis and treatment of children. Gertrude Orff was asked to develop a form of therapy based on the Orff Schulwerk approach to support the emotional development of patients. Elements found in both the therapeutic and educational approaches include: the understanding of holistic music presentation as involving word, sound, and movement; the use of music and play improvisation as a creative stimulus for the child to investigate and explore; Orff instrumentation, including keyboard and percussion instruments, as a means of participation and interaction in a therapeutic setting; and the multisensory aspects of music used by the therapist to meet the particular needs of the child, such as both feeling and hearing sound.
In keeping with the attitudes of humanistic psychology, the developmental potential of the child, acknowledging their strengths as well as their handicaps, and the importance of the therapist-child relationship are central factors in Orff music therapy. The strong emphasis on social integration and the involvement of parents in the therapeutic process found in social pediatrics also influences its theoretical foundations. Knowledge of developmental psychology puts into perspective how developmental disabilities influence the child, as do their social and familial environments. The basis for interaction in this method is known as responsive interaction, in which the therapist meets the child at their level and responds according to their initiatives, combining both humanistic and developmental psychology philosophies. Involving the parents in this type of interaction, by having them participate directly or observe the therapist's techniques, equips the parents with ideas of how to interact appropriately with their child, thus fostering a positive parent-child relationship.
=== Liberation Music Therapy ===
Liberation Music Therapy (LMT) is an emancipatory approach to music-making that integrates healing, social justice, and revolutionary change. Rooted in the principles of liberation psychology and influenced by the global history of music's role in communal and spiritual practices, LMT challenges traditional, colonialist frameworks of mental health care. It emphasizes addressing systemic oppression and transgenerational trauma through culturally relevant music practices, particularly within marginalized communities. LMT practitioners view music not only as a therapeutic tool but as a form of activism and resistance, fostering solidarity, critical consciousness (concientización), and community empowerment.
This approach combines music's therapeutic qualities with its capacity for social and political transformation, drawing on a variety of influences, including folk traditions, hip-hop, drumming, and chanting, alongside modern and classical genres. Through methods such as lyric analysis, improvisation, and collective musicking, LMT bridges personal emotional experiences with broader societal struggles, engaging individuals and communities in processes of healing and liberation. Practitioners work collaboratively, meeting communities where they are and respecting their cultural genius, with the ultimate aim of fostering both individual well-being and collective resilience.
== Cultural aspects ==
Through the ages music has been an integral component of rituals, ceremonies, healing practices, and spiritual and cultural traditions. Further, Michael Bakan, author of World Music: Traditions and Transformations, states that "Music is a mode of cultural production and can reveal much about how the culture works," something ethnomusicologists study.
=== Cultural considerations in music therapy services, education, and research ===
The world of the 21st century is culturally pluralistic. In some countries, such as the United States, an individual may have multiple cultural identities that are quite different from the music therapist's. These include race; ethnicity, culture, and/or heritage; religion; sex; ability/disability; education; and socioeconomic status. Music therapists strive to achieve multicultural competence through a lifelong journey of formal and informal education and self-reflection. Multicultural therapy "uses modalities and defines goals consistent with the life experiences and cultural values of clients" rather than basing therapy on the therapist's worldview or the dominant culture's norms.
Empathy is an important quality in any mental health practitioner, and the same is true for music therapists, as is multicultural awareness. The added complexity that music brings to cultural empathy provides both greater risk and greater potential for exceptional culturally sensitive therapy (Valentino, 2006). Extensive knowledge of a culture is needed to provide such treatment effectively, since culturally sensitive music therapy goes beyond knowing the client's language, country, or even some background about the culture. Simply choosing music from the same country of origin or in the same spoken language is not sufficient, because music genres vary, as do the messages each piece of music sends. Different cultures also view and use music in different ways, which may not match how the therapist views and uses music. Melody Schwantes and her colleagues wrote an article describing the effective use of the Mexican corrido in a bereavement group of Mexican migrant farm workers (Schwantes et al., 2011). This support group was dealing with the loss of two coworkers after an accident, and so the corrido, a song form traditionally used for telling stories of the deceased, was chosen. The authors also noted that songwriting has been shown to be a significant cultural artifact in many cultures, and that songs carry many subtle messages and thoughts that would otherwise be hard to identify. Lastly, the authors stated that "Given the position and importance of songs in all cultures, the example in this therapeutic process demonstrates the powerful nature of lyrics and music to contain and express difficult and often unspoken feelings" (Schwantes et al., 2011).
== Usage by region ==
=== African continent ===
In 1999, the first program for music therapy in Africa opened in Pretoria, South Africa. Research has shown that in Tanzania patients can receive palliative care for life-threatening illnesses directly after the diagnosis of these illnesses. This is different from many Western countries, because they reserve palliative care for patients who have an incurable illness. Music is also viewed differently between Africa and Western countries. In Western countries and a majority of other countries throughout the world, music is traditionally seen as entertainment whereas in many African cultures, music is used in recounting stories, celebrating life events, or sending messages.
=== Australia ===
==== Music for healing in ancient times ====
One of the first groups known to heal with sound was the Aboriginal people of Australia. The modern name of their healing tool is the didgeridoo, but it was originally called the yidaki. The yidaki produced sounds similar to those of the sound healing techniques used in modern times. The sound of the didgeridoo is a low bass frequency. The healing tool was believed to have assisted in healing "broken bones, muscle tears and illnesses of every kind" for at least 40,000 years.
However, there are no reliable sources stating the didgeridoo's exact age. Archaeological studies of rock art in Northern Australia suggest that the people of the Kakadu region of the Northern Territory have been using the didgeridoo for less than 1,000 years, based on the dating of paintings on cave walls and shelters from this period. A clear rock painting in Ginga Wardelirrhmeng, on the northern edge of the Arnhem Land plateau, from the freshwater period (which began 1,500 years ago) shows a didgeridoo player and two songmen participating in an Ubarr ceremony.
==== In modern times – an allied health profession ====
In 1949, music therapy in Australia (not clinical music therapy as understood today) was begun through concerts organized by the Australian Red Cross, along with a Red Cross Music Therapy Committee. The key Australian body, the Australian Music Therapy Association (AMTA), was founded in 1975.
=== Canada ===
==== History: c. 1940 – present ====
For earlier history related to Western traditions, see § Western cultures sub-section.
In 1956, Fran Herman, one of Canada's music therapy pioneers, began a 'remedial music' program at the Home For Incurable Children, now known as the Holland Bloorview Kids Rehabilitation Hospital, in Toronto. Her group 'The Wheelchair Players' continued until 1964, and is considered to be the first music therapy group project in Canada. Its production "The Emperor's Nightingale" was the subject of a documentary film.
Composer/pianist Alfred Rosé, a professor at the University of Western Ontario, also pioneered the use of music therapy in London, Ontario, at Westminster Hospital in 1952 and at the London Psychiatric Hospital in 1956.
Two other music therapy programs were initiated during the 1950s; one by Norma Sharpe at St. Thomas Psychiatric Hospital in St. Thomas, Ontario, and the other by Thérèse Pageau at the Hôpital St-Jean-de-Dieu (now Hôpital Louis-Hippolyte Lafontaine) in Montreal.
A conference in August 1974, organized by Norma Sharpe and six other music therapists, led to the founding of the Canadian Music Therapy Association, which was later renamed the Canadian Association for Music Therapy (CAMT). As of 2009, the organization had more than 500 members.
Canada's first music therapy training program was founded in 1976, at Capilano College (now Capilano University) in North Vancouver, by Nancy McMaster and Carolyn Kenny.
=== China ===
The relationship between music therapy and health has long been documented in ancient China.
It is said that in ancient times the most skilled practitioners of traditional Chinese medicine treated illness not with acupuncture or herbal medicine but with music: by the end of a song, the patient was considered well enough to be discharged. As early as the Spring and Autumn and Warring States periods, the Yellow Emperor's Canon of Internal Medicine held that the five tones (gong or "palace", shang, jue or "horn", zhi, and yu or "feather") belonged to the five elements (metal, wood, water, fire and earth) and were associated with five basic emotions (joy, anger, worry, thought and fear). Music in the different modes was accordingly used to target different diseases.
More than 2,000 years ago the book Yue Ji also discussed the important role of music in regulating the harmony of life and improving health, and the Zuo Zhuan records a famous physician of the state of Qin discussing how music can prevent and treat disease: "Heaven has six qi ... they are manifested in the five colours, evidenced in the five sounds, and in excess they give rise to the six diseases." It is emphasized that listening should be controlled and appropriate in order to have a beneficial regulating effect on the human body. Another text records that "the soul and the body flow, the spirit also flows". Zhang Jingyue and Xu Lingtai, famous medical experts of the Ming and Qing dynasties, also specifically discussed phonology and medicine in the "Classics with Wings" (Leijing Fuyi) and the "Yuefu Chuansheng".
For example, according to Tang Dynasty records, Liu Xueyu, one of the emperors of the Tang Dynasty, cured some stubborn diseases through music.
Chinese contemporary music therapy began in the 1980s. In 1984, Professor Zhang Boyuan of the Department of Psychology at Peking University published an experimental report on research into the physical and mental health effects of music, the first published scientific research article on music therapy in China. In 1986, Professor Gao Tian of the Beijing Conservatory of Music published his paper "Research on the relieving effect of music on pain".
In 1989, the Chinese Society of Music Therapy was officially established. In 1994, Pu Kaiyuan published his monograph Music Therapy. In 1995, He Huajun and Lu Tingzhu published a monograph, Music Therapy. In 2000, Zhang Hongyi edited and published Fundamentals of Music Therapy. In 2002, Fan Xinsheng edited and published Music Therapy. In 2007, Gao Tian edited and published The Basic Theory of Music Therapy.
In short, Chinese music therapy has made rapid progress in theoretical research, literature review, and clinical research. In addition, music therapy methods guided by ancient Chinese music therapy theory and the long tradition of traditional Chinese medicine have attracted worldwide attention, and the prospects for music therapy in China are broad.
=== Germany ===
The German Music Therapy Society defines music therapy as the "targeted use of music as part of a therapeutic relationship to restore, maintain and promote mental, physical and cognitive health [Musiktherapie ist der gezielte Einsatz von Musik im Rahmen der therapeutischen Beziehung zur Wiederherstellung, Erhaltung und Förderung seelischer, körperlicher und geistiger Gesundheit]."
=== India ===
The roots of music therapy in India can be traced back to ancient Hindu mythology, Vedic texts, and local folk traditions. One practice dating back to the Vedic texts is Nada yoga, which has long been practiced in India and seeks to heal by attending to the body's inner vibrations (Bhanu, 2022). It is quite possible that music therapy has been used for hundreds of years in Indian culture. In the 1990s, another dimension to this, known as Musopathy, was postulated by Indian musician Chitravina Ravikiran based on fundamental criteria derived from acoustic physics.
The Indian Association of Music Therapy was established in 2010 by Dr. Dinesh C. Sharma with a motto "to use pleasant sounds in a specific manner like drug in due course of time as green medicine". He also published the International Journal of Music Therapy (ISSN 2249-8664) to popularize and promote music therapy research on an international platform.
Suvarna Nalapat has studied music therapy in the Indian context. Her books Nadalayasindhu-Ragachikitsamrutam (2008), Music Therapy in Management Education and Administration (2008) and Ragachikitsa (2008) are accepted textbooks on music therapy and Indian arts.
The Music Therapy Trust of India is another venture in the country. It was started in 2004 by Margaret Lobo, founder and director of the Otakar Kraus Music Trust.
=== Lebanon ===
In 2006, Hamda Farhat introduced music therapy to Lebanon, developing and inventing therapeutic methods such as the triple method to treat hyperactivity, depression, anxiety, addiction, and post-traumatic stress disorder. She has met with great success in working with many international organizations and in the training of therapists, educators, and doctors. The Lebanese Association of Music Therapy (LAMT, registration number 65) is the only such body in Lebanon; its president is Hamda Farhat, and its board members include Antoine Chartouni and Elia Francis Safi.
=== Norway ===
Norway is recognized as an important country for music therapy research. Its two major research centers are the Centre for Music and Health at the Norwegian Academy of Music in Oslo and the Grieg Academy Centre for Music Therapy (GAMUT) at the University of Bergen. The former was mostly developed by professor Even Ruud, while professor Brynjulf Stige is largely responsible for cultivating the latter. The center in Bergen has 18 staff, including 2 professors and 4 associate professors, as well as lecturers and PhD students. Two of the field's major international research journals are based in Bergen: the Nordic Journal of Music Therapy and Voices: A World Forum for Music Therapy. Norway's main contribution to the field is in "community music therapy", which tends to be oriented as much toward social work as toward individual psychotherapy, and music therapy research from this country uses a wide variety of methods to examine practice across an array of social contexts, including community centers, medical clinics, retirement homes, and prisons.
=== Nigeria ===
The origins of music therapy practices in Nigeria are unknown, but the country has a long lineage and history of music therapy being used throughout its cultures. The people most commonly associated with music therapy are herbalists, witch doctors, and faith healers, according to Professor Charles O. Aluede of Ambrose Alli University (Ekpoma, Edo State, Nigeria). Applying music and thematic sounds to the healing process is believed to help the patient overcome true sickness in their mind, which then will seemingly cure the disease. Another practice involving music is called "Igbeuku", a religious practice performed by faith healers. In the practice of Igbeuku, patients are persuaded to confess their sins, which cause them severe discomfort. Following a confession, patients feel emotionally relieved because the priest has pronounced them clean and subjected them to a rigorous dancing exercise. The dancing exercise is a "thank you" for the healing and a tribute to the spiritual greater beings. The dance is accompanied by music and can be included among the unorthodox medical practices of Nigerian culture. While most music therapy practices occur in the medical field, music therapy is also often used after the passing of a loved one. The use of song and dance in a funeral setting is very common across the continent, and especially in Nigeria. Songs allude to the idea that the final resting place is Hades (hell). The music helps alleviate the sorrows felt by the family members and friends of the lost loved one. In addition to its role in funeral events, music therapy is also employed with the dying as a last-resort tactic of healing. Among the Esan of Edo State in particular, herbalists perform practices with an oko, a small aerophone made of elephant tusk that is blown into dying patients' ears to resuscitate them. Nigeria is full of cultural practices that contribute much to the music therapy world.
=== South Africa ===
There are longstanding traditions of music healing in South Africa, which in some ways may be very different from music therapy.
Mercédès Pavlicevic (1955–2018), an international music therapist, along with Kobie Temmingh, pioneered the music therapy program at the University of Pretoria, which debuted with a master's degree program in 1999. She noted the differences in longstanding traditions and other ways of viewing healing or music. A Nigerian colleague felt "that music in Africa is healing, and what is music therapy other than some colonial import?" Pavlicevic noted that "in Africa there is a long tradition of music healing" and asked "Can there be a synthesis of these two music-based practices towards something new?... I am not altogether convinced that African music healing and music therapy are especially closely related [emphasis added]. But I am utterly convinced that music therapy can learn an enormous amount from the African worldview and from music-making in Africa – rather than from African music-healing as such."
The South African Music Therapy Association can provide information to the public about music therapy or educational programs in South Africa.
South Africa was selected to host the 16th World Congress of Music Therapy in July 2020, a triennial World Federation of Music Therapy event. Due to the coronavirus (SARS-CoV-2) pandemic, the congress was moved online.
=== United States ===
==== Credential ====
National board certification (current as of 2021): MT-BC (Music Therapist-Board Certified, also written as Board Certified Music Therapist)
State license or registration: varies by state, see below
The credentials listed below were previously conferred by the former national organizations AAMT and NAMT; these credentials have not been available since 1998.
CMT (Certified Music Therapist)
ACMT (Advanced Certified Music Therapist)
RMT (Registered Music Therapist). Other countries, such as Australia, also use RMT as a credential, though it differs from the former U.S. credential.
The states of Georgia, Illinois, Iowa, Maryland, North Dakota, Nevada, New Jersey, Oklahoma, Oregon, Rhode Island, and Virginia have established licenses for music therapists, while in Wisconsin music therapists must be registered, and in Utah they must hold state certification. In the State of New York, the Creative Arts Therapy license (LCAT) incorporates the music therapy credential within its licensure, a mental health license that requires a master's degree and post-graduate supervision. The states of California and Connecticut have title protection for music therapists, meaning only those with the MT-BC credential can use the title "Board Certified Music Therapist".
==== Professional association ====
The American Music Therapy Association (AMTA).
==== Education ====
Publication on music therapy education and training has been detailed in both single author (Goodman, 2011) and edited (Goodman, 2015, 2023) volumes. The register of the European Music Therapy Confederation lists all educational training programs throughout Europe.
A music therapy degree candidate can earn an undergraduate, master's or doctoral degree in music therapy. Many AMTA approved programs in the United States offer equivalency and certificate degrees in music therapy for students that have completed a degree in a related field. Some practicing music therapists have held PhDs either in music therapy or in fields related to music therapy. A music therapist typically incorporates music therapy techniques with broader clinical practices such as psychotherapy, rehabilitation, and other practices depending on client needs. Music therapy services rendered within the context of a social service, educational, or health care agency are often reimbursable by insurance or other sources of funding for individuals with certain needs.
A degree in music therapy requires proficiency in guitar, piano, voice, music theory, music history, reading music, improvisation, as well as varying levels of skill in assessment, documentation, and other counseling and health care skills depending on the focus of the particular university's program. 1200 hours of clinical experience are required, some of which are gained during an approximately six-month internship that takes place after all other degree requirements are met.
After successful completion of educational requirements, including internship, music therapists are eligible to sit for, and must pass, the Board Certification Examination in Music Therapy.
==== Board Certification Examination in Music Therapy ====
The current national credential is MT-BC (Music Therapist-Board Certified). It is not required in all states. To be eligible to apply to take the Board Certification Examination in Music Therapy, an individual must successfully complete a music therapy degree from a program accredited by AMTA at a college or university (or have a bachelor's degree and complete all of the music therapy course requirements from an accredited program), which includes successfully completing a music therapy internship. To maintain the credential, 100 units of continuing education must be completed every five years. The board exam is created by and administered through The Certification Board for Music Therapists.
==== History: c. 1900–present ====
For earlier history related to Western traditions, see § Western cultures sub-section.
From a western viewpoint, music therapy in the 20th and 21st centuries (as of 2021), as an evidence-based, allied healthcare profession, grew out of the aftermath of World Wars I and II, when, particularly in the United Kingdom and United States, musicians would travel to hospitals and play music for soldiers suffering from war-related emotional and physical trauma. Using music to treat the mental and physical ailments of active duty military and veterans was not new. Its use was recorded during the U.S. Civil War and Florence Nightingale used it a decade earlier in the Crimean War. Despite research data, observations by doctors and nurses, praise from patients, and willing musicians, it was difficult to vastly increase music therapy services or establish lasting music therapy education programs or organizations in the early 20th century. However, many of the music therapy leaders of this time period provided music therapy during WWI or to its veterans. These were pioneers in the field such as Eva Vescelius, musician, author, 1903 founder of the short-lived National Therapeutic Society of New York and the 1913 Music and Health journal, and creator/teacher of a musicotherapy course; Margaret Anderton, pianist, WWI music therapy provider for Canadian soldiers, a strong believer in training for music therapists, and 1919 Columbia University musicotherapy teacher; Isa Maud Ilsen, a nurse and musician who was the American Red Cross Director of Hospital Music in WWI reconstruction hospitals, 1919 Columbia University musicotherapy teacher, 1926 founder of the National Association for Music in Hospitals, and author; and Harriet Ayer Seymour, music therapist to WWI veterans, author, researcher, lecturer/teacher, founder of the National Foundation for Music Therapy in 1941, and author of the first music therapy textbook published in the US. Several physicians also promoted music as a therapeutic agent during this time period.
In the 1940s, changes in philosophy regarding care of psychiatric patients as well as the influx of WWII veterans in Veterans Administration hospitals renewed interest in music programs for patients. Many musicians volunteered to provide entertainment and were primarily assigned to perform on psychiatric wards. Positive changes in patients' mental and physical health were noted by nurses. The volunteer musicians, many of whom had degrees in music education, became aware of the powerful effects music could have on patients and realized that specialized training was necessary. The first music therapy bachelor's degree program was established in 1944, with three others and one master's degree program quickly following: "Michigan State College [now a University] (1944), the University of Kansas [master's degree only] (1946), the College of the Pacific (1947), The Chicago Musical College (1948) and Alverno College (1948)." The National Association for Music Therapy (NAMT), a professional association, was formed in 1950. In 1956, the first music therapy credential in the US, Registered Music Therapist (RMT), was instituted by the NAMT.
The American Music Therapy Association (AMTA) was founded in 1998 as a merger between the National Association for Music Therapy (NAMT, founded in 1950) and the American Association for Music Therapy (AAMT, founded in 1971).
=== United Kingdom ===
Live music was used in hospitals after both World Wars as part of the treatment program for recovering soldiers. Clinical music therapy in Britain as it is understood today was pioneered in the 1960s and 1970s by French cellist Juliette Alvin, whose influence on the current generation of British music therapy lecturers remains strong. Mary Priestley, one of Juliette Alvin's students, created "analytical music therapy". The Nordoff-Robbins approach to music therapy developed from the work of Paul Nordoff and Clive Robbins in the 1950s and 1960s.
Practitioners are registered with the Health Professions Council and, starting from 2007, new registrants must normally hold a master's degree in music therapy. There are master's level programs in music therapy in Manchester, Bristol, Cambridge, South Wales, Edinburgh and London, and there are therapists throughout the UK. The professional body in the UK is the British Association for Music Therapy. In 2002, the World Congress of Music Therapy, coordinated and promoted by the World Federation of Music Therapy, was held in Oxford on the theme of Dialogue and Debate. In November 2006, Dr. Michael J. Crawford and his colleagues found that music therapy improved the outcomes of schizophrenic patients.
== Military: active duty, veterans, family members ==
=== History ===
Music therapy finds its roots in the military. The United States Department of War issued Technical Bulletin 187 in 1945, which described the use of music in the recovery of military service members in Army hospitals. The use of music therapy in military settings started to flourish and develop following World War II and research and endorsements from both the United States Army and the Surgeon General of the United States. Although these endorsements helped music therapy develop, there was still a recognized need to assess the true viability and value of music as a medically based therapy. Walter Reed Army Medical Center and the Office of the Surgeon General worked together to lead one of the earliest assessments of a music therapy program. The goal of the study was to understand whether "music presented according to a specific plan" influenced recovery among service members with mental and emotional disorders. Eventually, case reports in reference to this study relayed not only the importance but also the impact of music therapy services in the recovery of military service personnel.
The first university-sponsored music therapy course was taught by Margaret Anderton in 1919 at Columbia University. Anderton's clinical specialty was working with wounded Canadian soldiers during World War I, using music-based services to aid in their recovery process.
Today, Operation Enduring Freedom and Operation Iraqi Freedom have both produced an array of injuries, of which the two signature injuries are post-traumatic stress disorder (PTSD) and traumatic brain injury (TBI). These signature injuries are increasingly common among millennial military service members, and music therapy programs increasingly address them.
A person diagnosed with PTSD can associate a memory or experience with a song they have heard. This can result in either good or bad experiences. If it is a bad experience, the song's rhythm or lyrics can bring out the person's anxiety or fear response. If it is a good experience, the song can bring feelings of happiness or peace which could bring back positive emotions. Either way, music can be used as a tool to bring emotions forward and help the person cope with them.
=== Methods ===
Music therapists work with active duty military personnel, veterans, service members in transition, and their families. Music therapists strive to engage clients in music experiences that foster trust and complete participation over the course of their treatment process. Music therapists use an array of music-centered tools, techniques, and activities when working with military-associated clients, many of which are similar to the techniques used in other music therapy settings. These methods include, but are not limited to: group drumming, listening, singing, and songwriting. Songwriting is a particularly effective tool with military veterans struggling with PTSD and TBI as it creates a safe space to, "... work through traumatic experiences, and transform traumatic memories into healthier associations".
=== Programs ===
Music therapy in the military is seen in programs on military bases, VA healthcare facilities, military treatment facilities, and military communities. Music therapy programs have a large outreach because they exist for all phases of military life: pre-mobilization, deployment, post-deployment, recovery (in the case of injury), and among families of fallen military service personnel.
The Exceptional Family Member Program (EFMP) also exists to provide music therapy services to active duty military families who have a family member with a developmental, physical, emotional, or intellectual disorder. Currently, programs at the Davis–Monthan Air Force Base, Resounding Joy, Inc., and the Music Institute of Chicago partner with EFMP services to provide music therapy services to eligible military family members.
Music therapy programs primarily target active duty service members and their treatment facilities, providing reconditioning for members convalescing in Army hospitals. However, such programs benefit not only the Army but a wide range of clients, including the U.S. Air Force, U.S. Navy, and U.S. Marine Corps. Individuals exposed to trauma benefit from essential rehabilitative tools that support the course of recovery from stress disorders. Music therapists are certified professionals able to determine appropriate interventions to support someone recovering from a physically, emotionally, or mentally traumatic experience. They also play an integral part throughout the treatment process of service members diagnosed with post-traumatic stress or brain injuries. In many cases, self-expression through songwriting or playing instruments helps restore emotions that can be lost following trauma. Music has a significant effect on troops traveling overseas or between bases, because many soldiers view music as an escape from war, a connection to their homeland and families, or motivation. By working with a certified music therapist, Marines undergo sessions that re-institute concepts of cognition, memory, attention, and emotional processing. Although programs primarily focus on phases of military life, other service members, such as members of the U.S. Air Force, are eligible for treatment as well. For instance, during one music therapy session, a man began to play a song for a wounded airman, who said, "[music] allows me to talk about something that happened without talking about it". Music allowed the active duty airman to open up about previous experiences while reducing his anxiety level.
== History ==
Music has been used to soothe grief since the time of David and King Saul: in 1 Samuel, David plays the lyre to make King Saul feel relieved and better. Music has since been used all over the world to treat various issues, though the first recorded use of the term "music therapy" was in 1789, in an article titled "Music Physically Considered" by an unknown author in Columbian Magazine. The creation and expansion of music therapy as a treatment modality thrived in the early to mid-1900s; a number of organizations were created, but none survived for long. It was not until the founding of the National Association for Music Therapy in New York in 1950 that clinical training and certification requirements were created. In 1971, the American Association for Music Therapy was created, though at that time it was called the Urban Federation of Music Therapists. The Certification Board for Music Therapists was created in 1983, which strengthened the practice of music therapy and the trust placed in it. In 1998, the American Music Therapy Association was formed from the merger of the National and American associations, and as of 2017 it is the single largest music therapy organization in the world (American Music Therapy Association, 1998–2025).
Archaeologists have found ancient flutes, carved from ivory and bone, that have been determined to be as much as 43,000 years old. One account states that "The earliest fragment of musical notation is found on a 4,000-year-old Sumerian clay tablet, which includes instructions and tuning for a hymn honoring the ruler Lipit-Ishtar. But for the title of oldest extant song, most historians point to 'Hurrian Hymn No. 6,' an ode to the goddess Nikkal that was composed in cuneiform by the ancient Hurrians sometime around the 14th century B.C."
=== Western cultures ===
==== Music and healing ====
Music has been used as a healing implement for centuries. Apollo is the ancient Greek god of music and of medicine and his son Aesculapius was said to cure diseases of the mind by using song and music. By 5000 BC, music was used for healing by Egyptian priest-physicians. Plato said that music affected the emotions and could influence the character of an individual. Aristotle taught that music affects the soul and described music as a force that purified the emotions. Aulus Cornelius Celsus advocated the sound of cymbals and running water for the treatment of mental disorders. Music as therapy was practiced in the Bible when David played the harp to rid King Saul of a bad spirit (1 Sam 16:23). As early as 400 B.C., Hippocrates played music for mental patients. In the 13th century, Arab hospitals contained music-rooms for the benefit of the patients. In the United States, Native American medicine men often employed chants and dances as a method of healing patients. The Turco-Persian psychologist and music theorist al-Farabi (872–950), known as Alpharabius in Europe, dealt with music for healing in his treatise Meanings of the Intellect, in which he discussed the therapeutic effects of music on the soul. In his De vita libri tres published in 1489, Platonist Marsilio Ficino gives a lengthy account of how music and songs can be used to draw celestial benefits for staying healthy. Robert Burton wrote in the 17th century in his classic work, The Anatomy of Melancholy, that music and dance were critical in treating mental illness, especially melancholia.
The rise of an understanding of the body and mind in terms of the nervous system led to the emergence of a new wave of music for healing in the 18th century. Earlier works on the subject, such as Athanasius Kircher's Musurgia Universalis of 1650 and even early 18th-century books such as Michael Ernst Ettmüller's 1714 Disputatio effectus musicae in hominem (Disputation on the Effect of Music on Man) or Friedrich Erhardt Niedten's 1717 Veritophili, still tended to discuss the medical effects of music in terms of bringing the soul and body into harmony. But from the mid-18th century, works on the subject such as Richard Brocklesby's 1749 Reflections upon Antient and Modern Musick, the 1737 Memoires of the French Academy of Sciences, or Ernst Anton Nicolai's 1745 Die Verbindung der Musik mit der Arzneygelahrheit (The Connection of Music to Medicine), stressed the power of music over the nerves.
==== Music therapy: 19th century ====
After 1800, some books on music and medicine drew on the Brunonian system of medicine, arguing that the stimulation of the nerves caused by music could directly improve or harm health. Throughout the 19th century, an impressive number of books and articles were authored by physicians in Europe and the United States discussing use of music as a therapeutic agent to treat both mental and physical illness.
==== Music therapy: 1900 – c. 1940 ====
From a western viewpoint, music therapy in the 20th and 21st centuries (as of 2021), as an evidence-based, allied healthcare profession, grew out of the aftermath of World Wars I and II, when, particularly in the United Kingdom and United States, musicians traveled to hospitals and played music for soldiers with war-related emotional and physical trauma. Although using music to treat military patients was not new (its use was recorded during the US Civil War, and Florence Nightingale used it a decade earlier in the Crimean War), lasting music therapy services, education programs, and organizations proved difficult to establish in the early 20th century. The pioneers of this period, among them Eva Vescelius, Margaret Anderton, Isa Maud Ilsen, and Harriet Ayer Seymour, provided music therapy during World War I or to its veterans; their contributions, along with those of the physicians who promoted music as a therapeutic agent at the time, are detailed in the history portion of the § United States subsection above.
In the United States, the first music therapy bachelor's degree program was established in 1944 at Michigan State College (now Michigan State University).
For history from the early 20th century to the present, see continents or individual countries in § Usage by region section.
== See also ==
== References ==
== Bibliography ==
American Psychiatric Association (2000). Diagnostic and statistical manual of mental disorders (4th edn, revised). Washington, D.C.: Author.
Gibson, David (2018). The Complete Guide to Sound Healing (2nd edn), Sound of Light.
Goodman, K. D. (2011), Music therapy education and training: From theory to practice. Charles C. Thomas.
Goodman, K. D. (ed.) (2015), International perspectives in music therapy education and training. Charles C Thomas.
Goodman, K. D. (ed.) (2023), Developing issues in world music therapy education and training: A plurality of views. Charles C Thomas.
Hilliard, R. E. (2001). "The effects of music therapy-based bereavement groups on mood and behavior of grieving children: A pilot study". Journal of Music Therapy, 38(4), 291–306.
Hilliard, R. E. (2007). "The effects of orff-based music therapy and social work groups on childhood grief symptoms and behaviors". Journal of Music Therapy, 44(2), 123–38.
Jones, J. D. (2005). "A comparison of songwriting and lyric analysis techniques to evoke emotional change in a single session with people who are chemically dependent". Journal of Music Therapy, 42, 94–110.
Krout, R. E. (2005). "Applications of music therapist-composed songs in creating participant connections and facilitating goals and rituals during one-time bereavement support groups and programs". Music Therapy Perspectives, 23(2), 118–128.
Lindenfelser, K. J., Grocke, D., & McFerran, K. (2008). "Bereaved parents' experiences of music therapy with their terminally ill child". Journal of Music Therapy, 45(3), 330–48.
Rosner, R., Kruse, J., & Hagl, M. (2010). "A meta‐analysis of interventions for bereaved children and adolescents". Death Studies, 34(2), 99–136.
Schwantes, M., Wigram, T., McKinney, C., Lipscomb, A., & Richards, C. (2011). "The Mexican corrido and its use in a music therapy bereavement group". The Australian Journal of Music Therapy, 22, 2–20.
Silverman, M. J. (2008). "Quantitative comparison of cognitive behavioral therapy and music therapy research: A methodological best-practices analysis to guide future investigation for adult psychiatric patients". Journal of Music Therapy, 45(4), 457–506.
Silverman, M. J. (2009). "The use of lyric analysis interventions in contemporary psychiatric music therapy: Descriptive results of songs and objectives for clinical practice". Music Therapy Perspectives, 27(1), 55–61.
Silverman, M. J., & Marcionetti, M. J. (2004). "Immediate effects of a single music therapy intervention on persons who are severely mentally ill". Arts in Psychotherapy, 31, 291–301.
Valentino, R. E. (2006). "Attitudes towards cross-cultural empathy in music therapy". Music Therapy Perspectives, 24(2), 108–114.
Whitehead-Pleaux, A. M., Baryza, M. J., & Sheridan, R. L. (2007). "Exploring the effects of music therapy on pediatric pain: phase 1". The Journal of Music Therapy, 44(3), 217–41.
== Further reading ==
== External links ==
Learning materials related to sound therapy at Wikiversity
In earthquake engineering, vibration control is a set of technical means aimed at mitigating seismic impacts in building and non-building structures.
All seismic vibration control devices may be classified as passive, active or hybrid where:
passive control devices have no feedback capability between them, structural elements and the ground;
active control devices incorporate real-time recording instrumentation on the ground integrated with earthquake input processing equipment and actuators within the structure;
hybrid control devices have combined features of active and passive control systems.
When ground seismic waves reach the base of a building and begin to penetrate it, their energy flow density drops dramatically due to reflections: usually by up to 90%. However, the portions of the incident waves that do enter the structure during a major earthquake still carry a huge devastating potential.
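The size of this reduction can be illustrated with a simple one-dimensional impedance-contrast model, which is not taken from this article: for a plane wave at normal incidence, the transmitted energy fraction is 4*Z1*Z2/(Z1+Z2)^2, where Z = density x wave speed. The material values below are hypothetical, and real soil-structure interaction is far more complex.

```python
# Sketch: energy transmission across an impedance contrast (1-D plane-wave model).
def energy_transmission(z1: float, z2: float) -> float:
    """Fraction of normally incident wave energy transmitted from medium 1 to 2."""
    return 4.0 * z1 * z2 / (z1 + z2) ** 2

# Illustrative numbers only: soft soil (rho ~ 1800 kg/m^3, c ~ 400 m/s)
# into a concrete foundation (rho ~ 2400 kg/m^3, c ~ 3500 m/s).
z_soil = 1800 * 400        # ~7.2e5 kg/(m^2 s)
z_concrete = 2400 * 3500   # ~8.4e6 kg/(m^2 s)

t = energy_transmission(z_soil, z_concrete)
print(f"transmitted fraction: {t:.2f}, reflected: {1 - t:.2f}")
# -> roughly 0.29 transmitted, 0.71 reflected for these made-up values;
#    actual reductions depend strongly on site and foundation conditions.
```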
After the seismic waves enter a superstructure, there are a number of ways to control them in order to mitigate their damaging effect and improve the building's seismic performance, for instance:
to dissipate the wave energy inside a superstructure with properly engineered dampers;
to disperse the wave energy between a wider range of frequencies;
to absorb the resonant portions of the whole wave frequencies band with the help of so-called mass dampers.
Devices of the last kind, abbreviated correspondingly as TMD for the tuned (passive), as AMD for the active, and as HMD for the hybrid mass dampers, have been studied and installed in high-rise buildings, predominantly in Japan, for a quarter of a century.
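For the passive (tuned) case, a common starting point is Den Hartog's classic tuning rules, which pick the damper's frequency and damping ratio from the chosen mass ratio. The sketch below applies those textbook formulas with hypothetical building parameters; it illustrates the idea only and is not a record of how any installed system was designed.

```python
import math

# Sketch: Den Hartog tuning rules for a passive tuned mass damper (TMD)
# on an undamped structure under harmonic forcing. All values hypothetical.
def tmd_tuning(structure_mass: float, structure_freq_hz: float, mass_ratio: float):
    """Return (damper mass, damper frequency in Hz, damper damping ratio)."""
    mu = mass_ratio                                # m_damper / m_structure
    freq_ratio = 1.0 / (1.0 + mu)                  # optimal f_damper / f_structure
    damping = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return structure_mass * mu, structure_freq_hz * freq_ratio, damping

m_d, f_d, zeta_d = tmd_tuning(structure_mass=2.0e6,   # kg, hypothetical modal mass
                              structure_freq_hz=0.2,  # Hz, ~5 s fundamental period
                              mass_ratio=0.02)        # 2% auxiliary mass
print(f"damper: {m_d:.0f} kg tuned to {f_d:.3f} Hz, damping ratio {zeta_d:.3f}")
# -> a 40,000 kg damper tuned slightly below the structure's frequency.
```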
However, there is quite another approach: partial suppression of the seismic energy flow into the superstructure, known as seismic or base isolation, which has been implemented in a number of historical buildings all over the world and has remained a focus of earthquake engineering research for years.
For this, pads are inserted into all major load-carrying elements in the base of the building, which should substantially decouple the superstructure from its substructure resting on the shaking ground. This also requires creating a rigidity diaphragm and a moat around the building, as well as making provisions against overturning and the P-delta effect.
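The effect of such pads can be illustrated with a single-degree-of-freedom idealization: softening the interface lengthens the structure's natural period, shifting it away from the frequency band where typical ground motions carry the most energy. The numbers in this sketch are hypothetical.

```python
import math

# Sketch: base isolation lengthens a building's natural period.
# Single-degree-of-freedom idealization with made-up values.
def natural_period(mass_kg: float, stiffness_n_per_m: float) -> float:
    """Natural period T = 2*pi*sqrt(m/k) of a mass-spring system."""
    return 2.0 * math.pi * math.sqrt(mass_kg / stiffness_n_per_m)

mass = 5.0e6            # kg, hypothetical superstructure
k_fixed = 2.0e9         # N/m, fixed-base lateral stiffness
k_isolated = 8.0e7      # N/m, much softer isolation-pad layer

print(f"fixed-base period: {natural_period(mass, k_fixed):.2f} s")   # ~0.31 s
print(f"isolated period:   {natural_period(mass, k_isolated):.2f} s") # ~1.57 s
# Softer pads -> longer period -> typically lower spectral acceleration demand.
```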
In refineries and plants, snubbers are often used for vibration control. Snubbers come in two variations: hydraulic snubbers and mechanical snubbers.
Hydraulic snubbers are used on piping systems where restrained thermal movement must be allowed.
Mechanical snubbers operate on the standard of restricting the acceleration of any pipe movement to a threshold of 0.2 g, the maximum acceleration that the snubber will permit the piping to experience.
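As a worked example of that threshold, the check below converts the 0.2 g limit to SI units and compares it against a hypothetical measured pipe acceleration; the function name and field value are invented for illustration.

```python
# Sketch: checking a measured pipe acceleration against the 0.2 g limit above.
G = 9.80665  # m/s^2 per g (standard gravity)

def exceeds_snubber_limit(accel_m_s2: float, limit_g: float = 0.2) -> bool:
    """True if the pipe acceleration exceeds the snubber's permitted level."""
    return accel_m_s2 > limit_g * G

measured = 2.5  # m/s^2, hypothetical reading from a pipe-mounted accelerometer
print(exceeds_snubber_limit(measured))  # 2.5 m/s^2 is about 0.25 g -> True
```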
== Vibration control of mechanical, electrical, plumbing, and HVAC equipment ==
Standards and guidelines for the testing, installation, and performance of mechanical equipment have been created in order to provide attachment methods for equipment located in noise-sensitive areas. One manual that provides such specifications is the 412 Manual: Installing Seismic Restraints for Mechanical Equipment, published by the Vibration Isolation and Seismic Control Manufacturers Association (VISCMA).
== See also ==
Active vibration control
Anti-vibration compound
Cushioning
Earthquake-resistant structures
Metallic roller bearing
Tuned mass damper
Vibration isolation
== References ==
The neuroscience of music is the scientific study of brain-based mechanisms involved in the cognitive processes underlying music. These behaviours include music listening, performing, composing, reading, writing, and ancillary activities. It also is increasingly concerned with the brain basis for musical aesthetics and musical emotion. Scientists working in this field may have training in cognitive neuroscience, neurology, neuroanatomy, psychology, music theory, computer science, and other relevant fields.
The cognitive neuroscience of music represents a significant branch of music psychology, and is distinguished from related fields such as cognitive musicology in its reliance on direct observations of the brain and use of brain imaging techniques like functional magnetic resonance imaging (fMRI) and positron emission tomography (PET).
== Elements of music ==
=== Pitch ===
Sounds consist of waves of air molecules that vibrate at different frequencies. These waves travel to the basilar membrane in the cochlea of the inner ear. Different frequencies of sound will cause vibrations in different locations of the basilar membrane. We are able to hear different pitches because each sound wave with a unique frequency is correlated to a different location along the basilar membrane. This spatial arrangement of sounds and their respective frequencies being processed in the basilar membrane is known as tonotopy.
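This place-frequency mapping is commonly modeled with the Greenwood function; the sketch below uses the standard human constants reported in the literature (not values taken from this article) to show how characteristic frequency varies along the basilar membrane.

```python
# Sketch: the Greenwood place-frequency map, a standard model of cochlear
# tonotopy. Human constants commonly cited: A = 165.4 Hz, a = 2.1, K = 0.88;
# x is the fractional distance from apex (0.0) to base (1.0).
def greenwood_frequency(x: float, A: float = 165.4, a: float = 2.1,
                        K: float = 0.88) -> float:
    """Characteristic frequency (Hz) at relative basilar-membrane position x."""
    return A * (10.0 ** (a * x) - K)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x = {x:.2f} -> {greenwood_frequency(x):8.0f} Hz")
# Low frequencies map to the apex (~20 Hz), high frequencies to the base
# (~20 kHz), spanning roughly the range of human hearing.
```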
When the hair cells on the basilar membrane move back and forth due to the vibrating sound waves, they release neurotransmitters and cause action potentials to occur down the auditory nerve. The auditory nerve then leads to several layers of synapses at numerous clusters of neurons, or nuclei, in the auditory brainstem. These nuclei are also tonotopically organized, and the process of achieving this tonotopy after the cochlea is not yet well understood. This tonotopy is in general maintained up to primary auditory cortex in mammals.
A widely postulated mechanism for pitch processing in the early central auditory system is the phase-locking and mode-locking of action potentials to frequencies in a stimulus. Phase-locking to stimulus frequencies has been shown in the auditory nerve, the cochlear nucleus, the inferior colliculus, and the auditory thalamus. By phase- and mode-locking in this way, the auditory brainstem is known to preserve a good deal of the temporal and low-passed frequency information from the original sound; this is evident by measuring the auditory brainstem response using EEG. This temporal preservation is one way to argue directly for the temporal theory of pitch perception, and to argue indirectly against the place theory of pitch perception.
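Phase-locking of this kind is conventionally quantified with the vector strength measure of Goldberg and Brown, which is 1 for perfect locking and near 0 for spikes unrelated to the stimulus phase. The sketch below computes it on synthetic spike trains; the spike times, frequency, and jitter are invented for illustration.

```python
import numpy as np

# Sketch: vector strength, a standard measure of spike phase-locking.
def vector_strength(spike_times_s: np.ndarray, stim_freq_hz: float) -> float:
    phases = 2.0 * np.pi * stim_freq_hz * spike_times_s   # phase of each spike
    return float(np.abs(np.mean(np.exp(1j * phases))))    # length of mean vector

rng = np.random.default_rng(0)
f = 250.0                                   # Hz, hypothetical stimulus frequency
n, jitter = 200, 0.0004                     # spikes; 0.4 ms timing jitter
locked = np.arange(n) / f + rng.normal(0, jitter, n)  # one spike near each cycle
random = rng.uniform(0, n / f, n)                     # unrelated spike times

print(f"locked: {vector_strength(locked, f):.2f}")   # high, around 0.8 here
print(f"random: {vector_strength(random, f):.2f}")   # near 0
```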
The right secondary auditory cortex has finer pitch resolution than the left. Hyde, Peretz and Zatorre (2008) used functional magnetic resonance imaging (fMRI) in their study to test the involvement of right and left auditory cortical regions in the frequency processing of melodic sequences. As well as finding superior pitch resolution in the right secondary auditory cortex, specific areas found to be involved were the planum temporale (PT) in the secondary auditory cortex, and the primary auditory cortex in the medial section of Heschl's gyrus (HG).
Many neuroimaging studies have found evidence of the importance of right secondary auditory regions in aspects of musical pitch processing, such as melody. Many of these studies such as one by Patterson, Uppenkamp, Johnsrude and Griffiths (2002) also find evidence of a hierarchy of pitch processing. Patterson et al. (2002) used spectrally matched sounds which produced: no pitch, fixed pitch or melody in an fMRI study and found that all conditions activated HG and PT. Sounds with pitch activated more of these regions than sounds without. When a melody was produced activation spread to the superior temporal gyrus (STG) and planum polare (PP). These results support the existence of a pitch processing hierarchy.
==== Absolute pitch ====
Absolute pitch (AP) is defined as the ability to identify the pitch of a musical tone or to produce a musical tone at a given pitch without the use of an external reference pitch. Neuroscientific research has not discovered a distinct activation pattern common for possessors of AP. Zatorre, Perry, Beckett, Westbury and Evans (1998) examined the neural foundations of AP using functional and structural brain imaging techniques. Positron emission tomography (PET) was utilized to measure cerebral blood flow (CBF) in musicians possessing AP and musicians lacking AP. When presented with musical tones, similar patterns of increased CBF in auditory cortical areas emerged in both groups. AP possessors and non-AP subjects demonstrated similar patterns of left dorsolateral frontal activity when they performed relative pitch judgments. However, in non-AP subjects activation in the right inferior frontal cortex was present whereas AP possessors showed no such activity. This finding suggests that musicians with AP do not need access to working memory devices for such tasks. These findings imply that there is no specific regional activation pattern unique to AP. Rather, the availability of specific processing mechanisms and task demands determine the recruited neural areas.
=== Melody ===
Studies suggest that individuals are capable of automatically detecting a difference or anomaly in a melody such as an out of tune pitch which does not fit with their previous music experience. This automatic processing occurs in the secondary auditory cortex. Brattico, Tervaniemi, Naatanen, and Peretz (2006) performed one such study to determine if the detection of tones that do not fit an individual's expectations can occur automatically. They recorded event-related potentials (ERPs) in nonmusicians as they were presented unfamiliar melodies with either an out of tune pitch or an out of key pitch while participants were either distracted from the sounds or attending to the melody. Both conditions revealed an early frontal error-related negativity independent of where attention was directed. This negativity originated in the auditory cortex, more precisely in the supratemporal lobe (which corresponds with the secondary auditory cortex) with greater activity from the right hemisphere. The negativity response was larger for pitch that was out of tune than that which was out of key. Ratings of musical incongruity were higher for out of tune pitch melodies than for out of key pitch. In the focused attention condition, out of key and out of tune pitches produced late parietal positivity. The findings of Brattico et al. (2006) suggest that there is automatic and rapid processing of melodic properties in the secondary auditory cortex. The findings that pitch incongruities were detected automatically, even in processing unfamiliar melodies, suggests that there is an automatic comparison of incoming information with long term knowledge of musical scale properties, such as culturally influenced rules of musical properties (common chord progressions, scale patterns, etc.) and individual expectations of how the melody should proceed.
=== Rhythm ===
The belt and parabelt areas of the right hemisphere are involved in processing rhythm. Rhythm is a strong repeated pattern of movement or sound. When individuals are preparing to tap out a rhythm of regular intervals (1:2 or 1:3) the left frontal cortex, left parietal cortex, and right cerebellum are all activated. With more difficult rhythms such as a 1:2.5, more areas in the cerebral cortex and cerebellum are involved. EEG recordings have also shown a relationship between brain electrical activity and rhythm perception. Snyder and Large (2005) performed a study examining rhythm perception in human subjects, finding that activity in the gamma band (20 – 60 Hz) corresponds to the beats in a simple rhythm. Two types of gamma activity were found by Snyder & Large: induced gamma activity, and evoked gamma activity. Evoked gamma activity was found after the onset of each tone in the rhythm; this activity was found to be phase-locked (peaks and troughs were directly related to the exact onset of the tone) and did not appear when a gap (missed beat) was present in the rhythm. Induced gamma activity, which was not found to be phase-locked, was also found to correspond with each beat. However, induced gamma activity did not subside when a gap was present in the rhythm, indicating that induced gamma activity may possibly serve as a sort of internal metronome independent of auditory input.
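As a rough illustration of what "activity in the gamma band (20–60 Hz)" means operationally, the sketch below band-pass filters a synthetic EEG-like signal into that range. Actual studies such as Snyder and Large (2005) analyze recorded EEG with time-frequency methods rather than a single filter; the sampling rate and signal here are hypothetical.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                    # Hz, hypothetical sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
eeg = (np.sin(2 * np.pi * 40 * t)             # 40 Hz, a gamma-range component
       + np.sin(2 * np.pi * 10 * t)           # 10 Hz, an alpha-range component
       + 0.5 * np.random.randn(t.size))       # broadband noise

# 4th-order Butterworth band-pass for the 20-60 Hz gamma band,
# applied forward and backward (zero phase distortion).
b, a = butter(4, [20.0, 60.0], btype="bandpass", fs=fs)
gamma = filtfilt(b, a, eeg)
print(gamma.shape, float(np.std(gamma)))      # the isolated gamma-band trace
```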
=== Tonality ===
Tonality describes the relationships between the elements of melody and harmony – tones, intervals, chords, and scales. These relationships are often characterized as hierarchical, such that one of the elements dominates or attracts another. They occur both within and between every type of element, creating a rich and time-varying perception between tones and their melodic, harmonic, and chromatic contexts. In one conventional sense, tonality refers to just the major and minor scale types – examples of scales whose elements are capable of maintaining a consistent set of functional relationships. The most important functional relationship is that of the tonic note (the first note in a scale) and the tonic chord (the first note in the scale with the third and fifth note) with the rest of the scale. The tonic is the element which tends to assert its dominance and attraction over all others, and it functions as the ultimate point of attraction, rest and resolution for the scale.
The right auditory cortex is primarily involved in perceiving pitch, and parts of harmony, melody and rhythm. One study by Petr Janata found that there are tonality-sensitive areas in the medial prefrontal cortex, the cerebellum, the superior temporal sulci of both hemispheres and the superior temporal gyri (which has a skew towards the right hemisphere). Hemispheric asymmetries in the processing of dissonant/consonant sounds have been demonstrated. ERP studies have shown larger evoked responses over the left temporal area in response to dissonant chords, and over the right one, in response to consonant chords.
== Music production and performance ==
=== Motor control functions ===
Musical performance usually involves at least three elementary motor control functions: timing, sequencing, and spatial organization of motor movements. Accuracy in timing of movements is related to musical rhythm. Rhythm, the pattern of temporal intervals within a musical measure or phrase, in turn creates the perception of stronger and weaker beats. Sequencing and spatial organization relate to the expression of individual notes on a musical instrument.
These functions and their neural mechanisms have been investigated separately in many studies, but little is known about their combined interaction in producing a complex musical performance. The study of music requires examining them together.
==== Timing ====
Although neural mechanisms involved in timing movement have been studied rigorously over the past 20 years, much remains controversial. The ability to perform movements in precise time has been attributed to a neural metronome or clock mechanism in which time is represented through oscillations or pulses. An opposing view holds that timing is an emergent property of the kinematics of movement itself. Kinematics is defined as the parameters of movement through space without reference to forces (for example, direction, velocity and acceleration).
Functional neuroimaging studies, as well as studies of brain-damaged patients, have linked movement timing to several cortical and sub-cortical regions, including the cerebellum, basal ganglia and supplementary motor area (SMA). Specifically the basal ganglia and possibly the SMA have been implicated in interval timing at longer timescales (1 second and above), while the cerebellum may be more important for controlling motor timing at shorter timescales (milliseconds). Furthermore, these results indicate that motor timing is not controlled by a single brain region, but by a network of regions that control specific parameters of movement and that depend on the relevant timescale of the rhythmic sequence.
==== Sequencing ====
Motor sequencing has been explored in terms of either the ordering of individual movements, such as finger sequences for key presses, or the coordination of subcomponents of complex multi-joint movements. Implicated in this process are various cortical and sub-cortical regions, including the basal ganglia, the SMA and the pre-SMA, the cerebellum, and the premotor and prefrontal cortices, all involved in the production and learning of motor sequences but without explicit evidence of their specific contributions or interactions amongst one another. In animals, neurophysiological studies have demonstrated an interaction between the frontal cortex and the basal ganglia during the learning of movement sequences. Human neuroimaging studies have also emphasized the contribution of the basal ganglia for well-learned sequences.
The cerebellum is arguably important for sequence learning and for the integration of individual movements into unified sequences, while the pre-SMA and SMA have been shown to be involved in organizing or chunking of more complex movement sequences.
Chunking, defined as the re-organization or re-grouping of movement sequences into smaller sub-sequences during performance, is thought to facilitate the smooth performance of complex movements and to improve motor memory.
Lastly, the premotor cortex has been shown to be involved in tasks that require the production of relatively complex sequences, and it may contribute to motor prediction.
==== Spatial organization ====
Few studies of complex motor control have distinguished between sequential and spatial organization, yet expert musical performances demand not only precise sequencing but also spatial organization of movements. Studies in animals and humans have established the involvement of parietal, sensory–motor and premotor cortices in the control of movements, when the integration of spatial, sensory and motor information is required. Few studies so far have explicitly examined the role of spatial processing in the context of musical tasks.
=== Auditory-motor interactions ===
==== Feedforward and feedback interactions ====
An auditory–motor interaction may be loosely defined as any engagement of or communication between the two systems. Two classes of auditory–motor interaction are "feedforward" and "feedback". In feedforward interactions, it is the auditory system that predominantly influences the motor output, often in a predictive way. An example is the phenomenon of tapping to the beat, where the listener anticipates the rhythmic accents in a piece of music. Another example is the effect of music on movement disorders: rhythmic auditory stimuli have been shown to improve walking ability in Parkinson's disease and stroke patients.
Feedback interactions are particularly relevant in playing an instrument such as a violin, or in singing, where pitch is variable and must be continuously controlled. If auditory feedback is blocked, musicians can still execute well-rehearsed pieces, but expressive aspects of performance are affected. When auditory feedback is experimentally manipulated by delays or distortions, motor performance is significantly altered: asynchronous feedback disrupts the timing of events, whereas alteration of pitch information disrupts the selection of appropriate actions, but not their timing. This suggests that disruptions occur because both actions and percepts depend on a single underlying mental representation.
==== Models of auditory–motor interactions ====
Several models of auditory–motor interactions have been advanced. The model of Hickok and Poeppel, which is specific for speech processing, proposes that a ventral auditory stream maps sounds onto meaning, whereas a dorsal stream maps sounds onto articulatory representations. They and others suggest that posterior auditory regions at the parieto-temporal boundary are crucial parts of the auditory–motor interface, mapping auditory representations onto motor representations of speech, and onto melodies.
==== Mirror/echo neurons and auditory–motor interactions ====
The mirror neuron system has an important role in neural models of sensory–motor integration. There is considerable evidence that neurons respond both to the performance of actions and to the observation of actions. A system proposed to explain this understanding of actions is that visual representations of actions are mapped onto our own motor system.
Some mirror neurons are activated both by the observation of goal-directed actions, and by the associated sounds produced during the action. This suggests that the auditory modality can access the motor system. While these auditory–motor interactions have mainly been studied for speech processes, and have focused on Broca's area and the vPMC, as of 2011, experiments have begun to shed light on how these interactions are needed for musical performance. Results point to a broader involvement of the dPMC and other motor areas. The literature has shown a highly specialized cortical network in the skilled musician's brain that codes the relationship between musical gestures and their corresponding sounds. The data hint at the existence of an audiomotor mirror network involving the right superior temporal gyrus, the premotor cortex, the inferior frontal and inferior parietal areas, among other areas.
== Music and language ==
Certain aspects of language and melody have been shown to be processed in near identical functional brain areas. Brown, Martinez and Parsons (2006) examined the neurological structural similarities between music and language. Utilizing positron emission tomography (PET), they found that both linguistic and melodic phrases produced activation in almost identical functional brain areas. These areas included the primary motor cortex, supplementary motor area, Broca's area, anterior insula, primary and secondary auditory cortices, temporal pole, basal ganglia, ventral thalamus and posterior cerebellum. Differences were found in lateralization tendencies, as language tasks favoured the left hemisphere, but the majority of activations were bilateral, which produced significant overlap across modalities.
Syntactical information mechanisms in both music and language have been shown to be processed similarly in the brain. Jentschke, Koelsch, Sallat and Friederici (2008) conducted a study investigating the processing of music in children with specific language impairments (SLI). Children with typical language development (TLD) showed ERP patterns different from those of children with SLI, which reflected their challenges in processing music-syntactic regularities. Strong correlations between the ERAN (Early Right Anterior Negativity—a specific ERP measure) amplitude and linguistic and musical abilities provide additional evidence for the relationship of syntactical processing in music and language.
However, production of melody and production of speech may be subserved by different neural networks. Stewart, Walsh, Frith and Rothwell (2001) studied the differences between speech production and song production using transcranial magnetic stimulation (TMS). Stewart et al. found that TMS applied to the left frontal lobe disturbs speech production but not melody production, supporting the idea that they are subserved by different areas of the brain. The authors suggest that one reason for the difference is that speech generation can be localized well, but the underlying mechanisms of melodic production cannot. Alternatively, it was also suggested that speech production may be less robust than melodic production and thus more susceptible to interference.
Language processing is a function more of the left side of the brain than the right side, particularly Broca's area and Wernicke's area, though the roles played by the two sides of the brain in processing different aspects of language are still unclear. Music is also processed by both the left and the right sides of the brain. Recent evidence further suggests shared processing between language and music at the conceptual level. It has also been found that, among music conservatory students, the prevalence of absolute pitch is much higher for speakers of tone language, even controlling for ethnic background, showing that language influences how musical tones are perceived.
== Musician vs. non-musician processing ==
=== Differences ===
Brain structure differs distinctly between musicians and non-musicians. Gaser and Schlaug (2003) compared the brain structures of professional musicians with those of non-musicians and discovered gray matter volume differences in motor, auditory and visual-spatial brain regions. Specifically, positive correlations were discovered between musician status (professional, amateur and non-musician) and gray matter volume in the primary motor and somatosensory areas, premotor areas, anterior superior parietal areas and in the inferior temporal gyrus bilaterally. This strong association between musician status and gray matter differences supports the notion that musicians' brains show use-dependent structural changes. Given the distinct differences in several brain regions, these differences are unlikely to be innate and are more plausibly due to the long-term acquisition and repetitive rehearsal of musical skills.
Brains of musicians also show functional differences from those of non-musicians. Krings, Topper, Foltys, Erberich, Sparing, Willmes and Thron (2000) utilized fMRI to study brain area involvement of professional pianists and a control group while performing complex finger movements. Krings et al. found that the professional piano players showed lower levels of cortical activation in motor areas of the brain. It was concluded that fewer neurons needed to be activated in the piano players because long-term motor practice results in different cortical activation patterns. Koeneke, Lutz, Wustenberg and Jancke (2004) reported similar findings in keyboard players. Skilled keyboard players and a control group performed complex tasks involving unimanual and bimanual finger movements. During task conditions, strong hemodynamic responses in the cerebellum were shown by both non-musicians and keyboard players, with non-musicians showing the stronger response. This finding indicates that different cortical activation patterns emerge from long-term motor practice. This evidence supports previous data showing that musicians require fewer neurons to perform the same movements.
Musicians have been shown to have a significantly more developed left planum temporale, and have also been shown to have a greater word memory. Chan's study controlled for age, grade point average and years of education and found that when given a 16-word memory test, the musicians averaged one to two more words than their non-musical counterparts.
=== Similarities ===
Studies have shown that the human brain has an implicit musical ability. Koelsch, Gunter, Friederici and Schoger (2000) investigated the influence of preceding musical context, task relevance of unexpected chords and the degree of probability of violation on music processing in both musicians and non-musicians. Findings showed that the human brain unintentionally extrapolates expectations about impending auditory input. Even in non-musicians, the extrapolated expectations are consistent with music theory. The ability to process information musically supports the idea of an implicit musical ability in the human brain. In a follow-up study, Koelsch, Schroger, and Gunter (2002) investigated whether ERAN and N5 could be evoked preattentively in non-musicians. Findings showed that both ERAN and N5 can be elicited even in a situation where the musical stimulus is ignored by the listener indicating that there is a highly differentiated preattentive musicality in the human brain.
== Gender differences ==
Minor neurological differences regarding hemispheric processing exist between brains of males and females. Koelsch, Maess, Grossmann and Friederici (2003) investigated music processing through EEG and ERPs and discovered gender differences. Findings showed that females process music information bilaterally and males process music with a right-hemispheric predominance. However, the early negativity of males was also present over the left hemisphere. This indicates that males do not exclusively utilize the right hemisphere for musical information processing. In a follow-up study, Koelsch, Grossman, Gunter, Hahne, Schroger and Friederici (2003) found that boys show lateralization of the early anterior negativity in the left hemisphere but found a bilateral effect in girls. This indicates a developmental effect, as early negativity is lateralized in the right hemisphere in men and in the left hemisphere in boys.
== Handedness differences ==
It has been found that subjects who are left-handed, particularly those who are also ambidextrous, perform better than right-handers on short-term memory for pitch.
It was hypothesized that this handedness advantage is due to the fact that left-handers have more duplication of storage in the two hemispheres than do right-handers. Other work has shown that there are pronounced differences between right-handers and left-handers (on a statistical basis) in how musical patterns are perceived when sounds come from different regions of space. This has been found, for example, in the octave illusion and the scale illusion.
== Musical imagery ==
Musical imagery refers to the experience of replaying music by imagining it inside the head. Musicians show a superior ability for musical imagery due to intense musical training. Herholz, Lappe, Knief and Pantev (2008) investigated the differences in neural processing of a musical imagery task in musicians and non-musicians. Utilizing magnetoencephalography (MEG), Herholz et al. examined differences in the processing of a musical imagery task with familiar melodies in the two groups. Specifically, the study examined whether the mismatch negativity (MMN) can be based solely on imagery of sounds. The task involved participants listening to the beginning of a melody, continuing the melody in their head, and finally hearing a correct or incorrect tone as further continuation of the melody. The imagery of these melodies was strong enough to obtain an early preattentive brain response to unanticipated violations of the imagined melodies in the musicians. These results indicate that similar neural correlates are relied upon for trained musicians' imagery and perception. Additionally, the findings suggest that modification of the imagery mismatch negativity (iMMN) through intense musical training results in a superior ability for imagery and preattentive processing of music.
Perceptual musical processes and musical imagery may share a neural substrate in the brain. A PET study conducted by Zatorre, Halpern, Perry, Meyer and Evans (1996) investigated cerebral blood flow (CBF) changes related to auditory imagery and perceptual tasks. These tasks examined the involvement of particular anatomical regions as well as functional commonalities between perceptual processes and imagery. Similar patterns of CBF changes provided evidence supporting the notion that imagery processes share a substantial neural substrate with related perceptual processes. Bilateral neural activity in the secondary auditory cortex was associated with both perceiving and imagining songs. This implies that processes within the secondary auditory cortex underlie the phenomenological impression of imagined sounds. The supplementary motor area (SMA) was active in both imagery and perceptual tasks, suggesting covert vocalization as an element of musical imagery. CBF increases in the inferior frontal polar cortex and right thalamus suggest that these regions may be related to retrieval and/or generation of auditory information from memory.
== Emotion ==
Music is able to create an intensely pleasurable experience that can be described as "chills". Blood and Zatorre (2001) used PET to measure changes in cerebral blood flow while participants listened to music that they knew to give them the "chills" or any sort of intensely pleasant emotional response. They found that as these chills increase, many changes in cerebral blood flow are seen in brain regions such as the amygdala, orbitofrontal cortex, ventral striatum, midbrain, and the ventral medial prefrontal cortex. Many of these areas appear to be linked to reward, motivation, emotion, and arousal, and are also activated in other pleasurable situations. The resulting pleasure responses enable the release of dopamine, serotonin, and oxytocin. The nucleus accumbens (a part of the striatum) is involved in both music-related emotions and rhythmic timing.
According to the National Institutes of Health, children and adults who are suffering from emotional trauma have been able to benefit from the use of music in a variety of ways. Music used therapeutically has helped children who struggle with focus, anxiety, and cognitive function. Music therapy has also helped children cope with autism, pediatric cancer, and pain from treatments.
Emotions induced by music activate similar frontal brain regions compared to emotions elicited by other stimuli. Schmidt and Trainor (2001) discovered that valence (i.e. positive vs. negative) of musical segments was distinguished by patterns of frontal EEG activity. Joyful and happy musical segments were associated with increases in left frontal EEG activity whereas fearful and sad musical segments were associated with increases in right frontal EEG activity. Additionally, the intensity of emotions was differentiated by the pattern of overall frontal EEG activity. Overall frontal region activity increased as affective musical stimuli became more intense.
When unpleasant melodies are played, the posterior cingulate cortex activates, which indicates a sense of conflict or emotional pain. The right hemisphere has also been found to be correlated with emotion, which can also activate areas in the cingulate in times of emotional pain, specifically social rejection (Eisenberger). This evidence, along with observations, has led many musical theorists, philosophers and neuroscientists to link emotion with tonality. This seems almost obvious because the tones in music seem like a characterization of the tones in human speech, which indicate emotional content. The vowels in the phonemes of a song are elongated for a dramatic effect, and it seems as though musical tones are simply exaggerations of the normal verbal tonality.
== Memory ==
=== Neuropsychology of musical memory ===
Musical memory involves both explicit and implicit memory systems. Explicit musical memory is further differentiated between episodic (where, when and what of the musical experience) and semantic (memory for music knowledge including facts and emotional concepts). Implicit memory centers on the 'how' of music and involves automatic processes such as procedural memory and motor skill learning – in other words skills critical for playing an instrument. Samson and Baird (2009) found that the ability of musicians with Alzheimer's Disease to play an instrument (implicit procedural memory) may be preserved.
=== Neural correlates of musical memory ===
A PET study looking into the neural correlates of musical semantic and episodic memory found distinct activation patterns. Semantic musical memory involves the sense of familiarity of songs. The semantic memory for music condition resulted in bilateral activation in the medial and orbital frontal cortex, as well as activation in the left angular gyrus and the left anterior region of the middle temporal gyri. These patterns support the functional asymmetry favouring the left hemisphere for semantic memory. Left anterior temporal and inferior frontal regions that were activated in the musical semantic memory task produced activation peaks specifically during the presentation of musical material, suggesting that these regions are somewhat functionally specialized for musical semantic representations.
Episodic memory of musical information involves the ability to recall the former context associated with a musical excerpt. In the condition invoking episodic memory for music, activations were found bilaterally in the middle and superior frontal gyri and precuneus, with activation predominant in the right hemisphere. Other studies have found the precuneus to become activated in successful episodic recall. As it was activated in the familiar memory condition of episodic memory, this activation may be explained by the successful recall of the melody.
Regarding memory for pitch, a dynamic and distributed brain network appears to subserve pitch memory processes. Gaab, Gaser, Zaehle, Jancke and Schlaug (2003) examined the functional anatomy of pitch memory using functional magnetic resonance imaging (fMRI). An analysis of performance scores in a pitch memory task revealed a significant correlation between good task performance and activation in the supramarginal gyrus (SMG) as well as the dorsolateral cerebellum. The findings indicate that the dorsolateral cerebellum may act as a pitch discrimination processor and the SMG may act as a short-term pitch information storage site. Left-hemisphere regions were found to be more prominent in the pitch memory task than right-hemisphere regions.
=== Therapeutic effects of music on memory ===
Musical training has been shown to aid memory. Altenmuller et al. studied the difference between active and passive musical instruction and found that over a longer (but not shorter) period of time, the actively taught students retained much more information than the passively taught students. The actively taught students were also found to have greater cerebral cortex activation. The passive instruction was not without benefit, however; the passively taught students, along with the active group, displayed greater left-hemisphere activity, which is typical in trained musicians.
Research suggests we listen to the same songs repeatedly because of musical nostalgia. One major study, published in the journal Memory & Cognition, found that music enables the mind to evoke memories of the past, known as music-evoked autobiographical memories.
== Attention ==
Treder et al. identified neural correlates of attention when listening to simplified polyphonic music patterns. In a musical oddball experiment, they had participants shift selective attention to one out of three different instruments in music audio clips, with each instrument occasionally playing one or several notes deviating from an otherwise repetitive pattern. Contrasting attended versus unattended instruments, ERP analysis shows subject- and instrument-specific responses including P300 and early auditory components. The attended instrument could be classified offline with high accuracy. This indicates that attention paid to a particular instrument in polyphonic music can be inferred from ongoing EEG, a finding that is potentially relevant for building more ergonomic music-listening-based brain-computer interfaces.
== Development ==
Musically trained four-year-olds have been found to have greater left-hemisphere intrahemispheric coherence. Musicians have been found to have more developed anterior portions of the corpus callosum in a study by Cowell et al. in 1992. This was confirmed by a study by Schlaug et al. in 1995 that found that classical musicians between the ages of 21 and 36 have significantly greater anterior corpora callosa than non-musical controls. Schlaug also found a strong correlation between musical exposure before the age of seven and a large increase in the size of the corpus callosum. These fibers join together the left and right hemispheres and indicate increased relaying between both sides of the brain. This suggests a merging between the spatial-emotiono-tonal processing of the right brain and the linguistic processing of the left brain. This extensive relaying across many different areas of the brain might contribute to music's ability to aid in memory function.
== Impairment ==
=== Focal hand dystonia ===
Focal hand dystonia is a task-related movement disorder associated with occupational activities that require repetitive hand movements. Focal hand dystonia is associated with abnormal processing in the premotor and primary sensorimotor cortices. An fMRI study examined five guitarists with focal hand dystonia. The study reproduced task-specific hand dystonia by having guitarists use a real guitar neck inside the scanner as well as performing a guitar exercise to trigger abnormal hand movement. The dystonic guitarists showed significantly more activation of the contralateral primary sensorimotor cortex as well as a bilateral underactivation of premotor areas. This activation pattern represents abnormal recruitment of the cortical areas involved in motor control. Even in professional musicians, widespread bilateral cortical region involvement is necessary to produce complex hand movements such as scales and arpeggios. The abnormal shift from premotor to primary sensorimotor activation directly correlates with guitar-induced hand dystonia.
=== Music agnosia ===
Music agnosia, an auditory agnosia, is a syndrome of selective impairment in music recognition. Three cases of music agnosia are examined by Dalla Bella and Peretz (1999): C.N., G.L., and I.R. All three of these patients suffered bilateral damage to the auditory cortex which resulted in musical difficulties while speech understanding remained intact. Their impairment is specific to the recognition of once-familiar melodies. They are spared in recognizing environmental sounds and in recognizing lyrics. Peretz (1996) studied C.N.'s music agnosia further and reported an initial impairment of pitch processing with spared temporal processing. C.N. later recovered pitch processing abilities but remained impaired in tune recognition and familiarity judgments.
Musical agnosias may be categorized based on the process which is impaired in the individual. Apperceptive music agnosia involves an impairment at the level of perceptual analysis involving an inability to encode musical information correctly. Associative music agnosia reflects an impaired representational system which disrupts music recognition. Many of the cases of music agnosia have resulted from surgery involving the middle cerebral artery. Patient studies have amassed a large amount of evidence demonstrating that the left side of the brain is more suitable for holding long-term memory representations of music and that the right side is important for controlling access to these representations. Associative music agnosias tend to be produced by damage to the left hemisphere, while apperceptive music agnosia reflects damage to the right hemisphere.
=== Congenital amusia ===
Congenital amusia, otherwise known as tone deafness, is a term for lifelong musical problems which are not attributable to intellectual disability, lack of exposure to music or deafness, or brain damage after birth. Amusic brains have been found in fMRI studies to have less white matter and thicker cortex than controls in the right inferior frontal cortex. These differences suggest abnormal neuronal development in the auditory cortex and inferior frontal gyrus, two areas which are important in musical-pitch processing.
Studies on those with amusia suggest different processes are involved in speech tonality and musical tonality. Congenital amusics lack the ability to distinguish between pitches and so, for example, are unmoved by dissonance or by a wrong note played on a piano. They also cannot be taught to remember a melody or to recite a song; however, they are still capable of hearing the intonation of speech, for example, distinguishing between "You speak French" and "You speak French?" when spoken.
=== Amygdala damage ===
Damage to the amygdala has selective emotional impairments on musical recognition. Gosselin, Peretz, Johnsen and Adolphs (2007) studied S.M., a patient with bilateral damage of the amygdala with the rest of the temporal lobe undamaged and found that S.M. was impaired in recognition of scary and sad music. S.M.'s perception of happy music was normal, as was her ability to use cues such as tempo to distinguish between happy and sad music. It appears that damage specific to the amygdala can selectively impair recognition of scary music.
=== Selective deficit in music reading ===
Specific musical impairments may result from brain damage leaving other musical abilities intact. Cappelletti, Waley-Cohen, Butterworth and Kopelman (2000) studied a single case study of patient P.K.C., a professional musician who sustained damage to the left posterior temporal lobe as well as a small right occipitotemporal lesion. After sustaining damage to these regions, P.K.C. was selectively impaired in the areas of reading, writing and understanding musical notation but maintained other musical skills. The ability to read aloud letters, words, numbers and symbols (including musical ones) was retained. However, P.K.C. was unable to read aloud musical notes on the staff regardless of whether the task involved naming with the conventional letter or by singing or playing. Yet despite this specific deficit, P.K.C. retained the ability to remember and play familiar and new melodies.
=== Auditory arrhythmia ===
Arrhythmia in the auditory modality is defined as a disturbance of rhythmic sense, and includes deficits such as the inability to rhythmically perform music, the inability to keep time to music and the inability to discriminate between or reproduce rhythmic patterns. A study investigating the elements of rhythmic function examined patient H.J., who acquired arrhythmia after sustaining a right temporoparietal infarct. Damage to this region impaired H.J.'s central timing system, which was essentially the basis of his global rhythmic impairment. H.J. was unable to generate steady pulses in a tapping task. These findings suggest that keeping a musical beat relies on functioning in the right temporal auditory cortex.
== References ==
== External links ==
MusicCognition.info - A Resource and Information Center
A node is a point along a standing wave where the wave has minimum amplitude. For instance, in a vibrating guitar string, the ends of the string are nodes. By changing the position of the end node through frets, the guitarist changes the effective length of the vibrating string and thereby the note played. The opposite of a node is an antinode, a point where the amplitude of the standing wave is at maximum. These occur midway between the nodes.
== Explanation ==
Standing waves result when two sinusoidal wave trains of the same frequency are moving in opposite directions in the same space and interfere with each other. They occur when waves are reflected at a boundary, such as sound waves reflected from a wall or electromagnetic waves reflected from the end of a transmission line, and particularly when waves are confined in a resonator at resonance, bouncing back and forth between two boundaries, such as in an organ pipe or guitar string.
In a standing wave the nodes are a series of locations at equally spaced intervals where the wave amplitude (motion) is zero (see animation above). At these points the two waves add with opposite phase and cancel each other out. They occur at intervals of half a wavelength (λ/2). Midway between each pair of nodes are locations where the amplitude is maximum. These are called the antinodes. At these points the two waves add with the same phase and reinforce each other.
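The half-wavelength spacing follows directly from superposing two counter-propagating sinusoids of equal amplitude:

{\displaystyle y(x,t)=A\sin(kx-\omega t)+A\sin(kx+\omega t)=2A\sin(kx)\cos(\omega t)}

The amplitude envelope is {\displaystyle 2A\sin(kx)}: nodes occur where {\displaystyle \sin(kx)=0}, i.e. at {\displaystyle x=n\lambda /2} (using {\displaystyle k=2\pi /\lambda }), and antinodes where {\displaystyle \sin(kx)=\pm 1}, i.e. at {\displaystyle x=(2n+1)\lambda /4}.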
In cases where the two opposite wave trains are not the same amplitude, they do not cancel perfectly, so the amplitude of the standing wave at the nodes is not zero but merely a minimum. This occurs when the reflection at the boundary is imperfect. This is indicated by a finite standing wave ratio (SWR), the ratio of the amplitude of the wave at the antinode to the amplitude at the node.
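For a transmission line, this can be made quantitative through the reflection coefficient {\displaystyle \Gamma } at the boundary. With incident amplitude {\displaystyle A_{i}} and reflected amplitude {\displaystyle A_{r}=|\Gamma |A_{i}}, the two extremes of the standing-wave envelope give

{\displaystyle \mathrm {SWR} ={\frac {A_{i}+A_{r}}{A_{i}-A_{r}}}={\frac {1+|\Gamma |}{1-|\Gamma |}}}

so perfect reflection ({\displaystyle |\Gamma |=1}) yields true zero-amplitude nodes and an infinite SWR, while weaker reflection leaves a finite minimum at each node.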
In resonance of a two dimensional surface or membrane, such as a drumhead or vibrating metal plate, the nodes become nodal lines, lines on the surface where the surface is motionless, dividing the surface into separate regions vibrating with opposite phase. These can be made visible by sprinkling sand on the surface, and the intricate patterns of lines resulting are called Chladni figures.
In transmission lines a voltage node is a current antinode, and a voltage antinode is a current node.
Nodes are the points of zero displacement, not the points where two constituent waves intersect.
== Boundary conditions ==
Where the nodes occur in relation to the boundary reflecting the waves depends on the end conditions or boundary conditions. Although there are many types of end conditions, the ends of resonators are usually one of two types that cause total reflection:
Fixed boundary: Examples of this type of boundary are the attachment point of a guitar string, the closed end of a pipe such as an organ pipe or a woodwind pipe, the periphery of a drumhead, a transmission line with the end short-circuited, or the mirrors at the ends of a laser cavity. In this type, the amplitude of the wave is forced to zero at the boundary, so there is a node at the boundary, and the other nodes occur at multiples of half a wavelength from it: {\displaystyle x=0,\ \lambda /2,\ \lambda ,\ 3\lambda /2,\ \ldots }
Free boundary: Examples of this type are an open-ended organ or woodwind pipe, the ends of the vibrating resonator bars in a xylophone, glockenspiel or tuning fork, the ends of an antenna, or a transmission line with an open end. In this type, the derivative (slope) of the wave's amplitude (in sound waves the pressure, in electromagnetic waves the current) is forced to zero at the boundary. So there is an amplitude maximum (antinode) at the boundary, the first node occurs a quarter wavelength from the end, and the other nodes are at half-wavelength intervals from there: {\displaystyle x=\lambda /4,\ 3\lambda /4,\ 5\lambda /4,\ \ldots }
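The two cases differ only in where the first node sits relative to the boundary. A minimal sketch in Python (the function name and sampling are illustrative, not from any standard library):

```python
import numpy as np

def node_positions(wavelength, length, boundary="fixed"):
    """Positions of nodes measured from the reflecting boundary.

    fixed: node at the boundary itself, then one every half wavelength.
    free:  antinode at the boundary; the first node sits a quarter
           wavelength away, then one every half wavelength.
    """
    start = 0.0 if boundary == "fixed" else wavelength / 4
    return np.arange(start, length + 1e-12, wavelength / 2)

# A 1 m resonator carrying a 0.4 m wave:
print(node_positions(0.4, 1.0, "fixed"))  # [0.  0.2 0.4 0.6 0.8 1. ]
print(node_positions(0.4, 1.0, "free"))   # [0.1 0.3 0.5 0.7 0.9]
```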
== Examples ==
=== Sound ===
A sound wave consists of alternating cycles of compression and expansion of the wave medium. During compression, the molecules of the medium are forced together, resulting in increased pressure and density. During expansion, the molecules are forced apart, resulting in decreased pressure and density.
The number of nodes in a specified length is directly proportional to the frequency of the wave.
Occasionally on a guitar, violin, or other stringed instrument, nodes are used to create harmonics. When the finger is placed on top of the string at a certain point, but does not push the string all the way down to the fretboard, a third node is created (in addition to the bridge and nut) and a harmonic is sounded. During normal play when the frets are used, the harmonics are always present, although they are quieter. With the artificial node method, the overtone is louder and the fundamental tone is quieter. If the finger is placed at the midpoint of the string, the first overtone is heard, which is an octave above the fundamental note which would be played, had the harmonic not been sounded. When two additional nodes divide the string into thirds, this creates an octave and a perfect fifth (twelfth). When three additional nodes divide the string into quarters, this creates a double octave. When four additional nodes divide the string into fifths, this creates a double-octave and a major third (17th). The octave, major third and perfect fifth are the three notes present in a major chord.
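The arithmetic behind these intervals is straightforward: forcing a node at 1/n of the string's length leaves the nth harmonic sounding, at n times the fundamental frequency. A short sketch (assuming an idealized string, with illustrative function names):

```python
import math

def harmonic_from_touch_point(fraction_of_length, fundamental_hz):
    """Touching the string at 1/n of its length forces a node there,
    leaving the nth harmonic (n times the fundamental) sounding."""
    n = round(1 / fraction_of_length)
    freq = n * fundamental_hz
    semitones = 12 * math.log2(n)  # interval above the open string
    return n, freq, semitones

# Open A string at 110 Hz:
for frac in (1/2, 1/3, 1/4, 1/5):
    n, f, st = harmonic_from_touch_point(frac, 110.0)
    print(f"touch at 1/{n}: {f:.0f} Hz, {st:.2f} semitones up")
# 1/2 -> 220 Hz (12.00, octave); 1/3 -> 330 Hz (19.02, twelfth);
# 1/4 -> 440 Hz (24.00, double octave); 1/5 -> 550 Hz (27.86, ~17th)
```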
The characteristic sound that allows the listener to identify a particular instrument is largely due to the relative magnitude of the harmonics created by the instrument.
=== Waves in two or three dimensions ===
In two-dimensional standing waves, nodes are curves (often straight lines or circles when displayed on simple geometries). For example, sand collects along the nodes of a vibrating Chladni plate to indicate regions where the plate is not moving.
In chemistry, quantum mechanical waves, or "orbitals", are used to describe the wave-like properties of electrons. Many of these quantum waves have nodes and antinodes as well. The number and position of these nodes and antinodes give rise to many of the properties of an atom or covalent bond. Atomic orbitals are classified according to the number of radial and angular nodes. A radial node for the hydrogen atom is a sphere that occurs where the wavefunction for an atomic orbital is equal to zero, while the angular node is a flat plane.
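The node counts follow directly from the quantum numbers: an orbital with principal quantum number {\displaystyle n} and azimuthal quantum number {\displaystyle \ell } has {\displaystyle \ell } angular nodes and {\displaystyle n-\ell -1} radial nodes, for {\displaystyle n-1} nodes in total. For example, a 2p orbital ({\displaystyle n=2,\ \ell =1}) has one angular node (its nodal plane) and no radial nodes.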
Molecular orbitals are classified according to bonding character. Molecular orbitals with an antinode between nuclei are very stable, and are known as "bonding orbitals" which strengthen the bond. In contrast, molecular orbitals with a node between nuclei will not be stable due to electrostatic repulsion and are known as "anti-bonding orbitals" which weaken the bond. Another such quantum mechanical concept is the particle in a box, where the number of nodes of the wavefunction can help determine the quantum energy state: zero nodes corresponds to the ground state, one node corresponds to the 1st excited state, etc. In general, if one arranges the eigenstates in the order of increasing energies,
{\displaystyle \epsilon _{1},\epsilon _{2},\epsilon _{3},\ldots }, the eigenfunctions likewise fall in the order of increasing number of nodes; the nth eigenfunction has n−1 nodes, between each of which the following eigenfunctions have at least one node.
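This node-counting rule is easy to verify numerically for the particle in a box, whose eigenfunctions are {\displaystyle \psi _{n}(x)={\sqrt {2/L}}\sin(n\pi x/L)}. A minimal sketch that counts sign changes of the sampled wavefunction away from the walls:

```python
import numpy as np

L = 1.0
# Midpoint sampling avoids landing exactly on a zero of the wavefunction.
x = (np.arange(2000) + 0.5) * (L / 2000)

def psi(n, x, L=1.0):
    """Particle-in-a-box eigenfunction psi_n(x) = sqrt(2/L) * sin(n*pi*x/L)."""
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

for n in range(1, 6):
    nodes = np.count_nonzero(np.diff(np.sign(psi(n, x))))  # sign changes
    print(f"n = {n}: {nodes} interior nodes")  # prints n - 1 for each state
```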
== References ==
A sound reinforcement system is the combination of microphones, signal processors, amplifiers, and loudspeakers in enclosures all controlled by a mixing console that makes live or pre-recorded sounds louder and may also distribute those sounds to a larger or more distant audience. In many situations, a sound reinforcement system is also used to enhance or alter the sound of the sources on the stage, typically by using electronic effects, such as reverb, as opposed to simply amplifying the sources unaltered.
A sound reinforcement system for a rock concert in a stadium may be very complex, including hundreds of microphones, complex live sound mixing and signal processing systems, tens of thousands of watts of amplifier power, and multiple loudspeaker arrays, all overseen by a team of audio engineers and technicians. On the other hand, a sound reinforcement system can be as simple as a small public address (PA) system, consisting of, for example, a single microphone connected to a 100-watt amplified loudspeaker for a singer-guitarist playing in a small coffeehouse. In both cases, these systems reinforce sound to make it louder or distribute it to a wider audience.
Some audio engineers and others in the professional audio industry disagree over whether these audio systems should be called sound reinforcement (SR) systems or PA systems. Some distinguish between the two terms by technology and capability, while others distinguish by intended use (e.g., SR systems are for live event support and PA systems are for reproduction of speech and recorded music in buildings and institutions). In some regions or markets, the distinction between the two terms is important, though the terms are considered interchangeable in many professional circles.
== Basic concept ==
A typical sound reinforcement system consists of: input transducers (e.g., microphones), which convert sound energy such as a person singing into an electric signal; signal processors, which alter the signal characteristics (e.g., equalizers that adjust the bass and treble, compressors that reduce signal peaks); amplifiers, which produce a powerful version of the resulting signal that can drive a loudspeaker; and output transducers (e.g., loudspeakers in speaker cabinets), which convert the signal back into sound energy (the sound heard by the audience and the performers). These primary parts involve varying numbers of individual components to achieve the desired goal of reinforcing and clarifying the sound to the audience, performers, or other individuals.
=== Signal path ===
Sound reinforcement in a large format system typically involves a signal path that starts with the signal inputs, which may be instrument pickups (on an electric guitar or electric bass) or a microphone that a vocalist is singing into or a microphone placed in front of an instrument or guitar amplifier. These signal inputs are plugged into the input jacks of a thick multicore cable (often called a snake). The snake then delivers the signals of all of the inputs to one or more mixing consoles.
In a coffeehouse or small nightclub, the snake may be only routed to a single mixing console, which an audio engineer will use to adjust the sound and volume of the onstage vocals and instruments that the audience hears through the main speakers and adjust the volume of the monitor speakers that are aimed at the performers.
Mid- to large-size performing venues typically route the onstage signals to two mixing consoles: the front of house (FOH), and the stage monitor system, which is often a second mixer at the side of the stage. In these cases, at least two audio engineers are required; one to do the main mix for the audience at FOH and another to do the monitor mix for the performers on stage.
Once the signal arrives at an input on a mixing console, this signal can be adjusted in many ways by the sound engineer. A signal can be equalized (e.g., by adjusting the bass or treble of the sound), compressed (to avoid unwanted signal peaks), or panned (that is sent to the left or right speakers). The signal may also be routed into an external effects processor, such as a reverb effect, which outputs a wet (effected) version of the signal, which is typically mixed in varying amounts with the dry (effect-free) signal. Many electronic effects units are used in sound reinforcement systems, including digital delay and reverb. Some concerts use pitch correction effects (e.g., AutoTune), which electronically correct any out-of-tune singing.
Mixing consoles also have additional sends, also referred to as auxes or aux sends (an abbreviation for "auxiliary send"), on each input channel so that a different mix can be created and sent elsewhere for another purpose. One usage for aux sends is to create a mix of the vocal and instrument signals for the monitor mix (this is what the onstage singers and musicians hear from their monitor speakers or in-ear monitors). Another use of an aux send is to select varying amounts of certain channels (via the aux send knobs on each channel), and then route these signals to an effects processor. A common example of the second use of aux sends is to send all of the vocal signals from a rock band through a reverb effect. While reverb is usually added to vocals in the main mix, it is not usually added to electric bass and other rhythm section instruments.
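The routing logic can be pictured as two independent weighted sums over the same input channels, one set of weights per mix. A toy sketch (channel names and levels are purely illustrative):

```python
import numpy as np

# Each channel has a fader level for the main (front-of-house) mix and an
# independent aux-send level feeding the monitor mix.
channels = {
    "vocal":  {"signal": np.random.randn(512), "fader": 0.8, "aux1": 1.0},
    "guitar": {"signal": np.random.randn(512), "fader": 0.6, "aux1": 0.2},
    "bass":   {"signal": np.random.randn(512), "fader": 0.7, "aux1": 0.0},
}

main_mix    = sum(ch["fader"] * ch["signal"] for ch in channels.values())
monitor_mix = sum(ch["aux1"]  * ch["signal"] for ch in channels.values())
# The singer's wedge gets mostly vocal; the audience hears the full balance.
```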
The processed input signals are then mixed to the master faders on the console. The next step in the signal path generally depends on the size of the system in place. In smaller systems, the main outputs are often sent to an additional equalizer, or directly to a power amplifier, with one or more loudspeakers (typically two, one on each side of the stage in smaller venues, or a large number in big venues) that are connected to that amplifier. In large-format systems, the signal is typically first routed through an equalizer then to a crossover. A crossover splits the signal into multiple frequency bands with each band being sent to separate amplifiers and speaker enclosures for low, middle, and high-frequency signals. Low-frequency signals are sent to amplifiers and then to subwoofers, and middle and high-frequency sounds are typically sent to amplifiers which power full-range speaker cabinets. Using a crossover to separate the sound into low, middle and high frequencies can lead to a "cleaner", clearer sound (see bi-amplification) than routing all of the frequencies through a single full-range speaker system. Nevertheless, many small venues still use a single full-range speaker system, as it is easier to set up and less expensive.
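As a rough illustration of the crossover stage, the sketch below splits a signal into three bands with Butterworth filters from SciPy. The crossover frequencies and filter order are illustrative assumptions; production crossovers commonly use Linkwitz-Riley alignments so the bands sum flat:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000                              # sample rate in Hz
low_xover, high_xover = 120.0, 2000.0   # illustrative crossover points

sos_low  = butter(4, low_xover, btype="lowpass", fs=fs, output="sos")
sos_mid  = butter(4, [low_xover, high_xover], btype="bandpass", fs=fs, output="sos")
sos_high = butter(4, high_xover, btype="highpass", fs=fs, output="sos")

signal = np.random.randn(fs)            # one second of test noise
subs   = sosfilt(sos_low, signal)       # to the subwoofer amplifiers
mids   = sosfilt(sos_mid, signal)       # to the mid-range cabinets
highs  = sosfilt(sos_high, signal)      # to the high-frequency drivers
```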
== System components ==
=== Input transducers ===
Many types of input transducers can be found in a sound reinforcement system, with microphones being the most commonly used input device. Microphones can be classified according to their method of transduction, polar pattern, or functional application. Most microphones used in sound reinforcement are either dynamic or condenser microphones. One type of directional microphone, the cardioid, is widely used in live sound because it reduces pickup from the sides and rear, helping to avoid unwanted feedback from the stage monitor system.
Microphones used for sound reinforcement are positioned and mounted in many ways, including base-weighted upright stands, podium mounts, tie-clips, instrument mounts, and headset mounts. Microphones on stands are also placed in front of instrument amplifiers to pick up the sound. Headset-mounted and tie-clip-mounted microphones are often used with wireless transmission to allow performers or speakers to move freely. Early adopters of headset-mounted microphone technology included country singer Garth Brooks, Kate Bush, and Madonna.
Other types of input transducers include magnetic pickups used in electric guitars and electric basses, contact microphones used on stringed instruments and pianos, and phonograph pickups (cartridges) used in record players. Electronic instruments such as synthesizers can have their output signal routed directly to the mixing console. A DI unit may be necessary to adapt some of these sources to the inputs of the console.
=== Wireless ===
Wireless systems are typically used for electric guitar, bass, handheld microphones and in-ear monitor systems. This lets performers move about the stage during the show or even go out into the audience without the worry of tripping over or disconnecting cables.
=== Mixing consoles ===
Mixing consoles are the heart of a sound reinforcement system. This is where the sound engineer can adjust the volume and tone of each input, whether it is a vocalist's microphone or the signal from an electric bass, and mix, equalize and add effects to these sound sources. Doing the mixing for a live show requires a mix of technical and artistic skills. A sound engineer needs to have an expert knowledge of speaker and amplifier set-up, effects units and other technologies and a good "ear" for what the music should sound like in order to create a good mix.
Multiple consoles can be used for different purposes in a single sound reinforcement system. The front-of-house (FOH) mixing console is typically located where the operator can see the action on stage and hear what the audience hears. For broadcast and recording applications, the mixing console may be placed within an enclosed booth or outside in an OB van. Large music productions often use a separate stage monitor mixing console which is dedicated to creating mixes for the performers on-stage. These consoles are typically placed at the side of the stage so that the operator can communicate with the performers on stage.
=== Signal processors ===
Small PA systems for venues such as bars and clubs are now available with features that were formerly only available on professional-level equipment, such as digital reverb effects, graphic equalizers, and, in some models, feedback prevention circuits which electronically sense and prevent audio feedback when it becomes a problem. Digital effects units may offer multiple pre-set and variable reverb, echo and related effects. Digital loudspeaker management systems offer sound engineers digital delay (to ensure speakers are in sync with each other), limiting, crossover functions, EQ filters, compression and other functions in a single rack-mountable unit. In previous decades, sound engineers typically had to transport a substantial number of rack-mounted analog effects unit devices to accomplish these tasks.
==== Equalizers ====
Equalizers are electronic devices that allow audio engineers to control the tone and frequencies of the sound in a channel, group (e.g., all the mics on a drumkit) or an entire stage's mix. The bass and treble controls on a home stereo are a simple type of equalizer. Equalizers exist in professional sound reinforcement systems in three forms: shelving equalizers (typically for a whole range of bass and treble frequencies), graphic equalizers and parametric equalizers. Graphic equalizers have faders (vertical slide controls) which together resemble a frequency response curve plotted on a graph. The faders can be used to boost or cut specific frequency bands.
Using equalizers, frequencies that are too weak, such as a singer with modest projection in their lower register, can be boosted. Frequencies that are too loud, such as a "boomy" sounding bass drum, or an overly resonant dreadnought guitar can be cut. Sound reinforcement systems typically use graphic equalizers with one-third octave frequency centers. These are typically used to equalize output signals going to the main loudspeaker system or the monitor speakers on stage. Parametric equalizers are often built into each channel in mixing consoles, typically for the mid-range frequencies. They are also available as separate rack-mount units that can be connected to a mixing board. Parametric equalizers typically use knobs and sometimes buttons. The audio engineer can select which frequency band to cut or boost, and then use additional knobs to adjust how much to cut or boost this frequency range. Parametric equalizers first became popular in the 1970s and have remained the program equalizer of choice for many engineers since then.
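A single band of a digital parametric equalizer is typically implemented as a peaking biquad filter. Below is a sketch using the widely circulated Audio EQ Cookbook (RBJ) formulas; the center frequency, Q, and gain are example values only:

```python
import math

def peaking_eq_coeffs(fs, f0, q, gain_db):
    """Biquad peaking-EQ coefficients (Audio EQ Cookbook formulas).
    Boosts or cuts a band centered on f0; q sets how narrow the band is."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a]
    a_coefs = [1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a]
    a0 = a_coefs[0]  # normalize so a0 == 1, the usual biquad convention
    return [bi / a0 for bi in b], [ai / a0 for ai in a_coefs]

# Cut a "boomy" bass drum by 6 dB around 100 Hz, fairly narrow (Q = 2):
b, a = peaking_eq_coeffs(fs=48000, f0=100.0, q=2.0, gain_db=-6.0)
```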
A high-pass (low-cut) and/or low-pass (high-cut) filter may also be included on equalizers or audio consoles. High-pass and low-pass filters restrict a given channel's bandwidth extremes. Cutting very low-frequency signals (termed infrasonic, or subsonic) reduces the waste of amplifier power on content that does not produce audible sound and that can moreover be hard on the subwoofer drivers. A low-pass filter to cut ultrasonic energy is useful to prevent interference from radio frequencies, lighting control, or digital circuitry creeping into the power amplifiers. Such filters are often paired with graphic and parametric equalizers to give the audio engineer full control of the frequency range. High-pass filters and low-pass filters used together function as a band-pass filter, eliminating undesirable frequencies both above and below the auditory spectrum. A band-stop filter does the opposite: it allows all frequencies to pass except for one band in the middle. A feedback suppressor, using a microprocessor, automatically detects the onset of feedback and applies a narrow band-stop filter (a notch filter) at the specific frequency or frequencies at which the feedback occurs.
==== Compressors ====
Dynamic range compression is designed to help the audio engineer manage the dynamic range of audio signals. Prior to the invention of automatic compressors, audio engineers accomplished the same goal by "riding the faders": listening carefully to the mix and lowering the faders of any singer or instrument that was getting too loud. A compressor accomplishes this by reducing the gain of a signal that is above a defined level (the threshold) by a defined amount determined by the ratio setting. Most compressors allow the operator to select a ratio within a range typically between 1:1 and 20:1, with some allowing settings of up to ∞:1. A compressor with a high compression ratio is typically referred to as a limiter. The speed at which the compressor adjusts the gain of the signal (attack and release) is typically adjustable, as is the final output or make-up gain of the device.
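A minimal sketch of the gain computer just described, with one-pole attack/release smoothing. The envelope detection and parameter values are simplified assumptions, not a production design:

```python
import numpy as np

def compress(signal, fs, threshold_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=100.0):
    """Feed-forward compressor sketch: level above the threshold is
    reduced by the ratio; gain changes are smoothed by attack/release."""
    level_db = 20 * np.log10(np.abs(signal) + 1e-10)
    over = np.maximum(level_db - threshold_db, 0.0)
    # Static curve: each `ratio` dB over the threshold yields 1 dB out.
    desired_gain_db = -over * (1.0 - 1.0 / ratio)
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    gain_db = np.empty_like(desired_gain_db)
    g = 0.0
    for i, d in enumerate(desired_gain_db):
        coeff = a_att if d < g else a_rel   # fast attack, slow release
        g = coeff * g + (1.0 - coeff) * d
        gain_db[i] = g
    return signal * 10 ** (gain_db / 20.0)
```

Setting a very high ratio (say 20:1 or more) turns this same gain computer into the limiter described above.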
Compressor applications vary widely. Some applications use limiters for component protection and gain structure control. Artistic signal manipulation using a compressor is a subjective technique widely utilized by mix engineers to improve clarity or to creatively alter the signal in relation to the program material. An example of artistic compression is the typical heavy compression used on the various components of a modern rock drum kit. The drums are processed to be perceived as sounding more punchy and full.
==== Noise gates ====
A noise gate mutes signals below a set threshold level. A noise gate's function is, in a sense, opposite to that of a compressor. Noise gates are useful for microphones which will pick up noise that is not relevant to the program, such as the hum of a miked electric guitar amplifier or the rustling of papers on a minister's lectern. Noise gates are also used to process the microphones placed near the drums of a drum kit in many hard rock and metal bands. Without a noise gate, the microphone for a specific instrument such as the floor tom will also pick up signals from nearby drums or cymbals. With a noise gate, the threshold of sensitivity for each microphone on the drum kit can be set so that only the direct strike and subsequent decay of the drum will be heard, not the nearby sounds.
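A companion sketch to the compressor above, again a simplified assumption rather than a production design: an envelope follower opens the gate while the source exceeds the threshold and fades it closed at a release rate as the drum decays:

```python
import numpy as np

def noise_gate(signal, fs, threshold_db=-50.0, release_ms=50.0):
    """Noise-gate sketch: pass signal while its envelope exceeds the
    threshold; fade toward silence once it falls below."""
    env_coeff = np.exp(-1.0 / (fs * 0.010))        # ~10 ms envelope follower
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    threshold = 10 ** (threshold_db / 20.0)
    gain = np.empty_like(signal)
    env, g = 0.0, 0.0
    for i, s in enumerate(signal):
        env = env_coeff * env + (1.0 - env_coeff) * abs(s)
        target = 1.0 if env > threshold else 0.0
        g = target if target >= g else rel * g     # open fast, close slowly
        gain[i] = g
    return signal * gain
```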
==== Effects ====
Reverberation and delay effects are widely used in sound reinforcement systems to enhance the sound of the mix and create a desired artistic effect. Reverb and delay add a sense of spaciousness to the sound. Reverb can give the effect of a singing voice or instrument being present in anything from a small room to a massive hall, or even in a space that does not exist in the physical world. The use of reverb often goes unnoticed by the audience, as it often sounds more natural than if the signal were left "dry" (without effects). Many modern mixing boards designed for live sound include on-board reverb effects.
Other effects include modulation effects such as flanger, phaser, and chorus, and spectral manipulation or harmonic effects such as the exciter and harmonizer. The use of effects in the reproduction of 2010-era pop music is often an attempt to mimic the sound of the studio version of the artist's music in a live concert setting. For example, an audio engineer may use an Auto-Tune effect to produce unusual vocal sound effects that a singer used on their recordings.
The appropriate type, variation, and level of effects is quite subjective and is often collectively determined by a production's audio engineer, artists, bandleader, music producer, or musical director.
==== Feedback suppressor ====
A feedback suppressor detects unwanted audio feedback and suppresses it, typically by automatically inserting a notch filter into the signal path of the system. Audio feedback can create unwanted loud, screaming noises that are disruptive to the performance and can damage speakers and the ears of performers and audience members. Audio feedback from microphones occurs when a microphone is too near a monitor or main speaker and the sound reinforcement system amplifies itself. While audio feedback through a microphone is almost universally regarded as a negative phenomenon, many electric guitarists use guitar feedback as part of their performance. This type of feedback is intentional, so the sound engineer does not try to prevent it.
=== Power amplifiers ===
A power amplifier is an electronic device that uses electrical power and circuitry to boost a line level signal and provides enough electrical power to drive a loudspeaker and produce sound. All loudspeakers, including headphones, require power amplification. Most professional audio power amplifiers also provide protection from clipping typically as some form of limiting. A power amplifier pushed into clipping can damage loudspeakers. Amplifiers also typically provide protection against short circuits across the output and overheating.
Audio engineers select amplifiers that provide enough headroom. Headroom refers to the amount by which the signal-handling capabilities of an audio system exceed a designated nominal level. Headroom can be thought of as a safety zone allowing transient audio peaks to exceed the nominal level without damaging the system or the audio signal, e.g., via clipping. Standards bodies differ in their recommendations for nominal level and headroom. Selecting amplifiers with enough headroom helps to ensure that the signal will remain clean and undistorted.
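Headroom is simply the decibel ratio between the maximum and nominal signal levels, as in this minimal sketch (both voltages are assumed values):

import math

v_nominal = 1.228   # +4 dBu nominal line level, in volts RMS
v_max = 40.0        # hypothetical clipping voltage of the amplifier
headroom_db = 20.0 * math.log10(v_max / v_nominal)
print(f"headroom: {headroom_db:.1f} dB")   # about 30.3 dB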
Like most sound reinforcement equipment, professional power amplifiers are typically designed to be mounted within standard 19-inch racks. Rack-mounted amps are typically housed in road cases to prevent damage to the equipment during transportation. Active loudspeakers have internally mounted amplifiers that have been selected by the manufacturer to match the requirements of the loudspeaker. Some active loudspeakers also have equalization, crossover and mixing circuitry built in.
Since amplifiers can generate a significant amount of heat, thermal dissipation is an important factor for operators to consider when mounting amplifiers into equipment racks. Many power amplifiers feature internal fans to draw air across their heat sinks. The heat sinks can become clogged with dust, which can adversely affect the cooling capabilities of the amplifier.
In the 1970s and 1980s, most PAs employed heavy class AB amplifiers. In the late 1990s, power amplifiers in PA applications became lighter, smaller, more powerful, and more efficient, with the increasing use of switching power supplies and class D amplifiers, which offered significant weight- and space-savings as well as increased efficiency. Often installed in railroad stations, stadia, and airports, class D amplifiers can run with minimal additional cooling and with higher rack densities, compared to older amplifiers.
Digital loudspeaker management systems (DLMS) that combine digital crossover functions, compression, limiting, and other features in a single unit are used to process the mix from the mixing console and route it to the various amplifiers. Systems may include several loudspeakers, each with its own output optimized for a specific range of frequencies (i.e. bass, midrange, and treble). Bi-amping and tri-amping of a sound reinforcement system with the aid of a DLMS results in more efficient use of amplifier power by sending each amplifier only the frequencies appropriate for its respective loudspeaker and eliminating losses associated with passive crossover circuits.
=== Main loudspeakers ===
A simple and inexpensive PA loudspeaker may have a single full-range loudspeaker driver, housed in a suitable enclosure. More elaborate, professional-caliber sound reinforcement loudspeakers may incorporate separate drivers to produce low, middle, and high frequency sounds. A crossover network routes the different frequencies to the appropriate drivers. In the 1960s, horn loaded theater and PA speakers were commonly columns of multiple drivers mounted in a vertical line within a tall enclosure.
The 1970s to early 1980s was a period of innovation in loudspeaker design, with many sound reinforcement companies designing their own speakers using commercially available drivers. The areas of innovation were in cabinet design, durability, ease of packing and transport, and ease of setup. This period also saw the introduction of the hanging or flying of main loudspeakers at large concerts. During the 1980s the large speaker manufacturers started producing standard products using the innovations of the 1970s. These were mostly smaller two-way systems with 12", 15" or double 15" woofers and a high-frequency driver attached to a high-frequency horn. The 1980s also saw the start of loudspeaker companies focused on the sound reinforcement market.
The 1990s saw the introduction of line arrays, where long vertical arrays of loudspeakers in smaller cabinets are used to increase efficiency and provide even dispersion and frequency response. Trapezoidal-shaped enclosures became popular as this shape allowed many of them to be easily arrayed together. This period also saw the introduction of inexpensive molded plastic speaker enclosures mounted on tripod stands. Many feature built-in power amplifiers which made them practical for non-professionals to set up and operate successfully. The sound quality available from these simple powered speakers varies widely depending on the implementation.
Many sound reinforcement loudspeaker systems incorporate protection circuitry to prevent damage from excessive power or operator error. Resettable fuses, specialized current-limiting light bulbs, and circuit breakers have been used alone or in combination to reduce driver failures. The professional sound reinforcement industry has also made the Neutrik Speakon NL4 and NL8 connectors the standard speaker connectors, replacing 1/4" jacks, XLR connectors, and Cannon multipin connectors, which are all limited to a maximum of 15 amps of current. XLR connectors are still the standard input connector on active loudspeaker cabinets.
To help users avoid overpowering them, loudspeakers have a power rating (in watts) which indicates their maximum power capacity. Thanks to the efforts of the Audio Engineering Society (AES) and the loudspeaker industry group ALMA in developing the EIA-426 testing standard, power-handling specifications became more trustworthy.
Lightweight, portable speaker systems for small venues route the low-frequency parts of the music (electric bass, bass drum, etc.) to a powered subwoofer. Routing the low-frequency energy to a separate amplifier and subwoofer can substantially improve the bass response of the system. Also, clarity may be enhanced because low-frequency sounds can cause intermodulation and other distortion in speaker systems.
Professional sound reinforcement speaker systems often include dedicated hardware for safely flying them above the stage area, to provide more even sound coverage and to maximize sightlines within performance venues.
=== Monitor loudspeakers ===
Monitor loudspeakers, also called foldback loudspeakers, are speaker cabinets used onstage to help performers to hear their singing or playing. As such, monitor speakers are pointed towards a performer or a section of the stage. They are generally sent a different mix of vocals or instruments than the mix that is sent to the main loudspeaker system. Monitor loudspeaker cabinets are often a wedge shape, directing their output upwards towards the performer when set on the floor of the stage. Simple two-way, dual-driver designs with a speaker cone and a horn are common, as monitor loudspeakers need to be smaller to save space on the stage. These loudspeakers typically require less power and volume than the main loudspeaker system, as they only need to provide sound for a few people who are in relatively close proximity to the loudspeaker. Some manufacturers have designed loudspeakers for use either as a component of a small PA system or as a monitor loudspeaker. A number of manufacturers produce powered monitor speakers, which contain an integrated amplifier.
Using monitor speakers instead of in-ear monitors typically results in an increase of stage volume, which can lead to more feedback issues and progressive hearing damage for the performers in front of them. The clarity of the mix for the performer on stage is also typically compromised as they hear more extraneous noise from around them. The use of monitor loudspeakers, active (with an integrated amplifier) or passive, requires more cabling and gear on stage, resulting in a more cluttered stage. These factors, amongst others, have led to the increasing popularity of in-ear monitors.
=== In-ear monitors ===
In-ear monitors are headphones that have been designed for use as monitors by a live performer. They are either of a universal fit or custom fit design. The universal fit in-ear monitors feature rubber or foam tips that can be inserted into virtually anybody's ear. Custom-fit in-ear monitors are created from an impression of the user's ear that has been made by an audiologist. In-ear monitors are almost always used in conjunction with a wireless transmitting system, allowing the performer to freely move about the stage while receiving their monitor mix.
In-ear monitors offer considerable isolation for the performer using them: no on-stage sound is heard, and the monitor engineer can deliver a much more accurate and clear mix for the performer. With in-ear monitors, each performer can be sent their own customized mix; although this was also possible with monitor speakers, with in-ear monitors one performer's mix cannot be heard by the other musicians. A downside of this isolation is that the performer cannot hear the crowd or the comments from other performers on stage who do not have microphones (e.g., if the bass player wishes to communicate with the drummer). This has been remedied in larger productions by setting up microphones facing the audience that can be mixed into the in-ear monitor sends.
Since their introduction in the mid-1980s, in-ear monitors have grown to be the most popular monitoring choice for large touring acts. The reduction or elimination of loudspeakers other than instrument amplifiers on stage has allowed for cleaner and less problematic mixing for both the front of house and monitor engineers. Audio feedback is greatly reduced and there is less sound reflecting off the back wall of the stage out into vocal mics and the audience, which improves the clarity of the front-of-house mix.
== Applications ==
Sound reinforcement systems are used in a broad range of different settings, each of which poses different challenges.
=== Rental systems ===
Audio-visual rental systems have to be able to withstand heavy use and even abuse from renters. For this reason, rental companies tend to own speaker cabinets that are heavily braced and protected with steel corners, and electronic equipment such as power amplifiers or effects are often mounted into protective road cases. Rental companies also tend to select gear that have electronic protection features, such as speaker-protection circuitry and amplifier limiters.
Rental systems for non-professionals need to be easy to use and set up and they must be easy to repair and maintain for the renting company. From this perspective, speaker cabinets need to have easy-to-access horns, speakers, and crossover circuitry, so that repairs or replacements can be made.
Many touring acts and large venue corporate events will rent large sound reinforcement systems that typically include one or more audio engineers on staff with the renting company. In the case of rental systems for tours, there are typically several audio engineers and technicians from the rental company that tour with the band to set up and calibrate the equipment. The individual that mixes the band is often selected and provided by the band, as they are familiar with the various aspects of the show and understand how the band wants the show to sound.
=== Live music clubs and dance events ===
Setting up sound reinforcement for live music clubs and dance events often poses unique challenges, because there is such a large variety of venues that are used as clubs, ranging from former warehouses or music theaters to small restaurants or basement pubs with concrete walls. Dance events may be held in huge warehouses, aircraft hangars or outdoor spaces. In some cases, clubs are housed in multi-story venues with balconies or in L-shaped rooms, which makes it hard to get a consistent sound for all audience members. The solution is to use fill-in speakers to obtain good coverage, using a delay to ensure that the audience does not hear the same reinforced sound at different times.
The number of subwoofer speaker cabinets and power amplifiers dedicated to low-frequency sounds used in a club depends on the type of club, the genres of music played there, and the size of the venue. A small coffeehouse where traditional folk, bluegrass or jazz groups are the main performers may have no subwoofers, and instead rely on the full-range main PA speakers to reproduce bass sounds. On the other hand, a club where hard rock or heavy metal music bands play or a nightclub where DJs play dance music may have multiple large subwoofers, as these genres and music styles typically use powerful, deep bass sound.
A challenge with designing sound systems for clubs is that the sound system may need to be used for both prerecorded music played by DJs and live music. A club system designed for DJs needs a DJ mixer and space for record players. In contrast, a live music club needs a mixing board designed for live sound, an onstage monitor system, and a multicore snake cable running from the stage to the mixer. Clubs that feature both types of shows may face challenges providing the desired equipment and set-up for both uses. Clubs can be a hostile environment for sound gear, in that the air may be hot, humid, and smoky. In some clubs, keeping power amplifiers cool may be a challenge.
=== Houses of worship ===
Churches and similar houses of worship often pose design challenges. Speakers may need to be unobtrusive to blend in with antique woodwork and stonework. In some cases, audio designers have designed custom-painted speaker cabinets. Some facilities, such as sanctuaries or chapels are long rooms with low ceilings and additional fill-in speakers are needed throughout the room to give good coverage. Once installed, church systems are often operated by amateur volunteers from the congregation, which means that they must be easy to operate and troubleshoot. To this end, some mixing consoles designed for houses of worship have automatic mixers, which turn down unused channels to reduce noise, and automatic feedback elimination circuits which detect and notch out frequencies that are feeding back. These features may also be available in multi-function consoles used in convention facilities and multi-purpose venues.
=== Touring systems ===
Touring sound systems are available in many different sizes and shapes as they have to be powerful and versatile enough to cover many different halls and venues. Touring systems range from mid-sized systems for bands playing nightclub and other mid-sized venues to large systems for groups playing stadiums, arenas and outdoor festivals. Tour sound systems are often designed with substantial redundancy features, so that in the event of equipment failure or amplifier overheating, the system will continue to function. Touring systems for bands performing for crowds of a few thousand people and up are typically set up and operated by a team of technicians and engineers who travel with the performers to every show.
Mainstream bands that are going to perform in mid- to large-sized venues during their tour schedule one to two weeks of technical rehearsal with the entire concert system and production staff, including audio engineers, at hand. This allows the audio and lighting engineers to become familiar with the show and establish presets on their digital equipment (e.g., digital mixers) for each part of the show, if needed. Many modern musical groups work with their front of house and monitor mixing engineers during this time to establish what their general idea is of how the show and mix should sound, both for themselves on stage and for the audience.
This often involves programming different effects and signal processing for use on specific songs, to make the songs sound somewhat similar to the studio versions. To manage a show with a lot of effects changes, the mixing engineers for the show often choose to use a digital mixing console so that they can save and automatically recall these many settings in between each song. This time is also used by the system technicians to get familiar with the specific combination of gear that is going to be used on the tour and how it acoustically responds during the show. These technicians remain busy during the show, making sure the SR system is operating properly and that the system is tuned correctly, as the acoustic response of a room or venue changes throughout the day depending on the temperature, humidity, and number of people in the room or space.
=== Live theater ===
Sound for live theater, operatic theater, and other dramatic applications may pose problems similar to those of churches; theaters may be in heritage buildings, where speakers and wiring are required to blend in with the architecture. The need for clear sightlines may make the use of regular speaker cabinets unacceptable; slim, low-profile speakers are often used instead.
In live theater and drama, performers move around onstage, which means that wireless microphones may be necessary. Some of the higher-budget theater shows and musicals are mixed in surround sound live, often with the show's sound operator triggering sound effects that are being mixed with music and dialogue by the show's mixing engineer. These systems are usually much more extensive to design, typically involving separate sets of speakers for different zones in the theater.
=== Classical music and opera ===
A subtle type of sound reinforcement called acoustic enhancement is used in some concert halls where classical music such as symphonies and opera is performed. Acoustic enhancement systems add more sound to the hall and prevent dead spots in the audience seating area by "...augment[ing] a hall's intrinsic acoustic characteristics." The systems use "...an array of microphones connected to a computer [which is] connected to an array of loudspeakers." However, as concertgoers have become aware of the use of these systems, debates have arisen, because "...purists maintain that the natural acoustic sound of [Classical] voices [or] instruments in a given hall should not be altered."
Kai Harada's article Opera's Dirty Little Secret states that opera houses have begun using electronic acoustic enhancement systems "...to compensate for flaws in a venue's acoustical architecture." Despite the uproar that has arisen amongst operagoers, Harada points out that none of the opera houses using acoustic enhancement systems "...use traditional, Broadway-style sound reinforcement, in which most if not all singers are equipped with radio microphones mixed to a series of unsightly loudspeakers scattered throughout the theatre." Instead, most opera houses use the sound reinforcement system for acoustic enhancement, and for subtle boosting of offstage voices, onstage dialogue, and sound effects (e.g., church bells in Tosca or thunder in Wagnerian operas).
These systems use microphones, computer processing "with delay, phase, and frequency-response changes", and then send the signal "... to a large number of loudspeakers placed in extremities of the performance venue." Another acoustic enhancement system, VRAS, uses "...different algorithms based on microphones placed around the room." The Deutsche Staatsoper in Berlin and the Hummingbird Centre in Toronto use a LARES system. The Ahmanson Theatre in Los Angeles, the Royal National Theatre in London, and the Vivian Beaumont Theater in New York City use the SIAP system.
=== Lecture halls and conference rooms ===
Lecture halls and conference rooms pose the challenge of reproducing speech clearly in a large hall, which may have reflective, echo-producing surfaces. One issue with reproducing speech is that the microphone used to pick up the sound of an individual's voice may also pick up unwanted sounds, such as the rustling of papers on a podium. A more tightly directional microphone may help to reduce unwanted background noises.
Another challenge with doing live sound for individuals who are speaking at a conference is that, in comparison with professional singers, individuals who are invited to speak at a forum may not be familiar with how microphones work. Some individuals may accidentally point the microphone towards a speaker or monitor speaker, which may cause audio feedback.
In some conferences, sound engineers have to provide microphones for a large number of people who are speaking, in the case of a panel conference or debate. In some cases, automatic mixers are used to control the levels of the microphones and turn off the channels for microphones that are not being spoken into, to reduce unwanted background noise and reduce the likelihood of feedback.
=== Sports sound systems ===
Systems for sports facilities often have to deal with substantial echo, which can make speech unintelligible. Sports and recreational sound systems often face environmental challenges as well, such as the need for weather-proof outdoor speakers in outdoor stadiums and humidity- and splash-resistant speakers in swimming pools. Another challenge with sports sound reinforcement setups is that in many arenas and stadiums, the spectators are on all four sides of the playing field. This requires 360-degree sound coverage. This is very different from the norm with music festivals and music halls, where the musicians are on stage and the audience is seated in front of the stage.
== Setting up and testing ==
Large-scale sound reinforcement systems are designed, installed, and operated by audio engineers and audio technicians. During the design phase of a newly constructed venue, audio engineers work with architects and contractors, to ensure that the proposed design will accommodate the speakers and provide an appropriate space for sound technicians and the racks of audio equipment. Audio engineers will also provide advice on which audio components would best suit the space and its intended use, and on the correct placement and installation of these components. During the installation phase, audio engineers ensure that high-power electrical components are safely installed and connected and that ceiling or wall-mounted speakers are properly mounted (or "flown") onto rigging. When the sound reinforcement components are installed, the audio engineers test and calibrate the system so that its sound production will be even across the frequency spectrum.
=== System testing ===
A sound reinforcement system should be able to accurately reproduce a signal from its input, through any processing, to its output without any coloration or distortion. However, due to inconsistencies in venue sizes, shapes, building materials, and even crowd densities, this is not always possible without prior calibration of the system. This can be done in one of several ways.
The oldest method of system calibration involves a set of healthy ears, test program material (i.e. music or speech), a graphic equalizer, and a familiarity with the desired frequency response. One must then listen to the program material through the system, take note of any noticeable frequency deviations or resonances, and correct them using the equalizer. Engineers typically use a familiar playlist to calibrate a new system. This by-ear process is still done by many engineers, even when analysis equipment is used, as a final check of how the system sounds with music or speech playing through the system. Another method of manual calibration requires a pair of high-quality headphones patched into the input signal before any processing. One can then use this direct signal as a reference with which to identify any differences in frequency response.
Since the development of digital signal processing (DSP), there have been many pieces of equipment and computer software designed to shift the bulk of the work of system calibration from human auditory interpretation to software algorithms that run on microprocessors. One tool for calibrating a sound system is a real-time analyzer (RTA). This tool is usually used by piping pink noise into the system and measuring the result with a special calibrated microphone connected to the RTA. Using this information, the system can be adjusted to help achieve the desired frequency response.
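A minimal sketch of the measurement idea (pink-noise generation and a magnitude spectrum with NumPy; a real RTA integrates the spectrum into third-octave bands and uses a calibrated microphone):

import numpy as np

fs, n = 48_000, 1 << 16
rng = np.random.default_rng(1)

# Pink noise: shape white noise by 1/sqrt(f) in the frequency domain.
spectrum = np.fft.rfft(rng.standard_normal(n))
f = np.fft.rfftfreq(n, 1 / fs)
spectrum[1:] /= np.sqrt(f[1:])
pink = np.fft.irfft(spectrum, n)

# Magnitude spectrum of the measured signal (here the "system" is unity).
mag_db = 20 * np.log10(np.abs(np.fft.rfft(pink)) + 1e-12)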
More recently, sound engineers have seen the introduction of dual fast-Fourier-transform (FFT) based audio analysis software, such as Smaart, which allows an engineer to view not only the frequency-response information that an RTA provides but also information in the time domain. This provides the engineer with much more meaningful data than an RTA alone. Dual-FFT analysis allows one to compare the source signal with the output signal, so a system can be calibrated using normal program material instead of pink noise or other special test signals, and calibration can be monitored during a performance.
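The underlying computation is a cross-spectrum divided by an auto-spectrum. A minimal sketch in the spirit of such analyzers (not Smaart's actual code; the delay-plus-gain "system" below is a stand-in for a real loudspeaker and room):

import numpy as np
from scipy import signal

fs = 48_000
rng = np.random.default_rng(2)
x = rng.standard_normal(fs * 4)                    # reference (program) signal
y = np.concatenate([np.zeros(48), 0.5 * x[:-48]])  # toy system: 1 ms delay, -6 dB

f, Sxy = signal.csd(x, y, fs=fs, nperseg=4096)     # cross-spectrum
_, Sxx = signal.welch(x, fs=fs, nperseg=4096)      # reference auto-spectrum
H = Sxy / Sxx                                      # transfer function estimate
mag_db = 20 * np.log10(np.abs(H))                  # frequency response
phase = np.unwrap(np.angle(H))                     # phase response (carries timing information)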
== Equipment supply stores ==
Professional audio stores sell microphones, speaker enclosures, monitor speakers, mixing boards, rack-mounted effects units and related equipment designed for use by audio engineers and technicians. Stores often use the word professional or pro in their name or the description of their store, to differentiate their stores from consumer electronics stores, which sell consumer-grade loudspeakers, home cinema equipment, and amplifiers, which are designed for private, in-home use.
== Notes ==
== References ==
== Further reading ==
=== Books ===
=== Papers ===
In earthquake engineering, vibration control is a set of technical means aimed to mitigate seismic impacts in building and non-building structures.
All seismic vibration control devices may be classified as passive, active or hybrid where:
passive control devices have no feedback capability between themselves, the structural elements, and the ground;
active control devices incorporate real-time recording instrumentation on the ground integrated with earthquake input processing equipment and actuators within the structure;
hybrid control devices have combined features of active and passive control systems.
When seismic waves reach the base of a building and begin to penetrate it, their energy flow density, due to reflections, drops dramatically: usually by up to 90%. However, the remaining portions of the incident waves during a major earthquake still carry a huge devastating potential.
After the seismic waves enter a superstructure, there are a number of ways to control them in order to mitigate their damaging effect and improve the building's seismic performance, for instance:
to dissipate the wave energy inside a superstructure with properly engineered dampers;
to disperse the wave energy across a wider range of frequencies;
to absorb the resonant portions of the whole wave frequencies band with the help of so-called mass dampers.
Devices of the last kind, abbreviated correspondingly as TMD for the tuned (passive), as AMD for the active, and as HMD for the hybrid mass dampers, have been studied and installed in high-rise buildings, predominantly in Japan, for a quarter of a century.
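As an illustration of how a passive tuned mass damper is tuned, Den Hartog's classic formulas give an optimal damper frequency and damping ratio for a chosen mass ratio. The sketch below uses assumed structural values and is not a design procedure for a real building:

import math

m_structure = 1.0e6   # modal mass of the structure, kg (assumed)
f_structure = 0.5     # natural frequency of the structure, Hz (assumed)
mu = 0.02             # mass ratio: damper mass / modal mass

f_damper = f_structure / (1.0 + mu)                          # optimal tuning
zeta_damper = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # optimal damping ratio
m_damper = mu * m_structure

print(f"damper: {m_damper:.0f} kg at {f_damper:.3f} Hz, damping ratio {zeta_damper:.3f}")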
However, there is quite another approach: partial suppression of the seismic energy flow into the superstructure, known as seismic or base isolation, which has been implemented in a number of historical buildings all over the world and has remained a focus of earthquake engineering research for years.
For this, some pads are inserted into all major load-carrying elements in the base of the building which should substantially decouple a superstructure from its substructure resting on a shaking ground. It also requires creating a rigidity diaphragm and a moat around the building, as well as making provisions against overturning and P-delta effect.
In refineries or plants, snubbers are often used for vibration control. Snubbers come in two variations: hydraulic snubbers and mechanical snubbers.
Hydraulic snubbers are used on piping systems when restrained thermal movement is allowed.
Mechanical snubbers restrict the acceleration of any pipe movement to a threshold of 0.2 g, the maximum acceleration that the snubber will permit the piping to experience.
== Vibration Control of Mechanical, Electrical, Plumbing, and HVAC ==
Standards and guidelines for testing, installation, and performance of mechanical equipment have been created in order to provide attachment methods for equipment located in noise-sensitive areas. One manual that provides such specifications is the 412 Manual: Installing Seismic Restraints for Mechanical Equipment (VISCMA – Vibration Isolation and Seismic Control Manufacturers Association).
== See also ==
Active vibration control
Anti-vibration compound
Cushioning
Earthquake-resistant structures
Metallic roller bearing
Tuned mass damper
Vibration isolation
== References ==
Bessel functions, named after Friedrich Bessel, who was the first to study them systematically in 1824, are canonical solutions y(x) of Bessel's differential equation
$$x^{2}\frac{d^{2}y}{dx^{2}} + x\frac{dy}{dx} + \left(x^{2}-\alpha^{2}\right)y = 0$$
for an arbitrary complex number α, which represents the order of the Bessel function. Although α and −α produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of α.
The most important cases are when α is an integer or half-integer. Bessel functions for integer α are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer α are obtained when solving the Helmholtz equation in spherical coordinates.
== Applications ==
Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates. Bessel functions are therefore especially important for many problems of wave propagation and static potentials. In solving problems in cylindrical coordinate systems, one obtains Bessel functions of integer order (α = n); in spherical problems, one obtains half-integer orders (α = n + 1/2). For example:
Electromagnetic waves in a cylindrical waveguide
Pressure amplitudes of inviscid rotational flows
Heat conduction in a cylindrical object
Modes of vibration of a thin circular or annular acoustic membrane (such as a drumhead or other membranophone) or thicker plates such as sheet metal (see Kirchhoff–Love plate theory, Mindlin–Reissner plate theory)
Diffusion problems on a lattice
Solutions to the Schrödinger equation in spherical and cylindrical coordinates for a free particle
Position space representation of the Feynman propagator in quantum field theory
Solving for patterns of acoustical radiation
Frequency-dependent friction in circular pipelines
Dynamics of floating bodies
Angular resolution
Diffraction from helical objects, including DNA
Probability density function of product of two normally distributed random variables
Analysis of the surface waves generated by microtremors, in geophysics and seismology.
Bessel functions also appear in other problems, such as signal processing (e.g., see FM audio synthesis, Kaiser window, or Bessel filter).
== Definitions ==
Because this is a linear differential equation, solutions can be scaled to any amplitude. The amplitudes chosen for the functions originate from the early work in which the functions appeared as solutions to definite integrals rather than solutions to differential equations. Because the differential equation is second-order, there must be two linearly independent solutions: one of the first kind and one of the second kind. Depending upon the circumstances, however, various formulations of these solutions are convenient. Different variations are summarized in the table below and described in the following sections. The subscript n is typically used in place of α when α is known to be an integer.
Bessel functions of the second kind and the spherical Bessel functions of the second kind are sometimes denoted by Nn and nn, respectively, rather than Yn and yn.
=== Bessel functions of the first kind: Jα ===
Bessel functions of the first kind, denoted as Jα(x), are solutions of Bessel's differential equation. For integer or positive α, Bessel functions of the first kind are finite at the origin (x = 0); while for negative non-integer α, Bessel functions of the first kind diverge as x approaches zero. It is possible to define the function by
x^α times a Maclaurin series (note that α need not be an integer, and non-integer powers are not permitted in a Taylor series), which can be found by applying the Frobenius method to Bessel's equation:
$$J_{\alpha}(x) = \sum_{m=0}^{\infty} \frac{(-1)^{m}}{m!\,\Gamma(m+\alpha+1)} \left(\frac{x}{2}\right)^{2m+\alpha},$$
where Γ(z) is the gamma function, a shifted generalization of the factorial function to non-integer values. Some earlier authors define the Bessel function of the first kind differently, essentially without the division by
2 in x/2; this definition is not used in this article. The Bessel function of the first kind is an entire function if α is an integer; otherwise it is a multivalued function with a singularity at zero. The graphs of Bessel functions look roughly like oscillating sine or cosine functions that decay proportionally to x^{−1/2}
(see also their asymptotic forms below), although their roots are not generally periodic, except asymptotically for large x. (The series indicates that −J1(x) is the derivative of J0(x), much like −sin x is the derivative of cos x; more generally, the derivative of Jn(x) can be expressed in terms of Jn ± 1(x) by the identities below.)
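For illustration, the series can be evaluated directly and checked against a library implementation; a minimal sketch using SciPy:

import math
from scipy.special import jv

def J_series(alpha, x, terms=40):
    """Partial sum of the power series defining J_alpha(x)."""
    total = 0.0
    for m in range(terms):
        total += ((-1) ** m
                  / (math.factorial(m) * math.gamma(m + alpha + 1))
                  * (x / 2) ** (2 * m + alpha))
    return total

print(J_series(0.5, 3.0))   # about 0.06501
print(jv(0.5, 3.0))         # SciPy's value agrees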
For non-integer α, the functions Jα(x) and J−α(x) are linearly independent, and are therefore the two solutions of the differential equation. On the other hand, for integer order n, the following relationship is valid (the gamma function has simple poles at each of the non-positive integers):
$$J_{-n}(x) = (-1)^{n} J_{n}(x).$$
This means that the two solutions are no longer linearly independent. In this case, the second linearly independent solution is then found to be the Bessel function of the second kind, as discussed below.
==== Bessel's integrals ====
Another definition of the Bessel function, for integer values of n, is possible using an integral representation:
$$J_{n}(x) = \frac{1}{\pi} \int_{0}^{\pi} \cos(n\tau - x\sin\tau)\,d\tau = \frac{1}{\pi} \operatorname{Re}\left(\int_{0}^{\pi} e^{i(n\tau - x\sin\tau)}\,d\tau\right),$$
which is also called the Hansen–Bessel formula.
This was the approach that Bessel used, and from this definition he derived several properties of the function. The definition may be extended to non-integer orders by one of Schläfli's integrals, for Re(x) > 0:
$$J_{\alpha}(x) = \frac{1}{\pi} \int_{0}^{\pi} \cos(\alpha\tau - x\sin\tau)\,d\tau - \frac{\sin(\alpha\pi)}{\pi} \int_{0}^{\infty} e^{-x\sinh t - \alpha t}\,dt.$$
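Both representations are easy to check numerically; a minimal sketch using SciPy quadrature for the integer-order integral:

import numpy as np
from scipy.integrate import quad
from scipy.special import jn

def J_integral(n, x):
    """Hansen–Bessel integral evaluated by adaptive quadrature."""
    val, _ = quad(lambda tau: np.cos(n * tau - x * np.sin(tau)), 0, np.pi)
    return val / np.pi

print(J_integral(2, 5.0))   # about 0.04657
print(jn(2, 5.0))           # series-based value agrees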
==== Relation to hypergeometric series ====
The Bessel functions can be expressed in terms of the generalized hypergeometric series as
$$J_{\alpha}(x) = \frac{\left(\frac{x}{2}\right)^{\alpha}}{\Gamma(\alpha+1)}\; {}_{0}F_{1}\left(\alpha+1; -\frac{x^{2}}{4}\right).$$
This expression is related to the development of Bessel functions in terms of the Bessel–Clifford function.
==== Relation to Laguerre polynomials ====
In terms of the Laguerre polynomials Lk and arbitrarily chosen parameter t, the Bessel function can be expressed as
$$\frac{J_{\alpha}(x)}{\left(\frac{x}{2}\right)^{\alpha}} = \frac{e^{-t}}{\Gamma(\alpha+1)} \sum_{k=0}^{\infty} \frac{L_{k}^{(\alpha)}\left(\frac{x^{2}}{4t}\right)}{\binom{k+\alpha}{k}} \frac{t^{k}}{k!}.$$
=== Bessel functions of the second kind: Yα ===
The Bessel functions of the second kind, denoted by Yα(x), occasionally denoted instead by Nα(x), are solutions of the Bessel differential equation that have a singularity at the origin (x = 0) and are multivalued. These are sometimes called Weber functions, as they were introduced by H. M. Weber (1873), and also Neumann functions after Carl Neumann.
For non-integer α, Yα(x) is related to Jα(x) by
$$Y_{\alpha}(x) = \frac{J_{\alpha}(x)\cos(\alpha\pi) - J_{-\alpha}(x)}{\sin(\alpha\pi)}.$$
In the case of integer order n, the function is defined by taking the limit as a non-integer α tends to n:
$$Y_{n}(x) = \lim_{\alpha\to n} Y_{\alpha}(x).$$
If n is a nonnegative integer, we have the series
$$Y_{n}(z) = -\frac{\left(\frac{z}{2}\right)^{-n}}{\pi} \sum_{k=0}^{n-1} \frac{(n-k-1)!}{k!} \left(\frac{z^{2}}{4}\right)^{k} + \frac{2}{\pi} J_{n}(z) \ln\frac{z}{2} - \frac{\left(\frac{z}{2}\right)^{n}}{\pi} \sum_{k=0}^{\infty} \left(\psi(k+1) + \psi(n+k+1)\right) \frac{\left(-\frac{z^{2}}{4}\right)^{k}}{k!\,(n+k)!}$$
where ψ(z) is the digamma function, the logarithmic derivative of the gamma function.
There is also a corresponding integral formula (for Re(x) > 0):
$$Y_{n}(x) = \frac{1}{\pi} \int_{0}^{\pi} \sin(x\sin\theta - n\theta)\,d\theta - \frac{1}{\pi} \int_{0}^{\infty} \left(e^{nt} + (-1)^{n} e^{-nt}\right) e^{-x\sinh t}\,dt.$$
In the case where n = 0 (with γ being Euler's constant):
$$Y_{0}(x) = \frac{4}{\pi^{2}} \int_{0}^{\pi/2} \cos(x\cos\theta) \left(\gamma + \ln\left(2x\sin^{2}\theta\right)\right)\,d\theta.$$
Yα(x) is necessary as the second linearly independent solution of Bessel's equation when α is an integer. But Yα(x) has more meaning than that: it can be considered as a "natural" partner of Jα(x). See also the subsection on Hankel functions below.
When α is an integer, moreover, as was similarly the case for the functions of the first kind, the following relationship is valid:
$$Y_{-n}(x) = (-1)^{n} Y_{n}(x).$$
Both Jα(x) and Yα(x) are holomorphic functions of x on the complex plane cut along the negative real axis. When α is an integer, the Bessel functions J are entire functions of x. If x is held fixed at a non-zero value, then the Bessel functions are entire functions of α.
The Bessel function of the second kind for integer α is an example of the second kind of solution in Fuchs's theorem.
=== Hankel functions: H(1)α, H(2)α ===
Another important formulation of the two linearly independent solutions to Bessel's equation are the Hankel functions of the first and second kind, H(1)α(x) and H(2)α(x), defined as
$$H_{\alpha}^{(1)}(x) = J_{\alpha}(x) + iY_{\alpha}(x), \qquad H_{\alpha}^{(2)}(x) = J_{\alpha}(x) - iY_{\alpha}(x),$$
where i is the imaginary unit. These linear combinations are also known as Bessel functions of the third kind; they are two linearly independent solutions of Bessel's differential equation. They are named after Hermann Hankel.
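A minimal numerical check of these defining combinations, using SciPy, which provides the Hankel functions directly:

from scipy.special import hankel1, hankel2, jv, yv

alpha, x = 1.5, 2.0
print(hankel1(alpha, x))                  # H(1)_alpha(x)
print(jv(alpha, x) + 1j * yv(alpha, x))   # J + iY: the same value
print(hankel2(alpha, x))                  # H(2)_alpha(x)
print(jv(alpha, x) - 1j * yv(alpha, x))   # J - iY: the same value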
These forms of linear combination satisfy numerous simple-looking properties, like asymptotic formulae or integral representations. Here, "simple" means an appearance of a factor of the form e^{if(x)}. For real x > 0, where Jα(x) and Yα(x) are real-valued, the Bessel functions of the first and second kind are the real and imaginary parts, respectively, of the first Hankel function, and the real and negative imaginary parts of the second Hankel function. Thus, the above formulae are analogs of Euler's formula, substituting H(1)α(x), H(2)α(x) for e^{±ix} and Jα(x), Yα(x) for cos(x), sin(x), as explicitly shown in the asymptotic expansion.
The Hankel functions are used to express outward- and inward-propagating cylindrical-wave solutions of the cylindrical wave equation, respectively (or vice versa, depending on the sign convention for the frequency).
Using the previous relationships, they can be expressed as
$$H_{\alpha}^{(1)}(x) = \frac{J_{-\alpha}(x) - e^{-\alpha\pi i} J_{\alpha}(x)}{i\sin\alpha\pi}, \qquad H_{\alpha}^{(2)}(x) = \frac{J_{-\alpha}(x) - e^{\alpha\pi i} J_{\alpha}(x)}{-i\sin\alpha\pi}.$$
If α is an integer, the limit has to be calculated. The following relationships are valid, whether α is an integer or not:
$$H_{-\alpha}^{(1)}(x) = e^{\alpha\pi i} H_{\alpha}^{(1)}(x), \qquad H_{-\alpha}^{(2)}(x) = e^{-\alpha\pi i} H_{\alpha}^{(2)}(x).$$
In particular, if α = m + 1/2 with m a nonnegative integer, the above relations imply directly that
$$J_{-(m+\frac{1}{2})}(x) = (-1)^{m+1} Y_{m+\frac{1}{2}}(x), \qquad Y_{-(m+\frac{1}{2})}(x) = (-1)^{m} J_{m+\frac{1}{2}}(x).$$
These are useful in developing the spherical Bessel functions (see below).
The Hankel functions admit the following integral representations for Re(x) > 0:
$$H_{\alpha}^{(1)}(x) = \frac{1}{\pi i} \int_{-\infty}^{+\infty+\pi i} e^{x\sinh t - \alpha t}\,dt, \qquad H_{\alpha}^{(2)}(x) = -\frac{1}{\pi i} \int_{-\infty}^{+\infty-\pi i} e^{x\sinh t - \alpha t}\,dt,$$
where the integration limits indicate integration along a contour that can be chosen as follows: from −∞ to 0 along the negative real axis, from 0 to ±πi along the imaginary axis, and from ±πi to +∞ ± πi along a contour parallel to the real axis.
=== Modified Bessel functions: Iα, Kα ===
The Bessel functions are valid even for complex arguments x, and an important special case is that of a purely imaginary argument. In this case, the solutions to the Bessel equation are called the modified Bessel functions (or occasionally the hyperbolic Bessel functions) of the first and second kind and are defined as
$$I_{\alpha}(x) = i^{-\alpha} J_{\alpha}(ix) = \sum_{m=0}^{\infty} \frac{1}{m!\,\Gamma(m+\alpha+1)} \left(\frac{x}{2}\right)^{2m+\alpha}, \qquad K_{\alpha}(x) = \frac{\pi}{2} \frac{I_{-\alpha}(x) - I_{\alpha}(x)}{\sin\alpha\pi},$$
when α is not an integer. When α is an integer, then the limit is used. These are chosen to be real-valued for real and positive arguments x. The series expansion for Iα(x) is thus similar to that for Jα(x), but without the alternating (−1)m factor.
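A minimal sketch checking both definitions numerically with SciPy (the order 0.7 and argument 1.3 are arbitrary non-integer test values):

import numpy as np
from scipy.special import iv, kv, jv

alpha, x = 0.7, 1.3
print(iv(alpha, x))
print((1j ** -alpha * jv(alpha, 1j * x)).real)   # i^(-alpha) J_alpha(ix): same value
print(kv(alpha, x))
print(np.pi / 2 * (iv(-alpha, x) - iv(alpha, x)) / np.sin(alpha * np.pi))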
Kα can be expressed in terms of Hankel functions:
$$K_{\alpha}(x) = \begin{cases} \frac{\pi}{2} i^{\alpha+1} H_{\alpha}^{(1)}(ix) & -\pi < \arg x \leq \frac{\pi}{2} \\ \frac{\pi}{2} (-i)^{\alpha+1} H_{\alpha}^{(2)}(-ix) & -\frac{\pi}{2} < \arg x \leq \pi \end{cases}$$
Using these two formulae, the result for Jα²(z) + Yα²(z), commonly known as Nicholson's integral or Nicholson's formula, can be obtained as follows:
$$J_{\alpha}^{2}(x) + Y_{\alpha}^{2}(x) = \frac{8}{\pi^{2}} \int_{0}^{\infty} \cosh(2\alpha t)\, K_{0}(2x\sinh t)\,dt,$$
given that the condition Re(x) > 0 is met. It can also be shown that
$$J_{\alpha}^{2}(x) + Y_{\alpha}^{2}(x) = \frac{8\cos(\alpha\pi)}{\pi^{2}} \int_{0}^{\infty} K_{2\alpha}(2x\sinh t)\,dt,$$
only when |Re(α)| < 1/2 and Re(x) ≥ 0 but not when x = 0.
We can express the first and second Bessel functions in terms of the modified Bessel functions (these are valid if −π < arg z ≤ π/2):
$$J_{\alpha}(iz) = e^{\frac{\alpha\pi i}{2}} I_{\alpha}(z), \qquad Y_{\alpha}(iz) = e^{\frac{(\alpha+1)\pi i}{2}} I_{\alpha}(z) - \tfrac{2}{\pi} e^{-\frac{\alpha\pi i}{2}} K_{\alpha}(z).$$
Iα(x) and Kα(x) are the two linearly independent solutions to the modified Bessel's equation:
$$x^{2}\frac{d^{2}y}{dx^{2}} + x\frac{dy}{dx} - \left(x^{2}+\alpha^{2}\right)y = 0.$$
Unlike the ordinary Bessel functions, which are oscillating as functions of a real argument, Iα and Kα are exponentially growing and decaying functions respectively. Like the ordinary Bessel function Jα, the function Iα goes to zero at x = 0 for α > 0 and is finite at x = 0 for α = 0. Analogously, Kα diverges at x = 0 with the singularity being of logarithmic type for K0, and 1/2Γ(|α|)(2/x)|α| otherwise.
Two integral formulas for the modified Bessel functions are (for Re(x) > 0):
$$I_{\alpha}(x) = \frac{1}{\pi} \int_{0}^{\pi} e^{x\cos\theta} \cos\alpha\theta\,d\theta - \frac{\sin\alpha\pi}{\pi} \int_{0}^{\infty} e^{-x\cosh t - \alpha t}\,dt, \qquad K_{\alpha}(x) = \int_{0}^{\infty} e^{-x\cosh t} \cosh\alpha t\,dt.$$
Bessel functions can be described as Fourier transforms of powers of quadratic functions. For example (for Re(ω) > 0):
$$2K_{0}(\omega) = \int_{-\infty}^{\infty} \frac{e^{i\omega t}}{\sqrt{t^{2}+1}}\,dt.$$
It can be proven by showing equality to the above integral definition for K0, by integrating over a closed contour in the first quadrant of the complex plane.
Modified Bessel functions of the second kind may be represented with Bassett's integral
$$K_{n}(xz) = \frac{\Gamma\left(n+\frac{1}{2}\right)(2z)^{n}}{\sqrt{\pi}\,x^{n}} \int_{0}^{\infty} \frac{\cos(xt)\,dt}{\left(t^{2}+z^{2}\right)^{n+\frac{1}{2}}}.$$
Modified Bessel functions K1/3 and K2/3 can be represented in terms of rapidly convergent integrals
$$K_{\frac{1}{3}}(\xi) = \sqrt{3} \int_{0}^{\infty} \exp\left(-\xi\left(1+\frac{4x^{2}}{3}\right)\sqrt{1+\frac{x^{2}}{3}}\right)dx, \qquad K_{\frac{2}{3}}(\xi) = \frac{1}{\sqrt{3}} \int_{0}^{\infty} \frac{3+2x^{2}}{\sqrt{1+\frac{x^{2}}{3}}} \exp\left(-\xi\left(1+\frac{4x^{2}}{3}\right)\sqrt{1+\frac{x^{2}}{3}}\right)dx.$$
The modified Bessel function K_{1/2}(ξ) = (2ξ/π)^{−1/2} e^{−ξ} is useful to represent the Laplace distribution as an exponential-scale mixture of normal distributions.
The modified Bessel function of the second kind has also been called by the following names (now rare):
Basset function after Alfred Barnard Basset
Modified Bessel function of the third kind
Modified Hankel function
Macdonald function after Hector Munro Macdonald
=== Spherical Bessel functions: jn, yn ===
When solving the Helmholtz equation in spherical coordinates by separation of variables, the radial equation has the form
$$x^{2}\frac{d^{2}y}{dx^{2}} + 2x\frac{dy}{dx} + \left(x^{2}-n(n+1)\right)y = 0.$$
The two linearly independent solutions to this equation are called the spherical Bessel functions jn and yn, and are related to the ordinary Bessel functions Jn and Yn by
$$j_{n}(x) = \sqrt{\frac{\pi}{2x}}\, J_{n+\frac{1}{2}}(x), \qquad y_{n}(x) = \sqrt{\frac{\pi}{2x}}\, Y_{n+\frac{1}{2}}(x) = (-1)^{n+1} \sqrt{\frac{\pi}{2x}}\, J_{-n-\frac{1}{2}}(x).$$
yn is also denoted nn or ηn; some authors call these functions the spherical Neumann functions.
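A minimal numerical check of these half-integer-order relations, using SciPy's dedicated spherical Bessel routines:

import numpy as np
from scipy.special import spherical_jn, spherical_yn, jv, yv

n, x = 2, 4.0
print(spherical_jn(n, x))
print(np.sqrt(np.pi / (2 * x)) * jv(n + 0.5, x))   # same value
print(spherical_yn(n, x))
print(np.sqrt(np.pi / (2 * x)) * yv(n + 0.5, x))   # same value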
From the relations to the ordinary Bessel functions it is directly seen that:
$$j_{n}(x) = (-1)^{n} y_{-n-1}(x), \qquad y_{n}(x) = (-1)^{n+1} j_{-n-1}(x).$$
The spherical Bessel functions can also be written as (Rayleigh's formulas)
$$j_{n}(x) = (-x)^{n} \left(\frac{1}{x}\frac{d}{dx}\right)^{n} \frac{\sin x}{x}, \qquad y_{n}(x) = -(-x)^{n} \left(\frac{1}{x}\frac{d}{dx}\right)^{n} \frac{\cos x}{x}.$$
The zeroth spherical Bessel function j0(x) is also known as the (unnormalized) sinc function. The first few spherical Bessel functions are:
$$j_{0}(x) = \frac{\sin x}{x}, \quad j_{1}(x) = \frac{\sin x}{x^{2}} - \frac{\cos x}{x}, \quad j_{2}(x) = \left(\frac{3}{x^{2}}-1\right)\frac{\sin x}{x} - \frac{3\cos x}{x^{2}}, \quad j_{3}(x) = \left(\frac{15}{x^{3}}-\frac{6}{x}\right)\frac{\sin x}{x} - \left(\frac{15}{x^{2}}-1\right)\frac{\cos x}{x}$$
and
$$y_{0}(x) = -j_{-1}(x) = -\frac{\cos x}{x}, \quad y_{1}(x) = j_{-2}(x) = -\frac{\cos x}{x^{2}} - \frac{\sin x}{x}, \quad y_{2}(x) = -j_{-3}(x) = \left(-\frac{3}{x^{2}}+1\right)\frac{\cos x}{x} - \frac{3\sin x}{x^{2}}, \quad y_{3}(x) = j_{-4}(x) = \left(-\frac{15}{x^{3}}+\frac{6}{x}\right)\frac{\cos x}{x} - \left(\frac{15}{x^{2}}-1\right)\frac{\sin x}{x}.$$
The first few non-zero roots of the first few spherical Bessel functions are:
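These roots can be computed numerically by bracketing the sign changes of jn and refining each bracket with a root finder; a minimal sketch using SciPy:

import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn

def first_roots(n, how_many=3, x_max=20.0, step=0.05):
    """Return the first non-zero roots of j_n on (0, x_max)."""
    roots = []
    x = np.arange(step, x_max, step)
    vals = spherical_jn(n, x)
    for i in np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:])):
        roots.append(brentq(lambda t: spherical_jn(n, t), x[i], x[i + 1]))
        if len(roots) == how_many:
            break
    return roots

for n in range(3):
    print(n, [round(r, 4) for r in first_roots(n)])
# j0: pi, 2*pi, 3*pi;  j1: 4.4934, 7.7253, 10.9041;  j2: 5.7635, 9.0950, 12.3229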
==== Generating function ====
The spherical Bessel functions have the generating functions
$$\frac{1}{z}\cos\left(\sqrt{z^{2}-2zt}\right) = \sum_{n=0}^{\infty} \frac{t^{n}}{n!}\, j_{n-1}(z), \qquad \frac{1}{z}\sin\left(\sqrt{z^{2}-2zt}\right) = \sum_{n=0}^{\infty} \frac{t^{n}}{n!}\, y_{n-1}(z).$$
==== Finite series expansions ====
In contrast to the integer-order Bessel functions Jn(x), Yn(x), the spherical Bessel functions jn(x), yn(x) have a finite series expression:
$$j_{n}(x) = \sqrt{\frac{\pi}{2x}}\, J_{n+\frac{1}{2}}(x) = \frac{1}{2x}\left[e^{ix}\sum_{r=0}^{n}\frac{i^{r-n-1}(n+r)!}{r!\,(n-r)!\,(2x)^{r}} + e^{-ix}\sum_{r=0}^{n}\frac{(-i)^{r-n-1}(n+r)!}{r!\,(n-r)!\,(2x)^{r}}\right] = \frac{1}{x}\left[\sin\left(x-\frac{n\pi}{2}\right)\sum_{r=0}^{\left[\frac{n}{2}\right]}\frac{(-1)^{r}(n+2r)!}{(2r)!\,(n-2r)!\,(2x)^{2r}} + \cos\left(x-\frac{n\pi}{2}\right)\sum_{r=0}^{\left[\frac{n-1}{2}\right]}\frac{(-1)^{r}(n+2r+1)!}{(2r+1)!\,(n-2r-1)!\,(2x)^{2r+1}}\right]$$
$$y_{n}(x) = (-1)^{n+1} j_{-n-1}(x) = (-1)^{n+1}\sqrt{\frac{\pi}{2x}}\, J_{-\left(n+\frac{1}{2}\right)}(x) = \frac{(-1)^{n+1}}{2x}\left[e^{ix}\sum_{r=0}^{n}\frac{i^{r+n}(n+r)!}{r!\,(n-r)!\,(2x)^{r}} + e^{-ix}\sum_{r=0}^{n}\frac{(-i)^{r+n}(n+r)!}{r!\,(n-r)!\,(2x)^{r}}\right] = \frac{(-1)^{n+1}}{x}\left[\cos\left(x+\frac{n\pi}{2}\right)\sum_{r=0}^{\left[\frac{n}{2}\right]}\frac{(-1)^{r}(n+2r)!}{(2r)!\,(n-2r)!\,(2x)^{2r}} - \sin\left(x+\frac{n\pi}{2}\right)\sum_{r=0}^{\left[\frac{n-1}{2}\right]}\frac{(-1)^{r}(n+2r+1)!}{(2r+1)!\,(n-2r-1)!\,(2x)^{2r+1}}\right]$$
==== Differential relations ====
In the following, fn is any of jn, yn, h(1)n, h(2)n for n = 0, ±1, ±2, ...
{\displaystyle {\begin{aligned}\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{n+1}f_{n}(z)\right)&=z^{n-m+1}f_{n-m}(z),\\\left({\frac {1}{z}}{\frac {d}{dz}}\right)^{m}\left(z^{-n}f_{n}(z)\right)&=(-1)^{m}z^{-n-m}f_{n+m}(z).\end{aligned}}}
=== Spherical Hankel functions: h(1)n, h(2)n ===
There are also spherical analogues of the Hankel functions:
{\displaystyle {\begin{aligned}h_{n}^{(1)}(x)&=j_{n}(x)+iy_{n}(x),\\h_{n}^{(2)}(x)&=j_{n}(x)-iy_{n}(x).\end{aligned}}}
There are simple closed-form expressions for the Bessel functions of half-integer order in terms of the standard trigonometric functions, and therefore for the spherical Bessel functions. In particular, for non-negative integers n:
{\displaystyle h_{n}^{(1)}(x)=(-i)^{n+1}{\frac {e^{ix}}{x}}\sum _{m=0}^{n}{\frac {i^{m}}{m!\,(2x)^{m}}}{\frac {(n+m)!}{(n-m)!}},}
and h(2)n is the complex conjugate of this (for real x). It follows, for example, that j0(x) = sin x/x and y0(x) = −cos x/x, and so on.
The spherical Hankel functions appear in problems involving spherical wave propagation, for example in the multipole expansion of the electromagnetic field.
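As a sanity check of these closed forms, the n = 0 spherical Hankel function built from jn and yn can be compared against −i e^{ix}/x; a minimal sketch, assuming SciPy is available:

```python
# A sanity check (assuming SciPy): h_0^{(1)}(x) = j_0(x) + i y_0(x)
# should equal -i e^{ix} / x.
import numpy as np
from scipy.special import spherical_jn, spherical_yn

x = np.linspace(0.5, 10.0, 20)
h0 = spherical_jn(0, x) + 1j * spherical_yn(0, x)  # definition
closed_form = -1j * np.exp(1j * x) / x             # the n = 0 case of the sum above
print(np.allclose(h0, closed_form))                # True
```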
=== Riccati–Bessel functions: Sn, Cn, ξn, ζn ===
Riccati–Bessel functions only slightly differ from spherical Bessel functions:
{\displaystyle {\begin{aligned}S_{n}(x)&=xj_{n}(x)={\sqrt {\frac {\pi x}{2}}}J_{n+{\frac {1}{2}}}(x)\\C_{n}(x)&=-xy_{n}(x)=-{\sqrt {\frac {\pi x}{2}}}Y_{n+{\frac {1}{2}}}(x)\\\xi _{n}(x)&=xh_{n}^{(1)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(1)}(x)=S_{n}(x)-iC_{n}(x)\\\zeta _{n}(x)&=xh_{n}^{(2)}(x)={\sqrt {\frac {\pi x}{2}}}H_{n+{\frac {1}{2}}}^{(2)}(x)=S_{n}(x)+iC_{n}(x)\end{aligned}}}
They satisfy the differential equation
{\displaystyle x^{2}{\frac {d^{2}y}{dx^{2}}}+\left(x^{2}-n(n+1)\right)y=0.}
This kind of differential equation appears, for example, in quantum mechanics when solving the radial component of the Schrödinger equation with a hypothetical spherical infinite potential barrier. The differential equation and the Riccati–Bessel solutions also arise in the problem of scattering of electromagnetic waves by a sphere, known as Mie scattering after the first published solution by Mie (1908). See e.g., Du (2004) for recent developments and references.
Following Debye (1909), the notation ψn, χn is sometimes used instead of Sn, Cn.
== Asymptotic forms ==
The Bessel functions have the following asymptotic forms. For small arguments
{\displaystyle 0<z\ll {\sqrt {\alpha +1}}}
, one obtains, when
{\displaystyle \alpha }
is not a negative integer:
{\displaystyle J_{\alpha }(z)\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha }.}
When α is a negative integer, we have
{\displaystyle J_{\alpha }(z)\sim {\frac {(-1)^{\alpha }}{(-\alpha )!}}\left({\frac {2}{z}}\right)^{\alpha }.}
For the Bessel function of the second kind we have three cases:
{\displaystyle Y_{\alpha }(z)\sim {\begin{cases}{\dfrac {2}{\pi }}\left(\ln \left({\dfrac {z}{2}}\right)+\gamma \right)&{\text{if }}\alpha =0\\[1ex]-{\dfrac {\Gamma (\alpha )}{\pi }}\left({\dfrac {2}{z}}\right)^{\alpha }+{\dfrac {1}{\Gamma (\alpha +1)}}\left({\dfrac {z}{2}}\right)^{\alpha }\cot(\alpha \pi )&{\text{if }}\alpha {\text{ is not a non-positive integer (one term dominates unless }}\alpha {\text{ is imaginary)}},\\[1ex]-{\dfrac {(-1)^{\alpha }\Gamma (-\alpha )}{\pi }}\left({\dfrac {z}{2}}\right)^{\alpha }&{\text{if }}\alpha {\text{ is a negative integer,}}\end{cases}}}
where γ is the Euler–Mascheroni constant (0.5772...).
For large real arguments z ≫ |α2 − 1/4|, one cannot write a true asymptotic form for Bessel functions of the first and second kind (unless α is half-integer) because they have zeros all the way out to infinity, which would have to be matched exactly by any asymptotic expansion. However, for a given value of arg z one can write an equation containing a term of order |z|−1:
{\displaystyle {\begin{aligned}J_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\cos \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi ,\\Y_{\alpha }(z)&={\sqrt {\frac {2}{\pi z}}}\left(\sin \left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)+e^{\left|\operatorname {Im} (z)\right|}{\mathcal {O}}\left(|z|^{-1}\right)\right)&&{\text{for }}\left|\arg z\right|<\pi .\end{aligned}}}
(For α = 1/2, the last terms in these formulas drop out completely; see the spherical Bessel functions above.)
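The quality of the leading term can be checked numerically; the sketch below (assuming SciPy) compares Jα(z) with the cosine approximation at a few large real arguments.

```python
# A numerical look (assuming SciPy) at the leading large-argument form
# J_alpha(z) ~ sqrt(2/(pi z)) cos(z - alpha pi/2 - pi/4) on the real axis.
import numpy as np
from scipy.special import jv

alpha = 2.0
z = np.array([10.0, 50.0, 200.0])
exact = jv(alpha, z)
leading = np.sqrt(2.0 / (np.pi * z)) * np.cos(z - alpha * np.pi / 2 - np.pi / 4)
print(np.abs(exact - leading))  # shrinks with z, roughly like z**(-3/2)
```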
The asymptotic forms for the Hankel functions are:
{\displaystyle {\begin{aligned}H_{\alpha }^{(1)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<2\pi ,\\H_{\alpha }^{(2)}(z)&\sim {\sqrt {\frac {2}{\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-2\pi <\arg z<\pi .\end{aligned}}}
These can be extended to other values of arg z using equations relating H(1)α(zeimπ) and H(2)α(zeimπ) to H(1)α(z) and H(2)α(z).
Notably, although the Bessel function of the first kind is the average of the two Hankel functions, Jα(z) is not asymptotic to the average of these two asymptotic forms when z is negative (because one or the other will not be correct there, depending on the arg z used). But the asymptotic forms for the Hankel functions do permit writing asymptotic forms for the Bessel functions of first and second kinds for complex (non-real) z, so long as |z| goes to infinity at a constant phase angle arg z (using the square root having positive real part):
{\displaystyle {\begin{aligned}J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]J_{\alpha }(z)&\sim {\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi ,\\[1ex]Y_{\alpha }(z)&\sim -i{\frac {1}{\sqrt {2\pi z}}}e^{i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}-\pi <\arg z<0,\\[1ex]Y_{\alpha }(z)&\sim i{\frac {1}{\sqrt {2\pi z}}}e^{-i\left(z-{\frac {\alpha \pi }{2}}-{\frac {\pi }{4}}\right)}&&{\text{for }}0<\arg z<\pi .\end{aligned}}}
For the modified Bessel functions, Hankel developed asymptotic expansions as well:
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {e^{z}}{\sqrt {2\pi z}}}\left(1-{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}-{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {\pi }{2}},\\K_{\alpha }(z)&\sim {\sqrt {\frac {\pi }{2z}}}e^{-z}\left(1+{\frac {4\alpha ^{2}-1}{8z}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)}{2!(8z)^{2}}}+{\frac {\left(4\alpha ^{2}-1\right)\left(4\alpha ^{2}-9\right)\left(4\alpha ^{2}-25\right)}{3!(8z)^{3}}}+\cdots \right)&&{\text{for }}\left|\arg z\right|<{\frac {3\pi }{2}}.\end{aligned}}}
There is also the asymptotic form (for large real
{\displaystyle z}
)
{\displaystyle {\begin{aligned}I_{\alpha }(z)={\frac {1}{{\sqrt {2\pi z}}{\sqrt[{4}]{1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\exp \left(-\alpha \operatorname {arcsinh} \left({\frac {\alpha }{z}}\right)+z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}\right)\left(1+{\mathcal {O}}\left({\frac {1}{z{\sqrt {1+{\frac {\alpha ^{2}}{z^{2}}}}}}}\right)\right).\end{aligned}}}
When α = 1/2, all the terms except the first vanish, and we have
{\displaystyle {\begin{aligned}I_{{1}/{2}}(z)&={\sqrt {\frac {2}{\pi }}}{\frac {\sinh(z)}{\sqrt {z}}}\sim {\frac {e^{z}}{\sqrt {2\pi z}}}&&{\text{for }}\left|\arg z\right|<{\tfrac {\pi }{2}},\\[1ex]K_{{1}/{2}}(z)&={\sqrt {\frac {\pi }{2}}}{\frac {e^{-z}}{\sqrt {z}}}.\end{aligned}}}
For small arguments
{\displaystyle 0<|z|\ll {\sqrt {\alpha +1}}}
, we have
{\displaystyle {\begin{aligned}I_{\alpha }(z)&\sim {\frac {1}{\Gamma (\alpha +1)}}\left({\frac {z}{2}}\right)^{\alpha },\\[1ex]K_{\alpha }(z)&\sim {\begin{cases}-\ln \left({\dfrac {z}{2}}\right)-\gamma &{\text{if }}\alpha =0\\[1ex]{\frac {\Gamma (\alpha )}{2}}\left({\dfrac {2}{z}}\right)^{\alpha }&{\text{if }}\alpha >0\end{cases}}\end{aligned}}}
== Properties ==
For integer order α = n, Jn is often defined via a Laurent series for a generating function:
{\displaystyle e^{{\frac {x}{2}}\left(t-{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }J_{n}(x)t^{n}}
an approach used by P. A. Hansen in 1843. (This can be generalized to non-integer order by contour integration or other methods.)
Infinite series of Bessel functions in the form
{\textstyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)}
where
{\displaystyle \nu ,p\in \mathbb {Z} ,\ N\in \mathbb {Z} ^{+}}
arise in many physical systems and are defined in closed form by the Sung series. For example, when N = 3:
{\textstyle \sum _{\nu =-\infty }^{\infty }J_{3\nu +p}(x)={\frac {1}{3}}\left[1+2\cos {(x{\sqrt {3}}/2-2\pi p/3)}\right]}
More generally, the Sung series and the alternating Sung series are written as:
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {2\pi q/N}}e^{-i2\pi pq/N}}
{\displaystyle \sum _{\nu =-\infty }^{\infty }(-1)^{\nu }J_{N\nu +p}(x)={\frac {1}{N}}\sum _{q=0}^{N-1}e^{ix\sin {(2q+1)\pi /N}}e^{-i(2q+1)\pi p/N}}
A series expansion using Bessel functions (Kapteyn series) is
{\displaystyle {\frac {1}{1-z}}=1+2\sum _{n=1}^{\infty }J_{n}(nz).}
Another important relation for integer orders is the Jacobi–Anger expansion:
{\displaystyle e^{iz\cos \phi }=\sum _{n=-\infty }^{\infty }i^{n}J_{n}(z)e^{in\phi }}
and
{\displaystyle e^{\pm iz\sin \phi }=J_{0}(z)+2\sum _{n=1}^{\infty }J_{2n}(z)\cos(2n\phi )\pm 2i\sum _{n=0}^{\infty }J_{2n+1}(z)\sin((2n+1)\phi )}
which is used to expand a plane wave as a sum of cylindrical waves, or to find the Fourier series of a tone-modulated FM signal.
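A truncated version of the expansion is easy to verify numerically; the following sketch (assuming SciPy, with an illustrative truncation order) checks the exponential-form identity at one point.

```python
# A truncated check (assuming SciPy; truncation order 30 is an illustrative
# choice) of e^{i z cos(phi)} = sum_n i^n J_n(z) e^{i n phi}.
import numpy as np
from scipy.special import jv

z, phi = 3.0, 0.7
n = np.arange(-30, 31)
series = np.sum((1j) ** n * jv(n, z) * np.exp(1j * n * phi))
print(abs(series - np.exp(1j * z * np.cos(phi))))  # ~ machine precision
```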
More generally, a series
{\displaystyle f(z)=a_{0}^{\nu }J_{\nu }(z)+2\cdot \sum _{k=1}^{\infty }a_{k}^{\nu }J_{\nu +k}(z)}
is called the Neumann expansion of f. The coefficients for ν = 0 have the explicit form
{\displaystyle a_{k}^{0}={\frac {1}{2\pi i}}\int _{|z|=c}f(z)O_{k}(z)\,dz}
where Ok is Neumann's polynomial.
Selected functions admit the special representation
{\displaystyle f(z)=\sum _{k=0}^{\infty }a_{k}^{\nu }J_{\nu +2k}(z)}
with
{\displaystyle a_{k}^{\nu }=2(\nu +2k)\int _{0}^{\infty }f(z){\frac {J_{\nu +2k}(z)}{z}}\,dz}
due to the orthogonality relation
{\displaystyle \int _{0}^{\infty }J_{\alpha }(z)J_{\beta }(z){\frac {dz}{z}}={\frac {2}{\pi }}{\frac {\sin \left({\frac {\pi }{2}}(\alpha -\beta )\right)}{\alpha ^{2}-\beta ^{2}}}}
More generally, if f has a branch-point near the origin of such a nature that
{\displaystyle f(z)=\sum _{k=0}a_{k}J_{\nu +k}(z)}
then
{\displaystyle {\mathcal {L}}\left\{\sum _{k=0}a_{k}J_{\nu +k}\right\}(s)={\frac {1}{\sqrt {1+s^{2}}}}\sum _{k=0}{\frac {a_{k}}{\left(s+{\sqrt {1+s^{2}}}\right)^{\nu +k}}}}
or
{\displaystyle \sum _{k=0}a_{k}\xi ^{\nu +k}={\frac {1+\xi ^{2}}{2\xi }}{\mathcal {L}}\{f\}\left({\frac {1-\xi ^{2}}{2\xi }}\right)}
where
{\displaystyle {\mathcal {L}}\{f\}}
is the Laplace transform of f.
Another way to define the Bessel functions is the Poisson representation formula and the Mehler–Sonine formula:
{\displaystyle {\begin{aligned}J_{\nu }(z)&={\frac {\left({\frac {z}{2}}\right)^{\nu }}{\Gamma \left(\nu +{\frac {1}{2}}\right){\sqrt {\pi }}}}\int _{-1}^{1}e^{izs}\left(1-s^{2}\right)^{\nu -{\frac {1}{2}}}\,ds\\[5px]&={\frac {2}{{\left({\frac {z}{2}}\right)}^{\nu }\cdot {\sqrt {\pi }}\cdot \Gamma \left({\frac {1}{2}}-\nu \right)}}\int _{1}^{\infty }{\frac {\sin zu}{\left(u^{2}-1\right)^{\nu +{\frac {1}{2}}}}}\,du\end{aligned}}}
where ν > −1/2 and z ∈ C.
This formula is especially useful when working with Fourier transforms.
Because Bessel's equation becomes Hermitian (self-adjoint) if it is divided by x, the solutions must satisfy an orthogonality relationship for appropriate boundary conditions. In particular, it follows that:
{\displaystyle \int _{0}^{1}xJ_{\alpha }\left(xu_{\alpha ,m}\right)J_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[J_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}={\frac {\delta _{m,n}}{2}}\left[J_{\alpha }'\left(u_{\alpha ,m}\right)\right]^{2}}
where α > −1, δm,n is the Kronecker delta, and uα,m is the mth zero of Jα(x). This orthogonality relation can then be used to extract the coefficients in the Fourier–Bessel series, where a function is expanded in the basis of the functions Jα(x uα,m) for fixed α and varying m.
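A minimal sketch of this coefficient extraction, assuming SciPy is available and using an arbitrary illustrative test function:

```python
# A sketch (assuming SciPy; the test function f is an arbitrary illustration)
# of Fourier-Bessel coefficient extraction via the orthogonality relation:
# c_m = 2 / J_{a+1}(u_m)^2 * integral_0^1 x f(x) J_a(u_m x) dx.
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

alpha = 0
zeros = jn_zeros(alpha, 8)          # u_{alpha,m}, m = 1..8
f = lambda x: x * (1.0 - x)         # illustrative function on [0, 1]

coeffs = []
for u in zeros:
    num, _ = quad(lambda x: x * f(x) * jv(alpha, u * x), 0.0, 1.0)
    coeffs.append(2.0 * num / jv(alpha + 1, u) ** 2)

x0 = 0.4                            # reconstruct f at a sample point
approx = sum(c * jv(alpha, u * x0) for c, u in zip(coeffs, zeros))
print(approx, f(x0))                # close; improves with more terms
```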
An analogous relationship for the spherical Bessel functions follows immediately:
{\displaystyle \int _{0}^{1}x^{2}j_{\alpha }\left(xu_{\alpha ,m}\right)j_{\alpha }\left(xu_{\alpha ,n}\right)\,dx={\frac {\delta _{m,n}}{2}}\left[j_{\alpha +1}\left(u_{\alpha ,m}\right)\right]^{2}}
If one defines a boxcar function of x that depends on a small parameter ε as:
{\displaystyle f_{\varepsilon }(x)={\frac {1}{\varepsilon }}\operatorname {rect} \left({\frac {x-1}{\varepsilon }}\right)}
(where rect is the rectangle function) then the Hankel transform of it (of any given order α > −1/2), gε(k), approaches Jα(k) as ε approaches zero, for any given k. Conversely, the Hankel transform (of the same order) of gε(k) is fε(x):
{\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)g_{\varepsilon }(k)\,dk=f_{\varepsilon }(x)}
which is zero everywhere except near 1. As ε approaches zero, the right-hand side approaches δ(x − 1), where δ is the Dirac delta function. This admits the limit (in the distributional sense):
{\displaystyle \int _{0}^{\infty }kJ_{\alpha }(kx)J_{\alpha }(k)\,dk=\delta (x-1)}
A change of variables then yields the closure equation:
{\displaystyle \int _{0}^{\infty }xJ_{\alpha }(ux)J_{\alpha }(vx)\,dx={\frac {1}{u}}\delta (u-v)}
for α > −1/2. The Hankel transform can express a fairly arbitrary function as an integral of Bessel functions of different scales. For the spherical Bessel functions the orthogonality relation is:
{\displaystyle \int _{0}^{\infty }x^{2}j_{\alpha }(ux)j_{\alpha }(vx)\,dx={\frac {\pi }{2uv}}\delta (u-v)}
for α > −1.
Another important property of Bessel's equations, which follows from Abel's identity, involves the Wronskian of the solutions:
{\displaystyle A_{\alpha }(x){\frac {dB_{\alpha }}{dx}}-{\frac {dA_{\alpha }}{dx}}B_{\alpha }(x)={\frac {C_{\alpha }}{x}}}
where Aα and Bα are any two solutions of Bessel's equation, and Cα is a constant independent of x (which depends on α and on the particular Bessel functions considered). In particular,
{\displaystyle J_{\alpha }(x){\frac {dY_{\alpha }}{dx}}-{\frac {dJ_{\alpha }}{dx}}Y_{\alpha }(x)={\frac {2}{\pi x}}}
and
{\displaystyle I_{\alpha }(x){\frac {dK_{\alpha }}{dx}}-{\frac {dI_{\alpha }}{dx}}K_{\alpha }(x)=-{\frac {1}{x}},}
for α > −1.
For α > −1, the even entire function of genus 1, x−αJα(x), has only real zeros. Let
{\displaystyle 0<j_{\alpha ,1}<j_{\alpha ,2}<\cdots <j_{\alpha ,n}<\cdots }
be all its positive zeros, then
{\displaystyle J_{\alpha }(z)={\frac {\left({\frac {z}{2}}\right)^{\alpha }}{\Gamma (\alpha +1)}}\prod _{n=1}^{\infty }\left(1-{\frac {z^{2}}{j_{\alpha ,n}^{2}}}\right)}
(There are a large number of other known integrals and identities that are not reproduced here, but which can be found in the references.)
=== Recurrence relations ===
The functions Jα, Yα, H(1)α, and H(2)α all satisfy the recurrence relations
{\displaystyle {\frac {2\alpha }{x}}Z_{\alpha }(x)=Z_{\alpha -1}(x)+Z_{\alpha +1}(x)}
and
{\displaystyle 2{\frac {dZ_{\alpha }(x)}{dx}}=Z_{\alpha -1}(x)-Z_{\alpha +1}(x),}
where Z denotes J, Y, H(1), or H(2). These two identities are often combined, e.g. added or subtracted, to yield various other relations. In this way, for example, one can compute Bessel functions of higher orders (or higher derivatives) given the values at lower orders (or lower derivatives). In particular, it follows that
{\displaystyle {\begin{aligned}\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[x^{\alpha }Z_{\alpha }(x)\right]&=x^{\alpha -m}Z_{\alpha -m}(x),\\\left({\frac {1}{x}}{\frac {d}{dx}}\right)^{m}\left[{\frac {Z_{\alpha }(x)}{x^{\alpha }}}\right]&=(-1)^{m}{\frac {Z_{\alpha +m}(x)}{x^{\alpha +m}}}.\end{aligned}}}
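These recurrences also give a simple numerical scheme: starting from J0 and J1, one can step upward in the order. A minimal sketch (assuming SciPy only for the seed values) follows; upward recurrence for Jn is accurate while n < x but becomes unstable for n ≫ x, where a downward (Miller-type) recurrence is preferred.

```python
# A sketch of the upward three-term recurrence (SciPy assumed for the seeds):
# J_{a+1}(x) = (2a/x) J_a(x) - J_{a-1}(x).
from scipy.special import jv

def jn_upward(n_max, x):
    j = [jv(0, x), jv(1, x)]                 # seeds J_0, J_1
    for a in range(1, n_max):
        j.append(2.0 * a / x * j[a] - j[a - 1])
    return j

vals = jn_upward(5, 10.0)
print(vals[5], jv(5, 10.0))  # agree well, since the order 5 is below x = 10
```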
Using the previous relations, one can arrive at similar relations for the spherical Bessel functions:
{\displaystyle {\frac {2\alpha +1}{x}}j_{\alpha }(x)=j_{\alpha -1}+j_{\alpha +1}}
and
{\displaystyle {\frac {dj_{\alpha }(x)}{dx}}=j_{\alpha -1}-{\frac {\alpha +1}{x}}j_{\alpha }}
Modified Bessel functions follow similar relations:
{\displaystyle e^{\left({\frac {x}{2}}\right)\left(t+{\frac {1}{t}}\right)}=\sum _{n=-\infty }^{\infty }I_{n}(x)t^{n}}
and
{\displaystyle e^{z\cos \theta }=I_{0}(z)+2\sum _{n=1}^{\infty }I_{n}(z)\cos n\theta }
and
{\displaystyle {\frac {1}{2\pi }}\int _{0}^{2\pi }e^{z\cos(m\theta )+y\cos \theta }d\theta =I_{0}(z)I_{0}(y)+2\sum _{n=1}^{\infty }I_{n}(z)I_{mn}(y).}
The recurrence relation reads
{\displaystyle {\begin{aligned}C_{\alpha -1}(x)-C_{\alpha +1}(x)&={\frac {2\alpha }{x}}C_{\alpha }(x),\\[1ex]C_{\alpha -1}(x)+C_{\alpha +1}(x)&=2{\frac {d}{dx}}C_{\alpha }(x),\end{aligned}}}
where Cα denotes Iα or eαiπKα. These recurrence relations are useful for discrete diffusion problems.
=== Transcendence ===
In 1929, Carl Ludwig Siegel proved that Jν(x), J'ν(x), and the logarithmic derivative J'ν(x)/Jν(x) are transcendental numbers when ν is rational and x is algebraic and nonzero. The same proof also implies that
{\displaystyle \Gamma (\nu +1)(2/x)^{\nu }J_{\nu }(x)}
is transcendental under the same assumptions.
=== Sums with Bessel functions ===
The product of two Bessel functions admits the following sum:
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{n-\nu }(y)=J_{n}(x+y),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(y)=J_{n}(y-x).}
From these equalities it follows that
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }(x)J_{\nu +n}(x)=\delta _{n,0}}
and as a consequence
{\displaystyle \sum _{\nu =-\infty }^{\infty }J_{\nu }^{2}(x)=1.}
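The last identity can be checked with a truncated sum, since terms with |ν| well above |x| are negligible; a short sketch assuming SciPy:

```python
# A truncated check (assuming SciPy) of sum_nu J_nu(x)^2 = 1; a finite
# window of orders suffices because the tail decays super-exponentially.
import numpy as np
from scipy.special import jv

x = 7.3
nu = np.arange(-40, 41)
print(np.sum(jv(nu, x) ** 2))  # ~ 1.0
```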
These sums can be extended to include a term multiplier that is a polynomial function of the index. For example,
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,1}+\delta _{n,-1}\right),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu J_{\nu }^{2}(x)=0,}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }(x)J_{\nu +n}(x)={\frac {x}{2}}\left(\delta _{n,-1}-\delta _{n,1}\right)+{\frac {x^{2}}{4}}\left(\delta _{n,-2}+2\delta _{n,0}+\delta _{n,2}\right),}
{\displaystyle \sum _{\nu =-\infty }^{\infty }\nu ^{2}J_{\nu }^{2}(x)={\frac {x^{2}}{2}}.}
== Multiplication theorem ==
The Bessel functions obey a multiplication theorem
{\displaystyle \lambda ^{-\nu }J_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(1-\lambda ^{2}\right)z}{2}}\right)^{n}J_{\nu +n}(z),}
where λ and ν may be taken as arbitrary complex numbers. For |λ2 − 1| < 1, the above expression also holds if J is replaced by Y. The analogous identities for modified Bessel functions and |λ2 − 1| < 1 are
{\displaystyle \lambda ^{-\nu }I_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}I_{\nu +n}(z)}
and
{\displaystyle \lambda ^{-\nu }K_{\nu }(\lambda z)=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{n!}}\left({\frac {\left(\lambda ^{2}-1\right)z}{2}}\right)^{n}K_{\nu +n}(z).}
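A truncated numerical check of the multiplication theorem (assuming SciPy; the parameter values are illustrative) is:

```python
# A truncated check (assuming SciPy; lam, nu, z are illustrative values) of
# lambda^{-nu} J_nu(lambda z) = sum_n ((1-lambda^2) z/2)^n / n! * J_{nu+n}(z).
from math import factorial
from scipy.special import jv

lam, nu, z = 0.8, 1.5, 2.0
lhs = lam ** (-nu) * jv(nu, lam * z)
rhs = sum(((1.0 - lam ** 2) * z / 2.0) ** n / factorial(n) * jv(nu + n, z)
          for n in range(25))
print(abs(lhs - rhs))  # ~ machine precision
```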
== Zeros of the Bessel function ==
=== Bourget's hypothesis ===
Bessel himself originally proved that for nonnegative integers n, the equation Jn(x) = 0 has an infinite number of solutions in x. When the functions Jn(x) are plotted on the same graph, though, none of the zeros seem to coincide for different values of n except for the zero at x = 0. This phenomenon is known as Bourget's hypothesis after the 19th-century French mathematician who studied Bessel functions. Specifically it states that for any integers n ≥ 0 and m ≥ 1, the functions Jn(x) and Jn + m(x) have no common zeros other than the one at x = 0. The hypothesis was proved by Carl Ludwig Siegel in 1929.
=== Transcendence ===
Siegel proved in 1929 that when ν is rational, all nonzero roots of Jν(x) and J'ν(x) are transcendental, as are all the roots of Kν(x). It is also known that all roots of the higher derivatives
{\displaystyle J_{\nu }^{(n)}(x)}
for n ≤ 18 are transcendental, except for the special values
{\displaystyle J_{1}^{(3)}(\pm {\sqrt {3}})=0}
and
{\displaystyle J_{0}^{(4)}(\pm {\sqrt {3}})=0}
.
=== Numerical approaches ===
For numerical studies about the zeros of the Bessel function, see Gil, Segura & Temme (2007), Kravanja et al. (1998) and Moler (2004).
=== Numerical values ===
The first zeros in J0 (i.e., j0,1, j0,2 and j0,3) occur at arguments of approximately 2.40483, 5.52008 and 8.65373, respectively.
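These values can be reproduced directly (assuming SciPy is installed):

```python
# The quoted zeros come straight out of scipy.special.jn_zeros:
from scipy.special import jn_zeros
print(jn_zeros(0, 3))  # [2.40482556 5.52007811 8.65372791]
```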
== History ==
=== Waves and elasticity problems ===
A Bessel function first appears in the work of Daniel Bernoulli in 1732, in his analysis of a vibrating system, a problem tackled earlier by his father Johann Bernoulli. Daniel considered a flexible chain suspended from a fixed point above and free at its lower end. The solution of the differential equation led to the introduction of a function now identified as
{\displaystyle J_{0}(x)}
. Bernoulli also developed a method to find the zeros of the function.
In 1736 Leonhard Euler found a link between other functions (now known as Laguerre polynomials) and Bernoulli's solution. Euler also considered a non-uniform chain, which led to the introduction of functions now related to the modified Bessel functions
{\displaystyle I_{n}(x)}
.
In the middle of the eighteenth century, Jean le Rond d'Alembert found a formula to solve the wave equation. By 1771 there was a dispute between Bernoulli, Euler, d'Alembert and Joseph-Louis Lagrange on the nature of the solutions of vibrating strings.
In 1778 Euler worked on buckling, introducing the concept of Euler's critical load. To solve the problem he introduced the series for
{\displaystyle J_{\pm 1/3}(x)}
. Euler also worked out the solutions of vibrating 2D membranes in cylindrical coordinates in 1780. In order to solve his differential equation he introduced a power series associated with
{\displaystyle J_{n}(x)}
, for integer n.
Toward the end of the 18th century, Lagrange, Pierre-Simon Laplace and Marc-Antoine Parseval also found equivalents to the Bessel functions. Parseval, for example, found an integral representation of
{\displaystyle J_{0}(x)}
using the cosine function.
At the beginning of the 1800s, Joseph Fourier used
{\displaystyle J_{0}(x)}
to solve the heat equation in a problem with cylindrical symmetry. Fourier won a prize of the French Academy of Sciences for this work in 1811, but most of the details, including the use of a Fourier series, remained unpublished until 1822. Poisson, in rivalry with Fourier, extended Fourier's work in 1823, introducing new properties of Bessel functions, including Bessel functions of half-integer order (now known as spherical Bessel functions).
=== Astronomical problems ===
In 1770, Lagrange introduced the series expansion of Bessel functions to solve Kepler's equation, a transcendental equation in astronomy. Friedrich Wilhelm Bessel had seen Lagrange's solution but found it difficult to handle. In 1813, in a letter to Carl Friedrich Gauss, Bessel simplified the calculation using trigonometric functions. Bessel published his work in 1819, independently introducing the method of Fourier series, unaware of Fourier's work, which was published later.
In 1824, Bessel carried out a systematic investigation of the functions, which earned the functions his name. In older literature the functions were called cylindrical functions or even Bessel–Fourier functions.
== See also ==
== Notes ==
== References ==
== External links == | Wikipedia/Hankel_function |
A one-way wave equation is a first-order partial differential equation describing one wave traveling in a direction defined by the vector wave velocity. It contrasts with the second-order two-way wave equation describing a standing wavefield resulting from superposition of two waves in opposite directions (using the squared scalar wave velocity). In the one-dimensional case it is also known as a transport equation, and it allows wave propagation to be calculated without the mathematical complication of solving a 2nd-order differential equation. Because no general solution to the 3D one-way wave equation has been found in recent decades, numerous approximation methods based on the 1D one-way wave equation are used for 3D seismic and other geophysical calculations; see also the section § Three-dimensional case.
== One-dimensional case ==
The scalar second-order (two-way) wave equation describing a standing wavefield can be written as:
{\displaystyle {\frac {\partial ^{2}s}{\partial t^{2}}}-c^{2}{\frac {\partial ^{2}s}{\partial x^{2}}}=0,}
where
{\displaystyle x}
is the coordinate,
{\displaystyle t}
is time,
{\displaystyle s=s(x,t)}
is the displacement, and
{\displaystyle c}
is the wave velocity.
Due to the ambiguity in the direction of the wave velocity,
{\displaystyle c^{2}=(+c)^{2}=(-c)^{2}}
, the equation does not contain information about the wave direction and therefore has solutions propagating in both the forward (
{\displaystyle +x}
) and backward (
{\displaystyle -x}
) directions. The general solution of the equation is the summation of the solutions in these two directions:
{\displaystyle s(x,t)=s_{+}(t-x/c)+s_{-}(t+x/c)}
where
{\displaystyle s_{+}}
and
{\displaystyle s_{-}}
are the displacement amplitudes of the waves running in the
{\displaystyle +c}
and
{\displaystyle -c}
directions.
When a one-way wave problem is formulated, the wave propagation direction has to be (manually) selected by keeping one of the two terms in the general solution.
Factoring the operator on the left side of the equation yields a pair of one-way wave equations, one with solutions that propagate forwards and the other with solutions that propagate backwards.
{\displaystyle \left({\partial ^{2} \over \partial t^{2}}-c^{2}{\partial ^{2} \over \partial x^{2}}\right)s=\left({\partial \over \partial t}-c{\partial \over \partial x}\right)\left({\partial \over \partial t}+c{\partial \over \partial x}\right)s=0,}
The backward- and forward-travelling waves are described respectively (for
{\displaystyle c>0}
),
{\displaystyle {\begin{aligned}&{{\frac {\partial s}{\partial t}}-c{\frac {\partial s}{\partial x}}=0}\\[6pt]&{{\frac {\partial s}{\partial t}}+c{\frac {\partial s}{\partial x}}=0}\end{aligned}}}
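As a minimal illustration of how such a first-order equation is propagated numerically, the sketch below advances the forward-travelling equation with a first-order upwind difference; the grid, wave velocity, and initial pulse are illustrative choices, not from the source.

```python
# A minimal upwind sketch for the forward-travelling one-way equation
# ds/dt + c ds/dx = 0.
import numpy as np

c, L, nx = 1.0, 10.0, 200
dx = L / nx
dt = 0.5 * dx / c                       # Courant number 0.5 (stable)
x = np.linspace(0.0, L, nx)
s = np.exp(-((x - 2.0) ** 2))           # Gaussian pulse centred at x = 2

for _ in range(200):                    # advance to t = 200 * dt = 5
    s[1:] -= c * dt / dx * (s[1:] - s[:-1])   # backward difference for c > 0

print(x[np.argmax(s)])  # ~ 7.0: the pulse has moved by c * t = 5 units
```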
The one-way wave equations can also be physically derived directly from specific acoustic impedance.
In a longitudinal plane wave, the specific impedance determines the local proportionality of pressure
{\displaystyle p=p(x,t)}
and particle velocity
{\displaystyle v=v(x,t)}
:
{\displaystyle {\frac {p}{v}}=\rho c,}
with
{\displaystyle \rho }
the density.
The conversion of the impedance equation leads to:
{\displaystyle v-{\frac {p}{\rho c}}=0\qquad (*)}
A longitudinal plane wave of angular frequency
{\displaystyle \omega }
has the displacement
{\displaystyle s=s(x,t)}
.
The pressure
{\displaystyle p}
and the particle velocity
{\displaystyle v}
can be expressed in terms of the displacement
{\displaystyle s}
(with
{\displaystyle E}
the elastic modulus):
{\displaystyle p:=E{\partial s \over \partial x}}
For the 1D case this is in full analogy to stress
{\displaystyle \sigma }
in mechanics:
{\displaystyle \sigma =E\varepsilon }
, with strain defined as
{\displaystyle \varepsilon ={\frac {\Delta L}{L}}}
.
{\displaystyle v={\partial s \over \partial t}}
These relations inserted into the equation above (⁎) yield:
{\displaystyle {\partial s \over \partial t}-{E \over \rho c}{\partial s \over \partial x}=0}
With the local wave velocity definition (speed of sound):
{\displaystyle c={\sqrt {E(x) \over \rho (x)}}\Leftrightarrow c={E \over \rho c}}
the first-order partial differential equation of the one-way wave equation follows directly:
{\displaystyle {{\frac {\partial s}{\partial t}}-c{\frac {\partial s}{\partial x}}=0}}
The wave velocity
{\displaystyle c}
can be set within this wave equation as
{\displaystyle +c}
or
{\displaystyle -c}
according to the direction of wave propagation.
For wave propagation in the direction of
{\displaystyle +c}
the unique solution is
{\displaystyle s(x,t)=s_{+}(t-x/c)}
and for wave propagation in the
{\displaystyle -c}
direction the respective solution is
{\displaystyle s(x,t)=s_{-}(t+x/c)}
There also exists a spherical one-way wave equation describing the wave propagation of a monopole sound source in spherical coordinates, i.e., in the radial direction. By a modification of the radial nabla operator, an inconsistency between the spherical divergence and Laplace operators is resolved, and the resulting solution does not involve Bessel functions (in contrast to the known solution of the conventional two-way approach).
== Three-dimensional case ==
The one-way equation and solution in the three-dimensional case were long assumed to follow, analogously to the one-dimensional case, from a mathematical decomposition (factorization) of a 2nd-order differential equation. In fact, the 3D one-way wave equation can be derived from first principles:
derivation from impedance theorem and
derivation from a tensorial impulse flow equilibrium in a field point.
It is also possible to derive the vectorial two-way wave operator from synthesis of two one-way wave operators (using a combined field variable). This approach shows that the two-way wave equation or two-way wave operator can be used for the specific condition
{\displaystyle \nabla \mathbf {c} =0}
, i.e. for homogeneous and anisotropic medium, whereas the one-way wave equation resp. one-way wave operator is also valid in inhomogeneous media.
== Inhomogeneous media ==
For inhomogeneous media with location-dependent elastic modulus
{\displaystyle E(x)}
, density
{\displaystyle \rho (x)}
and wave velocity
{\displaystyle c(x)}
, an analytical solution of the one-way wave equation can be derived by introduction of a new field variable.
== Further mechanical and electromagnetic waves ==
The method of PDE factorization can also be transferred to other 2nd- or 4th-order wave equations, e.g. transversal and string waves, Moens–Korteweg waves, bending waves, and electromagnetic waves.
== See also ==
Wave equation – Differential equation important in physics
Standing wave – Wave that remains in a constant position
Continuity equation – Equation describing the transport of some quantity
== References == | Wikipedia/One-way_wave_equation |
In the fields of physics, engineering, and earth sciences, advection is the transport of a substance or quantity by bulk motion of a fluid. The properties of that substance are carried with it. Generally the majority of the advected substance is also a fluid. The properties that are carried with the advected substance are conserved properties such as energy. An example of advection is the transport of pollutants or silt in a river by bulk water flow downstream. Another commonly advected quantity is energy or enthalpy. Here the fluid may be any material that contains thermal energy, such as water or air. In general, any substance or conserved extensive quantity can be advected by a fluid that can hold or contain the quantity or substance.
During advection, a fluid transports some conserved quantity or material via bulk motion. The fluid's motion is described mathematically as a vector field, and the transported material is described by a scalar field showing its distribution over space. Advection requires currents in the fluid, and so cannot happen in rigid solids. It does not include transport of substances by molecular diffusion.
Advection is sometimes confused with the more encompassing process of convection, which is the combination of advective transport and diffusive transport.
In meteorology and physical oceanography, advection often refers to the transport of some property of the atmosphere or ocean, such as heat, humidity (see moisture) or salinity.
Advection is important for the formation of orographic clouds and the precipitation of water from clouds, as part of the hydrological cycle.
== Mathematical description ==
The advection equation is a first-order hyperbolic partial differential equation that governs the motion of a conserved scalar field as it is advected by a known velocity vector field. It is derived using the scalar field's conservation law, together with Gauss's theorem, and taking the infinitesimal limit.
One easily visualized example of advection is the transport of ink dumped into a river. As the river flows, ink will move downstream in a "pulse" via advection, as the water's movement itself transports the ink. If added to a lake without significant bulk water flow, the ink would simply disperse outwards from its source in a diffusive manner, which is not advection. Note that as it moves downstream, the "pulse" of ink will also spread via diffusion. The sum of these processes is called convection.
=== The advection equation ===
The advection equation for a conserved quantity described by a scalar field
{\displaystyle \psi (t,x,y,z)}
is expressed by a continuity equation:
{\displaystyle {\frac {\partial \psi }{\partial t}}+\nabla \cdot \left(\psi {\mathbf {u} }\right)=0,}
where the vector field
{\displaystyle \mathbf {u} =(u_{x},u_{y},u_{z})}
is the flow velocity and
{\displaystyle \nabla }
is the del operator. If the flow is assumed to be incompressible then
{\displaystyle \mathbf {u} }
is solenoidal, that is, the divergence is zero:
{\displaystyle \nabla \cdot {\mathbf {u} }=0,}
and (by using a product rule associated with the divergence) the above equation reduces to
{\displaystyle {\frac {\partial \psi }{\partial t}}+{\mathbf {u} }\cdot \nabla \psi =0.}
In particular, if the flow is steady, then
{\displaystyle {\mathbf {u} }\cdot \nabla \psi =0,}
which shows that
{\displaystyle \psi }
is constant along a streamline, since its directional derivative along
{\displaystyle \mathbf {u} }
vanishes.
If a vector quantity
{\displaystyle \mathbf {a} }
(such as a magnetic field) is being advected by the solenoidal velocity field
{\displaystyle \mathbf {u} }
, then the advection equation above becomes:
{\displaystyle {\frac {\partial {\mathbf {a} }}{\partial t}}+\left({\mathbf {u} }\cdot \nabla \right){\mathbf {a} }=0.}
Here,
{\displaystyle \mathbf {a} }
is a vector field instead of the scalar field
{\displaystyle \psi }
.
=== Solution ===
Solutions to the advection equation can be approximated using numerical methods, where interest typically centers on discontinuous "shock" solutions and necessary conditions for convergence (e.g. the CFL condition).
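The role of the CFL condition is easy to demonstrate with a first-order upwind scheme; in the sketch below (illustrative parameters, not from the source), a Courant number above 1 makes the discrete solution blow up.

```python
# An illustrative demonstration that the first-order upwind scheme obeys the
# CFL condition: the Courant number C = u*dt/dx must satisfy C <= 1.
import numpy as np

def upwind_max(courant, nx=100, steps=100):
    """Advect a top-hat profile; return max |psi| after the given steps."""
    psi = np.zeros(nx)
    psi[10:20] = 1.0                          # discontinuous initial data
    for _ in range(steps):
        psi[1:] -= courant * (psi[1:] - psi[:-1])
    return np.abs(psi).max()

print(upwind_max(0.9))   # ~ 1.0   (stable and monotone)
print(upwind_max(1.1))   # huge    (violates CFL: solution blows up)
```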
Numerical simulation can be aided by considering the skew-symmetric form of advection
{\displaystyle {\tfrac {1}{2}}{\mathbf {u} }\cdot \nabla {\mathbf {u} }+{\tfrac {1}{2}}\nabla ({\mathbf {u} }{\mathbf {u} }),}
where
{\displaystyle \nabla ({\mathbf {u} }{\mathbf {u} })=\nabla \cdot [{\mathbf {u} }u_{x},{\mathbf {u} }u_{y},{\mathbf {u} }u_{z}].}
Since skew symmetry implies only imaginary eigenvalues, this form reduces the "blow up" and "spectral blocking" often experienced in numerical solutions with sharp discontinuities.
== Distinction between advection and convection ==
The term advection often serves as a synonym for convection, and this correspondence of terms is used in the literature. More technically, convection applies to the movement of a fluid (often due to density gradients created by thermal gradients), whereas advection is the movement of some material by the velocity of the fluid. Thus, although it might seem confusing, it is technically correct to think of momentum being advected by the velocity field in the Navier-Stokes equations, although the resulting motion would be considered to be convection. Because of the specific use of the term convection to indicate transport in association with thermal gradients, it is probably safer to use the term advection if one is uncertain about which terminology best describes their particular system.
== Meteorology ==
In meteorology and physical oceanography, advection often refers to the horizontal transport of some property of the atmosphere or ocean, such as heat, humidity or salinity, and convection generally refers to vertical transport (vertical advection). Advection is important for the formation of orographic clouds (terrain-forced convection) and the precipitation of water from clouds, as part of the hydrological cycle.
== Other quantities ==
The advection equation also applies if the quantity being advected is represented by a probability density function at each point, although accounting for diffusion is more difficult.
== See also ==
== Notes ==
== References ==
Boyd, John P. (2001). Chebyshev and Fourier Spectral Methods (PDF). Mineola, NY: Courier Corporation. ISBN 0-486-41183-4.
LeVeque, Randall J. (2002). Finite Volume Methods for Hyperbolic Problems. Cambridge University Press. doi:10.1017/cbo9780511791253. ISBN 978-0-521-81087-6. | Wikipedia/Advection_equation |
Logotherapy is a form of existential therapy developed by neurologist and psychiatrist Viktor Frankl. It is founded on the premise that the primary motivational force of individuals is to find meaning in life. Frankl describes it as "the Third Viennese School of Psychotherapy" along with Freud's psychoanalysis and Alfred Adler's individual psychology.
Logotherapy is based on an existential analysis focusing on Kierkegaard's will to meaning as opposed to Adler's Nietzschean doctrine of will to power or Freud's will to pleasure. Rather than power or pleasure, logotherapy is founded upon the belief that striving to find meaning in life is the primary, most powerful motivating and driving force in humans. A short introduction to this system is given in Frankl's most famous book, Man's Search for Meaning (1946), in which he outlines how his theories helped him to survive his Holocaust experience and how that experience further developed and reinforced his theories. Presently, there are a number of logotherapy institutes around the world.
== Basic principles ==
The term logotherapy was coined from the Greek word logos ("meaning"). Frankl's concept is based on the premise that the primary motivational force of an individual is to find meaning in life. The following list of tenets represents the basic principles of logotherapy:
Life has meaning under all circumstances, even the most miserable ones.
Our main motivation for living is our will to find meaning in life.
We have freedom to find meaning in what we do, and what we experience, or at least in the stance we take when faced with a situation of unchangeable suffering.
The human spirit is referred to in several of the assumptions of logotherapy, but the use of the term spirit is not "spiritual" or "religious." In Frankl's view, the spirit is the will of the human being. The emphasis, therefore, is on the search for meaning, which is not necessarily the search for God or any other supernatural being. Frankl also noted the barriers to humanity's quest for meaning in life. He warns against "...affluence, hedonism, [and] materialism..." in the search for meaning.
Purpose-in-life and meaning-in-life constructs appeared in Frankl's logotherapy writings in relation to the existential vacuum and the will to meaning, as well as in the work of others who have theorized about and defined positive psychological functioning. Frankl observed that it may be psychologically damaging when a person's search for meaning is blocked. Positive life purpose and meaning were associated with strong religious beliefs, membership in groups, dedication to a cause, life values, and clear goals. Adult development and maturity theories include the purpose-in-life concept. Maturity emphasizes a clear comprehension of life's purpose, directedness, and intentionality, which contributes to the feeling that life is meaningful.
Frankl's ideas were operationalized by Crumbaugh and Maholick's Purpose in Life (PIL) test, which measures an individual's meaning and purpose in life. With the test, investigators found that meaning in life mediated the relationships between religiosity and well-being, between uncontrollable stress and substance use, and between depression and self-derogation. Crumbaugh found that the Seeking of Noetic Goals Test (SONG) is a complementary measure of the PIL. While the PIL measures the presence of meaning, the SONG measures orientation towards meaning. A low score on the PIL but a high score on the SONG would predict a better outcome in the application of logotherapy.
=== Discovering meaning ===
According to Frankl, "We can discover this meaning in life in three different ways: (1) by creating a work or doing a deed; (2) by experiencing something or encountering someone; and (3) by the attitude we take toward unavoidable suffering" and that "everything can be taken from a man but one thing: the last of the human freedoms – to choose one's attitude in any given set of circumstances". On the meaning of suffering, Frankl gives the following example:
"Once, an elderly general practitioner consulted me because of his severe depression. He could not overcome the loss of his wife who had died two years before and whom he had loved above all else. Now how could I help him? What should I tell him? I refrained from telling him anything, but instead confronted him with a question, "What would have happened, Doctor, if you had died first, and your wife would have had to survive without you?:" "Oh," he said, "for her this would have been terrible; how she would have suffered!" Whereupon I replied, "You see, Doctor, such a suffering has been spared her, and it is you who have spared her this suffering; but now, you have to pay for it by surviving and mourning her." He said no word but shook my hand and calmly left the office.: 178–179
Frankl emphasized that realizing the value of suffering is meaningful only when the first two creative possibilities are not available (for example, in a concentration camp) and only when such suffering is inevitable – he was not proposing that people suffer unnecessarily.: 115
== Philosophical basis of logotherapy ==
Frankl described the meta-clinical implications of logotherapy in his book The Will to Meaning: Foundations and Applications of Logotherapy. He believed that there is no psychotherapy apart from the theory of the individual. As an existential psychologist, he inherently disagreed with the "machine model" or "rat model", as it undermines the human quality of humans. As a neurologist and psychiatrist, Frankl developed a unique view of determinism to coexist with the three basic pillars of logotherapy (the freedom of will). Though Frankl admitted that a person can never be free from every condition, such as biological, sociological, or psychological determinants, based on his experience in the Nazi concentration camps he believed that a person is "capable of resisting and braving even the worst conditions". In doing so, a person can detach from situations and themselves, choose an attitude about themselves, and determine their own determinants, thus shaping their own character and becoming responsible for themselves.
== Logotherapeutic views and treatment ==
=== Overcoming anxiety ===
By recognizing the purpose of our circumstances, one can master anxiety. Anecdotes about this use of logotherapy are given by New York Times writer Tim Sanders, who explained how he uses its concept to relieve the stress of fellow airline travelers by asking them the purpose of their journey. When he does this, no matter how miserable they are, their whole demeanor changes, and they remain happy throughout the flight. Overall, Frankl believed that the anxious individual does not understand that their anxiety is the result of dealing with a sense of "unfulfilled responsibility" and ultimately a lack of meaning.
=== Treatment of neurosis ===
Frankl cites two neurotic pathogens: hyper-intention, a forced intention toward some end which makes that end unattainable; and hyper-reflection, an excessive attention to oneself which stifles attempts to avoid the neurosis to which one thinks oneself predisposed. Frankl identified anticipatory anxiety, a fear of a given outcome which makes that outcome more likely. To relieve the anticipatory anxiety and treat the resulting neuroses, logotherapy offers paradoxical intention, wherein the patient intends to do the opposite of their hyper-intended goal.
A person, then, who fears (i.e. experiences anticipatory anxiety over) not getting a good night's sleep may try too hard (that is, hyper-intend) to fall asleep, and this would hinder their ability to do so. A logotherapist would recommend, then, that the person go to bed and intentionally try not to fall asleep. This would relieve the anticipatory anxiety which kept the person awake in the first place, thus allowing them to fall asleep in an acceptable amount of time.
=== Depression ===
Viktor Frankl believed depression occurred at the psychological, physiological, and spiritual levels. At the psychological level, he believed that feelings of inadequacy stem from undertaking tasks beyond our abilities. At the physiological level, he recognized a "vital low", which he defined as a "diminishment of physical energy". At the spiritual level, Frankl believed the depressed individual faces tension between who they actually are and who they should be. Frankl refers to this as the gaping abyss.: 202 Finally, Frankl suggests that if goals seem unreachable, an individual loses a sense of future and thus meaning, resulting in depression. Thus logotherapy aims "to change the patient's attitude toward their disease as well as toward their life as a task".: 200
In order to overcome depressed feelings and thoughts, Frankl challenges individuals who suffer from depression to find meaning in their suffering. Frankl frequently cites Nietzsche's words, "If we have our own why in life, we shall get along with almost any how". Suffering and all the negative emotions that come with it are a normal part of the human experience and should even be expected. Edith Weisskopf-Joelson, a psychologist and follower of logotherapy, argues that "our current mental-hygiene philosophy stresses the idea that people ought to be happy, that unhappiness is a symptom of maladjustment. Such a value system might be responsible for the fact that the burden of unavoidable unhappiness is increased by unhappiness about being unhappy".
=== Obsessive–compulsive disorder ===
Frankl believed that those with obsessive–compulsive disorder lack the sense of completion that most other individuals possess. Instead of fighting the tendencies to repeat thoughts or actions, or focusing on changing the individual symptoms of the disease, the therapist should focus on "transform[ing] the neurotic's attitude toward their neurosis".: 185 Therefore, it is important to recognize that the patient is "not responsible for his obsessional ideas", but that "he is certainly responsible for his attitude toward these ideas".: 188 Frankl suggested that it is important for the patient to recognize their inclinations toward perfection as fate, and therefore, must learn to accept some degrees of uncertainty. Ultimately, following the premise of logotherapy, the patient must eventually ignore their obsessional thoughts and find meaning in their life despite such thoughts.
=== Schizophrenia ===
Though logotherapy was not intended to deal with severe disorders, Frankl believed that logotherapy could benefit even those with schizophrenia. He recognized the roots of schizophrenia in physiological dysfunction. In this dysfunction, the person with schizophrenia "experiences himself as an object" rather than as a subject.: 208 Frankl suggested that a person with schizophrenia could be helped by logotherapy by first being taught to ignore voices and to end persistent self-observation. Then, during this same period, the person with schizophrenia must be led toward meaningful activity, as "even for the schizophrenic there remains that residue of freedom toward fate and toward the disease which man always possesses, no matter how ill he may be, in all situations and at every moment of life, to the very last".: 216
=== Terminally ill patients ===
In 1977, Terry Zuehlke and John Watkins conducted a study analyzing the effectiveness of logotherapy in treating terminally ill patients. The study used 20 male Veterans Administration volunteers who were randomly assigned to one of two possible treatments: (1) a group that received eight 45-minute sessions over a 2-week period, and (2) a control group that received delayed treatment. Each group was tested on five scales – the MMPI K Scale, MMPI L Scale, Death Anxiety Scale, Brief Psychiatric Rating Scale, and the Purpose in Life Test. The results showed an overall significant difference between the control and treatment groups, and the univariate analyses showed significant group differences in three of the five dependent measures. These results support the idea that terminally ill patients can benefit from logotherapy in coping with death.
=== Forms of treatment ===
Ecce Homo is a method used in logotherapy. It requires the therapist to note the innate strengths that people have and how they have dealt with adversity and suffering in life, and to ask the patient to consider how, despite everything they may have gone through, they made the best of their suffering. The method is called "Ecce Homo", Latin for "Behold the Man", because it involves beholding how other people have made the best of their adversity.
== Critiques ==
=== Authoritarianism ===
In 1961 Rollo May argued that logotherapy is, in essence, authoritarian. He suggested that Frankl's therapy presents a plain solution to all of life's problems, an assertion that would seem to undermine the complexity of human life itself. May contended that if a patient could not find their own meaning, Frankl would provide a goal for his patient. In effect, this would negate the patient's personal responsibility, thus "diminish[ing] the patient as a person". Frankl explicitly replied to May's arguments through a written dialogue, sparked by Rabbi Reuven Bulka's article "Is Logotherapy Authoritarian?". Frankl responded that he combined the prescription of medication, if necessary, with logotherapy, to deal with the person's psychological and emotional reaction to the illness, and highlighted areas of freedom and responsibility, where the person is free to search and to find meaning.
=== Religiousness ===
Critical views of the life and work of logotherapy's founder assume that Frankl's religious background and experience of suffering guided his conception of meaning within the boundaries of the person, and therefore that logotherapy is founded on Viktor Frankl's worldview. Some researchers argue that logotherapy is not a "scientific" psychotherapeutic school in the traditional sense but rather a philosophy of life, a system of values, or a secular religion that is not fully coherent and rests on questionable metaphysical premises.
Frankl openly spoke and wrote on religion and psychiatry throughout his life, and specifically in his last book, Man's Search for Ultimate Meaning (1997). He asserted that every person has a spiritual unconscious, independent of religious views or beliefs; Frankl's conception of the spiritual unconscious does not necessarily entail religiosity. In Frankl's words: "It is true, Logotherapy, deals with the Logos; it deals with Meaning. Specifically, I see Logotherapy in helping others to see meaning in life. But we cannot "give" meaning to the life of others. And if this is true of meaning per se, how much does it hold for Ultimate Meaning?" The American Psychiatric Association awarded Viktor Frankl the 1985 Oskar Pfister Award for important contributions to religion and psychiatry.
== Recent developments ==
Since the 1990s, the number of institutes providing education and training in logotherapy has continued to increase worldwide. Numerous logotherapeutic concepts have been integrated and applied in different fields, such as cognitive behavioral therapy, acceptance and commitment therapy (ACT), and burnout prevention. The logotherapeutic concepts of noogenic neurosis and existential crisis were added to the ICD-11 under the name demoralisation crisis, a construct that features hopelessness, meaninglessness, and existential distress as first described by Frankl in the 1950s. Logotherapy has also been associated with psychosomatic and physiological health benefits. Besides logotherapy, other meaning-centered psychotherapeutic approaches such as positive psychology and meaning therapy have emerged. Paul Wong's meaning therapy attempts to translate logotherapy into psychological mechanisms, integrating cognitive behavioral therapy, positive psychotherapy, and positive psychology research on meaning. Logotherapy is also being applied in the field of oncology and palliative care (William Breitbart). These recent developments introduce Viktor Frankl's logotherapy to a new generation and extend its impact to new areas of research.
== Locations ==
A number of logotherapeutic institutes have opened in various countries around the world, including:
Viktor Frankl Institute of Logotherapy
== See also ==
Existential therapy
Ikigai—similar Japanese concept
== References ==
== Bibliography ==
Frankl, Viktor. Man's Search for Meaning: An Introduction to Logotherapy. Beacon Press, Boston, MA, 2006. ISBN 978-0-8070-1427-1
Frankl, Viktor. The Doctor and the Soul: From Psychotherapy to Logotherapy. Random House Digital, Inc., 1986. ISBN 978-0-394-74317-2
Frankl, Viktor. Psychotherapy and Existentialism: Selected Papers on Logotherapy. Simon & Schuster, New York, 1967. ISBN 0-671-20056-9
Frankl, Viktor. The Will to Meaning: Foundations and Applications of Logotherapy. New American Library, New York, 1988. ISBN 0-452-01034-9
Frankl, Viktor. The Unheard Cry for Meaning: Psychotherapy and Humanism. Simon & Schuster, New York, 2011. ISBN 978-1-4516-6438-6
Frankl, Viktor. On the Theory and Therapy of Mental Disorders: An Introduction to Logotherapy and Existential Analysis. Brunner-Routledge, London and New York, 2004. ISBN 0-415-95029-5
Frankl, Viktor. Viktor Frankl Recollections: An Autobiography. Basic Books, Cambridge, MA, 2000. ISBN 978-0-7382-0355-3
Frankl, Viktor. Man's Search for Ultimate Meaning. Perseus Book Publishing, New York, 1997. ISBN 978-0-7382-0354-6
== External links ==
Viktor Frankl Institute Vienna
Viktor Frankl Institute of America
Viktor Frankl Centre
Viktor and I (documentary)
Viktor Frankl Institute of Logotherapy
Viktor Frankl Institute of Logotherapy in Israel | Wikipedia/Logotherapy |
Music as a coping strategy involves the use of music (through listening or playing music) in order to reduce stress, as well as many of the psychological and physical manifestations associated with it. The use of music to cope with stress is an example of an emotion-focused, adaptive coping strategy. Rather than focusing on the stressor itself, music therapy is typically geared towards reducing or eliminating the emotions that arise in response to stress. In essence, advocates of this therapy claim that the use of music helps to lower stress levels in patients, as well as lower more biologically measurable quantities such as the levels of epinephrine and cortisol. Additionally, music therapy programs have been repeatedly demonstrated to reduce depression and anxiety symptoms in the long term.
== Major theories ==
In the context of psychology, a coping strategy is any technique or practice designed to reduce or manage the negative effects associated with stress. While stress is known to be a natural biological response, biologists and psychologists have repeatedly demonstrated that excess stress can lead to negative effects on one's physical and psychological well-being. Elevated stress levels can lead to conditions including mental illnesses, cardiovascular conditions, eating disorders, gastrointestinal complications, sexual dysfunction, and skin and hair problems. The variety and potential fatality of these conditions underscore the need for coping mechanisms that reduce the manifestations associated with stress.
While there are hundreds of different coping strategies, the use of music is one specific example of a coping strategy used to combat the negative effects of stress. Due to the large number of strategies to choose from, psychologists break coping strategies down into three types:
Appraisal-based - Intended to modify the individual's thought process. Stress is typically eliminated through rationalization, changes in values or thinking patterns, or with humor.
Problem-based - Targets the cause of the stress. The process could involve either eliminating or adapting to a stressor in order to cope. An example of a problem-based strategy is time management.
Emotion-based - Geared towards influencing one's emotional reactions when stressed. Meditation, distraction, and the release of emotion are all forms of emotion-based coping strategies. Mindfulness-based stress reduction is another example, as it is a more personal, reflection-based form of coping.
Since music-based coping is designed to modify an individual's emotional reactions to a certain event, it is best classified as an emotion-based coping strategy. Rather than attempting to directly influence or eliminate a particular stressor, music-based coping relies on influencing an individual's emotional and mental reaction to the stressor. Music assuages stress either by reducing or altering the emotional response or by alleviating some of the physiological effects of the stress response.
== Major empirical findings ==
Psychologists and medical practitioners have recently devoted more time and attention to the concept of music as a coping strategy and the effects of its use on patients. In the literature linking music and stress, empirical findings are typically grouped according to the method by which they were gathered: some studies rely on survey questions, while others use more invasive psychoacoustic measurements. Although different methods are used, most of these studies demonstrate the impact different types of music have on human emotions.
=== Patient response-based findings ===
One of the more popular methods used to collect data on coping strategies involves non-invasive, patient response-based methods. This approach is directed more towards the psychological realm: rather than invasive measurement, data are collected through a "tell me how you feel" style of question and response. Once the findings have been gathered, statistical analysis is performed in an effort to discover a correlation between the coping mechanism and its effect on the stress response. These non-invasive methods are more popular among children and elderly patients, since they prevent the results from being altered by the patients' nervousness. Proponents of these methods claim that if children are prompted with general, unthreatening questions, they are much more comfortable and willing to provide accurate accounts of their levels of stress. In several studies using non-invasive methods, music has been documented as effective in reducing the subject's perceived level of stress.
==== Music and effects on psychological trauma ====
Posttraumatic stress disorder (PTSD) is a psychological stress disorder involving strong emotional reactions to traumatic events in an individual's past. Certain triggers, such as images, sounds, or other significant sensory details associated with the experience, can evoke extreme stress responses, panic attacks, or severe anxiety. PTSD is commonly experienced by veterans of armed conflicts and is frequently diagnosed in victims of rape or other violent assaults.
If an individual diagnosed with PTSD associates a certain song with a traumatic memory, it typically triggers a stronger stress or anxiety response than the individual would otherwise have experienced when listening to the song. While one cannot assume that music is the only factor that triggers PTSD-influenced stress and panic attacks, musical triggers can be especially potent because of a song's rhythm, beat, or memorable lyrics. However, associating music with psychological responses does not necessarily bring up bad memories, because music can also hold psychological connotations of very happy memories. For example, it has been demonstrated that supplying the residents of nursing homes with iPods featuring nostalgic music is a means of reducing the stress of the elderly.
Music has been used to treat dementia patients using methods similar to those used in the management of PTSD. However, in the treatment of dementia, more emphasis is placed on providing the patient with music that triggers pleasant memories or feelings, rather than on avoiding music that triggers negative emotions. After listening to the music, patients often shift in mood and attitude from closed and distant to joyful, open, and happy.
There is a wealth of anecdotal evidence demonstrating the effectiveness that music can have as a coping response in this regard. For example, if a patient with either PTSD or dementia were to have a loved one die, he or she might associate a certain song with the person being mourned, and hearing that song could bring about feelings of happiness or deep sadness. If there was a particular connection between them, such as a marriage, hearing their wedding song could produce an overwhelming emotional reaction. Such situations trigger memories and a stress response that can anguish the person recalling these hurtful memories; a song tied to a given memory can trigger nearly any emotion.
Music has been shown to bring dementia patients out of their shell, engaging them in singing and lifting their mood, as opposed to their usual closed and distant demeanor. Patients have been observed to sing and perk up, and even to cry out of pure joy at hearing the music they loved in their youth. When interviewed after listening to their music, patients were notably engaged because of how happy the music had made them, talking about how much they loved it and the memories it invoked.
==== Stress and music in the medical field ====
The use of music as a coping strategy also has applications in the medical field. For example, patients who listen to music during surgery or post-operative recovery have been shown to have less stress than their counterparts who do not listen to music. Studies have shown that family members and parents of patients have reduced stress levels when listening to music while waiting, which can even reduce their anxiety about the surgery results. The use of music has also proven effective in pediatric oncology. Music therapy is mainly used in these cases as a diversion technique, a form of play therapy, designed to distract the patient from the pain or stress experienced during these procedures. The focus of the patient is directed at a more pleasurable activity, and the mind shifts toward that activity, creating a "numbing" effect founded on an "out of sight, out of mind" approach. This can extend to elderly patients in nursing homes and adult day care centers, where music therapy has been shown to reduce aggression and agitated moods. However, because several of these studies rely mainly on patient responses, some concerns have been raised as to the strength of the correlation between music and stress reduction.
Music as a form of coping has been used multiple times in cancer patients, with promising results. A study of 113 patients undergoing stem cell transplants split the patients into two groups: one group wrote their own lyrics about their journey and then produced a music video, while the other group listened to audiobooks. The results showed that the music video group had better coping skills and better social interactions in comparison, by taking their minds off the pain and stress accompanying treatment and giving them an outlet to express their feelings.
Another study, done at UNC, showed remarkable improvement in a young girl who was born without the ability to speak. A therapist would come in and sing with her, as singing was the only thing she could do. The singing allowed her to gain the ability to speak, as music and speech are similar in nature and can help the brain form new connections. In the same hospital, the therapist visits children daily and plays music with them, singing and using instruments. The music fosters creativity, reduces stress associated with treatments, and takes the children's minds off their current surroundings.
The importance of coping strategies for the families and caregivers of those going through serious or even terminal illness also cannot be ignored. These family members are often responsible for the vast majority of the care of their loved ones, on top of the stress of seeing them struggle. Therapists have worked with these family members, singing and playing instruments with them, to help take their minds off the stress of helping their loved ones undergo treatment. Just as in the patients themselves, music therapy has been shown to help them cope with the intense emotions and situations they deal with on a daily basis.
=== Physiological findings ===
Other studies, which use more invasive techniques to measure the response of individuals to stress, demonstrate that the use of music can mitigate many of the physiological effects often associated with the stress response, for example by lowering blood pressure or decreasing heart rate. Most research associated with the use of music as a coping strategy makes use of empirical measurements, through devices such as an EKG or heart rate monitor, in order to provide a stronger correlation between music and its proposed effects on the stress response. In these studies, subjects are typically exposed to a stressor and then assigned music to listen to, while the parties conducting the study measure changes in the subjects' physiological status.
Some studies using more invasive physiological research methods have demonstrated that the use of sedative music, or preferred sedative music, causes a decrease in tension and state-anxiety levels of adult individuals. This decrease in tension or anxiety is more prevalent and noticeable during the attempt to return to homeostasis, and far less pronounced during the actual stressful event. Other studies expose their subjects to an immediate physical stressor, such as running on a treadmill, while having them listen to different genres of music. These studies have shown that participants' respiratory rates increase when they listen to faster, upbeat music while running, in comparison to no music or sedative music. Even on top of the raised respiratory rate caused by the initial stressor of running, music still had a noticeable physiological effect on the participants.
By and large, a collective review of these studies shows that music can be effective in reducing the physiological effects that stress has on the human body, ranging from changes in pulse and breathing rates to a decreased occurrence of fatigue. Effects also vary with tempo and pitch: low pitch tends to have a relatively calming effect on the body, whereas high pitch tends to act as a stressor. Furthermore, it has been suggested that if a patient can control the music that he or she listens to during recovery, the return to normalcy happens at a much faster, more efficient rate than if the subject were assigned a music genre that he or she did not find appealing. With the use of the EKG monitor and other empirical methods of study, researchers are able to remove the superficial qualities associated with patient response-based findings and provide a more substantial correlation between the use of music and its effects on the human stress response.
== Specific techniques ==
One particular technique that uses music as a coping strategy is choosing and listening to music genres that have been shown to correlate with lower levels of stress. For example, it has been suggested that listening to classical music or self-selected music can lower stress levels in adult individuals. Music that is fast, heavy, or dark in nature may produce an increase in these same stress levels; however, many people also find the cathartic effects of music to be intensified when listening to intense music of this kind. Ambient music is a genre often associated with feelings of calmness or introspection. While listening to self-selected genres, an individual is provided with a sense of control after choosing the type of music he or she would like to listen to. In certain situations, this choice can be one of the few moments where stressed and depressed individuals feel a locus of control over their respective lives. Introducing this feeling of control can be a valuable asset as the individual attempts to cope with his or her stress.
With that in mind, there are a few specific techniques specifically involving the use of music that have been suggested to aid in the reduction of stress and stress-related effects.
Listening to softer genres such as classical music.
Listening to music of one's choice and introducing an element of control to one's life.
Listening to music that reminds one of pleasant memories.
Avoiding music that reminds one of sad or depressing memories.
Listening to music as a way of bonding with a social group.
Another specific technique that can be used is the utilization of music as a “memory time machine” of sorts. In this regard, music can allow one to escape to pleasant or unpleasant memories and trigger a coping response. It has been suggested that music can be closely tied to re-experiencing the psychological aspects of past memories, so selecting music with positive connotations is one possible way that music can reduce stress.
A technique that is being employed more often is vibroacoustic therapy. During therapy, the patient lies on his or her back on a mat with embedded speakers that send out low-frequency sound waves; in essence, the patient is lying on a subwoofer. This therapy has been found to help with Parkinson's disease, fibromyalgia, and depression. Studies are also being conducted on patients with mild Alzheimer's disease in hopes of identifying possible benefits from vibroacoustic therapy. Vibroacoustic therapy can also be used as an alternative to music therapy for the deaf.
== Controversies ==
Several of the empirical studies carried out to demonstrate the correlation between listening to music and the reduction of the human stress response have been criticized for relying too heavily on small sample sizes. Another criticism is that these studies were often carried out without reference to any particular stressor; because no specific stressor is identified, critics claim, it is difficult to determine whether the stress response was lessened by the music or by some other means.
A more theoretical critique of this coping strategy is that the use of music in stress coping is largely a short-term coping response and therefore lacks long-term sustainability. These critics argue that while music may be effective in lowering perceived stress levels of patients, it is not necessarily making a difference on the actual cause of the stress response. Because the root cause of the stress is not affected, it is possible that the stress response may return shortly after therapy is ended. Those who hold this position advocate instead for a more problem-focused coping strategy that directly deals with the stressors affecting the patient.
== Conclusion ==
The use of music as a stress coping strategy has a demonstrated effect on the human response to stress. The use of music has been shown to lower patients' perceived levels of stress, while also reducing physical manifestations of stress such as elevated heart rate, blood pressure, and stress hormone levels. Different types of music appear to have different effects on stress levels, with classical and self-selected genres being the most effective. However, despite demonstrated effectiveness in empirical studies, many still question the effectiveness of this coping strategy. Nevertheless, it remains an attractive option for patients who want an easy and inexpensive way to respond to stress.
== See also ==
Music therapy
Coping (psychology)
Mindfulness-based stress reduction
Coping strategies
Stress management
== References == | Wikipedia/Music_as_a_coping_strategy |
The practitioner–scholar model is an advanced educational and operational model that is focused on practical application of scholarly knowledge. It was initially developed to train clinical psychologists but has since been adapted by other specialty programs such as business, public health, and law.
== Model ==
=== Creation ===
In 1973, a new clinical psychology training model was proposed at the historic Vail Conference on Professional Training in Psychology in Vail, Colorado—the practitioner-scholar model—providing yet another path of training for those primarily interested in clinical practice.
Prior to this, in 1949, a groundbreaking conference was held in Boulder, Colorado, endorsing a model of study for clinicians that to this day dominates clinical programs at most university-based institutions: the scientist–practitioner model, designed to provide a rigorous grounding in research methods and a breadth of exposure to clinical psychology. Before the Vail Conference, research scientists had dominated the field of psychological work, and the new 'Vail' model called for more practitioner-oriented coursework.
=== Features ===
Several features differentiate the practitioner-scholar model from the other two models (the scientist-practitioner and clinical-scientist models):
Training in this model is more strongly focused on clinical practice than either of the other two.
Many (but not all) of these training programs grant a Psy.D. degree rather than a Ph.D. or Ed.D.
Admissions criteria may place more of an emphasis on personal qualities of the applicants or clinically related work experience.
These programs accept a much larger number of students than typical Ph.D. programs.
These programs are typically housed in a greater variety of institutional settings than are research scientist or scientist-practitioner programs.
Like scientist-practitioner training, practitioner-scholar training is characterized by core courses in both basic and applied psychology, supervision during extensive clinical experience, and research consumption. Both require predoctoral internships that are usually full-time appointments in universities, medical centers, community mental health centers, or hospitals.
== See also ==
Participant observation
Qualitative research
== References == | Wikipedia/Practitioner–scholar_model |
Mindfulness-based cognitive therapy (MBCT) is an approach to psychotherapy that uses cognitive behavioral therapy (CBT) methods in conjunction with mindfulness meditative practices and similar psychological strategies. Its conception and creation can be traced back to traditional approaches in East Asian formative and functional medicine, philosophy, and spirituality, born from the basic underlying tenets of classical Taoist, Buddhist, and Traditional Chinese medical texts, doctrine, and teachings.
Recently, mindfulness therapy has become of great interest to the scientific and medical community in the West, leading to the development of many innovative prevention and treatment strategies for physical and mental health conditions. One such approach is relapse prevention for individuals with major depressive disorder (MDD). A focus on MDD and attention to negative thought processes, such as false beliefs and rumination, distinguishes MBCT from other mindfulness-based therapies. Mindfulness-based stress reduction (MBSR), for example, is a more generalized program that also utilizes the practice of mindfulness. MBSR is a group-intervention program, like MBCT, that uses mindfulness to help improve the lives of individuals with chronic clinical ailments and high stress.
CBT-inspired methods are used in MBCT, such as educating the participant about depression and the role that cognition plays within it. MBCT takes practices from CBT and applies aspects of mindfulness to the approach. One example would be "decentering", a focus on becoming aware of all incoming thoughts and feelings and accepting them, but not attaching or reacting to them. This process aims to aid an individual in disengaging from self-criticism, rumination, and dysphoric moods that can arise when reacting to negative thinking patterns.
Like CBT, MBCT functions on the etiological theory that when individuals who have historically had depression become distressed, they return to automatic cognitive processes that can trigger a depressive episode. The goal of MBCT is to interrupt these automatic processes and teach the participants to focus less on reacting to incoming stimuli and instead to accept and observe them without judgment. Like MBSR, this mindfulness practice encourages the participant to notice when automatic processes are occurring and to alter their reaction to be more of a reflection. With regard to development, MBCT emphasizes awareness of thoughts, which helps individuals recognize negative thoughts that lead to rumination. It is theorized that this aspect of MBCT is responsible for the observed clinical outcomes.
Beyond the use of MBCT to reduce depressive symptoms, a meta-analysis by Chiesa and Serretti (2014) supports the effectiveness of mindfulness meditation in reducing cravings for individuals with substance abuse issues. Addiction is known to involve interference with the prefrontal cortex, which ordinarily allows the delay of the immediate gratification sought by the limbic and paralimbic brain regions in favor of longer-term benefits. The nucleus accumbens, together with the ventral tegmental area, constitutes the central link in the reward circuit, and it is one of the brain structures most closely involved in drug dependency. In an experiment with smokers, mindfulness meditation practiced over a two-week period, totaling five hours of meditation, decreased smoking by about 60% and reduced cravings, even for smokers who had no prior intention to quit. Neuroimaging among those who practice mindfulness meditation reveals increased activity in the prefrontal cortex.
== Background ==
Mindful cognitive learning has been an important part of Buddhist and Taoist practice and tradition in East Asia for thousands of years. It is an important component of Traditional Chinese medicine and is used extensively in Daoyin, Taiqi, Qigong, and Wuxing heqidao as a therapy, based on traditional intersectional medicine, for the prevention and treatment of disease, pain, and suffering of mind and body.
In 1991, Philip Barnard and John Teasdale created a multilevel concept of the mind called "Interacting Cognitive Subsystems" (ICS). The ICS model is based on Barnard and Teasdale's concept that the mind has multiple modes responsible for receiving and processing new information cognitively and emotionally. The model associates an individual's vulnerability to depression with the degree to which he or she relies on only one of the modes of mind, inadvertently blocking the other modes. The two main modes of mind are the "doing" mode and the "being" mode. The "doing" mode, also known as the "driven" mode, is very goal-oriented and is triggered when the mind develops a discrepancy between how things are and how the mind wishes things to be. The "being" mode is not focused on achieving specific goals; instead, the emphasis is on "accepting and allowing what is", without any immediate pressure to change it. The central component of ICS is metacognitive awareness: the ability to experience negative thoughts and feelings as mental events that pass through the mind, rather than as a part of the self. Individuals with high metacognitive awareness are able to avoid depression and negative thought patterns more easily during stressful life situations than individuals with low metacognitive awareness. Metacognitive awareness is regularly reflected through an individual's ability to decenter, that is, to perceive thoughts and feelings as impermanent and objective occurrences in the mind.
In Barnard and Teasdale's (1991) model, mental health is related to an individual's ability to disengage from one mode or to easily move among the modes of mind. Individuals who are able to flexibly move between the modes of mind based on conditions in the environment are in the most favorable state. The ICS model theorizes that the "being" mode is the most likely mode of mind that will lead to lasting emotional changes. Therefore, to prevent relapse in depression, cognitive therapy must promote this mode. This led Teasdale to the creation of MBCT, which promotes the "being" mode.
This therapy was also created by Zindel Segal and Mark Williams and was partially based on the mindfulness-based stress reduction program, developed by Jon Kabat-Zinn. The theories behind mindfulness-based approaches to psychological issues function on the idea that being aware of things in the present, and not focusing on the past or the future, will allow the individual to be more apt to deal with current stressors and distressing feelings with a flexible and accepting mindset, rather than avoiding and, therefore, prolonging them.
== Applications ==
The MBCT program is a group intervention that lasts eight weeks. During these eight weeks there is a weekly two-hour class, with one additional day-long class after the fifth week. However, much of the practice is done outside class, with participants using guided meditations and attempting to cultivate mindfulness in their daily lives.
MBCT prioritizes learning how to pay attention or concentrate with purpose, in each moment and, most importantly, without judgment. Through mindfulness, clients can recognize that holding onto some of these feelings is ineffective and mentally destructive. MBCT focuses on having individuals recognize and be aware of their feelings instead of focusing on changing feelings. Mindfulness is also thought by Fulton et al. to be useful for the therapists during therapy sessions.
MBCT is an intervention program developed to specifically target vulnerability to depressive relapse. Throughout the program, patients learn mind management skills leading to heightened meta-cognitive awareness, acceptance of negative thought patterns, and an ability to respond in skillful ways. During MBCT patients learn to decenter their negative thoughts and feelings, allowing the mind to move from an automatic thought pattern to conscious emotional processing. MBCT can be used as an alternative to maintenance antidepressant treatment, though it may be no more effective.
Although the primary purpose of MBCT is to prevent relapse in depressive symptomology, clinicians have been formulating ways in which MBCT can be used to treat physical symptoms of other diseases, such as diabetes and cancer. Clinicians are also discovering ways to use MBCT to treat the anxiety and weariness associated with these diseases.
== Evaluation of effectiveness ==
A meta-analysis by Jacob Piet and Esben Hougaard of the University of Aarhus, Denmark, found that MBCT could be a viable option for individuals with MDD in preventing a relapse. Various studies have shown that it is most effective with individuals who have a history of at least three past episodes of MDD; within that population, participants with life-event-triggered depressive episodes were least receptive to MBCT. According to a 2017 meta-analysis of 547 patients, mindfulness-based interventions support a 30–60% decrease in depressive and anxious symptoms, in addition to reducing the overall level of patient stress.
An MBCT-based program offered by the Tees, Esk, and Wear Valleys NHS Foundation Trust showed significant improvements in measures of psychological distress, risk of burnout, self-compassion, anxiety, worry, mental well-being, and compassion for others after completion of the program. Research supports that MBCT results in increased self-reported mindfulness, which suggests increased present-moment awareness, decentering, and acceptance, in addition to decreased maladaptive cognitive processes such as judgment, reactivity, rumination, and thought suppression. Results of a 2017 meta-analysis highlight the importance of home practice and its relation to conducive outcomes for mindfulness-based interventions.
== See also ==
Buddhism and psychology
Buddhist meditation
Neural mechanisms of mindfulness meditation
Mindfulness-based stress reduction
Mindfulness-based pain management
Full Catastrophe Living
== References ==
== Further reading ==
Mindfulness-based cognitive therapy for depression: a new approach to preventing relapse, by Zindel V. Segal, J. Mark G. Williams, John D. Teasdale. Guilford Press, 2002. ISBN 1-57230-706-4.
Mindfulness: Finding Peace in a Frantic World, by Mark Williams and Danny Penman. Rodale Books, US (October 25, 2011); Piatkus, UK (5 May 2011).
Mindfulness-based treatment approaches: clinician's guide to evidence base and applications, by Ruth A. Baer. Academic Press, 2006. ISBN 0-12-088519-0.
Mindfulness-Based Cognitive Therapy for Anxious Children: A Manual for Treating Childhood Anxiety, by Randye Semple, Jennifer Lee. New Harbinger Pubns Inc, 2010. ISBN 1-57224-719-3.
Mindfulness Practice in the Treatment of Traumatic Stress, U.S. Department of Veterans Affairs.
Mindfulnet.org, the independent mindfulness information resource: information on MBCT, MBSR, research, applications, and resources.
== External links ==
Your Guide to Mindfulness-Based Cognitive Therapy, MBCT.com
Mindfulness-based Cognitive Therapy
Oxford Mindfulness Centre
Mindfulness Meditation in daily life | Wikipedia/Mindfulness-based_cognitive_therapy |
Art therapy is a distinct discipline that incorporates creative methods of expression through visual art media. Art therapy, as a creative arts therapy profession, originated in the fields of art and psychotherapy and may vary in definition. Art therapy encourages creative expression through painting, drawing, or modeling. It may work by providing a person with a safe space to express their feelings and allow them to feel more in control over their life.
There are three main ways that art therapy is employed. The first is analytic art therapy, which is based on theories from analytical psychology and, in more cases, psychoanalysis; it focuses on the client, the therapist, and the ideas transferred between them through art. The second is art psychotherapy, which focuses more on the psychotherapist and their verbal analysis of the client's artwork. The third is art as therapy: some art therapists practicing art as therapy believe that verbally analyzing the client's artwork is not essential, and they therefore stress the creation process of the art instead. In all approaches to art therapy, the art therapist's client utilizes paint, paper and pen, clay, sand, fabric, or other media to understand and express their emotions.
Art therapy can be used to help people improve cognitive and sensory motor function, self-esteem, self-awareness, and emotional resilience. It may also aid in resolving conflicts and reducing distress.
Current art therapy includes a vast number of other approaches such as person-centered, cognitive, behavior, Gestalt, narrative, Adlerian, and family. The tenets of art therapy involve humanism, creativity, reconciling emotional conflicts, fostering self-awareness, and personal growth.
Art therapy supports positive psychology by helping people find well-being through unique pathways that add meaning to their lives and improve positivity.
== History ==
In the history of mental health treatment, art therapy (combining studies of psychology and art) is still a relatively new field. This type of unconventional therapy is used to cultivate self-esteem and awareness, improve cognitive and motor abilities, resolve conflicts or stress, and inspire resilience in patients. It invites sensory, kinesthetic, perceptual, and symbolic expression to address issues that verbal psychotherapy cannot reach. Although art therapy is a relatively young therapeutic discipline, its roots lie in the use of the arts in the 'moral treatment' of psychiatric patients in the late 18th century.
Art therapy as a profession began in the mid-20th century, arising independently in English-speaking and European countries. Art had been used at the time for various reasons: communication, inducing creativity in children, and in religious contexts. The early art therapists who published accounts of their work acknowledged the influence of aesthetics, psychiatry, psychoanalysis, rehabilitation, early childhood education, and art education, to varying degrees, on their practices.
The British artist Adrian Hill coined the term art therapy in 1942. Hill, recovering from tuberculosis in a sanatorium, discovered the therapeutic benefits of drawing and painting while convalescing. He wrote that the value of art therapy lay in "completely engrossing the mind (as well as the fingers)…releasing the creative energy of the frequently inhibited patient", which enabled the patient to "build up a strong defense against his misfortunes". He suggested artistic work to his fellow patients. That began his art therapy work, which was documented in 1945 in his book, Art Versus Illness.
The artist Edward Adamson, demobilized after World War II, joined Adrian Hill to extend Hill's work to the British long-stay mental hospitals. Adamson studied connections between a person's artistic expression and their release of emotions, in part through the depiction of patients' emotions in the art they created. Adamson's Collection started as a way to create an environment where patients felt comfortable expressing themselves through art, in order to gain a deeper understanding of how the mind is affected by mental illness; mental health professionals would then analyze the art. Other early proponents of art therapy in Britain include E. M. Lyddiatt, Michael Edwards, Diana Raphael-Halliday and Rita Simons. The British Association of Art Therapists was founded in 1964.
U.S. art therapy pioneers Margaret Naumburg and Edith Kramer began practicing at around the same time as Hill. Naumburg, an educator, asserted that "art therapy is psychoanalytically oriented" and that free art expression "becomes a form of symbolic speech which ... leads to an increase in verbalization in the course of therapy." Edith Kramer, an artist, pointed out the importance of the creative process, psychological defenses, and artistic quality, writing that "sublimation is attained when forms are created that successfully contain ... anger, anxiety, or pain." Other early proponents of art therapy in the United States include Elinor Ulman, Robert "Bob" Ault, and Judith Rubin. The American Art Therapy Association was founded in 1969.
National professional associations of art therapy exist in many countries, including Brazil, Canada, Finland, Lebanon, Israel, Japan, the Netherlands, Romania, South Korea, Sweden, and Egypt. International networking contributes to the establishment of standards for education and practice.
Diverse perspectives exist on the history of art therapy, which complement those that focus on the institutionalization of art therapy as a profession in Britain and the United States.
== Definitions ==
There are various definitions of the term art therapy.
The British Association of Art Therapists defines art therapy as: "a form of psychotherapy that uses art media as its primary mode of expression and communication." They also add that "clients who are referred to an art therapist need not have previous experience in art, the art therapist is not primarily concerned with making an aesthetic or diagnostic assessment of the client's image."
The American Art Therapy Association defines art therapy as: "an integrative mental health and human services profession that enriches the lives of individuals, families, and communities through active art-making, creative process, applied psychological theory, and human experience within a psychotherapeutic relationship."
The website Psychology.org defines art therapy as: "a tool therapists use to help patients interpret, express, and resolve their emotions and thoughts. Patients work with an art therapist to explore their emotions, understand conflicts or feelings that are causing them distress, and use art to help them find resolutions to those issues."
== Uses ==
As a regulated mental health profession, art therapy is employed in many clinical and other settings with diverse populations, and it is increasingly recognized as a valid form of therapy. Art therapy can also be found in non-clinical settings, such as art studios and creativity development workshops. Licensing for art therapists varies from state to state, with some states recognizing art therapy as a separate license and some licensing it under a related field such as professional counseling or mental health counseling. States with licensing include Connecticut, Delaware, New Jersey, New Mexico, Kentucky, Mississippi, Maryland, Oregon, Ohio, Tennessee, Virginia, the District of Columbia, Texas, New York, Pennsylvania, Wisconsin, and Utah. Art therapists must have a master's degree that includes training in the creative process, psychological development, and group therapy, and they must complete a clinical internship. Depending on the state, province, or country, the term "art therapist" may be reserved for professionals trained in both art and therapy who hold a master's or doctoral degree in art therapy, or certification in art therapy obtained after a graduate degree in a related field. Other professionals, such as clinical mental health counselors, social workers, psychologists, and play therapists, optionally combine art making with basic psychotherapeutic modalities in their treatment. Therapists may better understand a client's absorption of information after assessing elements of their artwork.
As of 2011, there has been consistent research showing the efficacy of art therapy through systematic reviews and meta-analyses in various contexts.
=== Acute illness ===
A review of the literature has shown the influence of art therapy on patient care and found that participants in art therapy programs have less difficulty sleeping, among other benefits. Additionally, clinical studies have found that patients in units with art therapy exhibited better vital signs, reduced stress-related cortisol levels, and required less medication to induce sleep. Other studies have found that merely observing a landscape photograph in a hospital room reduced the need for narcotic painkillers and shortened recovery time in the hospital. In addition, either looking at or creating art in hospitals helped stabilize vital signs, speed up the healing process, and increase optimism in patients.
=== Cancer ===
Many studies have been conducted on the benefits of art therapy on cancer patients. Art therapy has been found useful for supporting patients during the stress of surgery, radiation, and chemotherapy treatment.
In a study involving women facing cancer-related difficulties such as fear, pain, and altered social relationships, it was found that engaging in different types of visual art (textiles, card making, collage, pottery, watercolor, acrylics) helped these women in four major ways. First, it helped them focus on positive life experiences, relieving their ongoing preoccupation with cancer. Second, it enhanced their self-worth and identity by providing them with opportunities to demonstrate continuity, challenge, and achievement. Third, it enabled them to maintain a social identity that resisted being defined by cancer. Finally, it allowed them to express their feelings in a symbolic manner, especially during chemotherapy. Another study showed those who participated in these types of activities were discharged earlier than those who did not participate. Even relatively short-term art therapy interventions may significantly improve patients' emotional states and symptoms.
A review of twelve studies investigated the use of art therapy in cancer patients by examining the emotional, social, physical, and spiritual concerns of cancer patients. The review found that art therapy can improve the process of psychological readjustment to the change, loss, and uncertainty associated with surviving cancer. It was suggested that art therapy can provide a sense of "meaning-making" through the physical act of creating the art. When given five individual sessions of art therapy once per week, art therapy was shown to be useful for personal empowerment by helping the cancer patients understand their own boundaries in relation to the needs of other people. In turn, those who had art therapy treatment felt more connected to others and found social interaction more enjoyable than individuals who did not receive art therapy treatment. Furthermore, art therapy improved motivation levels, the ability to discuss emotional and physical health, general well-being, and quality of life in cancer patients.
Additionally, recent research has shown that creative expression during hospital stays can lower anxiety and pain perception and enhance physiological stability. In one clinical study, art therapy led to a statistically significant reduction in cancer-related symptoms such as fatigue and emotional distress.
=== Dementia ===
Art therapy has been observed to have positive effects on patients with dementia, with tentative evidence supporting benefits with respect to quality of life. Although art therapy helps with behavioral issues, it does not appear to reverse degenerating mental faculties. It is important that the art tools are easy to use and relatively simple to understand. Art therapy showed no clear results on memory or emotional well-being scales. However, the Alzheimer's Association states that art and music can enrich people's lives and allow for self-expression. D.W. Zaidel, a researcher and therapist at VAGA, claims that engagement with art can stimulate specific areas of the brain involved in language processing and visuospatial perception, two cognitive functions that decline significantly in dementia patients. Art therapy allows those experiencing memory loss to stay connected with other people and the world around them, giving them the opportunity to bond with those who matter in their lives. People with dementia can become very isolated as many of their abilities, including the ability to understand abstract thinking and to verbalize and communicate, disappear. By creating art in a group setting, people with dementia get a chance to interact with those around them, while reducing the pressure that many social gatherings can normally bring to those facing this condition.
=== Autism ===
Art therapy is increasingly recognized as a way to help address the challenges of people with autism. Art therapy may address core symptoms of autism spectrum disorders by promoting sensory regulation, supporting psychomotor development, and facilitating communication. The creative activities involved in art therapy, such as painting or drawing, can affect certain skills, like social interaction skills, which can be beneficial for those with autism. Art therapy is also thought to promote emotional and mental growth by allowing self-expression, visual communication, and creativity. Most importantly, studies have found that painting, drawing, or music therapies may allow people with autism to communicate in a manner more comfortable for them than speech. In Egypt, the Egyptian Autism Society implemented art therapy as a way to grow self-esteem and quality of life in children, incorporating basket weaving, a common cultural art activity, in art therapy programs. These art therapy activities were part of studies focused on self-esteem, which found that art therapy significantly "...increased inner strength and daily living skills and reduced symptoms of emotional disorders...". Other forms of therapy that tend to help individuals with autism include play therapy and ABA therapy. In India, a study was done to show the effectiveness of art therapy by using both a control and an experimental group of nine individuals with autism. One of the researchers, Koo, stated, "The positive changes were notable in the participants' cognitive, social, and motor skills".
=== Schizophrenia ===
A 2005 systematic review of art therapy as a supplemental treatment for schizophrenia found unclear effects. Group art therapy has been shown to improve some symptoms of schizophrenia. While studies concluded that art therapy did not improve Clinical Global Impression or Global Assessment of Functioning scores, they showed that the use of haptic art materials to express one's emotions, cognitions, and perceptions in a group setting reduced depressive themes and may improve self-esteem, reinforce creativity, and facilitate the integrative therapeutic process for people with schizophrenia. Overall, some tests of the effectiveness of art therapy for patients with schizophrenia show effective results. In a pilot study by Crawford, professionals used art-therapy interventions to assist the patients' processing and understanding of the image when creating art, and these patients showed a decline in negative manifestations of schizophrenia compared to patients who received typical care. Art therapy resulted in increased emotional awareness for these patients; by the end of the treatment, the art-therapy group had very few positive manifestations of schizophrenia compared to the control group.
=== Post-traumatic stress disorder ===
Art therapy may alleviate trauma-induced emotions, such as shame and anger. It is also likely to increase trauma survivors' sense of empowerment and control by encouraging children to make choices in their artwork. Art therapy in addition to psychotherapy offered more reduction in trauma symptoms than psychotherapy alone.
Art therapy may be an effective way to access and process traumatic memories that were encoded visually in clients. Through art therapy, individuals may be able to make more sense of their traumatic experiences and form accurate trauma narratives. Gradual exposure to these narratives may reduce trauma-induced symptoms, such as flashbacks and nightmares. Repetition of directives reduces anxiety, and visually creating narratives helps clients build coping skills and balanced nervous system responses. This has been proven effective only in long-term art therapy interventions.
The ways in which art therapy addresses trauma can be summarized as follows: the process of making the artwork is a challenge which involves multiple cognitive, emotional, and physical factors.
=== Depression ===
"Depression is considered a mood disorder characterized by distorted or inconsistent emotional states that interfere with an individual’s ability to function". Since art therapy was originated in the psychotherapy field, just like the other mental-health related issues art therapy has been a new technique used to help individuals with depression and anxiety. Art therapy is not limited to traditional art mediums, it can range from painting, dancing, writing, knitting, etc.
Art can be a powerful tool for relieving depression symptoms because it can instill confidence, create room for expression, and foster creativity, which has been linked to decreases in anxiety, rigid behaviors, and even physical ailments such as heart disease and cancer. Art allows individuals to process emotions they might not have known they were dealing with, or helps express emotions they were not able to communicate verbally. Creativity and creation can both lend tremendous confidence to an individual, which can lift some of the symptoms of depression.
==== In children ====
Children who have experienced trauma may benefit from group art therapy. The group format is effective in helping survivors develop relationships with others who have experienced similar situations. Group art therapy may also be beneficial in helping children with trauma regain trust and social self-esteem. As children sometimes have a hard time expressing their emotions through words, art therapy gives them an opportunity to do so in another manner, which a therapist can use to determine the type of care the child needs. Not only does it benefit the therapist as it helps them create a treatment plan, but it also helps the child express their emotions and alleviate ill feelings.
==== In veterans ====
Art therapy has an established history of being used to treat veterans, with the American Art Therapy Association documenting its use as early as 1945. As with other sources of trauma, combat veterans may benefit from art therapy to access memories and to engage with treatment. A 2016 randomized control trial found that art therapy in conjunction with cognitive processing therapy (CPT) was more beneficial than CPT alone. Walter Reed Army Medical Center, the National Intrepid Center of Excellence and other Veteran Association institutions use art therapy to help veterans with PTSD.
=== Bereavement ===
According to the American Art Therapy Association, art therapy is "particularly effective during times of crisis, changes in circumstance, trauma, and grief." Bereavement is one challenging time where clients find it difficult to verbalize their feelings of loss and shock, and so may use creative means to express their feelings. For example, it has been used to enable children to express their feelings of loss where they may lack the maturity to verbalize their bereavement.
=== Eating disorders ===
Art therapy may help people with anorexia manage associated depression and support weight management. Traumatic or negative childhood experiences can result in unintentionally harmful coping mechanisms, such as eating disorders. Art therapy may provide an outlet for exploring these experiences and emotions.
Art therapy may be beneficial for clients with eating disorders because it allows them to use art materials to create visual representations of progress made, to represent alterations to the body, and to act out impulses in a nonthreatening way. Individuals with eating disorders tend to rely heavily on defense mechanisms to feel a sense of control; it is important that clients feel a sense of authority over their art products through freedom of expression and controllable art materials.
=== Daily challenges ===
Healthy individuals without mental or physical illnesses are also treated with art therapy; these patients often face ongoing challenges such as high-intensity jobs, financial constraints, and other non-traumatic personal issues. Studies have found that art therapy reduces levels of stress and burnout related to patients' professions.
== Methods ==
Art therapists choose materials and interventions appropriate to their clients' needs and design sessions to achieve therapeutic goals. They may use the creative process to help their clients increase insight, cope with stress, work through traumatic experiences, increase cognitive, memory and neurosensory abilities, improve interpersonal relationships and achieve greater self-fulfillment. Activities an art therapist chooses to do with clients depend on a variety of factors such as their mental state or age. Art therapists may draw upon images from resources such as the Archive for Research in Archetypal Symbolism to incorporate historical art and symbols into their work with patients.
Art therapy can take place in a variety of different settings. Art therapists may vary the goals of art therapy and the way they provide art therapy, depending upon the institution's or client's needs. After an assessment of the client's strengths and needs, art therapy may be offered in either an individual or group format, according to which is better suited to the person. Art therapist Dr. Ellen G. Horovitz wrote, "My responsibilities vary from job to job. It is wholly different when one works as a consultant or in an agency as opposed to private practice. In private practice, it becomes more complex and far reaching. If you are the primary therapist, then your responsibilities can swing from the spectrum of social work to the primary care of the patient. This includes dovetailing with physicians, judges, family members, and sometimes even community members that might be important in the caretaking of the individual."
Some types of art therapies include therapeutic photography, photo-art therapy, and video therapy. Therapeutic photography involves using photography for artistic statements without a therapist involved; it is a form of self-directed art therapy that involves self-awareness, creative expression, and wellness. Photo-art therapy uses photographs as the art medium: it involves art making during the therapy session and can incorporate photographic techniques. Films can also be considered part of this classification of art therapy. Finally, video therapy is an early term used to describe the use of film in art therapy; art therapists use videos to assist in treating clients, as patients watch certain videos known to have therapeutic results.
== Art-based assessments ==
Art therapists and other professionals use art-based assessments to evaluate emotional, cognitive, and developmental conditions. The first drawing assessment for psychological purposes was created in 1906 by German psychiatrist Fritz Mohr. In 1926, researcher Florence Goodenough created a drawing test to measure intelligence in children, called the Draw-A-Man test, which posited that a child who incorporated more detail into a drawing was more intelligent than one who did not. Goodenough and other researchers later concluded that the test had just as much to do with personality as with intelligence. Several other psychiatric art assessments were created in the 1940s and are still used today.
However, many art therapists eschew diagnostic testing and some writers question the validity of therapists making interpretative assumptions. Below are some examples of popular art therapy assessments:
=== Mandala Assessment Research Instrument ===
In this assessment, a person is asked to select a card from a deck with different mandalas, a repetitive symbol originating in Buddhism, and then must choose a color from a set of colored cards. The person is then asked to draw the mandala from the card they chose with an oil pastel of the color of their choice. The artist is then asked to explain whether there were any meanings, experiences, or related information connected to the mandala they drew. This test is based on the beliefs of Joan Kellogg, who sees a correlation between the images, patterns, and shapes in the mandalas that people draw and the personalities of the artists.
Mandala drawing is one of the most diverse art therapy methods that can reach different groups of people to address a wide variety of needs. A mandala is a drawing that starts from an inner point and then expands outwards using circles. Mandalas have been used to identify psychological issues, reduce stress and anxiety, and improve one's self-worth and well-being.
=== House–Tree–Person ===
Modeled after Goodenough's Draw-A-Man Test, childhood psychologist John Buck created the house-tree-person test in 1946. In the assessment, the client is asked to create a drawing that includes a house, a tree and a person, after which the therapist asks several questions about each. For example, with reference to the house, Buck wrote questions such as, "Is it a happy house?" and "What is the house made of?" Regarding the tree, questions include, "About how old is that tree?" and "Is the tree alive?" Concerning the person, questions include, "Is that person happy?" and "How does that person feel?"
The house–tree–person test is a projective personality test, a type of exam in which the test taker responds to or provides ambiguous, abstract, or unstructured stimuli (often in the form of pictures or drawings). It is designed to measure aspects of a person's personality, self-perceptions, and attitudes through interpretation of drawings and responses to questions.
== Outsider art ==
The relation between the fields of art therapy and outsider art has been widely debated. The term art brut was first coined by French artist Jean Dubuffet to describe "art created outside the boundaries of official culture". Dubuffet used the term art brut to focus on artistic practice by insane-asylum patients. The English translation "outsider art" was first used by art critic Roger Cardinal in 1972. Outsider art continues to be associated with mentally ill or developmentally disabled individuals.
Both terms have been criticized because of their social and personal impact on both patients and artists. Art therapy professionals have been accused of not putting enough emphasis on the artistic value and meaning of the artist's works, considering them only from a medical perspective. However, critics of the outsider art movement suggest that crediting an artist's work to an impairment is reductive.
== See also ==
Artistic freedom
Bibliotherapy
Comic book therapy
Creativity and mental health
Expressive therapy
List of psychotherapies
List of therapies
== References ==
== External links ==
Media related to Art therapy at Wikimedia Commons | Wikipedia/Art_therapy |
Mentalization-based treatment (MBT) is an integrative form of psychotherapy, bringing together aspects of psychodynamic, cognitive-behavioral, systemic and ecological approaches. MBT was developed and manualised by Peter Fonagy and Anthony Bateman for individuals with borderline personality disorder (BPD), some of whom suffer from disorganized attachment and have failed to develop a robust mentalization capacity. Fonagy and Bateman define mentalization as the process by which we implicitly and explicitly interpret the actions of ourselves and others as meaningful on the basis of intentional mental states. An alternative and simpler definition is "Seeing others from the inside and ourselves from the outside." The goal of treatment is for patients with BPD to increase their mentalization capacity, which should improve affect regulation, thereby reducing suicidality and self-harm, as well as strengthening interpersonal relationships. A version of MBT has also been developed for individuals with antisocial personality disorder (MBT-ASPD), delivered primarily in a group setting. Because individuals with ASPD are more likely to engage with and learn from peers they perceive as similar, the focus of MBT-ASPD is on facilitating constructive group interactions that support mentalizing and behavioral change.
More recently, a range of mentalization-based treatments using the "mentalizing stance" defined in MBT but directed at children (MBT-C), families (MBT-F), adolescents (MBT-A), and, for chaotic multi-problem youth, AMBIT (adaptive mentalization-based integrative treatment), has been under development by groups mainly based at the Anna Freud National Centre for Children and Families. Moreover, the MBT model has been used in treating patients with eating disorders (MBT-ED).
The treatment should be distinguished from and has no connection with mindfulness-based stress reduction (MBSR) therapy developed by Jon Kabat-Zinn.
== Goals ==
The major goals of MBT are:
better behavioral control
increased affect regulation
more intimate and gratifying relationships
the ability to pursue life goals
This is believed to be accomplished through increasing the patient's capacity for mentalization in order to stabilize the client's sense of self and to enhance stability in emotions and relationships.
== Focus of treatment ==
A distinctive feature of MBT is placing the enhancement of mentalizing itself as the focus of treatment. The aim of therapy is not developing insight, but the recovery of mentalizing. Therapy examines mainly the present moment, attending to events of the past only insofar as they affect the individual in the present. Other core aspects of treatment include a stance of curiosity, partnership with the patient rather than an 'expert' type role, monitoring and regulating emotional arousal, and identifying the affect focus. Transference is not included in the MBT model. MBT does encourage consideration of the patient-therapist relationship, but without necessarily generalizing to other relationships, past or present.
== Treatment procedure ==
MBT should be offered to patients twice per week with sessions alternating between group therapy and individual treatment. During sessions the therapist works to stimulate or nurture mentalizing. Particular techniques are employed to lower or raise emotional arousal as needed, to interrupt non-mentalizing and to foster flexibility in perspective-taking. Activation occurs through the elaboration of current attachment relationships, the therapist's encouragement and regulation of the patient's attachment bond with the therapist and the therapist's attempts to create attachment bonds between members of the therapy group.
== Mechanisms of change ==
The safe attachment relationship with the therapist provides a relational context in which it is safe for the patient to explore the mind of the other. Fonagy and Bateman have recently proposed that MBT (and other evidence-based therapies) works by providing ostensive cues that stimulate epistemic trust. The increase in epistemic trust, together with a persistent focus on mentalizing in therapy, appears to facilitate change by leaving people more open to learning outside of therapy, in the social interactions of their day-to-day lives.
== Efficacy ==
Fonagy, Bateman, and colleagues have done extensive outcome research on MBT for borderline personality disorder. The first randomized, controlled trial was published in 1999, concerning MBT delivered in a partial hospital setting. The results showed real-world clinical effectiveness that compared favorably with existing treatments for BPD. A follow-up study published in 2003 demonstrated that MBT is cost-effective. Encouraging results were also found in an 18-month study, in which subjects were randomly assigned to an outpatient MBT treatment condition versus a structured clinical management (SCM) treatment. The lasting efficacy of MBT was demonstrated in an 8-year follow-up of patients from the original trial, comparing MBT versus treatment as usual. In that research, patients who had received MBT had less medication use, fewer hospitalizations and longer periods of employment compared to patients who received standard care. Replication studies have been published by other European investigators. Researchers have also demonstrated the effectiveness of MBT for adolescents as well as that of a group-only format of MBT.
== References ==
Bateman, A.W.; Fonagy, P. (2004). "Mentalization-based treatment of BPD". Journal of Personality Disorders. 18 (1): 36–51. doi:10.1521/pedi.18.1.36.32772. PMID 15061343.
Bateman, A.W.; Fonagy, P. (2008). "Comorbid antisocial and borderline personality disorders: mentalization-based treatment". Journal of Clinical Psychology. 64 (2): 181–194. doi:10.1002/jclp.20451. PMID 18186112.
Midgley N.; Vrouva I., eds. (2012). Minding the Child: mentalization-based interventions with children, young people and their families. Routledge. ISBN 978-1-136-33641-6.
== Further reading ==
Allen, J.G., Fonagy, P. (2006). Handbook of mentalization-based treatment. Chichester, UK: John Wiley. ISBN 978-0470015612.
Allen, J.G., Fonagy, P., Bateman, A.W. (2008) Mentalizing in clinical practice. Arlington, USA: American Psychiatric Publishing. ISBN 978-1585623068.
John M. Grohol (March 17, 2008). "Mentalization Based Therapy". PsychCentral. | Wikipedia/Mentalization-based_treatment |
The generative theory of tonal music (GTTM) is a system of music analysis developed by music theorist Fred Lerdahl and linguist Ray Jackendoff. First presented in their 1983 book of the same title, it constitutes a "formal description of the musical intuitions of a listener who is experienced in a musical idiom" with the aim of illuminating the unique human capacity for musical understanding.
The musical collaboration between Lerdahl and Jackendoff was inspired by Leonard Bernstein's 1973 Charles Eliot Norton Lectures at Harvard University, wherein he called for researchers to uncover a musical grammar that could explain the human musical mind in a scientific manner comparable to Noam Chomsky's revolutionary transformational or generative grammar.
Unlike the major methodologies of music analysis that preceded it, GTTM models the mental procedures by which the listener constructs an unconscious understanding of music, and uses these tools to illuminate the structure of individual compositions. The theory has been influential, spurring further work by its authors and other researchers in the fields of music theory, music cognition and cognitive musicology.
== Theory ==
GTTM focuses on four hierarchical systems that shape our musical intuitions. Each of these systems is expressed in a strict hierarchical structure where dominant regions contain smaller subordinate elements and equal elements exist contiguously within a particular and explicit hierarchical level. In GTTM any level can be small-scale or large-scale depending on the size of its elements.
=== Structures ===
==== I. Grouping structure ====
GTTM considers grouping analysis to be the most basic component of musical understanding. It expresses a hierarchical segmentation of a piece into motives, phrases, periods, and still larger sections.
==== II. Metrical structure ====
Metrical structure expresses the intuition that the events of a piece are related to a regular alternation of strong and weak beats at a number of hierarchical levels. It is a crucial basis for all the structures and reductions of GTTM.
==== III. Time-span reduction ====
Time-span reductions (TSRs) are based on information gleaned from metrical and grouping structures. They establish tree structure-style hierarchical organizations uniting time-spans at all temporal levels of a work. The TSR analysis begins at the smallest levels, where metrical structure marks off the music into beats of equal length (or more precisely into attack points separated by uniform time-spans) and moves through all larger levels where grouping structure divides the music into motives, phrases, periods, theme groups, and still greater divisions. It further specifies a “head” (or most structurally important event) for each time-span at all hierarchical levels of the analysis. A completed TSR analysis is often called a time-span tree.
==== IV. Prolongational reduction ====
Prolongational reduction (PR) provides our "psychological" awareness of tensing and relaxing patterns in a given piece with precise structural terms. In time-span reduction, the hierarchy of less and more important events is established according to rhythmic stability. In prolongational reduction, hierarchy is concerned with relative stability expressed in terms of continuity and progression, the movement toward tension or relaxation, and the degree of closure or non-closure. A PR analysis also produces a tree-structure style hierarchical analysis, but this information is often conveyed in a visually condensed modified "slur" notation.
The need for prolongational reduction mainly arises from two limitations of time-span reductions. The first is that time-span reduction fails to express the sense of continuity produced by harmonic rhythm. The second is that time-span reduction—even though it establishes that particular pitch-events are heard in relation to a particular beat, within a particular group—fails to say anything about how music flows across these segments.
=== More on TSR vs PR ===
It is helpful to note some basic differences between a time-span tree produced by TSR and a prolongational tree produced by PR. First, though the basic branching divisions produced by the two trees are often the same or similar at high structural levels, branching variations between the two trees often occur as one travels further down towards the musical surface.
A second and equally important differentiation is that a prolongational tree carries three types of branching: strong prolongation (represented by an open node at the branching point), weak prolongation (a filled node at the branching point) and progression (simple branching, with no node). Time-span trees do not make this distinction. All time-span tree branches are simple branches without nodes (though time-span tree branches are often annotated with other helpful comments).
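The structural difference between the two trees can be made concrete as a data structure. The following Python sketch is purely illustrative and is not taken from Lerdahl and Jackendoff or from any published GTTM analyzer; the class and field names are hypothetical. It records the one distinction described above: time-span branches are plain, while every prolongational branch carries one of the three types.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Branching(Enum):
    """The three branch types of a prolongational tree."""
    STRONG_PROLONGATION = "open node"   # roots, bass, and melody identical
    WEAK_PROLONGATION = "filled node"   # roots identical; bass and/or melody differ
    PROGRESSION = "no node"             # harmonic roots differ

@dataclass
class TimeSpanNode:
    """A node in a time-span tree: each time-span has a head event
    and plain (untyped) branches to its subordinate spans."""
    head: str
    children: List["TimeSpanNode"] = field(default_factory=list)

@dataclass
class ProlongationalNode:
    """A node in a prolongational tree: branches are additionally typed."""
    head: str
    branching: Optional[Branching] = None  # None only at the prolongational head
    children: List["ProlongationalNode"] = field(default_factory=list)
```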
== Rules ==
Each of the four major hierarchical organizations (grouping structure, metrical structure, time-span reduction and prolongational reduction) is established through rules, which are in three categories:
The well-formedness rules, which specify possible structural descriptions.
The preference rules, which select from the possible structural descriptions those that correspond to experienced listeners’ hearings of any particular piece.
The transformational rules, which provide a means of associating distorted structures with well-formed descriptions.
=== I. Grouping structure rules ===
==== Grouping well-formedness rules (G~WFRs) ====
"Any contiguous sequence of pitch-events, drum beats, or the like can constitute a group, and only contiguous sequences can constitute a group."
"A piece constitutes a group."
"A group may contain smaller groups."
"If a group G1 contains part of a group G2, it must contain all of G2."
"If a group G1 contains a smaller group G2, then G1 must be exhaustively partitioned into smaller groups."
==== Grouping preference rules (G~PRs) ====
(Alternative form) "Avoid analyses with very small groups – the smaller, the less preferable."
(Proximity) Consider a sequence of four notes, n1–n4. The transition n2–n3 may be heard as a group boundary (a computational sketch of this rule follows the list) if:
(slur/rest) the interval of time from the end of n2 to the beginning of n3 is greater than that from the end of n1 to the beginning of n2 and that from the end of n3 to the beginning of n4, or if
(attack-point) the interval of time between the attack points of n2 and n3 is greater than that between those of n1 and n2 and that between those of n3 and n4.
(Change) Consider a sequence of four notes, n1–n4. The transition n2–n3 may be heard as a group boundary if marked by
(Register) the transition n2-n3 involves a greater intervallic distance than both n1-n2 and n3-n4, or if
(Dynamics) the transition n2-n3 involves a change in dynamics and n1-n2 and n3-n4 do not, or if
(Articulation) the transition n2-n3 involves a change in articulation and n1-n2 and n3-n4 do not, or if
(Length) n2 and n3 are of different length and both pairs n1,n2 and n3,n4 do not differ in length.
(Intensification) A larger-level group may be placed where the effects picked out by GPRs 2 and 3 are more pronounced.
(Symmetry) "Prefer grouping analyses that most closely approach the ideal subdivision of groups into two parts of equal length."
(Parallelism) "Where two or more segments of music can be construed as parallel, they preferably form parallel parts of groups."
(Time-span and prolongational stability) "Prefer a grouping structure that results in more stable time-span and/or prolongational reductions."
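To illustrate how local detail rules of this kind lend themselves to computation (a connection pursued in the automation literature listed at the end of the article), the following Python sketch encodes the two conditions of the Proximity rule above. It is a hypothetical rendering, not code from any published GTTM system; in particular, the representation of a note as an (attack time, duration) pair is an assumption.

```python
def gpr2_boundary(n1, n2, n3, n4):
    """Return True if the transition n2-n3 may be heard as a group
    boundary under the Proximity rule. Each note is an
    (attack_time, duration) pair, with times in beats."""

    def rest(a, b):
        # (slur/rest) time from the end of note a to the attack of note b
        return b[0] - (a[0] + a[1])

    def ioi(a, b):
        # (attack-point) interval between the attack points of a and b
        return b[0] - a[0]

    slur_rest = rest(n2, n3) > rest(n1, n2) and rest(n2, n3) > rest(n3, n4)
    attack_point = ioi(n2, n3) > ioi(n1, n2) and ioi(n2, n3) > ioi(n3, n4)
    return slur_rest or attack_point

# Four quarter notes with a one-beat rest before the third:
notes = [(0.0, 1.0), (1.0, 1.0), (3.0, 1.0), (4.0, 1.0)]
print(gpr2_boundary(*notes))  # True: n2-n3 has the largest gap and inter-onset interval
```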
==== Transformational grouping rules ====
Grouping overlap (p. 60)
Given a well-formed underlying grouping structure G as described by GWFRs 1-5, containing two adjacent groups g1 and g2 such that
g1 ends with event e1,
g2 begins with event e2, and
e1 = e2
a well-formed surface grouping structure G' may be formed that is identical to G except that
it contains one event e' where G had the sequence e1e2,
e'=e1=e2
all groups ending with e1 in G end with e' in G', and
all groups beginning with e2 in G begin with e' in G'.
Grouping elision (p. 61).
Given a well-formed underlying grouping structure G as described by GWFRs 1-5, containing two adjacent groups g1 and g2 such that
g1 ends with event e1,
g2 begins with event e2, and
(for left elision) e1 is harmonically identical to e2 and less than e2 in dynamics and pitch range or
(for right elision) e2 is harmonically identical to e1 and less than e1 in dynamics and pitch range,
a well-formed surface grouping structure G' may be formed that is identical to G except that
it contains one event e' where G had the sequence e1e2,
(for left elision) e'=e2,
(for right elision) e'=e1,
all groups ending with e1 in G end with e' in G', and
all groups beginning with e2 in G begin with e' in G'.
=== II. Metrical structure rules ===
==== Metrical well-formedness rules (M~WFRs) ====
"Every attack point must be associated with a beat at the smallest metrical level present at that point in the piece."
"Every beat at a given level must also be a beat at all smaller levels present at that point in that piece."
"At each metrical level, strong beats are spaced either two or three beats apart."
"The tactus and immediately larger metrical levels must consist of beats equally spaced throughout the piece. At subtactus metrical levels, weak beats must be equally spaced between the surrounding strong beats."
==== Metrical preference rules (M~PRs) ====
(Parallelism) "Where two or more groups or parts of groups can be construed as parallel, they preferably receive parallel metrical structure."
(Strong beat early) "Weakly prefer a metrical structure in which the strongest beat in a group appears relatively early in the group."
(Event) "Prefer a metrical structure in which beats of level Li that coincide with the inception of pitch-events are strong beats of Li."
(Stress) "Prefer a metrical structure in which beats of level Li that are stressed are strong beats of Li."
(Length) Prefer a metrical structure in which a relatively strong beat occurs at the inception of either
a relatively long pitch-event;
a relatively long duration of a dynamic;
a relatively long slur;
a relatively long pattern of articulation;
a relatively long duration of a pitch in the relevant levels of the time-span reduction;
a relatively long duration of a harmony in the relevant levels of the time-span reduction (harmonic rhythm).
(Bass) "Prefer a metrically stable bass."
(Cadence) "Strongly prefer a metrical structure in which cadences are metrically stable; that is, strongly avoid violations of local preference rules within cadences."
(Suspension) "Strongly prefer a metrical structure in which a suspension is on a stronger beat than its resolution."
(Time-span interaction) "Prefer a metrical analysis that minimizes conflict in the time-span reduction."
(Binary regularity) "Prefer metrical structures in which at each level every other beat is strong."
==== Transformational metrical rule ====
Metrical deletion (p. 101).
Given a well-formed metrical structure M in which
B1, B2 and B3 are adjacent beats of M at level Li, and B2 is also a beat at level Li+1,
T1 is the time-span from B1 to B2 and T2 is the time-span from B2 to B3, and
M is associated with an underlying grouping structure G in such a way that both T1 and T2 are related to a surface time-span T' by the grouping transformation performed on G of
left elision or
overlap,
then a well-formed metrical structure M' can be formed from M and associated with the surface grouping structure by
deleting B1 and all beats at all levels between B1 and B2 and associating B2 with the onset of T', or
deleting B2 and all beats at all levels between B2 and B3 and associating B1 with the onset of T'.
=== III. Time-span reduction rules ===
Time-span reduction rules begin with two segmentation rules and proceed to the standard WFRs, PRs and TRs.
==== Time-span segmentation rules ====
"Every group in a piece is a time-span in the time-span segmentation of the piece."
"In underlying grouping structure: a. each beat B of the smallest metrical level determines a time-span TB extending from B up to but not including the next beat of the smallest level; b. each beat B of metrical level Li determines a regular time-span of all beats of level Li-1 from B up to but not including (i) the next beat B’ of level Li or (ii) a group boundary, whichever comes sooner; and c. if a group boundary G intervenes between B and the preceding beat of the same level, B determines an augmented time-span T’B, which is the interval from G to the end of the regular time-span TB."
==== Time-span reduction well-formedness rules (TSR~WFRs) ====
"For every time-span T there is an event e (or a sequence of events e1 – e2) that is the head of T."
"If T does not contain any other time-span (that is, if T is the smallest level of time-spans), there e is whatever event occurs in T."
If T contains other time-spans, let T1,...,Tn be the (regular or augmented) time-spans immediately contained in T and let e1,...,en be their respective heads. Then the head is defined depending on: a. ordinary reduction; b. fusion; c. transformation; d. cadential retention (p. 159).
"If a two-element cadence is directly subordinate to the head e of a time-span T, the final is directly subordinate to e and the penult is directly subordinate to the final."
==== Time-span reduction preference rules (TSR~PRs) ====
(Metrical position) "Of the possible choices for head of time-span T, prefer a choice that is in a relatively strong metrical position."
(Local harmony) "Of the possible choices for head of time-span T, prefer a choice that is: a. relatively intrinsically consonant, b. relatively closely related to the local tonic."
(Registral extremes) "Of the possible choices for head of time-span T, weakly prefer a choice that has: a. a higher melodic pitch; b. a lower bass pitch."
(Parallelism) "If two or more time-spans can be construed as motivically and/or rhythmically parallel, preferably assign them parallel heads."
(Metrical stability) "In choosing the head of a time-span T, prefer a choice that results in more stable choice of metrical structure."
(Prolongational stability) "In choosing the head of a time-span T, prefer a choice that results in more stable choice of prolongational structure."
(Cadential retention) (p. 170).
(Structural beginning) "If for a time-span T there is a larger group G containing T for which the head of T can function as the structural beginning, then prefer as head of T an event relatively close to the beginning of T (and hence to the beginning of G as well)."
"In choosing the head of a piece, prefer the structural ending to the structural beginning."
=== IV. Prolongational reduction rules ===
==== Prolongational reduction well-formedness rules (PR~WFRs) ====
"There is a single event in the underlying grouping structure of every piece that functions as prolongational head."
"An event ei can be a direct elaboration of another pitch ej in any of the following ways: ei is a strong prolongation of ej if the roots, bass notes, and melodic notes of the two events are identical; ei is a weak prolongation of ej if the roots of the two events are identical but the bass and/or melodic notes differ;ei is a progression to or from ej if the harmonic roots of the two events are different."
"Every event in the underlying grouping structure is either the prolongational head or a recursive elaboration of the prolongational head."
(No crossing branches) "If an event ei is a direct elaboration of an event ej, every event between ei and ej must be a direct elaboration of either ei, ej, or some event between them."
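Because the second rule above classifies an elaboration purely from the roots, bass notes, and melodic notes of the two events, it can be restated as a short decision procedure. The following Python sketch is an illustrative reading of the rule only; the dictionary representation of an event is an assumption, not GTTM's.

```python
def classify_elaboration(ei, ej):
    """Classify how event ei directly elaborates event ej.
    Events are dicts with 'root', 'bass', and 'melody' fields
    (an assumed representation)."""
    if ei["root"] != ej["root"]:
        return "progression"          # harmonic roots differ
    if ei["bass"] == ej["bass"] and ei["melody"] == ej["melody"]:
        return "strong prolongation"  # roots, bass, and melody all identical
    return "weak prolongation"        # same root; bass and/or melody differ

# A root-position tonic elaborated by a first-inversion tonic and by the dominant:
tonic = {"root": "C", "bass": "C", "melody": "E"}
tonic6 = {"root": "C", "bass": "E", "melody": "G"}
dominant = {"root": "G", "bass": "G", "melody": "B"}
print(classify_elaboration(tonic6, tonic))    # weak prolongation
print(classify_elaboration(dominant, tonic))  # progression
```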
==== Prolongational reduction preference rules (PR~PRs) ====
(Time-span importance) "In choosing the prolongationally most important event ek of a prolongational region (ei – ej), strongly prefer a choice in which ek is relatively time-span important."
(Time-span segmentation) "Let ek be the prolongationally most important event of a prolongational region (ei – ej). If there is a time-span that contains ei and ek but not ej, prefer a prolongational reduction in which ek is an elaboration of ei; similarly with the roles of ei and ej reversed."
(Prolongational connection) "In choosing the prolongationally most important event ek of a prolongational region (ei – ej), prefer an ek that attaches so as to form a maximally stable prolongational connection with one of the endpoints of the region."
(Prolongational importance) "Let ek be the prolongationally most important event of a prolongational region (ei – ej). Prefer a prolongational reduction in which ek is an elaboration of the prolongationally more important of the endpoints."
(Parallelism) "Prefer a prolongational reduction in which parallel passages receive parallel analyses."
(Normative prolongational structure) "A cadenced group preferably contains four (five) elements in its prolongational structure: a. a prolongational beginning; b. a prolongational ending consisting of one element of the cadence; (c. a right-branching prolongation as the most important direct elaboration of the prolongational beginning); d. a right-branching progression as the (next) most important direct elaboration of the prolongational beginning; e. a left-branching ‘subdominant’ progression as the most important elaboration of the first element of the cadence."
==== Prolongational reduction transformational rules ====
Stability conditions for prolongational connection (p. 224): a. Branching condition; b. Pitch-collection condition; c. Melodic condition; d. Harmonic condition.
Interaction principle: "To make a sufficiently stable prolongational connection, ek must be chosen from the events in the two most important levels of time-span reduction represented in (ei – ej)."
== References ==
== Sources ==
Lerdahl, Fred; Jackendoff, Ray (1983). A Generative Theory of Tonal Music. Cambridge, Massachusetts: MIT Press.
== Further reading by the authors ==
=== Lerdahl ===
Lerdahl, Fred (1987). "Timbral Hierarchies". Contemporary Music Review 2, no. 1, p. 135–160.
Lerdahl, Fred (1989). "Atonal Prolongational Structure". Contemporary Music Review 3, no. 2. p. 65–87.
Lerdahl, Fred (1992). "Cognitive Constraints on Compositional Systems". Contemporary Music Review 6, no. 2, p. 97–121.
Lerdahl, Fred (Fall 1997). "Spatial and Psychoacoustic Factors in Atonal Prolongation". Current Musicology 63, p. 7–26.
Lerdahl, Fred (1998). "Prolongational Structure and Schematic Form in Tristan's Alte Weise". Musicae Scientiae, p. 27–41.
Lerdahl, Fred (1999). "Composing Notes". Current Musicology 67–68, p. 243–251.
Lerdahl, Fred (Autumn 2003). "Two Ways in Which Music Relates to the World". Music Theory Spectrum 25, no. 2, p. 367–373.
Lerdahl, Fred (2001). Tonal Pitch Space. New York: Oxford University Press. 391 pages. (This volume includes integrated and expanded versions of these articles: Lerdahl, Fred (Spring/Fall, 1988). "Tonal Pitch Space". Music Perception 5, no. 3, p. 315–350; and Lerdahl, Fred (1996). "Calculating Tonal Tension". Music Perception 13, no. 3, p. 319–363.)
Lerdahl, Fred (2009): "Genesis and Architecture of the GTTM Project". Music Perception 26(3), doi:10.1525/MP.2009.26.3.187, pp. 187–194.
=== Jackendoff ===
Jackendoff, Ray (1987): Consciousness and the Computational Mind. Cambridge: MIT Press. Chapter 11: "Levels of Musical Structure".
Jackendoff, Ray (2009): "Parallels and Nonparallels Between Language and Music". Music Perception 26(3), pp. 195–204.
=== Lerdahl and Jackendoff ===
(Autumn 1979 – Summer 1980). "Discovery Procedures vs. Rules of Musical Grammar in a Generative Music Theory". Perspectives of New Music 18, no. ½, p. 503–510.
(Spring 1981). "Generative Music Theory and Its Relation to Psychology". Journal of Music Theory (25th anniversary issue) 25, no. 1, p. 45–90.
(October 1981). "On the Theory of Grouping and Meter". The Musical Quarterly 67, no. 4, p. 479–506.
(1983). "An Overview of Hierarchical Structure in Music". Music Perception 1, no. 2.
=== Reviews of GTTM ===
Child, Peter (Winter 1984). "Review of A Generative Theory of Tonal Music, by Fred Lerdahl and Ray Jackendoff". Computer Music Journal 8, no. 4, p. 56–64.
Clarke, Eric F. (April 1986). "Theory, Analysis and the Psychology of Music: A Critical Evaluation of Lerdahl, F. and Jackendoff, R., A Generative Theory of Tonal Music". Psychology of Music 14, no. 1, pp. 3–16.
Feld, Steven (March 1984). "Review of A Generative Theory of Tonal Music, by Fred Lerdahl and Ray Jackendoff". Language in Society 13, no. 1, p. 133–135.
Hantz, Edwin (Spring 1985). "Review of A Generative Theory of Tonal Music, by Fred Lerdahl and Ray Jackendoff". Music Theory Spectrum 1, p. 190–202.
== Further reading ==
Sundberg, J. and B. Lindblom (1976). "Generative theories in language and music description". Cognition 4, 99–122.
Temperley, D. (2001). The Cognition of Basic Musical Structures. Cambridge, Massachusetts: MIT Press.
Palmer C. and C. L. Krumhansl (1987). "Independent temporal and pitch structures in determination of musical phrases". Journal of Experimental Psychology: Human Perception and Performance 13, 116–126.
Palmer C. and C. L. Krumhansl (1990). "Mental representations for musical meter". Journal of Experimental Psychology: Human Perception and Performance 16, 728–741.
Boros, James (Winter 1996). "A Response to Lerdahl". Perspectives of New Music 34, no. 1, 252–258.
Foulkes-Levy, Laurdella (1996). A Synthesis of Recent Theories of Tonal Melody, Contour, and the Diatonic Scale: Implications for Aural Perception and Cognition. Ph.D. diss., State University of New York at Buffalo.
David Temperley (2007). Music and Probability. Cambridge, Massachusetts: MIT Press.
Cook, Nicholas (1994). "Perception: A Perspective from Music Theory". In Musical Perceptions, ed. Rita Aiello with John A. Sloboda, 64–95. Oxford: Oxford University Press.
Cook, Nicholas (1999). "Analysing Performance and Performing Analysis". In Rethinking Music, ed. Nicholas Cook and Mark Everist, 239–261. Oxford: Oxford University Press.
Cook, Nicholas (2007). Music, Performance, Meaning: Selected Essays. Ashgate Contemporary Thinkers on Critical Musicology Series. Aldershot: Ashgate.
Nattiez, Jean-Jacques (1997). "What is the pertinence of the Lerdahl-Jackendoff theory?" In Perception and Cognition of Music ed. Irene Deliege and John A. Sloboda, 413–419. London: Psychology Press.
== Bibliography on automation of GTTM ==
Keiji Hirata, Satoshi Tojo, Masatoshi Hamanaka. An Automatic Music Analyzing System based on GTTM.
Masatoshi Hamanaka, Satoshi Tojo: Interactive Gttm Analyzer, Proceedings of the 10th International Conference on Music Information Retrieval Conference (ISMIR2009), pp. 291–296, October 2009.
Keiji Hirata, Satoshi Tojo, Masatoshi Hamanaka: Techniques for Implementing the Generative Theory of Tonal Music, ISMIR 2007 (7th International Conference on Music Information Retrieval) Tutorial, September 2007.
Masatoshi Hamanaka, Keiji Hirata, Satoshi Tojo: "Implementing a Generating Theory of Tonal Music". Journal of New Music Research, vol. 35, no. 4, pp. 249–277, 2006.
Masatoshi Hamanaka, Keiji Hirata, Satoshi Tojo: "FATTA: Full Automatic Time-span Tree Analyzer", Proceedings of the 2007 International Computer Music conference (ICMC2007), vol. 1, pp. 153–156, August 2007.
Masatoshi Hamanaka, Keiji Hirata, Satoshi Tojo: "Grouping Structure Generator Based on Music Theory GTTM", Transactions of Information Processing Society of Japan, vol. 48, no. 1, pp. 284–299, January 2007 (in Japanese).
Masatoshi Hamanaka, Keiji Hirata, Satoshi Tojo: "ATTA: Automatic Time-span Tree Analyzer based on Extended GTTM", Proceedings of the 6th International Conference on Music Information Retrieval Conference (ISMIR2005), pp. 358–365, September 2005.
Masatoshi Hamanaka, Keiji Hirata, Satoshi Tojo: "Automatic Generation of Metrical Structure based on GTTM", Proceedings of the 2005 International Computer Music conference (ICMC2005), pp. 53–56, September 2005.
Masatoshi Hamanaka, Keiji Hirata, Satoshi Tojo: "Automatic Generation of Grouping Structure based on the GTTM", Proceedings of the 2004 International Computer Music conference (ICMC2004), pp. 141–144, November 2004.
Masatoshi Hamanaka, Keiji Hirata, Satoshi Tojo: "An Implementation of Grouping Rules of the GTTM: Introducing of Parameters for Controlling Rules". Information Processing Society of Japan SIG Technical Report, vol. 2004, no. 41, pp. 1–8, May 2004 (in Japanese).
Lerdahl, F., & C. L. Krumhansl (2007). "Modeling Tonal Tension". Music Perception 24.4, pp. 329–366.
Lerdahl, F. (2009). "Genesis and Architecture of the GTTM Project". Music Perception 26, pp. 187–194. | Wikipedia/Generative_theory_of_tonal_music |
Aversion therapy is a form of psychological treatment in which the patient is exposed to a stimulus while simultaneously being subjected to some form of discomfort. This conditioning is intended to cause the patient to associate the stimulus with unpleasant sensations with the intention of quelling the targeted (sometimes compulsive) behavior.
Aversion therapies can take many forms, for example: placing unpleasant-tasting substances on the fingernails to discourage nail-chewing; pairing the use of an emetic with the experience of alcohol; or pairing behavior with electric shocks of mild to higher intensities.
Aversion therapy, when used in a nonconsensual manner, is widely considered to be inhumane. At the Judge Rotenberg Educational Center, aversion therapy is used to perform behavior modification in students as part of the center's applied behavior analysis (ABA) program. The center has been condemned by the United Nations for torture.
== In addictions ==
Various forms of aversion therapy have been used in the treatment of addiction to alcohol and other drugs since 1932 (discussed in Principles of Addiction Medicine, Chapter 8, published by the American Society of Addiction Medicine in 2003).
=== Alcohol addiction ===
An approach to the treatment of alcohol dependence that has been wrongly characterized as aversion therapy involves the use of disulfiram, a drug which is sometimes used as a second-line treatment under appropriate medical supervision. When a person drinks even a small amount of alcohol, disulfiram causes a highly unpleasant reaction, which can be clinically severe. Rather than as an actual aversion therapy, the unpleasantness of the disulfiram-alcohol reaction is deployed as a drinking deterrent for people receiving other forms of therapy who actively wish to be kept in a state of enforced sobriety (disulfiram is not administered to active drinkers).
Another approach to creating aversions to alcohol consumption is the implementation of succinylcholine chloride-induced paralysis and respiratory arrest following exposure to alcohol. However, this method has not been found to be more effective than emetic therapy or covert sensitization. Additionally, many patients reported a sense of fear and anxiety about dying as a result of the treatment, so this tactic is not recommended for therapeutic use.
=== Cocaine dependency ===
Emetic (to induce vomiting) therapy and faradic (administered shock) aversion therapy have been used to induce aversion for cocaine dependency. When used in a multimodal program, chemical aversion therapy displayed high patient acceptability among cocaine users as well as promising outcomes such as aversions to the sight, taste, and smell of the drug.
=== Cigarette addiction ===
It is unknown whether aversion therapy, in the form of rapid smoking (to provide an unpleasant stimulus), can help tobacco smokers overcome the urge to smoke. In recent years, however, a new tactic in aversion therapy has been introduced specifically for individuals who struggle with nicotine addiction: a device worn on the wrist of the user delivers a self-administered electrical stimulus aimed at deterring the use of nicotine.
== In compulsive habits ==
Aversion therapy has been used in the context of subconscious or compulsive habits, such as chronic nailbiting, hair-pulling (trichotillomania), or skin-picking (commonly associated with forms of obsessive compulsive disorder as well as trichotillomania).
In treating sexually deviant behavior, aversion therapy has been implemented in the form of shame. This kind of therapy targets individuals who feel disgusted by their compulsive behaviors; the disgust is intended to induce shame and thereby limit the urge to act on those behaviors. This is done by ensuring that the individual is aware they are being observed and judged during the act.
== In history ==
Pliny the Elder attempted to heal alcoholism in first-century Rome by putting putrid spiders in alcohol abusers' drinking glasses.
In 1935, Charles Shadel turned a colonial mansion in Seattle into the Shadel Sanatorium where he began treating alcoholics for their substance use disorder. His enterprise was launched with the help of gastroenterologist Walter Voegtlin and psychiatrist Fred Lemere. Together, they created a medical practice that exclusively treated chronic alcoholism through Pavlovian conditioned reflex aversion therapy.
In the 1960s and 1970s, aversion therapy was used on a small group of women in England who identified as lesbian or bisexual. Electric shocks and injections to induce vomiting were used to discourage the women from looking at other women. This was meant to work as a form of conversion therapy.
== In popular culture ==
In Anthony Burgess's novel A Clockwork Orange (1962) and the film adaptation (1971) directed by Stanley Kubrick, the main character Alex is subjected to a fictional form of aversion therapy, called the "Ludovico technique", with the aim of stopping his violent behavior.
In The Simpsons episode "There's No Disgrace Like Home" (1990), Dr. Monroe administers aversion therapy to the family to deter bad behavior.
In the King of the Hill episode "Keeping up with the Joneses" (1997), one of the characters is forced to smoke an entire carton of cigarettes to discourage them from smoking, only for this tactic to backfire and worsen addiction.
== Judge Rotenberg Center ==
The Judge Rotenberg Center is a school in Canton, Massachusetts that uses the methods of ABA to perform behavior modification in children with developmental disabilities. Until the device was banned in 2020, the center used a Graduated Electronic Decelerator (GED) to deliver electric skin shocks as aversives. The Judge Rotenberg Center has been condemned by the United Nations for torture as a result of this practice. While many human rights and disability rights advocates have campaigned to shut down the center, as of 2020 it remains open. Six students have died in preventable incidents at the school since it opened in 1971.
== Criticism ==
Aversion therapy has been scrutinized in recent decades due to the controversy surrounding the techniques it employs. These techniques, such as electric shocks and taste aversion, aim directly at creating an unpleasant stimulus to deter unwanted compulsive behavior. Some mental health professionals deem this tactic unethical, since it employs punishment as a therapeutic tool. Aversion therapy carries the risk of creating other psychological issues, such as anxiety, depression, pain and fear, and in severe cases even post-traumatic stress disorder (PTSD).
== See also ==
Behavior modification
== References == | Wikipedia/Aversion_therapy |
Play therapy refers to a range of methods of capitalising on children's natural urge to explore, harnessing it to meet and respond to their developmental and, later, their mental health needs. It is also used for forensic or psychological assessment purposes where the individual is too young or too traumatised to give a verbal account of adverse, abusive or potentially criminal circumstances in their life.
Play therapy is extensively acknowledged by specialists as an effective intervention in complementing children's personal and inter-personal development. Play and play therapy are generally employed with children aged six months through late adolescence and young adulthood. They provide a contained way for them to express their experiences and feelings through an imaginative self-expressive process in the context of a trusted relationship with the care giver or therapist. As children's and young people's experiences and knowledge are typically communicated through play, it is an essential vehicle for personality and social development.
In recent years, play therapists in the western hemisphere, as a body of health professionals, are usually members or affiliates of professional training institutions and tend to be subject to codes of ethical practice.
== Play as therapy ==
Jean Piaget emphasized play as an essential expression of children's feelings, especially because they do not know how to communicate their feelings with words. Play helps a child develop a sense of true self and a mastery over their innate abilities resulting in a sense of worth and aptitude. During play, children are driven to meet the essential need of exploring and affecting their environment. Play also contributes in the advancement of creative thinking. Play likewise provides a way for children to release strong emotions. During play, children may play out challenging life experiences by re-engineering them, thereby discharging emotional states, with the potential of integrating every experience back into stability and gaining a greater sense of mastery.
== General ==
Play therapy is a form of psychotherapy which uses play as the main mode of communication especially with children, and people whose speech capacity may be compromised, to determine and overcome psychosocial challenges. It is aimed at helping patients towards better growth and development, social integration, decreased aggression, emotional modulation, social skill development, empathy, and trauma resolution. Play therapy also assists with sensorimotor development and coping skills.
Play therapy is an effective technique for therapy, regardless of age, gender, or nature of the problem. When children do not know how to communicate their problems, they act out. This may look like misbehavior in school, with friends or at home. Play therapy seeks to provide a way children can cope with difficult emotions and helps them find healthier solutions and coping mechanisms.
=== Diagnostic tool ===
Play therapy can also be used as a tool for diagnosis. A play therapist observes a client playing with toys (play-houses, soft toys, dolls, etc.) to determine the cause of the disturbed behaviour. The objects and patterns of play, as well as the willingness to interact with the therapist, can be used to understand the underlying rationale for behaviour both inside and outside of therapy session. Caution, however, should be taken when using play therapy for assessment and/or diagnostic purposes.
According to the psychodynamic view, people (especially children) will engage in play behaviour to work through their interior anxieties. According to this viewpoint, play therapy can be used as a self-regulating mechanism, as long as children are allowed time for free play or unstructured play. However, some forms of therapy depart from non-directiveness in fantasy play, and introduce varying amounts of direction, during the therapy session.
A more directive approach to play therapy, for example, can entail the use of a type of desensitisation or relearning therapy to change troubling behaviours, either systematically or through a less structured approach. The hope is that, through the language of symbolic play, such desensitisation may take place as a natural part of the therapeutic experience and lead to positive treatment outcomes.
== Origins ==
Children's play has been recorded in artefacts at least since antiquity. In eighteenth-century Europe, Rousseau (1712–1778) wrote, in his book Emile, about the importance of observing play as a way to learn about and understand children.
=== From education to therapeutics ===
During the 19th century, European educationalists began to address play as an integral part of childhood education. They include Friedrich Fröbel, Rudolf Steiner, Maria Montessori, L. S. Vygotsky, Margaret Lowenfeld, and Hans Zulliger.
Hermine Hug-Hellmuth formalised play as therapy by providing children with toys to express themselves and observing their play to analyse the child. In 1919, Melanie Klein began to use play as a means of analyzing children under the age of six. She believed that child's play was essentially the same as the free association used with adults and that, as such, it would provide access to the child's unconscious. Anna Freud (1946, 1965) used play as a means to facilitate an attachment to the therapist and thereby gain access to the child's psyche.
Arguably the first documented case describing a proto-therapeutic use of play was in 1909, when Sigmund Freud published his work with "Little Hans", a five-year-old child suffering from a horse phobia. Freud saw him once briefly and recommended that his father take note of Hans' play to provide observations which might assist the child. The case of "Little Hans" was the first in which a child's difficulty was attributed to emotional factors.
== Models ==
Play therapy can be divided into two basic types: non-directive and directive. Non-directive play therapy is a non-intrusive method in which children are encouraged to play in the expectation that this will alleviate their problems as perceived by their care-givers and other adults. It is often classified as a psychodynamic therapy. In contrast, directive play therapy is a method that includes more structure and guidance by the therapist as children work through emotional and behavioural difficulties through play. It often contains a behavioural component and the process includes more prompting by the therapist. Both types of play therapy have received at least some empirical support. On average, play therapy treatment groups, when compared to control groups, improve by 0.8 standard deviations.
Jessie Taft (1933), (Otto Rank's American translator), and Frederick H. Allen (1934) developed an approach they entitled relationship therapy. The primary emphasis is placed on the emotional relationship between the therapist and the child. The focus is placed on the child's freedom and strength to choose.
Virginia Axline, a child therapist from the 1950s, applied Carl Rogers' work to children. Rogers had explored the therapeutic relationship and developed non-directive therapy, later called client-centred therapy. Axline summarized her concept of play therapy in her article, 'Entering the child's world via play experiences': she described play as a therapeutic experience that allows the child to express themselves in their own way and time, a type of freedom that allows adults and children to develop a secure relationship (Progressive Education, 27, p. 68). Axline also wrote Dibs in Search of Self, which describes a series of play therapy sessions over a period of a year.
=== Nondirective play therapy ===
Non-directive play therapy may encompass child psychotherapy and unstructured play therapy. It is guided by the notion that, if given the chance to speak and play freely in appropriate therapeutic conditions, troubled children and young people will be helped towards resolving their difficulties. Non-directive play therapy is generally regarded as mainly non-intrusive; its hallmark is that it has minimal constraints apart from the frame and thus can be used at any age. These approaches to therapy may originate from the child specialists Margaret Lowenfeld, Anna Freud, Donald Winnicott, Michael Fordham and Dora Kalff, or from the adult therapist Carl Rogers' non-directive psychotherapy and his characterisation of "the optimal therapeutic conditions". Virginia Axline adapted Carl Rogers's theories to child therapy in 1946 and is widely considered the founder of this therapy. Different techniques have since been established that fall under the realm of non-directive play therapy, including traditional sandplay therapy, play therapy using provided toys and Winnicott's Squiggle and Spatula games. Each of these forms is covered briefly below.
Using toys in non-directive play therapy with children is a method used by child psychotherapists and play therapists. These approaches are derived from the way toys were used in Anna Freud's theoretical orientation. The idea behind this method is that children will be better able to express their feelings toward themselves and their environment through play with toys than through verbalisation of their feelings. Through this experience children may be able to achieve catharsis, gain more stability and enjoyment in their emotions, and test their own reality. Popular toys used during therapy are animals, dolls, hand puppets, soft toys, crayons, and cars. Therapists have deemed such objects as more likely to open imaginative play or creative associations, both of which are important in expression.
==== Sandplay ====
The Jungian analytical method of psychotherapy using a tray of sand and miniature, symbolic figures is attributed to Dr. Margaret Lowenfeld, a paediatrician interested in child psychology, who pioneered her "World Technique" in 1929, drawing on the writer H. G. Wells and his Floor Games published in 1911. Dora Kalff, who studied with her, combined Lowenfeld's World Technique with Carl Jung's idea of the collective unconscious and received Lowenfeld's permission to name her version of the work "sandplay". As in traditional non-directive play therapy, research has shown that allowing an individual to freely play with the sand and accompanying objects in the contained space of the sandtray (22.5" x 28.5") can facilitate a healing process as the unconscious expresses itself in the sand and influences the sand player. When a client creates "scenes" in the sandtray, little instruction is provided and the therapist offers little or no talk during the process. This protocol emphasises the importance of holding what Kalff referred to as the "free and protected space" to allow the unconscious to express itself in symbolic, non-verbal play. Upon completion of a tray, the client may or may not choose to talk about his or her creation, and the therapist, without the use of directives and without touching the sandtray, may offer a supportive response that does not include interpretation. The rationale is that the therapist trusts and respects the process by allowing the images in the tray to exert their influence without interference.
Sandplay therapy can be used during individual sessions. The limitations presented by the boundaries of the sandtray can serve as physical and symbolic limits to unconscious, symbolic material that can be further reflected in analytical dialogue. The ISST, the International Society for Sandplay Therapy, defines guidelines for training in sandplay therapy as well as guidelines for becoming a teaching therapist.
==== Winnicott's Squiggle and Spatula games ====
Donald Winnicott probably first came upon the central notion of play through his wartime collaboration with the psychiatric social worker Clare Britton (later a psychoanalyst and his second wife), who in 1945 published an article on the importance of play for children. By "playing", he meant not only the ways that children of all ages play, but also the way adults "play" through making art, or engaging in sports, hobbies, humour, meaningful conversation, etc. Winnicott believed that it was only in playing that people are entirely their true selves, so it followed that for psychoanalysis to be effective, it needed to serve as a mode of playing.
Two of the playing techniques Winnicott used in his work with children were the squiggle game and the spatula game. The first involved Winnicott drawing a shape for the child to play with and extend (or vice versa) – a practice extended by his followers into that of using partial interpretations as a 'squiggle' for a patient to make use of.
The second involved Winnicott placing a spatula (medical tongue depressor) within the child's reach for her/him to play with. Winnicott considered that babies will be automatically attracted to an object, reach for it, and then discover what they intend to do with it after a while. From the child's initial hesitation in making use of the spatula, Winnicott derived his idea of the necessary 'period of hesitation' in childhood (or analysis), which makes possible a true connection to the toy, interpretation or object presented for transference.
Winnicott came to consider that "Playing takes place in the potential space between the baby and the mother-figure....[T]he initiation of playing is associated with the life experience of the baby who has come to trust the mother figure". "Potential space" was Winnicott's term for a sense of an inviting and safe interpersonal field in which one can be spontaneously playful while at the same time connected to others. Playing can also be seen in the use of a transitional object, a term Winnicott coined for an object, such as a teddy bear, which may have a quality for a small child of being both real and made-up at the same time. Winnicott pointed out that no one demands that a toddler explain whether his Binky is a "real bear" or a creation of the child's own imagination, and went on to argue that it was very important that the child be allowed to experience the Binky as being in an undefined, "transitional" status between the child's imagination and the real world outside the child. For Winnicott, one of the most important and precarious stages of development was in the first three years of life, when an infant grows into a child with an increasingly separate sense of self in relation to a larger world of other people. In health, the child learns to bring his or her spontaneous, real self into play with others; in a false self disorder, by contrast, the child may find it unsafe or impossible to do so, and may instead feel compelled to hide the true self from other people and pretend to be whatever they want instead. Playing with a transitional object can be an important early bridge "between self and other", which helps a child develop the capacity to be creative and genuine in relationships.
==== Research ====
Play therapy has been an established and popular mode of therapy for children for over sixty years. Critics of play therapy have questioned its effectiveness with children and have suggested using other interventions with greater empirical support, such as cognitive behavioral therapy. They also argue that therapists rely more on the institution of play than on the empirical literature when conducting therapy. Classically, Lebo argued against the efficacy of play therapy in 1953, and Phillips reiterated the argument in 1985. Both claimed that play therapy falls short in several areas of hard research: many studies used small sample sizes, which limits generalisability, and many compared the effects of play therapy only to a control group. Without comparisons to other therapies, it is difficult to determine whether play therapy really is the most effective treatment. Recent play therapy researchers have worked to conduct more experimental studies with larger sample sizes, specific definitions and measures of treatment, and more direct comparisons.
Outside the well-documented field of psychoanalytic child psychotherapy, research on the overall effectiveness of using toys in non-directive play therapy is comparatively lacking. Dell Lebo found that, in a sample of over 4,000 children, those who played with recommended toys rather than non-recommended or no toys during non-directive play therapy were no more likely to express themselves verbally to the therapist. Examples of recommended toys would be dolls or crayons, while examples of non-recommended toys would be marbles or a checkers board game. There is also ongoing controversy over choosing toys for use in non-directive play therapy, with choices largely made through intuition rather than through research. However, other research shows that following specific criteria when choosing toys in non-directive play therapy can make treatment more efficacious. Criteria for a desirable treatment toy include that it facilitates contact with the child, encourages catharsis, and leads to play that a therapist can easily interpret.
Several meta-analyses have shown promising results for the efficacy of non-directive play therapy. A meta-analysis by LeBlanc and Ritchie (2001) found an effect size of 0.66 for non-directive play therapy. This is comparable to the effect size of 0.71 found for psychotherapy used with children, indicating that non-directive play and non-play therapies are almost equally effective in treating children with emotional difficulties. A meta-analysis by Ray, Bratton, Rhine, and Jones (2001) found an even larger effect size for non-directive play therapy, with treated children performing 0.93 standard deviations better than non-treatment groups. These results are stronger than previous meta-analytic results, which reported effect sizes of 0.71, 0.71, and 0.66. A meta-analysis by Bratton, Ray, Rhine, and Jones (2005) also found a large effect size of 0.92 for children treated with non-directive play therapy. Results from all of these meta-analyses indicate that non-directive play therapy is as effective as psychotherapy used with children and even generates higher effect sizes in some studies.
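The effect sizes quoted in these meta-analyses are standardized mean differences (Cohen's d), which express the gap between treated and untreated groups in units of the pooled standard deviation. A minimal sketch of the convention, assuming the usual two-group pooling (individual meta-analyses may weight or pool their studies differently):

{\displaystyle d={\frac {{\bar {x}}_{1}-{\bar {x}}_{2}}{s_{\text{pooled}}}},\qquad s_{\text{pooled}}={\sqrt {\frac {(n_{1}-1)s_{1}^{2}+(n_{2}-1)s_{2}^{2}}{n_{1}+n_{2}-2}}}}

where the subscripts 1 and 2 denote the treatment and control groups. On this scale, the reported values of 0.66 to 0.93 mean that the average treated child scored roughly two-thirds of a standard deviation to nearly one standard deviation better than the average untreated child, conventionally read as medium-to-large effects.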
==== Predictors of effectiveness ====
Several predictors may influence how effective play therapy is for children. The number of sessions is a significant predictor of post-test outcomes, with more sessions indicating higher effect sizes. Positive effects can be seen with 16 sessions; however, effects peak when a child completes 35–40 sessions. An exception is children who undergo play therapy in critical-incident settings, such as hospitals and domestic violence shelters. Studies of these children found a large positive effect size after only 7 sessions, suggesting that children in crisis may respond more readily to treatment. Parental involvement is also a significant predictor of positive play therapy results; this involvement generally entails participation in each session with the therapist and the child. Parental involvement in play therapy sessions has also been shown to reduce stress in the parent-child relationship when children exhibit both internal and external behaviour problems. Despite these predictors, which have been shown to increase effect sizes, play therapy appears to be equally effective across age, gender, and individual versus group settings.
==== Play therapist training ====
Counselors in the play therapy field frequently face a number of obstacles when it comes to helping children. The majority of counselors starting out lack the basic knowledge needed to be an effective play therapist. Training for these counselors is delivered through several routes, such as university counselor education programs and workshops, in the hope of meeting the varied needs of children. Studies have also assessed counselors' skills according to the type of training they pursued, and have shown that those who studied play therapy at university level display higher levels of skill, knowledge, and appropriate attitudes. The children who need play therapy present with many different disorders and behaviors, and it is imperative that the therapist have these core skills for play therapy to be effective. Understanding the stages of child development, and how play can assist children through them, is an important step in that learning process.
Play therapist requirements may differ from state to state, but generally play therapists need a master's degree or higher in a mental-health-related subject. They must also have demonstrated skills in the field of child development. After obtaining a degree, additional coursework and supervised work are needed to obtain certification as a Registered Play Therapist (RPT). The additional requirements include 150 documented hours of instruction specific to play therapy; a minimum of 350 direct client contact hours (under the supervision of a Registered Play Therapist Supervisor, RPT-S); and 35 hours of direct supervision with 5 session observations.
=== Directive play therapy ===
In the 1930s David Levy developed a technique he called release therapy. His technique emphasized a structured approach. A child who had experienced a specific stressful situation would first be allowed to engage in free play. Subsequently, the therapist would introduce play materials related to the stress-evoking situation, allowing the child to reenact the traumatic event and release the associated emotions.
In 1955, Gove Hambidge expanded on Levy's work with a "structured play therapy" model, which was more direct in introducing situations. The format of the approach was to establish rapport, recreate the stress-evoking situation, play out the situation, and then allow free play for recovery.
Directive play therapy is guided by the notion that using directives to guide the child through play will produce faster change than nondirective play therapy. The therapist plays a much bigger role in directive play therapy. Therapists may use several techniques to engage the child, such as joining in the play themselves or suggesting new topics rather than letting the child direct the conversation. Stories read by directive therapists are more likely to have an underlying purpose, and therapists are more likely to offer interpretations of stories that children tell. In directive therapy, games are generally chosen for the child, and children are given themes and character profiles when engaging in doll or puppet activities. This therapy still leaves room for free expression by the child, but it is more structured than nondirective play therapy. Several established techniques are used in directive play therapy, including directed sandtray therapy and cognitive behavioral play therapy.
Directed sandtray therapy is more commonly used with trauma victims and involves "talk" therapy to a much greater extent. Because trauma is often debilitating, directed sandplay therapy works to create change in the present, without the lengthy healing process often required in traditional sandplay therapy; this is why the role of the therapist is important in this approach. Therapists may ask clients questions about their sandtray, suggest that they change the sandtray, ask them to elaborate on why they chose particular objects to put in the tray, and, on rare occasions, change the sandtray themselves. Use of directives by the therapist is very common. While traditional sandplay therapy is thought to work best in helping clients access troubling memories, directed sandtray therapy is used to help people manage their memories and the impact those memories have had on their lives.
Filial therapy, developed by Bernard and Louise Guerney, was an innovation in play therapy during the 1960s. The filial approach emphasizes a structured training program for parents in which they learn how to employ child-centered play sessions in the home. In the 1960s, with the advent of school counselors, school-based play therapy began a major shift from the private sector. Counselor-educators such as Alexander (1964); Landreth; Muro (1968); Myrick and Holdin (1971); Nelson (1966); and Waterland (1970) began to contribute significantly, especially in terms of using play therapy as both an educational and preventive tool in dealing with children's issues.
Roger Phillips, in the early 1980s, was one of the first to suggest that combining aspects of cognitive behavioral therapy with play interventions would be a promising approach to investigate. Cognitive behavioral play therapy was then developed for use with very young children, between two and six years of age. It incorporates aspects of Aaron Beck's cognitive therapy into play therapy, because children may not have developed the cognitive abilities necessary for participation in purely cognitive therapy. In this therapy, specific toys such as dolls and stuffed animals may be used to model particular cognitive strategies, such as effective coping mechanisms and problem-solving skills. Little emphasis is placed on the children's verbalizations in these interactions; the focus is rather on their actions and their play. Creating stories with the dolls and stuffed animals is a common method used by cognitive behavioral play therapists to change children's maladaptive thinking.
==== Efficacy ====
The efficacy of directive play therapy has been less well established than that of nondirective play therapy, yet the numbers still indicate that this mode of play therapy is also effective. In the 2001 meta-analysis by Ray, Bratton, Rhine, and Jones, directive play therapy was found to have an effect size of 0.73, compared with the 0.93 found for nondirective play therapy. Similarly, in the 2005 meta-analysis by Bratton, Ray, Rhine, and Jones, directive therapy had an effect size of 0.71, while nondirective play therapy had an effect size of 0.92. Although the effect sizes of directive therapy are statistically significantly lower than those of nondirective play therapy, they are still comparable to the effect sizes for psychotherapy used with children demonstrated by Casey, Weisz, and LeBlanc. A potential reason for the difference in effect size is the number of studies conducted on each approach: approximately 73 studies in each meta-analysis examined nondirective play therapy, while only 12 studies looked at directive play therapy. Once more research is done on directive play therapy, the effect sizes of nondirective and directive play therapy may prove more comparable.
=== Application of electronic games ===
The prevalence and popularity of video games in recent years has created a wealth of psychological studies centred on them. While the bulk of those studies have covered video game violence and addiction, some mental health practitioners in the West are becoming interested in including such games as therapeutic tools. These are by definition "directive" tools, since they are internally governed by algorithms. Since the introduction of electronic media into popular Western culture, the nature of games has become more diverse, complex, realistic, and social. The commonalities between electronic and traditional play (such as providing a safe space in which to work through strong emotions) suggest similar benefits. Video games have been divided into two categories: "serious" games, developed specifically for health or learning purposes, and "off-the-shelf" games, which lack a clinical focus but may be re-purposed for a clinical setting. Use of electronic games by clinicians is a new practice, and unknown risks as well as benefits may arise as the practice becomes more mainstream.
==== Research ====
Most of the current research relating to electronic games in therapeutic settings focuses on alleviating the symptoms of depression, primarily in adolescents. However, some games have been developed specifically for children with anxiety and attention deficit hyperactivity disorder (ADHD). The company behind the latter intends to create electronic treatments for children on the autism spectrum and those living with major depressive disorder, among other disorders. The favoured approach for mental health treatment is cognitive behavioral therapy (CBT). While this method is effective, it is not without limitations: for example, boredom with the material, patients forgetting or not practicing techniques outside of a session, or limited accessibility of care. It is these areas that therapists hope to address through the use of electronic games. Preliminary research has been done with small groups, and the conclusions drawn warrant studying the issue in greater depth.
Role-playing games (RPGs) are the most common type of electronic game used in therapeutic interventions. These are games in which players assume roles, and outcomes depend on the actions taken by the player in a virtual world. Psychologists can gain insight into a patient's capacity to create or experiment with an alternate identity. Others underscore how RPGs can ease the treatment process: playing is often experienced as an invitation to play, which makes the process feel safe and free of the risk of exposure or embarrassment. The most well-known and well-documented RPG-style game used in treatment is SPARX. Taking place in a fantasy world, SPARX users play through seven levels, each lasting about half an hour and each teaching a technique for overcoming depressive thoughts and behaviours. Reviews of the study have found the game treatment comparable to CBT-only therapy, though one review noted that SPARX alone is not more effective than standard CBT treatment. Studies have also found that role-playing games, when combined with Adlerian play therapy (AdPT) techniques, lead to increased psychosocial development. ReachOutCentral is geared toward youth and teens, providing gamified information on the intersection of thoughts, feelings, and behavior. An edition developed specifically to aid clinicians, ReachOutPro, offers more tools to increase patients' engagement.
==== Other applications ====
Biofeedback (sometimes known as applied psychophysiological feedback) media are particularly suited to treating a range of anxiety disorders. Biofeedback tools can measure heart rate, skin moisture, blood flow, and brain activity to ascertain stress levels, with the goal of teaching stress management and relaxation techniques. The development of electronic games using this equipment is still in its infancy, and thus few games are on the market. The developers of The Journey to Wild Divine have asserted that their products are a tool, not a game, though the three instalments contain many game elements. Conversely, Freeze Framer's design is reminiscent of an Atari system; three simplistic games are included in Freeze Framer's 2.0 model, using psychophysiological feedback as a controller. Both pieces of software produced significant changes in participants' depression levels. A biofeedback game initially designed to assist with anxiety symptoms, Relax to Win, was similarly found to have broader treatment applications. Extended Attention Span Training (EAST), developed by NASA to gauge the attention of pilots, was remodeled as an ADHD aid: participants' brain waves were monitored while they played commercial PlayStation video games, and the difficulty of the games increased as their attention waned. The efficacy of this treatment is comparable to traditional ADHD interventions.
Several online-only or mobile games (Re-Mission, Personal Investigator, Treasure Hunt, and Play Attention) have been specifically noted for use in alleviating disorders other than anxiety and mood disorders. Re-Mission 2 especially targets children, the game having been designed with the knowledge that today's western youth are immersed in digital media. Mobile applications for anxiety, depression, relaxation, and other areas of mental health are readily available in the Android Play Store and the Apple App Store. The proliferation of laptops, mobile phones, and tablets means one can access these apps at any time, in any place. Many of them are low-cost or even free, and the games do not need to be complex to be of benefit. Playing a three-minute game of Tetris has the potential to curb a number of cravings; a longer play time could reduce flashback symptoms from posttraumatic stress disorder; and an initial study found that a visual-spatial game such as Tetris or Candy Crush, when played soon after a traumatic event, could be used as a "therapeutic vaccine" to prevent future flashbacks.
==== Efficacy ====
While giving electronic media a place in the therapist's office is a new development, the equipment is not necessarily so: most western children are familiar with modern PCs, consoles, and handheld devices even if the practitioner is not. A more recent addition to interacting with a game environment is virtual reality equipment, which both adolescent and clinician might need to learn to use properly. The umbrella term for the preliminary studies done with VR is virtual reality exposure therapy (VRET). This research is based on traditional exposure therapy; VRET has been found to be more effective for participants than placement in a wait-list control group, though not as effective as in-person treatment. One study tracked two groups, one receiving a typical, lengthier treatment while the other was treated via shorter VRET sessions, and found that effectiveness for the VRET patients was significantly lower at the six-month mark.
In the future, clinicians may look forward to using electronic media as a way to assess patients, as a motivational tool, and as a way to facilitate in-person and virtual social interactions. Current data, though limited, point toward combining traditional therapy methods with electronic media for the most effective treatment.
== Play therapy in literature ==
In 1953 Clark Moustakas wrote his first book, Children in Play Therapy. In 1956 he compiled The Self, the result of dialogues between Moustakas, Abraham Maslow, Carl Rogers, and others, which helped forge the humanistic psychology movement. In 1973 Moustakas continued his work in play therapy and published The Child's Discovery of Himself. Moustakas' work was concerned with the kind of relationship needed to make therapy a growth experience. In the stages he described, the child's feelings are at first generally negative; as they are expressed, they become less intense, and the end result tends to be the emergence of more positive feelings and more balanced relationships.
There are now several published books outlining play therapy and specific techniques within it. The Association for Play Therapy maintains a comprehensive list of play therapy books on its website. These include 101 Play Therapy Techniques (published by Jason Aronson), A Handbook of Play Therapy with Aggressive Children by David E. Crenshaw, ADAPT: A Developmental, Attachment-based Play Therapy by Jennifer Lefebre, and many others that outline play therapy and its use in specific circumstances.
== Parent/child play therapy ==
Play therapy is an evidence-based approach that allows children to find ways to learn, process their emotions, and make meaning of the world around them. Play therapy can be used to address several issues, including trauma, autism, behavior problems, attachment, and language.
Training in nondirective play for parents has been shown to significantly reduce mental health problems in at-risk preschool children. One of the first parent/child play therapy approaches developed was Filial Therapy (in the 1960s - see History section above), in which parents are trained to facilitate nondirective play therapy sessions with their own children. Filial therapy has been shown to help children work through trauma and also resolve behavior problems.
Play therapy allows children who struggle with trauma to work through it and begin to trust beyond it. Adults who respond differently to a child's closed-off and defensive behaviors help the child start to develop trust beyond their trauma (Parker, Hergenrather, Smelser, & Kelly, 2021). When parents respond to children defensively, the child does not trust them because of the past trauma. Working with a child-centered play therapist allows the therapist to engage with the child, convey messages, and remain open to whatever the child may express regarding previous or current trauma. The therapist responds in an empathetic and understanding way, allowing the child to become open-minded and respond in an enjoyable way rather than in a self-protective, defensive way.
Another approach to play therapy that involves parents is Theraplay, which was developed in the 1970s. At first, trained therapists worked with children, but Theraplay later evolved into an approach in which parents are trained to play with their children in specific ways at home. Theraplay is based on the idea that parents can improve their children's behavior and help them overcome emotional problems by engaging their children in forms of play that replicate the playful, attuned, and empathic interactions of a parent with an infant. Studies have shown that Theraplay is effective in changing children's behavior, especially for children suffering from attachment disorders.
In the 1980s, Stanley Greenspan developed Floortime, a comprehensive, play-based approach for parents and therapists to use with autistic children. There is evidence for the success of this program with children diagnosed with autistic spectrum disorders.
Lawrence J. Cohen has created an approach called Playful Parenting, in which he encourages parents to play with their children to help resolve emotional and behavioral issues. Parents are encouraged to connect playfully with their children through silliness, laughter, and roughhousing.
In 2006, Garry Landreth and Sue Bratton developed a highly researched and structured way of teaching parents to engage in therapeutic play with their children, based on supervised entry-level training in child-centred play therapy. They named it Child Parent Relationship Therapy. Its 10 sessions focus on parenting issues in a group environment and utilise video and audio recordings to give parents feedback on their 30-minute 'special play times' with their children.
More recently, Aletha Solter has developed a comprehensive approach for parents called Attachment Play, which describes evidence-based forms of play therapy, including non-directive play, more directive symbolic play, contingency play, and several laughter-producing activities. Parents are encouraged to use these playful activities to strengthen their connection with their children, resolve discipline issues, and also help the children work through traumatic experiences such as hospitalization or parental divorce.
The emotional bond formed between a caregiver and their child is called attachment (Lin, 2003). Attachment issues are significant because a child can form either a secure or an insecure attachment to the primary caregiver, which, depending on the type of attachment, can lead to developmental and behavioral issues as the child ages. When using play therapy for attachment issues, it is essential to ease into it, because the child may be emotionally isolated; the therapy benefits both parent and child by connecting them on a deeper level. It allows the parent and the child to build their relationship and the child to feel more secure with the parent.
== See also ==
Art therapy
Drama therapy
Eurythmy
Music therapy
Froebel gifts
Eva Frommer
Montessori education
Charles E. Schaefer
International Journal of Play Therapy
The P.L.A.Y. Project
Waldorf education
== References ==
== Further reading ==
Axline, V. (1947). Nondirective therapy for poor readers. Journal of Consulting Psychology, 11, 61–69.
Axline, V. (1969, revised ed.). Play Therapy. New York: Ballantine Books.
Barrett, C., Hampe, T. E., & Miller, L. (1978). Research on child psychotherapy. In Garfield, S. & Bergin, A. (Eds.), Handbook of Psychotherapy and Behavior Change. New York: Wiley.
Freud, A. (1946). The psycho-analytic treatment of children. London: Imago.
Freud, A. (1965). The psycho-analytical treatment of children. New York: International Universities Press.
Freud, S. (1909). The case of "Little Hans" and the "Rat Man." London: Hogarth Press.
Froebel (1903). The education of man. New York: D. Appleton.
Guerney, B., Guerney, L., & Andronico, M. (1976). The therapeutic use of children's play. New York: Jason Aronson.
Grant, Robert Jason (Ed.), with Stone, Jessica, and Mellenthin, Clair. (2020). Play Therapy Theories and Perspectives: A Collection of Thoughts in the Field. London: Routledge. ISBN 9780367418373
Hug-Hellmuth, H (1921). "On the technique of child-analysis". International Journal of Psycho-Analysis. 2: 287–305.
Klein, M. The Collected Writings of Melanie Klein in four volumes, London: Hogarth Press.
Landreth, G. L. (2002). Play therapy: The art of the relationship (2nd ed.). New York: Brunner-Routledge. ISBN 1-58391-327-0.
Lanyado, Monica and Horne, Ann. (Eds.) (1999). The Handbook of Child and Adolescent Psychotherapy: Psychoanalytic Approaches. London: Routledge. ISBN 9780203135341. DOI https://doi.org/10.4324/9780203135341
Lowenfeld, M. (1939). "The world pictures of children: A method of recording and studying them". British Journal of Medical Psychology. 18: 65–101. doi:10.1111/j.2044-8341.1939.tb00710.x.
O'Connor, Kevin J; Schaefer, Charles E; Braverman, Lisa D, eds. (2015). Handbook of Play Therapy, 2nd Edition. Wiley. ISBN 978-1-118-85983-4.
Phillips, R.; Landreth, G. (1998). "Play therapists on play therapy (Part 2) Clinical issues in play therapy". International Journal of Play Therapy. 6 (2): 1–24. doi:10.1037/h0089416.
Schaefer, C. (1993). The therapeutic powers of play. New Jersey: Jason Aronson.
Schaefer, Charles E., & Kaduson, Heidi. (2006). Contemporary Play Therapy: Theory, Research, and Practice. United Kingdom: Guilford Publications.
Winnicott, D. W. (1971) The Piggle: An Account of the Psychoanalytic Treatment of a Little Girl. London: Hogarth Press, ISBN 0-14-014667-9
== External links ==
Association of Child Psychotherapists (ACP) the professional body for Psychoanalytic Child and Adolescent Psychotherapists in the UK
Arquetipo Ludi (Spanish)
Canadian Association of Play Therapy
Association of Play Therapy
British Association of Play Therapists
Play Therapy International
Play Therapy United Kingdom
The Play Therapy Institute
Play Therapy Qualification Training
Play Therapy Australia
British Association of Clinical Play Therapists
Sandtray Therapy
The Squiggle Foundation, London Archived 1 October 2016 at the Wayback Machine | Wikipedia/Play_therapy |
Feminist therapy is a set of related therapies arising from what proponents see as a disparity between the origins of most psychological theories and the fact that the majority of people seeking counseling are female. It focuses on societal, cultural, and political causes of, and solutions to, the issues faced in the counseling process. It openly encourages the client to participate in the world in a more social and political way.
Feminist therapy contends that women are in a disadvantaged position in the world due to sex, gender, sexuality, race, ethnicity, religion, age and other categories. Feminist therapists argue that many problems that arise in therapy are due to disempowering social forces; thus the goal of therapy is to recognize these forces and empower the client. In a feminist therapy setting the therapist and client work as equals. The therapist must demystify therapy from the beginning to show the client that she is her own rescuer, and the expectations, roles, and responsibilities of both client and therapist must be explored and equally agreed upon. The therapist recognizes that with every symptom a client has, there is a strength.
Feminist therapy grew out of concerns that established therapies were not helping women. Specific concerns of feminist therapists included gender bias and stereotyping in therapy; blaming victims of physical abuse and sexual abuse; and the assumption of a traditional nuclear family.
== Principles ==
An egalitarian relationship (a relationship in which the participants have equal status) between therapist and client is key in feminist therapy, utilizing the therapist's psychological knowledge and the client's knowledge of herself. The inherent power differentials between therapist and client are addressed, and the client must realize that the therapist is not giving her power, but power comes from within herself. This relationship provides a model for women to take responsibility in making all of their relationships egalitarian. Feminist therapists focus on embracing the client's strengths rather than fixing their weaknesses, and accept and validate the client's feelings.
Feminist therapy theory is always being revised and added to as social contexts change and the discourse develops.
The therapist always retains accountability.
The feminist therapy model is non-victim blaming.
The client's well-being is the leading principle in all aspects of therapy.
== Feminist therapists' responsibilities ==
Feminist therapists must integrate feminist analysis in all spheres of their work.
Feminist therapists must recognize the client's socioeconomic and political circumstances, especially with issues in access to mental health care.
Feminist therapists must be actively involved in ending oppression, empowering women and girls, respecting differences, and social change.
Feminist therapists must be aware of their own situated experience (their own socioeconomic and political situations as well as sex, gender, race, sexuality, etc.), must constantly self-evaluate and remedy their own biases and oppressive actions, and must keep learning about other dominant and non-dominant cultural and ethnic experiences.
Feminist therapists must accept and validate their client's experiences and feelings.
== Contributors ==
Jamie Kohanyi
Judith Worell
Pam Remer
Sandra Bem
Laura Brown
Jean Baker Miller
Carolyn Enns
Ellyn Kaschak
Bonnie Burstow
Judith V. Jordan
Mary N. Russell
== Criticism ==
In 1977, scholar Susan Thomas argued that feminist therapy was "more [a] part of a social movement than [a] type of psychotherapy", and was so intimately tied to broader social and political feminism that its legitimacy as a therapeutic school was questionable.
Psychiatrist Sally Satel of Yale University has been critical of feminist therapy since the late 1990s, characterizing it as promoting a paranoid, conspiratorial worldview. Satel argued in her 2000 book PC, M.D.: How Political Correctness Is Corrupting Medicine that the very concept of feminist therapy is contrary to the methods and goals of psychotherapy, sometimes going so far as to veer into potential malpractice. Traditionally, Satel notes, the goal of therapy is to help the patient understand and alter unrealistic thinking and unhealthy behaviors so as to improve the patient's confidence, interpersonal skills, and quality of life. Traditional therapy, while rooted in well-tested methods, must also be flexible enough to adapt to each patient's unique experiences, personality, and needs.
== See also ==
Trauma-informed feminist therapy
== References == | Wikipedia/Feminist_therapy |
Clinical behavior analysis (CBA; also called clinical behaviour analysis or third-generation behavior therapy) is the clinical application of behavior analysis (ABA). CBA represents a movement in behavior therapy away from methodological behaviorism and back toward radical behaviorism and the use of functional analytic models of verbal behavior—particularly, relational frame theory (RFT).
== Current models ==
Clinical behavior analysis (CBA) therapies include acceptance and commitment therapy (ACT), behavioral medicine (such as behavioral gerontology and pediatric feeding therapy), community reinforcement approach and family training (CRAFT), exposure therapies/desensitization (such as systematic desensitization), functional analytic psychotherapy (FAP, such as behavioral activation (BA) and integrative behavioral couples therapy), and voucher-based contingency management.
=== Acceptance and commitment therapy ===
Acceptance and commitment therapy is probably the most well-researched of all the third-generation behavior therapy models. Its development co-occurred with that of relational frame theory, with several researchers, such as Steven C. Hayes, involved in both. ACT has been argued to be based on relational frame theory, although this is a matter of some debate within the community. Originally, the approach was referred to as comprehensive distancing. Every practitioner mixes acceptance with a commitment to one's values; these ingredients become enmeshed in the treatment in different ways, which leads to ACT being either more on the mindfulness side or more on the behavior-changing side. ACT has, as of May 2022, been evaluated in over 900 randomized clinical trials for a variety of client problems. Overall, when compared to other active treatments designed or known to be helpful, the effect size for ACT is a Cohen's d of around 0.6, which is considered a medium effect size.
=== Behavioral activation ===
Behavioral activation emerged from a component analysis of cognitive behavior therapy, which focuses on trying to reverse the negative thoughts that contribute to emotional difficulties such as depression and anxiety. This research found no additive effect for the cognitive component. Behavioral activation is instead based on a matching-law model of reinforcement, sketched below. A recent review of the research supports the notion that the use of behavioral activation is clinically important for the treatment of depression.
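For readers unfamiliar with the matching law, the underlying quantitative idea can be stated compactly. Herrnstein's matching law holds that the relative rate of a behavior tracks the relative rate of reinforcement it produces; on this reading, depressed behavior persists because it captures a large share of the available reinforcement, and behavioral activation works by scheduling more reinforcement for active, valued behavior. A standard two-alternative form is:

{\displaystyle {\frac {B_{1}}{B_{1}+B_{2}}}={\frac {R_{1}}{R_{1}+R_{2}}}}

where B1 and B2 are the rates of two behaviors and R1 and R2 are the rates of reinforcement each obtains. This is the general behavioral principle; the specific quantitative model invoked in any given behavioral activation study may differ.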
=== Community reinforcement approach and family training ===
Community reinforcement approach and family training (CRAFT) is a model developed by Robert J. Meyers and based on the community reinforcement approach (CRA), first developed by Nathan Azrin and Hunt. The model focuses on the use of functional behavioral assessment to reduce drinking behavior. CRAFT combines CRA with family therapy.
=== Functional analytic psychotherapy ===
Functional analytic psychotherapy is based on a functional analysis of the therapeutic relationship. It places a greater emphasis on the therapeutic context and returns to the use of in-session reinforcement. The basic FAP analysis utilizes what is called the clinically relevant behavior (CRB1), which is the client's presenting problem as presented in-session. Client in-session actions that improve their CRB1s are referred to as CRB2s. Client statements, or verbal behavior, about CRBs are referred to as CRB3s. In general, 40 years of research supports the idea that in-session reinforcement of behavior can lead to behavioral change.
=== Integrative behavioral couples therapy ===
Integrative behavioral couples therapy developed from dissatisfaction with traditional behavioral couples therapy. It looks to Skinner (1966) for the difference between contingency-shaped and rule-governed behavior, and couples this analysis with a thorough functional assessment of the couple's relationship. Recent efforts have used radical behavioral concepts to interpret a number of clinical phenomena, including forgiveness.
== Clinical formulation ==
As with all behavior therapy, clinical behavior analysis relies on a functional analysis of problem behavior. Depending on the clinical model this analysis draws on B. F. Skinner's model of verbal behavior or relational frame theory.
== Professional organizations ==
The Association for Behavior Analysis International (ABAI) has a special interest group in clinical behavior analysis. ABAI serves as the core intellectual home for behavior analysts.
The Association for Behavioral and Cognitive Therapies (ABCT) also has an interest group in behavior analysis, which focuses on clinical behavior analysis.
The Association for Contextual Behavioral Science (ACBS) is devoted to third-generation therapies and basic research on derived relational responding and relational frame theory.
The Behavior Analyst Certification Board (BACB), in partnership with subject-matter experts, has produced a "Clinical Behavior Analysis" fact sheet.
== See also ==
Behavioral psychotherapy
== References == | Wikipedia/Clinical_behavior_analysis |
Psychotherapy discontinuation, also known as unilateral termination, patient dropout, and premature termination, is a patient's decision to stop mental health treatment before they have received an adequate number of sessions. In the United States, the prevalence of patient dropout is estimated at between 40% and 60% over the course of treatment; the overwhelming majority of patients who drop out do so after two sessions.
An exhaustive meta-analysis of 146 studies in Western countries showed that the mean dropout rate is 34.8% with a wide range of 10.3% to 81.0%. The studies from the US (n = 85) had a dropout rate of 37.9% (range: 33.0% to 43.0%).
== Differing definitions ==
Psychotherapy discontinuation can mean different things to different researchers or clinicians. Although the important aspects of what discontinuation consists of (the client's decision, symptoms not adequately reduced) typically remain constant, there can still be differences in how these are measured. For example, one researcher may designate that completing 50% of sessions marks a client as a treatment completer, where another may set this threshold at 75%. These inconsistencies can make dropout data difficult to interpret: patients considered completers in one study might be considered non-completers in another.
== Associated issues ==
=== Poor patient outcomes ===
Patient dropout is associated with numerous problems, such as loss of potential patient improvement, poorer outcomes, increased likelihood of over-utilizing resources, and disruption in group therapy settings. Intuitively, these patients lose out on the benefits they might have received had they continued treatment, and they face poorer outcomes than those who continue. Further, patients who discontinue treatment are more likely to be characterized as chronic patients, resulting in over-utilization of services, up to twice as much as "appropriate" terminators. In a group therapy setting, premature discontinuation by one member may in turn adversely affect the other members of the group.
=== Narcissistic injury ===
Narcissistic injury is a possible outcome of patient dropout: therapists and clinicians may feel a diminished sense of self and may even feel inadequate, interpreting a patient's discontinuation of treatment as a direct result of something they did. This can lower their self-esteem and confidence, and thus their effectiveness, which negatively impacts their delivery of treatment to other patients. There is no current research on how often this occurs in patient dropout cases.
=== Clinician and administrative losses ===
Less apparent are the effects non-completers have on the entire mental health care system. Clinicians experience losses in the form of time spent on patient intakes, missed appointments prior to termination, and other diagnostic work performed. Administratively, these inefficiencies contribute to long waiting lists, which in turn deny services to others, worsen community perception, and create lost income for clinics. Cyclically, long waiting lists have themselves been shown to increase dropout somewhat, further exacerbating the problem.
== Predicting at-risk patients ==
Predicting patients at risk of dropping treatment is a difficult task that is still being researched. However, there are different factors associated with patient dropout that are worth identifying. There are several meta-analysis studies that addressed these issues.
=== Patient characteristics ===
Patient characteristics are anything innate about the patients themselves. These include: age, race, gender, education, and socioeconomic status. Several studies identify minorities as more likely candidates for dropping psychotherapy treatment. Young clients are also more likely to drop out compared to older clients. Further, socioeconomic status has been linked to client dropout, where poorer patients drop out more frequently.
=== Environmental factors ===
Environmental factors relate both to the environment of the patient and to the physical environment of the clinician's office. Research has shown that refurbishing the waiting room of an urban office resulted in a 10% increase in attendance at the first session. Also included as an environmental factor is the patient's access to care. In the United States, many insurance companies do not cover mental health treatment. This denial of care can quickly lead to patient dropout.
=== Beliefs and perceptions of mental health ===
Social stigma of mental health treatment may also result in increased patient discontinuation. This is particularly true amongst ethnic minorities. In the Latino community, the male value of machismo can often increase shame of seeking mental health due to beliefs that the individual should be able to overcome problems on their own.
Perceptions of mental health may also alter patient beliefs about the effectiveness of treatment. Patients receive cues about therapist expertise through their interactions and may come to feel the therapist is inadequate, or that the two of them do not share the same treatment goals. It is also possible that an initial perception that treatment is ineffective leads a patient to seek a reason to end it. Lastly, a client may have an expectation about how many sessions they will attend; this number strongly predicts the number of sessions actually attended, which may differ from the number the therapist feels is necessary, leading to dropout.
== Possible solutions ==
=== Role induction ===
Role induction involves preparing clients for what to expect in therapy. It consists of educating patients about the nature and process of therapy, aimed to offer clients an expectation of success and to dispel therapy misconceptions. This has been found to effectively reduce discontinuation, and even to help reduce client distress.
=== Fostering therapeutic alliance ===
The therapeutic relationship is generally based on three concepts: a collaborative relationship, an affective bond between the therapist and patient, and the ability of both the client and therapist to agree on treatment goals. To strengthen this alliance, research suggests to reaffirm the main therapeutic conditions of warmth, positive regard for the client, and empathy. Communicating both respect for the patient's perspective and one's interest in working with them will help develop trust.
=== Motivational interviewing ===
Motivational interviewing (MI) or motivational enhancement is defined as "increasing a person's willingness to enter into, continue, and adhere to a specific change strategy." MI is typically broken into the acronyms FRAMES (Feedback, Responsibility, Advice, Menu of strategies, Empathy, and Self-efficacy) or OARS (Open questions, Affirmation, Reflection, and Summary). Other strategies have included correcting patient misconceptions, creating incentives for change, eliciting self-motivational statements, praising the patient's serious consideration of change, and reframing problem behaviors so that they appear less formidable.
=== Therapist feedback ===
By consistently checking in on patient goals and progress, therapists can detect patient deviation from the intended path and consider changing treatment plans or other strategies before the patient drops out. An example of therapist feedback would be a chart that displays client progress. This is a concrete picture of how the client is progressing, and it engages the client to take an active role in their treatment.
== See also ==
American Psychological Association
== References == | Wikipedia/Psychotherapy_discontinuation |
Psychodynamic psychotherapy (or psychodynamic therapy) and psychoanalytic psychotherapy (or psychoanalytic therapy) are two categories of psychological therapies. Their main purpose is revealing the unconscious content of a client's psyche in an effort to alleviate psychic tension: inner conflict within the mind created in a situation of extreme stress or emotional hardship, often in a state of distress. The terms "psychoanalytic psychotherapy" and "psychodynamic psychotherapy" are often used interchangeably, but a distinction can be made in practice: though psychodynamic psychotherapy largely relies on psychoanalytical theory, it employs substantially shorter treatment periods than traditional psychoanalytical therapies. Studies on the specific practice of psychodynamic psychotherapy suggest that it is evidence-based. In contrast, the methods used by psychoanalysis lack high-quality studies, which makes it difficult to assert their effectiveness.
Psychodynamic psychotherapy relies on the interpersonal relationship between client and therapist more than other forms of depth psychology. They must have a strong relationship built heavily on trust. In terms of approach, this form of therapy uses psychoanalysis adapted to a less intensive style of working, usually at a frequency of once or twice per week, often the same frequency as many other therapies. The techniques draw on the theories of Freud, Klein, and the object relations movement, e.g., Winnicott, Guntrip, and Bion. Some psychodynamic therapists also draw on Jung, Lacan, or Langs. It is a focus that has been used in individual psychotherapy, group psychotherapy, family therapy, and to understand and work with institutional and organizational contexts. In psychiatry, it has been used for adjustment disorders as well as post-traumatic stress disorder (PTSD), but more often for personality-related disorders.
== History ==
The principles of psychodynamics were introduced in the 1874 publication Lectures on Physiology by German physician and physiologist Ernst Wilhelm von Brücke. Von Brücke, taking a cue from thermodynamics, suggested all living organisms are energy systems, governed by the principle of energy conservation. During the same year, von Brücke was supervisor to first-year medical student Sigmund Freud at the University of Vienna. Freud later adopted this new construct of "dynamic" physiology to aid in his own conceptualization of the human psyche. Later, both the concept and application of psychodynamics were further developed by the likes of Carl Jung, Alfred Adler, Otto Rank, and Melanie Klein. Psychodynamic therapy has evolved from psychoanalytic theory, with some later modifications in the therapeutic practice experienced since the mid-20th century.
== Approaches ==
Most psychodynamic approaches are centered on the concept that some maladaptive functioning is in play and that this maladaptation is, at least in part, unconscious. The presumed maladaptation develops early in life and eventually causes daily difficulties. Psychodynamic therapies focus on revealing and resolving the unconscious conflicts driving the patient's symptoms. The therapist takes a more interpretive and much less directive role.
Major techniques used by psychodynamic therapists include:
Free association: The client is encouraged to communicate their true feelings and thoughts to the therapist, knowing the setting is a safe space, free of judgment and consequence. These thoughts and responses may be irrelevant, illogical, or embarrassing to the patient. The aim is to access unconscious information, memories, or impulses that the patient might otherwise be unable to bring to the surface; after being brought into the conscious mind, they can then be interpreted.
Dream interpretation: (also known as dream analysis) The client records their dreams and communicates or relays them to the therapist, sometimes aided by free association. Then, the content is analyzed or interpreted for hidden meanings, underlying motivations, and other portrayals.
Recognizing resistance: Resistance can take many forms, with slight variations depending on the type. The client withstands or withholds the information needed for their help and its interpretation, often as a defense. Resistance can be categorized into three types.
The first type is conscious resistance, in which the client deliberately withholds the needed information because of distrust of the system or the therapist, shame, or rejection of the interpreter.
The second, repression resistance or ego resistance, is used by the client to keep unacceptable thoughts, feelings, actions, and/or impulses in the unconscious. This can take the form of the patient blocking thoughts and communications during free association, or not remembering events.
The third, id resistance, is unlike the other two because it arises from the unconscious and is driven by id impulses. It resists change or treatment in order to repeat the trauma in different situations, a pattern known as repetition compulsion. Additionally, there may be transference onto the analyst, often the therapist, of views, feelings, and/or wishes that the patient initially directed towards other impactful individuals in their life, frequently figures from early childhood such as parents, siblings, or other important people. Addressing these projected views is intended to help the patient re-experience, address, and analyze their effects and resolve the current distress they may be causing. As in some psychoanalytic approaches, the therapeutic relationship is seen as a key means of understanding and working through the relational difficulties which the client has suffered in life.
== Core principles and characteristics ==
Although psychodynamic psychotherapy can take many forms, commonalities include:
An emphasis on the centrality of intrapsychic and unconscious conflicts and their relation to development;
Identifying defenses as developing in internal psychic structures to avoid unpleasant consequences of conflict;
A belief that psychopathology develops mainly from early childhood experiences;
A view that internal representations of experiences are organized around interpersonal relations;
A conviction that life issues and dynamics will re-emerge in the context of the client-therapist relationship as transference and counter-transference;
Use of free association as a major method for exploration of internal conflicts and problems;
Focusing on interpretations of transference, defense mechanisms, and current symptoms and the working through of these present problems;
Trust that insight is critically important for success in therapy.
== Efficacy ==
Psychodynamic psychotherapy is an evidence-based therapy. Later meta-analyses showed psychoanalysis and psychodynamic therapy to be effective, with outcomes comparable or greater than other kinds of psychotherapy or antidepressant drugs, but these arguments have also been subjected to various criticisms. For example, meta-analyses in 2012 and 2013 came to the conclusion that there is little support or evidence for the efficacy of psychoanalytic therapy, thus further research is needed.
A systematic review of Long Term Psychodynamic Psychotherapy (LTPP) in 2009 found an overall effect size of 0.33. Others have found effect sizes of 0.44–0.68.
Meta-analyses of Short-Term Psychodynamic Psychotherapy (STPP) have found effect sizes ranging from 0.34 to 0.71 compared to no treatment and were found to be slightly better than other therapies in follow-up. Other reviews have found an effect size of 0.78–0.91 for somatic disorders compared to no treatment and 0.69 for treating depression. A 2012 meta-analysis by the Harvard Review of Psychiatry of Intensive Short-Term Dynamic Psychotherapy (ISTDP) found effect sizes ranging from 0.84 for interpersonal problems to 1.51 for depression. Overall, ISTDP had an effect size of 1.18 compared to no treatment.
In 2011, a study published in the American Journal of Psychiatry made 103 comparisons between psychodynamic treatment and a non-dynamic competitor and found that 6 were superior, 5 were inferior, 28 showed no difference, and 63 were adequate. The study found that this could be used as a basis "to make psychodynamic psychotherapy an 'empirically validated' treatment." In 2017, a meta-analysis of randomized controlled trials found psychodynamic therapy to be as efficacious as other therapies, including cognitive behavioral therapy.
== Client-therapist relationship ==
Because each patient's potential psychological ailments are subjective, there is rarely a clear-cut treatment approach. Most often, therapists vary general approaches in order to best fit a patient's specific needs. A therapist who does not understand those ailments well is unlikely to settle on a treatment structure that will help the patient; the patient-therapist relationship must therefore be extremely strong.
Therapists encourage their patients to be as open and honest as possible, and patients must trust their therapist if this is to happen. Because the effectiveness of treatment relies so heavily on the patient giving information to their therapist, the patient-therapist relationship is more vital to psychodynamic therapy than to almost any other type of medical practice.
== See also ==
Anna Freud
Malan triangles
Models of abnormality
Psychodynamic Diagnostic Manual
== References ==
Integrative psychotherapy is the integration of elements from different schools of psychotherapy in the treatment of a client. Integrative psychotherapy may also refer to the psychotherapeutic process of integrating the personality: uniting the "affective, cognitive, behavioral, and physiological systems within a person".
== Background ==
Initially, Sigmund Freud developed a talking cure called psychoanalysis; he then wrote about his therapy and popularized psychoanalysis. After Freud, many different schools splintered off. Some of the more common therapies include: psychodynamic psychotherapy, transactional analysis, cognitive behavioral therapy, gestalt therapy, body psychotherapy, family systems therapy, person-centered psychotherapy, and existential therapy. Hundreds of different theories of psychotherapy are practiced.
A new therapy is born in several stages. After being trained in an existing school of psychotherapy, the therapist begins to practice. Then, after follow-up training in other schools, the therapist may combine the different theories as the basis of a new practice. Finally, some practitioners write about their new approach and label it with a new name.
A pragmatic or a theoretical approach can be taken when fusing schools of psychotherapy. Pragmatic practitioners blend a few strands of theory from a few schools as well as various techniques; such practitioners are sometimes called eclectic psychotherapists and are primarily concerned with what works. Alternatively, other therapists consider themselves to be more theoretically grounded as they blend their theories; they are called integrative psychotherapists and are not only concerned with what works, but also why it works.
For example, an eclectic therapist might experience a change in their client after administering a particular technique and be satisfied with a positive result. In contrast, an integrative therapist is curious about the "why and how" of the change as well. A theoretical emphasis is important: for example, the client may only have been trying to please the therapist and was adapting to the therapist rather than becoming more fully empowered in themselves.
== Different routes to integration ==
The most recent edition of the Handbook of Psychotherapy Integration (Norcross & Goldfried, 2005) recognized four general routes to integration: common factors, technical eclecticism, theoretical integration, and assimilative integration.
=== Common factors ===
The first route to integration is called common factors and "seeks to determine the core ingredients that different therapies share in common". The advantage of a common factors approach is the emphasis on therapeutic actions that have been demonstrated to be effective. The disadvantage is that common factors may overlook specific techniques that have been developed within particular theories. Common factors have been described by Jerome Frank, Bruce Wampold, and Miller, Duncan, and Hubble (2005). Common factors theory asserts that it is precisely the factors common to most psychotherapies that make any psychotherapy successful.
Some psychologists have converged on the conclusion that a wide variety of different psychotherapies can be integrated via their common ability to trigger the neurobiological mechanism of memory reconsolidation.
=== Technical eclecticism ===
The second route to integration is technical eclecticism, which is designed "to improve our ability to select the best treatment for the person and the problem…guided primarily by data on what has worked best for others in the past". The advantage of technical eclecticism is that it encourages the use of diverse strategies without being hindered by theoretical differences. A disadvantage is that there may not be a clear conceptual framework describing how techniques drawn from divergent theories might fit together. The best-known model of technical eclectic psychotherapy is Arnold Lazarus' (2005) multimodal therapy. Another model of technical eclecticism is Larry E. Beutler and colleagues' systematic treatment selection.
=== Theoretical integration ===
The third route to integration commonly recognized in the literature is theoretical integration in which "two or more therapies are integrated in the hope that the result will be better than the constituent therapies alone". Some models of theoretical integration focus on combining and synthesizing a small number of theories at a deep level, whereas others describe the relationship between several systems of psychotherapy. One prominent example of theoretical synthesis is Paul L. Wachtel's model of cyclical psychodynamics that integrates psychodynamic, behavioral, and family systems theories. Another example of synthesis is Anthony Ryle's model of cognitive analytic therapy, integrating ideas from psychoanalytic object relations theory and cognitive psychotherapy. Another model of theoretical integration is specifically called integral psychotherapy (Forman, 2010; Ingersoll & Zeitler, 2010). The most notable model describing the relationship between several different theories is the transtheoretical model.
=== Assimilative integration ===
Assimilative integration is the fourth route and acknowledges that most psychotherapists select a theoretical orientation that serves as their foundation but, with experience, incorporate ideas and strategies from other sources into their practice. "This mode of integration favors a firm grounding in any one system of psychotherapy, but with a willingness to incorporate or assimilate, in a considered fashion, perspectives or practices from other schools". Some counselors may prefer the security of one foundational theory as they begin the process of integrative exploration. Formal models of assimilative integration have been described based on a psychodynamic foundation, and based on cognitive behavioral therapy.
Govrin (2015) pointed out a fifth form of integration, which he called "integration by conversion", whereby theorists import into their own system of psychotherapy a foreign and quite alien concept, but give the concept a new meaning that allows them to claim that the newly imported concept was really an integral part of their original system of psychotherapy, even if the imported concept significantly changes the original system. Govrin gave as two examples Heinz Kohut's novel emphasis on empathy in psychoanalysis in the 1970s and the novel emphasis on mindfulness and acceptance in "third-wave" cognitive behavioral therapy in the 1990s to 2000s.
=== Other models that combine routes ===
In addition to well-established approaches that fit into the five routes mentioned above, there are newer models that combine aspects of the traditional routes.
Clara E. Hill's (2014) three-stage model of helping skills encourages counselors to emphasize skills from different theories during different stages of helping. Hill's model might be considered a combination of theoretical integration and technical eclecticism. The first stage is the exploration stage. This is based on client-centered therapy. The second stage is entitled insight. Interventions used in this stage are based on psychoanalytic therapy. The last stage, the action stage, is based on behavioral therapy.
Good and Beitman (2006) described an integrative approach highlighting both core components of effective therapy and specific techniques designed to target clients' particular areas of concern. This approach can be described as an integration of common factors and technical eclecticism.
Multitheoretical psychotherapy is an integrative model that combines elements of technical eclecticism and theoretical integration. Therapists are encouraged to make intentional choices about combining theories and intervention strategies.
An approach called integral psychotherapy is grounded in the work of theoretical psychologist and philosopher Ken Wilber (2000), who integrates insights from contemplative and meditative traditions. Integral theory is a meta-theory that recognizes that reality can be organized from four major perspectives: subjective, intersubjective, objective, and interobjective. Various psychotherapies typically ground themselves in one of these four foundational perspectives, often minimizing the others. Integral psychotherapy includes all four. For example, psychotherapeutic integration using this model would include subjective approaches (cognitive, existential), intersubjective approaches (interpersonal, object relations, multicultural), objective approaches (behavioral, pharmacological), and interobjective approaches (systems science). Because these four basic perspectives simultaneously co-occur, each can be seen as essential to a comprehensive view of the life of the client. Integral theory also includes a stage model suggesting that various psychotherapies seek to address issues arising from different stages of psychological development.
The generic term, integrative psychotherapy, can be used to describe any multi-modal approach which combines therapies. For example, an effective form of treatment for some clients is psychodynamic psychotherapy combined with hypnotherapy. Kraft & Kraft (2007) gave a detailed account of this treatment with a 54-year-old female client with refractory irritable bowel syndrome (IBS) in a setting of a phobic anxiety state. The client made a full recovery, and this was maintained at the follow-up a year later.
== Comparison with eclecticism ==
In Integrative and Eclectic Counselling and Psychotherapy, the authors make clear the distinction between integrative and eclectic psychotherapy approaches: "Integration suggests that the elements are part of one combined approach to theory and practice, as opposed to eclecticism which draws ad hoc from several approaches in the approach to a particular case." Eclectic practitioners of psychotherapy are not bound by the theories, dogma, conventions, or methodology of any one particular school. Instead, they use whatever their belief, feeling, or experience tells them will work best, either in general or for the often immediate needs of individual clients, while working within their own preferences and capabilities as practitioners.
== See also ==
Integrative body psychotherapy
Journal of Psychotherapy Integration
== Notes ==
== References ==
Beutler, L. E., Consoli, A. J. & Lane, G. (2005). Systematic treatment selection and prescriptive psychotherapy: an integrative eclectic approach. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 121–143). New York: Oxford.
Brooks-Harris, J. E. (2008). Integrative Multitheoretical Psychotherapy. Boston: Houghton-Mifflin.
Castonguay, L. G., Newman, M. G., Borkovec, T. D., Holtforth, M. G. & Maramba, G. G. (2005). Cognitive-behavioral assimilative integration. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 241–260). New York: Oxford.
Ecker, B. (2024). A proposal for the unification of psychotherapeutic action understood as memory modification processes. Journal of Psychotherapy Integration, 34(3), 291–314.
Ecker, B., Ticic, R., Hulley, L. (2012). Unlocking the Emotional Brain: Eliminating Symptoms at Their Roots Using Memory Reconsolidation. New York: Routledge.
Forman, M. D. (2010). A Guide to Integral Psychotherapy: Complexity, Integration, and Spirituality in Practice. Albany, NY: SUNY Press.
Frank, J. D. & Frank, J. B. (1991). Persuasion and Healing: A Comparative Study of Psychotherapy (3rd ed.). Baltimore, MD: Johns Hopkins University.
Frank, K. A. (1999). Psychoanalytic Participation: Action, Interaction, and Integration. Mahwah, NJ: Analytic Press.
Good, G. E. & Beitman, B. D. (2006). Counseling and Psychotherapy Essentials: Integrating Theories, Skills, and Practices. New York: W. W. Norton.
Govrin, A. (2015). Blurring the threat of 'otherness': integration by conversion in psychoanalysis and CBT. Journal of Psychotherapy Integration, 26(1): 78–90.
Hill, C. E. (2014). Helping Skills: Facilitating Exploration, Insight, and Action (4th ed.). Washington, DC: American Psychological Association.
Ingersoll, E. & Zeitler, D. (2010). Integral Psychotherapy: Inside Out/Outside In. Albany, NY: SUNY Press.
Kraft T. & Kraft D. (2007). Irritable bowel syndrome: symptomatic treatment approaches versus integrative psychotherapy. Contemporary Hypnosis, 24(4): 161–177.
Lane, R. D., Ryan, L., Nadel, L., Greenberg, L. S. (2015). Memory reconsolidation, emotional arousal and the process of change in psychotherapy: new insights from brain science. Behavioral and Brain Sciences, 38: e1.
Lazarus, A. A. (2005). Multimodal therapy. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 105–120). New York: Oxford.
Messer, S. B. (1992). A critical examination of belief structures in integrative and eclectic psychotherapy. In J. C. Norcross, & M. R. Goldfried, (Eds.), Handbook of Psychotherapy Integration (pp. 130–165). New York: Basic Books.
Miller, S. D., Duncan, B. L., & Hubble, M. A. (2005). Outcome-informed clinical work. In J. C. Norcross, & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 84–102). New York: Oxford.
Norcross, J. C. (2005). A primer on psychotherapy integration. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 3–23). New York: Oxford.
Norcross, J. C. & Goldfried, M. R. (Eds.) (2005). Handbook of Psychotherapy Integration (2nd ed.). New York: Oxford.
Prochaska, J. O. & DiClemente, C. C. (2005). The transtheoretical approach. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 147–171). New York: Oxford.
Ryle, A. (2005). Cognitive analytic therapy. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 196–217). New York: Oxford.
Stricker, G. & Gold, J. (2005). Assimilative psychodynamic psychotherapy. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 221–240). New York: Oxford.
Wachtel, P. L., Kruk, J. C., & McKinney, M. K. (2005). Cyclical psychodynamics and integrative relational psychotherapy. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 172–195). New York: Oxford.
Wampold, B. E. & Imel Z. E. (2015). The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work (2nd ed.). New York: Routledge.
Welling, H. (June 2012). Transformative emotional sequence: towards a common principle of change. Journal of Psychotherapy Integration, 22(2): 109–136.
Wilber, K. (2000). Integral Psychology: Consciousness, Spirit, Psychology, Therapy. Boston: Shambhala.
Woolfe, R. & Palmer, S. (2000). Integrative and Eclectic Counselling and Psychotherapy. London; Thousand Oaks, CA: Sage Publications.
Žvelc, G. & Žvelc, M. (2021). Integrative psychotherapy: A mindfulness- and compassion-oriented approach. Routledge.
== Further reading ==
Fromme, D. K. (2011). Systems of Psychotherapy: Dialectical Tensions and Integration. New York: Springer.
Magnavita, J. J. & Anchin, J. C. (2014). Unifying Psychotherapy: Principles, Methods, and Evidence from Clinical Science. New York: Springer.
Prochaska, J. O. & Norcross, J. C. (2018). Systems of Psychotherapy: A Transtheoretical Analysis (9th ed.). New York: Oxford.
Scaturo, D. J. (2005). Clinical Dilemmas in Psychotherapy: a Transtheoretical Approach to Psychotherapy Integration. Washington, DC: American Psychological Association.
Schneider, K. J. (Ed.) (2008). Existential-Integrative Psychotherapy: Guideposts to the Core of Practice. New York: Routledge.
Schneider, K. J. & Krug, O.T. (2010). Existential-Humanistic Therapy. Washington, DC: American Psychological Association.
Stricker, G. & Gold, J. R. (2006). A Casebook of Psychotherapy Integration. Washington, DC: American Psychological Association.
Urban, W. J. (1978) Integrative Therapy: Foundations of Holistic and Self Healing. Los Angeles: Guild of Tutors Press.
== External links ==
The Problem of Psychotherapy Integration by Tullio Carere
The Rise of Integrative Psychotherapy by John Söderlund
Society for the Exploration of Psychotherapy Integration
International Integrative Psychotherapy Association
Institute for Integrative Psychotherapy and Counselling, Ljubljana
International Journal of Integrative Psychotherapy
Exposure therapy is a technique in behavior therapy to treat anxiety disorders. Exposure therapy involves exposing the patient to the anxiety source or its context (without the intention to cause any danger). Doing so is thought to help them overcome their anxiety or distress.: 141–142 Numerous studies have demonstrated its effectiveness in the treatment of disorders such as generalized anxiety disorder (GAD), social anxiety disorder (SAD), obsessive-compulsive disorder (OCD), post-traumatic stress disorder (PTSD), and specific phobias.
As of 2024, attention has focused particularly on exposure and response prevention (ERP or ExRP) therapy, in which exposure is continued and the resolution to refrain from the escape response is maintained at all times (not just during specific therapy sessions).
== Techniques ==
Exposure therapy is based on the principle of respondent conditioning often termed Pavlovian extinction. The exposure therapist identifies the cognitions, emotions and physiological arousal that accompany a fear-inducing stimulus and then tries to break the pattern of escape that maintains the fear. This is done by exposing the patient to fear-inducing stimuli.
This may be done:
using progressively stronger stimuli. Fear is minimized at each of a series of steadily escalating steps or challenges (a hierarchy), which can be explicit ("static") or implicit ("dynamic" — see Method of Factors) until the fear is finally gone. The patient is able to terminate the procedure at any time.
using flooding therapy, which exposes the patient to feared stimuli starting at the most feared item in a fear hierarchy.
There are several types of exposure procedures:
In vivo or "real life". This type exposes the patient to actual fear-inducing situations. For example, if someone fears public speaking, the person may be asked to give a speech to a small group of people.
Virtual reality, in which technology is used to simulate in vivo exposure.
Imaginal, where patients are asked to imagine a situation that they are afraid of. This procedure is helpful for people who need to confront feared thoughts and memories.
Written exposure therapy, where patients write down their account of the traumatic event.
Interoceptive, in which patients confront feared bodily symptoms such as increased heart rate and shortness of breath. This may be used for more specific disorders such as panic or post-traumatic stress disorder.
The various types of exposure may be used together or separately. Discussion continues on how best to carry out exposure therapy, including on whether safety behaviours should be discontinued.
=== Exposure and response prevention (ERP) ===
In the exposure and response prevention (ERP or EX/RP) form of exposure therapy, the resolution to refrain from the escape response is to be maintained at all times (not just during specific practice sessions). Thus, not only does the subject experience habituation to the feared stimulus, but they also practice a fear-incompatible behavioral response to the stimulus. The distinctive feature is that individuals confront their fears and discontinue their escape response.
While this type of therapy typically causes some short-term anxiety, this facilitates long-term reduction in obsessive and compulsive symptoms.: 103
The American Psychiatric Association recommends ERP for the treatment of OCD, citing that ERP has the richest empirical support. As of 2019, ERP is considered a first-line psychotherapy for OCD. A 2024 systematic review found that ERP is highly effective in treating pediatric OCD using both in-person and telehealth-based modalities.
Effectiveness is heterogeneous: higher efficacy correlates with lower avoidance behaviours and greater adherence to homework. Taking SSRI medication while undergoing ERP does not appear to correlate with better outcomes. Discussion continues on how best to conduct ERP.
Generally, ERP incorporates a relapse prevention plan toward the end of the course of therapy. This can include being ready to re-apply ERP if anxiety recurs.
== Mechanism ==
Mechanism research has been limited in the field.
Habituation was seen as a mechanism in the past, but is more recently regarded as a model of the therapeutic process.
=== Inhibitory learning ===
As of 2022, the inhibitory learning model is the most common conjecture about the mechanism underlying the efficacy of exposure therapy. This model posits that in exposure therapy the unpleasant reactions such as anxiety (previously learned during fear conditioning) remain intact rather than being eliminated, but are now inhibited, balanced, or overcome by new learning about the situation (for instance, that the feared result will not necessarily happen). More research is needed.
=== Inhibitory retrieval ===
This model posits that additional associative learning processes, such as counterconditioning and novelty-enhanced extinction, may contribute to exposure therapy.
== Under-use and barriers to use ==
Exposure therapy is seen as under-used relative to its efficacy. Barriers to the use of exposure therapy by psychologists include the perception that it is antithetical to mainline psychology, lack of confidence in delivering it, and negative beliefs about exposure therapy.
== Uses ==
=== Phobia ===
Exposure therapy is the most successful known treatment for phobias. Several published meta-analyses included studies of one-to-three-hour single-session treatments of phobias, using imaginal exposure. At a post-treatment follow-up four years later, 90% of people retained a considerable reduction in fear, avoidance, and overall level of impairment, while 65% no longer experienced any symptoms of a specific phobia.
Agoraphobia and social anxiety disorder are examples of phobias that have been successfully treated by exposure therapy.
=== Post-traumatic stress disorder ===
Exposure therapy in PTSD involves exposing the patient to PTSD-anxiety triggering stimuli, with the aim of weakening the neural connections between triggers and trauma memories (a.k.a. desensitisation). Exposure may involve:
a real-life trigger ("in vivo")
an imagined trigger ("imaginal")
Virtual reality exposure
a triggered feeling generated in a physical way ("interoceptive")
Forms include:
Flooding – exposing the patient directly to a triggering stimulus, while simultaneously working to keep them from feeling afraid.
Systematic desensitisation (a.k.a. "graduated exposure") – gradually exposing the patient to increasingly vivid experiences that are related to the trauma, but do not trigger post-traumatic stress.
Narrative exposure therapy – creates a written account of the traumatic experiences of a patient or group of patients, in a way that serves to recapture their self-respect and acknowledges their value. Under this name it is used mainly with refugees, in groups. It also forms an important part of cognitive processing therapy and is conditionally recommended for treatment of PTSD by the American Psychological Association.
Prolonged exposure therapy (PE) – a form of behavior therapy and cognitive behavioral therapy designed to treat post-traumatic stress disorder, characterized by two main treatment procedures – imaginal and in vivo exposures. Imaginal exposure is a repeated 'on-purpose' retelling of the trauma memory. In vivo exposure is gradually confronting situations, places, and things that are reminders of the trauma or feel dangerous (despite being objectively safe). Additional procedures include processing of the trauma memory and breathing retraining. The American Psychological Association strongly recommends PE as a first-line psychotherapy treatment for PTSD.
Researchers began experimenting with virtual reality exposure (VRE) therapy in PTSD treatment in 1997 with the advent of the "Virtual Vietnam" scenario. Virtual Vietnam was used as a graduated exposure therapy treatment for Vietnam veterans meeting the qualification criteria for PTSD. A 50-year-old Caucasian male was the first veteran studied. The preliminary results showed improvement post-treatment across all measures of PTSD and maintenance of the gains at the six-month follow-up. A subsequent open clinical trial of Virtual Vietnam with 16 veterans showed a reduction in PTSD symptoms.
This method was also tested on several active duty Army soldiers, using an immersive computer simulation of military settings over six sessions. Self-reported PTSD symptoms of these soldiers were greatly diminished following the treatment. Exposure therapy has shown promise in the treatment of co-morbid PTSD and substance abuse.
In the area of PTSD, historic barriers to the use of exposure therapy include that clinicians may not understand it, are not confident in their own ability to use it, or more commonly, see significant contraindications for their client.
=== Obsessive compulsive disorder ===
Exposure and response prevention (also known as exposure and ritual prevention; ERP or EX/RP) is a variant of exposure therapy that is recommended by the American Academy of Child and Adolescent Psychiatry (AACAP), the American Psychiatric Association (APA), and the Mayo Clinic as a first-line treatment of OCD, citing that it has the richest empirical support for both youth and adolescent outcomes.
ERP is predicated on the idea that a therapeutic effect is achieved as subjects confront their fears, but refrain from engaging in the escape response or ritual that delays or eliminates distress. In the case of individuals with OCD or an anxiety disorder, there is a thought or situation that causes distress. Individuals usually combat this distress through specific behaviors that include avoidance or rituals. However, ERP involves purposefully evoking fear, anxiety, and or distress in the individual by exposing him/her to the feared stimulus. The response prevention then involves having the individual refrain from the ritualistic or otherwise compulsive behavior that functions to decrease distress. The patient is then taught to tolerate distress until it fades away on its own, thereby learning that rituals are not always necessary to decrease distress or anxiety. Over repeated practice of ERP, patients with OCD expect to find that they can have obsessive thoughts and images but not have the need to engage in compulsive rituals to decrease distress.
The AACAP's practice parameters for OCD recommend cognitive behavioral therapy, and more specifically ERP, as first-line treatment for youth with mild to moderate OCD, and combined psychotherapy and pharmacotherapy for severe OCD. Cochrane Review examinations of different randomized controlled trials echo repeated findings of the superiority of ERP over waitlist control or pill placebos and of combined ERP and pharmacotherapy, but similar effect sizes between ERP and pharmacotherapy alone.
=== Generalized anxiety disorder ===
There is empirical evidence that exposure therapy can be an effective treatment for people with generalized anxiety disorder, specifically in vivo exposure therapy (exposure through a real-life situation), which has greater effectiveness than imaginal exposure for generalized anxiety disorder. The aim of in vivo exposure treatment is to promote emotional regulation using systematic and controlled therapeutic exposure to traumatic stimuli. Exposure is used to promote fear tolerance.
Exposure therapy is also a preferred method for children who struggle with anxiety.
=== Other possible uses of exposure therapy ===
Exposure therapy has been posited as potentially helpful for other conditions, including substance abuse disorders, overeating, binge eating, obesity, and depression.
Exposure therapy has also been found effective in treating separation anxiety and panic disorder. In cases of separation anxiety, gradual exposure helps individuals become more comfortable with being apart from attachment figures. For panic disorder, exposure techniques can reduce sensitivity to physical sensations that might otherwise trigger panic attacks.
== History ==
The 9th-century Persian polymath Abu Zayd al-Balkhi wrote of 'tranquilizing fear' by 'forcing oneself to repeatedly expose one's hearing and sight to noxious things', and of being 'moved again and again near the thing it is scared of until it becomes used to it and loses its fear.'
The use of exposure as a mode of therapy began in the 1950s, at a time when psychodynamic views dominated Western clinical practice and behavioral therapy was first emerging. South African psychologists and psychiatrists first used exposure as a way to reduce pathological fears, such as phobias and anxiety-related problems, and they brought their methods to England in the Maudsley Hospital training program.
Joseph Wolpe (1915–1997) was one of the first psychiatrists to spark interest in treating psychiatric problems as behavioral issues. He sought consultation with other behavioral psychologists, among them James G. Taylor (1897–1973), who worked in the psychology department of the University of Cape Town in South Africa. Although most of his work went unpublished, Taylor was the first psychologist known to use exposure therapy treatment for anxiety, including methods of situational exposure with response prevention—a common exposure therapy technique still being used.
Since the 1950s, several sorts of exposure therapy have been developed, including systematic desensitization, flooding, implosive therapy, prolonged exposure therapy, in vivo exposure therapy, and imaginal exposure therapy.
Exposure and response prevention (ERP) traces its roots back to the work of psychologist Vic Meyer in the 1960s. Meyer devised this treatment from his analysis of fear extinguishment in animals via flooding and applied it to human cases in the psychiatric setting that, at the time, were considered intractable. The success of ERP clinically and scientifically has been summarized as "spectacular" by prominent OCD researcher Stanley Rachman decades following Meyer's creation of the method.
== Possibly related psychological techniques ==
=== Mindfulness ===
A 2015 review pointed out parallels between exposure therapy and mindfulness, stating that mindful meditation "resembles an exposure situation because [mindfulness] practitioners 'turn towards their emotional experience', bring acceptance to bodily and affective responses, and refrain from engaging in internal reactivity towards it." Imaging studies have shown that the ventromedial prefrontal cortex, hippocampus, and the amygdala are all affected by exposure therapy; imaging studies have shown similar activity in these regions with mindfulness training.
=== EMDR ===
Eye movement desensitization and reprocessing (EMDR) includes an element of exposure therapy (desensitization), though whether this is an effective method is controversial.
=== Other ===
Desensitization and extinction also involve exposure to a cause of disturbance.
=== Expectancy violation ===
Research has been undertaken on the therapeutic impact of focusing on the mismatch between threat expectancy before exposure and what actually occurs during exposure, but this has not produced clearly positive results.
== See also ==
Catharsis
EMDR
Desensitization (psychology)
Extinction (psychology)
== Explanatory footnotes ==
== References ==
Cryotherapy, sometimes known as cold therapy, is the local or general use of low temperatures in medical therapy. Cryotherapy can be used in many ways, including whole body exposure for therapeutic health benefits or may be used locally to treat a variety of tissue lesions.
Cryotherapy is often used in an effort to prevent or relieve muscle pain, sprains, and swelling after soft tissue damage or surgery. When a musculoskeletal injury occurs, the body sends signals to inflammatory cells, macrophages, which release insulin-like growth factor 1 (IGF-1), a hormone that initiates the termination of damaged tissue. In some cases, this inflammatory response can be aggravated and cause increased swelling and edema, which can actually prolong the recovery process.
For decades, it has been commonly used to accelerate recovery in athletes after exercise. Cryotherapy decreases the temperature of tissue surfaces to minimize hypoxic cell death, edema accumulation, and muscle spasms. Minimising each or all of these ultimately alleviates discomfort and inflammation. It can involve a range of treatments, from the application of ice packs or immersion in ice baths (generally known as cold therapy), to the use of cold chambers.
== Cryotherapy chamber ==
Electric cryotherapy chambers are fully enclosed, walk-in rooms designed to expose the human body to ultra low temperatures for 2–3 minutes resulting in various therapeutic and health benefits.
The purpose of cryotherapy is to trigger the body's natural response to extreme cold. Upon entering the cryo chamber, the extreme cold elicits a fight-or-flight response, causing increased levels of dopamine and norepinephrine and resulting in a feeling of extreme focus and a sense of euphoria. Thermoregulation occurs to protect vital organs from the cold temperatures, promoting increased blood flow and delivery of nutrients throughout the body via vasoconstriction and vasodilation of the blood vessels.
Electric cryo chambers use refrigeration as the cooling method, providing a safe, breathable environment that does not expose the client to potentially dangerous gases such as nitrogen.
Cryotherapy is a specific type of low-temperature treatment used to reduce inflammation and its associated pain.
Cryotherapy was developed in the 1970s by Japanese rheumatologist Toshima Yamaguchi and introduced to Europe, US and Australia in the 1980s and 1990s.
== Mechanism of action ==
When the body is subjected to extreme cooling, the blood vessels narrow, reducing blood flow to areas of swelling; this reduced blood flow in the injured area lessens muscle spasms and soreness. Once outside the cryogenic chamber, the vessels expand, and an increased presence of the anti-inflammatory protein IL-10 is established in the blood; this activation of the circulatory system encourages healing and the regeneration of muscle fibers. The treatment typically involves exposing the individual to freezing, dry air (around −40 °C) for 2 to 4 minutes in one of these chambers.
== Main uses ==
Proponents say that cryotherapy may reduce pain and inflammation, help with mental disorders, support exercise recovery, improve joint function, and reduce the symptoms of eczema. Cryotherapy chambers belong to the group of equipment associated with sports rehabilitation and wellness.
== Cryosurgery ==
Cryosurgery is the application of extreme cold to destroy abnormal or diseased tissue. The application of ultra-cold liquid causes damage to the treated tissue due to intracellular ice formation. The degree of damage depends upon the minimum temperature achieved and the rate of cooling. Cryosurgery is used to treat a number of diseases and disorders, most especially skin conditions like warts, moles, skin tags, and solar keratoses. Liquid nitrogen is usually used to freeze the tissues at the cellular level. The procedure is used often as it is relatively easy and quick, can be done in the doctor's office, and is deemed quite low risk. If a cancerous lesion is suspected, then excision rather than cryosurgery may be deemed more appropriate. Contraindications to the use of cryosurgery include, but are not limited to: use over a neoplasm; conditions that are worsened by exposure to cold (e.g., Raynaud's syndrome, urticaria); and poor circulation or absent sensation in the area to be treated. Precautions apply to people with collagen vascular disease, dark-skinned individuals (due to a high risk of hypopigmentation), and those with impaired sensation in the area being treated.
== Ice pack therapy ==
Ice pack therapy is the treatment of an injured area of the body with cold temperatures. Though the therapy is extensively used and it is agreed that it alleviates symptoms, testing has produced conflicting results about its efficacy and the possibility of undesirable effects.
An ice pack is placed over an injured area and is intended to absorb heat from a closed traumatic or edematous injury by using conduction to transfer thermal energy. The physiologic effects of cold application include immediate vasoconstriction with reflexive vasodilation, decreased local metabolism and enzymatic activity, and decreased oxygen demand. Cold decreases muscle spindle fiber activity and slows nerve conduction velocity; therefore, it is often used to decrease spasticity and muscle guarding. It is commonly used to alleviate the pain of minor injuries, as well as to decrease muscle soreness. The use of ice packs decreases blood flow most rapidly at the beginning of the cooling period; this occurs as a result of vasoconstriction, the initial reflex sympathetic response. Although cryotherapy has been shown to aid in muscle recovery, some studies have highlighted that the degree of muscle cooling achieved in humans is not significant enough to produce a considerable effect on muscle recovery. Based on previous research comparing human and animal models, the insufficient degree of cooling is due to larger limb size, more adipose tissue, and a higher muscle diameter in humans.
Ice is not commonly used prior to rehabilitation or performance because of its known adverse effects on performance, such as decreased myotatic reflex and force production, as well as a decrease in balance, immediately following 20 minutes of ice pack therapy. However, if ice pack therapy is applied for less than 10 minutes, performance can occur without detrimental effects; if the ice pack is removed at this time, athletes can be sent back to training or competition directly with no decrease in performance. Ice has also been shown to possibly slow and impair muscle protein synthesis and repair in recreational athletes. This is especially true for cold water immersion, but equivalent controlled studies have not been done to see whether the same effects hold true for ice packs. Regardless, ice has been shown in studies to inhibit the uptake of dietary protein after muscle conditioning exercise.
Although there are many positive effects of cryotherapy in athletes' short-term recovery, in recent years, there has been much controversy regarding whether cryotherapy is actually beneficial or may be causing the opposite effect. While inflammation that occurs post-injury or from a damaging exercise may be detrimental to secondary tissue, it is beneficial for the structural and functional repair of the damaged tissue. Therefore, some researchers are now recommending that ice not be used so as not to delay the natural healing process following an injury. The original RICE (rest, ice, compression, elevation) method was rescinded because the inflammatory response is necessary for the healing process, and this practice may delay healing instead of facilitating it. Animal studies also show that a disrupted inflammatory stage of healing may lead to impaired tissue repair and redundant collagen synthesis.
One study concluded that cryotherapy has a positive impact on the short-term recovery of athletes. Cryotherapy helped manage muscle soreness and facilitated recovery within the first 24 hours following a sport-related activity. Athletes who used cryotherapy within the first 24 hours to alleviate pain recovered at a faster rate than athletes who did not use cryotherapy after their sport-related activity.
== Cryotherapy following total knee replacement ==
Cryotherapy may be associated with reduced inflammation and pain. However, the effectiveness of cryotherapy after total knee arthroplasty (TKA) is still unclear. A systematic review found that six out of eight randomized controlled trials indicate no significant benefit of using cryotherapy. The result may depend on several factors, such as application time per session and the duration and frequency of cryotherapy application.
Post-surgical management following total knee replacement surgery may include cryotherapy, with the goal of helping with pain management and blood loss following surgery. Cryotherapy is applied using ice, cold water, or gel packs, sometimes in specialized devices that surround the skin and surgical site (while keeping the surgical site clean). Evidence from clinical trials regarding the effectiveness of cryotherapy is weak, and because of this the use of cryotherapy may not be justified. Weak evidence indicates that cryotherapy used postoperatively may be associated with a small decrease in blood loss and pain following the surgery. No clinically significant improvements in range of motion have been reported. Few side effects or adverse effects have been reported with this intervention. Some studies suggest that cryotherapy may offer minor reductions in swelling and pain after total knee arthroplasty, but systematic reviews indicate that its overall effectiveness remains inconclusive. Application methods vary, with durations ranging from brief ice pack sessions to continuous cooling for up to 48 hours using automated devices. Further study is needed to assess any potential harms or adverse effects associated with cryotherapy after total knee arthroplasty.
== Traditional vs continuous cryotherapy after total knee arthroplasty ==
Cryotherapy, the withdrawal of heat from an individual's body via the application of cold modalities to reduce tissue temperature, has long been used as a treatment intervention for the overall management of musculoskeletal injuries, especially for relieving pain and improving functional outcomes after total knee arthroplasty. Over the years, new cryotherapy devices that aim to maintain a fixed temperature for a prolonged time have become more common, raising questions about how the efficacy and therapeutic outcomes of continuous cryotherapy compare with those of traditional cryotherapy.
The most recent systematic review and meta-analysis aimed to compare continuous and traditional applications of cryotherapy in patients who have undergone total knee arthroplasty, specifically in pain intensity, analgesics consumption, swelling, blood loss, postoperative range of motion (PROM), and length of hospital stay. According to the study's findings, there were no statistically significant differences in pain intensity, analgesic consumption, swelling, blood loss, PROM, or length of hospital stay between the continuous and traditional cryotherapy groups. At the same time, the study acknowledges its limitations, including lack of blinding, substantial heterogeneity, and modest sample sizes in eligible trials.
In addition to such findings, the study compared the financial implications of both continuous cryotherapy and traditional cryotherapy. They found that continuous cryotherapy may be subject to additional costs not covered by insurance. In contrast, the cost of traditional cryotherapy is nearly negligible.
With that in mind, continuous cryotherapy produced clinical effects similar to traditional cryotherapy, the only difference being the additional costs, which insurance companies do not cover for continuous cryotherapy. The researchers therefore state that the current evidence is not substantial enough to support the theoretical cost-effectiveness of continuous cryotherapy after total knee arthroplasty.
== Cold spray anesthetics ==
In addition to their use in cryosurgery, several types of cold aerosol sprays are used for short-term pain relief. Unlike other cold modalities, such a spray does not produce similar physiological effects because it decreases skin temperature, not muscle temperature; it reflexively inhibits the underlying muscle by using evaporation to cool the area. Ordinary spray cans containing tetrafluoroethane, dimethyl ether, or similar substances are used to numb the skin prior to, or possibly in place of, local anesthetic injections, and prior to other needles, small incisions, sutures, and so on. Other products containing chloroethane are used to ease sports injuries, similar to ice pack therapy. Cold aerosol spray can also be used to relieve trigger points and improve range of motion: after applying the cold spray, one can stretch the muscle and will then have improved mobility and an immediate decrease in pain. However, this is only a short-term effect, as the pain relief and improved range of motion can wear off within a minute.
== Whole body cryotherapy ==
An increasing amount of research has been done on the effects of whole-body cryotherapy on exercise, beauty, and health. Research is often inconsistent because of the use of different types of cryo-chambers and different treatment periods. However, it is becoming increasingly clear that whole-body cryotherapy has a positive effect on muscle soreness and decreases recovery time after exercise. Some older papers show inconsistencies in the effects.
Cryotherapy is also increasingly used as a non-drug treatment for rheumatoid arthritis, stress, anxiety, chronic pain, multiple sclerosis, and fibromyalgia. Studies of these and other diseases (Alzheimer's, migraines) are ongoing as more evidence becomes available on the positive effects of whole-body cryotherapy. The FDA points out that the effects of whole-body cryotherapy lack evidence and should be researched further.
Cryotherapy treatment involves exposing individuals to extremely cold, dry air (below −100 °C) for two to four minutes. However, a three-to-four-minute exposure to whole-body cryotherapy differs from a one-to-two-minute exposure: shorter exposures are reported to increase therapeutic benefit, as longer durations have negative effects on thermal sensation, tissue oxygenation, and blood volume. The number of sessions is also an important part of the healing process: a single session will not exhibit significant effects, a minimum of twenty sessions is required, and thirty sessions are recommended for optimal effects.
To achieve the subzero temperatures required for whole body cryotherapy, two methods are typically used: liquid nitrogen and refrigerated cold air. During these exposures, individuals wear minimal clothing, which usually consists of shorts for males, and shorts and a crop top for females. Gloves, a woollen headband covering the ears, and a nose and mouth mask, in addition to dry shoes and socks, are commonly worn to reduce the risk of cold-related injury. The first whole body cryotherapy chamber was built in Japan in the late 1970s, introduced to Europe in the 1980s, and has been used in the US and Australia in the past decade.
=== Adverse effects ===
Reviews of whole-body cryotherapy have called for research studies to implement active surveillance of adverse events, which are suspected of being underreported. If the cold temperatures are produced by evaporating liquid nitrogen, there is the risk of inert gas asphyxiation as well as frostbite. However, these risks are not present in the electronically operated chambers.
=== Contraindications ===
Contraindications include patients with cardiovascular disease, arterial hypertension, acute infectious diseases, seizures, cold allergy, and some psychiatric disorders.
=== Partial body ===
Partial body cryotherapy devices also exist. If the cold temperatures are produced by evaporating liquid nitrogen, there is the risk of inert gas asphyxiation as well as frostbite.
== See also ==
Cryonics
Cold shock response
Cold compression therapy
Freeze spray
== References ==
== External links ==
Cryotherapy at the U.S. National Library of Medicine Medical Subject Headings (MeSH)
The Master of Physical Therapy (MPT or MSPT) is a post baccalaureate degree conferred upon successful completion of an accredited physical therapy professional education program.
== United States ==
Successful candidates are then qualified to apply for and take the Physical Therapy national licensure exam (in their particular state); students who pass this exam are then licensed as Physical Therapists (and may typically use the designation MPT or simply PT).
Until the late 1990s, physical therapy education was structured as a bachelor's degree. Those who completed the program were qualified to apply for the exam (and subsequently to enter physical therapy practice). However, with the ongoing support of the American Physical Therapy Association (the accrediting organization for all American PT academic programs), the bachelor's degree in physical therapy was slowly replaced by the Master of Physical Therapy. Physical therapy education is currently transitioning to a clinical doctorate, the Doctor of Physical Therapy (DPT), with the majority of current programs offering the DPT.
== References ==
== External links ==
Homepage of the American Physical Therapy Association (APTA)
The Nordoff–Robbins approach to music therapy is a method developed to help children with psychological, physical, or developmental disabilities. It originated from the collaboration of Paul Nordoff and Clive Robbins, which began in 1958, with early influences from Rudolf Steiner and anthroposophical philosophy and teachings. Nordoff–Robbins music therapy asserts that music therapy can improve communication, support change, and help people live more resourcefully and creatively. Nordoff–Robbins music therapy training programs exist in various countries and regions, including the United Kingdom, United States, Australia, Germany, New Zealand, South Africa, and Asia.
== United Kingdom ==
Nordoff and Robbins is a registered charity in the United Kingdom and Scotland. The charity runs the Nordoff Robbins Music Therapy Centre in London and music therapy outreach projects. The charity runs postgraduate training courses in music therapy and a research program, with public courses and conferences.
Nordoff Robbins runs the annual Silver Clef Awards that raise money for the charity. In 2024, they launched the Northern Music Awards as a further fundraising initiative, holding the inaugural ceremony in Manchester.
== United States ==
Founded by Clive Robbins and his wife Carol Robbins, the Nordoff–Robbins Center for Music Therapy at New York University, Steinhardt School of Culture, Education, and Human Development, opened in 1989. The center is affiliated with New York University's Graduate Music Therapy Program. The mission of the center has six main components:
Providing music therapy services to people with disabilities, including autism spectrum disorders, behavioral disorders, developmental delays, sensory impairments, and psychiatric disorders.
Offering advanced music therapy training.
Conducting and publishing research. The center maintains an extensive archive that includes recordings and documentation of the work of Nordoff and Robbins (1959–1976). The archive is updated with contemporary clinical work. Ongoing research in clinical practice focuses on the role of improvisational music therapy in addressing the needs of clients with different areas of disability, including autism spectrum disorder, stroke, and hearing impairment.
Presenting lectures, workshops, and symposia to professional audiences.
Publishing musical and instructional materials for use in the clinical process and improvisation.
Disseminating information and resources; serving as a resource for music therapists, students, the media, and the public. It provides consultant services, organizes seminars and workshops, and hosts over 150 visitors annually.
The Nordoff–Robbins training at Molloy College, established in 2010, is an approved Nordoff–Robbins program in the US. It is located at the Rebecca Center for Music Therapy at Molloy College, an outpatient center serving children and adults in the Long Island and metropolitan New York area.
Both training programs include assessment, archival coursework, clinical work, group music therapy, and clinical improvisation instruction.
== References ==
== External links ==
Nordoff Robbins website
US: Nordoff–Robbins Center for Music Therapy
History of Nordoff-Robbins Music Therapy, The Steinhardt School, New York University
Osbournes win Silver Clef honour, BBC News, June 16, 2006
Manual therapy, or manipulative therapy, is a treatment primarily used by physical therapists, occupational therapists, and massage therapists to treat musculoskeletal pain and disability. It mostly includes kneading and manipulation of muscles, joint mobilization and joint manipulation. It is also used by Rolfers, athletic trainers, osteopaths, and physicians.
== Definitions ==
Irvin Korr, J. S. Denslow and colleagues did the original body of research on manual therapy. Korr described it as the "Application of an accurately determined and specifically directed manual force to the body, in order to improve mobility in areas that are restricted; in joints, in connective tissues or in skeletal muscles."
According to the Orthopaedic Manual Physical Therapy Description of Advanced Specialty Practice, manual therapy is defined as a clinical approach utilizing specific hands-on techniques, including but not limited to manipulation/mobilization, used by the physical therapist to diagnose and treat soft tissues and joint structures for the purpose of modulating pain; increasing range of motion (ROM); reducing or eliminating soft tissue inflammation; inducing relaxation; improving contractile and non-contractile tissue repair, extensibility, and/or stability; facilitating movement; and improving function.
A consensus study of US chiropractors defined manual therapy (generally known as the "chiropractic adjustment" in the profession) as "Procedures by which the hands directly contact the body to treat the articulations and/or soft tissues."
== Use and method ==
In Pakistan, Western Europe, North America and Australasia, manual therapy is usually practiced by members of specific health care professions (e.g. Chiropractors, Occupational Therapists, Osteopaths, Osteopathic physicians, Physiotherapists/Physical Therapists, Massage Therapists and Physiatrists). However, some lay practitioners (not members of a structured profession), such as bonesetters also provide some forms of manual therapy.
A survey released in May 2004 by the National Center for Complementary and Integrative Health focused on who used complementary and alternative medicine (CAM), what was used, and why it was used in the United States by adults during 2002. Massage was the fifth most commonly used CAM in the United States in 2007.
=== Techniques ===
Myofascial therapy targets the muscle and fascial systems, promoting flexibility and mobility of the body's connective tissues. It is said to mobilize adhesions and reduce the severity and sensitivity of scarring. A critical analysis found the relevance of fascia to therapy doubtful.
Massage may be used as part of a treatment. Proponents claim this may reduce inflammation. Science writer Paul Ingraham notes that there is no evidence to support the claim.
Friction massage is said to increase mobilization of adhesions between fascial layers, muscles, compartments and other soft tissues. It is thought to create an inflammatory response and to draw healing activity to injured areas. A 2002 systematic review found that no additional benefit was incurred from the inclusion of deep tissue friction massage in a therapeutic regimen, although the conclusions were limited by the small sample sizes in available randomized clinical trials.
Soft Tissue Technique is firm, direct pressure to relax hypertonic muscles and stretch tight fascial structures. A 2015 review concluded that the technique is ineffective for lower back pain, and the quality of research testing its effectiveness is poor.
Trigger point techniques claim to address myofascial trigger points, though the explanation of how this works is controversial.
=== Stretching ===
From the main article's effectiveness section:
Apart from before running, stretching does not appear to reduce risk of injury during exercise.
Some evidence shows that pre-exercise stretching may increase range of movement.
The Mayo Clinic advises against bouncing and recommends holding each stretch for thirty seconds. It suggests warming up before stretching, or stretching after exercise.
=== Taping ===
Manual therapy practitioners often use therapeutic taping to relieve pressure on injured soft tissue, alter muscle firing patterns or prevent re-injury. Some techniques are designed to enhance lymphatic fluid exchange. After a soft tissue injury to muscles or tendons from sports activities, over exertion or repetitive strain injury swelling may impede blood flow to the area and slow healing. Elastic taping methods may relieve pressure from swollen tissue and enhance circulation to the injured area.
According to the medical and skeptical community there is no known benefit from this technique and it is a pseudoscience.
== Styles of manual therapy ==
There are many different styles of manual therapy. It is a fundamental feature of ayurvedic medicine, traditional Chinese medicine and some forms of alternative medicine as well as being used by mainstream medical practitioners. Hands-on bodywork is a feature of therapeutic interactions in traditional cultures around the world.
== Efficacy ==
In 2018, the Journal of Orthopaedic & Sports Physical Therapy stated that due to the wide range of issues with various parts of the body and different techniques used, as well as a lack of modeling behavior, it can be difficult to tell just how effective manual therapy can be for a patient.
More recent research published in 2024 explained that, historically, traditional manual therapy lacked a basis for being deemed an effective modality for the treatment of musculoskeletal diseases and pain. This faulty modality was centered around the clinician's palpation, patho-anatomical reasoning, and technique specificity. Manual therapy as previously practiced is shifting into a highly effective modern-day physical therapy, which does not depend on perfect palpation and instead follows a patient-centered care model. Based on clinical trials and current data, modern-day manual therapy is deemed effective when used in conjunction with other modalities for patients suffering from musculoskeletal diseases. The modern practice of manual therapy is centered around values such as safety, comfort, efficiency, communication, and patient-centeredness. Through this new approach, clinicians encourage their patients to assess their outcomes and progress and to reevaluate their pain, thereby aligning the practice of manual therapy with a holistic approach to healthcare.
Results for migraines, headaches, and asthma are mixed due to a lack of clinical trials, though at least one article states that manual therapy is effective for asthma.
Manual therapy was shown to be effective for treating back pain, with trigger point therapy being used for myofascial pain, and manual manipulation for lower back pain.
The therapeutic pressure relieves pain and increases range of motion. While patients may complain of muscle soreness after treatment, this effect is expected and is not deemed adverse.
== See also ==
Body psychotherapy
McKenzie method
Osteopathy
Physical therapy
Qigong
Siddha medicine
Fascial Manipulation
== References ==
== Further reading ==
=== Journals ===
The Journal of Manual and Manipulative Therapy
Journal of Manipulative and Physiological Therapeutics
=== Books ===
Karel Lewit (1999). Manipulative therapy in rehabilitation of the locomotor system. Oxford: Butterworth-Heinemann. ISBN 0-7506-2964-9.
Umasankar Mohanty (2017). Clinical Symposia In Manual Therapy. Mangalore: MTFI Healthcare Publications. ISBN 978-81-908154-1-3.
Weiselfish-Giammatteo, S., J. B. Kain; et al. (2005). Integrative manual therapy for the connective tissue system: myofascial release. Berkeley, Calif: North Atlantic Books.{{cite book}}: CS1 maint: multiple names: authors list (link)
Kimberly Burnham (2007). Integrative Manual Therapy. West Hartford, CT: The Burnham Review.
Umasankar Mohanty (2010). Manual therapy of the pelvic complex. Mangalore: MTFI Healthcare Publications. ISBN 978-81-908154-0-6.
== External links ==
American Academy of Orthopaedic Manual Physical Therapists
American Organization for Bodywork Therapies of Asia
Manual Therapy Foundation of India
International Federation of Orthopaedic Manipulative Therapists
Person-centered therapy (PCT), also known as person-centered psychotherapy, person-centered counseling, client-centered therapy and Rogerian psychotherapy, is a humanistic approach to psychotherapy developed by psychologist Carl Rogers and colleagues beginning in the 1940s and extending into the 1980s. Person-centered therapy emphasizes the importance of creating a therapeutic environment grounded in three core conditions: unconditional positive regard (acceptance), congruence (genuineness), and empathic understanding. Through these conditions, it seeks to facilitate a client's actualizing tendency, "an inbuilt proclivity toward growth and fulfillment".
== History and influences ==
Person-centered therapy was developed by Carl Rogers in the 1940s and 1950s, and was brought to public awareness largely through his book Client-centered Therapy, published in 1951. It has been recognized as one of the major types of psychotherapy (theoretical orientations), along with psychodynamic psychotherapy, psychoanalysis, classical Adlerian psychology, cognitive behavioral therapy, existential therapy, and others. Its underlying theory arose from the results of empirical research; it was the first theory of therapy to be driven by empirical research, with Rogers at pains to reassure other theorists that "the facts are always friendly". Originally called non-directive therapy, it "offered a viable, coherent alternative to Freudian psychotherapy. ... [Rogers] redefined the therapeutic relationship to be different from the Freudian authoritarian pairing."
Person-centered therapy is often described as a humanistic therapy, but its main principles appear to have been established before those of humanistic psychology. Some have argued that "it does not in fact have much in common with the other established humanistic therapies" but, by the mid-1960s, Rogers accepted being categorized with other humanistic (or phenomenological-existential) psychologists in contrast to behavioral and psychoanalytic psychologists. Despite the importance of the self to person-centered theory, the theory is fundamentally organismic and holistic in nature, with the individual's unique self-concept at the center of the unique "sum total of the biochemical, physiological, perceptual, cognitive, emotional and interpersonal behavioural subsystems constituting the person".
Rogers coined the term counselling in the 1940s because, at that time, psychologists were not legally permitted to provide psychotherapy in the US. Only medical practitioners were allowed to use the term psychotherapy to describe their work.
Rogers affirmed individual personal experience as the basis and standard for living and therapeutic effect. This emphasis contrasts with the dispassionate position which may be intended in other therapies, particularly the behavioral therapies. Hallmarks of Rogers's person-centered therapy include: living in the present rather than the past or future; organismic trust; naturalistic faith in one's own thoughts and the accuracy of one's feelings; a responsible acknowledgment of one's freedom; and a view toward participating fully in our world and contributing to other people's lives. Rogers also claimed that the therapeutic process is, in essence, composed of the accomplishments made by the client. The client, having already progressed further along in their growth and maturation, progresses further only with the aid of a psychologically favorable environment.
== The necessary and sufficient conditions ==
Rogers (1957; 1959) stated that there are six necessary and sufficient conditions required for therapeutic change:
Therapist–client psychological contact: A relationship between client and therapist must exist, and it must be a relationship in which each person's perception of the other is important.
Client incongruence: Incongruence (defined by Carl Rogers as "a lack of alignment between the real self and the ideal self") exists between the client's experience and awareness.
Therapist congruence, or genuineness: The therapist is congruent within the therapeutic relationship; the therapist is deeply involved—they are not "acting"—and they can draw on their own experiences (self-disclosure) to facilitate the relationship.
Therapist unconditional positive regard: The therapist accepts the client unconditionally, without judgment, disapproval, or approval. This facilitates increased self-regard in the client, as they can begin to become aware of experiences in which their view of self-worth was distorted or denied.
Therapist empathic understanding: The therapist experiences an empathic understanding of the client's internal frame of reference. Accurate empathy on the part of the therapist helps the client believe the therapist's unconditional regard for them.
Client perception: The client perceives, to at least a minimal degree, the therapist's unconditional positive regard and empathic understanding.
The three conditions specific to the therapist/counselor came to be called the core conditions of PCT: therapist congruence, unconditional positive regard or acceptance, and accurate empathic understanding. There is a large body of publications of empirical research on these conditions.
== Processes ==
Rogers believed that a therapist who embodies the three critical and reflexive attitudes (the three core conditions) will help liberate their client to more confidently express their true feelings without fear of judgement. To achieve this, the client-centered therapist carefully avoids directly challenging their client's way of expressing themselves in the session, in order to enable a deeper exploration of the issues most intimate to them, free from external referencing. Rogers was not prescriptive in telling his clients what to do, but believed that the answers to the clients' questions were within the client and not the therapist. Accordingly, the therapist's role was to create a facilitative, empathic environment wherein the client could discover the answers for themselves.
Recent studies suggest that narrative shifts within therapy, such as "innovative moments" where clients express thoughts or behaviors inconsistent with their previous problematic self-narratives, are associated with meaningful psychological change in client-centered therapy. Additionally, a study found that person-centered and experiential therapies were effective in treating anxiety, particularly when emotional depth and self-exploration were central to the process. However, these therapies were sometimes less effective than cognitive-behavioral therapy in direct comparisons, which supports the importance of tailoring treatment to individual client needs.
Building on this, another study used a machine learning approach to determine which clients would respond better to person-centered therapy versus cognitive-behavioral therapy. Their findings showed that outcomes significantly improved when therapy was matched to the client’s predicted needs, reinforcing the value of personalized care. Person-centered therapy has also been shown to benefit specific populations. In a randomized controlled trial, von Humboldt and Leal found that older adults receiving PCT reported significant improvements in self-esteem that were sustained for a full year after treatment. This suggests that the core principles of PCT are adaptable and effective across age groups.
== Effectiveness ==
Research on the effectiveness of person-centered therapy (PCT) across various clinical conditions has produced mixed but encouraging results. While PCT has generally been found to yield positive outcomes for anxiety and depression, some studies suggest it may be less effective than structured approaches like cognitive-behavioral therapy (CBT) in certain contexts. For example, a 2013 meta-analysis found that experiential therapies, including PCT, showed improvement in clients with anxiety from pre- to post-treatment, although they often performed below CBT in direct comparisons.
Even so, PCT offers distinct advantages. Its focus on emotional depth, client autonomy, and a non-directive therapeutic environment can be particularly helpful for individuals who prefer a more supportive and less structured approach to therapy. These qualities may also make PCT a good fit for clients who have had negative experiences with more prescriptive or diagnosis-driven models.
Recent findings suggest that outcomes improve when therapy is matched to individual client needs. Delgadillo and Duhne used machine learning to analyze which clients responded best to CBT versus PCT. Their results showed that clients who received the therapy most aligned with their predicted treatment response experienced significantly better outcomes than those who received a non-matching therapy. This supports the idea that while PCT may not be ideal for every individual, it can be highly effective when personalized to the client. PCT has also shown promise with specific populations. In a randomized controlled trial, von Humboldt and Leal found that older adults who received person-centered therapy reported significant improvements in self-esteem. These gains were maintained for at least 12 months after the intervention, highlighting PCT’s potential for long-term impact and its adaptability across age groups.
== Applications ==
Person-centered therapy has been adapted for a variety of populations and settings. For example, a randomized controlled trial in Portugal demonstrated that PCT significantly improved self-esteem in older adults by reducing the gap between their real and ideal selves. These improvements were maintained at a 12-month follow-up, suggesting long-term effectiveness in aging populations.
PCT has also been applied in educational and youth counseling settings. Its emphasis on empathy, acceptance, and authentic communication makes it particularly effective for adolescents and young adults who are navigating identity development, interpersonal challenges, and emotional regulation. Additionally, the non-directive nature of PCT allows it to be used across cultural contexts where traditional therapist-led approaches may not align with community values or client expectations.
The adaptability of person-centered therapy stems from its core belief that the client is the expert in their own experience. This principle enables therapists to work effectively with diverse populations while maintaining a strong respect for individual autonomy and cultural differences.
== Criticism and limitations ==
Although client-centered therapy has been criticized by behaviorists for lacking structure and by psychoanalysts for offering what they view as a conditional rather than truly neutral therapeutic relationship, research has shown that person-centered therapy can be effective across a variety of clinical issues. Critics have also noted that the non-directive nature of PCT can make it difficult to measure outcomes consistently, as well as to assess the uniform application of its core conditions across therapists.
Another concern involves the generalizability and adaptability of the approach. A study by Delgadillo and Duhne used machine learning to examine whether certain clients with depression responded better to person-centered counseling or cognitive-behavioral therapy. The results showed that clients who received the therapy most closely aligned with their predicted treatment response experienced significantly better outcomes than those who received a non-matching therapy. This supports the idea that while PCT can be highly effective, it may not be the best choice for every individual unless selected based on specific client needs.
In addition, some have questioned whether PCT provides sufficient structure for clients with more severe or complex mental health conditions, such as trauma or chronic depression. Although PCT encourages emotional growth within a supportive relationship, it may require adaptation or integration with other therapeutic models to effectively meet the needs of clients dealing with more intensive clinical presentations.
== See also ==
Humanistic psychology
Human Potential Movement
ELIZA
== References ==
== Bibliography ==
Arnold, Kyle (2014). "Behind the mirror: reflective listening and its tain in the work of Carl Rogers". The Humanistic Psychologist. 42 (4): 354–369. doi:10.1080/08873267.2014.913247.
Bruno, Frank Joe (1977). "Client-centered counseling: becoming a person". Human adjustment and personal growth: seven pathways. New York: John Wiley & Sons. pp. 362–370. ISBN 9780471114352. OCLC 2614322.
Cooper, Mick; O'Hara, Maureen; Schmid, Peter F.; Wyatt, Gill, eds. (2013) [2007]. The handbook of person-centred psychotherapy and counselling (2nd ed.). New York: Palgrave Macmillan. ISBN 9780230280496. OCLC 937523949.
Rogers, Carl R. (1951). Client-centered therapy, its current practice, implications, and theory. The Houghton Mifflin psychological series. Boston: Houghton Mifflin. OCLC 2571303.
Rogers, Carl R. (1957). "The necessary and sufficient conditions of therapeutic personality change". Journal of Consulting Psychology. 21 (2): 95–103. CiteSeerX 10.1.1.605.9768. doi:10.1037/h0045357. PMID 13416422.
Rogers, Carl R. (1959). "A theory of therapy, personality and interpersonal relationships as developed in the client-centered framework". In Koch, Sigmund (ed.). Psychology: a study of a science. Vol. 3: Formulations of the person and the social context. New York: McGraw Hill. pp. 184-256. OCLC 3731949.
Rogers, Carl R. (1961). On becoming a person: a therapist's view of psychotherapy. Boston: Houghton Mifflin. ISBN 9780395081341. OCLC 172718. {{cite book}}: ISBN / Date incompatibility (help)
Rogers, Carl R. (1980). A way of being. Boston: Houghton Mifflin. ISBN 9780395299159. OCLC 6602382.
Rogers, Carl R.; Lyon, Harold C.; Tausch, Reinhard (2013). On becoming an effective teacher: person-centered teaching, psychology, philosophy, and dialogues with Carl R. Rogers and Harold Lyon. London; New York: Routledge. ISBN 9780415816977. OCLC 820119514.
Delgadillo, J.; Gonzalez Salas Duhne, P. (2020). "Targeted prescription of cognitive–behavioral therapy versus person-centered counseling for depression using a machine learning approach". Journal of Consulting and Clinical Psychology. 88 (1): 14–24. doi:10.1037/ccp0000452.
Elliott, R. (2013). "Person-centered/experiential psychotherapy for anxiety difficulties: theory, research and practice". Person-Centered & Experiential Psychotherapies. 12 (1): 16–32. doi:10.1080/14779757.2013.767750.
Gonçalves, M. M.; Mendes, I.; Cruz, G.; Ribeiro, A. P.; Sousa, I.; Angus, L.; Greenberg, L. S. (2012). "Innovative moments and change in client-centered therapy". Psychotherapy Research. 22 (4): 389–401. doi:10.1080/10503307.2012.662608.
Potter, C. M.; Drabick, D. A.; Heimberg, R. G. (2014). "Panic symptom profiles in social anxiety disorder: a person-centered data-analytic approach". Behaviour Research and Therapy. 56: 53–59. doi:10.1016/j.brat.2014.03.004.
von Humboldt, S.; Leal, I. (2012). "Person-centered therapy and older adults' self-esteem: a pilot study with follow-up". Studies in Sociology of Science. 3 (4): 1–10. doi:10.3968/j.sss.1923018420120304.176.
== External links ==
World Association for Person-Centered and Experiential Psychotherapy and Counseling
The neural encoding of sound is the representation of auditory sensation and perception in the nervous system. As contemporary neuroscience continues to advance, what is known of the auditory system is continually being refined. The encoding of sounds includes the transduction of sound waves into electrical impulses (action potentials) along auditory nerve fibers, and further processing in the brain.
== Basic physics of sound ==
Sound waves are what physicists call longitudinal waves, which consist of propagating regions of high pressure (compression) and corresponding regions of low pressure (rarefaction).
=== Waveform ===
Waveform is a description of the general shape of the sound wave. Waveforms are sometimes described by the sum of sinusoids, via Fourier analysis.
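To make the Fourier view concrete, the following sketch builds a waveform as a sum of two sinusoids and recovers the components with a discrete Fourier transform. It is illustrative only; the sampling rate, component frequencies, and amplitudes are arbitrary assumptions, not values from this article.

```python
import numpy as np

# Build a waveform as a sum of two sinusoids, then recover the
# components via Fourier analysis. All numbers are arbitrary examples.
fs = 8000                      # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)  # one second of samples

# 440 Hz fundamental plus a quieter 880 Hz overtone
waveform = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# FFT magnitude, scaled so each bin reads in units of sinusoid amplitude
spectrum = np.abs(np.fft.rfft(waveform)) / (len(waveform) / 2)
freqs = np.fft.rfftfreq(len(waveform), d=1 / fs)

print(freqs[spectrum > 0.1])  # -> [440. 880.]
```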
=== Amplitude ===
Amplitude is the size (magnitude) of the pressure variations in a sound wave, and primarily determines the loudness with which the sound is perceived. In a sinusoidal function such as {\displaystyle C\sin(2\pi ft)}, C represents the amplitude of the sound wave.
=== Frequency and wavelength ===
The frequency of a sound is defined as the number of repetitions of its waveform per second, and is measured in hertz; frequency is inversely proportional to wavelength (in a medium of uniform propagation velocity, such as sound in air). The wavelength of a sound is the distance between any two consecutive matching points on the waveform. The audible frequency range for young humans is about 20 Hz to 20 kHz. Sensitivity to higher frequencies decreases with age, with the upper limit falling to about 16 kHz for adults and even to 3 kHz for the elderly.
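A minimal sketch of this inverse relation between frequency and wavelength, assuming sound in air at roughly 343 m/s (a textbook value for 20 °C, not stated in this article):

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s, approximate value at 20 degrees Celsius (assumed)

def wavelength_m(frequency_hz: float, speed: float = SPEED_OF_SOUND_AIR) -> float:
    """Wavelength in metres: inversely proportional to frequency."""
    return speed / frequency_hz

# The limits of the young-adult audible range quoted above:
print(wavelength_m(20))      # ~17.15 m at 20 Hz
print(wavelength_m(20_000))  # ~0.017 m (17 mm) at 20 kHz
```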
== Anatomy of the ear ==
Given the simple physics of sound, the anatomy and physiology of hearing can be studied in greater detail.
=== Outer ear ===
The outer ear consists of the pinna or auricle (visible parts including ear lobes and concha), and the auditory meatus (the passageway for sound). The fundamental function of this part of the ear is to gather sound energy and deliver it to the eardrum. Resonances of the external ear selectively boost sound pressure at frequencies in the range 2–5 kHz.
The pinna as a result of its asymmetrical structure is able to provide further cues about the elevation from which the sound originated. The vertical asymmetry of the pinna selectively amplifies sounds of higher frequency from high elevation thereby providing spatial information by virtue of its mechanical design.
=== Middle ear ===
The middle ear plays a crucial role in the auditory process, as it essentially converts pressure variations in air to perturbations in the fluids of the inner ear. In other words, it is the mechanical transfer function that allows for efficient transfer of collected sound energy between two different media. The three small bones that are responsible for this complex process are the malleus, the incus, and the stapes, collectively known as the ear ossicles. The impedance matching is done via lever ratios and the ratio of areas of the tympanic membrane and the footplate of the stapes, creating a transformer-like mechanism. Furthermore, the ossicles are arranged in such a manner as to resonate at 700–800 Hz while at the same time protecting the inner ear from excessive energy. A certain degree of top-down control is present at the middle ear level, primarily through two muscles present in this anatomical region: the tensor tympani and the stapedius. These two muscles can restrain the ossicles so as to reduce the amount of energy that is transmitted into the inner ear in loud surroundings.
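As a rough illustration of this transformer-like mechanism, the sketch below estimates the middle ear's pressure gain from the area ratio and the lever ratio. The numerical values are common textbook approximations, assumed here rather than taken from this article:

```python
import math

# Textbook approximations, assumed rather than taken from this article:
tympanic_area_mm2 = 55.0  # effective area of the tympanic membrane
footplate_area_mm2 = 3.2  # area of the stapes footplate
ossicular_lever = 1.3     # malleus/incus lever ratio

# Transformer-like pressure gain = (area ratio) * (lever ratio)
pressure_gain = (tympanic_area_mm2 / footplate_area_mm2) * ossicular_lever
gain_db = 20 * math.log10(pressure_gain)

print(f"pressure gain ~ {pressure_gain:.0f}x ({gain_db:.0f} dB)")  # ~22x (~27 dB)
```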
=== Inner ear ===
The cochlea of the inner ear, a marvel of physiological engineering, acts as both a frequency analyzer and nonlinear acoustic amplifier. The cochlea has over 32,000 hair cells. Outer hair cells primarily provide amplification of traveling waves that are induced by sound energy, while inner hair cells detect the motion of those waves and excite the (Type I) neurons of the auditory nerve.
The basal end of the cochlea, where sounds enter from the middle ear, encodes the higher end of the audible frequency range while the apical end of the cochlea encodes the lower end of the frequency range. This tonotopy plays a crucial role in hearing, as it allows for spectral separation of sounds. A cross section of the cochlea will reveal an anatomical structure with three main chambers (scala vestibuli, scala media, and scala tympani). At the apical end of the cochlea, at an opening known as the helicotrema, the scala vestibuli merges with the scala tympani. The fluid found in these two cochlear chambers is perilymph, while scala media, or the cochlear duct, is filled with endolymph.
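One standard empirical model of this tonotopy, not named in the article itself, is Greenwood's position-frequency function; the sketch below uses the commonly cited human parameters, and its endpoints reproduce the basal-high/apical-low arrangement described above:

```python
def greenwood_hz(x: float, A: float = 165.4, a: float = 2.1, k: float = 0.88) -> float:
    """Characteristic frequency at fractional distance x from the apex (0.0)
    to the base (1.0), using commonly cited human parameters."""
    return A * (10 ** (a * x) - k)

print(round(greenwood_hz(0.0)))  # apical end: ~20 Hz (low frequencies)
print(round(greenwood_hz(1.0)))  # basal end: ~20677 Hz (high frequencies)
```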
== Transduction ==
=== Auditory hair cells ===
The auditory hair cells in the cochlea are at the core of the auditory system's special functionality (similar hair cells are located in the semicircular canals). Their primary function is mechanotransduction, or conversion between mechanical and neural signals. The relatively small number of auditory hair cells is surprising when compared to other sensory cells such as the rods and cones of the visual system. Thus the loss of a small number (on the order of thousands) of auditory hair cells can be devastating, while the loss of a larger number of retinal cells (on the order of hundreds of thousands) will not be as harmful from a sensory standpoint.
Cochlear hair cells are organized as inner hair cells and outer hair cells; inner and outer refer to relative position from the axis of the cochlear spiral. The inner hair cells are the primary sensory receptors and a significant amount of the sensory input to the auditory cortex occurs from these hair cells. Outer hair cells on the other hand boost the mechanical signal by using electromechanical feedback.
==== Mechanotransduction ====
The apical surface of each cochlear hair cell contains a hair bundle. Each hair bundle contains approximately 300 fine projections known as stereocilia, formed by actin cytoskeletal elements. The stereocilia in a hair bundle are arranged in multiple rows of different heights. In addition to the stereocilia, a true ciliary structure known as the kinocilium exists and is believed to play a role in hair cell degeneration that is caused by exposure to high frequencies.
A stereocilium is able to bend at its point of attachment to the apical surface of the hair cell. The actin filaments that form the core of a stereocilium are highly interlinked and cross linked with fibrin, and are therefore stiff and inflexible at positions other than the base. When stereocilia in the tallest row are deflected in the positive-stimulus direction, the shorter rows of stereocilia are also deflected. These simultaneous deflections occur due to filaments called tip links that attach the side of each taller stereocilium to the top of the shorter stereocilium in the adjacent row. When the tallest stereocilia are deflected, tension is produced in the tip links and causes the stereocilia in the other rows to deflect as well. At the lower end of each tip link is one or more mechano-electrical transduction (MET) channels, which are opened by tension in the tip links. These MET channels are cation-selective transduction channels that allow potassium and calcium ions to enter the hair cell from the endolymph that bathes its apical end.
The influx of cations, particularly potassium, through the open MET channels causes the membrane potential of the hair cell to depolarize. This depolarization opens voltage-gated calcium channels to allow the further influx of calcium. This results in an increase in the calcium concentration, which triggers the exocytosis of neurotransmitter vesicles at ribbon synapses at the basolateral surface of the hair cell. The release of neurotransmitter at a ribbon synapse, in turn, generates an action potential in the connected auditory-nerve fiber. Hyperpolarization of the hair cell, which occurs when potassium leaves the cell, is also important, as it stops the influx of calcium and therefore stops the fusion of vesicles at the ribbon synapses. Thus, as elsewhere in the body, the transduction is dependent on the concentration and distribution of ions. The perilymph that is found in the scala tympani has a low potassium concentration, whereas the endolymph found in the scala media has a high potassium concentration and an electrical potential of about 80 millivolts compared to the perilymph. Mechanotransduction by stereocilia is highly sensitive and able to detect perturbations as small as fluid fluctuations of 0.3 nanometers, and can convert this mechanical stimulation into an electrical nerve impulse in about 10 microseconds.
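A common first-order model of this gating, offered here as an illustrative assumption rather than anything stated in the article, treats the MET channel open probability as a two-state Boltzmann function of hair-bundle deflection:

```python
import math

# Illustrative parameters (assumptions, not values from this article):
# x0: deflection at which half the channels are open; s: gating sensitivity.
def met_open_probability(x_nm: float, x0_nm: float = 20.0, s_nm: float = 8.0) -> float:
    """Two-state Boltzmann model of MET channel open probability
    as a function of hair-bundle deflection in nanometres."""
    return 1.0 / (1.0 + math.exp(-(x_nm - x0_nm) / s_nm))

# Positive deflection (toward the tallest row) tensions the tip links
# and opens more channels, depolarizing the hair cell.
for x in (-20, 0, 20, 40):
    print(x, round(met_open_probability(x), 3))  # 0.007, 0.076, 0.5, 0.924
```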
=== Nerve fibers from the cochlea ===
There are two types of afferent neurons found in the cochlear nerve: Type I and Type II. Each type of neuron has specific cell selectivity within the cochlea. The mechanism that determines the selectivity of each type of neuron for a specific hair cell has been proposed by two diametrically opposed theories in neuroscience known as the peripheral instruction hypothesis and the cell autonomous instruction hypothesis. The peripheral instruction hypothesis states that phenotypic differentiation between the two neurons are not made until after these undifferentiated neurons attach to hair cells which in turn will dictate the differentiation pathway. The cell autonomous instruction hypothesis states that differentiation into Type I and Type II neurons occur following the last phase of mitotic division but preceding innervations. Both types of neuron participate in the encoding of sound for transmission to the brain.
==== Type I neurons ====
Type I neurons innervate inner hair cells. There is significantly greater convergence of this type of neuron towards the basal end in comparison with the apical end. A radial fiber bundle acts as an intermediary between Type I neurons and inner hair cells. The ratio of innervation that is seen between Type I neurons and inner hair cells is 1:1 which results in high signal transmission fidelity and resolution.
==== Type II neurons ====
Type II neurons, on the other hand, innervate outer hair cells. However, there is significantly greater convergence of this type of neuron towards the apical end in comparison with the basal end. A 1:30–60 ratio of innervation is seen between Type II neurons and outer hair cells, which in turn makes these neurons ideal for electromechanical feedback. Type II neurons can be physiologically manipulated to innervate inner hair cells, provided outer hair cells have been destroyed, either through mechanical damage or through chemical damage induced by drugs such as gentamicin.
== Brainstem and midbrain ==
The auditory nervous system includes many stages of information processing between the ear and cortex.
== Auditory cortex ==
Primary auditory neurons carry action potentials from the cochlea into the transmission pathway shown in the adjacent image. Multiple relay stations act as integration and processing centers. The signals reach the first level of cortical processing at the primary auditory cortex (A1), in the superior temporal gyrus of the temporal lobe. Most areas up to and including A1 are tonotopically mapped (that is, frequencies are kept in an ordered arrangement). However, A1 participates in coding more complex and abstract aspects of auditory stimuli, such as the presence of a distinct sound or its echoes, without coding the frequency content well.
Like lower regions, this region of the brain has combination-sensitive neurons that have nonlinear responses to stimuli.
Recent studies conducted in bats and other mammals have revealed that the ability to process and interpret modulation in frequencies primarily occurs in the superior and middle temporal gyri of the temporal lobe. Lateralization of brain function exists in the cortex, with the processing of speech in the left cerebral hemisphere and environmental sounds in the right hemisphere of the auditory cortex. Music, with its influence on emotions, is also processed in the right hemisphere of the auditory cortex. While the reason for such localization is not quite understood, lateralization in this instance does not imply exclusivity as both hemispheres do participate in the processing, but one hemisphere tends to play a more significant role than the other.
== Recent ideas ==
Alterations in encoding mechanisms have been observed as one progresses along the auditory pathway. Encoding shifts from synchronous responses in the cochlear nucleus to rate encoding in the inferior colliculus.
Despite advances in gene therapy that allow for the alteration of the expression of genes that affect audition, such as ATOH1, and the use of viral vectors to that end, the micro-mechanical and neural complexity that surrounds the inner ear hair cells means that their artificial regeneration in vitro remains a distant reality.
Recent studies suggest that the auditory cortex may not be as involved in top-down processing as was previously thought. In studies conducted on primates for tasks that required the discrimination of acoustic flutter, Lemus found that the auditory cortex played only a sensory role and was not involved in the cognition of the task at hand.
Due to the presence of the tonotopic maps in the auditory cortex at an early age, it has been assumed that cortical reorganization had little to do with the establishment of these maps, but these maps are subject to plasticity. The cortex seems to perform a more complex processing than spectral analysis or even spectro-temporal analysis.
== References ==
Hydrotherapy, formerly called hydropathy and also called water cure, is a branch of alternative medicine (particularly naturopathy), occupational therapy, and physiotherapy, that involves the use of water for pain relief and treatment. The term encompasses a broad range of approaches and therapeutic methods that take advantage of the physical properties of water, such as temperature and pressure, to stimulate blood circulation and treat the symptoms of certain diseases.
Various therapies used in the present-day hydrotherapy employ water jets, underwater massage and mineral baths (e.g. balneotherapy, Iodine-Grine therapy, Kneipp treatments, Scotch hose, Swiss shower, thalassotherapy) or whirlpool bath, hot Roman bath, hot tub, Jacuzzi, and cold plunge.
Hydrotherapy lacks robust evidence supporting its efficacy beyond placebo effects. Systematic reviews of randomized controlled trials have consistently found no clear evidence of curative effects, citing methodological flaws and insufficient data. Overall, the scientific consensus indicates that hydrotherapy's benefits are not conclusively greater than those of placebo treatments.
== Uses ==
Water therapy may be restricted to use as aquatic therapy, a form of physical therapy, and a cleansing agent. However, it is also used as a medium for delivering heat and cold to the body, which has long been the basis for its application. Hydrotherapy involves a range of methods and techniques, many of which use water as a medium to facilitate thermoregulatory reactions for therapeutic benefit.
Shower-based hydrotherapy techniques have been increasingly used in preference to full-immersion methods, partly for the ease of cleaning the equipment and reducing infections due to contamination. When removal of tissue is necessary for the treatment of wounds, hydrotherapy, which performs selective mechanical debridement, can be used. Examples of this include directed wound irrigation and therapeutic irrigation with suction.
== Technique ==
The following methods are used for their hydrotherapeutic effects:
Packings, general and local;
Hot air and steam baths;
General baths;
Treadmills;
Sitz (sitting), spinal, head, and foot baths;
Bandages or compresses, wet and dry; also
Fomentations and poultices, sinapisms, stupes, rubbings, and water potations.
Hydrotherapy, which involves submerging all or part of the body in water, can involve several types of equipment:
Full body immersion tanks (a "Hubbard tank" is a large size)
Arm, hip, and leg whirlpool
Whirling water movement, provided by mechanical pumps, has been used in water tanks since at least the 1940s. Similar technologies have been marketed for recreational use under the terms "hot tub" or "spa".
In some cases, baths with whirlpool water flow are not used to manage wounds, as a whirlpool will not selectively target the tissue to be removed, and can damage all tissue. Whirlpools also create an unwanted risk of bacterial infection, can damage fragile body tissue, and in the case of treating arms and legs, bring risk of complications from edema.
== History ==
The therapeutic use of water has been recorded in ancient Egyptian, Greek and Roman civilizations. Egyptian royalty bathed with essential oils and flowers. Romans had communal public baths for their citizens. Hippocrates prescribed bathing in spring water for sickness. Other cultures noted for a long history of hydrotherapy include China and Japan, the latter being centred primarily around Japanese hot springs. Many such histories predate the Roman thermae.
=== Modern revival ===
Hydrotherapy became more prominent following the growth and development of modern medical practices in the 18th and 19th centuries. As traditional medical practice became increasingly professionalized, medical treatment was felt to be increasingly less personalized. Hydrotherapy was seen as a more personal form of medical treatment, one that did not confront patients with the alienating scientific language that the modern development of medicine entailed.
==== 1700–1810 ====
Two English works on the medical uses of water were published in the 18th century that inaugurated the new fashion for hydrotherapy. One of these was by Sir John Floyer, a physician of Lichfield, who, struck by the remedial use of certain springs by the neighbouring peasantry, investigated the history of cold bathing and published a book on the subject in 1702. The book ran through six editions within a few years, and the translation of this book into German was largely drawn upon by J. S. Hahn of Silesia as the basis for his book called On the Healing Virtues of Cold Water, Inwardly and Outwardly Applied, as Proved by Experience, published in 1738.
The other work was a 1797 publication by James Currie of Liverpool on the use of hot and cold water in the treatment of fever and other illnesses, with a fourth edition published in 1805, not long before his death. It was also translated into German by Michaelis (1801) and Hegewisch (1807). It was highly popular and first placed the subject on a scientific basis. Hahn's writings had meanwhile created much enthusiasm among his countrymen, societies having been formed everywhere to promote the medicinal and dietetic use of water; and in 1804 Professor E.F.C. Oertel of Anspach republished them and quickened the popular movement by unqualified commendation of water drinking as a remedy for all diseases.
The general idea behind hydropathy during the 1800s was to be able to induce something called a crisis. The thinking was that water invaded any cracks, wounds, or imperfections in the skin, which were filled with impure fluids. Health was considered to be the body's natural state, and filling these spaces with pure water would flush the impurities out, which would rise to the skin's surface, producing pus. The event of this pus emerging was called a crisis, and was achieved through a multitude of methods. These methods included techniques such as sweating, the plunging bath, the half bath, the head bath, the sitting bath, and the douche bath. All of these were ways to gently expose the patient to cold water in different ways.
==== Vincenz Priessnitz (1799–1851) ====
Vincenz Priessnitz was the son of a peasant farmer who, as a young child, observed a wounded deer bathing a wound in a pond near his home. Over several days, he would see this deer return, and eventually the wound was healed. Later, as a teenager, Priessnitz was attending to a horse cart, when the cart ran him over, breaking three of his ribs. A physician told him that they would never heal. Priessnitz decided to try his hand at healing himself and wrapped his wounds with damp bandages. By daily changing his bandages and drinking large quantities of water, after about a year, his broken ribs had healed. Priessnitz quickly gained fame in his hometown and became the consulting physician.
Later in life, Priessnitz became the head of a hydropathy clinic in Gräfenberg in 1826. He was extremely successful and by 1840, he had 1600 patients in his clinic, including many fellow physicians, and important political figures such as nobles and prominent military officials. Treatment length at Priessnitz's clinic varied. Much of his theory was about inducing the aforementioned crisis, which could happen quickly or could occur after three to four years. Under the simplistic nature of hydropathy, a large part of the treatment was based on living a simple lifestyle. These lifestyle adjustments included dietary changes such as eating only very coarse food, such as jerky and bread, and of course, drinking large quantities of water. Priessnitz's treatments also included a great deal of less strenuous exercise, mostly including walking. Ultimately, Priessnitz's clinic was extremely successful, and he gained fame across the western world. His practice even influenced the hydropathy that took root overseas in America.
==== Sebastian Kneipp (1821–1897) ====
Sebastian Kneipp was born in Germany, and he considered his role in hydropathy to be that of continuing Priessnitz's work. Kneipp's practice of hydropathy was even gentler than the norm. He believed that typical hydropathic practices deployed were "too violent or too frequent," and he expressed concern that such techniques would cause emotional or physical trauma to the patient. Kneipp's practice was more all-encompassing than Priessnitz's, and his practice involved not only curing the patients' physical woes, but also emotional and mental as well.
Kneipp introduced four additional principles to the therapy: medicinal herbs, massages, balanced nutrition, and "regulative therapy to seek inner balance". Kneipp had a very simple view of an already simple practice. For him, hydropathy's primary goals were strengthening the constitution and removing poisons and toxins in the body. These basic interpretations of how hydropathy worked hinted at his complete lack of medical training. Kneipp did have, however, a very successful medical practice despite, perhaps even because of, his lack of medical training. As mentioned above, some patients were beginning to feel uncomfortable with traditional doctors because of the elitism of the medical profession. The new terms and techniques that doctors were using were difficult for the average person to understand. Having no formal training, all of his instructions and published works are described in easy-to-understand language and would have seemed very appealing to a patient who was displeased with the direction traditional medicine was taking.
A significant factor in the popular revival of hydrotherapy was that it could be practised relatively cheaply at home. The growth of hydrotherapy (or 'hydropathy' to use the name of the time) was thus partly derived from two interacting spheres: "the hydro and the home".
Hydrotherapy as a formal medical tool dates from about 1829 when Vincenz Priessnitz (1799–1851), a farmer of Gräfenberg in Silesia, then part of the Austrian Empire, began his public career in the paternal homestead, extended so as to accommodate the increasing numbers attracted by the fame of his cures.
At Gräfenberg, to which the fame of Priessnitz drew people of every rank and many countries, medical men were conspicuous by their numbers, some being attracted by curiosity, others by the desire of knowledge, but the majority by the hope of cure for ailments which had as yet proved incurable. Many records of experiences at Gräfenberg were published, all more or less favorable to the claims of Priessnitz, and some enthusiastic in their estimate of his genius and penetration.
=== Spread of hydrotherapy ===
Captain R. T. Claridge was responsible for introducing and promoting hydropathy in Britain, first in London in 1842, then with lecture tours in Ireland and Scotland in 1843. His 10-week tour in Ireland included Limerick, Cork, Wexford, Dublin and Belfast, over June, July and August 1843, with two subsequent lectures in Glasgow.
Some other Englishmen preceded Claridge to Graefenberg, although not many. One of these was James Wilson, who himself, along with James Manby Gully, established and operated a water cure establishment at Malvern in 1842. In 1843, Wilson and Gully published a comparison of the efficacy of the water-cure with drug treatments, including accounts of some cases treated at Malvern, combined with a prospectus of their Water Cure Establishment. Then in 1846 Gully published The Water Cure in Chronic Disease, further describing the treatments available at the clinic.
The fame of the water-cure establishment grew, and Gully and Wilson became well-known national figures. Two more clinics were opened at Malvern. Famous patients included Charles Darwin, Charles Dickens, Thomas Carlyle, Florence Nightingale, Lord Tennyson and Samuel Wilberforce. With his fame he also attracted criticism:
Sir Charles Hastings, a physician and founder of the British Medical Association, was a forthright critic of hydropathy, and Gully in particular.
From the 1840s, hydropathics were established across Britain. Initially, many of these were small institutions, catering to at most dozens of patients. By the later nineteenth century, the typical hydropathic establishment had evolved into a more substantial undertaking, with thousands of patients treated annually for weeks at a time in a large purpose-built building with lavish facilities – baths, recreation rooms and the like – under the supervision of fully trained and qualified medical practitioners and staff.
In Germany, France, America, and the UK (especially in Scotland), the number of hydropathic establishments rapidly increased. Antagonism ran high between the old practice and the new. Unsparing condemnation was heaped by each on the other; and a legal prosecution, leading to a royal commission of inquiry, served but to make Priessnitz and his system stand higher in public estimation.
Increasing popularity soon diminished caution about whether the new method would help minor ailments and benefit the more seriously injured. Hydropathists occupied themselves mainly with studying chronic invalids well able to bear a rigorous regimen and the severities of unrestricted crisis. The need of a radical adaptation to the former class was first adequately recognized by John Smedley, a manufacturer of Derbyshire, who, impressed in his own person with the severities as well as the benefits of the cold water cure, practised among his workpeople a milder form of hydropathy, and began about 1852 a new era in its history, founding at Matlock a counterpart of the establishment at Gräfenberg.
Ernst Brand (1827–1897) of Berlin, Raljen and Theodor von Jürgensen of Kiel, and Karl Liebermeister of Basel, between 1860 and 1870, employed the cooling bath in abdominal typhus with striking results, which led to its introduction to England by Wilson Fox. In the Franco-German War the cooling bath was largely employed, frequently in conjunction with quinine; and it was used in the treatment of hyperpyrexia.
=== Hot-air baths ===
Hydrotherapy, especially as promoted during the height of its Victorian revival, has often been associated with cold water, as evidenced by many titles from that era. However, not all therapists limited their practice of hydrotherapy to cold water, even during the height of this popular revival.
The specific use of heat was often associated with Victorian Turkish baths. Inspired by David Urquhart's travel book, The Pillars of Hercules, and with Urquhart’s help, Dr Richard Barter built the first such bath at his hydropathic establishment near Blarney, Co. Cork, Ireland in 1856. Urquhart built the first bath open to the general public in Manchester the following year, and soon baths were being opened around the whole of the then UK and British Empire. Over 800 such baths were opened in the British Isles between 1856 and the 1970s. Today, only 11 remain open. The Turkish bath became a public institution, and, with the morning tub and the general practice of water drinking, is the most noteworthy of the many contributions by hydropathy to public health.
=== Spread to the United States ===
The first U.S. hydropathic facilities were established by Joel Shew and Russell Thacher Trall in the 1840s. Charles Munde also established early hydrotherapy facilities in the 1850s. Trall also co-edited the Water Cure Journal.
By 1850, it was said that "there are probably more than one hundred" facilities, along with numerous books and periodicals, including the New York Water Cure Journal, which had "attained an extent of circulation equalled by few monthlies in the world". By 1855, there were attempts by some to weigh the evidence of treatments in vogue at that time.
By October 1863, Dr Charles Shepard had added a Victorian Turkish bath, the first in the United States, to his hydropathic Sanitorium in Brooklyn Heights. Two years later, Dr Martin L Holbrook opened the first in Manhattan. They then spread across the country as fast as they did in the British Isles, making a similar impact on hydropathic practice.
Following the introduction of hydrotherapy to the U.S., John Harvey Kellogg employed it at Battle Creek Sanitarium, which opened in 1866, where he strove to improve the scientific foundation for hydrotherapy. Other notable hydropathic centers of the era included the Cleveland Water Cure Establishment, founded in 1848, which operated successfully for two decades, before being sold to an organization which transformed it into an orphanage.
At its height, there were over 200 water-cure establishments in the United States, most located in the northeast. Few of these lasted into the postbellum years, although some survived into the 20th century, including institutions in Scott (Cortland County), Elmira, Clifton Springs and Dansville. While none were in Jefferson County, the Oswego Water Cure operated in the city of Oswego.
=== Subsequent developments ===
In November 1881, the British Medical Journal noted that hydropathy was a specific instance, or "particular case", of general principles of thermodynamics. That is, "the application of heat and cold in general", as it applies to physiology, mediated by hydropathy. In 1883, another writer stated "Not, be it observed, that hydropathy is a water treatment after all, but that water is the medium for the application of heat and cold to the body".
Hydrotherapy was used to treat people with mental illness in the 19th and 20th centuries, and before World War II various forms of hydrotherapy were used to treat alcoholism. The basic text of the Alcoholics Anonymous fellowship, Alcoholics Anonymous, reports that A.A. co-founder Bill Wilson was treated with hydrotherapy for his alcoholism in the early 1930s.
=== Recent techniques ===
A subset of cryotherapy involves cold water immersion or ice baths, used by physical therapists, sports medicine facilities, and rehab clinics. Proponents assert that it improves the return of blood, and of the byproducts of cellular breakdown, to the lymphatic system, allowing more efficient recycling.
Alternating the temperatures, either in a shower or complementary tanks, combines hot and cold in the same session. Proponents claim improvement in the circulatory system and lymphatic drainage. Experimental evidence suggests that contrast hydrotherapy helps to reduce injury in the acute stages by stimulating blood flow and reducing swelling.
== Society and culture ==
The growth of hydrotherapy and various forms of hydropathic establishments resulted in a form of tourism, both in the UK, and in Europe. At least one book listed English, Scottish, Irish and European establishments suitable for each specific malady, while another focused primarily on German spas and hydropathic establishments, but including other areas. While many bathing establishments were open all year round, doctors advised patients not to go before May, "nor to remain after October. English visitors rather prefer cold weather, and they often arrive for the baths in May and return in September. Americans come during the whole season, but prefer summer. The most fashionable and crowded time is during July and August". In Europe, interest in various forms of hydrotherapy and spa tourism continued unabated through the 19th century and into the 20th century, where "in France, Italy and Germany, several million people spend time each year at a spa." In 1891, when Mark Twain toured Europe and discovered that a bath of spring water at Aix-les-Bains soothed his rheumatism, he described the experience as "so enjoyable that if I hadn't had a disease I would have borrowed one just to have a pretext for going on".
This was not the first time such forms of spa tourism had been popular in Europe and the U.K. Indeed,
in Europe, the application of water in the treatment of fevers and other maladies had, since the seventeenth century, been consistently promoted by a number of medical writers. In the eighteenth century, taking to the waters became a fashionable pastime for the wealthy classes who decamped to resorts around Britain and Europe to cure the ills of over-consumption. In the main, treatment in the heyday of the British spa consisted of sense and sociability: promenading, bathing, and the repetitive quaffing of foul-tasting mineral waters.
A hydropathic establishment is a place where people receive hydropathic treatment. They are commonly built in spa towns, where mineral-rich or hot water occurs naturally.
Several hydropathic institutions wholly transferred their operations away from therapeutic purposes to become tourist hotels in the late 20th century while retaining the name 'Hydro'. There are several prominent examples in Scotland at Crieff, Peebles and Seamill amongst others.
== Animal hydrotherapy ==
Canine hydrotherapy is a form of hydrotherapy directed at the treatment of chronic conditions, post-operative recovery, and pre-operative or general fitness in dogs.
== See also ==
== Notes ==
== References ==
== Further reading ==
Abbott, George Knapp (2007). Elements of Hydrotherapy for Nurses. Brushton, New York: Teach Services. ISBN 978-1-57258-521-8.
Campion, Margaret Reid, ed. (2001). Hydrotherapy: Principles and Practice. Woburn, Massachusetts: Butterworth-Heineman. ISBN 0-7506-2261-X.
Cayleff, Susan E (1991). Wash and Be Healed: The Water-Cure Movement and Women's Health. Philadelphia: Temple University Press. ISBN 0-87722-859-0.
Dail, Clarence; Thomas, Charles (1989). Hydrotherapy: Simple Treatments for Common Ailments. Brushton, New York: Teach Services. ISBN 0-945383-08-8.
Grüber, C; Riesberg, A; et al. (March 2003). "The effect of hydrotherapy on the incidence of common cold episodes in children: A randomised clinical trial". European Journal of Pediatrics. 162 (3): 168–76. doi:10.1007/s00431-002-1138-y. PMID 12655421. S2CID 20497073.
Landewé, Rb; Peeters, R; et al. (January 1992). "No difference in effectiveness measured between treatment in a thermal bath and in an exercise bath in patients with rheumatoid arthritis". Nederlands Tijdschrift voor Geneeskunde. 136 (4): 173–6. PMID 1736128.
Sinclair, Marybetts (2008). Modern Hydrotherapy for the Massage Therapist. Philadelphia: Wolters Kluwer/Lippincott Williams & Wilkins. ISBN 978-0-7817-9209-7.
Thrash, Agatha; Thrash, Calvin (1981). Home Remedies: Hydrotherapy, Massage, Charcoal and Other Simple Treatments. Seale, Alabama: Thrash Publications. ISBN 0-942658-02-7. | Wikipedia/Hydrotherapy |
Systemic therapy is a type of psychotherapy that addresses people in relationships, dealing with the interactions of groups and with their interactional patterns and dynamics.
Early forms of systemic therapy were based on cybernetics and systems theory. Systemic therapy practically addresses stagnant behavior patterns within living systems without analyzing their cause. The therapist's role is to introduce creative "nudges" to help systems change themselves. This approach is increasingly applied in various fields like business, education, politics, psychiatry, social work, and family medicine.
== History ==
Systemic therapy has its roots in family therapy, or more precisely family systems therapy as it later came to be known. In particular, systemic therapy traces its roots to the Milan school of Mara Selvini Palazzoli, but also derives from the work of Salvador Minuchin, Murray Bowen, Ivan Boszormenyi-Nagy, as well as Virginia Satir and Jay Haley from the Mental Research Institute (MRI) in Palo Alto. These early schools of family therapy represented therapeutic adaptations of the larger interdisciplinary field of systems theory, which originated in the fields of biology and physiology.
Systemic family therapy developed from Murray Bowen's theory and from the research he conducted at the National Institute of Mental Health (NIMH) from the late 1940s to the early 1950s. In the research project, families lived on the research ward for extended periods while Bowen and his staff conducted extensive observational research on each family's interactions. Bowen's theory of systemic family therapy comprised eight concepts: "Triangles", "Differentiation of Self", "Nuclear Family Emotional Process", "Family Projection Process", "Multigenerational Transmission Process", "Emotional Cutoff", "Sibling Position", and "Societal Emotional Process". In the late 1960s he introduced the theory of family systems, which was based on the structure and behavior of the family's relationship system, as opposed to traditional individual therapy. Bowen researched the family patterns of people with schizophrenia who were receiving treatment, as well as the patterns of his own family of origin, viewing families as complex systems. The number of elements in a system and how they are organized determine how complex the system is. Such a system requires control and feedback mechanisms, which is where cybernetics comes into play. The mathematician Norbert Wiener coined the term cybernetics to refer to the study of automatic control systems. Gregory Bateson contributed the idea that the family is a system governed by cybernetic principles. From these principles emerged systemic theory, which examines how individuals interact with each other, their connections to others, their patterns, and their relationships.
Early forms of systemic therapy were based on cybernetics. In the 1970s this understanding of systems theory was central to the structural (Minuchin) and strategic (Haley, Selvini Palazzoli) schools of family therapy which would later develop into systemic therapy. In the light of postmodern critique, the notion that one could control systems or say objectively "what is" came increasingly into question. Based largely on the work of anthropologists Gregory Bateson and Margaret Mead, this resulted in a shift towards what is known as "second-order cybernetics" which acknowledges the influence of the subjective observer in any study, essentially applying the principles of cybernetics to cybernetics – examining the examination.
As a result, the focus of systemic therapy (ca. 1980 and forward) has moved away from a modernist model of linear causality and understanding of reality as objective, to a postmodern understanding of reality as socially and linguistically constructed.
== Practical application ==
Systemic therapy approaches problems practically rather than analytically. It seeks to identify stagnant patterns of behavior within a living system, such as a family or other group of people, and then addresses those patterns directly, without analyzing their cause. Systemic therapy does not attempt to determine past causes, such as subconscious impulses or childhood trauma, or to diagnose. Thus, it differs from psychoanalytic and psychodynamic forms of family therapy (for example, the work of Horst-Eberhard Richter).
Systemic therapies are increasingly used in personal and professional settings, and there is also evidence that they benefit children with mental disorders. Evidence supports the implementation of systemic therapy among younger children who struggle with behavioral disorders that affect mood and learning abilities (Retzlaff et al., 2013). For those with mood disorders, the approach of reframing daily struggles helps ground them in the practicalities of their situations. Those receiving help from systemic therapies are encouraged to focus on the realities of their daily lives, and the approach offers a pragmatic perspective on problem-solving skills.
When approaching systemic therapy, a multitude of factors are considered in order to reach the desired results. The approach is determined on a case-by-case basis, taking into account factors such as mental disorders, the adolescent's upbringing, situational life events, stress induced by societal factors, and unconventional family dynamics (Lorås, 2017). The methodology of systemic therapy amalgamates these various data points to determine which approach might be best to implement for the individual. All contributing stress factors of the individual's reality are considered during the development of the grounded theory analysis, in order to best aid the individual in need.
Although systemic therapy does not attempt to determine past causes, it is important to recognize that systemic therapy is used in family therapy, where it is also known as "systemic family therapy". These practices are often used with families or children dealing with drug abuse, behavior problems, chronic illness, and many other issues (Cottrell & Boston, 2002). These are some of the ways systemic therapy has been utilized in mental health institutions, where it continues to be practiced with patients.
A key point of this postmodern perspective is not a denial of absolutes. Instead, the therapist recognises that they do not hold the capacity to change people or systems. Their role is to introduce creative "nudges" which help systems to change themselves.
A study by Eugene K. Epstein supports the idea that a therapist does not hold the capacity to change people or systems. Epstein argues that although we cannot change systems, we can influence them. Part of postmodernism relies on our self-agency, our cultures, our practices, and so on (Epstein, 2016); these views and cultural biases therefore affect and influence the approach to therapy, in this instance systemic therapy. Therapists practicing systemic therapy can analyze and recognize patterns of emotion. People often feel constrained in, or confused about, what they are feeling; clarifying and understanding one's emotions can lead to positive change (Bertrando & Arcelloni, 2014). In this way, systemic therapy also helps exercise emotional interpretation.
There are various techniques within systemic therapy. One is structural family therapy, in which structural family therapists intervene to move families with complex dynamics toward well-functioning family structures. Techniques advised for practice include confronting unclear family boundaries and reestablishing the family structure by shifting the family's composition, for instance pairing family members opposite one another. These are a few procedures that are believed to restore a healthy balance of roles.
The intended outcome of this form of therapy is to draw family members closer to the ideal model, so the proper approach is to use guidance and recommendations, which therapists consider among the most effective techniques. The therapist delivers this technique through oral communication: they begin by asking a series of questions that demonstrate characteristics of authority and establish how an individual responds to a situation or set of routines. The therapist then presents the individuals with a scenario that will help them better navigate an upcoming conflict that may arise. This allows family members to engage in discussion and offer possible resolutions.
There is additional evidence of positive outcomes from systemic intervention in families of children with distinct difficulties, whether through family therapy or other family-oriented techniques. For instance, family-oriented interventions have demonstrated positive results regarding infants' sleeping issues.
Family-oriented approaches have also been discussed as a proper remedy for night-waking issues, the most common sleep problems presented during infancy. In these techniques, parents are advised on how to minimize their infant's afternoon naps, to construct effective nighttime practices, and to eliminate parent-infant interaction during the nighttime sleeping cycle. A sleeping schedule also helped minimize the sudden awakening of infants.
The final result indicated that the systemic approach helped reduce awakening in infants and had a positive effect on their sleeping issues.
Another technique within systemic therapy is conceptualization, which allows the therapist to place the patient's symptoms in context and to look at how the patient's experiences create patterns with other individuals or family members. These forms of systemic therapy help people of any age group resolve their issues, including anger management, substance addictions, relationship problems, mood disorders, and more. Human interactions are connected to emotions and can in turn extend to social or cultural interventions. Evidence supports that systemic interventions have a positive effect on infants and on certain emotional problems they may have, such as behavioral issues.
Systemic therapy neither attempts a 'treatment of causes' nor of symptoms; rather it gives living systems nudges that help them to develop new patterns together, taking on a new organizational structure that allows growth.
While family systems therapy addresses only families, systemic therapy, in a similar fashion to systemic hypothesising, addresses other systems as well. The systemic approach is increasingly used in business, education, politics, psychiatry, social work, and family medicine.
== See also ==
List of therapies
Systems theory
Family therapy
Systemic coaching
Systems psychology
== References == | Wikipedia/Systemic_therapy_(psychotherapy) |
A physical therapy practice act is a statute defining the scope and practice of physical therapy within the jurisdiction, outlining licensing requirements for Physical Therapists and Physical Therapist Assistants, and establishing penalties for violations of the law. In the United States, each state enacts its own practice act, resulting in some variation among the states, though the Federation of State Boards of Physical Therapy (FSBPT) has drafted a model definition in order to limit this variation.
== Model definition of physical therapy for state practice acts ==
In 1997, the Federation of State Boards of Physical Therapy (FSBPT) published The Model Practice Act for Physical Therapy: A Tool for Public Protection and Legislative Change as a model framework to serve as a basis for inter-jurisdictional consistency among state physical therapy practice acts. The FSBPT published the fifth edition of this document in 2011. According to the introduction to the fifth edition, "since 1997 many states have enacted large portions and, in some instances, nearly the entire Model Practice Act as their jurisdiction statute."
== List of practice acts ==
== References ==
== External links ==
APTA List of PT practice acts by state
FSBPT List of PT licensing authorities | Wikipedia/Physical_therapy_practice_act |
Music therapy, an allied health profession, "is the clinical and evidence-based use of music interventions to accomplish individualized goals within a therapeutic relationship by a credentialed professional who has completed an approved music therapy program." It is also a vocation, involving a deep commitment to music and the desire to use it as a medium to help others. Although music therapy has only been established as a profession relatively recently, the connection between music and therapy is not new.
Music therapy is a broad field. Music therapists use music-based experiences to address client needs in one or more domains of human functioning: cognitive, academic, emotional/psychological, behavioral, communication, social, physiological (sensory, motor, pain, neurological and other physical systems), spiritual, and aesthetic. Music experiences are strategically designed to use the elements of music for therapeutic effects, including melody, harmony, key, mode, meter, rhythm, pitch/range, duration, timbre, form, texture, and instrumentation.
Some common music therapy practices include developmental work (communication, motor skills, etc.) with individuals with special needs, songwriting and listening in reminiscence, orientation work with the elderly, processing and relaxation work, and rhythmic entrainment for physical rehabilitation in stroke survivors. Music therapy is used in medical hospitals, cancer centers, schools, alcohol and drug recovery programs, psychiatric hospitals, nursing homes, and correctional facilities.
Music therapy is distinctive from musopathy, which relies on a more generic and non-cultural approach based on neural, physical, and other responses to the fundamental aspects of sound.
Music therapy might also incorporate practices from sound healing, also known as sound immersion or sound therapy, which focuses on sound rather than song. Sound healing describes the use of vibrations and frequencies for relaxation, meditation, and other claimed healing benefits. Unlike music therapy, sound healing is unregulated and an alternative therapy.
Music therapy aims to provide physical and mental benefit. Music therapists use their techniques to help their patients in many areas, ranging from stress relief before and after surgeries to neuropathologies such as Alzheimer's disease. Studies on people diagnosed with mental health disorders such as anxiety, depression, and schizophrenia have associated some improvements in mental health after music therapy. The National Institute for Health and Care Excellence (NICE) have claimed that music therapy is an effective method in helping people experiencing mental health issues, and more should be done to offer those in need this type of help.
== Uses ==
=== Children and adolescents ===
Music therapy may be suggested for adolescent populations to help manage disorders usually diagnosed in adolescence, such as mood/anxiety disorders and eating disorders, or inappropriate behaviors, including suicide attempts, withdrawal from family, social isolation from peers, aggression, running away, and substance abuse. Goals in treating adolescents with music therapy, especially for those at high risk, often include increased recognition and awareness of emotions and moods, improved decision-making skills, opportunities for creative self-expression, decreased anxiety, increased self-confidence, improved self-esteem, and better listening skills.
There is some evidence that, when combined with other types of rehabilitation, music therapy may contribute to the success rate of sensorimotor, cognitive, and communicative rehabilitation. For children and adolescents with major depressive or anxiety disorders, there is moderate to low quality evidence that music therapy added to the standard treatment may reduce internalizing symptoms and may be more effective than treatment as usual (without music therapy).
==== Methods ====
Among adolescents, group meetings and individual sessions are the main methods for music therapy. Both methods may include listening to music, discussing moods and emotions in relation to music, analyzing the meanings of specific songs, writing lyrics, composing or performing music, and musical improvisation.
Private individual sessions can provide personal attention and are most effective when using music preferred by the patient. Using music that adolescents can relate to or connect with can help adolescent patients view the therapist as safe and trustworthy, and to engage in therapy with less resistance. Music therapy conducted in groups allows adolescent individuals to feel a sense of belonging, express their opinions, learn how to socialize and verbalize appropriately with peers, improve compromising skills, and develop tolerance and empathy. Group sessions that emphasize cooperation and cohesion can be effective in working with adolescents.
Music therapy intervention programs typically include about 18 sessions of treatment. The achievement of a physical rehabilitation goal relies on the child's existing motivation and feelings towards music and their commitment to engage in meaningful, rewarding efforts. Regaining full functioning also depends on the prognosis of recovery, the condition of the client, and the environmental resources available. Both techniques use systematic processes in which the therapists assist the client through musical experiences and connections that collaborate as a dynamic force of change toward rehabilitation.
==== Assessment ====
Assessment includes obtaining a full medical history, assessing current non-musical functioning (social, physical/motor, emotional, etc.) and goals, assessing musical functioning (the ability to duplicate a melody or identify changes in rhythm, etc.), and determining the potential for music therapy to be effective in addressing those goals.
=== Premature infants ===
Premature infants are those born at 37 weeks after conception or earlier. They are subject to numerous health risks, such as abnormal breathing patterns, decreased body fat and muscle tissue, as well as feeding issues. The coordination of sucking and breathing is often not fully developed, making feeding a challenge. Offering music therapy to premature infants while they are in the neonatal intensive care unit (NICU) aims to mask unwanted auditory stimuli, stimulate infant development, and promote a calm environment for families. While there are no reported adverse effects from music therapy, the evidence supporting music therapy's beneficial effects for infants is weak, as many of the clinical trials that have been performed either had mixed results or were poorly designed. There is no strong evidence to suggest that music therapy improves an infant's oxygen therapy, improves sucking, or improves development when compared to usual care. There is some weaker evidence that music therapy may decrease an infant's heart rate. There is no evidence to indicate that music therapy reduces anxiety in parents of preterm infants in the NICU, nor is there information to establish what type of music therapy may be more beneficial or for how long.
=== Medical disorders ===
Music may both motivate and provide a sense of distraction. Rhythmic stimuli have been found to help with balance training for those with a brain injury.
Singing is a form of rehabilitation for neurological impairments. Neurological impairments following a brain injury can take the form of apraxia (loss of the ability to perform purposeful movements), dysarthria, muscle control disturbances (due to damage of the central nervous system), aphasia (a defect in expression causing distorted speech), or impaired language comprehension. Patients with schizophrenia have been shown to perceive major and minor chords differently. Singing training has been found to improve lung function, speech clarity, and coordination of the speech muscles, thus accelerating rehabilitation of such neurological impairments. For example, melodic intonation therapy is the practice of communicating with others by singing to enhance speech or increase speech production by promoting socialization and emotional expression.
==== Autism ====
Music may help people with autism hone their motor and attention skills as well as healthy neurodevelopment of socio-communication and interaction skills. Music therapy may also contribute to improved selective attention, speech production, and language processing and acquisition in people with autism.
Music therapy may benefit the family as a whole. Some family members of children with autism claim that music therapy sessions have allowed their child to interact more with the family and the world. Music therapy is also beneficial in that it gives children an outlet to use outside of the sessions. Some children after participating in music therapy may want to keep making music long after the sessions end.
==== Heart disease ====
Listening to music may improve heart rate, respiratory rate, and blood pressure in those with coronary heart disease (CHD).
==== Stroke ====
Music may be a useful tool in the recovery of motor skills.
==== Dementia ====
As with many of the other disorders mentioned, some of the most significant effects of dementia are seen in social behaviors, and music therapy can lead to improvements in interaction, conversation, and other such skills. A study of over 330 subjects showed that music therapy produces highly significant improvements in social behaviors and in overt behaviors like wandering and restlessness, reductions in agitated behaviors, and improvements in cognitive deficits, measured with reality orientation and face recognition tests. The effectiveness of the treatment seems to be strongly dependent on the patient and on the quality and length of treatment.
A meta-study examined the proposed neurological mechanisms behind music therapy's effects on these patients. Many authors suspect that music has a soothing effect on the patient by affecting how noise is perceived: music renders noise familiar, or buffers the patient from overwhelming or extraneous noise in their environment. Others suggest that music serves as a sort of mediator for social interactions, providing a vessel through which to interact with others without requiring much cognitive load.
==== Aphasia ====
Broca's aphasia, or non-fluent aphasia, is a language disorder caused by damage to Broca's area and surrounding regions in the left frontal lobe. Those with non-fluent aphasia are able to understand language fairly well, but they struggle with language production and syntax.
Neurologist Oliver Sacks studied neurological oddities in people, trying to understand how the brain works. He concluded that people with some type of frontal lobe damage often "produced not only severe difficulties with expressive language (aphasia) but a strange access of musicality with incessant whistling, singing and a passionate interest in music. For him, this was an example of normally suppressed brain functions being released by damage to others". Sacks had a genuine interest in trying to help people affected by neurological disorders and other phenomena associated with music, exploring how music can provide access to otherwise unreachable emotional states, revivify neurological avenues that have been frozen, evoke memories of earlier, lost events or states of being, and bring those with neurological disorders back to a time when the world was much richer for them. He was a firm believer that music has the power to heal.
Melodic intonation therapy (MIT), developed in 1973 by neurological researchers Sparks, Helm and Albert, is a method used by music therapists and speech–language pathologists to help people with communication disorders caused by damage to the left hemisphere of the brain by engaging the singing abilities and possibly engaging language-capable regions in the undamaged right hemisphere.
While unable to speak fluently, patients with non-fluent aphasia are often able to sing words, phrases, and even sentences they cannot express otherwise. MIT harnesses the singing ability of patients with non-fluent aphasia as a means to improve their communication. Although its exact nature depends on the therapist, in general MIT relies on the use of intonation (the rising and falling of the voice) and rhythm (beat/speed) to train patients to produce phrases verbally. In MIT, common words and phrases are turned into melodic phrases, generally starting with two-step sing-song patterns and eventually emulating typical speech intonation and rhythmic patterns. A therapist will usually begin by introducing an intonation to their patient through humming. They will accompany this humming with a rhythm produced by the tapping of the left hand. At the same time, the therapist will introduce a visual stimulus of the written phrase to be learned. The therapist then sings the phrase with the patient, and ideally the patient is eventually able to sing the phrase on their own. With much repetition and through a process of "inner-rehearsal" (practicing internally hearing one's voice singing), a patient may eventually be able to produce the phrase verbally without singing. As the patient advances in therapy, the procedure can be adapted to give them more autonomy and to teach them more complex phrases. Through the use of MIT, a non-fluent aphasic patient can be taught numerous phrases that help them communicate and function during daily life.
The mechanisms of this success are yet to be fully understood. It is commonly agreed that while speech is lateralized mostly to the left hemisphere (for right-handed and most left-handed individuals), some speech functionality is also distributed in the right hemisphere. MIT is thought to stimulate these right language areas through the activation of music processing areas, also in the right hemisphere. Similarly, the rhythmic tapping of the left hand stimulates the right sensorimotor cortex to further engage the right hemisphere in language production. Overall, by stimulating the right hemisphere during language tasks, therapists hope to decrease dependence on the left hemisphere for language production.
While results are somewhat contradictory, studies have in fact found increased right hemispheric activation in non-fluent aphasic patients after MIT. This change in activation has been interpreted as evidence of decreased dependence on the left hemisphere. There is debate, however, as to whether changes in right hemispheric activation are part of the therapeutic process during/after MIT, or are simply a side effect of non-fluent aphasia. In hopes of making MIT more effective, researchers are continually studying the mechanisms of MIT and non-fluent aphasia.
==== Cancer ====
There is tentative evidence that music interventions led by a trained music therapist may have positive effects on psychological and physical outcomes in adults with cancer. The effectiveness of music therapy for children with cancer is not known.
=== Mental health ===
There is weak evidence to suggest that people with schizophrenia may benefit from the addition of music therapy to their other standard treatment regimen. Potential improvements include decreased aggression, fewer hallucinations and delusions, and better social functioning and quality of life for people with schizophrenia or schizophrenia-like disorders. In addition, moderate-to-low-quality evidence suggests that music therapy as an addition to standard care improves the global state and the mental state (including negative and general symptoms). Further research using standardized music therapy programs and consistent monitoring protocols is necessary to understand the effectiveness of this approach for adults with schizophrenia. Music therapy may be a useful tool for helping treat people with post-traumatic stress disorder; however, more rigorous empirical study is required.
For adults with depressive symptoms, there is some weak evidence to suggest that music therapy may help reduce symptoms, with recreative music therapy and guided imagery and music appearing superior to other methods in reducing depressive symptoms.
In music therapy for adults, "music medicine" refers to listening to prerecorded music, which is treated like a medicine. Music therapy also uses receptive methods, such as "music-assisted relaxation" and imagery connected to the music.
There is some discussion of how musical activities facilitate change in mental wellness. Scholars have proposed a six-dimensional framework comprising emotional, psychological, social, cognitive, behavioral, and spiritual aspects. Through interview sessions with mental health service users (with mood disorders, anxiety disorders, schizophrenia, and other psychotic disorders), their study showed the relevance of the six-dimensional framework.
==== Impact on general mental health ====
Music therapy has been used to help bring improvements in mental health among people of all age groups. It has been used as far back as the 1830s. One example of a mental hospital that used music therapy to aid in the healing process of its patients is the Hanwell Lunatic Asylum. This mental hospital provided "music and movement sessions and musical performances" as well as "group and individual music therapy for patients with serious mental illness or emotional problems." Two main categories of music therapy were used in this setting: analytic music therapy and Nordoff-Robbins music therapy. Analytic music therapy involves both words and music, while Nordoff-Robbins music therapy places great emphasis on assessing how clients react to music therapy and on how the use of this type of therapy can be continually altered and shifted to benefit the client the most.
=== Bereavement ===
The DSM-IV TR (Diagnostic and Statistical Manual of Mental Disorders) lists bereavement as a mental health diagnosis when the focus of clinical attention is related to the loss of a loved one and when symptoms of Major Depressive Disorder are present for up to two months. Music therapy models have been found to be successful in treating grief and bereavement (Rosner, Kruse & Hagl, 2010). In many countries, including the United States, music therapists do not diagnose; therefore, diagnosing a bereavement-related disorder would not be within their scope of practice.
==== Grief treatment for adolescents ====
Grief treatment is very valuable within the adolescent age group. Just as adults and the elderly struggle with grief from loss, relationship issues, job-related stress, and financial issues, adolescents also experience grief from disappointments that occur early in life, however different these disappointing life events may be. For example, many people of adolescent age experience life-altering events such as parental divorce, trauma from emotional or physical abuse, struggles within school, and loss. If this grief is not acted upon early through some kind of therapy, it can alter the entire course of an adolescent's life. One particular study on the impact of music therapy on grief management within adolescents used songwriting to allow the adolescents to express what they were feeling through lyrics and instrumentals. In the article Development of the Grief Process Scale through music therapy songwriting with bereaved adolescents, the results of the study demonstrate that in all of the treatment groups combined, the mean GPS (grief process scale) score decreased by 43%. The use of music therapy songwriting allowed these adolescents to become less overwhelmed with grief and better able to process it, as demonstrated by the decrease in mean GPS score.
=== Empirical evidence ===
Since 2017, providing evidence-based practice has become increasingly important, and music therapy has been continuously critiqued and regulated to provide that desired evidence-based practice. A number of research studies and meta-analyses have been conducted on, or have included, music therapy, and all have found that music therapy has at least some promising effects, especially when used for the treatment of grief and bereavement. The AMTA has largely supported the advancement of music therapy through research that promotes evidence-based practice. Evidence-based health care has been defined as "the conscientious use of current best evidence in making decisions about the care of individual patients or the delivery of health services", where current best evidence is up-to-date information from relevant, valid research about the effects of different forms of health care, the potential for harm from exposure to particular agents, the accuracy of diagnostic tests, and the predictive power of prognostic factors.
Both qualitative and quantitative studies have been completed, and both have provided evidence to support music therapy in the use of bereavement treatment. One study that evaluated a number of treatment approaches found that only music therapy had significant positive outcomes, where the others showed little improvement in participants (Rosner, Kruse & Hagl, 2010). Furthermore, a pilot study, which consisted of an experimental and control group, examined the effects of music therapy on mood and behaviors in the home and school communities. It was found that there was a significant change in grief symptoms and behaviors with the experimental group in the home, but conversely found that there was no significant change in the experimental group in the school community, despite the fact that mean scores on the Depression Self-Rating Index and the Behavior Rating Index decreased (Hilliard, 2001). Yet another study, completed by Russell Hilliard (2007), looked at the effects of Orff-based music therapy and social work groups on childhood grief symptoms and behaviors. Using a control group that consisted of wait-listed clients, and employing the Behavior Rating Index for Children and the Bereavement Group Questionnaire for Parents and Guardians as measurement tools, it was found that children who were in the music therapy group showed significant improvement in grief symptoms and also showed some improvement in behaviors compared to the control group, whereas the social work group also showed significant improvement in both grief and behaviors compared to the control group. The study concludes with support for music therapy as a medium for bereavement groups for children (Hilliard, 2007).
Though research on music therapy has been conducted and its use evaluated, a number of limitations remain in these studies, and further research should be completed before absolute conclusions are drawn, though the results of using music therapy in treatment have consistently been shown to be positive.
Music therapy practice is working together with clients, through music, to promote healthy change (Bruscia, 1998). The American Music Therapy Association (AMTA) has defined the practice of music therapy as "a behavioral science concerned with changing unhealthy behaviors and replacing them with more adaptive ones through the use of musical stimuli".
=== Interventions ===
Though music therapy practice employs a large number of intervention techniques, some of the most commonly used interventions include improvisation, therapeutic singing, therapeutic instrumental music playing, music-facilitated reminiscence and life review, songwriting, music-facilitated relaxation, and lyric analysis. While there has been no conclusive research done on the comparison of interventions (Jones, 2005; Silverman, 2008; Silverman & Marcionetti, 2004), the use of particular interventions is individualized to each client based upon thorough assessment of needs, and the effectiveness of treatment may not rely on the type of intervention (Silverman, 2009).
Improvisation in music therapy allows for clients to make up, or alter, music as they see fit. While improvisation is an intervention in a methodical practice, it does allow for some freedom of expression, which is what it is often used for. Improvisation has several other clinical goals as well, which can also be found on the Improvisation in music therapy page, such as: facilitating verbal and nonverbal communication, self-exploration, creating intimacy, teamwork, developing creativity, and improving cognitive skills. Building on these goals, Botello and Krout designed a cognitive behavioral application to assess and improve communication in couples. Further research is needed before the use of improvisation is conclusively proven to be effective in this application, but there were positive signs in this study of its use.
Singing or playing an instrument is often used to help clients express their thoughts and feelings in a more structured manner than improvisation and can also allow participation with only limited knowledge of music. Singing in a group can facilitate a sense of community and can also be used as group ritual to structure a theme of the group or of treatment (Krout, 2005).
Research that compares types of music therapy intervention has been inconclusive. Music Therapists use lyric analysis in a variety of ways, but typically lyric analysis is used to facilitate dialogue with clients based on the lyrics, which can then lead to discussion that addresses the goals of therapy.
== Types of music therapy ==
Two fundamental types of music therapy are receptive music therapy and active music therapy (also known as expressive music therapy). Active music therapy engages clients or patients in the act of making music, whereas receptive music therapy guides patients or clients in listening or responding to live or recorded music. Either or both can lead to verbal discussions, depending on client needs and the therapist's orientation.
=== Receptive ===
Receptive music therapy involves listening to recorded or live music of genres such as classical, rock, jazz, and/or country. In receptive music therapy, patients are the recipients of the music experience, meaning that they actively listen and respond to the music rather than creating it. During music sessions, patients participate in song discussion and music relaxation, and are given the ability to listen to their preferred music genre. It can improve mood, decrease stress, decrease pain, enhance relaxation, and decrease anxiety; this can help with coping skills. There is also evidence of biochemical changes (e.g., lowered cortisol levels).
=== Active ===
In active music therapy, patients engage in some form of music-making (e.g., vocalizing, rapping, chanting, singing, playing instruments, improvising, song writing, composing, or conducting). Researchers at Baylor Scott & White are studying the effect of harmonica playing on patients with COPD to determine if it helps improve lung function. Another example of active music therapy takes place in a nursing home in Japan: therapists teach the elderly how to play easy-to-use instruments so they can overcome physical difficulties.
== Models and approaches ==
Music therapist Kenneth Bruscia stated: "A model is a comprehensive approach to assessment, treatment, and evaluation that includes theoretical principles, clinical indications and contraindications, goals, methodological guidelines and specifications, and the characteristic use of certain procedural sequences and techniques.": 129 In the literature, the terms model, orientation, or approach might be encountered and may have slightly different meanings. Regardless, music therapists use both psychology models and models specific to music therapy. The theories these models are based on include beliefs about human needs, causes of distress, and how humans grow or heal.
Models developed specifically for music therapy include analytical music therapy,: 230 Benenzon,: 143–144 the Bonny Method of Guided Imagery and Music (GIM),: 230 community music therapy, Nordoff-Robbins music therapy (creative music therapy),: 230 neurologic music therapy, and vocal psychotherapy.
Psychological orientations used in music therapy include psychodynamic, cognitive behavioral, humanistic, existential,: 230 and the biomedical model.
=== The Bonny Method of Guided Imagery and Music ===
To be trained in this method, students are required to be healthcare professionals. Some courses are only open to music therapists and mental health professionals.
Music educator and therapist Helen Lindquist Bonny (1921–2010) developed an approach influenced by humanistic and transpersonal psychological views, known as the Bonny Method of guided imagery in music (BGIM or GIM). Guided imagery refers to a technique used in natural and alternative medicine that involves using mental imagery to help with the physiological and psychological ailments of patients.
The practitioner often suggests a relaxing and focusing image, and through the use of imagination and discussion the patient aims to find constructive solutions to manage their problems. Bonny applied this psychotherapeutic method to the field of music therapy by using music as the means of guiding the patient to a higher state of consciousness where healing and constructive self-awareness can take place. Music is considered a "co-therapist" because of its importance. GIM with children can be used in one-on-one or group settings, and involves relaxation techniques, identification and sharing of personal feeling states, and improvisation to discover the self and foster growth. The music is carefully selected for the client based on their musical preferences and the goals of the session. The piece is usually classical, and it must reflect the age and attention abilities of the child in length and genre. A full explanation of the exercises must be offered at the child's level of understanding.
=== Nordoff-Robbins ===
Paul Nordoff, a Juilliard School graduate and Professor of Music, was a pianist and composer who, upon seeing disabled children respond so positively to music, gave up his academic career to further investigate the possibility of music as a means for therapy. Clive Robbins, a special educator, partnered with Nordoff for more than 17 years in the exploration and research of music's effects on disabled children, first in the UK and then in the United States in the 1950s and 60s. Their pilot projects included placements at care units for autistic children and child psychiatry departments, where they put programs in place for children with mental disorders, emotional disturbances, developmental delays, and other handicaps. Their success at establishing a means of communication and relationship with children with cognitive impairments at the University of Pennsylvania gave rise to the National Institutes of Health's first grant of this nature, and the 5-year study "Music therapy project for psychotic children under seven at the day care unit" involved research, publication, training and treatment. Several publications, including Therapy in Music for Handicapped Children, Creative Music Therapy, Music Therapy in Special Education, as well as instrumental and song books for children, were released during this time. Nordoff and Robbins's success became known globally in the mental health community, and they were invited to share their findings and offer training on an international tour that lasted several years. Funds were granted to support the founding of the Nordoff Robbins Music Therapy Centre in Great Britain in 1974, where a one-year graduate program for students was implemented. In the early eighties, a center was opened in Australia, and various programs and institutes for music therapy were founded in Germany and other countries. In the United States, the Nordoff-Robbins Center for Music Therapy was established at New York University in 1989.
Today, Nordoff-Robbins is a theoretical model and approach to music therapy. The Nordoff-Robbins approach, based on the belief that everyone is capable of finding meaning in and benefiting from musical experience, is now practiced by hundreds of therapists internationally. This approach focuses on treatment through the creation of music by both therapist and client together. The therapist uses various techniques so that even the lowest-functioning individuals can actively participate.
=== Orff ===
Gertrude Orff developed Orff Music Therapy at the Kinderzentrum München. Both the clinical setting of social pediatrics and the Orff Schulwerk (schoolwork) approach in music education (developed by German composer Carl Orff) influence this method, which is used with children with developmental problems, delays, and disabilities. Theodor Hellbrügge developed the area of social pediatrics in Germany after the Second World War. He understood that medicine alone could not meet the complex needs of developmentally disabled children. Hellbrügge consulted psychologists, occupational therapists and other mental healthcare professionals whose knowledge and skills could aid in the diagnostics and treatment of children. Gertrude Orff was asked to develop a form of therapy based on the Orff Schulwerk approach to support the emotional development of patients. Elements found in both the music therapy and education approaches include the understanding of holistic music presentation as involving word, sound and movement, the use of both music and play improvisation as providing a creative stimulus for the child to investigate and explore, Orff instrumentation, including keyboard instruments and percussion instruments as a means of participation and interaction in a therapeutic setting, and the multisensory aspects of music used by the therapist to meet the particular needs of the child, such as both feeling and hearing sound.
Corresponding with the attitudes of humanistic psychology, the developmental potential of the child, as in the acknowledgement of their strengths as well as their handicaps, and the importance of the therapist-child relationship, are central factors in Orff music therapy. The strong emphasis on social integration and the involvement of parents in the therapeutic process found in social pediatrics also influence theoretical foundations. Knowledge of developmental psychology puts into perspective how developmental disabilities influence the child, as do their social and familial environments. The basis for interaction in this method is known as responsive interaction, in which the therapist meets the child at their level and responds according to their initiatives, combining both humanistic and developmental psychology philosophies. Involving the parents in this type of interaction by having them participate directly or observe the therapist's techniques equips the parents with ideas of how to interact appropriately with their child, thus fostering a positive parent-child relationship.
=== Liberation Music Therapy ===
Liberation Music Therapy (LMT) is an emancipatory approach to music-making that integrates healing, social justice, and revolutionary change. Rooted in the principles of liberation psychology and influenced by the global history of music's role in communal and spiritual practices, LMT challenges traditional, colonialist frameworks of mental health care. It emphasizes addressing systemic oppression and transgenerational trauma through culturally relevant music practices, particularly within marginalized communities. LMT practitioners view music not only as a therapeutic tool but as a form of activism and resistance, fostering solidarity, critical consciousness (concientización), and community empowerment.
This approach combines music's therapeutic qualities with its capacity for social and political transformation, drawing on a variety of influences, including folk traditions, hip-hop, drumming, and chanting, alongside modern and classical genres. Through methods such as lyric analysis, improvisation, and collective musicking, LMT bridges personal emotional experiences with broader societal struggles, engaging individuals and communities in processes of healing and liberation. Practitioners work collaboratively, meeting communities where they are and respecting their cultural genius, with the ultimate aim of fostering both individual well-being and collective resilience.
== Cultural aspects ==
Through the ages music has been an integral component of rituals, ceremonies, healing practices, and spiritual and cultural traditions. Further, Michael Bakan, author of World Music: Traditions and Transformations, states that "Music is a mode of cultural production and can reveal much about how the culture works," something ethnomusicologists study.
=== Cultural considerations in music therapy services, education, and research ===
The 21st century is a culturally pluralistic world. In some countries, such as the United States, an individual may have multiple cultural identities that are quite different from the music therapist's. These include race; ethnicity, culture, and/or heritage; religion; sex; ability/disability; education; or socioeconomic status. Music therapists strive to achieve multicultural competence through a lifelong journey of formal and informal education and self-reflection. Multicultural therapy "uses modalities and defines goals consistent with the life experiences and cultural values of clients": 6 rather than basing therapy on the therapist's worldview or the dominant culture's norms.
Empathy in general is an important aspect of any mental health practitioner, and the same is true for music therapists, as is multicultural awareness. The added complexity that music brings to cultural empathy creates both greater risk and greater potential for providing exceptionally culturally sensitive therapy (Valentino, 2006). Extensive knowledge of a culture is needed to provide this effective treatment, as providing culturally sensitive music therapy goes beyond knowing the language of speech, the country, or even some background about the culture. Simply choosing music from the same country of origin or with the same spoken language is not effective for providing music therapy, as music genres vary, as do the messages each piece of music sends. Also, different cultures view and use music in ways that may not match how the therapist views and uses music. Melody Schwantes and her colleagues wrote an article that describes the effective use of the Mexican "corrido" in a bereavement group of Mexican migrant farm workers (Schwantes, Wigram, Lipscomb & Richards, 2011). This support group was dealing with the loss of two coworkers after an accident, and so the group used the corrido, a song form traditionally used for telling stories of the deceased. An important element also mentioned was that songwriting has been shown to be a large cultural artifact in many cultures, and that there are many subtle messages and thoughts provided in songs that would otherwise be hard to identify. Lastly, the authors of this study stated that "Given the position and importance of songs in all cultures, the example in this therapeutic process demonstrates the powerful nature of lyrics and music to contain and express difficult and often unspoken feelings" (Schwantes et al., 2011).
== Usage by region ==
=== African continent ===
In 1999, the first program for music therapy in Africa opened in Pretoria, South Africa. Research has shown that in Tanzania patients can receive palliative care for life-threatening illnesses directly after the diagnosis of these illnesses. This is different from many Western countries, because they reserve palliative care for patients who have an incurable illness. Music is also viewed differently between Africa and Western countries. In Western countries and a majority of other countries throughout the world, music is traditionally seen as entertainment whereas in many African cultures, music is used in recounting stories, celebrating life events, or sending messages.
=== Australia ===
==== Music for healing in ancient times ====
One of the first groups known to heal with sound were the Aboriginal people of Australia. The modern name of their healing tool is the didgeridoo, but it was originally called the yidaki. The yidaki produced sounds that are similar to the sound healing techniques used in modern day. The sound of the didgeridoo produces a low bass frequency, and the tool was believed to have assisted in healing "broken bones, muscle tears and illnesses of every kind" for at least 40,000 years.
However, there are no reliable sources stating the didgeridoo's exact age. Archaeological studies of rock art in Northern Australia suggest that the people of the Kakadu region of the Northern Territory have been using the didgeridoo for less than 1,000 years, based on the dating of paintings on cave walls and shelters from this period. A clear rock painting in Ginga Wardelirrhmeng, on the northern edge of the Arnhem Land plateau, from the freshwater period (which began 1,500 years ago) shows a didgeridoo player and two songmen participating in an Ubarr ceremony.
==== In modern times – an allied health profession ====
In 1949, music therapy in Australia (not clinical music therapy as understood today) was started through concerts organized by the Australian Red Cross, along with a Red Cross Music Therapy Committee. The key Australian body, the Australian Music Therapy Association (AMTA), was founded in 1975.
=== Canada ===
==== History: c. 1940 – present ====
For earlier history related to Western traditions, see § Western cultures sub-section.
In 1956, Fran Herman, one of Canada's music therapy pioneers, began a 'remedial music' program at the Home For Incurable Children, now known as the Holland Bloorview Kids Rehabilitation Hospital, in Toronto. Her group 'The Wheelchair Players' continued until 1964, and is considered to be the first music therapy group project in Canada. Its production "The Emperor's Nightingale" was the subject of a documentary film.
Composer/pianist Alfred Rosé, a professor at the University of Western Ontario, also pioneered the use of music therapy in London, Ontario, at Westminster Hospital in 1952 and at the London Psychiatric Hospital in 1956.
Two other music therapy programs were initiated during the 1950s; one by Norma Sharpe at St. Thomas Psychiatric Hospital in St. Thomas, Ontario, and the other by Thérèse Pageau at the Hôpital St-Jean-de-Dieu (now Hôpital Louis-Hippolyte Lafontaine) in Montreal.
A conference in August 1974, organized by Norma Sharpe and six other music therapists, led to the founding of the Canadian Music Therapy Association, which was later renamed the Canadian Association for Music Therapy (CAMT). As of 2009, the organization had more than 500 members.
Canada's first music therapy training program was founded in 1976, at Capilano College (now Capilano University) in North Vancouver, by Nancy McMaster and Carolyn Kenny.
=== China ===
The relationship between music therapy and health has long been documented in ancient China.
It is said that in ancient times the most skilled practitioners of traditional Chinese medicine used neither acupuncture nor herbal medicine but music: by the end of a song, the illness would recede and the patient would be at ease. As early as the Spring and Autumn and Warring States periods, the Yellow Emperor's Canon of Internal Medicine held that the five tones (gong, shang, jue, zhi, and yu, rendered literally as palace, shang, horn, emblem, and feather) corresponded to the five elements (metal, wood, water, fire, and earth) and were associated with five basic emotions (joy, anger, worry, thought, and fear), known as the five zhi. Music in the different modes was used accordingly to target different diseases.
More than 2000 years ago, the book Yue Ji also talked about the important role of music in regulating life harmony and improving health; "Zuo Zhuan" recorded the famous doctors of the state of Qin and the discussion that music can prevent and treat diseases: "there are six or seven days, the hair is colorless, the emblem is five colors, and sex produces six diseases." It is emphasized that silence should be controlled and appropriate in order to have a beneficial regulating effect on the human body; The book "the soul and the body flow, the spirit also flows"; Zhang Jingyue and Xu Lingtai, famous medical experts in the Ming and Qing Dynasties, also specially discussed phonology and medicine in the "classics with wings" and "Yuefu Chuansheng".
For example, Liu Xueyu, one of the emperors of the Tang Dynasty, cured some stubborn diseases through the records of music in the Tang Dynasty.
Chinese contemporary music therapy began in the 1980s. In 1984, Professor Zhang Boyuan of the Department of Psychology at Peking University published an experimental report on the physical and psychological effects of music, the first published scientific research article on music therapy in China. In 1986, Professor Gao Tian of the Beijing Conservatory of Music published his paper "Research on the relieving effect of music on pain".
In 1989, the Chinese Music Therapy Association was officially established. In 1994, Pu Kaiyuan published his monograph Music Therapy. In 1995, He Huajun and Lu Tingzhu published a monograph of the same title. In 2000, Zhang Hongyi edited and published Fundamentals of Music Therapy. In 2002, Fan Xinsheng edited and published Music Therapy. In 2007, Gao Tian edited and published The Basic Theory of Music Therapy.
In short, Chinese music therapy has made rapid progress in theoretical research, literature review and clinical research. In addition, music therapy methods guided by ancient Chinese music therapy theory and the long tradition of traditional Chinese medicine have attracted worldwide attention, and the prospects for Chinese music therapy are broad.
=== Germany ===
The German Music Therapy Society defines music therapy as the "targeted use of music as part of a therapeutic relationship to restore, maintain and promote mental, physical and cognitive health [Musiktherapie ist der gezielte Einsatz von Musik im Rahmen der therapeutischen Beziehung zur Wiederherstellung, Erhaltung und Förderung seelischer, körperlicher und geistiger Gesundheit]."
=== India ===
The roots of music therapy in India can be traced back to ancient Hindu mythology, Vedic texts, and local folk traditions. One example of a practice dating back to the Vedic texts is Nada Yoga, a longstanding Indian practice of healing by listening to the body's inner vibrations (Bhanu, Y. 2022). It is quite possible that music therapy has been used for hundreds of years in Indian culture. In the 1990s, another dimension to this, known as Musopathy, was postulated by Indian musician Chitravina Ravikiran based on fundamental criteria derived from acoustic physics.
The Indian Association of Music Therapy was established in 2010 by Dr. Dinesh C. Sharma with a motto "to use pleasant sounds in a specific manner like drug in due course of time as green medicine". He also published the International Journal of Music Therapy (ISSN 2249-8664) to popularize and promote music therapy research on an international platform.
Suvarna Nalapat has studied music therapy in the Indian context. Her books Nadalayasindhu-Ragachikitsamrutam (2008), Music Therapy in Management Education and Administration (2008) and Ragachikitsa (2008) are accepted textbooks on music therapy and Indian arts.
The Music Therapy Trust of India is another venture in the country. It was started in 2004 by Margaret Lobo, the founder and director of the Otakar Kraus Music Trust.
=== Lebanon ===
In 2006, Hamda Farhat introduced music therapy to Lebanon, developing and inventing therapeutic methods such as the triple method to treat hyperactivity, depression, anxiety, addiction, and post-traumatic stress disorder. She has met with great success in working with many international organizations and in the training of therapists, educators, and doctors. The Lebanese Association of Music Therapy (LAMT, registration number 65) is the country's only professional reference body; its president is Dr. Hamda Farhat, and its administrative members include Dr. Antoine Chartouni and Dr. Elia Francis Safi. The association is also involved in training and formation.
=== Norway ===
Norway is recognized as an important country for music therapy research. Its two major research centers are the Center for Music and Health at the Norwegian Academy of Music in Oslo, and the Grieg Academy Centre for Music Therapy (GAMUT) at the University of Bergen. The former was mostly developed by professor Even Ruud, while professor Brynjulf Stige is largely responsible for cultivating the latter. The center in Bergen has 18 staff, including 2 professors and 4 associate professors, as well as lecturers and PhD students. Two of the field's major international research journals are based in Bergen: the Nordic Journal of Music Therapy and Voices: A World Forum for Music Therapy. Norway's main contribution to the field is in "community music therapy", which tends to be as much oriented toward social work as toward individual psychotherapy; music therapy research from the country draws on a wide variety of methods to examine practice across an array of social contexts, including community centers, medical clinics, retirement homes, and prisons.
=== Nigeria ===
The origins of music therapy practices in Nigeria are unknown, but the country is identified as having a lengthy lineage and history of music therapy being utilized throughout its culture. The people most commonly associated with music therapy are herbalists, witch doctors, and faith healers, according to Professor Charles O. Aluede of Ambrose Alli University (Ekpoma, Edo State, Nigeria). Applying music and thematic sounds to the healing process is believed to help the patient overcome the sickness in his or her mind, which then seemingly cures the disease. Another practice involving music is called "Igbeuku", a religious practice performed by faith healers. In the practice of Igbeuku, patients are persuaded to confess their sins, which cause them severe discomfort. Following a confession, patients feel emotionally relieved because the priest has pronounced them clean and subjected them to a rigorous dancing exercise. The dancing exercise is a "thank you" for the healing and a tribute to the spiritual greater beings. The dance is accompanied by music and can be included among the unorthodox medical practices of Nigerian culture. While most music therapy practices occur in the medical field, music therapy is often used on the passing of a loved one. The use of song and dance in a funeral setting is very common across the continent, but especially in Nigeria. Songs allude to the idea that the final resting place is Hades (hell). The music helps alleviate the sorrows felt by the family members and friends of the lost loved one. Along with being a practice for funeral events, music therapy is also applied to the dying as a last-resort tactic of healing. Among the Esan of Edo State in particular, herbalists perform practices with an oko – a small aerophone made of elephant tusk that is blown into dying patients' ears to resuscitate them. Nigeria is full of cultural practices that contribute much to the music therapy world.
=== South Africa ===
There are longstanding traditions of music healing in South Africa, which in some ways may be very different from music therapy.
Mercédès Pavlicevic (1955–2018), an international music therapist, along with Kobie Temmingh, pioneered the music therapy program at the University of Pretoria, which debuted with a master's degree program in 1999. She noted the differences in longstanding traditions and other ways of viewing healing or music. A Nigerian colleague felt "that music in Africa is healing, and what is music therapy other than some colonial import?" Pavlicevic noted that "in Africa there is a long tradition of music healing" and asked "Can there be a synthesis of these two music-based practices towards something new?... I am not altogether convinced that African music healing and music therapy are especially closely related [emphasis added]. But I am utterly convinced that music therapy can learn an enormous amount from the African worldview and from music-making in Africa – rather than from African music-healing as such."
The South African Music Therapy Association can provide information to the public about music therapy or educational programs in South Africa.
South Africa was selected to host the 16th World Congress of Music Therapy in July 2020, a triennial World Federation of Music Therapy event. Due to the coronavirus pandemic (SARS-CoV-2) the congress was moved to an online event.
=== United States ===
==== Credential ====
National board certification (current as of 2021): MT-BC (Music Therapist-Board Certified, also written as Board Certified Music Therapist)
State license or registration: varies by state, see below
The credentials listed below were previously conferred by the former national organizations AAMT and NAMT; these credentials have not been available since 1998.
CMT (Certified Music Therapist)
ACMT (Advanced Certified Music Therapist)
RMT (Registered Music Therapist). Some other countries, such as Australia, use RMT as a credential that is distinct from the former U.S. credential.
The states of Georgia, Illinois, Iowa, Maryland, North Dakota, Nevada, New Jersey, Oklahoma, Oregon, Rhode Island, and Virginia have established licenses for music therapists; in Wisconsin, music therapists must be registered, and in Utah they must hold state certification. In the State of New York, the Creative Arts Therapy license (LCAT) incorporates the music therapy credential within its licensure, a mental health license that requires a master's degree and post-graduate supervision. The states of California and Connecticut have title protection for music therapists, meaning only those with the MT-BC credential can use the title "Board Certified Music Therapist".
==== Professional association ====
The American Music Therapy Association (AMTA).
==== Education ====
Publication on music therapy education and training has been detailed in both single author (Goodman, 2011) and edited (Goodman, 2015, 2023) volumes. The register of the European Music Therapy Confederation lists all educational training programs throughout Europe.
A music therapy degree candidate can earn an undergraduate, master's, or doctoral degree in music therapy. Many AMTA-approved programs in the United States offer equivalency and certificate degrees in music therapy for students who have completed a degree in a related field. Some practicing music therapists hold PhDs in music therapy or in related fields. A music therapist typically incorporates music therapy techniques with broader clinical practices such as psychotherapy, rehabilitation, and other practices depending on client needs. Music therapy services rendered within the context of a social service, educational, or health care agency are often reimbursable by insurance or other sources of funding for individuals with certain needs.
A degree in music therapy requires proficiency in guitar, piano, voice, music theory, music history, reading music, and improvisation, as well as varying levels of skill in assessment, documentation, and other counseling and health care skills depending on the focus of the particular university's program. A total of 1,200 hours of clinical experience is required, some of which is gained during an approximately six-month internship that takes place after all other degree requirements are met.
After successful completion of the educational requirements, including the internship, music therapists can apply for, take, and pass the Board Certification Examination in Music Therapy.
==== Board Certification Examination in Music Therapy ====
The current national credential is MT-BC (Music Therapist-Board Certified). It is not required in all states. To be eligible to apply to take the Board Certification Examination in Music Therapy, an individual must successfully complete a music therapy degree from a program accredited by AMTA at a college or university (or have a bachelor's degree and complete all of the music therapy course requirements from an accredited program), which includes successfully completing a music therapy internship. To maintain the credential, 100 units of continuing education must be completed every five years. The board exam is created by and administered through The Certification Board for Music Therapists.
==== History: c. 1900–present ====
For earlier history related to Western traditions, see § Western cultures sub-section.
From a western viewpoint, music therapy in the 20th and 21st centuries (as of 2021), as an evidence-based, allied healthcare profession, grew out of the aftermath of World Wars I and II, when, particularly in the United Kingdom and United States, musicians would travel to hospitals and play music for soldiers suffering from war-related emotional and physical trauma. Using music to treat the mental and physical ailments of active duty military and veterans was not new. Its use was recorded during the U.S. Civil War and Florence Nightingale used it a decade earlier in the Crimean War. Despite research data, observations by doctors and nurses, praise from patients, and willing musicians, it was difficult to vastly increase music therapy services or establish lasting music therapy education programs or organizations in the early 20th century. However, many of the music therapy leaders of this time period provided music therapy during WWI or to its veterans. These were pioneers in the field such as Eva Vescelius, musician, author, 1903 founder of the short-lived National Therapeutic Society of New York and the 1913 Music and Health journal, and creator/teacher of a musicotherapy course; Margaret Anderton, pianist, WWI music therapy provider for Canadian soldiers, a strong believer in training for music therapists, and 1919 Columbia University musicotherapy teacher; Isa Maud Ilsen, a nurse and musician who was the American Red Cross Director of Hospital Music in WWI reconstruction hospitals, 1919 Columbia University musicotherapy teacher, 1926 founder of the National Association for Music in Hospitals, and author; and Harriet Ayer Seymour, music therapist to WWI veterans, author, researcher, lecturer/teacher, founder of the National Foundation for Music Therapy in 1941, and author of the first music therapy textbook published in the US. Several physicians also promoted music as a therapeutic agent during this time period.
In the 1940s, changes in philosophy regarding care of psychiatric patients as well as the influx of WWII veterans in Veterans Administration hospitals renewed interest in music programs for patients. Many musicians volunteered to provide entertainment and were primarily assigned to perform on psychiatric wards. Positive changes in patients' mental and physical health were noted by nurses. The volunteer musicians, many of whom had degrees in music education, became aware of the powerful effects music could have on patients and realized that specialized training was necessary. The first music therapy bachelor's degree program was established in 1944, with three others and one master's degree program quickly following: "Michigan State College [now a University] (1944), the University of Kansas [master's degree only] (1946), the College of the Pacific (1947), The Chicago Musical College (1948) and Alverno College (1948)." The National Association for Music Therapy (NAMT), a professional association, was formed in 1950. In 1956, the first music therapy credential in the US, Registered Music Therapist (RMT), was instituted by the NAMT.
The American Music Therapy Association (AMTA) was founded in 1998 as a merger between the National Association for Music Therapy (NAMT, founded in 1950) and the American Association for Music Therapy (AAMT, founded in 1971).
=== United Kingdom ===
Live music was used in hospitals after both World Wars as part of the treatment program for recovering soldiers. Clinical music therapy in Britain as it is understood today was pioneered in the 1960s and 1970s by French cellist Juliette Alvin, whose influence on the current generation of British music therapy lecturers remains strong. Mary Priestley, one of Juliette Alvin's students, created "analytical music therapy". The Nordoff-Robbins approach to music therapy developed from the work of Paul Nordoff and Clive Robbins in the 1950s and 1960s.
Practitioners are registered with the Health Professions Council and, starting from 2007, new registrants must normally hold a master's degree in music therapy. There are master's level programs in music therapy in Manchester, Bristol, Cambridge, South Wales, Edinburgh and London, and there are therapists throughout the UK. The professional body in the UK is the British Association for Music Therapy. In 2002, the World Congress of Music Therapy, coordinated and promoted by the World Federation of Music Therapy, was held in Oxford on the theme of Dialogue and Debate. In November 2006, Dr. Michael J. Crawford and his colleagues again found that music therapy improved outcomes for patients with schizophrenia.
== Military: active duty, veterans, family members ==
=== History ===
Music therapy finds its roots in the military. The United States Department of War issued Technical Bulletin 187 in 1945, which described the use of music in the recovery of military service members in Army hospitals. The use of music therapy in military settings started to flourish and develop following World War II, supported by research and endorsements from both the United States Army and the Surgeon General of the United States. Although these endorsements helped music therapy develop, there was still a recognized need to assess the true viability and value of music as a medically based therapy. Walter Reed Army Medical Center and the Office of the Surgeon General worked together to lead one of the earliest assessments of a music therapy program. The goal of the study was to understand whether "music presented according to a specific plan" influenced recovery among service members with mental and emotional disorders. Eventually, case reports in reference to this study relayed not only the importance but also the impact of music therapy services in the recovery of military service personnel.
The first university-sponsored music therapy course was taught by Margaret Anderton in 1919 at Columbia University. Anderton's clinical specialty was working with wounded Canadian soldiers during World War I, using music-based services to aid in their recovery process.
Today, Operation Enduring Freedom and Operation Iraqi Freedom have both presented an array of injuries; however, the two signature injuries are post-traumatic stress disorder (PTSD) and traumatic brain injury (TBI). These two signature injuries are increasingly common among millennial military service members and are increasingly addressed in music therapy programs.
A person diagnosed with PTSD can associate a memory or experience with a song they have heard. This can result in either good or bad experiences. If it is a bad experience, the song's rhythm or lyrics can bring out the person's anxiety or fear response. If it is a good experience, the song can bring feelings of happiness or peace which could bring back positive emotions. Either way, music can be used as a tool to bring emotions forward and help the person cope with them.
=== Methods ===
Music therapists work with active duty military personnel, veterans, service members in transition, and their families. Music therapists strive to engage clients in music experiences that foster trust and complete participation over the course of their treatment process. Music therapists use an array of music-centered tools, techniques, and activities when working with military-associated clients, many of which are similar to the techniques used in other music therapy settings. These methods include, but are not limited to: group drumming, listening, singing, and songwriting. Songwriting is a particularly effective tool with military veterans struggling with PTSD and TBI, as it creates a safe space to "... work through traumatic experiences, and transform traumatic memories into healthier associations".
=== Programs ===
Music therapy in the military is seen in programs on military bases, VA healthcare facilities, military treatment facilities, and military communities. Music therapy programs have a large outreach because they exist for all phases of military life: pre-mobilization, deployment, post-deployment, recovery (in the case of injury), and among families of fallen military service personnel.
The Exceptional Family Member Program (EFMP) also exists to provide music therapy services to active duty military families who have a family member with a developmental, physical, emotional, or intellectual disorder. Currently, programs at the Davis–Monthan Air Force Base, Resounding Joy, Inc., and the Music Institute of Chicago partner with EFMP services to provide music therapy services to eligible military family members.
Music therapy programs primarily target active duty military members and their treatment facilities to provide reconditioning among members convalescing in Army hospitals. Music therapy programs, however, benefit not only the military but a wide range of clients, including members of the U.S. Air Force, U.S. Navy, and U.S. Marine Corps. Individuals exposed to trauma benefit from essential rehabilitative tools in order to follow the course of recovery from stress disorders. Music therapists are certified professionals who possess the abilities to determine appropriate interventions to support recovery from a physically, emotionally, or mentally traumatic experience. In addition, they play an integral part throughout the treatment process of service members diagnosed with post-traumatic stress or brain injuries. In many cases, self-expression through songwriting or using instruments helps restore emotions that can be lost following trauma. Music has a significant effect on troops traveling overseas or between bases because many soldiers view music as an escape from war, a connection to their homeland and families, or a source of motivation. By working with a certified music therapist, marines undergo sessions re-instituting concepts of cognition, memory, attention, and emotional processing. Although programs primarily focus on phases of military life, other service members, such as members of the U.S. Air Force, are eligible for treatment as well. For instance, during one music therapy session, a man began to play a song to a wounded airman, who said, "[music] allows me to talk about something that happened without talking about it". Music allowed the active duty airman to open up about previous experiences while reducing his anxiety level.
== History ==
Music has been used to soothe grief since the time of David and King Saul: in I Samuel, David plays the lyre to make King Saul feel relieved and better. It has since been used all over the world for the treatment of various issues, though the first recorded use of the term "music therapy" was in 1789 – an article titled "Music Physically Considered" by an unknown author in Columbian Magazine. The creation and expansion of music therapy as a treatment modality thrived in the early to mid-1900s, and while a number of organizations were created, none survived for long. It was not until the National Association for Music Therapy was founded in New York in 1950 that clinical training and certification requirements were created. In 1971, the American Association for Music Therapy was created, though at that time it was called the Urban Federation of Music Therapists. The Certification Board for Music Therapists was created in 1983, which strengthened the practice of music therapy and the trust placed in it. In 1998, the American Music Therapy Association was formed out of a merger between the National and American Associations, and as of 2017 it is the single largest music therapy organization in the world (American music therapy, 1998–2025).
Archaeologists have found ancient flutes, carved from ivory and bone, that have been dated to as far back as 43,000 years ago. One account states that "The earliest fragment of musical notation is found on a 4,000-year-old Sumerian clay tablet, which includes instructions and tuning for a hymn honoring the ruler Lipit-Ishtar. But for the title of oldest extant song, most historians point to 'Hurrian Hymn No. 6,' an ode to the goddess Nikkal that was composed in cuneiform by the ancient Hurrians sometime around the 14th century B.C.".
=== Western cultures ===
==== Music and healing ====
Music has been used as a healing implement for centuries. Apollo is the ancient Greek god of music and of medicine and his son Aesculapius was said to cure diseases of the mind by using song and music. By 5000 BC, music was used for healing by Egyptian priest-physicians. Plato said that music affected the emotions and could influence the character of an individual. Aristotle taught that music affects the soul and described music as a force that purified the emotions. Aulus Cornelius Celsus advocated the sound of cymbals and running water for the treatment of mental disorders. Music as therapy was practiced in the Bible when David played the harp to rid King Saul of a bad spirit (1 Sam 16:23). As early as 400 B.C., Hippocrates played music for mental patients. In the 13th century, Arab hospitals contained music-rooms for the benefit of the patients. In the United States, Native American medicine men often employed chants and dances as a method of healing patients. The Turco-Persian psychologist and music theorist al-Farabi (872–950), known as Alpharabius in Europe, dealt with music for healing in his treatise Meanings of the Intellect, in which he discussed the therapeutic effects of music on the soul. In his De vita libri tres published in 1489, Platonist Marsilio Ficino gives a lengthy account of how music and songs can be used to draw celestial benefits for staying healthy. Robert Burton wrote in the 17th century in his classic work, The Anatomy of Melancholy, that music and dance were critical in treating mental illness, especially melancholia.
The rise of an understanding of the body and mind in terms of the nervous system led to the emergence of a new wave of music for healing in the 18th century. Earlier works on the subject, such as Athanasius Kircher's Musurgia Universalis of 1650 and even early 18th-century books such as Michael Ernst Ettmüller's 1714 Disputatio effectus musicae in hominem (Disputation on the Effect of Music on Man) or Friedrich Erhardt Niedten's 1717 Veritophili, still tended to discuss the medical effects of music in terms of bringing the soul and body into harmony. But from the mid-18th century, works on the subject such as Richard Brocklesby's 1749 Reflections on Antient and Modern Musick, the 1737 Memoires of the French Academy of Sciences, or Ernst Anton Nicolai's 1745 Die Verbindung der Musik mit der Arzneygelahrheit (The Connection of Music to Medicine), stressed the power of music over the nerves.
==== Music therapy: 19th century ====
After 1800, some books on music and medicine drew on the Brunonian system of medicine, arguing that the stimulation of the nerves caused by music could directly improve or harm health. Throughout the 19th century, an impressive number of books and articles were authored by physicians in Europe and the United States discussing use of music as a therapeutic agent to treat both mental and physical illness.
==== Music therapy: 1900 – c. 1940 ====
From a western viewpoint, music therapy in the 20th and 21st centuries (as of 2021), as an evidence-based, allied healthcare profession, grew out of the aftermath of World Wars I and II. Particularly in the United Kingdom and United States, musicians would travel to hospitals and play music for soldiers with war-related emotional and physical trauma. Using music to treat the mental and physical ailments of active duty military and veterans was not new. Its use was recorded during the US Civil War and Florence Nightingale used it a decade earlier in the Crimean War. Despite research data, observations by doctors and nurses, praise from patients, and willing musicians, it was difficult to vastly increase music therapy services or establish lasting music therapy education programs or organizations in the early 20th century. However, many of the music therapy leaders of this time period provided music therapy during WWI or to its veterans. These were pioneers in the field such as Eva Vescelius, musician, author, 1903 founder of the short-lived National Therapeutic Society of New York and the 1913 Music and Health journal, and creator/teacher of a musicotherapy course; Margaret Anderton, pianist, World War I music therapy provider for Canadian soldiers, a strong believer in training for music therapists, and 1919 Columbia University musicotherapy teacher; Isa Maud Ilsen, a nurse and musician who was the American Red Cross Director of Hospital Music in World War I reconstruction hospitals, 1919 Columbia University musicotherapy teacher, 1926 founder of the National Association for Music in Hospitals, and author; and Harriet Ayer Seymour, music therapist to World War I veterans, author, researcher, lecturer/teacher, founder of the National Foundation for Music Therapy in 1941, author of the first music therapy textbook published in the United States. Several physicians also promoted music as a therapeutic agent during this time period.
In the United States, the first music therapy bachelor's degree program was established in 1944 at Michigan State College (now Michigan State University).
For history from the early 20th century to the present, see continents or individual countries in § Usage by region section.
== See also ==
== References ==
== Bibliography ==
American Psychiatric Association (2000). Diagnostic and statistical manual of mental disorders (4th edn, revised). Washington, D.C.: Author.
Gibson, David (2018). The Complete Guide to Sound Healing (2nd edn), Sound of Light.
Goodman, K. D. (2011), Music therapy education and training: From theory to practice. Charles C. Thomas.
Goodman, K. D. (ed.) (2015), International perspectives in music therapy education and training. Charles C Thomas.
Goodman, K. D. (ed.) (2023), Developing issues in world music therapy education and training: A plurality of views. Charles C Thomas.
Hilliard, R. E. (2001). "The effects of music therapy-based bereavement groups on mood and behavior of grieving children: A pilot study". Journal of Music Therapy, 38(4), 291–306.
Hilliard, R. E. (2007). "The effects of orff-based music therapy and social work groups on childhood grief symptoms and behaviors". Journal of Music Therapy, 44(2), 123–38.
Jones, J. D. (2005). "A comparison of songwriting and lyric analysis techniques to evoke emotional change in a single session with people who are chemically dependent", Journal of Music Therapy, 42, 94–110.
Krout, R. E. (2005). "Applications of music therapist-composed songs in creating participant connections and facilitating goals and rituals during one-time bereavement support groups and programs". Music Therapy Perspectives, 23(2), 118–128.
Lindenfelser, K. J., Grocke, D., & McFerran, K. (2008). "Bereaved parents' experiences of music therapy with their terminally ill child". Journal of Music Therapy, 45(3), 330–48.
Rosner, R., Kruse, J., & Hagl, M. (2010). "A meta‐analysis of interventions for bereaved children and adolescents". Death Studies, 34(2), 99–136.
Schwantes, M., Wigram, T., McKinney, C., Lipscomb, A., & Richards, C. (2011). "The Mexican corrido and its use in a music therapy bereavement group". The Australian Journal of Music Therapy, 22, 2–20.
Silverman, M. J. (2008). "Quantitative comparison of cognitive behavioral therapy and music therapy research: A methodological best-practices analysis to guide future investigation for adult psychiatric patients". Journal of Music Therapy, 45(4), 457–506.
Silverman, M. J. (2009). "The use of lyric analysis interventions in contemporary psychiatric music therapy: Descriptive results of songs and objectives for clinical practice". Music Therapy Perspectives, 27(1), 55–61.
Silverman, M. J., & Marcionetti, M. J. (2004). "Immediate effects of a single music therapy intervention on persons who are severely mentally ill". Arts in Psychotherapy, 31, 291–301.
Valentino, R. E. (2006). "Attitudes towards cross-cultural empathy in music therapy". Music Therapy Perspectives, 24(2), 108–114.
Whitehead-Pleaux, A. M., Baryza, M. J., & Sheridan, R. L. (2007). "Exploring the effects of music therapy on pediatric pain: phase 1". The Journal of Music Therapy, 44(3), 217–41.
== Further reading ==
== External links ==
Learning materials related to sound therapy at Wikiversity
Dance/movement therapy (DMT) in the USA and Australia, or dance movement psychotherapy (DMP) in the UK, is the psychotherapeutic use of movement and dance to support intellectual, emotional, and motor functions of the body. As a modality of the creative arts therapies, DMT looks at the correlation between movement and emotion.
== Efficacy ==
Dance/movement therapy, alone and in conjunction with other forms of therapy, has been shown to be an effective form of treatment for anxiety and anxiety related disorders across age ranges and across a wide population of individuals. Certain studies show that dance movement therapy has been an effective form of anxiety treatment for those with and without intellectual disabilities and musculoskeletal disorders. It has also been shown to be effective at reducing aggression in young children.
There are insufficient high quality trials to assess the effect of DMT on behavioral, social, cognitive and emotional symptoms in people with dementia.
== Principles ==
The theory of DMT is based mainly upon the belief that body and mind interact. Both conscious and unconscious movement of the person, based on the dualist mind–body premise, affects total control and also reflects the individual's personality. The therapist-client relationship is therefore partly based on non-verbal cues such as body language. Movement is believed to have a symbolic function and as such can aid in understanding the self. Movement improvisation allows the client to experiment with new ways of being, and DMT provides a manner or channel in which the client can consciously understand early relationships and negative experiences through non-verbal mediation by the therapist.
By integrating the body, mind, and spirit, DMT aims to foster a sense of wholeness among participants. The body refers to the "discharging of energy through muscular-skeletal responses to stimuli received by the brain." The mind refers to "mental activities...such as memory, imagery, perception, attention, evaluation, reasoning and decision making." The spirit refers to the "subjectively experienced feeling of engaging in or empathically observing dancing."
Dance movement therapy works to improve social skills and relational dynamics among the clients who choose to participate, in order to improve their quality of life. This therapy seeks to deepen clients' self-awareness through a meditative process that involves movement, motion, and realization through exploration of one's body.
== Methodology ==
DMT/P methodology is fairly heterogeneous, and practitioners draw on a variety of psychotherapeutic and kinetic principles. Most Dance Movement Therapy training programs work from an established theoretical base – for example, psychodynamic theory, humanistic psychology, integrative therapy, cognitive behavioral therapy, or existential therapy. Depending on the approach, or combination of approaches, practitioners work through very different processes and toward very different aims.
In addition to the psychotherapeutic basis of their work, practitioners may employ different approaches to movement and dance.
Some dance therapists use codified dance styles, such as ballet, folk dance, and contemporary dance. The majority of dance therapists work within a kinetic framework of creative and expressive movement practices, incorporating structured improvisation.
Common requirements of most DMT/P graduate programmes are movement analysis and profiling, human development, and developmental psychology.
Additionally, since a variety of populations may be encountered in DMT/P, methods are adapted to meet the needs of the circumstances and clients, and this further reduces standardisation.
Bonnie Meekums, a second wave dance therapist, described four stages of the therapy process, based on her experience in the field:
Preparation: the warm-up stage; a safe space is established without obstacles or distractions, a supportive relationship with a witness is formed, and participants become comfortable moving with their eyes closed.
Incubation: leader verbally prompts participant to go into subconscious, open-ended imagery used to create an internal environment that is catered to the participant, relaxed atmosphere, symbolic movements.
Illumination: process which is integrated through conscious awareness via dialogue with witness, self-reflection in which the participant uncovers and resolves subconscious motivations, increased self awareness, can have positive and negative effects.
Evaluation: discuss insights and significance of the process, prepare to end therapy.
== The use of props ==
Dance movement therapists frequently use props during sessions to support grounding skills and to increase clients' awareness of their bodies and personal boundaries. These props might include blankets, sensory balls, weighted blankets, colorful scarves, coloring pencils, and resistance bands. Clients also often can select the type of music they prefer for the session.
== Proposed mechanisms ==
Various hypotheses have been proposed for the mechanisms by which dance therapy may benefit participants. There is a social component to dance therapy, which can be valuable for psychological functioning through human interaction. Another possible mechanism is the music used during the session, which may be able to reduce pain, decrease anxiety, and increase relaxation. Since dance requires learning and involves becoming active and discovering capacities for movement, the physical training involved could provide benefits as well. Dancing may be considered more uplifting and enjoyable than other types of exercise. Dance therapy can also involve nonverbal communication, "which enables participants to express their feelings without words. This might be helpful when normal communication is absent or has broken down (eg, for patients with dementia)."
== Locations ==
DMT is practiced in a large variety of locations. Such locations include:
Physical medicine
Rehabilitation centers
Medical settings
Education school facilities
Nursing homes
Day care facilities
Disease prevention centers
Health promotion programs
Hospitals
Mental health settings
Private practice
== Organizations ==
Organizations such as the American Dance Therapy Association were created in order to uphold high standards in the field of DMT. Such organizations help connect individuals to therapists and DMT.
=== American Dance Therapy Association ===
American Dance Therapy Association (ADTA) was founded in 1966 in order to uphold high standards throughout dance therapy. The ADTA was created by Marian Chace, its first president, together with Elissa Queyquep White, Claire Schmais, and other pioneers in dance movement therapy. Along with setting the standards that therapists must attain to become licensed, the ADTA keeps an updated registry of all movement/dance therapists who have met its standards. The ADTA also publishes the American Journal of Dance Therapy and sponsors annual professional conferences. According to the ADTA, movement is considered a language which allows the body, mind, and spirit to communicate. The association offers recorded webinars, available at any time, that educate viewers about the dance therapy field, as well as live webinars for purchase that provide a deeper education in how dance therapy can be used in daily life.
=== Association for Dance Movement Psychotherapy, United Kingdom ===
The Association for Dance Movement Psychotherapy, United Kingdom (ADMP UK) was one of the first organizations established to regulate the field of dance therapy. ADMP UK accredits therapists and oversees that all regulations are followed. The association actively promotes dance in the UK and other countries, and collaborates with other art therapy organizations. ADMP UK provides dance therapy to the community, either individually or in group sessions. These sessions use Dance Movement Psychotherapy (DMP), which treats body movement as a key instrument of expression and communication. DMP is held to support trust within one's relationships, the potential to grow physically and spiritually, and the discovery of who one truly is.
=== European Association Dance Movement Therapy ===
The European Association of Dance Movement Therapy is an umbrella association which represents national professional bodies for Dance Movement Therapy in Europe. It represents members in Germany, Greece, Hungary, Italy, Latvia, the Netherlands, Poland, Russia, Spain and the UK, with partial members in Austria, the Czech Republic, Finland, France, Switzerland and Ukraine, and associate members in Croatia, Cyprus, Denmark, Israel, Portugal, Romania and Sweden. Its stated mission is to continue the development of dance therapy and to pursue the legal recognition of the practice, and the association aims to exchange ideas and collaborate with other countries on dance therapy.
=== NVDAT (Nederlandse Vereniging voor Danstherapie – Dutch Dance Movement Therapy Association) ===
The Nederlandse Vereniging voor Danstherapie supports the interests of dance movement therapists based in The Netherlands.
=== Korean Dance Therapy Association ===
The Korean Dance Therapy Association was established in 1993 by Dr. Ryu Boon Soon as the first dance therapy association in South Korea. It was modeled after the structure of the ADTA and provides education, credentialing, and professional development opportunities to dance therapists in Korea.
== Allied professions ==
Allied professions are fields in which a person could pursue special studies or short courses in DMT, or eventually become fully trained in the area.
Dance
Physical education
Occupational therapy
Physiotherapy
Psychology
Art therapy
== Therapist qualifications ==
American Dance Therapy Association
ADTA is the main regulator of the required education and training in order to become a dance/movement therapist in the USA. A master's degree is required to become a dance/movement therapist. "Registered Dance/Movement Therapist" (R-DMT) is the title given to entry-level dance/movement therapists who have completed the requisite education and a minimum 700-hour supervised clinical internship. Those who have completed over 2,400 hours of supervised professional clinical work may apply for the advanced credential "Board Certified Dance/Movement Therapist" (BC-DMT).
Association for Dance Movement Psychotherapy, United Kingdom
ADMP UK is the main regulator of the required education and training in order to become a dance/movement therapist in the UK. The ADMP is also a member of the European Association Dance Movement Therapy (EADMT). To become a licensed dance/movement therapist, a master's degree in Dance Movement Psychotherapy (DMP) is required. There are three DMP training programs in the UK – at Goldsmiths, University of London; the University of Roehampton in London; and the University of Derby.
European Association of Dance Movement Therapy
EADMT is the main regulator of the required education and training in order to become a dance/movement therapist in the EU. DMT training is taught in private and university settings across the EU in countries that include Austria, Estonia, Germany, Greece, Hungary, Italy, Latvia, the Netherlands, Poland, Russia, Spain, and the United Kingdom. Introductory course training in DMT ranges from 10 to 120 hours; these hours vary by country. Full university accreditation courses at the bachelor's and postgraduate levels range from 2 to 4 years.
The EADMT training standard criteria were adopted by the EADMT General Assembly in Barcelona, Spain in 2017. These criteria help DMT programs meet best practice and achieve high quality DMT practitioners across Europe.
== Education ==
Typically, becoming a dance therapist requires a graduate degree of at least master's level. There is no specific undergraduate degree; however, many practitioners hold undergraduate degrees in, or related to, psychology or dance.
All master's degrees in the UK and the USA require clinical placements, personal therapy and supervision, as well as experiential and theoretical learning, and typically take between 2 and 3 years to complete. Upon completion of a master's degree, graduates are eligible to register as Dance Movement Therapists/Psychotherapists with their professional associations. In the UK, graduates may also register with the UK Council for Psychotherapy (UKCP).
It is also possible to register as a Dance Movement Therapist/Psychotherapist without a DMT/DMP master's degree. This usually requires equivalent psychotherapeutic training and substantial experience of applying dance in therapeutic settings.
== History ==
The American Dance Therapy Association was founded in 1966 as an organization to support the emerging profession of dance/movement therapy and is the only U.S. organization dedicated to the profession of dance/movement therapy.
Dance has been used therapeutically for thousands of years. Since early human history it has been used as a healing ritual to influence fertility, birth, sickness, and death. Over the period from 1840 to 1930, a new philosophy of dance developed in Europe and the United States, defined by the idea that movement could have an effect on the mover, and that dance was not simply an expressive art. The general view is that dance/movement as active imagination was originated by Jung in 1916 and developed in the 1960s by dance therapy pioneer Mary Starks Whitehouse; Tina Keller-Jenny and other therapists started practicing the therapy in 1940. The actual establishment of dance as a therapy and as a profession occurred in the 1950s, beginning with future American Dance Therapy Association founder Marian Chace.
=== First wave ===
Marian Chace spearheaded the movement of dance in the medical community as a form of therapy.
She is considered the principal founder of what is now dance therapy in the United States. In 1942, through her work, dance was first introduced to western medicine. Chace was originally a dancer, choreographer, and performer. After opening her own dance school in Washington, D.C., Chace began to realize the effects dance and movement had on her students. The reported feelings of wellbeing from her students began to attract the attention of the medical community, and some local doctors began sending patients to her classes. She was soon asked to work at St. Elizabeth's Hospital in Washington, D.C., once psychiatrists, too, realized the benefits their patients were receiving from attending Chace's dance classes. In 1966, Chace became the first president of the American Dance Therapy Association, an organization which she and several other DMT pioneers founded. According to the ADTA, dance therapy is "the psychotherapeutic use of movement as a process which furthers the emotional, social, cognitive, and physical integration of the individual."
=== Second wave ===
The second wave of Dance Movement Therapy came in the 1970s and 1980s, and it sparked much interest from American therapists. During this time, therapists began to experiment with the psychotherapeutic applications of dance and movement. As a result of these experiments, DMT was categorized as a form of psychotherapy. It was from this second wave that today's Dance Movement Therapy evolved.
== See also ==
Authentic Movement
Expressive therapy
Process art
Rudolf Laban
== References ==
Clinical pluralism is a term used by some psychotherapists to denote an approach to clinical treatment that would seek to remain respectful towards divergences in meaning-making. It can signify both an undertaking to negotiate theoretical difference between clinicians, and an undertaking to negotiate differences of belief occurring within the therapeutic relationship itself. While the notion of clinical pluralism is associated with the practice of psychotherapy, similar issues have been raised within the field of medical ethics (see Medical ethics § Cultural concerns).
Clinical pluralism can be applied within a particular approach to psychotherapy, such as psychoanalytic psychotherapy. Modern psychoanalytic training involves not only hours of training sessions but the use of diverse clinical practices. An example of psychoanalytic treatment following clinical pluralism is coparticipant psychoanalysis, which features an individualized treatment but is diverse in the practices employed. This technique holds that all analyses represent unique sets of practices, which depend on the varying characteristics of the personalities that make up the analytic dyad.
Clinical pluralism is also associated with eclectic and integrative psychotherapy, which are distinguished from clinical practice that follows a specific theoretical school with its own therapeutic techniques. These approaches to therapy all maintain that there is no single theory or therapeutic modality that can offer optimum efficacy.
== See also ==
Eclecticism
Integrative psychotherapy § Comparison with eclecticism
== References ==
Alternative medicine is any practice that aims to achieve the healing effects of medicine despite lacking biological plausibility, testability, repeatability or evidence of effectiveness. Unlike modern medicine, which employs the scientific method to test plausible therapies by way of responsible and ethical clinical trials, producing repeatable evidence of either effect or of no effect, alternative therapies reside outside of mainstream medicine and do not originate from using the scientific method, but instead rely on testimonials, anecdotes, religion, tradition, superstition, belief in supernatural "energies", pseudoscience, errors in reasoning, propaganda, fraud, or other unscientific sources. Frequently used terms for relevant practices are New Age medicine, pseudo-medicine, unorthodox medicine, holistic medicine, fringe medicine, and unconventional medicine, with little distinction from quackery.
Some alternative practices are based on theories that contradict the established science of how the human body works; others appeal to the supernatural or superstitious to explain their effect or lack thereof. In others, the practice has plausibility but lacks a positive risk–benefit outcome probability. Research into alternative therapies often fails to follow proper research protocols (such as placebo-controlled trials, blind experiments and calculation of prior probability), providing invalid results. History has shown that if a method is proven to work, it eventually ceases to be alternative and becomes mainstream medicine.
Much of the perceived effect of an alternative practice arises from a belief that it will be effective, the placebo effect, or from the treated condition resolving on its own (the natural course of disease). This is further exacerbated by the tendency to turn to alternative therapies upon the failure of medicine, at which point the condition will be at its worst and most likely to spontaneously improve. In the absence of this bias, especially for diseases that are not expected to get better by themselves such as cancer or HIV infection, multiple studies have shown significantly worse outcomes if patients turn to alternative therapies. While this may be because these patients avoid effective treatment, some alternative therapies are actively harmful (e.g. cyanide poisoning from amygdalin, or the intentional ingestion of hydrogen peroxide) or actively interfere with effective treatments.
The alternative medicine sector is a highly profitable industry with a strong lobby, and faces far less regulation over the use and marketing of unproven treatments. Complementary medicine (CM), complementary and alternative medicine (CAM), integrated medicine or integrative medicine (IM), and holistic medicine attempt to combine alternative practices with those of mainstream medicine. Traditional medicine practices become "alternative" when used outside their original settings and without proper scientific explanation and evidence. Alternative methods are often marketed as more "natural" or "holistic" than the methods offered by medical science, which supporters of alternative medicine sometimes derogatorily call "Big Pharma". Billions of dollars have been spent studying alternative medicine, with few or no positive results and many methods thoroughly disproven.
== Definitions and terminology ==
The terms alternative medicine, complementary medicine, integrative medicine, holistic medicine, natural medicine, unorthodox medicine, fringe medicine, unconventional medicine, and new age medicine are used interchangeably as having the same meaning and are almost synonymous in most contexts. Terminology has shifted over time, reflecting the preferred branding of practitioners. For example, the United States National Institutes of Health department studying alternative medicine, currently named the National Center for Complementary and Integrative Health (NCCIH), was established as the Office of Alternative Medicine (OAM) and was renamed the National Center for Complementary and Alternative Medicine (NCCAM) before obtaining its current name. Therapies are often framed as "natural" or "holistic", implicitly and intentionally suggesting that conventional medicine is "artificial" and "narrow in scope".
The meaning of the term "alternative" in the expression "alternative medicine", is not that it is an effective alternative to medical science (though some alternative medicine promoters may use the loose terminology to give the appearance of effectiveness). Loose terminology may also be used to suggest meaning that a dichotomy exists when it does not (e.g., the use of the expressions "Western medicine" and "Eastern medicine" to suggest that the difference is a cultural difference between the Asian east and the European west, rather than that the difference is between evidence-based medicine and treatments that do not work).
=== Alternative medicine ===
Alternative medicine is defined loosely as a set of products, practices, and theories that are believed or perceived by their users to have the healing effects of medicine, but whose effectiveness has not been established using scientific methods, or whose theory and practice is not part of biomedicine, or whose theories or practices are directly contradicted by scientific evidence or scientific principles used in biomedicine. "Biomedicine" or "medicine" is that part of medical science that applies principles of biology, physiology, molecular biology, biophysics, and other natural sciences to clinical practice, using scientific methods to establish the effectiveness of that practice. Unlike medicine, an alternative product or practice does not originate from using scientific methods, but may instead be based on hearsay, religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, fraud, or other unscientific sources.
Some other definitions seek to specify alternative medicine in terms of its social and political marginality to mainstream healthcare. This can refer to the lack of support that alternative therapies receive from medical scientists regarding access to research funding, sympathetic coverage in the medical press, or inclusion in the standard medical curriculum. For example, a widely used definition devised by the US NCCIH calls it "a group of diverse medical and health care systems, practices, and products that are not generally considered part of conventional medicine". However, these descriptive definitions are inadequate in the present-day when some conventional doctors offer alternative medical treatments and introductory courses or modules can be offered as part of standard undergraduate medical training; alternative medicine is taught in more than half of US medical schools and US health insurers are increasingly willing to provide reimbursement for alternative therapies.
=== Complementary or integrative medicine ===
Complementary medicine (CM) or integrative medicine (IM) is the use of alternative medicine together with mainstream medical treatment, in the belief that it improves the effect of treatments. For example, acupuncture (piercing the body with needles to influence the flow of a supernatural energy) might be believed to increase the effectiveness of, or "complement", science-based medicine when used at the same time. Significant drug interactions caused by alternative therapies may instead make treatments less effective, notably in cancer therapy.
Several medical organizations differentiate between complementary and alternative medicine, including the UK National Health Service (NHS), Cancer Research UK, and the US Centers for Disease Control and Prevention (CDC), the latter of which states that "Complementary medicine is used in addition to standard treatments" whereas "Alternative medicine is used instead of standard treatments."
Complementary and integrative interventions are used to improve fatigue in adult cancer patients.
David Gorski has described integrative medicine as an attempt to bring pseudoscience into academic science-based medicine, with skeptics such as Gorski and David Colquhoun referring to this with the pejorative term "quackademia". Robert Todd Carroll described integrative medicine as "a synonym for 'alternative' medicine that, at its worst, integrates sense with nonsense. At its best, integrative medicine supports both consensus treatments of science-based medicine and treatments that the science, while promising perhaps, does not justify". Rose Shapiro has criticized the field of alternative medicine for rebranding the same practices as integrative medicine.
CAM is an abbreviation of the phrase complementary and alternative medicine. The 2019 World Health Organization (WHO) Global Report on Traditional and Complementary Medicine states that the terms complementary and alternative medicine "refer to a broad set of health care practices that are not part of that country's own traditional or conventional medicine and are not fully integrated into the dominant health care system. They are used interchangeably with traditional medicine in some countries."
In the 1990s, integrative medicine began to be marketed under a new term, functional medicine.
The Integrative Medicine Exam by the American Board of Physician Specialties includes the following subjects: Manual Therapies, Biofield Therapies, Acupuncture, Movement Therapies, Expressive Arts, Traditional Chinese Medicine, Ayurveda, Indigenous Medical Systems, Homeopathic Medicine, Naturopathic Medicine, Osteopathic Medicine, Chiropractic, and Functional Medicine.
=== Other terms ===
Traditional medicine (TM) refers to certain practices within a culture which have existed since before the advent of medical science. Many TM practices are based on "holistic" approaches to disease and health, in contrast to the scientific, evidence-based methods of conventional medicine. The 2019 WHO report defines traditional medicine as "the sum total of the knowledge, skill and practices based on the theories, beliefs and experiences indigenous to different cultures, whether explicable or not, used in the maintenance of health as well as in the prevention, diagnosis, improvement or treatment of physical and mental illness." When used outside the original setting and in the absence of scientific evidence, TM practices are typically referred to as "alternative medicine".
Holistic medicine is another rebranding of alternative medicine. In this case, the words balance and holism are often used alongside complementary or integrative, claiming to account more fully for the "whole" person, in contrast to the supposed reductionism of medicine.
=== Challenges in defining alternative medicine ===
Prominent members of the scientific and biomedical communities say that it is not meaningful to define an alternative medicine that is separate from a conventional medicine because the expressions "conventional medicine", "alternative medicine", "complementary medicine", "integrative medicine", and "holistic medicine" do not refer to any medicine at all. Others say that alternative medicine cannot be precisely defined because of the diversity of theories and practices it includes, and because the boundaries between alternative and conventional medicine overlap, are porous, and change. Healthcare practices categorized as alternative may differ in their historical origin, theoretical basis, diagnostic technique, therapeutic practice and in their relationship to the medical mainstream. Under a definition of alternative medicine as "non-mainstream", treatments considered alternative in one location may be considered conventional in another.
Critics say the expression is deceptive because it implies there is an effective alternative to science-based medicine, and that complementary is deceptive because it implies that the treatment increases the effectiveness of (complements) science-based medicine, while alternative medicines that have been tested nearly always have no measurable positive effect compared to a placebo. Journalist John Diamond wrote that "there is really no such thing as alternative medicine, just medicine that works and medicine that doesn't", a notion later echoed by Paul Offit: "The truth is there's no such thing as conventional or alternative or complementary or integrative or holistic medicine. There's only medicine that works and medicine that doesn't. And the best way to sort it out is by carefully evaluating scientific studies—not by visiting Internet chat rooms, reading magazine articles, or talking to friends."
== Types ==
Alternative medicine consists of a wide range of health care practices, products, and therapies. The shared feature is a claim to heal that is not based on the scientific method. Alternative medicine practices are diverse in their foundations and methodologies. Alternative medicine practices may be classified by their cultural origins or by the types of beliefs upon which they are based. Methods may incorporate or be based on traditional medicinal practices of a particular culture, folk knowledge, superstition, spiritual beliefs, belief in supernatural energies (antiscience), pseudoscience, errors in reasoning, propaganda, fraud, new or different concepts of health and disease, and any bases other than being proven by scientific methods. Different cultures may have their own unique traditional or belief-based practices, developed recently or over thousands of years, ranging from specific practices to entire systems of practices.
=== Unscientific belief systems ===
Alternative medicine, such as using naturopathy or homeopathy in place of conventional medicine, is based on belief systems not grounded in science.
=== Traditional ethnic systems ===
Alternative medical systems may be based on traditional medicine practices, such as traditional Chinese medicine (TCM), Ayurveda in India, or practices of other cultures around the world. Some useful applications of traditional medicines have been researched and accepted within ordinary medicine; however, the underlying belief systems are seldom scientific and are not accepted.
Traditional medicine is considered alternative when it is used outside its home region; or when it is used together with or instead of known functional treatment; or when it can be reasonably expected that the patient or practitioner knows or should know that it will not work – such as knowing that the practice is based on superstition.
=== Supernatural energies ===
Bases of belief may include the existence of supernatural energies undetected by the science of physics, as in biofields, or properties of the energies of physics that are inconsistent with the laws of physics, as in energy medicine.
=== Herbal remedies and other substances ===
Substance-based practices use substances found in nature such as herbs, foods, non-vitamin supplements and megavitamins, animal and fungal products, and minerals, including use of these products in traditional medical practices that may also incorporate other methods. Examples include healing claims for non-vitamin supplements, fish oil, omega-3 fatty acids, glucosamine, echinacea, flaxseed oil, and ginseng. Herbal medicine, or phytotherapy, includes not only the use of plant products but also, in some traditions, animal and mineral products. It is among the most commercially successful branches of alternative medicine, and includes the tablets, powders, and elixirs that are sold as "nutritional supplements". Only a very small percentage of these have been shown to have any efficacy, and there is little regulation as to standards and safety of their contents.
=== Religion, faith healing, and prayer ===
=== NCCIH classification ===
The United States agency National Center for Complementary and Integrative Health (NCCIH) has created a classification system for branches of complementary and alternative medicine that divides them into five major groups. These groups have some overlap, and distinguish two types of energy medicine: veritable, which involves scientifically observable energy (including magnet therapy, colorpuncture, and light therapy), and putative, which invokes physically undetectable or unverifiable energy. None of these energies has any evidence to support that it affects the body in any positive or health-promoting way.
Whole medical systems: Cut across more than one of the other groups; examples include traditional Chinese medicine, naturopathy, homeopathy, and ayurveda.
Mind-body interventions: Explore the interconnection between the mind, body, and spirit, under the premise that they affect "bodily functions and symptoms". A connection between mind and body is conventional medical fact, and this classification does not include therapies with proven function such as cognitive behavioral therapy.
"Biology"-based practices: Use substances found in nature such as herbs, foods, vitamins, and other natural substances. (As used here, "biology" does not refer to the science of biology, but is a usage newly coined by NCCIH in the primary source used for this article. "Biology-based" as coined by NCCIH may refer to chemicals from a nonbiological source, such as use of the poison lead in traditional Chinese medicine, and to other nonbiological substances.)
Manipulative and body-based practices: Feature manipulation or movement of body parts, such as is done in bodywork, chiropractic, and osteopathic manipulation.
Energy medicine: A domain that deals with putative and verifiable energy fields:
Biofield therapies are intended to influence energy fields that are purported to surround and penetrate the body. The existence of such energy fields has been disproven.
Bioelectromagnetic-based therapies use verifiable electromagnetic fields, such as pulsed fields, alternating-current, or direct-current fields in a non-scientific manner.
== History ==
The history of alternative medicine may refer to the history of a group of diverse medical practices that were collectively promoted as "alternative medicine" beginning in the 1970s, to the collection of individual histories of members of that group, or to the history of western medical practices that were labeled "irregular practices" by the western medical establishment. It includes the histories of complementary medicine and of integrative medicine. Before the 1970s, western practitioners that were not part of the increasingly science-based medical establishment were referred to as "irregular practitioners", and were dismissed by the medical establishment as unscientific and as practicing quackery. Until the 1970s, irregular practice became increasingly marginalized as quackery and fraud, as western medicine increasingly incorporated scientific methods and discoveries and had a corresponding increase in the success of its treatments. In the 1970s, irregular practices were grouped with traditional practices of nonwestern cultures and with other unproven or disproven practices that were not part of biomedicine, with the entire group collectively marketed and promoted under the single expression "alternative medicine".
Use of alternative medicine in the west began to rise following the counterculture movement of the 1960s, as part of the rising new age movement of the 1970s. This was due to misleading mass marketing of "alternative medicine" as an effective "alternative" to biomedicine, changing social attitudes about not using chemicals and challenging the establishment and authority of any kind, sensitivity to giving equal measure to beliefs and practices of other cultures (cultural relativism), and growing frustration and desperation by patients about limitations and side effects of science-based medicine. At the same time, in 1975, the American Medical Association, which played the central role in fighting quackery in the United States, abolished its quackery committee and closed down its Department of Investigation. By the early to mid-1970s the expression "alternative medicine" came into widespread use, and the expression became mass marketed as a collection of "natural" and effective treatment "alternatives" to science-based biomedicine. By 1983, mass marketing of "alternative medicine" was so pervasive that the British Medical Journal (BMJ) pointed to the "apparently endless stream of books, articles, and radio and television programmes" urging on the public the virtues of treatments "ranging from meditation to drilling a hole in the skull to let in more oxygen".
An analysis of trends in the criticism of complementary and alternative medicine (CAM) in five prestigious American medical journals during the period of reorganization within medicine (1965–1999) was reported as showing that the medical profession had responded to the growth of CAM in three phases, and that in each phase, changes in the medical marketplace had influenced the type of response in the journals. Changes included relaxed medical licensing, the development of managed care, rising consumerism, and the establishment of the US Office of Alternative Medicine (later the National Center for Complementary and Alternative Medicine, currently the National Center for Complementary and Integrative Health).
=== Medical education ===
Mainly as a result of reforms following the Flexner Report of 1910, medical education in established medical schools in the US has generally not included alternative medicine as a teaching topic. Typically, their teaching is based on current practice and scientific knowledge about: anatomy, physiology, histology, embryology, neuroanatomy, pathology, pharmacology, microbiology and immunology. Medical schools' teaching includes such topics as doctor-patient communication, ethics, the art of medicine, and engaging in complex clinical reasoning (medical decision-making). Writing in 2002, Snyderman and Weil remarked that by the early twentieth century the Flexner model had helped to create the 20th-century academic health center, in which education, research, and practice were inseparable. While this had much improved medical practice by defining with increasing certainty the pathophysiological basis of disease, a single-minded focus on the pathophysiological had diverted much of mainstream American medicine from clinical conditions that were not well understood in mechanistic terms, and were not effectively treated by conventional therapies.
By 2001, some form of CAM training was being offered by at least 75 out of 125 medical schools in the US. Exceptionally, the School of Medicine of the University of Maryland, Baltimore, includes a research institute for integrative medicine (a member entity of the Cochrane Collaboration). Medical schools are responsible for conferring medical degrees, but a physician typically may not legally practice medicine until licensed by the local government authority. Licensed physicians in the US who have attended one of the established medical schools there have usually graduated with a Doctor of Medicine (MD) degree. All states require that applicants for MD licensure be graduates of an approved medical school and complete the United States Medical Licensing Examination (USMLE).
== Efficacy ==
There is a general scientific consensus that alternative therapies lack the requisite scientific validation, and their effectiveness is either unproved or disproved. Many of the claims regarding the efficacy of alternative medicines are controversial, since research on them is frequently of low quality and methodologically flawed. Selective publication bias, marked differences in product quality and standardisation, and some companies making unsubstantiated claims call into question the claims of efficacy of isolated examples where there is evidence for alternative therapies.
The Scientific Review of Alternative Medicine points to confusions in the general population – a person may attribute symptomatic relief to an otherwise-ineffective therapy just because they are taking something (the placebo effect); the natural recovery from, or the cyclical nature of, an illness (the regression fallacy) gets misattributed to an alternative medicine being taken; and a person never diagnosed by science-based medicine may never have had a true illness, only a diagnosis in an alternative disease category.
Edzard Ernst, the first university professor of Complementary and Alternative Medicine, characterized the evidence for many alternative techniques as weak, nonexistent, or negative and in 2011 published his estimate that about 7.4% were based on "sound evidence", although he believes that may be an overestimate. Ernst has concluded that 95% of the alternative therapies he and his team studied, including acupuncture, herbal medicine, homeopathy, and reflexology, are "statistically indistinguishable from placebo treatments", but he also believes there is something that conventional doctors can usefully learn from the chiropractors and homeopaths: the therapeutic value of the placebo effect, one of the strangest phenomena in medicine.
In 2003, a project funded by the CDC identified 208 condition-treatment pairs, of which 58% had been studied by at least one randomized controlled trial (RCT), and 23% had been assessed with a meta-analysis. According to a 2005 book by a US Institute of Medicine panel, the number of RCTs focused on CAM has risen dramatically.
As of 2005, the Cochrane Library had 145 CAM-related Cochrane systematic reviews and 340 non-Cochrane systematic reviews. An analysis of the conclusions of only the 145 Cochrane reviews was done by two readers. In 83% of the cases, the readers agreed. In the 17% in which they disagreed, a third reader agreed with one of the initial readers to set a rating. These studies found that, for CAM, 38.4% concluded positive effect or possibly positive (12.4%), 4.8% concluded no effect, 0.7% concluded harmful effect, and 56.6% concluded insufficient evidence. An assessment of conventional treatments found that 41.3% concluded positive or possibly positive effect, 20% concluded no effect, 8.1% concluded net harmful effects, and 21.3% concluded insufficient evidence. However, the CAM review used the more developed 2004 Cochrane database, while the conventional review used the initial 1998 Cochrane database.
Alternative therapies do not "complement" (improve the effect of, or mitigate the side effects of) functional medical treatment. Significant drug interactions caused by alternative therapies may instead negatively impact functional treatment by making prescription drugs less effective, such as interference by herbal preparations with warfarin.
In the same way as for conventional therapies, drugs, and interventions, it can be difficult to test the efficacy of alternative medicine in clinical trials. In instances where an established, effective treatment for a condition is already available, the Declaration of Helsinki states that withholding such treatment is unethical in most circumstances. Use of standard-of-care treatment in addition to an alternative technique being tested may produce confounded or difficult-to-interpret results.
Cancer researcher Andrew J. Vickers has stated:
Contrary to much popular and scientific writing, many alternative cancer treatments have been investigated in good-quality clinical trials, and they have been shown to be ineffective. The label "unproven" is inappropriate for such therapies; it is time to assert that many alternative cancer therapies have been "disproven".
== Perceived mechanism of effect ==
Anything classified as alternative medicine by definition does not have a proven healing or medical effect. However, there are different mechanisms through which it can be perceived to "work". The common denominator of these mechanisms is that effects are misattributed to the alternative treatment.
=== Placebo effect ===
A placebo is a treatment with no intended therapeutic value. An example of a placebo is an inert pill, but it can include more dramatic interventions like sham surgery. The placebo effect is the concept that patients will perceive an improvement after being treated with an inert treatment. The opposite of the placebo effect is the nocebo effect, when patients who expect a treatment to be harmful will perceive harmful effects after taking it.
Placebos do not have a physical effect on diseases or improve overall outcomes, but patients may report improvements in subjective outcomes such as pain and nausea. A 1955 study suggested that a substantial part of a medicine's impact was due to the placebo effect. However, reassessments found the study to have flawed methodology. This and other modern reviews suggest that other factors like natural recovery and reporting bias should also be considered.
All of these are reasons why alternative therapies may be credited for improving a patient's condition even though the objective effect is non-existent, or even harmful. David Gorski argues that alternative treatments should be treated as a placebo, rather than as medicine. Almost none have performed significantly better than a placebo in clinical trials. Furthermore, distrust of conventional medicine may lead to patients experiencing the nocebo effect when taking effective medication.
=== Regression to the mean ===
A patient who receives an inert treatment may report improvements afterwards that it did not cause. Assuming it was the cause without evidence is an example of the regression fallacy. This may be due to a natural recovery from the illness, or a fluctuation in the symptoms of a long-term condition. The concept of regression toward the mean implies that an extreme result is more likely to be followed by a less extreme result.
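The selection effect behind the regression fallacy can be illustrated with a short simulation. This is a minimal, hypothetical sketch: the severity scale, noise level, and selection threshold are illustrative assumptions, not data from any study.

import random

# Hypothetical illustration of regression toward the mean. Each patient has a
# stable underlying symptom severity; every measurement adds random
# day-to-day fluctuation. No treatment is ever applied.
random.seed(0)
TRUE_SEVERITY = 50.0  # assumed constant underlying severity (arbitrary units)
NOISE_SD = 15.0       # assumed day-to-day fluctuation (standard deviation)
N = 100_000

first = [random.gauss(TRUE_SEVERITY, NOISE_SD) for _ in range(N)]
second = [random.gauss(TRUE_SEVERITY, NOISE_SD) for _ in range(N)]

# Keep only patients who felt unusually bad at the first measurement --
# the moment when people are most likely to try a new remedy.
selected = [(f, s) for f, s in zip(first, second) if f > 70]
mean_first = sum(f for f, _ in selected) / len(selected)
mean_second = sum(s for _, s in selected) / len(selected)

print(f"mean severity at first visit:  {mean_first:.1f}")   # well above 70
print(f"mean severity at second visit: {mean_second:.1f}")  # back near 50
# Severity falls back toward the true average even though nothing was done,
# so any inert remedy taken in between would appear to "work".

Because selecting patients at an extreme first measurement guarantees an apparent improvement, controlled trials compare against a placebo or no-treatment group rather than against each patient's own extreme baseline.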
=== Other factors ===
There are also reasons why a placebo treatment group may outperform a "no-treatment" group in a test which are not related to a patient's experience. These include patients reporting more favourable results than they really felt due to politeness or "experimental subordination", observer bias, and misleading wording of questions. In their 2010 systematic review of studies into placebos, Asbjørn Hróbjartsson and Peter C. Gøtzsche write that "even if there were no true effect of placebo, one would expect to record differences between placebo and no-treatment groups due to bias associated with lack of blinding." Alternative therapies may also be credited for perceived improvement through decreased use or effect of medical treatment, and therefore either decreased side effects or nocebo effects towards standard treatment.
== Use and regulation ==
=== Appeal ===
Practitioners of complementary medicine usually discuss and advise patients as to available alternative therapies. Patients often express interest in mind-body complementary therapies because they offer a non-drug approach to treating some health conditions.
In addition to the social-cultural underpinnings of the popularity of alternative medicine, there are several psychological issues that are critical to its growth, notably psychological effects, such as the will to believe, cognitive biases that help maintain self-esteem and promote harmonious social functioning, and the post hoc, ergo propter hoc fallacy.
In a 2018 interview with The BMJ, Edzard Ernst stated: "The present popularity of complementary and alternative medicine is also inviting criticism of what we are doing in mainstream medicine. It shows that we aren't fulfilling a certain need – we are not giving patients enough time, compassion, or empathy. These are things that complementary practitioners are very good at. Mainstream medicine could learn something from complementary medicine."
==== Marketing ====
Alternative medicine is a profitable industry with large media advertising expenditures. Accordingly, alternative practices are often portrayed positively and compared favorably to "big pharma".
The popularity of complementary and alternative medicine (CAM) may be related to other factors that Ernst mentioned in a 2008 interview in The Independent:
Why is it so popular, then? Ernst blames the providers, customers and the doctors whose neglect, he says, has created the opening into which alternative therapists have stepped. "People are told lies. There are 40 million websites and 39.9 million tell lies, sometimes outrageous lies. They mislead cancer patients, who are encouraged not only to pay their last penny but to be treated with something that shortens their lives." At the same time, people are gullible. It needs gullibility for the industry to succeed. It doesn't make me popular with the public, but it's the truth.
Paul Offit proposed that "alternative medicine becomes quackery" in four ways: by recommending against conventional therapies that are helpful, promoting potentially harmful therapies without adequate warning, draining patients' bank accounts, or by promoting "magical thinking". Promoting alternative medicine has been called dangerous and unethical.
==== Social factors ====
Authors have speculated on the socio-cultural and psychological reasons for the appeal of alternative medicines among the minority using them in lieu of conventional medicine. There are several socio-cultural reasons for the interest in these treatments centered on the low level of scientific literacy among the public at large and a concomitant increase in antiscientific attitudes and new age mysticism. Related to this are vigorous marketing of extravagant claims by the alternative medical community combined with inadequate media scrutiny and attacks on critics. Alternative medicine is criticized for taking advantage of the least fortunate members of society.
There is also an increase in conspiracy theories toward conventional medicine and pharmaceutical companies, mistrust of traditional authority figures, such as the physician, and a dislike of the current delivery methods of scientific biomedicine, all of which have led patients to seek out alternative medicine to treat a variety of ailments. Many patients lack access to contemporary medicine, due to a lack of private or public health insurance, which leads them to seek out lower-cost alternative medicine. Medical doctors are also aggressively marketing alternative medicine to profit from this market.
Patients can be averse to the painful, unpleasant, and sometimes-dangerous side effects of biomedical treatments. Treatments for severe diseases such as cancer and HIV infection have well-known, significant side-effects. Even low-risk medications such as antibiotics have the potential to cause life-threatening anaphylactic reactions in a small number of individuals. Many medications may cause minor but bothersome symptoms such as cough or upset stomach. In all of these cases, patients may be seeking out alternative therapies to avoid the adverse effects of conventional treatments.
=== Prevalence of use ===
According to research published in 2015, the increasing popularity of CAM needs to be explained by moral convictions or lifestyle choices rather than by economic reasoning.
In developing nations, access to essential medicines is severely restricted by lack of resources and poverty. Traditional remedies, often closely resembling or forming the basis for alternative remedies, may comprise primary healthcare or be integrated into the healthcare system. In Africa, traditional medicine is used for 80% of primary healthcare, and in developing nations as a whole over one-third of the population lack access to essential medicines.
In Latin America, inequities against BIPOC communities keep them tied to their traditional practices, and therefore it is often these communities that constitute the majority of users of alternative medicine. Racist attitudes towards certain communities prevent them from accessing more urbanized modes of care. A study that assessed access to care in rural communities of Latin America found that discrimination is a major barrier to citizens' ability to access care; women of Indigenous and African descent and lower-income families were especially hurt. Such exclusion exacerbates the inequities that minorities in Latin America already face. Consistently excluded from many systems of westernized care for socioeconomic and other reasons, low-income communities of color often turn to traditional medicine for care, as it has proved reliable to them across generations.
Commentators including David Horrobin have proposed adopting a prize system to reward medical research. This stands in opposition to the current mechanism for funding research proposals in most countries around the world. In the US, the NCCIH provides public research funding for alternative medicine. The NCCIH has spent more than US$2.5 billion on such research since 1992, and this research has not demonstrated the efficacy of alternative therapies. As of 2011, the NCCIH's sister organization in the National Cancer Institute, the Office of Cancer Complementary and Alternative Medicine, had given out grants of around $105 million each year for several years. Testing alternative medicine that has no scientific basis (as in the aforementioned grants) has been called a waste of scarce research resources.
That alternative medicine has been on the rise "in countries where Western science and scientific method generally are accepted as the major foundations for healthcare, and 'evidence-based' practice is the dominant paradigm" was described as an "enigma" in the Medical Journal of Australia. A 15-year systematic review published in 2022 on the global acceptance and use of CAM among medical specialists found the overall acceptance of CAM at 52% and the overall use at 45%.
==== In the United States ====
In the United States, the 1974 Child Abuse Prevention and Treatment Act (CAPTA) required that for states to receive federal money, they had to grant religious exemptions to child neglect and abuse laws regarding religion-based healing practices. Thirty-one states have child-abuse religious exemptions.
The use of alternative medicine in the US has increased, with a 50 percent increase in expenditures and a 25 percent increase in the use of alternative therapies between 1990 and 1997 in America. According to a national survey conducted in 2002, "36 percent of U.S. adults aged 18 years and over use some form of complementary and alternative medicine." Americans spend many billions on the therapies annually. Most Americans used CAM to treat and/or prevent musculoskeletal conditions or other conditions associated with chronic or recurring pain. In America, women were more likely than men to use CAM, with the biggest difference in use of mind-body therapies including prayer specifically for health reasons. In 2008, more than 37 percent of American hospitals offered alternative therapies, up from 27 percent in 2005 and 25 percent in 2004. More than 70 percent of the hospitals offering CAM were in urban areas.
A survey of Americans found that 88 percent thought that "there are some good ways of treating sickness that medical science does not recognize". Use of magnets was the most common tool in energy medicine in America, and among users of it, 58 percent described it as at least "sort of scientific", when it is not at all scientific. In 2002, at least 60 percent of US medical schools had at least some class time spent teaching alternative therapies. "Therapeutic touch" was taught at more than 100 colleges and universities in 75 countries before the practice was debunked by a nine-year-old child for a school science project.
==== Prevalence of use of specific therapies ====
The most common CAM therapies used in the US in 2002 were prayer (45%), herbalism (19%), breathing meditation (12%), meditation (8%), chiropractic medicine (8%), yoga (5–6%), body work (5%), diet-based therapy (4%), progressive relaxation (3%), mega-vitamin therapy (3%) and visualization (2%).
In Britain, the most commonly used alternative therapies were the Alexander technique, aromatherapy, Bach and other flower remedies, bodywork therapies including massage, counseling, stress therapies, hypnotherapy, meditation, reflexology, Shiatsu, Ayurvedic medicine, nutritional medicine, and yoga. Ayurvedic medicine remedies are mainly plant based with some use of animal materials. Safety concerns include the use of herbs containing toxic compounds and the lack of quality control in Ayurvedic facilities.
According to the National Health Service (England), the most commonly used complementary and alternative medicines (CAM) supported by the NHS in the UK are: acupuncture, aromatherapy, chiropractic, homeopathy, massage, osteopathy and clinical hypnotherapy.
==== In palliative care ====
Complementary therapies are often used in palliative care or by practitioners attempting to manage chronic pain in patients. Integrative medicine is considered more acceptable in the interdisciplinary approach used in palliative care than in other areas of medicine. "From its early experiences of care for the dying, palliative care took for granted the necessity of placing patient values and lifestyle habits at the core of any design and delivery of quality care at the end of life. If the patient desired complementary therapies, and as long as such treatments provided additional support and did not endanger the patient, they were considered acceptable." The non-pharmacologic interventions of complementary medicine can employ mind-body interventions designed to "reduce pain and concomitant mood disturbance and increase quality of life."
=== Regulation ===
The alternative medicine lobby has successfully pushed for alternative therapies to be subject to far less regulation than conventional medicine. Some professions of complementary/traditional/alternative medicine, such as chiropractic, have achieved full regulation in North America and other parts of the world and are regulated in a manner similar to that governing science-based medicine. In contrast, other approaches may be partially recognized and others have no regulation at all. In some cases, promotion of alternative therapies is allowed when there is demonstrably no effect, only a tradition of use. Despite laws making it illegal to market or promote alternative therapies for use in cancer treatment, many practitioners promote them.
Regulation and licensing of alternative medicine ranges widely from country to country, and state to state. In Austria and Germany, complementary and alternative medicine is mainly in the hands of doctors with MDs, and half or more of American alternative practitioners are licensed MDs. In Germany, herbs are tightly regulated: half are prescribed by doctors and covered by health insurance.
Government bodies in the US and elsewhere have published information or guidance about alternative medicine. The U.S. Food and Drug Administration (FDA), has issued online warnings for consumers about medication health fraud. This includes a section on Alternative Medicine Fraud, such as a warning that Ayurvedic products generally have not been approved by the FDA before marketing.
== Risks and problems ==
The National Science Foundation has studied the problematic side of the public's attitudes and understandings of science fiction, pseudoscience, and belief in alternative medicine. They use a quote from Robert L. Park to describe some issues with alternative medicine:
Alternative medicine is another concern. As used here, alternative medicine refers to all treatments that have not been proven effective using scientific methods. A scientist's view of the situation appeared in a recent book (Park 2000b):
Between homeopathy and herbal therapy lies a bewildering array of untested and unregulated treatments, all labeled alternative by their proponents. Alternative seems to define a culture rather than a field of medicine—a culture that is not scientifically demanding. It is a culture in which ancient traditions are given more weight than biological science, and anecdotes are preferred over clinical trials. Alternative therapies steadfastly resist change, often for centuries or even millennia, unaffected by scientific advances in the understanding of physiology or disease. Incredible explanations invoking modern physics are sometimes offered for how alternative therapies might work, but there seems to be little interest in testing these speculations scientifically.
=== Negative outcomes ===
According to the Institute of Medicine, use of alternative medical techniques may result in several types of harm:
"Direct harm, which results in adverse patient outcome."
"Economic harm, which results in monetary loss but presents no health hazard;"
"Indirect harm, which results in a delay of appropriate treatment, or in unreasonable expectations that discourage patients and their families from accepting and dealing effectively with their medical conditions;"
==== Interactions with conventional pharmaceuticals ====
Forms of alternative medicine that are biologically active can be dangerous even when used in conjunction with conventional medicine. Examples include immuno-augmentation therapy, shark cartilage, bioresonance therapy, oxygen and ozone therapies, and insulin potentiation therapy. Some herbal remedies can cause dangerous interactions with chemotherapy drugs, radiation therapy, or anesthetics during surgery, among other problems. An example of these dangers was reported by Associate Professor Alastair MacLennan of Adelaide University, Australia, regarding a patient who almost bled to death on the operating table after neglecting to mention that she had been taking "natural" potions to "build up her strength" before the operation, including a powerful anticoagulant that nearly caused her death.
To ABC Online, MacLennan also gives another possible mechanism:
And lastly there's the cynicism and disappointment and depression that some patients get from going on from one alternative medicine to the next, and they find after three months the placebo effect wears off, and they're disappointed and they move on to the next one, and they're disappointed and disillusioned, and that can create depression and make the eventual treatment of the patient with anything effective difficult, because you may not get compliance, because they've seen the failure so often in the past.
==== Side-effects ====
Conventional treatments are subjected to testing for undesired side-effects, whereas alternative therapies, in general, are not subjected to such testing at all. Any treatment – whether conventional or alternative – that has a biological or psychological effect on a patient may also have potential to possess dangerous biological or psychological side-effects. Attempts to refute this fact with regard to alternative therapies sometimes use the appeal to nature fallacy, i.e., "That which is natural cannot be harmful." Specific groups of patients such as patients with impaired hepatic or renal function are more susceptible to side effects of alternative remedies.
An exception to the normal thinking regarding side-effects is homeopathy. Since 1938, the FDA has regulated homeopathic products in "several significantly different ways from other drugs." Homeopathic preparations, termed "remedies", are extremely dilute, often far beyond the point where a single molecule of the original active (and possibly toxic) ingredient is likely to remain. They are, thus, considered safe on that count, but "their products are exempt from good manufacturing practice requirements related to expiration dating and from finished product testing for identity and strength", and their alcohol concentration may be much higher than allowed in conventional drugs.
==== Treatment delay ====
Alternative medicine may discourage people from getting the best possible treatment. Those having experienced or perceived success with one alternative therapy for a minor ailment may be convinced of its efficacy and persuaded to extrapolate that success to some other alternative therapy for a more serious, possibly life-threatening illness. For this reason, critics argue that therapies that rely on the placebo effect to define success are very dangerous. According to mental health journalist Scott Lilienfeld in 2002, "unvalidated or scientifically unsupported mental health practices can lead individuals to forgo effective treatments" and refers to this as opportunity cost. Individuals who spend large amounts of time and money on ineffective treatments may be left with precious little of either, and may forfeit the opportunity to obtain treatments that could be more helpful. In short, even innocuous treatments can indirectly produce negative outcomes. Between 2001 and 2003, four children died in Australia because their parents chose ineffective naturopathic, homeopathic, or other alternative medicines and diets rather than conventional therapies.
==== Unconventional cancer "cures" ====
There have always been "many therapies offered outside of conventional cancer treatment centers and based on theories not found in biomedicine. These alternative cancer cures have often been described as 'unproven,' suggesting that appropriate clinical trials have not been conducted and that the therapeutic value of the treatment is unknown." However, "many alternative cancer treatments have been investigated in good-quality clinical trials, and they have been shown to be ineffective.... The label 'unproven' is inappropriate for such therapies; it is time to assert that many alternative cancer therapies have been 'disproven'."
Edzard Ernst has stated:
any alternative cancer cure is bogus by definition. There will never be an alternative cancer cure. Why? Because if something looked halfway promising, then mainstream oncology would scrutinize it, and if there is anything to it, it would become mainstream almost automatically and very quickly. All curative "alternative cancer cures" are based on false claims, are bogus, and, I would say, even criminal.
=== Rejection of science ===
Complementary and alternative medicine (CAM) is not as well researched as conventional medicine, which undergoes intense research before release to the public. Practitioners of science-based medicine also discard practices and treatments when they are shown ineffective, while alternative practitioners do not. Funding for research is also sparse, making it difficult to do further research on the effectiveness of CAM. Most research on CAM is funded by government agencies; proposed CAM research is rejected by most private funding agencies because its results are not reliable. CAM research has to meet certain standards from research ethics committees, which most CAM researchers find almost impossible to meet. Even with the little research done on it, CAM has not been proven to be effective. Studies that have been done will be cited by CAM practitioners in an attempt to claim a basis in science. These studies tend to have a variety of problems, such as small samples, various biases, poor research design, lack of controls, and negative results. Even those with positive results can be better explained as false positives due to bias and noisy data.
Alternative medicine may lead to a false understanding of the body and of the process of science. Steven Novella, a neurologist at Yale School of Medicine, wrote that government-funded studies of integrating alternative medicine techniques into the mainstream are "used to lend an appearance of legitimacy to treatments that are not legitimate." Marcia Angell considered that critics felt that healthcare practices should be classified based solely on scientific evidence, and if a treatment had been rigorously tested and found safe and effective, science-based medicine will adopt it regardless of whether it was considered "alternative" to begin with. It is possible for a method to change categories (proven vs. unproven), based on increased knowledge of its effectiveness or lack thereof. Prominent supporters of this position are George D. Lundberg, former editor of the Journal of the American Medical Association (JAMA) and the journal's interim editor-in-chief Phil Fontanarosa.
Writing in 1999 in CA: A Cancer Journal for Clinicians, Barrie R. Cassileth mentioned that a 1997 letter to the United States Senate's Subcommittee on Public Health and Safety, which had deplored the lack of critical thinking and scientific rigor in OAM-supported research, had been signed by four Nobel Laureates and other prominent scientists. (This was supported by the National Institutes of Health (NIH).)
In March 2009, a staff writer for The Washington Post reported that the impending national discussion about broadening access to health care, improving medical practice and saving money was giving a group of scientists an opening to propose shutting down the National Center for Complementary and Alternative Medicine. They quoted one of these scientists, Steven Salzberg, a genome researcher and computational biologist at the University of Maryland, as saying "One of our concerns is that NIH is funding pseudoscience." They noted that the vast majority of studies were based on fundamental misunderstandings of physiology and disease, and had shown little or no effect.
Writers such as Carl Sagan, a noted astrophysicist, advocate of scientific skepticism and the author of The Demon-Haunted World: Science as a Candle in the Dark (1996), have lambasted the lack of empirical evidence to support the existence of the putative energy fields on which these therapies are predicated.
Sampson has also pointed out that CAM tolerated contradiction without thorough reason and experiment. Barrett has pointed out that there is a policy at the NIH of never saying something does not work, only that a different version or dose might give different results. Barrett also expressed concern that, just because some "alternatives" have merit, there is the impression that the rest deserve equal consideration and respect even though most are worthless, since they are all classified under the one heading of alternative medicine.
Some critics of alternative medicine are focused upon health fraud, misinformation, and quackery as public health problems, notably Wallace Sampson and Paul Kurtz, founders of the Scientific Review of Alternative Medicine, and Stephen Barrett, co-founder of The National Council Against Health Fraud and webmaster of Quackwatch. Grounds for opposing alternative medicine include that:
Alternative therapies typically lack any scientific validation, and their effectiveness either is unproven or has been disproved.
It is usually based on religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, or fraud.
Methods may incorporate or base themselves on traditional medicine, folk knowledge, spiritual beliefs, ignorance or misunderstanding of scientific principles, errors in reasoning, or newly conceived approaches claiming to heal.
Research on alternative medicine is frequently of low quality and methodologically flawed.
Treatments are not part of the conventional, science-based healthcare system.
Where alternative therapies have replaced conventional science-based medicine, even with the safest alternative medicines, failure to use or delay in using conventional science-based medicine has caused deaths.
Many alternative medical treatments are not patentable, which may lead to less research funding from the private sector. In addition, in most countries, alternative therapies (in contrast to pharmaceuticals) can be marketed without any proof of efficacy – also a disincentive for manufacturers to fund scientific research.
English evolutionary biologist Richard Dawkins, in his 2003 book A Devil's Chaplain, defined alternative medicine as a "set of practices that cannot be tested, refuse to be tested, or consistently fail tests." Dawkins argued that if a technique is demonstrated effective in properly performed trials then it ceases to be alternative and simply becomes medicine.
CAM is also often less regulated than conventional medicine. There are ethical concerns about whether people who perform CAM have the proper knowledge to treat patients. CAM is often done by non-physicians who do not operate with the same medical licensing laws which govern conventional medicine, and it is often described as an issue of non-maleficence.
According to two writers, Wallace Sampson and K. Butler, marketing is part of the training required in alternative medicine, and propaganda methods in alternative medicine have been traced back to those used by Hitler and Goebbels in their promotion of pseudoscience in medicine.
In November 2011 Edzard Ernst stated that the "level of misinformation about alternative medicine has now reached the point where it has become dangerous and unethical. So far, alternative medicine has remained an ethics-free zone. It is time to change this."
Harriet Hall criticized the low standard of evidence accepted by the alternative medicine community:
Science-based medicine has one rigorous standard of evidence, the kind [used for pharmaceuticals] .... CAM has a double standard. They gladly accept a lower standard of evidence for treatments they believe in. However, I suspect they would reject a pharmaceutical if it were approved for marketing on the kind of evidence they accept for CAM.
=== Conflicts of interest ===
Some commentators have said that special consideration must be given to the issue of conflicts of interest in alternative medicine. Edzard Ernst has said that most researchers into alternative medicine are at risk of "unidirectional bias" because of a generally uncritical belief in their chosen subject. Ernst cites as evidence the phenomenon whereby 100% of a sample of acupuncture trials originating in China had positive conclusions. David Gorski contrasts evidence-based medicine, in which researchers try to disprove hypotheses, with what he says is the frequent practice in pseudoscience-based research of striving to confirm pre-existing notions. Harriet Hall writes that there is a contrast between the circumstances of alternative medicine practitioners and disinterested scientists: in the case of acupuncture, for example, an acupuncturist would have "a great deal to lose" if acupuncture were rejected by research, but the disinterested skeptic would not lose anything if its effects were confirmed; rather, their change of mind would enhance their skeptical credentials.
=== Use of health and research resources ===
Research into alternative therapies has been criticized for "diverting research time, money, and other resources from more fruitful lines of investigation in order to pursue a theory that has no basis in biology." Research methods expert and author of Snake Oil Science, R. Barker Bausell, has stated that "it's become politically correct to investigate nonsense." A commonly cited statistic is that the US National Institutes of Health had spent $2.5 billion on investigating alternative therapies prior to 2009, with none being found to be effective.
== See also ==
Alternative therapies for developmental and learning disabilities
Conservation medicine
Ethnomedicine
Gallbladder flush
Psychic surgery
Siddha medicine
Thomsonianism, an early 19th-century movement in the United States
== Notes ==
== References ==
== Bibliography ==
Bivins, R. (2007). Alternative Medicine? A History. Oxford University Press. ISBN 978-0-19-921887-5.
Board of Science and Education, British Medical Association (1993). Complementary Medicine: New Approaches to Good Practice. Oxford University Press. ISBN 978-0-19-286166-5.
Callahan, D., ed. (2004). The Role of Complementary and Alternative Medicine: Accommodating Pluralism. Washington, D.C.: Georgetown University Press. ISBN 978-1-58901-464-0.
Cohen, Michael H. (1998). Complementary & Alternative Medicine: Legal Boundaries and Regulatory Perspectives. Baltimore: Johns Hopkins University Press. ISBN 978-0-8018-5689-1.
Committee on the Use of Complementary and Alternative Medicine by the American Public for the Board on Health Promotion and Disease Prevention, Institute of Medicine (2005). Complementary and Alternative Medicine in the United States. Washington, D.C.: National Academy Press. ISBN 978-0-309-09270-8.
Gevitz, N. (1997) [1993]. "Chapter 28: Unorthodox Medical Theories". In Bynum, W.F.; Porter, R.S. (eds.). Companion Encyclopedia of the History of Medicine. Vol. 1. New York & London: Routledge. ISBN 978-0-415-16419-1.
Hahnemann, S. (1833). The Homœopathic Medical Doctrine, or "Organon of the Healing Art". Translated by Devrient, C.H. Annotated by Stratten, S. Dublin: W.F. Wakeman.
Kasper, Dennis L.; Fauci, Anthony S.; Hauser, Stephen L.; Longo, Dan L.; Jameson, J. Larry; Loscalzo, Joseph (2015). Harrison's Principles of Internal Medicine (19th ed.). New York: McGraw Hill Education. ISBN 978-0-07-180215-4.
Kopelman, L. "The Role of Science in Assessing Conventional, Complementary, and Alternative Medicines". In Callahan (2004), pp. 36–53.
Mishra, Lakshmi Chandra (2004). Scientific Basis for Ayurvedic Therapies. Boca Raton: CRC Press. ISBN 978-0-8493-1366-0.
O'Connor, Bonnie Blair (1995). Healing Traditions: Alternative Medicine and the Health Professions. Philadelphia: University of Pennsylvania Press. ISBN 978-0-8122-1398-0.
Ruggie, M. (2004). Marginal to Mainstream: Alternative Medicine in America. Cambridge University Press. ISBN 978-0-521-83429-2.
Sagan, C. (1996). The Demon-Haunted World: Science As a Candle in the Dark. New York: Random House. ISBN 978-0-394-53512-8.
Saks, M. (2003). Orthodox and Alternative Medicine: Politics, Professionalization and Health Care. Sage Publications. ISBN 978-1-4462-6536-9.
Sointu, E. (2012). Theorizing Complementary and Alternative Medicines: Wellbeing, Self, Gender, Class. Basingstoke, England: Palgrave Macmillan. ISBN 978-0-230-30931-9.
Taylor, Kim (2005). Chinese Medicine in Early Communist China, 1945–63: a Medicine of Revolution. Needham Research Institute Studies. London and New York: RoutledgeCurzon. ISBN 978-0-415-34512-5.
Walton J. (2000) [Session 1999–2000, HL 123]. Sixth Report: Complementary and Alternative Medicine. London: The Stationery Office. ISBN 978-0-10-483100-7.
Wieland, L.S.; et al. (2011). "Development and classification of an operational definition of complementary and alternative medicine for the Cochrane Collaboration". Alternative Therapies in Health and Medicine. 17 (2): 50–59. PMC 3196853. PMID 21717826.
Wujastyk, D., ed. (2003). The Roots of Ayurveda: Selections from Sanskrit Medical Writings. Translated by D. Wujastyk. London and New York: Penguin Books. ISBN 978-0-14-044824-5.
General Guidelines for Methodologies on Research and Evaluation of Traditional Medicine (PDF). Vol. WHO/EDM/TRM/2001.1. Geneva: World Health Organization (WHO). 2000. Archived (PDF) from the original on 2022-10-09. This document is not a formal publication of the WHO. The views expressed in documents by named authors are solely the responsibility of those authors.
WHO Guidelines on Basic Training and Safety in Chiropractic (PDF). Geneva: WHO. 2005. ISBN 978-92-4-159371-7. Archived (PDF) from the original on 2022-10-09.
== Further reading ==
Bausell, R.B. (2007). Snake oil science: the truth about complementary and alternative medicine. Oxford University Press. ISBN 978-0-19-531368-0.
Benedetti, F.; et al. (2003). "Open versus hidden medical treatments: The patient's knowledge about a therapy affects the therapy outcome". Prevention & Treatment. 6 (1). doi:10.1037/1522-3736.6.1.61a.
Dawkins, R. (2001). "Foreword". In Diamond, J. (ed.). Snake Oil and Other Preoccupations. London: Vintage. ISBN 978-0-09-942833-6. Reprinted in Dawkins 2003.
Downing AM, Hunter DG (2003). "Validating clinical reasoning: A question of perspective, but whose perspective?". Manual Therapy. 8 (2): 117–119. doi:10.1016/S1356-689X(02)00077-2. PMID 12890440.
Eisenberg DM (July 1997). "Advising patients who seek alternative medical therapies". Annals of Internal Medicine. 127 (1): 61–69. doi:10.7326/0003-4819-127-1-199707010-00010. PMID 9214254. S2CID 23351104.
Gunn IP (December 1998). "A critique of Michael L. Millenson's book, Demanding Medical Excellence: Doctors and Accountability in the Information Age, and its Relevance to CRNAs and Nursing". AANA Journal. 66 (6): 575–582. ISSN 0094-6354. PMID 10488264.
Hand, W.D. (1980). "Folk Magical Medicine and Symbolism in the West". Magical Medicine. Berkeley: University of California Press. pp. 305–319. ISBN 978-0-520-04129-5. OCLC 6420468.
Illich, I. (1976). Limits to Medicine: Medical Nemesis: The Expropriation of Health. Penguin. ISBN 978-0-14-022009-4. OCLC 4134656.
Mayo Clinic (2007). Mayo Clinic Book of Alternative Medicine: The New Approach to Using the Best of Natural Therapies and Conventional Medicine. Parsippany, New Jersey: Time Home Entertainment. ISBN 978-1-933405-92-6.
Planer, F.E. (1988). Superstition (Rev. ed.). Buffalo, New York: Prometheus Books. ISBN 978-0-87975-494-5. OCLC 18616238.
Rosenfeld, A. (c. 2000). "Where Do Americans Go for Healthcare?". Cleveland, Ohio: Case Western Reserve University. Archived from the original on 2006-05-09. Retrieved 2010-09-23.
Snyder, Mariah; Lindquist, Ruth (May 2001). "Issues in Complementary Therapies: How We Got to Where We Are". Online Journal of Issues in Nursing. 6 (2): 1. PMID 11469921. Archived from the original on 2017-02-03. Retrieved 2017-01-18.
Stevens, P. Jr. (November–December 2001). "Magical thinking in complementary and alternative medicine". Skeptical Inquirer.
Tonelli MR (2001). "The limits of evidence-based medicine". Respiratory Care. 46 (12): 1435–1440, discussion 1440–1441. PMID 11728302.
Trivieri, L. Jr. (2002). Anderson, J.W. (ed.). Alternative Medicine: The Definitive Guide. Berkeley: Ten Speed Press. ISBN 978-1-58761-141-4.
Wisneski, L.A.; et al. (2005). The scientific basis of integrative medicine. CRC Press. ISBN 978-0-8493-2081-1.
Zalewski, Z. (1999). "Importance of philosophy of science to the history of medical thinking". CMJ. 40 (1): 8–13. PMID 9933889. Archived from the original on 2004-02-06.
=== World Health Organization ===
Benchmarks for training in traditional / complementary and alternative medicine
WHO Kobe Centre; Bodeker, G.; et al. (2005). WHO Global Atlas of Traditional, Complementary and Alternative Medicine. WHO. ISBN 978-92-4-156286-7. Summary.
=== Journals ===
Alternative Medicine Review: A Journal of Clinical Therapeutics. Sandpoint, Idaho : Thorne Research, c. 1996 NLM ID: 9705340 Archived 2018-06-12 at the Wayback Machine
Alternative Therapies in Health and Medicine. Aliso Viejo, California : InnoVision Communications, c1995- NLM ID: 9502013 Archived 2018-06-12 at the Wayback Machine
BMC Complementary and Alternative Medicine Archived 2015-09-24 at the Wayback Machine. London: BioMed Central, 2001 NLM ID: 101088661 Archived 2018-06-12 at the Wayback Machine
Complementary Therapies in Medicine. Edinburgh; New York : Churchill Livingstone, c. 1993 NLM ID: 9308777 Archived 2018-06-12 at the Wayback Machine
Evidence Based Complementary and Alternative Medicine: eCAM. New York: Hindawi, c. 2004 NLM ID: 101215021 Archived 2018-09-14 at the Wayback Machine
Forschende Komplementärmedizin / Research in Complementary Medicine
Journal for Alternative and Complementary Medicine New York : Mary Ann Liebert, c. 1995
Scientific Review of Alternative Medicine (SRAM) Archived 2010-08-22 at the Wayback Machine
== External links == | Wikipedia/Alternative_therapy |
Heat therapy, also called thermotherapy, is the therapeutic use of heat, for example for pain relief and general health. It can take the form of a hot cloth, hot water bottle, ultrasound, heating pad, hydrocollator pack, whirlpool bath, or cordless far-infrared (FIR) heat therapy wrap, among others. It can be beneficial to those with arthritis, stiff muscles, and injuries to deep tissue. Heat may be an effective self-care treatment for conditions such as rheumatoid arthritis.
Heat therapy is most commonly used for rehabilitation purposes. The therapeutic effects of heat include increasing the extensibility of collagen tissues; decreasing joint stiffness; reducing pain; relieving muscle spasms; reducing inflammation and edema; aiding in the post-acute phase of healing; and increasing blood flow. The increased blood flow to the affected area provides proteins, nutrients, and oxygen for better healing. There is some evidence that heat therapy can also aid in the treatment of neurodegenerative diseases such as Alzheimer's, and that it may offer cardiovascular benefits.
== Application ==
=== Direct contact ===
Moist heat therapy has long been believed to be more effective at warming tissues than dry heat, because water transfers heat more quickly than air. Clinical studies, however, do not support this popular belief: moist heat mainly produces the perception that the tissue is being heated more deeply. In fact, recent studies indicate that vasodilation, the expansion of the blood capillaries (vessels) to allow greater blood flow, is improved with dry heat therapy. Expansion of the blood capillaries is the primary objective of heat therapy, increasing its effect on muscles, joints, and soft tissue. Heat is typically applied by placing a warming device on the relevant body part. Separately, frequent sauna use has been linked to a lower risk of vascular disease.
Newer heat therapy devices combine a carbon fiber heater with a cordless rechargeable lithium battery, built into a body-specific wrap (e.g., a shoulder wrap or back wrap) for targeted heat therapy. Such devices can be used as alternatives to chemical or plug-in heating pads, but they have not been shown to improve the clinical benefit. All such devices primarily provide heat to promote vasodilation.
=== Infrared radiation ===
Infrared radiation is a convenient way to heat parts of the body. It has an advantage over direct contact in that radiation can directly heat the depth at which the blood capillaries and nerve terminals lie. When heat comes from a contact source, it must first warm the outer layer of the skin, and heat is then transferred to the deeper layers by conduction. Since heat conduction requires a temperature gradient, and there is a maximum temperature that can safely be applied to the skin (around 42 °C), this means a lower temperature at the depth where the warming is needed.
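This limitation can be stated with the one-dimensional form of Fourier's law of heat conduction (a standard physics relation, included here for illustration rather than taken from the therapy literature):

q = -k \, \frac{dT}{dx}

where q is the heat flux through the tissue, k is the tissue's thermal conductivity, and dT/dx is the temperature gradient. With the skin surface held near the safe maximum of about 42 °C and deeper tissue near core temperature (about 37 °C), only a modest gradient, and hence a modest flux, can be driven across several millimeters of tissue.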
Infrared (IR for short) is the part of the electromagnetic radiation spectrum spanning wavelengths between 0.78 μm and 1 mm. It is usually divided into three segments (a brief classification sketch in code follows the list):
IR-A, from 0.78 to 1.4 μm.
IR-B, from 1.4 to 3 μm.
IR-C, from 3 μm to 1 mm.
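As a rough illustration of these band boundaries, the following minimal Python sketch (the function name and example are invented for illustration) classifies a wavelength given in micrometres:

```python
def classify_ir(wavelength_um: float) -> str:
    """Classify a wavelength (in micrometres) into the three IR segments
    listed above; 1 mm = 1000 micrometres marks the upper IR boundary."""
    if wavelength_um < 0.78 or wavelength_um > 1000.0:
        return "not infrared"
    if wavelength_um <= 1.4:
        return "IR-A"  # most penetrating; reaches a few millimetres into skin
    if wavelength_um <= 3.0:
        return "IR-B"  # absorbed mainly in the dermis (about 1 mm deep)
    return "IR-C"      # absorbed mostly in the stratum corneum

print(classify_ir(1.0))  # -> "IR-A", the band favoured by therapeutic lamps
```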
IR radiation is more useful than visible radiation for heating the body, because the skin absorbs most of it, whereas much visible light is reflected. The penetration depth of infrared radiation into the skin depends on wavelength: IR-A is the most penetrating, reaching a few millimeters; IR-B penetrates into the dermis (about 1 mm); and IR-C is mostly absorbed in the outer layer of the epidermis (the stratum corneum). For this reason, infrared lamps used for therapeutic purposes mainly produce IR-A radiation.
== Mechanism of action and indications ==
Heat creates higher tissue temperatures, which produces vasodilation that increases the supply of oxygen and nutrients and the elimination of carbon dioxide and metabolic waste.
Heat therapy is useful for muscle spasms, myalgia, fibromyalgia, contracture, and bursitis.
Moist heat can be used on abscesses to help drain the abscess faster. A study from 2005 showed heat therapy to be effective in treating leishmaniasis, a tropical parasitic skin infection.
Heat therapy is also sometimes used in cancer treatment to augment the effect of chemotherapy or radiotherapy, but it is not enough to kill cancer cells on its own.
Heat therapy has been shown to be beneficial in treating sub-acute and chronic musculoskeletal pain, but its use for acute musculoskeletal injuries is contraindicated. The duration, frequency, and type of heat application may differ depending on the quality of the pain and the depth of the tissue being targeted. According to a 2021 article in the Archives of Physical Medicine and Rehabilitation, heat therapy, particularly local heat application (LHA), can provide pain relief, reduce muscle stiffness (increasing the muscle's available range of motion), and improve blood flow to the affected area through vasodilation, thereby promoting healing of chronic musculoskeletal injuries.
Heat therapy is contraindicated in cases of acute injury, bleeding disorders (because of vasodilation), tissues with a severe lack of sensitivity, scar tissue, and tissues with inadequate vascular supply, because the increased metabolic rate and demand that heat produces may exceed what poorly perfused tissue can meet, resulting in ischemia.
In the case of chronic musculoskeletal pain, heat therapy can be used to help reduce pain, increase range of motion, and improve flexibility. A longer duration of heat application may be required for more chronic conditions, such as 10 to 30 minutes, two to three times a day. Physical therapy heat modalities that can be utilized to treat chronic conditions include hot packs, paraffin, warm whirlpool, fluidotherapy, and thermal ultrasound. Assessing skin integrity is crucial before and after the application of long durations of heat therapy. Prolonged heat therapy can help promote tissue healing, which can be especially beneficial for chronic conditions including fibromyalgia and low back pain.
Deep-seated tissue can be treated with shortwave, microwave, and ultrasonic waves, which produce high temperatures that penetrate more deeply. Shortwave diathermy uses a 27 MHz current, microwave diathermy uses 915 and 2456 MHz, and therapeutic ultrasound is an acoustic vibration of about 1 MHz. With ultrasound, reflection at the interface between soft tissue and bone superimposes on the incoming wave and increases the energy available for absorption, and a significant part of the longitudinal compression wave is converted into shear waves; because these are rapidly absorbed, the interface is selectively heated.
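For a sense of scale, the free-space wavelengths of the electromagnetic frequencies quoted above follow from λ = c/f. The short Python sketch below is illustrative only (the dictionary and names are invented); wavelengths inside tissue are considerably shorter because of tissue's high permittivity, and ultrasound is acoustic, so the formula does not apply to it:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

# Electromagnetic frequencies named in the text for deep-heating modalities.
modalities_hz = {
    "shortwave diathermy": 27e6,
    "microwave diathermy (low)": 915e6,
    "microwave diathermy (high)": 2456e6,
}

for name, f in modalities_hz.items():
    wavelength_m = C / f  # free-space wavelength; shorter inside tissue
    print(f"{name}: {wavelength_m:.3f} m")
# -> about 11.1 m, 0.328 m, and 0.122 m respectively
```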
== For headaches ==
Heat therapy can be used for the treatment of headaches and migraines. Many people with chronic headaches also experience tight muscles in the neck and upper back, and applying constant heat to this area can help release the tension associated with headache pain. To apply heat for headaches, many people use microwaveable pads, which can often overheat, potentially leading to injury, and which lose their heat after a few minutes. Some newer products run heated water through pads to maintain a constant temperature, allowing people with headaches to use hands-free heat therapy. However, no substantial scientific evidence exists for many of these claims.
== Therapeutic benefits ==
Thermotherapy increases the extensibility of collagen tissues, which in turn allows stiff muscles to stretch, and it can relieve joint stiffness in a range of conditions. Shortwave and microwave heat application may reduce muscle spasms, and selective heating with microwaves can accelerate the absorption of hematomas. In addition, hot pack therapy has been found to be particularly effective for reducing muscle soreness and enhancing recovery: by improving blood flow and promoting tissue healing, it helps relieve pain both within the first 24 hours after exercise and in the days that follow. Ultrasound is not absorbed significantly in homogeneous muscle. Heat therapy using hyperthermia has been used to treat cancer in combination with ionizing radiation.
== For muscle soreness ==
The immediate use of either dry or moist heat helps preserve muscle strength and activity, and there is considerable pain reduction after the application of moist heat. In choosing between the two, studies show that moist heat provides enhanced healing benefits for muscle soreness and can achieve a positive effect in roughly a quarter of the application time required for dry heat. For delayed onset muscle soreness, a myogenic condition that affects the duration and intensity of muscle soreness, heat has been shown to reduce pain if applied within one hour of exercise; applied within that window, heat therapy not only reduces pain but continues to provide relief beyond 24 hours. This effect is likely due to improved blood flow and enhanced tissue healing, with hot packs showing the most consistent benefits.
== For edema after a distal-radius fracture ==
Swelling is inevitable when using heating modalities, but many people are unaware of their effect on the volume of swelling after application. Studies show a greater immediate increase in edema after whirlpool treatment than after a moist hot pack; 30 minutes later, however, there was no difference in swelling between patients who received either modality, suggesting that moist hot packs and whirlpool therapy have comparable effects on edema after distal-radius fractures. According to the available data, heat therapy for lower-limb lymphoedema may help reduce limb circumference and volume when administered over an extended total duration (1,200–3,600 minutes) at a controlled skin temperature (39–42 °C) in a supervised setting (laboratory, hospital, or outpatient clinic). There was no evidence that heat therapy used within these parameters was harmful to lymphoedema patients. As of now, however, there is insufficient evidence to support recommendations for the use of heat therapy for lymphoedema in practice.
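The dosing window reported above amounts to a simple range check, sketched below in Python for illustration only (the function is hypothetical and is not clinical guidance):

```python
def within_studied_range(total_minutes: float, skin_temp_c: float) -> bool:
    """Return True if a lower-limb lymphoedema heat protocol falls inside
    the ranges reported in the studies summarized above: a total duration
    of 1200-3600 minutes at a skin temperature of 39-42 degrees Celsius."""
    return 1200 <= total_minutes <= 3600 and 39.0 <= skin_temp_c <= 42.0

print(within_studied_range(1800, 40.0))  # True: inside the studied window
print(within_studied_range(600, 40.0))   # False: total duration too short
```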
== For women during menstruation and labor ==
Heat therapy has been shown to be an effective modality for women with dysmenorrhea, or pain during menstruation. NSAIDs are usually the primary treatment for dysmenorrhea but are associated with adverse effects such as indigestion, headaches, and drowsiness. Superficial moist heat is an alternative that can help calm the abdominal muscle cramps associated with dysmenorrhea without these adverse effects; moist heat can also improve pelvic circulation, which further helps reduce pain. Heat therapy has also been shown to help women with pain and to reduce the duration of the first stage of labor, which is associated with painful contractions as the cervix dilates. Heat can help calm these painful contractions while improving circulation, which blocks pain signals to the brain.
== See also ==
Contrast bath therapy
Diathermy
Infrared radiation, one means for delivering heat
Migraine#Cryotherapy and Thermotherapy
== References == | Wikipedia/Heat_therapy |
Rational emotive behavior therapy (REBT), previously called rational therapy and rational emotive therapy, is an active-directive, philosophically and empirically based psychotherapy, the aim of which is to resolve emotional and behavioral problems and disturbances and to help people to lead happier and more fulfilling lives.
REBT posits that people have erroneous beliefs about situations they are involved in, and that these beliefs cause disturbance, but can be disputed and changed.
== History ==
Rational emotive behavior therapy was created and developed by the American psychotherapist and psychologist Albert Ellis, who was inspired by many of the teachings of Asian, Greek, Roman and modern philosophers. REBT is a form of cognitive behavioral therapy (CBT) and was first expounded by Ellis in the mid-1950s; development continued until his death in 2007. Ellis's name became synonymous with the highly influential therapy. Psychology Today noted, "No individual—not even Freud himself—has had a greater impact on modern psychotherapy."
REBT is both a psychotherapeutic system of theory and practices and a school of thought established by Ellis. He first presented his ideas at a conference of the American Psychological Association in 1956, then published a seminal article in 1957 entitled "Rational psychotherapy and individual psychology", in which he set the foundation for what he was calling rational therapy (RT) and carefully responded to questions from Rudolf Dreikurs and others about the similarities and differences with Alfred Adler's individual psychology. This was around a decade before psychiatrist Aaron Beck first set forth his "cognitive therapy", after Ellis had contacted him in the mid-1960s. Ellis's own approach was renamed Rational Emotive Therapy in 1959, then given its current name in 1992.
Precursors of certain fundamental aspects of rational emotive behavior therapy have been identified in ancient philosophical traditions, particularly among the Stoics Marcus Aurelius, Epictetus, Zeno of Citium, Chrysippus, Panaetius of Rhodes, Cicero, and Seneca, and the early Asian philosophers Confucius and Gautama Buddha. In his first major book on rational therapy, Ellis wrote that the central principle of his approach, that people are rarely emotionally affected by external events but rather by their thinking about such events, "was originally discovered and stated by the ancient Stoic philosophers." Ellis illustrates this with a quote from the Enchiridion of Epictetus: "Men are disturbed not by things, but by the views which they take of them." Ellis noted that Shakespeare expressed a similar thought in Hamlet: "There's nothing good or bad but thinking makes it so." Ellis also acknowledged early 20th-century therapists, particularly Paul Charles Dubois, though he read Dubois's work only several years after developing his own therapy.
== Theoretical assumptions ==
The REBT framework posits that humans have both innate rational (meaning self-helping, socially helping, and constructive) and irrational (meaning self-defeating, socially defeating, and unhelpful) tendencies and leanings. REBT claims that people to a large degree consciously and unconsciously construct emotional difficulties such as self-blame, self-pity, clinical anger, hurt, guilt, shame, depression and anxiety, and behavior tendencies like procrastination, compulsiveness, avoidance, addiction and withdrawal, by means of their irrational and self-defeating thinking, emoting and behaving.
REBT is then applied as an educational process in which the therapist often active-directively teaches the client how to identify irrational and self-defeating beliefs and philosophies which in nature are rigid, extreme, unrealistic, illogical and absolutist, and then to forcefully and actively question and dispute them and replace them with more rational and self-helping ones. By using different cognitive, emotive and behavioral methods and activities, the client, together with help from the therapist and in homework exercises, can gain a more rational, self-helping and constructive rational way of thinking, emoting and behaving.
One of the main objectives in REBT is to show the client that, whenever unpleasant and unfortunate activating events occur in people's lives, they have a choice between making themselves feel healthily and self-helpingly sorry, disappointed, frustrated, and annoyed, or making themselves feel unhealthily and self-defeatingly horrified, terrified, panicked, depressed, self-hating, and self-pitying. By attaining and ingraining a more rational and self-constructive philosophy of themselves, others and the world, people often are more likely to behave and emote in more life-serving and adaptive ways.
A fundamental premise of REBT is that humans do not get emotionally disturbed by unfortunate circumstances, but by how they construct their views of these circumstances through their language, evaluative beliefs, meanings and philosophies about the world, themselves and others. This idea has been traced back as far as the Stoic philosopher Epictetus, who is often cited as expressing similar views in antiquity.
== A-B-C-D-E-F model ==
In REBT, clients usually learn and begin to apply this premise by learning the A-B-C-D-E-F model of psychological disturbance and change. The following letters represent the following meanings in this model:
A: Adversity
B: Beliefs about adversity
C: Emotional consequences
D: Disputations to challenge beliefs about adversity
E: Effective new rational beliefs
F: New feelings
The A-B-C model states that it is not the adversity (A, or activating event) alone that causes disturbed and dysfunctional emotional and behavioral consequences (C), but also what people irrationally believe (B) about the adversity. The adversity can be an external situation, or a thought, a feeling or other kind of internal event, and it can refer to an event in the past, present, or future.
The irrational beliefs (B) that are most important in the A-B-C model are the explicit and implicit philosophical meanings and assumptions about events, personal desires, and preferences. The most significant beliefs are highly evaluative and consist of interrelated and integrated cognitive, emotional and behavioral aspects and dimensions. According to REBT, if a person's evaluative belief about the activating event is rigid, absolutistic, fictional and dysfunctional, the emotional and behavioral consequence is likely to be self-defeating and destructive. Alternatively, if the belief is preferential, flexible, and constructive, the consequence is likely to be self-helping and constructive.
Through REBT, by understanding the role of their mediating, evaluative and philosophically based illogical, unrealistic and self-defeating meanings, interpretations and assumptions in disturbance, individuals can learn to identify them, then proceed to D, disputing and questioning the evidence for them. At E, effective new philosophy, they can recognize and reinforce the notion that no evidence exists for any psychopathological must, ought or should, distinguish such demands from healthy constructs, and subscribe to more constructive and self-helping philosophies. This new, reasonable perspective leads to F, new feelings and behaviors appropriate to the adversity addressed in the exercise.
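To make the flow of the model concrete, the following Python sketch records a single pass through the six letters; it is purely illustrative (the class and the example strings are invented and are not part of REBT practice):

```python
from dataclasses import dataclass

@dataclass
class ABCDEFEpisode:
    """One pass through the A-B-C-D-E-F model described above."""
    adversity: str         # A: activating event
    belief: str            # B: (irrational) belief about the adversity
    consequence: str       # C: emotional/behavioral consequence
    disputation: str       # D: challenge to the belief
    effective_belief: str  # E: effective new rational belief
    new_feeling: str       # F: resulting healthier feeling

episode = ABCDEFEpisode(
    adversity="Passed over for a promotion",
    belief="I must always succeed; this proves I am worthless",
    consequence="Depression and withdrawal from colleagues",
    disputation="Where is the evidence that one setback defines my worth?",
    effective_belief="I would prefer to succeed, but I can fail and still "
                     "be a worthwhile, fallible person",
    new_feeling="Disappointment rather than depression",
)
```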
== Psychological dysfunction ==
One of the main pillars of REBT is that irrational and dysfunctional ways and patterns of thinking, feeling, and behaving are contributing to human disturbance and emotional and behavioral self-defeatism and social defeatism. REBT generally teaches that when people turn flexible preferences, desires and wishes into grandiose, absolutistic and fatalistic dictates, this tends to contribute to disturbance and upset. These dysfunctional patterns are examples of cognitive distortions.
=== Irrational beliefs ===
REBT proposes four core irrational ways of thinking that create suffering:
Demands: The tendency to demand success, fair treatment, and respect (e.g., I must be treated fairly).
Awfulizing: The tendency to consider adverse events as awful or terrible (e.g., It's awful when I am disrespected).
Low Frustration Tolerance (LFT): The belief that one could not stand or tolerate adversity (e.g., I cannot stand being treated unfairly).
Depreciation: The belief that one event reflects the person as a whole (e.g., When I fail it shows that I am a complete failure).
=== Core beliefs that disturb humans ===
Ellis suggested that humans build on the above distorted ways of thinking to create three core beliefs or philosophies through which they tend to disturb themselves:
=== Rigid demands that humans make ===
REBT commonly posits that at the core of irrational beliefs there often are explicit or implicit rigid demands and commands, and that extreme derivatives like awfulizing, low frustration tolerance, people deprecation and overgeneralizations are accompanied by these. According to REBT, the core dysfunctional philosophies in a person's evaluative emotional and behavioral belief system are also very likely to contribute to unrealistic, arbitrary and crooked inferences and distortions in thinking. REBT therefore first teaches that when people in an insensible and devout way overuse absolutistic, dogmatic and rigid "shoulds", "musts", and "oughts", they tend to disturb and upset themselves.
=== Over-generalization ===
Further, REBT generally posits that disturbed evaluations occur to a large degree through overgeneralization, wherein people exaggerate and globalize events, traits, or behaviors (usually unwanted ones) out of context, while almost always ignoring the positive events, traits, or behaviors. For example, awfulizing is partly a mental magnification of the importance of an unwanted situation into a catastrophe or horror, elevating the rating of something from bad, to worse than it should be, to beyond totally bad, to intolerable, and finally to a "holocaust". The same exaggeration and overgeneralizing occurs with human rating, wherein humans come to be arbitrarily and axiomatically defined by their perceived flaws or misdeeds.
=== Low frustration tolerance ===
Low frustration tolerance is the inability to tolerate unpleasant feelings or stressful situations.
=== Secondary disturbances ===
Essential to REBT theory is also the concept of secondary disturbances which people sometimes construct on top of their primary disturbance. As Ellis emphasizes:
Because of their self-consciousness and their ability to think about their thinking, they can very easily disturb themselves about their disturbances and can also disturb themselves about their ineffective attempts to overcome their emotional disturbances.
== Origins of dysfunction ==
Regarding cognitive-affective-behavioral processes in mental functioning and dysfunctioning, originator Albert Ellis explains:
REBT assumes that human thinking, emotion, and action are not really separate or disparate processes, but that they all significantly overlap and are rarely experienced in a pure state. Much of what we call emotion is nothing more nor less than a certain kind—a biased, prejudiced, or strongly evaluative kind—of thought. But emotions and behaviors significantly influence and affect thinking, just as thinking influences emotions and behaviors. Evaluating is a fundamental characteristic of human organisms and seems to work in a kind of closed circuit with a feedback mechanism: First, perception biases response, and then response tends to bias subsequent perception. Also, prior perceptions appear to bias subsequent perceptions, and prior responses appear to bias subsequent responses. What we call feelings almost always have a pronounced evaluating or appraisal element.
REBT then generally proposes that many of these self-defeating cognitive, emotive and behavioral tendencies are both innately biological and indoctrinated early in and during life, and further grow stronger as a person continually revisits, clings to, and acts on them. Ellis alluded to similarities between REBT and general semantics when explaining the role of irrational beliefs in self-defeating tendencies, citing Alfred Korzybski as a significant modern influence on this thinking.
REBT differs from other clinical approaches, such as psychoanalysis, in that it places little emphasis on exploring the past, focusing instead on changing current evaluations and philosophical thinking, emoting, and behaving in relation to oneself, others, and the conditions under which people live.
== Disturbances ==
REBT sees disturbances as caused by characteristics of a person, rather than by a particular past event:
Almost all (neurotic clients) have innate tendencies to take their strong desires and preferences (which they learn and which they also have biological predispositions to construct) and to escalate them into unrealistic, illogical, absolutist demands and to thereby disturb themselves when these rigid imperatives are not fulfilled.
== Other insights ==
Other insights of REBT (some referring to the ABCDEF model above) are:
Insight 1 – People seeing and accepting the reality that their emotional disturbances at point C are only partially caused by the activating events or adversities at point A that precede C. Although A contributes to C, and although disturbed Cs (such as feelings of panic and depression) are much more likely to follow strong negative As (such as being assaulted or raped), than they are to follow weak As (such as being disliked by a stranger), the main or more direct cores of extreme and dysfunctional emotional disturbances (Cs) are people's irrational beliefs—the "absolutistic" (inflexible) "musts" and their accompanying inferences and attributions that people strongly believe about the activating event.
Insight 2 – No matter how, when, and why people acquire self-defeating or irrational beliefs (i.e., beliefs that are the main cause of their dysfunctional emotional-behavioral consequences), if they are disturbed in the present, they tend to keep holding these irrational beliefs and continue upsetting themselves with these thoughts. They do so not because they held them in the past, but because they still actively hold them in the present (often unconsciously), while continuing to reaffirm their beliefs and act as if they are still valid. In their minds and hearts, the troubled people still follow the core "musturbatory" philosophies they adopted or invented long ago, or ones they recently accepted or constructed.
Insight 3 – No matter how well they have gained insights 1 and 2, insight alone rarely enables people to undo their emotional disturbances. They may feel better when they know, or think they know, how they became disturbed, because insights can feel useful and curative. But it is unlikely that people will actually get better and stay better unless they have and apply insight 3, which is that there is usually no way to get better and stay better except by continual work and practice in looking for and finding one's core irrational beliefs; actively, energetically, and scientifically disputing them; replacing one's absolute "musts" (rigid requirements about how things should be) with more flexible preferences; changing one's unhealthy feelings to healthy, self-helping emotions; and firmly acting against one's dysfunctional fears and compulsions. Only by a combined cognitive, emotive, and behavioral, as well as a quite persistent and forceful attack on one's serious emotional problems, is one likely to significantly ameliorate or remove them, and keep them removed.
== Intervention ==
As explained, REBT is a therapeutic system of both theory and practice; generally one of the goals of REBT is to help clients see the ways in which they have learned how they often needlessly upset themselves, teach them how to "un-upset" themselves, and then empower them to lead happier and more fulfilling lives. The emphasis in therapy is generally on establishing a successful collaborative therapeutic working alliance based on the REBT educational model. Although REBT teaches that the therapist or counsellor is better served by demonstrating unconditional other-acceptance or unconditional positive regard, the therapist is not necessarily always encouraged to build a warm and caring relationship with the client. The tasks of the therapist or counselor include understanding the client's concerns from the client's own point of reference and working as a facilitator, teacher, and encourager.
In traditional REBT, the client and the therapist, in a structured, active-directive manner, often work through a set of target problems and establish a set of therapeutic goals. For these target problems, situational dysfunctional emotions, behaviors, and beliefs are assessed in regard to the client's values and goals. After working through these problems, the client learns to generalize the insights to other relevant situations. In many cases, after going through a client's different target problems, the therapist examines possible core beliefs and more deeply rooted philosophical evaluations and schemas that might account for a wider array of problematic emotions and behaviors. Although REBT is often used as a brief therapy, longer therapy is promoted for deeper and more complex problems.
In therapy, the first step often is that the client acknowledges the problems, accepts emotional responsibility for these and has willingness and determination to change. This normally requires a considerable amount of insight, but as originator Albert Ellis explains:
Humans, unlike just about all the other animals on earth, create fairly sophisticated languages which not only enable them to think about their feeling, their actions, and the results they get from doing and not doing certain things, but they also are able to think about their thinking and even think about thinking about their thinking.
Through the therapeutic process, REBT employs a wide array of forceful and active, that is, multimodal and disputing, methodologies. Central to these methods and techniques is the intent to help the client challenge, dispute, and question their destructive and self-defeating cognitions, emotions, and behaviors. The methods and techniques incorporate cognitive-philosophic, emotive-evocative-dramatic, and behavioral methods for disputing the client's irrational and self-defeating constructs, and help the client come up with more rational and self-constructive ones. REBT seeks to acknowledge that understanding and insight are not enough; in order for clients to significantly change, they need to pinpoint their irrational and self-defeating constructs and work forcefully and actively at changing them to more functional and self-helping ones.
REBT posits that the client must work hard to get better, and in therapy this normally includes a wide array of homework exercises in day-to-day life assigned by the therapist. The assignments may for example include desensitization tasks, i.e., by having the client confront the very thing he or she is afraid of. By doing so, the client is actively acting against the belief that often is contributing significantly to the disturbance.
Another factor contributing to the brevity of REBT is that the therapist seeks to empower the client to help himself through future adversities. REBT only promotes temporary solutions if more fundamental solutions are not found. An ideal successful collaboration between the REBT therapist and a client results in changes to the client's philosophical way of evaluating himself or herself, others, and his or her life, which will likely yield effective results. The client then moves toward unconditional self-acceptance, other-acceptance and life-acceptance while striving to live a more self-fulfilling and happier life.
== Applications and interfaces ==
Applications and interfaces of REBT are used with a broad range of clinical problems in traditional psychotherapeutic settings such as individual-, group- and family therapy. It is used as a general treatment for a vast number of different conditions and psychological problems normally associated with psychotherapy.
In addition, REBT is used with non-clinical problems and problems of living through counselling, consultation and coaching settings dealing with problems including relationships, social skills, career changes, stress management, assertiveness training, grief, problems with aging, money, weight control etc. More recently, the reported use of REBT in sport and exercise settings has grown, with the efficacy of REBT demonstrated across a range of sports.
REBT also has many interfaces and applications through self-help resources, phone and internet counseling, workshops & seminars, workplace and educational programmes, etc. This includes Rational Emotive Education (REE) where REBT is applied in education settings, Rational Effectiveness Training in business and work-settings and SMART Recovery (Self Management And Recovery Training) in supporting those in addiction recovery, in addition to a wide variety of specialized treatment strategies and applications.
== Efficacy ==
REBT and CBT in general have a strong and substantial research base verifying and supporting both their psychotherapeutic efficacy and their theoretical underpinnings. Meta-analyses of outcome studies reveal REBT to be effective for treating various psychopathologies, conditions, and problems. More recently, randomized clinical trials of REBT have offered a positive view of its efficacy.
In general, REBT is arguably one of the most investigated theories in the field of psychotherapy, and a large amount of clinical experience and a substantial body of modern psychological research have validated and substantiated many of REBT's theoretical assumptions on personality and psychotherapy.
REBT may be effective in improving sports performance and mental health.
Ellis himself later in life accepted that REBT was not universally effective; "I hope I am also not a devout REBTer, since I do not think it is an unmitigated cure for everyone and do accept its distinct limitations."
== Limitations and critique ==
The clinical research on REBT has been criticized both by its supporters and by its detractors. For instance, originator Albert Ellis on occasion emphasized the difficulty and complexity of measuring psychotherapeutic effectiveness, because many studies tend to measure only whether clients merely feel better after therapy, rather than whether they get better and stay better. Ellis also criticized studies for limiting their focus primarily to cognitive restructuring, as opposed to the combination of cognitive, emotive, and behavioral aspects of REBT. As REBT has been subject to criticism throughout its existence, especially in its early years, REBT theorists have a long history of publishing and addressing those concerns. It has also been argued by Ellis and by other clinicians that REBT theory has on numerous occasions been misunderstood and misconstrued both in research and in general.
Some have criticized REBT for being harsh, formulaic, and failing to address deep underlying problems. REBT theorists have argued in reply that a careful study of REBT shows that it is philosophically deep, humanistic, and individualized, working collaboratively from the basis of the client's own point of reference. They have further argued that REBT utilizes an integrated and interrelated methodology of cognitive, emotive-experiential, and behavioral interventions. Others have questioned REBT's view of rationality, including radical constructivists, who have claimed that reason and logic are subjective properties, and those who believe that reason can be objectively determined. REBT theorists have replied that REBT raises objections to clients' irrational choices and conclusions as a working hypothesis, and through collaborative effort demonstrates the irrationality on practical, functional, and socially consensual grounds. In 1998, when asked what the main criticism of REBT was, Albert Ellis replied that it was the claim that it was too rational and did not deal sufficiently with emotions. He repudiated the claim by saying that REBT, on the contrary, emphasizes that thinking, feeling, and behaving are interrelated and integrated, and that it includes a vast number of both emotional and behavioural methods in addition to cognitive ones.
Ellis himself criticized in very direct terms opposing approaches such as psychoanalysis, transpersonal psychology, and abreactive psychotherapies, and on several occasions questioned some of the doctrines in certain religious systems, spiritualism, and mysticism. Many, including REBT practitioners, have warned against dogmatizing and sanctifying REBT as a supposedly perfect psychological panacea. Prominent REBTers have promoted the importance of high-quality and programmatic research, including originator Ellis, a self-proclaimed "passionate skeptic". He was on many occasions open to challenges, acknowledged errors and inefficiencies in his approach, and concurrently revised his theories and practices. More generally, with regard to cognitive-behavioral interventions, others have pointed out that since about 30–40% of people remain unresponsive to such interventions, REBT could be a platform for reinvigorating empirical studies of the effectiveness of cognitive-behavioral models of psychopathology and human functioning.
REBT has been developed, revised and augmented through the years as understanding and knowledge of psychology and psychotherapy have progressed. This includes its theoretical concepts, practices and methodology. The teaching of scientific thinking, reasonableness and un-dogmatism has been inherent in REBT as an approach, and these ways of thinking are an inextricable part of REBT's empirical and skeptical nature.
== Mental wellness ==
As would be expected, REBT argues that mental wellness and mental health to a large degree result from an adequate amount of self-helping, flexible, logico-empirical ways of thinking, emoting, and behaving. When a perceived undesired and stressful activating event occurs, and the individual interprets, evaluates, and reacts to the situation rationally and self-helpingly, the resulting consequence is, according to REBT, likely to be more healthy, constructive, and functional. This does not mean that a relatively undisturbed person never experiences negative feelings, but REBT does hope to keep debilitating and unhealthy emotions and subsequent self-defeating behavior to a minimum. To this end, REBT generally promotes a flexible, undogmatic, self-helping, and efficient belief system and a constructive life philosophy about adversities and human desires and preferences.
REBT clearly acknowledges that people, in addition to disturbing themselves, also are innately constructivists. Because they largely upset themselves with their beliefs, emotions and behaviors, they can be helped to, in a multimodal manner, dispute and question these and develop a more workable, more self-helping set of constructs.
REBT generally teaches and promotes:
That the concepts and philosophies of life of unconditional self-acceptance, other-acceptance, and life-acceptance are effective philosophies of life in achieving mental wellness and mental health.
That human beings are inherently fallible and imperfect, and that they are better served by accepting their own and other human beings' totality and humanity, even while disliking some of their behaviors and characteristics; that they are better off not measuring their entire self or their "being", and giving up the narrow, grandiose, and ultimately destructive notion of assigning themselves any global rating or report card. This is partly because all humans are continually evolving and are far too complex to rate accurately; all humans do both self-defeating/socially defeating and self-helping/socially helping deeds, and have both beneficial and unbeneficial attributes and traits at certain times and in certain conditions. REBT holds that ideas and feelings about self-worth are largely definitional and are not empirically confirmable or falsifiable.
That people had better accept life with its hassles and difficulties not always in accordance with their wants, while trying to change what they can change and live as elegantly as possible with what they cannot change.
== References ==
== Further reading ==
== External links ==
The Albert Ellis Institute
Association for Rational Emotive Behaviour Therapy
UK Centre for Rational Emotive Behaviour Therapy
International Institute for the Advanced Studies of Psychotherapy and Applied Mental Health
Journal of Rational-Emotive and Cognitive Behaviour Therapy
Wife of Dr Albert Ellis
REBT Information site | Wikipedia/Rational_emotive_behavior_therapy |
Emotionally focused therapy and emotion-focused therapy (EFT) are related humanistic approaches to psychotherapy that aim to resolve emotional and relationship issues with individuals, couples, and families. These therapies combine experiential therapy techniques, including person-centered and Gestalt therapies, with systemic therapy and attachment theory. The central premise is that emotions influence cognition, motivate behavior, and are strongly linked to needs. The goals of treatment include transforming maladaptive behaviors, such as emotional avoidance, and developing awareness, acceptance, expression, and regulation of emotion and understanding of relationships. EFT is usually a short-term treatment (eight to 20 sessions).
Emotion-focused therapy for individuals was originally known as process-experiential therapy, and continues to be referred to by this name in some contexts. EFT should not be confused with emotion-focused coping, a separate concept involving coping strategies for managing emotions. EFT has been used to improve clients' emotion-focused coping abilities.
== History ==
EFT began in the mid-1980s as an approach to helping couples. EFT was originally formulated and tested by Sue Johnson and Les Greenberg in 1985, and the first manual for emotionally focused couples therapy was published in 1988.
To develop the approach, Johnson and Greenberg began reviewing videos of sessions of couples therapy to identify, through observation and task analysis, the elements that lead to positive change. They were influenced in their observations by the humanistic experiential psychotherapies of Carl Rogers and Fritz Perls, both of whom valued (in different ways) present-moment emotional experience for its power to create meaning and guide behavior. Johnson and Greenberg saw the need to combine experiential therapy with the systems theoretical view that meaning-making and behavior cannot be considered outside of the whole situation in which they occur. In this "experiential–systemic" approach to couples therapy, as in other approaches to systemic therapy, the problem is viewed as belonging not to one partner, but rather to the cyclical reinforcing patterns of interactions between partners. Emotion is viewed not only as a within-individual phenomenon, but also as part of the whole system that organizes the interactions between partners.
In 1986, Greenberg chose "to refocus his efforts on developing and studying an experiential approach to individual therapy". Greenberg and colleagues shifted their attention away from couples therapy toward individual psychotherapy. They attended to emotional experiencing and its role in individual self-organization. Building on the experiential theories of Rogers and Perls and others such as Eugene Gendlin, as well as on their own extensive work on information processing and the adaptive role of emotion in human functioning, Greenberg, Rice & Elliott (1993) created a treatment manual with numerous clearly outlined principles for what they called a process-experiential approach to psychological change. Elliott et al. (2004) and Goldman & Greenberg (2015) have further expanded the process-experiential approach, providing detailed manuals of specific principles and methods of therapeutic intervention. Goldman & Greenberg (2015) presented case formulation maps for this approach.
Johnson continued to develop EFT for couples, integrating attachment theory with systemic and humanistic approaches, and explicitly expanding attachment theory's understanding of love relationships. Johnson's model retained the original three stages and nine steps and two sets of interventions that aim to reshape the attachment bond: one set of interventions to track and restructure patterns of interaction and one to access and reprocess emotion (see § Stages and steps below). Johnson's goal is the creation of positive cycles of interpersonal interaction wherein individuals are able to ask for and offer comfort and support to safe others, facilitating interpersonal emotion regulation.
Greenberg & Goldman (2008) developed a variation of EFT for couples that contains some elements from Greenberg and Johnson's original formulation but adds several steps and stages. Greenberg and Goldman posit three motivational dimensions—(1) attachment, (2) identity or power, and (3) attraction or liking—that impact emotion regulation in intimate relationships.
== Similar terminology, different meanings ==
The terms emotion-focused therapy and emotionally focused therapy have different meanings for different therapists.
In Les Greenberg's approach the term emotion-focused is sometimes used to refer to psychotherapy approaches in general that emphasize emotion. Greenberg "decided that on the basis of the development in emotion theory that treatments such as the process experiential approach, as well as some other approaches that emphasized emotion as the target of change, were sufficiently similar to each other and different from existing approaches to merit being grouped under the general title of emotion-focused approaches." He and colleague Rhonda Goldman noted their choice to "use the more American phrasing of emotion-focused to refer to therapeutic approaches that focused on emotion, rather than the original, possibly more English term (reflecting both Greenberg's and Johnson's backgrounds) emotionally focused." Greenberg uses the term emotion-focused to suggest assimilative integration of an emotional focus into any approach to psychotherapy. He considers the focus on emotions to be a common factor among various systems of psychotherapy: "The term emotion-focused therapy will, I believe, be used in the future, in its integrative sense, to characterize all therapies that are emotion-focused, be they psychodynamic, cognitive-behavioral, systemic, or humanistic." Greenberg co-authored a chapter on the importance of research by clinicians and integration of psychotherapy approaches that stated:
In addition to these empirical findings, leaders of major orientations have voiced serious criticisms of their preferred theoretical approaches, while encouraging an open-minded attitude toward other orientations.... Furthermore, clinicians of different orientations recognized that their approaches did not provide them with the clinical repertoire sufficient to address the diversity of clients and their presenting problems.
Sue Johnson's use of the term emotionally focused therapy refers to a specific model of relationship therapy that explicitly integrates systems and experiential approaches and places prominence upon attachment theory as a theory of emotion regulation. Johnson views attachment needs as a primary motivational system for mammalian survival; her approach to EFT focuses on attachment theory as a theory of adult love wherein attachment, care-giving, and sex are intertwined. Attachment theory is seen to subsume the search for personal autonomy, dependability of the other and a sense of personal and interpersonal attractiveness, love-ability and desire. Johnson's approach to EFT aims to reshape attachment strategies towards optimal inter-dependency and emotion regulation, for resilience and physical, emotional, and relational health.
== Features ==
=== Experiential focus ===
All EFT approaches have retained emphasis on the importance of Rogerian empathic attunement and communicated understanding. They all focus upon the value of engaging clients in emotional experiencing moment-to-moment in session. Thus, an experiential focus is prominent in all EFT approaches. All EFT theorists have expressed the view that individuals engage with others on the basis of their emotions, and construct a sense of self from the drama of repeated emotionally laden interactions.
The information-processing theory of emotion and emotional appraisal (in accordance with emotion theorists such as Magda B. Arnold, Paul Ekman, Nico Frijda, and James Gross) and the humanistic, experiential emphasis on moment-to-moment emotional expression (developing the earlier psychotherapy approaches of Carl Rogers, Fritz Perls, and Eugene Gendlin) have been strong components of all EFT approaches since their inception. EFT approaches value emotion as the target and agent of change, honoring the intersection of emotion, cognition, and behavior. EFT approaches posit that emotion is the first, often subconscious response to experience. All EFT approaches also use the framework of primary and secondary (reactive) emotion responses.
=== Maladaptive emotion responses and negative patterns of interaction ===
Greenberg and some other EFT theorists have categorized emotion responses into four types (see § Emotion response types below) to help therapists decide how to respond to a client at a particular time: primary adaptive, primary maladaptive, secondary reactive, and instrumental. Greenberg has posited six principles of emotion processing: (1) awareness of emotion or naming what one feels, (2) emotional expression, (3) regulation of emotion, (4) reflection on experience, (5) transformation of emotion by emotion, and (6) corrective experience of emotion through new lived experiences in therapy and in the world. While primary adaptive emotion responses are seen as a reliable guide for behavior in the present situation, primary maladaptive emotion responses are seen as an unreliable guide for behavior in the present situation (alongside other possible emotional difficulties such as lack of emotional awareness, emotion dysregulation, and problems in meaning-making).
Johnson rarely distinguishes between adaptive and maladaptive primary emotion responses, and rarely distinguishes emotion responses as dysfunctional or functional. Instead, primary emotional responses are usually construed as normal survival reactions in the face of what John Bowlby called "separation distress". EFT for couples, like other systemic therapies that emphasize interpersonal relationships, presumes that the patterns of interpersonal interaction are the problematic or dysfunctional element. The patterns of interaction are amenable to change after accessing the underlying primary emotion responses that are subconsciously driving the ineffective, negative reinforcing cycles of interaction. Validating reactive emotion responses and reprocessing newly accessed primary emotion responses is part of the change process.
== Individual therapy ==
Goldman & Greenberg (2015) proposed a 14-step case formulation process that regards emotion-related problems as stemming from at least four different possible causes: lack of awareness or avoidance of emotion, dysregulation of emotion, maladaptive emotion response, or a problem with making meaning of experiences. The theory features four types of emotion response (see § Emotion response types below), categorizes needs under "attachment" and "identity", specifies four types of emotional processing difficulties, delineates different types of empathy, has at least a dozen different task markers (see § Therapeutic tasks below), relies on two interactive tracks of emotion and narrative processes as sources of information about a client, and presumes a dialectical-constructivist model of psychological development and an emotion schematic system.
The emotion schematic system is seen as the central catalyst of self-organization, often at the base of dysfunction and ultimately the road to cure. For simplicity, we use the term emotion schematic process to refer to the complex synthesis process in which a number of co-activated emotion schemes co-apply, to produce a unified sense of self in relation to the world.
Techniques used in "coaching clients to work through their feelings" may include the Gestalt therapy empty chair technique, frequently used for resolving "unfinished business", and the two-chair technique, frequently used for self-critical splits.
=== Emotion response types ===
Emotion-focused theorists have posited that each person's emotions are organized into idiosyncratic emotion schemes that are highly variable both between people and within the same person over time, but for practical purposes emotional responses can be classified into four broad types: primary adaptive, primary maladaptive, secondary reactive, and instrumental.
Primary adaptive emotion responses are initial emotional responses to a given stimulus that have a clear beneficial value in the present situation—for example, sadness at loss, anger at violation, and fear at threat. Sadness is an adaptive response when it motivates people to reconnect with someone or something important that is missing. Anger is an adaptive response when it motivates people to take assertive action to end the violation. Fear is an adaptive response when it motivates people to avoid or escape an overwhelming threat. In addition to emotions that indicate action tendencies (such as the three just mentioned), primary adaptive emotion responses include the feeling of being certain and in control or uncertain and out of control, and/or a general felt sense of emotional pain—these feelings and emotional pain do not provide immediate action tendencies but do provide adaptive information that can be symbolized and worked through in therapy. Primary adaptive emotion responses "are attended to and expressed in therapy in order to access the adaptive information and action tendency to guide problem solving."
Primary maladaptive emotion responses are also initial emotional responses to a given stimulus; however, they are based on emotion schemes that are no longer useful (and that may or may not have been useful in the person's past) and that were often formed through previous traumatic experiences. Examples include sadness at the joy of others, anger at the genuine caring or concern of others, fear at harmless situations, and chronic feelings of insecurity/fear or worthlessness/shame. For example, a person may respond with anger at the genuine caring or concern of others because as a child he or she was offered caring or concern that was usually followed by a violation; as a result, he or she learned to respond to caring or concern with anger even when there is no violation. The person's angry response is understandable, and needs to be met with empathy and compassion even though his or her angry response is not helpful. Primary maladaptive emotion responses are accessed in therapy with the aim of transforming the emotion scheme through new experiences.
Secondary reactive emotion responses are complex chain reactions where a person reacts to his or her primary adaptive or maladaptive emotional response and then replaces it with another, secondary emotional response. In other words, they are emotional responses to prior emotional responses. ("Secondary" means that a different emotion response occurred first.) They can include secondary reactions of hopelessness, helplessness, rage, or despair that occur in response to primary emotion responses that are experienced (secondarily) as painful, uncontrollable, or violating. They may be escalations of a primary emotion response, as when people are angry about being angry, afraid of their fear, or sad about their sadness. They may be defenses against a primary emotion response, such as feeling anger to avoid sadness or fear to avoid anger; this can include gender role-stereotypical responses such as expressing anger when feeling primarily afraid (stereotypical of men's gender role), or expressing sadness when primarily angry (stereotypical of women's gender role). "These are all complex, self-reflexive processes of reacting to one's emotions and transforming one emotion into another. Crying, for example, is not always true grieving that leads to relief, but rather can be the crying of secondary helplessness or frustration that results in feeling worse." Secondary reactive emotion responses are accessed and explored in therapy in order to increase awareness of them and to arrive at more primary and adaptive emotion responses.
Instrumental emotion responses are experienced and expressed by a person because the person has learned that the response has an effect on others, "such as getting them to pay attention to us, to go along with something we want them to do for us, to approve of us, or perhaps most often just not to disapprove of us." Instrumental emotion responses can be consciously intended or unconsciously learned (i.e., through operant conditioning). Examples include crocodile tears (instrumental sadness), bullying (instrumental anger), crying wolf (instrumental fear), and feigned embarrassment (instrumental shame). When a client responds in therapy with instrumental emotion responses, it may feel manipulative or superficial to the therapist. Instrumental emotion responses are explored in therapy in order to increase awareness of their interpersonal function and/or the associated primary and secondary gain.
=== The therapeutic process with different emotion responses ===
Emotion-focused theorists have proposed that each type of emotion response calls for a different intervention process by the therapist. Primary adaptive emotion responses need to be more fully allowed and accessed for their adaptive information. Primary maladaptive emotion responses need to be accessed and explored to help the client identify core unmet needs (e.g., for validation, safety, or connection), and then regulated and transformed with new experiences and new adaptive emotions. Secondary reactive emotion responses need empathic exploration in order to discover the sequence of emotions that preceded them. Instrumental emotion responses need to be explored interpersonally in the therapeutic relationship to increase awareness of them and address how they are functioning in the client's situation.
Primary emotion responses are not called "primary" because they are somehow more real than the other responses; all of the responses feel real to a person, but therapists can classify them into these four types in order to help clarify the functions of the response in the client's situation and how to intervene appropriately.
=== Therapeutic tasks ===
A therapeutic task is an immediate problem that a client needs to resolve in a psychotherapy session. In the 1970s and 1980s, researchers such as Laura North Rice (a former colleague of Carl Rogers) applied task analysis to transcripts of psychotherapy sessions in an attempt to describe in more detail the process of clients' cognitive and emotional change, so that therapists might more reliably provide optimal conditions for change. This kind of psychotherapy process research eventually led to a standardized (and evolving) set of therapeutic tasks in emotion-focused therapy for individuals.
The following table summarizes the standard set of these therapeutic tasks as of 2012. The tasks are classified into five broad groups: empathy-based, relational, experiencing, reprocessing, and action. The task marker is an observable sign that a client may be ready to work on the associated task. The intervention process is a sequence of actions carried out by therapist and client in working on the task. The end state is the desired resolution of the immediate problem.
In addition to the task markers listed below, other markers and intervention processes for working with emotion and narrative have been specified: same old stories, empty stories, unstoried emotions, and broken stories.
Experienced therapists can create new tasks; EFT therapist Robert Elliott, in a 2010 interview, noted that "the highest level of mastery of the therapy—EFT included—is to be able to create new structures, new tasks. You haven't really mastered EFT or some other therapy until you actually can begin to create new tasks."
=== Emotion-focused therapy for trauma ===
The interventions and the structure of emotion-focused therapy have been adapted for the specific needs of psychological trauma survivors. A manual of emotion-focused therapy for individuals with complex trauma (EFTT) has been published. For example, modifications of the traditional Gestalt empty chair technique have been developed.
=== Other versions of EFT for individuals ===
Brubacher (2017) proposed an emotionally focused approach to individual therapy that focuses on attachment, while integrating the experiential focus of empathic attunement for engaging and reprocessing emotional experience and tracking and restructuring the systemic aspects and patterns of emotion regulation. The therapist follows the attachment model by addressing deactivating and hyperactivating strategies. Individual therapy is seen as a process of developing secure connections between therapist and client, between client and past and present relationships, and within the client. Attachment principles guide therapy in the following ways: forming the collaborative therapeutic relationship, shaping the overall goal for therapy to be that of "effective dependency" (following John Bowlby) upon one or two safe others, depathologizing emotion by normalizing separation distress responses, and shaping change processes. The change processes are: identifying and strengthening patterns of emotion regulation, and creating corrective emotional experiences to transform negative patterns into secure bonds.
Gayner (2019) integrated EFT principles and methods with mindfulness-based cognitive therapy and mindfulness-based stress reduction.
== Couples therapy ==
A systemic perspective is important in all approaches to EFT for couples. Tracking conflictual patterns of interaction, often referred to as a "dance" in Johnson's popular literature, has been a hallmark of the first stage of Johnson and Greenberg's approach since its inception in 1985. In Goldman and Greenberg's newer approach, therapists help clients "also work toward self-change and the resolution of pain stemming from unmet childhood needs that affect the couple interaction, in addition to working on interactional change." Goldman and Greenberg justify their added emphasis on self-change by noting that not all problems in a relationship can be solved only by tracking and changing patterns of interaction:
In addition, in our observations of psychotherapeutic work with couples, we have found that problems or difficulties that can be traced to core identity concerns such as needs for validation or a sense of worth are often best healed through therapeutic methods directed toward the self rather than to the interactions. For example, if a person's core emotion is one of shame and they feel "rotten at the core" or "simply fundamentally flawed," soothing or reassuring from one's partner, while helpful, will not ultimately solve the problem, lead to structural emotional change, or alter the view of oneself.
In Greenberg and Goldman's approach to EFT for couples, although they "fully endorse" the importance of attachment, attachment is not considered to be the only interpersonal motivation of couples; instead, attachment is considered to be one of three aspects of relational functioning, along with issues of identity/power and attraction/liking. In Johnson's approach, attachment theory is considered to be the defining theory of adult love, subsuming other motivations, and it guides the therapist in processing and reprocessing emotion.
In Greenberg and Goldman's approach, the emphasis is on working with core issues related to identity (working models of self and other) and promoting both self-soothing and other-soothing for a better relationship, in addition to interactional change. In Johnson's approach, the primary goal is to reshape attachment bonds and create "effective dependency" (including secure attachment).
=== Stages and steps ===
EFT for couples features a nine-step model of restructuring the attachment bond between partners. In this approach, the aim is to reshape the attachment bond and create more effective co-regulation and "effective dependency", increasing individuals' self-regulation and resilience. In good-outcome cases, the couple is helped to respond to each other's unmet needs and to heal injuries carried from childhood. The newly shaped secure attachment bond may become the best antidote to a traumatic experience from within and outside of the relationship.
Adding to the original three-stage, nine-step EFT framework developed by Johnson and Greenberg, Greenberg and Goldman's emotion-focused therapy for couples has five stages and 14 steps. It is structured to work on identity issues and self-regulation prior to changing negative interactions. It is considered necessary, in this approach, to help partners experience and reveal their own underlying vulnerable feelings first, so they are better equipped to do the intense work of attuning to the other partner and to be open to restructuring interactions and the attachment bond.
Johnson (2008) summarizes the nine treatment steps in Johnson's model of EFT for couples: "The therapist leads the couple through these steps in a spiral fashion, as one step incorporates and leads into the other. In mildly distressed couples, partners usually work quickly through the steps at a parallel rate. In more distressed couples, the more passive or withdrawn partner is usually invited to go through the steps slightly ahead of the other."
==== Stage 1. Stabilization (assessment and de-escalation phase) ====
Step 1: Identify the relational conflict issues between the partners
Step 2: Identify the negative interaction cycle where these issues are expressed
Step 3: Access attachment emotions underlying the position each partner takes in this cycle
Step 4: Reframe the problem in terms of the cycle, unacknowledged emotions, and attachment needs
During this stage, the therapist creates a comfortable and stable environment for the couple to have an open discussion about any hesitations the couple may have about the therapy, including the trustworthiness of the therapist. The therapist also gets a sense of the couple's past and present positive and negative interactions and is able to summarize and present the negative patterns for them. Partners soon no longer view themselves as victims of their negative interaction cycle; they are now allies against it.
==== Stage 2. Restructuring the bond (changing interactional positions phase) ====
Step 5: Access disowned or implicit needs (e.g., need for reassurance), emotions (e.g., shame), and models of self
Step 6: Promote each partner's acceptance of the other's experience
Step 7: Facilitate each partner's expression of needs and wants to restructure the interaction based on new understandings and create bonding events
This stage involves restructuring and widening the emotional experiences of the couple. This is done through couples recognizing their attachment needs and then changing their interactions based on those needs. At first, their new way of interacting may be strange and hard to accept, but as they become more aware and in control of their interactions they are able to stop old patterns of behavior from reemerging.
==== Stage 3. Integration and consolidation ====
Step 8: Facilitate the formulation of new stories and new solutions to old problems
Step 9: Consolidate new cycles of behavior
This stage focuses on the reflection of new emotional experiences and self-concepts. It integrates the couple's new ways of dealing with problems within themselves and in the relationship.
=== Styles of attachment ===
Johnson & Sims (2000) described four attachment styles that affect the therapy process:
Secure attachment: People who are secure and trusting perceive themselves as lovable, able to trust others and themselves within a relationship. They give clear emotional signals, and are engaged, resourceful and flexible in unclear relationships. Secure partners express feelings, articulate needs, and allow their own vulnerability to show.
Avoidant attachment: People who have a diminished ability to articulate feelings, tend not to acknowledge their need for attachment, and struggle to name their needs in a relationship. They tend to adopt a safe position and solve problems dispassionately without understanding the effect that their safe distance has on their partners.
Anxious attachment: People who are psychologically reactive and preoccupied with the relationship. They tend to demand reassurance in an aggressive way, demand their partner's attachment, and tend to use blame strategies (including emotional blackmail) in order to engage their partner.
Fearful–avoidant attachment: People who have been traumatized and have experienced little to no recovery from it vacillate between attachment and hostility. This is sometimes referred to as disorganized attachment.
== Family therapy ==
The emotionally focused family therapy (EFFT) of Johnson and her colleagues aims to promote secure bonds among distressed family members. It is a therapy approach consistent with the attachment-oriented experiential–systemic emotionally focused model, proceeding in three stages: (1) de-escalating negative cycles of interaction that amplify conflict and insecure connections between parents and children; (2) restructuring interactions to shape positive cycles of parental accessibility and responsiveness to offer the child or adolescent a safe haven and a secure base; and (3) consolidating the new responsive cycles and secure bonds. Its primary focus is on strengthening parental responsiveness and care-giving to meet children's and adolescents' attachment needs. It aims to "build stronger families through (1) recruiting and strengthening parental emotional responsiveness to children, (2) accessing and clarifying children's attachment needs, and (3) facilitating and shaping care-giving interactions from parent to child". Some clinicians have integrated EFFT with play therapy.
One group of clinicians, inspired in part by Greenberg's approach to EFT, developed a treatment protocol specifically for families of individuals struggling with an eating disorder. The treatment is based on the principles and techniques of four different approaches: emotion-focused therapy, behavioral family therapy, motivational enhancement therapy, and the New Maudsley family skills-based approach. It aims to help parents "support their child in the processing of emotions, increasing their emotional self-efficacy, deepening the parent–child relationships and thereby making ED [eating disorder] symptoms unnecessary to cope with painful emotional experiences". The treatment has three main domains of intervention, four core principles, and five steps derived from Greenberg's emotion-focused approach and influenced by John Gottman: (1) attending to the child's emotional experience, (2) naming the emotions, (3) validating the emotional experience, (4) meeting the emotional need, and (5) helping the child to move through the emotional experience, problem solving if necessary.
== Efficacy ==
Johnson, Greenberg, and many of their colleagues have spent their long careers as academic researchers publishing the results of empirical studies of various forms of EFT.
The American Psychological Association considers emotion-focused therapy for individuals to be an empirically supported treatment for depression. Studies have suggested that it is effective in the treatment of depression, interpersonal problems, trauma, and avoidant personality disorder.
Practitioners of EFT have claimed that studies have consistently shown clinically significant improvement post therapy. Studies, again mostly by EFT practitioners, have suggested that emotionally focused therapy for couples is an effective way to restructure distressed couple relationships into safe and secure bonds with long-lasting results. Johnson et al. (1999) conducted a meta-analysis of the four most rigorous outcome studies before 2000 and concluded that the original nine-step, three-stage emotionally focused therapy approach to couples therapy had a larger effect size than any other couple intervention had achieved to date, but this meta-analysis was later harshly criticized by psychologist James C. Coyne, who called it "a poor quality meta-analysis of what should have been left as pilot studies conducted by promoters of a therapy in their own lab". A study with an fMRI component conducted in collaboration with American neuroscientist Jim Coan suggested that emotionally focused couples therapy reduces the brain's response to threat in the presence of a romantic partner; this study was also criticized by Coyne.
A 2019 meta-analysis on EFT effectiveness for couples therapy concluded that the approach significantly improves relationship satisfaction, with these improvements being sustained for up to two years at follow-up.
== Strengths ==
Some of the strengths of EFT approaches can be summarized as follows:
EFT aims to be collaborative and respectful of clients, combining experiential person-centered therapy techniques with systemic therapy interventions.
Change strategies and interventions are specified through intensive analysis of psychotherapy process.
EFT has been validated by 30 years of empirical research. There is also research on the change processes and predictors of success.
EFT has been applied to different kinds of problems and populations, although more research on different populations and cultural adaptations is needed.
EFT for couples is based on conceptualizations of marital distress and adult love that are supported by empirical research on the nature of adult interpersonal attachment.
== Criticism ==
Psychotherapist Campbell Purton, in his 2014 book The Trouble with Psychotherapy, criticized a variety of approaches to psychotherapy, including behavior therapy, person-centered therapy, psychodynamic therapy, cognitive behavioral therapy, emotion-focused therapy, and existential therapy; he argued that these psychotherapies have accumulated excessive and/or flawed theoretical baggage that deviates too much from an everyday common-sense understanding of personal troubles. With regard to emotion-focused therapy, Purton argued that "the effectiveness of each of the 'therapeutic tasks' can be understood without the theory" (p. 124) and that what clients say "is not well explained in terms of the interaction of emotion schemes; it is better explained in terms of the person's situation, their response to it, and their having learned the particular language in which they articulate their response." (p. 129)
In 2014, psychologist James C. Coyne criticized some EFT research for lack of rigor (for example, being underpowered and having high risk of bias), but he also noted that such problems are common in the field of psychotherapy research.
In a 2015 article in Behavioral and Brain Sciences on "memory reconsolidation, emotional arousal and the process of change in psychotherapy", Richard D. Lane and colleagues summarized a common claim in the literature on emotion-focused therapy that "emotional arousal is a key ingredient in therapeutic change" and that "emotional arousal is critical to psychotherapeutic success". In a response accompanying the article, Bruce Ecker and colleagues (creators of coherence therapy) disagreed with this claim and argued that the key ingredient in therapeutic change involving memory reconsolidation is not emotional arousal but instead a perceived mismatch between an expected pattern and an experienced pattern; they wrote:
The brain clearly does not require emotional arousal per se for inducing deconsolidation. That is a fundamental point. If the target learning happens to be emotional, then its reactivation (the first of the two required elements) of course entails an experience of that emotion, but the emotion itself does not inherently play a role in the mismatch that then deconsolidates the target learning, or in the new learning that then rewrites and erases the target learning (discussed at greater length in Ecker 2015). [...] The same considerations imply that "changing emotion with emotion" (stated three times by Lane et al.) inaccurately characterizes how learned responses change through reconsolidation. Mismatch consists most fundamentally of a direct, unmistakable perception that the world functions differently from one's learned model. "Changing model with mismatch" is the core phenomenology.
Other responses to Lane et al. (2015) argued that their emotion-focused approach "would be strengthened by the inclusion of predictions regarding additional factors that might influence treatment response, predictions for improving outcomes for non-responsive patients, and a discussion of how the proposed model might explain individual differences in vulnerability for mental health problems", and that their model needed further development to account for the diversity of states called "psychopathology" and the relevant maintaining and worsening processes.
Eclectic psychotherapy is a form of psychotherapy in which the clinician uses more than one theoretical approach, or multiple sets of techniques, to help with clients' needs. The use of different therapeutic approaches is based on their effectiveness in resolving the patient's problems, rather than on the theory behind each therapy.
== Background ==
Over the history of clinical psychology, many therapeutic approaches have been created as stand-alone methods. Eclectic psychotherapy attempts to avoid the dilemma of choosing a single method by drawing on multiple approaches.
Therapists may be trained in one particular method or theoretical orientation, but may shift to a more eclectic approach, adding other methods to their original training. A therapist can also be trained as an eclectic from the start; psychotherapists-in-training are typically exposed to a variety of different methods and theories.
Eclectic psychotherapy might include using a behavior modification approach for one symptom and a psychoanalytic approach for a second symptom. An eclectic psychotherapist may use one mode of treatment for one patient and a different one for another patient.
== Types ==
=== Overview ===
All of the following types of psychotherapy are forms of eclectic psychotherapy. The decision to use or not use each form may be based upon therapist preference, patient preference, or effectiveness for certain presenting problems.
=== Brief ===
Brief eclectic psychotherapy, as the name suggests, is a short-term form of psychotherapy using an eclectic approach. It often consists of a combination of cognitive-behavioral and psychodynamic approaches over a limited number of sessions, often sixteen or fewer. The term brief eclectic psychotherapy may be defined in several different ways, but is generally regarded as just short-term eclectic psychotherapy. One specific form of brief eclectic therapy is brief eclectic therapy for traumatic grief (BEP-TG). Posttraumatic stress disorder (PTSD), major depressive disorder (MDD), and persistent complex bereavement disorder (PCBD) can all be treated with BEP-TG.
=== Systematic ===
When using the systematic eclectic psychotherapy approach developed by Larry E. Beutler and colleagues in the early 1990s, the therapist bases his or her choice of treatment method on four key factors: client characteristics, the context of treatment, relationship variables, and specific strategies and techniques "that will maximally focus on relevant problems, manage levels of client motivation, overcome obstacles to successful resolution of problems, achieve treatment objectives, consolidate treatment gains, and prevent or reduce relapse". Unlike brief eclectic psychotherapy, there is not necessarily a limit on the number of sessions. The therapist chooses the approach that he or she believes will be most helpful to the patient based on evaluation of the four factors.
=== Prescriptive ===
The focus of prescriptive eclectic psychotherapy, described in 1978 by Richard E. Dimond and colleagues, is to create a personalized treatment plan for each client that is based on a combination of different theories and techniques, while sticking to a structure that is based on research. The therapy allows the therapist to use multiple theoretical approaches, but treatment must be rooted in evidence from psychological research. The psychotherapist must not only choose the type of psychotherapy used, but also the type of therapeutic relationship that should be utilized. There is a great emphasis on using clinical research and prior clinical knowledge in determining how to move forward with treatment.
=== Technical ===
Technical eclectic psychotherapy focuses only on using multiple techniques and ignores the theoretical background of those techniques. In this form of eclectic therapy, the therapist uses a variety of techniques based on what is expected to help the patient. Theory is not considered an important factor in this approach, as only the techniques used matter. Depending on the techniques selected by the therapist, the methods of treatment may come from similar psychological schools of thought or completely opposite ones. One form of technical eclectic psychotherapy is multimodal therapy, developed by Arnold Lazarus starting in the 1960s.
== Comparison with integrative psychotherapy ==
The terms integrative psychotherapy and eclectic psychotherapy are sometimes used interchangeably, but the two terms are not synonymous. The American Psychological Association lists the two types of therapies as unique and different types of psychotherapies. Both eclectic and integrative psychotherapy combine the use of multiple psychological theories. Integrative psychotherapy tends to place greater emphasis on the theories being combined, while eclectic therapy tends to be more outcome focused. An eclectic psychotherapist will use whatever theory will help his or her patient and an integrative psychotherapist will use one theory to complement another.
== See also ==
Developmental eclecticism
Focusing is an internally oriented psychotherapeutic process developed by psychotherapist Eugene Gendlin. It can be used in any kind of therapeutic situation, including peer-to-peer sessions. It involves holding a specific kind of open, non-judging attention to an internal knowing which is experienced but is not yet in words. Focusing can, among other things, be used to become clear on what one feels or wants, to obtain new insights about one's situation, and to stimulate change or healing of the situation. Focusing is set apart from other methods of inner awareness by three qualities: something called the "felt sense", a quality of engaged accepting attention, and a research-based technique that facilitates change.
== Origin ==
At the University of Chicago, beginning in 1953, Eugene Gendlin did 15 years of research analyzing what made psychotherapy either successful or unsuccessful. His conclusion was that it is not the therapist's technique that determines the success of psychotherapy, but rather the way the patient behaves, and what the patient does inside himself during the therapy sessions. Gendlin found that, without exception, the successful patient intuitively focuses inside himself on a very subtle and vague internal bodily awareness—or "felt sense"—which contains information that, if attended to or focused on, holds the key to the resolution of the problems the patient is experiencing.
"Focusing" is a process and learnable skill developed by Gendlin which re-creates this successful-patient behavior in a form that can be taught to other patients. Gendlin detailed the techniques in his book Focusing which, intended for the layperson, is written in conversational terms and describes the six steps of Focusing and how to do them. Gendlin stated: "I did not invent Focusing. I simply made some steps which help people to find Focusing."
== "Felt sense" and "felt shift" ==
Gendlin gave the name "felt sense" to the unclear, pre-verbal sense of "something"—the inner knowledge or awareness that has not been consciously thought or verbalized—as that "something" is experienced in the body. It is not the same as an emotion. This bodily felt "something" may be an awareness of a situation or an old hurt, or of something that is "coming"—perhaps an idea or insight. Crucial to the concept, as defined by Gendlin, is that it is unclear and vague, and it is always more than any attempt to express it verbally. Gendlin also described it as "sensing an implicit complexity, a wholistic sense of what one is working on".
According to Gendlin, the Focusing process makes a felt sense more tangible and easier to work with. To help the felt sense form and to accurately identify its meaning, the focuser tries out words that might express it. These words can be tested against the felt sense: The felt sense will not resonate with a word or phrase that does not adequately describe it.
Gendlin observed clients, writers, and people in ordinary life ("Focusers") turning their attention to this not-yet-articulated knowing. As a felt sense formed, there would be long pauses together with sounds like "uh...." Once the person had accurately identified this felt sense in words, new words would come, and new insights into the situation. There would be a sense of felt movement—a "felt shift"—and the person would begin to be able to move beyond the "stuck" place, having fresh insights, and also sometimes indications of steps to take.
== Learning and using Focusing ==
One can learn the Focusing technique from one of several books, or from a Focusing trainer, practitioner, or therapist. Focusing is easiest to sense and do in the presence of a "listener"—either a Focusing trainer, a therapist, or a layperson trained in Focusing. However, the practice can be done alone. Gendlin's book details the six steps of Focusing; however, it emphasizes that the essence of Focusing is not adhering to these steps but following the organic process. When the person learns the basics, they are able to move through the process more and more organically.
Focusing is now practiced all over the world by thousands of people—both in professional settings with Focusing trainers, and informally between laypersons. As a stand-alone process, a Focusing session can last from approximately 10 minutes to an hour, on average—with the "focuser" being listened to, and their verbalized thoughts and feelings being reflected back by the "listener". Generally speaking, but not always, the focuser has their eyes closed, in order to more accurately focus inwardly on their "felt sense" and the shifts that take place from it.
== Subsequent developments ==
In 1996, Gendlin published a comprehensive book on Focusing-oriented psychotherapy. The Focusing-oriented psychotherapist attributes a central importance to the client's capacity to be aware of their "felt sense" and the meaning behind their words or images. The client is encouraged to sense into feelings and meanings which are not yet formed. Other elements of Focusing are also incorporated into the therapy practice so that Focusing remains the basis of the process—allowing for inner resonance and verification of ideas and feelings, and allowing new and fresh insights to come from within the client.
Several adaptations of Gendlin's original six-step Focusing process have been developed. The most popular and prevalent of these is the process Ann Weiser Cornell teaches, called Inner Relationship Focusing.
Other developments in Focusing include focusing alone using a journal or a sketchbook. Drawing and painting can be used with Focusing processes with children. Focusing also happens in other domains besides therapy. Attention to the felt sense naturally takes place in all manner of processes where something new is being formed: for example in creative process, learning, thinking, and decision making.
== See also ==
Emotion-focused therapy
Internal Family Systems Model
Intuition (mind)
Method of levels
Nonviolent Communication
== Further reading ==
Cornell, Ann Weiser (1996). The power of focusing: a practical guide to emotional self-healing. Oakland, CA: New Harbinger Publications. ISBN 157224044X. OCLC 34828579.
Gendlin, Eugene (1978). Focusing. New York, NY: Bantam Dell. ISBN 9780553278330. OCLC 221962901.
Gendlin, Eugene (2018). A process model. Evanston, IL: Northwestern University Press. doi:10.2307/j.ctv47w48p. ISBN 9780810136199. JSTOR j.ctv47w48p. OCLC 991536018. Archived from the original on 2017-05-08.
Madison, Greg, ed. (2014). Emerging practice in focusing-oriented psychotherapy: innovative theory and applications. Foreword by Mary Hendricks-Gendlin. London; Philadelphia: Jessica Kingsley Publishers. ISBN 9781849053716. OCLC 866622379.
Madison, Greg, ed. (2014). Theory and practice of focusing-oriented psychotherapy: beyond the talking cure. Foreword by Eugene Gendlin. London: Jessica Kingsley Publishers. ISBN 9781849053242. OCLC 864418245.
Rome, David (2014). Your body knows the answer: using your felt sense to solve problems, effect change, and liberate creativity. Boston, MA: Shambhala Publications. ISBN 9781611800906. OCLC 874557194.
== External links ==
International Focusing Institute
Focusing-Oriented Psychotherapy
British Focusing Association
Behaviour therapy or behavioural psychotherapy is a broad term referring to clinical psychotherapy that uses techniques derived from behaviourism and/or cognitive psychology. It looks at specific, learned behaviours and how the environment, or other people's mental states, influences those behaviours, and consists of techniques based on behaviourism's theory of learning: respondent or operant conditioning. Behaviourists who practise these techniques are either behaviour analysts or cognitive-behavioural therapists. They tend to look for treatment outcomes that are objectively measurable. Behaviour therapy does not involve one specific method, but it has a wide range of techniques that can be used to treat a person's psychological problems.
Behavioural psychotherapy is sometimes juxtaposed with cognitive psychotherapy, while cognitive behavioural therapy integrates aspects of both approaches, such as cognitive restructuring, positive reinforcement, habituation (or desensitisation), counterconditioning, and modelling.
Applied behaviour analysis (ABA) is the application of behaviour analysis that focuses on functionally assessing how behaviour is influenced by the observable learning environment and how to change such behaviour through contingency management or exposure therapies, which are used throughout clinical behaviour analysis therapies or other interventions based on the same learning principles.
Cognitive-behavioural therapy views cognition and emotions as preceding overt behaviour and implements treatment plans in psychotherapy to lessen the issue by managing competing thoughts and emotions, often in conjunction with behavioural learning principles.
A 2013 Cochrane review comparing behaviour therapies to psychological therapies found them to be equally effective, although at the time the evidence base that evaluates the benefits and harms of behaviour therapies was weak.
== History ==
Precursors of certain fundamental aspects of behaviour therapy have been identified in various ancient philosophical traditions, particularly Stoicism. For example, Wolpe and Lazarus wrote,
While the modern behavior therapist deliberately applies principles of learning to his therapeutic operations, empirical behavior therapy is probably as old as civilization – if we consider civilization as having started when man first did things to further the well-being of other men. From the time that this became a feature of human life there must have been occasions when a man complained of his ills to another who advised or persuaded him of a course of action. In a broad sense, this could be called behavior therapy whenever the behavior itself was conceived as the therapeutic agent. Ancient writings contain innumerable behavioral prescriptions that accord with this broad conception of behavior therapy.
The first use of the term behaviour modification appears to have been by Edward Thorndike in 1911. His article Provisional Laws of Acquired Behavior or Learning makes frequent use of the term "modifying behavior". Through early research in the 1940s and the 1950s the term was used by Joseph Wolpe's research group. The experimental tradition in clinical psychology used it to refer to psycho-therapeutic techniques derived from empirical research. It has since come to refer mainly to techniques for increasing adaptive behaviour through reinforcement and decreasing maladaptive behaviour through extinction or punishment (with emphasis on the former). Two related terms are behaviour therapy and applied behaviour analysis. Since techniques derived from behavioural psychology tend to be the most effective in altering behaviour, most practitioners consider behaviour modification, along with behaviour therapy and applied behaviour analysis, to be founded in behaviourism. While behaviour modification and applied behaviour analysis typically use interventions based on the same behavioural principles, many behaviour modifiers who are not applied behaviour analysts tend to use packages of interventions and do not conduct functional assessments before intervening.
Possibly the first occurrence of the term "behavior therapy" was in a 1953 research project by B.F. Skinner, Ogden Lindsley, Nathan Azrin and Harry C. Solomon. The paper discussed operant conditioning and how it could be used to help improve the functioning of people who were diagnosed with chronic schizophrenia. Early pioneers in behaviour therapy include Joseph Wolpe and Hans Eysenck.
In general, behaviour therapy is seen as having three distinct points of origin: South Africa (Wolpe's group), the United States (Skinner), and the United Kingdom (Rachman and Eysenck). Each had its own distinct approach to viewing behaviour problems. Eysenck in particular viewed behaviour problems as an interplay between personality characteristics, environment, and behaviour. Skinner's group in the United States took more of an operant conditioning focus. The operant focus created a functional approach to assessment and interventions focused on contingency management, such as the token economy and behavioural activation. Skinner's student Ogden Lindsley is credited with forming a movement called precision teaching, which developed a particular type of graphing program, the standard celeration chart, to monitor the progress of clients. Skinner became interested in the individualising of programs for improved learning in those with or without disabilities and worked with Fred S. Keller to develop programmed instruction. Programmed instruction had some clinical success in aphasia rehabilitation. Gerald Patterson used programmed instruction to develop his parenting text for children with conduct problems (see parent management training). With age, respondent conditioning appears to slow but operant conditioning remains relatively stable. While the concept had its share of advocates and critics in the West, its introduction in Asia, particularly in India in the early 1970s, and its success there owed much to the Indian psychologist H. Narayan Murthy's enduring commitment to the principles of behavioural therapy and biofeedback.
While many behaviour therapists remain staunchly committed to the basic operant and respondent paradigm, in the second half of the 20th century many therapists coupled behaviour therapy with the cognitive therapy of Aaron Beck, Albert Ellis, and Donald Meichenbaum to form cognitive behaviour therapy. In some areas the cognitive component had an additive effect (for example, evidence suggests that cognitive interventions improve the results of social phobia treatment), but in other areas it did not enhance the treatment, which led to the pursuit of third-generation behaviour therapies. Third-generation behaviour therapy uses basic principles of operant and respondent psychology but couples them with functional analysis and a clinical formulation/case conceptualisation of verbal behaviour more in line with the view of behaviour analysts. Some research supports these therapies as being more effective in some cases than cognitive therapy, but overall the question remains open.
== Theoretical basis ==
The behavioural approach to therapy assumes that behaviour associated with psychological problems develops through the same processes of learning that affect the development of other behaviours. Therefore, behaviourists trace personality problems to the way in which personality developed. They do not look at behaviour disorders as something a person has, but consider that such disorders reflect how learning has influenced certain people to behave in a certain way in certain situations.
Behaviour therapy is based upon the principles of classical conditioning developed by Ivan Pavlov and operant conditioning developed by B.F. Skinner. Classical conditioning happens when a neutral stimulus comes right before another stimulus that triggers a reflexive response. The idea is that if the neutral stimulus is paired often enough with the stimulus that triggers the response, the neutral stimulus alone will come to produce the reflexive response. Operant conditioning has to do with rewards and punishments and how they can either increase or decrease certain behaviours.
Contingency management programs are a direct product of research from operant conditioning.
== Current forms ==
Behavioural therapy based on operant and respondent principles has a considerable evidence base to support its usage. This approach remains a vital area of clinical psychology and is often termed clinical behaviour analysis. Behavioural psychotherapy has become increasingly contextual in recent years, developing a greater interest in personality disorders as well as a greater focus on acceptance and complex case conceptualisations.
=== Functional analytic psychotherapy ===
One current form of behavioural psychotherapy is functional analytic psychotherapy. Functional analytic psychotherapy is a longer-duration behaviour therapy. Functional analytic therapy focuses on in-session use of reinforcement and is primarily a relationally based therapy. As with most of the behavioural psychotherapies, functional analytic psychotherapy is contextual in its origins and nature, and draws heavily on radical behaviourism and functional contextualism.
Functional analytic psychotherapy holds to a process model of research, which makes it unique compared to traditional behaviour therapy and cognitive behavioural therapy.
Functional analytic psychotherapy has strong research support. Recent functional analytic psychotherapy research efforts are focusing on the management of aggressive inpatients.
== Assessment ==
Behaviour therapists complete a functional analysis or a functional assessment that looks at four important areas: stimulus, organism, response and consequences. The stimulus is the condition or environmental trigger that causes behaviour. The organism variable involves the internal responses of a person, such as physiological responses, emotions and cognition. The response is the behaviour that a person exhibits, and the consequences are the result of the behaviour. These four things are incorporated into an assessment done by the behaviour therapist.
Most behaviour therapists use objective assessment methods like structured interviews, objective psychological tests or different behavioural rating forms. These types of assessments are used so that the behaviour therapist can determine exactly what a client's problem may be and establish a baseline for any maladaptive responses that the client may have. With this baseline, the same measure can be used as therapy continues to check the client's progress, which can help determine whether the therapy is working. Behaviour therapists do not typically ask why questions but tend to be more focused on how, when, where and what questions. Tests such as the Rorschach inkblot test or personality tests like the MMPI (Minnesota Multiphasic Personality Inventory) are not commonly used for behavioural assessment because they are based on personality trait theory, which assumes that a person's answers to these methods can predict behaviour. Behavioural assessment is more focused on observation of a person's behaviour in their natural environment.
Behavioural assessment specifically attempts to find out what the environmental and self-imposed variables are. These variables are the things that are allowing a person to maintain their maladaptive feelings, thoughts and behaviours. In a behavioural assessment "person variables" are also considered. These "person variables" come from a person's social learning history and they affect the way in which the environment affects that person's behaviour. An example of a person variable would be behavioural competence. Behavioural competence looks at whether a person has the appropriate skills and behaviours that are necessary when performing a specific response to a certain situation or stimuli.
When making a behavioural assessment, the behaviour therapist wants to answer two questions: (1) what are the different factors (environmental or psychological) that are maintaining the maladaptive behaviour, and (2) what type of behaviour therapy or technique can help the individual improve most effectively. The first question involves looking at all aspects of a person, which can be summed up by the acronym BASIC ID. This acronym stands for behaviour, affective responses, sensory reactions, imagery, cognitive processes, interpersonal relationships and drug use.
== Clinical applications ==
Behaviour therapy bases its core interventions on functional analysis. Just a few of the many problems that behaviour therapy has functionally analysed include intimacy in couples' relationships, forgiveness in couples, chronic pain, stress-related behaviour problems of being an adult child of a person with an alcohol use disorder, anorexia, chronic distress, substance abuse, depression, anxiety, insomnia and obesity.
Functional analysis has even been applied to problems that therapists commonly encounter, like client resistance, partially engaged clients and involuntary clients. Applications to these problems have left clinicians with considerable tools for enhancing therapeutic effectiveness. One way to enhance therapeutic effectiveness is to use positive reinforcement or operant conditioning. Although behaviour therapy is based on the general learning model, it can be applied in many different treatment packages that can be specifically developed to deal with problematic behaviours. Some of the better-known types of treatment are: relaxation training, systematic desensitization, virtual reality exposure, exposure and response prevention techniques, social skills training, modelling, behavioural rehearsal and homework, and aversion therapy and punishment.
Relaxation training involves clients learning to lower arousal to reduce their stress by tensing and releasing certain muscle groups throughout their body. Systematic desensitization is a treatment in which the client slowly substitutes a new learned response for a maladaptive response by moving up a hierarchy of feared situations. Systematic desensitization is based in part on counterconditioning, the learning of a new response to replace an old one; in the case of desensitization, a more relaxed behaviour is substituted for the maladaptive one. Exposure and response prevention techniques (also known as flooding and response prevention) are the general technique in which a therapist exposes an individual to anxiety-provoking stimuli while keeping them from having any avoidance responses.
Virtual reality therapy provides realistic, computer-based simulations of troublesome situations. The modelling process involves the client observing other individuals who demonstrate behaviour that is considered adaptive and that should be adopted by the client. This exposure involves not only the cues of the "model person" but also the situations in which the behaviour occurs, so that the relationship between the appropriateness of a certain behaviour and the situation in which it occurs can be seen. With the behavioural rehearsal and homework treatment, a client learns a desired behaviour during a therapy session and then practices and records that behaviour between sessions. Aversion therapy and punishment is a technique in which an aversive (painful or unpleasant) stimulus is used to decrease unwanted behaviours. It is concerned with two procedures: (1) procedures used to decrease the likelihood or frequency of a certain behaviour, and (2) procedures that reduce the attractiveness of certain behaviours and the stimuli that elicit them. In the punishment component of aversion therapy, the aversive stimulus is presented at the same time as the stimulus associated with the unwanted behaviour, and both are stopped when a positive stimulus or response is presented. Examples of the types of aversive stimuli or punishments that have been used include shock treatments, aversive drug treatments, and response-cost contingent punishment, which involves taking away a reward.
Applied behaviour analysis uses behavioural methods to modify certain behaviours that are seen as being socially or personally important. There are four main characteristics of applied behaviour analysis. First, behaviour analysis focuses mainly on overt behaviours in an applied setting. Treatments are developed as a way to alter the relationship between those overt behaviours and their consequences.
Another characteristic of applied behaviour analysis is how it goes about evaluating treatment effects: the focus of study is the individual subject, and the investigation is centred on the one individual being treated. A third characteristic is its focus on what the environment does to cause significant behaviour changes. The final characteristic of applied behaviour analysis is the use of techniques that stem from operant and classical conditioning, such as providing reinforcement, punishment, stimulus control and any other learning principles that may apply.
Social skills training teaches clients skills to access reinforcers and lessen life punishment. Operant conditioning procedures in meta-analysis had the largest effect size for training social skills, followed by modelling, coaching, and social cognitive techniques in that order. Social skills training has some empirical support particularly for schizophrenia. However, with schizophrenia, behavioural programs have generally lost favour.
Some other techniques that have been used in behaviour therapy are contingency contracting, response costs, token economies, biofeedback, and using shaping and grading task assignments.
Shaping and graded task assignments are used when the behaviour that needs to be learned is complex. The complex behaviours that need to be learned are broken down into simpler steps where the person can achieve small things, gradually building up to the more complex behaviour. Each step approximates the eventual goal and helps the person to expand their activities in a gradual way. This technique is used when a person feels that something in their life cannot be changed and life's tasks appear overwhelming.
Another technique of behaviour therapy involves holding a client or patient accountable for their behaviours in an effort to change them. This is called a contingency contract: a formal written contract between two or more people that defines the specific expected behaviours to be changed and the rewards and punishments that go along with them. In order for a contingency contract to be official it needs to have five elements. First, it must state what each person will get if they successfully complete the desired behaviour. Second, the people involved have to monitor the behaviours. Third, if the desired behaviour is not performed in the way that was agreed upon in the contract, the punishments that were defined in the contract must be applied. Fourth, if the persons involved are complying with the contract, they must receive bonuses. The last element involves documenting compliance and noncompliance while using this treatment in order to give the persons involved consistent feedback about the target behaviour and the provision of reinforcers.
A token economy is a behaviour therapy technique in which clients are reinforced with tokens, a type of currency that can be used to purchase desired rewards, such as being able to watch television or getting a snack that they want, when they perform designated behaviours. Token economies are mainly used in institutional and therapeutic settings. In order for a token economy to be effective there must be consistency in administering the program by the entire staff. Procedures must be clearly defined so that there is no confusion among the clients. Instead of looking for ways to punish the patients or to deny them rewards, the staff has to reinforce the positive behaviours so that the clients will increase the occurrence of the desired behaviour. Over time the tokens need to be replaced with less tangible rewards, such as compliments, so that the client will be prepared when they leave the institution and will not expect to get something every time they perform a desired behaviour.
Closely related to token economies is a technique called response cost. This technique can be used with or without token economies. Response cost is the punishment side of token economies, in which a reward or privilege is lost after someone performs an undesirable behaviour. Like token economies, this technique is used mainly in institutional and therapeutic settings.
=== In rehabilitation ===
Currently, there is a greater call for behavioural psychologists to be involved in rehabilitation efforts.
=== Treatment of mental disorders ===
Two large studies done by the Faculty of Health Sciences at Simon Fraser University indicate that both behaviour therapy and cognitive-behavioural therapy (CBT) are equally effective for OCD. CBT is typically considered the "first-line" treatment for OCD. CBT has also been shown to perform slightly better at treating co-occurring depression.
Considerable policy implications have been inspired by behavioural views of various forms of psychopathology. One form of behaviour therapy (habit reversal training) has been found to be highly effective for treating tics.
There has been a development towards combining techniques to treat psychiatric disorders. Cognitive interventions are used to enhance the effects of more established behavioural interventions based on operant and classical conditioning. Increased effort has also been made to address the interpersonal context of behaviour.
Behaviour therapy can be applied to a number of mental disorders and is often more effective for some disorders than for others. Behaviour therapy techniques can be used to treat phobias. Desensitization has also been successfully applied to other issues, such as anger management, sleeping problems, and certain speech disorders. Desensitization does not occur overnight; treatment proceeds over a number of sessions along a hierarchy, from situations that make a person only mildly anxious or nervous up to those considered extreme for the patient.
Modelling has been used in dealing with fears and phobias. Fears are thought to develop through observational learning, so positive modelling, in which a person imitates a model's behaviour, can be used to counter these effects. In a systematic review of 1,677 papers, positive modelling was found to lower fear levels. Modelling has been used in the treatment of fear of snakes as well as fear of water.
Aversive therapy techniques have been used to treat sexual deviations, as well as alcohol use disorder.
Exposure and response prevention techniques can be used to treat people with anxiety problems as well as fears or phobias. These procedures have also been used to help people deal with anger issues and to treat pathological grievers (people who have distressing thoughts about a deceased person).
Virtual reality therapy deals with fear of heights, fear of flying, and a variety of other anxiety disorders. VRT has also been applied to help people with substance abuse problems reduce their responsiveness to certain cues that trigger their need to use drugs.
Shaping and graded task assignments have been used with suicidal, depressed, or inhibited individuals. This approach is used when a patient feels hopeless and sees no way of changing their life. This hopelessness involves how the person reacts and responds to other people and situations, and their perceived powerlessness to change those situations, which adds to the hopelessness. For a person with suicidal ideation, it is important to start with small steps; because such a person may perceive everything as a big step, the smaller the starting point, the easier it will be for the person to master each step. This technique has also been applied to people with agoraphobia, the fear of being in public places or doing something embarrassing.
Contingency contracting has been used effectively to deal with behaviour problems in delinquents and with on-task behaviour in students.
Token economies are used in controlled environments, mostly psychiatric hospitals. They can be used to help patients with different mental illnesses, although the focus is not on treating the mental illness itself but on the behavioural aspects of the patient. The response cost technique has been used to successfully address a variety of behaviours, such as smoking, overeating, stuttering, and psychotic talk.
=== Treatment outcomes ===
Systematic desensitization has been shown to successfully treat phobias of heights, driving, and insects, as well as anxieties such as social anxiety, public-speaking anxiety, and test anxiety. It is an effective technique that can be applied to a number of problems a person may have.
Modelling procedures are often compared with other behavioural therapy techniques. Compared to desensitization, modelling appears to be less effective. However, the greater the interaction between the patient and the person being modelled, the greater the effectiveness of the treatment.
While undergoing exposure therapy, a person typically needs five sessions to assess the treatment's effectiveness. After five sessions, exposure treatment has been shown to provide benefit to the patient; however, it is still recommended that treatment continue beyond the initial five sessions.
Virtual reality therapy (VRT) has been shown to be effective for fear of heights and to help in the treatment of a variety of other anxiety disorders. Because of the costs associated with VRT, as of 2007 therapists were still awaiting the results of controlled trials to assess which applications demonstrate the best results.
For those with suicidal ideation, treatment depends on the severity of the person's depression and sense of hopelessness. If these are severe, the person's response to completing small steps will not matter to them, because they do not consider the success an accomplishment. Generally, in those without severe depression or fear, this technique has been successful, as completing simpler activities builds their confidence and allows them to progress to more complex situations.
Contingency contracts have been shown to be effective in changing undesired behaviours, including behaviour problems in delinquents, regardless of the specific characteristics of the contract.
Token economies have been shown to be effective in treating patients with chronic schizophrenia in psychiatric wards. The results showed that the contingent tokens were controlling the behaviour of the patients.
Response cost has been shown to suppress a variety of behaviours, such as smoking, overeating, and stuttering, across a diverse group of clinical populations ranging from sociopaths to school children. Behaviours suppressed using this technique often do not recover when the punishment contingency is withdrawn, and the undesirable side effects usually seen with punishment are typically not found with response cost.
== "Third generation" ==
Since the 1980s, a series of new behavioural therapies have been developed; these were later labelled by Steven C. Hayes as the "third generation" of behaviour therapy. Under this classification, the first generation of behaviour therapy is that developed independently in the 1950s by Joseph Wolpe, Ogden Lindsley, and Hans Eysenck, while the second generation is the cognitive therapy developed by Aaron Beck in the 1970s.
Other authors object to the term "third generation" or "third wave" and incorporate many of the "third wave" therapeutic techniques under the general umbrella term of modern cognitive behavioural therapies.
This "third wave" of behavioural therapy has sometimes been called clinical behaviour analysis because it has been claimed that it represents a movement away from cognitivism and back toward radical behaviourism and other forms of behaviourism, in particular functional analysis and behavioural models of verbal behaviour. This area includes acceptance and commitment therapy (ACT), cognitive behavioural analysis system of psychotherapy (CBASP) (McCullough, 2000), behavioural activation (BA), dialectical behaviour therapy, functional analytic psychotherapy (FAP), integrative behavioural couples therapy, metacognitive therapy and metacognitive training. These approaches are squarely within the applied behaviour analysis tradition of behaviour therapy.
Acceptance and commitment therapy (ACT) may be the most well-researched of all the third-generation behaviour therapy models. It is based on relational frame theory. As of March 2022, there are over 900 randomized trials of ACT and 60 mediational studies of the ACT literature, and ACT has been included in over 275 meta-analyses and systematic reviews. As a result of multiple randomized trials of ACT, the World Health Organization now distributes ACT-based self-help for "anyone who experiences stress, wherever they live, and whatever their circumstances". As of March 2022, a number of organizations have stated that ACT is empirically supported, in certain areas or as a whole, according to their standards. These include the American Psychological Association's Society of Clinical Psychology (Div. 12), the World Health Organization, the United Kingdom National Institute for Health and Care Excellence (NICE), the Australian Psychological Society, the Netherlands Institute of Psychologists (Sections of Neuropsychology and Rehabilitation), the Swedish Association of Physiotherapists, SAMHSA's National Registry of Evidence-based Programs and Practices, the California Evidence-Based Clearinghouse for Child Welfare, and the U.S. Veterans Affairs/DoD.
Functional analytic psychotherapy is based on a functional analysis of the therapeutic relationship. It places a greater emphasis on the therapeutic context and returns to the use of in-session reinforcement. In general, 40 years of research supports the idea that in-session reinforcement of behaviour can lead to behavioural change.
Behavioural activation emerged from a component analysis of cognitive behaviour therapy. Researchers hope to demonstrate that it can be a complete treatment in its own right. Behavioural activation is based on a matching model of reinforcement. A recent review of the research supports the notion that behavioural activation is clinically important in the treatment of depression.
Integrative behavioural couples therapy developed from dissatisfaction with traditional behavioural couples therapy. Integrative behavioural couples therapy looks to Skinner (1966) for the difference between contingency-shaped and rule-governed behaviour. It couples this analysis with a thorough functional assessment of the couple's relationship. Recent efforts have used radical behavioural concepts to interpret a number of clinical phenomena including forgiveness.
A review study published in 2008 concluded that, at the time, third-generation behavioural psychotherapies did not meet the criteria for empirically supported treatments.
== Organisations ==
Many organisations exist for behaviour therapists around the world. In the United States, doctoral-level behaviour analysts who are psychologists belong to the American Psychological Association's Division 25 (behaviour analysis), and the APA offers a diploma in behavioural psychology. The Association for Contextual Behavioral Science (ACBS) is another professional organisation; it is home to many clinicians with a specific interest in third-generation behaviour therapy.
The Association for Behavioral and Cognitive Therapies (formerly the Association for the Advancement of Behavior Therapy) is for those with a more cognitive orientation. The ABCT also has an interest group in behaviour analysis, which focuses on clinical behaviour analysis. In addition, the Association for Behavioral and Cognitive Therapies has a special interest group on addictions.
== Characteristics ==
By nature, behavioural therapies are empirical (data-driven), contextual (focused on the environment and context), functional (interested in the effect or consequence a behaviour ultimately has), probabilistic (viewing behaviour as statistically predictable), monistic (rejecting mind–body dualism and treating the person as a unit), and relational (analysing bidirectional interactions).
Behaviour therapy develops and provides behavioural intervention strategies and programs for clients, and training for those who care for them, in order to facilitate successful lives in various communities.
== Training ==
Recent efforts in behavioural psychotherapy have focused on the supervision process. A key point of behavioural models of supervision is that the supervisory process parallels the behavioural psychotherapy provided.
== See also ==
== References ==
In music therapy, improvisation is defined as a process in which the client and therapist (or client and other clients) relate to one another through music. The client makes up music while singing or playing, extemporaneously creating a melody, rhythm, song, or instrumental piece. Improvisation may occur individually, in a duet, or in a group. The client may use any musical or nonmusical medium within their capabilities. Musical media include voice, body sound, percussion, and string, wind, and keyboard instruments. Nonmusical media can consist of images, titles, and stories.
== How improvisation fits into music therapy ==
Music therapy is a systematic process; it is not a series of random events. Systematic means that music therapy is "purposeful, organized, methodical, knowledge-based, and regulated" (Bruscia 1998). One of the most important features is its methodical processes. Methodical means that music therapy always proceeds in an orderly fashion. It involves three basic steps: assessment, treatment, and evaluation. Treatment is the part of a music therapy process in which the therapist engages the client in various musical experiences, employing specific methods and in-the-moment techniques. When planning treatment, the music therapist has to select the types of music and music experiences that will be most relevant to the client. There are four basic types of music experiences, or methods, in which a client may be engaged: listening, re-creating, composition, and improvisation.
== Characteristics of improvisation in music therapy ==
Clinical improvisation is a generative and creative process of musical intervention involving the client's spontaneous creation of sounds and music. It helps the client to explore aspects of self, in relation to others, in an appropriate way. Improvisation also generates new and individualized musical forms. Using musical improvisation in a therapeutic setting can increase independence, and its interactive use facilitates problem-solving because it is flexible rather than predetermined. Involving the client in an improvisational experience can develop social skills and interaction.
== Clinical goals of improvisation experiences ==
According to Bruscia (1998), clinical goals that can be achieved through improvisation are as follows:
Establish a nonverbal channel of communication, and a bridge to verbal communication
Provide a fulfilling means of self-expression and identity formation
Explore various aspects of self in relation to others
Develop the capacity for interpersonal intimacy
Develop group skills
Develop creativity, expressive freedom, and playfulness with various degrees of structure
Stimulate and develop the senses
Play, on the spot, with a decisiveness that invites clarity of intention
Develop perceptual and cognitive skills
== Improvisational methods and their variations ==
Improvisation can be carried out with both musical and nonmusical references. (Bruscia 1987, 10)
Referential improvisations are those in which the client improvises to portray a nonmusical reference (e.g., an event, feeling, image, relationship, etc.)
Non-referential improvisations are those in which the client improvises without reference to anything other than the sounds or music.
Frequently used variations are as follows:
== Basic therapeutic techniques ==
Bruscia (1987) and Wigram (2004) introduced a variety of improvisational techniques in their books. Among these, there are a few major therapeutic techniques. Imitating is a basic technique of empathy in which the music therapist copies or repeats a client's response, after the response has been displayed. The music therapist focuses on any sound, rhythm, interval or even facial expression. Reflecting is a technique in which the music therapist expresses the same moods or feelings which have been presented by the client. Rhythmic grounding is implemented by establishing a steady beat or rhythm, supporting the client's improvisation. The use of a rhythmic ostinato is an example of rhythmic grounding. Dialoguing is a process in which the music therapist and the client communicate through their improvisations. Lastly, accompanying is a technique in which the music therapist supports the client's improvisation by giving an accompaniment that consists of rhythm, melody, and chord progressions.
== Integration of therapeutic methods ==
It is important to have variety in music therapy sessions. Improvisation should be conducted using more than just one or two methods and techniques. It is also critical to maintain flexibility during the improvisation. For example, the music therapist can preserve a flexible session flow by incorporating several methods, such as imitating, accompanying, dialoguing, and rhythmic grounding.
== Benefits of using improvisation methods ==
Using improvisation in music therapy has specific benefits for those with neurological problems. These benefits range from reducing anxiety and stress to improving communication and behavioural and attentional problems in children and young adults. This is attributed to the proposed idea that improvisational music therapy links the unconscious and the conscious brain, promoting social and creative interaction. Many believe it is a useful tool for connecting with patients on a deeper level in order to bring out these characteristics and benefits. Improvisation challenges the psyche in an engaging way and has shown good results in promoting lasting benefits.
== References ==
Bruscia, Kenneth E. 1987. Improvisational Models of Music Therapy. Springfield, IL: Charles C. Thomas Publications.
Bruscia, Kenneth E. 1998. Defining Music Therapy. Gilsum, NH: Barcelona Publishers.
Wigram, Tony. 2004. Improvisation: Methods and Techniques for Music Therapy Clinicians, Educators and Students. New York: Jessica Kingsley Publishers.
== External links ==
Improvisation, with definitions and characteristics of improvisation.
American Music Therapy Association (AMTA)