content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
YIELD Function in Excel (Formula, Examples) | How to Use YIELD?
Updated May 2, 2023
YIELD Function in Excel
Yield function is any financial function used to calculate the yield value of a deposit for security for a fixed period rate of interest. For example, we have a security deposit for some reason
somewhere. We apply the payment of mutually agreed-upon interest. With the help of the Yield function, we can find what the yield would apply to the security deposit asset.
YIELD Formula in Excel:
Below is the YIELD Formula in Excel.
Explanation of YIELD Function in Excel
YIELD formula in Excel has the following arguments:
• Settlement: The buyer is provided with the date of purchase for bonds or securities or the date of issue when they are traded.
Note: Users should always enter settlement dates using the DATE function in Excel instead of entering them as text values.
E.g. =DATE(2018,6,14) is used for the 14th day of June 2018
• Maturity: When the coupon will be purchased back, or It is the maturity date when the security or bond expires
Note: Users should always input the maturity date using the DATE function in Excel instead of as text. For example, the formula =DATE(2018,6,14) would represent June 14th, 2018.
• Rate: The guaranteed annual interest rate (%) offered.
• Pr: The price the bond was purchased at.
• Redemption: The face value of the bond that it will be purchased back at or is the security’s redemption value per $100 face value.
• Frequency: It is the number of payments per year. Usually, for annual payments, the frequency will be 1. For semiannual, the frequency is 2 & for quarterly. A frequency is 4.
• Basis: It’s an optional parameter. It is an optional integer parameter that specifies the day count basis used by the bond or security.
The YIELD function in Excel allows any of the values between 0 to 4 mentioned below:
• 0 or omitted US (NASD) 30/360
• 1 Actual/actual
• 2 Actual/360
• 3 Actual/365
• 4 European 30/360
How to Use YIELD Function in Excel?
The YIELD function in Excel is very simple and easy to use. Let us understand the working of the YIELD function in Excel by some YIELD Formula in Excel examples.
Example #1 – Bond YIELD For QuarterlyPayment
I need to calculate bond yield in this YIELD function in the Excel example. Here the bond is purchased on 16-May-2010, with a maturity date of 16-May-2020 (10 years from the date of settlement), and
a rate of interest is 9%. The bond is bought at a price of 95, and the redemption value is 100; here, it pays interest every quarter.
• Select the cell “C15,” where the YIELD function needs to be applied.
• Click the insert function button (fx) under the formula toolbar. A dialog box will appear, type the keyword “YIELD” in the search for a function box, and the YIELD function will appear in the
select a function box. Double-click on the YIELD function.
• A dialog box appears where arguments for the YIELD function need to be filled or entered. (Note: Settlement & maturity date arguments are entered in the cell using the Excel DATE function) i.e. =
• The yield for the bond with the above terms will be 9.79%. It returns a value of 9.79%.
Example #2 – Semiannual or Half-Yearly Payment
I need to calculate bond yield in this YIELD function in the Excel example. Here, the bond was purchased on 16th November 2018, with a maturity date of 16th November 2023 (5 years from the date of
settlement), and a rate of interest is 9%. The bond is bought at a price of 95, and the redemption value is 101; here, it pays the interest on a half-yearly or semiannual basis.
• Select the cell “G15,” where the YIELD function needs to be applied.
• Click the insert function button (fx) under the formula toolbar. A dialog box will appear, type the keyword “YIELD” in the search for a function box, and the YIELD function will appear in the
select a function box. Double-click on the YIELD function.
• A dialog box appears where arguments for the YIELD function need to be filled or entered (Note: Settlement & maturity date argument are entered in the cell using the Excel DATE function), i.e. =
YIELD(G8,G9,G10,G11, G12,2,0).
• The yield for the bond with the above terms will be 10.47%. It returns a value of 10.47%.
Example #3 – Annual or Yearly Payment
I need to calculate bond yield in this YIELD function in the Excel example. Here, the bond was purchased on 13th November 2018, with a maturity date of 13th November 2020 (2 years from the date of
settlement) rate of interest is 9%. The bond is bought at a price of 95, and the redemption value is 101; here, it pays the interest yearly or annually.
• Select the cell “D30,” where the YIELD function needs to be applied.
• Click the insert function button (fx) under the formula toolbar. A dialog box will appear, type the keyword “YIELD” in the search for a function box, and the YIELD function will appear in the
select a function box. Double-click on the YIELD function.
• A dialog box appears where arguments for the YIELD function need to be filled or entered (Note: Settlement & maturity date argument is entered in the cell using the Excel DATE function) i.e. =
• The yield for the bond with the above terms will be 12.45%. It returns a value of 12.45%.
Things to Remember
• #VALUE! Error: It occurs if the settlement & maturity dates are not in the correct format (If it is entered as text value instead of using the date function) or if any of the other values are not
a numeric value, i.e. Non-numeric.
• #NUM! Error: It occurs if the settlement date exceeds the maturity date or if any other numbers are entered incorrectly. i.e. When rate, price, redemption, or frequency value is less than or
equal to zero.
The Yield & rate of interest cells is formatted to show a percentage with decimal places.
Recommended Articles
This has been a guide to the YIELD function in Excel. Here we discuss the YIELD Formula in Excel and How to use the YIELD Function in Exel, along with practical examples and a downloadable Excel
template. You can also go through our other suggested articles – | {"url":"https://www.educba.com/yield-function-in-excel/","timestamp":"2024-11-08T05:06:41Z","content_type":"text/html","content_length":"350917","record_id":"<urn:uuid:b25c70f7-e02f-4f8d-9816-579a94014b39>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00176.warc.gz"} |
Jan Rataj
Born 1962 in Prague
RNDr. (M.Sc. equiv.) in Mathematics, Probability Theory - Charles University, Prague, 1985
CSc. (Ph.D. equiv.) in Mathematics - Czech Academy of Sciences, Prague, 1991
Doc. (Ass. Prof. equiv.) in Mathematics, Geometry an Topology - Charles University, Prague, 2000
Professor in Mathematics, Geometry and Topology - Charles University, Prague, 2011
Institute of Geophysics, Czechoslovak Academy of Sciences, Prague, 1987-1990
Mathematical Institute, Czech Academy of Sciences, Prague, 1990-1991
Faculty of Mathematics an Physics, Charles University, Prague, since 1991
Stays abroad:
Friedrich-Schiller-University Jena, October 1993 - April 1994
University of Karlsruhe, May - September 2000
Universty of Ulm, February 2005
Universty of Aarhus, March - April 2005
Friedrich-Schiller-University Jena, April - June 2012
Publication activity: coauthor of 2 monographs, author or coauthor of about 50 research papers
Teaching activity: Basic courses on analysis, differential geometry and measure theory, advanced courses on geometric measure theory, convex geometry and point processes
PhD students: Tomáš Mrkvička (Ph.D. 2002), Rostislav Černý (Ph.D. 2007), Ondřej Honzl (Ph.D. 2012)
Other activities:
Editor in Chief of Commentationes Mathematicae Universitatis Carolinae
Reviewer of Mathematical Reviews | {"url":"https://www.karlin.mff.cuni.cz/~rataj/cv.html","timestamp":"2024-11-03T10:32:27Z","content_type":"text/html","content_length":"2417","record_id":"<urn:uuid:8ad1bacf-8dcf-4d99-af0e-5a09c2e8dc86>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00321.warc.gz"} |
Alzheimer’s Disease Classification Using Wavelet-Based Image Features
In the proposed method DD-DTCWT is applied on 2-D MR scans and sixteen high-frequency subbands are obtained at the first level of decomposition. Now Shifted Circular Elliptical Local Descriptors are
used to obtain the local micro and macro patterns from the 16 sub-images. Different versions like Median WSCELD, Mean WSCELD, Energy WSCELD and Variance WSCELD have been tested and Energy WSCELD has
been proposed for detection of AD at different stages on account of its performance. Figure 1 indicates the block diagram of the proposed methodology.
The LBP [28] detects the geometric features like edges, hard lines, and corners in the images and provides the local spatial structural patterns. These patterns are obtained by generating a binary
code for a centre pixel by comparing the neighbouring pixels with the centre pixel value. In CLBP all neighbouring pixels are present on a circle of radius R from the centre pixel. Figure 2 shows the
CLBP with a 3x3 neighbourhood. The CLBP value of a pixel Pc (X[ce], Y[ce]) with its N neighbours can be calculated as in Eq. (1).
$\operatorname{LBP}_{\mathrm{N}, \mathrm{R}}\left(\mathrm{X}_{\mathrm{ce}}, \mathrm{Y}_{\mathrm{ce}}\right)=\sum_{\mathrm{n}=1}^{\mathrm{N}} \operatorname{Sign}(\mathrm{Y}) 2^{\mathrm{n}-1}$
Y=Pn(R)-Pc where Pn represents the neighbour pixel at R distance from centre pixel Pc and the value of Y can be assigned 0 and 1 based on Eq. (2).
$\text{Sign}=\left\{\begin{array}{l}1, Y \geq 0 \\ 0, Y<0\end{array}\right.$ (2)
Figure 1. The flow of process in the proposed methodology
Figure 2. (a) CLBP (b) H-ELBP (c) Right-oriented Diagonal ELBP (d) Left-oriented Diagonal ELBP (e) V-ELBP
The coordinates of N number of neighbours Pn (X[ne], Y[ne]) around the centre pixel are obtained by using Eq. (3) and Eq. (4).
X[ne]=X[ce]+R cos(2π/N) (3)
Y[ne]=Y[ce]+R sin(2π/N) (4)
ELBP [29] considers that neighbouring pixels are in an elliptical pattern around the centre pixel. The ellipse orientation can be diagonal, horizontal, and vertical. The ELBP value of a pixel Pc (X
[ce], Y[ce]) with its N neighbours lying on the ellipse of radius R1 horizontally and R2 vertically around it, can be calculated by using Eq. (5).
$\operatorname{ELBP}_{\mathrm{N}, \mathrm{R} 1, \mathrm{R} 2}\left(\mathrm{X}_{\mathrm{ce}}, \mathrm{Y}_{\mathrm{ce}}\right)=\sum_{\mathrm{n}=1}^{\mathrm{N}} \operatorname{Sign}(\mathrm{Y}) 2^{\
mathrm{n}-1}$ (5)
Y=Pn (R1, R2)-Pc where Pn represents the neighbour pixel at R1 and R2 horizontal and vertical distance respectively from centre pixel Pc and the value of Y can be assigned 0 and 1 based on Eq. (6).
$\text{Sign}=\left\{\begin{array}{l}1, Y \geq 0 \\ 0, Y<0\end{array}\right.$ (6)
The coordinates of N number of neighbours Pn (X[ne], Y[ne]) around the centre pixel are obtained through Eq. (7) and Eq. (8).
X[ne]=X[ce]+R1 cos(2π/N) (7)
Y[ne]=Y[ce]+R2 sin(2π/N) (8)
Figure 2 shows the different patterns of Circular LBP and Elliptical LBP.
2.1 Circular elliptical local descriptor (CELD)
In the proposed method, CELD extracts isotropic and anisotropic structural details with a small-size feature vector. CELD generates a unique code by thresholding the eight neighbouring points that
are needed for circular LBP, horizontal, vertical and diagonal ELBPs around a centre pixel Pc(X[ce],Y[ce]). The eight neighbouring points in CELD are taken by combining the two pixels (P2 and P21) at
top, two pixels (P6 and P61) at bottom, two pixels (P8 and P81) at left, two pixels (P4 and P41) at right, and four pair of two diagonal pixels (P1 and P11, P5 and P51, P3 and P31, P7 and P71) as
shown in Figure 3.
Figure 3. Eight neighbouring points in CELD
Formulas used in the formulation of 3X3 neighbourhood in eight neighbouring points CELD are mentioned in Eq. (9) to Eq. (12).
P1=int(P1+P11)/2 P2=int(P2+P21)/2 (9)
P3=int(P3+P31)/2 P4=int(P4+P41)/2 (10)
P5=int(P5+P51)/2 P6=int(P6+P61)/2 (11)
P7=int(P7+P71)/2 P8=int(P8+P81)/2 (12)
Based on the above details the CELD can be formulated as in Eq. (13):
CELD N, R1, R2 $\left(\mathrm{X}_{\mathrm{ce}}, \mathrm{Y}_{\mathrm{cs}}\right)=\sum_{\mathrm{n}=1}^{\mathrm{N}} \operatorname{Sign}(\mathrm{Y}) 2^{\mathrm{n}-1}$ (13)
where, Y=Pn(R1, R2)–Pc and Pn represents the neighbour pixel and Pc represents the centre pixel and the value of Y can be assigned as 0 and 1 based on the Eq. (14).
$\text{Sign}=\left\{\begin{array}{l}1, Y \geq 0 \\ 0, Y<0\end{array}\right.$ (14)
Here R1 is the radius of circular LBP, the vertical radius of Horizontal ELBP and the horizontal radius of Vertical ELBP; The R2 is the horizontal radius of Horizontal ELBP and the vertical radius of
Vertical ELBP.
2.2 Shifted circular elliptical local descriptor (SCELD)
A shifted version of CELD helps in capturing all possible micro and macro patterns which is essential for AD detection. These micropatterns include fine-grained textures and minute differences in
local regions of brain images. This can help in detecting atrophies at a cellular or subcellular level, such as changes in neuronal structures and synapse density, or the existence of microscopic
lesions. Macro patterns include larger-scale features in brain images which can be obtained by evaluating overall brain structure, locating atrophied regions, and detecting macroscopic atrophies like
enlarged ventricles or cortical thinning. The histogram features obtained through shifted CELD provide high structural information and can lead to good classification results.
Figure 4. Sample of eight neighbouring points in CELD with centre pixel
Shifted CELD can be obtained by obtaining 8 patterns by using the shifted version shown in Figure 4 and Figure 5. In the proposed work the average CELD value of all patterns has been taken to reduce
the computation burden. Eq. (15) to Eq. (24) represent the SCELD.
$\begin{aligned} \operatorname{SCELD} \cdot N, R 1, R 2 & \cdot \operatorname{Pc}\left(X_{\text {ce }}, Y_{\mathrm{ce}}\right) =\left[\operatorname{CELD}_{\text {Pattern1 }}+\operatorname{CELD}_{\
text {Pattern2 }}\right. +\operatorname{CELD}_{\text {Pattern3 }} +\text { CELD }_{\text {Pattern4 }} \\ & +\operatorname{CELD}_{\text {Pattern5 }} +\operatorname{CELD}_{\text {Pattern6 }}+\text {
CELD }_{\text {Pattern7 }} \left.+\operatorname{CELD}_{P_{\text {attern8 }}}\right] / 8\end{aligned}$ (15)
$\operatorname{CELD}_{\text {Pattern1 }}=\sum_{\mathrm{n}=1}^{\mathrm{N}} \operatorname{Sign}(\mathrm{Y} 1) 2^{\mathrm{n}-1}$ (16)
$\operatorname{CELD}_{\text {Pattern2}}=\sum_{\mathrm{n}=1}^{\mathrm{N}} \operatorname{Sign}(\mathrm{Y} 2) 2^{\mathrm{n}-1}$ (17)
$\operatorname{CELD}_{\text {Pattern3 }}=\sum_{\mathrm{n}=1}^{\mathrm{N}} \operatorname{Sign}(\mathrm{Y} 3) 2^{\mathrm{n}-1}$ (18)
$\operatorname{CELD}_{\text {Pattern4 }}=\sum_{\mathrm{n}=1}^{\mathrm{N}} \operatorname{Sign}(\mathrm{Y} 4) 2^{\mathrm{n}-1}$ (19)
$\operatorname{CELD}_{\text {Pattern5 }}=\sum_{\mathrm{n}=1}^{\mathrm{N}} \operatorname{Sign}(\mathrm{Y} 5) 2^{\mathrm{n}-1}$ (20)
$\operatorname{CELD}_{\text {Pattern6 }}=\sum_{\mathrm{n}=1}^{\mathrm{N}} \operatorname{Sign}(\mathrm{Y} 6) 2^{\mathrm{n}-1}$ (21)
$\operatorname{CELD}_{\text {Pattern7 }}=\sum_{\mathrm{n}=1}^{\mathrm{N}} \operatorname{Sign}(\mathrm{Y} 7) 2^{\mathrm{n}-1}$ (22)
$\operatorname{CELD}_{\text {Pattern8 }}=\sum_{\mathrm{n}=1}^{\mathrm{N}} \operatorname{Sign}(\mathrm{Y} 8) 2^{\mathrm{n}-1}$ (23)
Y1=Pn(R1,R2)-Pc Y2=Pn[+1](R1,R2)-Pc
Y3=Pn[+2](R1,R2)-Pc Y4=Pn[+3](R1,R2)-Pc
Y5=Pn[+4](R1,R2)-Pc Y6=Pn[+5](R1,R2)-Pc
Y7=Pn[+6] (R1,R2)-Pc Y8=Pn[+7](R1,R2)-Pc
Figure 5. Shifted eight patterns
$\text{Sign}=\left\{\begin{array}{l}1, Y \geq 0 \\ 0, Y<0\end{array}\right.$ (24)
SCELD provides 256 histogram bins which represent the average value of histogram bins of 8 shifted patterns.
2.3 Wavelet-based shifted circular elliptical local descriptor (WSCELD)
The combination of SCELD with wavelet results in WSCELD which provides the directional multiple patterns and enhances the classification accuracy. The wavelet transform captures directional
information through the decomposition into different subbands, each related to a particular orientation and scale. SCELD applied to each subband independently, captures local texture patterns within
each frequency band. The combination of wavelet subbands and SCELD provides a multi-scale representation, allowing the algorithm to analyze structural changes, grey matter density fluctuations and
textures at different levels of detail. Eq. (25) to Eq. (29) indicate how the real coefficients of complex wavelets extract average information and imaginary coefficients extract structural
Let complex wavelet have $\chi_{\mathrm{s}}(\mathrm{t})=\mathrm{x}_{\mathrm{s}}(\mathrm{t})+\mathrm{y}_{\mathrm{s}}(\mathrm{t})$ as scaling function and $\varphi_{\mathrm{w}}(\mathrm{t})=\mathrm{u}_
{\mathrm{w}}(\mathrm{t})+\mathrm{iv}_{\mathrm{w}}(\mathrm{t})$ as wavelet function. For scaling function, ratio between $\mathrm{x}_{\mathrm{s}}(\mathrm{w})$ and $\mathrm{y}_{\mathrm{s}}(\mathrm{w})$
$x_s(w)=-\frac{y_s(w)}{x_s(w)}$ (25)
where, $\mathrm{x}_{\mathrm{s}}(\mathrm{w})$ and $\mathrm{y}_{\mathrm{s}}(\mathrm{w})$ are Fourier Transform (FT) of $\mathrm{x}_{\mathrm{s}}(\mathrm{t})$ and $\mathrm{y}_s(t)$ respectively. $\
lambda_s(w)$ is surely real-valued and acts as $\mathrm{w}^2$ for $|\mathrm{w}|<\pi$ [30]. $y_s(t)$ is approximately equal to the second derivative of $x_s(t)$ multiple by some constant factor.
For wavelet function $\varphi_{\mathrm{w}}(\mathrm{t})$ also, the ratio between $\mathrm{u}_{\mathrm{w}}(\mathrm{w})$ and $\mathrm{v}_{\mathrm{w}}(\mathrm{w})$ is
$\omega_w(w)=-\frac{v_w(w)}{u_w(w)}$ (26)
where, $u_w(w)$ and $v_w(w)$ are FT of $u_w(t)$ and $v_w(t)$ respectively. $\varpi_w(w)$ is also real valued and $v_w(t)$ is approximately equal to the second derivative of $u_w(t)$ multiple by some
constant factor.
There exists a relationship between the real component of wavelet function and scaling function as:
$\zeta(\mathrm{w})=-\mathrm{i} \frac{\mathrm{u}_{\mathrm{w}}(\mathrm{w})}{\mathrm{x}_{\mathrm{s}}(\mathrm{w})}$ (27)
where, $\zeta(\mathrm{w})$ is surely real-valued and acts as $\mathrm{w}^{\mathrm{m}+1}$ for $|\mathrm{w}|<$ $\pi[30]$.
Eq. (25) and Eq. (26) indicate $\mathrm{y}_{\mathrm{s}}(\mathrm{t}) \approx \lambda_{\mathrm{s}} \Delta \mathrm{x}_{\mathrm{s}}(\mathrm{t})$ and $v_w(t) \approx \omega_w \Delta u_w(t)$. This gives
multi-scale projections as:
$\begin{aligned} & \left(\mathrm{s}_{\mathrm{si}}(\mathrm{t}), \chi_{\mathrm{m}, \mathrm{k}}(\mathrm{t})\right)=\left(\mathrm{s}_{\mathrm{si}}(\mathrm{t}), \mathrm{x}_{\mathrm{m}, \mathrm{k}}(\mathrm
{t})\right)+\mathrm{i}\left(\mathrm{s}_{\mathrm{si}}(\mathrm{t}), \mathrm{y}_{\mathrm{m}, \mathrm{k}}(\mathrm{t})\right) \\ & \quad \approx\left(\mathrm{s}_{\mathrm{si}}(\mathrm{t}), \mathrm{x}_{\
mathrm{m}, \mathrm{k}}(\mathrm{t})\right)+\mathrm{i} \chi_{\mathrm{s}}\left(\mathrm{s}_{\mathrm{si}}(\mathrm{t}), \Delta \mathrm{x}_{\mathrm{m}, \mathrm{k}}(\mathrm{t})\right)\end{aligned}$
$\begin{aligned} & \left(\mathrm{s}_{\mathrm{si}}(\mathrm{t}), \varphi_{\mathrm{m}, \mathrm{k}}(\mathrm{t})\right)=\left(\mathrm{s}_{\mathrm{si}}(\mathrm{t}), \mathrm{u}_{\mathrm{m}, \mathrm{k}}(\
mathrm{t})\right)+\mathrm{i}\left(\mathrm{s}_{\mathrm{si}}(\mathrm{t}), \mathrm{v}_{\mathrm{m}, \mathrm{k}}(\mathrm{t})\right) \\ & \quad \approx\left(\mathrm{s}_{\mathrm{si}}(\mathrm{t}), \mathrm{u}
_{\mathrm{m}, \mathrm{k}}(\mathrm{t})\right)+\mathrm{i} \omega_{\mathrm{w}}\left(\mathrm{s}_{\mathrm{si}}(\mathrm{t}), \Delta \mathrm{u}_{\mathrm{m}, \mathrm{k}}(\mathrm{t})\right)\end{aligned}$
where, ‘m’ and ‘k’ denote the level of decomposition and orientation respectively. $\mathrm{s}_{\mathrm{si}}$ is the signal to be decomposed From Eq. (28) and Eq. (29), it can be concluded that the
real components of scaling function and wavelet function of complex wavelets sustain averaging information, and the imaginary components of scaling and wavelet function sustain edge information. This
average information and edge information play an important role for Alzheimer’s disease detection [31]. The high frequency coefficient in detail subbands provide the grey matter density fluctuations
which is also essential for AD detection [32].
In the proposed work SCELD has been applied on the sixteen sub-bands obtained by first-level DD-DTCWT decomposition. WSCELD with DD-DTCWT provides directional features from sixteen directions with
complete isotropic and anisotropic structural and micro pattern details. The WSCELD with eight neighbours provides histogram bins equal to 256 X number of sub-bands. WSCELD histograms are obtained
and have been used for classification. The total number of histogram features with DD-DTCWT is 256 X 16=4096 which is further reduced by using Principal Component Analysis. The different versions of
WSCELD like Mean WSCELD, Median WSCELD, Variance WSCELD and Energy WSCELD have been investigated. In Mean WSCELD, Median WSCELD, Variance WSCELD and Energy WSCELD, the centre pixel value is replaced
with the mean, median, variance and energy values of the neighbourhood pixels respectively and thresholding of neighbouring pixels is done corresponding to that modified centre pixel. The different
versions of WSCELD are shown in Figure 6.
Figure 6. Different versions of WSCELD | {"url":"https://www.iieta.org/journals/ts/paper/10.18280/ts.410420","timestamp":"2024-11-10T11:51:49Z","content_type":"text/html","content_length":"166247","record_id":"<urn:uuid:8927e2a7-87e9-4aca-a617-9bb11598ee14>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00793.warc.gz"} |
Point A is at (-7 ,-1 ) and point B is at (2 ,-4 ). Point A is rotated pi clockwise about the origin. What are the new coordinates of point A and by how much has the distance between points A and B changed? | HIX Tutor
Point A is at #(-7 ,-1 )# and point B is at #(2 ,-4 )#. Point A is rotated #pi # clockwise about the origin. What are the new coordinates of point A and by how much has the distance between points A
and B changed?
Answer 1
#color(indigo)(2.42" is the change in the distance between A & B due to the rotation of A by " (3pi)/2 " clockwise about the origin"#
$A \left(- 7 , - 1\right) , B \left(2 , - 4\right) , \text{ A rotated " pi " clockwise about origin}$
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
The new coordinates of point A after rotating π radians clockwise about the origin are (1, 7). To calculate the distance between the new point A and point B, we use the distance formula:
Distance = √((x2 - x1)^2 + (y2 - y1)^2)
Distance between the original points A and B:
Distance_AB = √((-7 - 2)^2 + (-1 - (-4))^2) = √((-9)^2 + (3)^2) = √(81 + 9) = √90
Distance_AB ≈ 9.49 units
Distance between the new point A and point B:
Distance_A'B' = √((1 - 2)^2 + (7 - (-4))^2) = √((-1)^2 + (11)^2) = √(1 + 121) = √122
Distance_A'B' ≈ 11.05 units
The change in distance between points A and B is:
ΔDistance = Distance_A'B' - Distance_AB ≈ 11.05 - 9.49 ≈ 1.56 units
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/point-a-is-at-7-1-and-point-b-is-at-2-4-point-a-is-rotated-pi-clockwise-about-th-8f9afa3002","timestamp":"2024-11-03T07:38:50Z","content_type":"text/html","content_length":"576834","record_id":"<urn:uuid:0b47e87d-949e-47f9-b60c-678081b51b65>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00544.warc.gz"} |
How To Calculate Axial Stress
Axial stress describes the amount of force per unit of cross-sectional area that acts in the lengthwise direction of a beam or axle. Axial stress can cause a member to compress, buckle, elongate or
fail. Some parts that might experience axial force are building joists, studs and various types of shafts. The simplest formula for axial stress is force divided by cross-sectional area. The force
acting on that cross section, however, may not be immediately obvious.
Step 1
Determine the magnitude of force that acts directly normal (perpendicular) to the cross section. For example, if a linear force meets the cross section at a 60-degree angle, only a portion of that
force directly causes axial stress. Use the trigonometric function sine to gauge how perpendicular the force is to the face; the axial force equals the magnitude of the force times the sine of the
incident angle. If the force enters at 90-degrees to the face, 100 percent of the force is axial force.
Step 2
Choose a specific point at which to analyze the axial stress. Calculate the cross-sectional area at that point.
Step 3
Calculate the axial stress due to linear force. This is equal to the component of linear force perpendicular to the face divided by the cross-sectional area.
Step 4
Calculate the total moment acting on the cross section of interest. For a static beam, this moment will be equal and opposite to the sum of moments acting on either side of the cross section. There
are two types of moments: direct moments, as applied by a cantilever support, and moments created about the cross section by vertical forces. The moment due to a vertical force equals its magnitude
times its distance from the point of interest. Use the cosine function to calculate the vertical component of any linear forces applied to the ends of the axle.
Step 5
Calculate the axial stress due to moments. When a moment acts on an axle, it creates tension in either the top or bottom half of it, and compression in the other. The stress is zero along the line
that runs through the center of the axle (called the neutral axis), and increases linearly toward both its top and bottom edge. The formula for stress due to bending is (M * y) / I, where M = moment,
y = the height above or below the neutral axis, and I = the moment of inertia at the axle's centroid. You can think of moment of inertia as a beam's ability to resist bending. This number is easiest
to obtain from tables of previous calculations for common cross-sectional shapes.
Step 6
Add the stresses caused by linear forces and moments to obtain the total axial stress for the point analyzed.
• "FE Review Manual: Rapid Preparation for the Gereral Fundamentals of Engineering Exam"; Michael R. Lindburg; 2006
Cite This Article
Painting, Brad. "How To Calculate Axial Stress" sciencing.com, https://www.sciencing.com/calculate-axial-stress-6510025/. 24 April 2017.
Painting, Brad. (2017, April 24). How To Calculate Axial Stress. sciencing.com. Retrieved from https://www.sciencing.com/calculate-axial-stress-6510025/
Painting, Brad. How To Calculate Axial Stress last modified March 24, 2022. https://www.sciencing.com/calculate-axial-stress-6510025/ | {"url":"https://www.sciencing.com:443/calculate-axial-stress-6510025/","timestamp":"2024-11-14T02:28:56Z","content_type":"application/xhtml+xml","content_length":"71581","record_id":"<urn:uuid:2c78c9db-fc4b-464a-b0dc-a9bb62de931d>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00879.warc.gz"} |
Number 108: the sacred number that binds us both to ourselves and to the Universe
Featured photo: yogapractice.com
I recently bought a mala to practise mantra chanting with the use of prayer beads, a practice also known as japa meditation. The first obvious question I got when I got the mala beads was why are
there 108 of them?. And why does this mysterious number appear quite frequently in topics related to Dharmic (Hinduism, Buddhism, Jainism) religions mostly (the number 108 is considered sacred by the
Dharmic Religions)? I had to find out and here’s what I got based on my little research.
1. Cosmic geometry
• The diameter of the sun and the distance between the earth and the sun is 108 times.
• The diameter of the moon and the distance between the moon and the earth is 108 times.
• The diameter of the earth and the diameter of the sun is 108 times.
2. Chakras and Shri Yantra
• According to Ayurvedic texts the human body has between 108 and 117 marmani points. Marmani is a Sanskrit term, plural of marma; a set of energy points found on the surface of the body which are
vital for health and wellbeing, as they contain prana (vital life force energy). It is also believed that there are 108 energy lines that converge to form the heart chakra.
• On the Shri Yantra (referred to as an instrument for Wealth) there are marmas (points) where three lines intersect, and there are 54 such intersections. Each intersection has masculine and
feminine qualities, which represent Shiva and Shakti. 54 x 2 equals 108. Thus, there are 108 points that define the Sri Yantra as well as the human body. The beauty of the Shree Yantra is that it
symbolises all Gods and Goddesses.
3. Astrology
• Vedic astrology observes the movements of nine planets through 12 houses (12 times nine is 108), and also recognises 27 constellations spread throughout four directions (27 times four is 108).
• There are 12 zodiacs and 9 planets and when multiplied, we get 108. Additionally, there are 27 lunar mansions and they are divided into 4 quarters. When 27 is multiplied by 4, the result is 108.
• There are 108 stars in Chinese astrology and while 72 of them are malevolent, the remaining 36 are beneficial.
4. Yoga
• Sun salutations, yogic asanas that honour the sun god Surya, are generally completed in nine rounds of 12 postures, totalling 108.
• In Kriya yoga, the maximum number of repetitions per session is said to be 108.
5. Meditation
• Mantra meditation is usually chanted on a set of 108 beads. This is why all mantras are chanted 108 times because each chant represents a journey from our material self towards our highest
spiritual self. Each chant is believed to bring you 1 unit closer to our God within.
• There are said to be 108 styles of meditation.
• In pranayama, the yogic practice of regulating breath, it is believed that if an individual can be so calm as to only breathe 108 times in one day, enlightenment will be achieved.
• An average person is said to breathe 21,600 times in a 24-hour period. Half, 10,800, are solar energy (breaths during day), and the other half is lunar energy (breaths during night). 100
multiplied with 108 equals 10,800.
6. Hinduism
• Sanskrit, the ancient language the Vedas were originally written in, possesses 54 letters. Each letter can either be feminine (Shakti) or masculine (Shiva), which totals 108. In addition, there
are 108 Upanishads, which are a compilation of texts that clarify and expand on Vedic teachings.
• Each deity in Hinduism has 108 names.
• Both Lord Vishnu and Lord Shiva have 108 names each.
• 108 epistemological doctrines in Hinduism tradition
7. Buddhism
• The Buddha has 108 names and there are 108 lamps devoted to him.
• In Buddhism, there are said to be 108 Earthly desires, 108 lies, and 108 delusions of the mind. Some of these sins and delusions are callousness, blasphemy, anger, abuse, and aggression.
• In Buddhism, it is also believed that the road to nirvana is laden with exactly 108 temptations. So, every Buddhist has to overcome 108 earthly temptations to achieve nirvana. In addition, the
ring of prayer beads worn around the waist of Zen priests is usually made of 108 beads.
• In Tibetan Buddhism, the number 108 mainly stands for the Kangyur, the Tibetan Buddhist canon, a loosely defined collection of 108 volumes of sacred texts recognised by various schools of Tibetan
Buddhism, and described as the Word of the Buddha.
• The Lankavatara Sutra, a Buddhism Mahayana text that figures prominently in Tibetan, Chinese and Japanese Buddhism, has a section where the Bodhisattva Mahamati asks the Buddha 108 questions.
• In Buddhism, it is among the tenets that there are exactly 108 types of defilements – no more and no less. This could be the reason a bell is usually chimed exactly 108 times in Japanese Buddhist
Temples to mark the end of an old year and to usher into a new year.
• Kathmandu is said to be the capital of Buddhism and there are exactly 108 images of Lord Buddha, erected in and around the place in reverence of the Buddhism deity.
• Tibetan legends are made of 108 Masters and 108 initiates.
• 108 saints are celebrated in Japan and they are also known as Vajradhatu.
• Most Buddhist temples usually have 108 steps and 108 columns. A very good example of such temples is the temple at Angkor. The temple is built around 108 huge stones.
• In Tibet, a nomad woman braids her hair into 108 tresses.
8. Feelings
• Both Buddhism and Hinduism believe that every human being has 108 different types of feelings. 36 of these feelings revolve round their past, 36 revolve round the present, and the remaining 36
are based on their dreams and future ambitions.
9. Geography
• River Ganga spans a longitude of 12 degrees (79 to 91) and latitude of 9 degrees (22 to 31). 12 multiplied by 9 equals 108.
• Stonehenge’s diameter has been measured to be 108 feet in diameter.
• PhNom Bakheng, an ancient Shiva Temple located in Cambodia has 108 towers around it.
• There are exactly 108 sacred sites (also called pithas) all over India.
• 108 steps in temples mentioned in the Lankavatara Sutra.
10. Mathematics
Ancient Indians discovered the links between 108 and 9, a more sacred number. The link between 9 and 108 is much more than one being a multiple of the other.
• 108 is a Harshad number. Such a number is an integer divisible by the sum of its digits. (108/9 = 12). In Sanskrit, harsa means “joy” and da means “give”. Thus, Harshad translates to “joy giver.”
• 11 x 22 x 33 = 108. This means (1) x (2×2) x (3x3x3) = 108
• 1 squared plus 2 squared plus 3 squared equals 108
• When 108 is divided by 2, the answer is 54 and 5 + 4 = 9
• When 54 is further divided by 2, it will lead to 27 and 2 + 7 = 9
• When 1 is added to 0 and 8, the answer is 9 (1+0+8 = 9)
• When 108 is multiplied by 2, the resultant figures will result in 9 when added together – 108 x 2 = 216; 2+1+6 = 9
• When 108 is multiplied by 3, the resultant figures will result in 9 when added together – 108 x 2 = 324; 3+2+5 = 9
• 366 days in sidereal year; 3x6x6 = 108
• 108° degrees on inner angles of a pentagon
11. Fibonacci sequence
Applying decimal on the first 24 numbers of the Fibonacci sequence and adding them up gives us 108.
Ex: Decimal parity of 455 = 4 + 5 + 5 = 14 and 1 + 4 = 5. So the decimal parity of 455 is 5.
The first 24 numbers of the Fibonacci Sequence are: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657.
If we apply decimal parity to the Fibonacci sequence we find that there is a repeating series of 24 digits as seen here: (0), 1, 1, 2, 3, 5, 8, 4, 3, 7, 1, 8, 9, 8, 8, 7, 6, 4, 1, 5, 6, 2, 8, 1.
If we add these 24 digits up, we get the number 108.
0 + 1 + 1 + 2 + 3 + 5 + 8 + 4 + 3 + 7 + 1 + 8 + 9 + 8 + 8 + 7 + 6 + 4 + 1 + 5 + 6 + 2 + 8 + 1 = 108
Also, the 1.08 constant growth rate the nautilus uses to build its spiral shell involves the same pattern which repeats every 24 numbers in the Fibonacci sequence.
12. Numerology
Numbers can be seen as messengers. When the number 108 appears in our life it may mean that we are about to attain a long-desired goal or achievement. The number 108 consists of the individual
numbers 1, 0 and 8.
• The number 1 represents authority and leadership. It also represents new beginnings, taking initiative, and embarking on a new path of progress.
• The number 0 is a mysterious one in that it represents both nothingness and eternity, but when combined with other numbers it will amplify the vibrational influence of the other numbers.
• The number 8 is related to power and influence, abundance and achievement and the power to manifest desired outcomes. When this vibration becomes dominant in our lives we can expect to wield a
great amount of influence in the chosen area of interest or in our professional lives.
Another explanation is this: 1, 0, and 8: 1 stands for God or higher truth, 0 stands for emptiness or completeness in spiritual practice, and 8 stands for infinity or eternity.
The number 108 may also be thought of as a special number or a special combination that represents the number 9. Number 9 is considered to be a highly spiritual number and entrusts us with a high
spiritual energy associated with altruism and humanitarianism. So when number 9 “shines” through the number 108 it signals that we should share our gifts and abundance with those less fortunate than
us. The abundance we attract, we must remember, has come to us by aligning with the Divine Source. Conversely, when we align our thoughts with the Divine Source, our whole attitude changes and we
become generous, altruistic and benefactors of the world, naturally – all the while maintaining a sense of gratitude for all that we have.
The Divine Source keeps giving without expecting anything in return. We can best maintain a connection to that source by an attitude of gratitude and generosity – such an attitude will continue to
attract prosperity into our lives, materially and spiritually.
13. 108 degrees Fahrenheit
When the internal body temperature reaches 108 degrees Fahrenheit, the vital organs in the body will begin to shut down.
14. Christianity
• The division of Christian holiday from Soul’s day which is on November 2nd to 25th December has 54 days and 54 nights. The significance of the number lies in the fact that within those two dates,
light transformed into darkness a total of 108 times and the reverse also occurred the same number of times.
15. Jainism
• There 108 virtues in Jain tradition
Summary – Theology and Culture
• 108 beads on a mala
• 108 repetitions of a mantra
• 108 types of meditation
• 108 dance forms in Indian traditions
• 108 time frame in Rosicrucian cycles
• 108 gopis of Vrindavan in the Gaudiya Vaishnavism
• 108 defilements in some schools of Buddhism
• 108 earthly temptations
• 108 beads on a juzu (prayer beads) worn by Zen priests
• 108 questions for Buddha in the Lankavatra
• 108 previous incarnations remembered in modern Gnosticism
• 108 chances or lifetimes to rid the ego and transcend the materialistic world
• 108 earthly desires/lies/delusions in Buddhism
• 108 is maximum number of repetitions in Kriya Yoga
• 108 Sun Salutations in yoga
• 108 breaths in a day to reach enlightenment
• 108 energy lines or nadiis converging to form the heart chakra
• 108 sacred books in the holy writings of Tibet
• 108 epistemological doctrines in Hinduism tradition
• 108 virtues in Jain tradition
• 108 steps in temples mentioned in the Lankavatara Sutra
• 108 sins or 108 delusions of the mind in Tibetan Buddhism
• 108 pressure points in body according to Marma Adi and Ayurveda
Most of the beautiful findings above were picked and compiled from the following sites. Huge kudos to them! 🙏 | {"url":"https://aum.how/spirituality/number-108-the-sacred-number-that-binds-us-both-to-ourselves-and-to-the-universe/","timestamp":"2024-11-10T23:51:17Z","content_type":"text/html","content_length":"75173","record_id":"<urn:uuid:2fa44ae9-9ef5-4a9c-b8bd-f29bc7a458dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00620.warc.gz"} |
recapr: Two Event Mark-Recapture Experiment
Tools are provided for estimating, testing, and simulating abundance in a two-event (Petersen) mark-recapture experiment. Functions are given to calculate the Petersen, Chapman, and Bailey estimators
and associated variances. However, the principal utility is a set of functions to simulate random draws from these estimators, and use these to conduct hypothesis tests and power calculations.
Additionally, a set of functions are provided for generating confidence intervals via bootstrapping. Functions are also provided to test abundance estimator consistency under complete or partial
stratification, and to calculate stratified or partially stratified estimators. Functions are also provided to calculate recommended sample sizes. Referenced methods can be found in Arnason et al.
(1996) <ISSN:0706-6457>, Bailey (1951) <doi:10.2307/2332575>, Bailey (1952) <doi:10.2307/1913>, Chapman (1951) NAID:20001644490, Cohen (1988) ISBN:0-12-179060-6, Darroch (1961) <doi:10.2307/2332748>,
and Robson and Regier (1964) <ISSN:1548-8659>.
Version: 0.4.4
Imports: MASS
Suggests: testthat, knitr, rmarkdown
Published: 2021-09-08
DOI: 10.32614/CRAN.package.recapr
Author: Matt Tyers [aut, cre]
Maintainer: Matt Tyers <matttyersstat at gmail.com>
License: GPL-2
NeedsCompilation: no
Materials: README
CRAN checks: recapr results
Reference manual: recapr.pdf
Vignettes: recapr vignette
Package source: recapr_0.4.4.tar.gz
Windows binaries: r-devel: recapr_0.4.4.zip, r-release: recapr_0.4.4.zip, r-oldrel: recapr_0.4.4.zip
macOS binaries: r-release (arm64): recapr_0.4.4.tgz, r-oldrel (arm64): recapr_0.4.4.tgz, r-release (x86_64): recapr_0.4.4.tgz, r-oldrel (x86_64): recapr_0.4.4.tgz
Old sources: recapr archive
Please use the canonical form https://CRAN.R-project.org/package=recapr to link to this page. | {"url":"https://cran.ma.imperial.ac.uk/web/packages/recapr/index.html","timestamp":"2024-11-05T04:32:49Z","content_type":"text/html","content_length":"7548","record_id":"<urn:uuid:f33494ca-7f70-4ad3-9498-62b5e26c9864>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00812.warc.gz"} |
Grade 3 Lesson Overview
Back to Content Overview
Grade 3
Lesson Lesson Name Strand Content Outcome
151 Counting 1000-5000 Number Counting Order numbers on a number line, counting forward and backward in thousands, hundreds and tens. Order numbers from smallest to largest.
152 Symmetry Geometry 2D shapes Explore vertical and horizontal lines of symmetry. Identify images in the environment that are symmetrical.
153 Number Patterns (2) Patterns & Adding & Identify addition and subtraction number patterns. Explore the Fibonacci Sequence and follow a rule to create a number pattern. Identify the rule to
Algebra Subtracting create a number pattern.
154 L and ml Measurement Volume & Introduce the L and ml as units of measure. Understand that 1 L = 1000 ml. Determine if a vessel holds more than, less than or is equal to 1 L. Read
Capacity increments on measuring jugs to determine the amount of liquid.
155 Multiplication Operations Multiplying Revise multiplication strategies including repeated addition, grouping items together, and using the multiplication sign in a number sentence. Solve
(revision) multiplication word problems using the ‘create a picture’ strategy.
156 Counting 5000-10000 Number Counting Model a number using Base 10 equipment. Match the number to its name. Place numbers on a number line and count forward and backward in thousands,
hundreds and tens. Add +1, +10 +100 to a number.
157 Area (3) Measurement Area Count squares to measure area. Multiply the number of squares (length) by the number of squares (width). Multiply length x width to find the area in
square units.
158 Times Tables x2 x4 Operations Multiplying Explore the x2, x4 tables. Identify patterns in a hundred chart. Understand that 2 x 2 means two groups of two.
159 Money - Equivalent Operations Adding & Count collections of currency. Understand that the same amount can be presented in different combinations of currency. Match different currency
Amounts Subtracting combinations to an amount. Find the correct change from $50.
160 Comparing and Operations Fractions Understand the role of the top and bottom numbers in a fraction. Use the term ‘denominator’. Compare the sizes of fractions, including mixed numbers
Ordering Fractions up to 2. Order simple fractions and mixed numbers on a number line. Fractions used: 1/2, 1/3, 1/4, 1/5, 1/6, 1/8.
Lessons 161–200 coming soon! | {"url":"https://mathseeds.com/about/lesson-overview/5/","timestamp":"2024-11-11T20:31:28Z","content_type":"text/html","content_length":"17420","record_id":"<urn:uuid:ab687fc2-5798-46a6-bc5e-4af6d17f7c07>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00091.warc.gz"} |
(n+l) Rule-sequence of filling subshell
Posted 8 years ago by Saroj Bhatia
source : knowledge bin.org
(n+l) rule
• The Sequence of the filling of various sub-shells may also be determined from (n+l) rule
“The electron comes in the vacant sub-shell with lowest value of (n+l). If the value of (n+l) is same, then electron enters the sub-shell having lowest value of n.”
n=no. of shell
l= sub-shell s,p,d,f
values of l for s,p,d,f subshells is 0,1,2 & 3 respectively.
Question1. Which sub-shell is filled first 4f or 6s?
Ans. For 4f sub-shell, n=4, l=3
For 6s sub-shell n=6, l=0
The value of (n+l) is less for 6s, So 6s is filled first
Question2.Which sub-shell is filled first 4p or 5s?
Ans. for 4p sub-shell n=4, l=1
n+l= 4+1=5
for 5s sub-shell n=5, l=0
n+l= 5+0=5
Since the value of (n+l) is same for both subshells.
4p is filled first because value of n is smaller. | {"url":"https://chemistryonline.guru/nl-rule-subshell/","timestamp":"2024-11-08T04:33:11Z","content_type":"text/html","content_length":"35769","record_id":"<urn:uuid:7b799549-40c9-4453-9dfe-5e19536db172>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00726.warc.gz"} |
Tableau Multiple Lines Same Chart 2024 - Multiplication Chart Printable
Tableau Multiple Lines Same Chart
Tableau Multiple Lines Same Chart – The Multiplication Graph Collection may help your students visually symbolize different early math methods. It must be used as a teaching aid only and should not
be confused with the Multiplication Table, however. The chart comes in a few types: the coloured edition is helpful as soon as your pupil is paying attention on a single periods kitchen table at any
given time. The horizontal and vertical models are suitable for children who definitely are nevertheless studying their occasions dining tables. If you prefer, in addition to the colored version, you
can also purchase a blank multiplication chart. Tableau Multiple Lines Same Chart.
Multiples of 4 are 4 clear of the other person
The routine for deciding multiples of 4 is to put every single quantity to on its own and discover its other a number of. As an example, the first several multiples of 4 are: 16, 8, 12 and 4 and 20.
This trick works because all multiples of a number are even, and they are four away from each other on the multiplication chart line. Furthermore, multiples of four are even phone numbers by nature.
Multiples of 5 are even
If they end in or 5, You’ll find multiples of 5 on the multiplication chart line only. To put it differently, you can’t flourish a number by several to acquire a much number. If the number ends in
five or , you can only find a multiple of five! Thankfully, there are tips that make getting multiples of five even simpler, like while using multiplication graph line to obtain the multiple of 5.
Multiples of 8 are 8 far from one another
The pattern is apparent: all multiples of 8 are two-digit figures and all multiples of 4-digit numbers are two-digit numbers. Each and every array of 10 has a a number of of 8-10. Seven is even, so
all its multiples are two-digit phone numbers. Its style carries on up to 119. When you can see a number, make sure you locate a several of 8-10 in the first place.
Multiples of 12 are 12 from each other
The quantity a dozen has infinite multiples, and you will multiply any whole amount by it to produce any amount, including alone. All multiples of twelve are even amounts. Here are several
illustrations. David wants to acquire pencils and organizes them into 8-10 packages of twelve. He has 96 pens. David has one of each kind of pencil. In their place of work, he arranges them in the
multiplication chart series.
Multiples of 20 are 20 clear of the other
In the multiplication graph, multiples of twenty are all even. If you multiply one by another, then the multiple will be also even. Multiply both numbers by each other to find the factor if you have
more than one factor. If Oliver has 2000 notebooks, then he can group them equally, for example. The same applies to pencils and erasers. You can buy one out of a pack of about three or perhaps a
package of six.
Multiples of 30 are 30 from the other
In multiplication, the expression “factor match” identifies a small group of phone numbers that type an obvious variety. If the number ’30’ is written as a product of five and six, that number is 30
away from each other on a multiplication chart line, for example. The same is true for the amount from the variety ‘1’ to ’10’. Put simply, any amount could be published because the product of 1 and
by itself.
Multiples of 40 are 40 away from one another
You may know that there are multiples of 40 on a multiplication chart line, but do you know how to find them? To achieve this, you can include externally-in. As an example, 10 12 14 = 40, etc.
Likewise, 15 8-10 = 20. In this instance, the quantity in the kept of 10 is surely an even quantity, whilst the one particular on the right is surely an odd quantity.
Multiples of 50 are 50 from one another
Using the multiplication graph or chart series to determine the sum of two numbers, multiples of fifty are similar length aside about the multiplication chart. They have got two prime factors, 50 and
80. Typically, each word differs by 50. One other factor is 50 itself. Listed below are the most popular multiples of 50. A frequent numerous will be the a number of of the provided variety by 50.
Multiples of 100 are 100 far from one another
The following are the various figures which are multiples of 100. A good match is a numerous of one hundred, although a poor match can be a numerous of ten. These two kinds of phone numbers will vary
in a number of approaches. The first technique is to separate the amount by successive integers. In this instance, the number of multiples is just one, twenty, thirty and ten and 40.
Gallery of Tableau Multiple Lines Same Chart
Code Plot Multiple Lines Into The Same Chart Over Time From Pandas
Solved R Ggplot Multiple Regression Lines For Different Columns In
How You Can Draw Multi Line Graphs Easily On Tableau By Jerren Gan
Leave a Comment | {"url":"https://www.multiplicationchartprintable.com/tableau-multiple-lines-same-chart/","timestamp":"2024-11-02T11:52:51Z","content_type":"text/html","content_length":"55437","record_id":"<urn:uuid:3b8b38e1-df57-4d91-b0e5-461574245062>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00418.warc.gz"} |
JOBZ is CHARACTER*1
[in] JOBZ = 'N': Compute eigenvalues only;
= 'V': Compute eigenvalues and eigenvectors.
RANGE is CHARACTER*1
= 'A': all eigenvalues will be found.
[in] RANGE = 'V': all eigenvalues in the half-open interval (VL,VU]
will be found.
= 'I': the IL-th through IU-th eigenvalues will be found.
N is INTEGER
[in] N The order of the matrix. N >= 0.
D is REAL array, dimension (N)
[in,out] D On entry, the N diagonal elements of the tridiagonal matrix
T. On exit, D is overwritten.
E is REAL array, dimension (N)
On entry, the (N-1) subdiagonal elements of the tridiagonal
[in,out] E matrix T in elements 1 to N-1 of E. E(N) need not be set on
input, but is used internally as workspace.
On exit, E is overwritten.
[in] VL VL is REAL
VU is REAL
[in] VU If RANGE='V', the lower and upper bounds of the interval to
be searched for eigenvalues. VL < VU.
Not referenced if RANGE = 'A' or 'I'.
[in] IL IL is INTEGER
IU is INTEGER
If RANGE='I', the indices (in ascending order) of the
[in] IU smallest and largest eigenvalues to be returned.
1 <= IL <= IU <= N, if N > 0.
Not referenced if RANGE = 'A' or 'V'.
ABSTOL is REAL
[in] ABSTOL Unused. Was the absolute error tolerance for the
eigenvalues/eigenvectors in previous versions.
M is INTEGER
[out] M The total number of eigenvalues found. 0 <= M <= N.
If RANGE = 'A', M = N, and if RANGE = 'I', M = IU-IL+1.
W is REAL array, dimension (N)
[out] W The first M elements contain the selected eigenvalues in
ascending order.
Z is REAL array, dimension (LDZ, max(1,M) )
If JOBZ = 'V', and if INFO = 0, then the first M columns of Z
contain the orthonormal eigenvectors of the matrix T
corresponding to the selected eigenvalues, with the i-th
column of Z holding the eigenvector associated with W(i).
[out] Z If JOBZ = 'N', then Z is not referenced.
Note: the user must ensure that at least max(1,M) columns are
supplied in the array Z; if RANGE = 'V', the exact value of M
is not known in advance and an upper bound must be used.
Supplying N columns is always safe.
LDZ is INTEGER
[in] LDZ The leading dimension of the array Z. LDZ >= 1, and if
JOBZ = 'V', then LDZ >= max(1,N).
ISUPPZ is INTEGER ARRAY, dimension ( 2*max(1,M) )
The support of the eigenvectors in Z, i.e., the indices
indicating the nonzero elements in Z. The i-th computed eigenvector
[out] ISUPPZ is nonzero only in elements ISUPPZ( 2*i-1 ) through
ISUPPZ( 2*i ). This is relevant in the case when the matrix
is split. ISUPPZ is only accessed when JOBZ is 'V' and N > 0.
WORK is REAL array, dimension (LWORK)
[out] WORK On exit, if INFO = 0, WORK(1) returns the optimal
(and minimal) LWORK.
LWORK is INTEGER
The dimension of the array WORK. LWORK >= max(1,18*N)
if JOBZ = 'V', and LWORK >= max(1,12*N) if JOBZ = 'N'.
[in] LWORK If LWORK = -1, then a workspace query is assumed; the routine
only calculates the optimal size of the WORK array, returns
this value as the first entry of the WORK array, and no error
message related to LWORK is issued by XERBLA.
IWORK is INTEGER array, dimension (LIWORK)
[out] IWORK On exit, if INFO = 0, IWORK(1) returns the optimal LIWORK.
LIWORK is INTEGER
The dimension of the array IWORK. LIWORK >= max(1,10*N)
if the eigenvectors are desired, and LIWORK >= max(1,8*N)
if only the eigenvalues are to be computed.
[in] LIWORK If LIWORK = -1, then a workspace query is assumed; the
routine only calculates the optimal size of the IWORK array,
returns this value as the first entry of the IWORK array, and
no error message related to LIWORK is issued by XERBLA.
INFO is INTEGER
On exit, INFO
= 0: successful exit
< 0: if INFO = -i, the i-th argument had an illegal value
[out] INFO > 0: if INFO = 1X, internal error in SLARRE,
if INFO = 2X, internal error in SLARRV.
Here, the digit X = ABS( IINFO ) < 10, where IINFO is
the nonzero error code returned by SLARRE or
SLARRV, respectively. | {"url":"https://netlib.org/lapack/explore-html-3.4.2/d6/d0e/sstegr_8f.html","timestamp":"2024-11-09T17:15:59Z","content_type":"application/xhtml+xml","content_length":"20053","record_id":"<urn:uuid:2d0f2291-c210-4f06-b98f-80004995be7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00409.warc.gz"} |
Article 18
Scientific formula calculators
1. Introduction
What is scientific calculator? For me, it is a tool for estimation of engineering, mathematical, physical, mechanical ideas in digits. Modern science is complicated and “estimation” usually a
formula, taking several lines on paper. It would be good to have a tool to enter that estimation, try it, modify and try again and again till the point it is clear if idea is good or not. Namely for
that style of work, which suits the scientific quest the best were developed Librow calculators.
2. Work and style
So, what we need to have proper tool for scientific work? First of all, tool should be easy to learn — of course, it is good to know C++ or Fortran, but efforts spent for learning programming and
time spent for coding and compilation are incompatible with the goal of fast idea estimation. So, something simple, like typing formulas in familiar book notation or even copy-paste would hit the
point. And second, tool should be flexible enough allowing easy modification of the input, so that switch from sine to cosine or argument change could be done without retyping the whole expression.
Thus, Librow scientific calculators are expression ones — that is, they are able to evaluate the whole expression. And they have text editor — indeed, namely that — real text editor, where one can do
anything: type expressions and evaluate them, go to any formula, edit it and reevaluate, enter comments — that is treat input just like text, all input is always available for editing, reevaluation,
copy-paste and whatever one can do with text. And even more — one can evaluate or reevaluate all the input in one click, running all entered formulae as script — scientific calculators are
Because the input is just text, you can save it to disk and open in any other text processing environment to read it and edit. And because it is unicode text you can name variables and write your
remarks between formulas in any language. That is you are unlimited in how you are creating your input and how you are using it afterwards.
Librow scientific calculators support different notations: western, where tangent is tan and eastern, where it is tg, as well notation, accepted in programming world, where arctangent is atan. As
scientific calculators they, of course, support scientific number notation, where billion is 1e9 — so-called exponential number notation.
3. Memory
We need memory for storing intermediate results. Librow scientific calculators allow allocating unlimited number of memory cells. And you can give your results human names like “sound speed”,
“chamber volume”, “beam intensity” and so on. And you can save your results to disk and load them back into calculator memory. One way, how you can use this feature is creating pool of constants you
are using frequently and loading them into memory before work.
4. Interface
When it comes down to interface there is no single opinion what it should be like. Ones like simple and system look, others prefer to have flashy design and all windows open to see everything. We
accept idea of uniformity but as well agree a decimal point of one pixel size in high-resolution screen is not worthy straining your eyes. And thus, our scientific calculators have customizable
graphical interface. If you prefer colors and all windows open you can have configuration like in the picture below:
Fig. 1. Scientific calculator with “glamour” interface.
If you used to classical look, you can change color to system and hide memory spy window like:
Fig. 2. Scientific calculator with “classic” interface.
If you tend to minimalistic design, your calculator can look like text processor:
Fig. 3. Scientific calculator with “minimalistic” interface.
If you are a professional working from keyboard with ten fingers and like to have calculator keypad as reminder (keeping the keypad hidden and displaying it on occasion with Ctrl+K shortcut) you can
find the following layout serves your requirements the best:
Fig. 4. Scientific calculator with “professional” interface.
All calculator windows are smart and remember their size and position, so, next time you run the calculator you will find them where you left them placed. For those, who prefer working from keyboard
— you can show and hide windows with shortcuts.
5. Print
Sometimes we want to have our job printed — either for review or for controlling the work flow. Librow formula calculators can format the calculations and print them properly, so that the hard copies
can be punched and filed. As well, you can control the text appearance on paper and can add the information what and when was printed to every page.
Fig. 5. Print control of the scientific calculator.
6. Learn in two clicks
To back up easy-to-learn approach we programmed context sensitive Help available just in two clicks for any calculator item. If you are interested in what some button or menu item is doing, click the
question mark button in toolbar and then click the item of interest — you will be taken to the help page with corresponding topic.
Fig. 6. Scientific calculator help.
For keypad function buttons you will be taken to the page of function handbook, where you can learn function properties, recall identities and see function graph.
Fig. 7. Scientific calculator handbook.
7. Conclusion
In total, we have clear and flexible computation environment that gives access to everything the computer has for us: disk for storage, memory for constants and variables, graphical interface for
comfort, printing for hard copies and central processor for computing.
Remember, I was taught the “PC is a powerful calculator” and first time I sat in front of the computer I expected to type in
and get the result, and was greatly surprised it “did not understand”! Well, now think, the gap is closed.
8. Versions
We addmit that choice is a good thing and propose two versions of scientific formula calculators.
Base free version supports power, exponential, logarithmic, all trigonometric and hyperbolic functions along with their inverses. You can download it here:
Free version
Professional version goes further — first, it has extended function set: it supports selected special functions — gamma function and Bessel functions.
Gamma function is known for its fast grow, which leads to overflow for quite modest values of the argument. So, in calculations often used logarithm of the function, which does not have this overflow
issue. Gamma function programmed to be context sensitive — as soon as it finds itself inside natural logarithm direct computation of the logarithm of the function is utilized to get the result, so
that computation of the gamma function itself is skipped. The same way behaves the factorial — in professional calculator version it is generalized to gamma function.
Professional version supports complex algebra — all functions here are of the complex argument. Extra functionality for work with complex numbers like real and imaginary parts, argument and conjugate
is also included. This version you can download here:
All calculator programs are written in C++ and are fast, reliable and light.
Write to the author of the article — Sergey Chernenko | {"url":"http://librow.com/articles/article-18","timestamp":"2024-11-08T09:03:13Z","content_type":"application/xhtml+xml","content_length":"30478","record_id":"<urn:uuid:c4610a79-5c09-420a-9158-cc6b966c5e41>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00102.warc.gz"} |
Modelling Differently Defined Dominant Stand Diameters of Monospecific Forest Plantations
Copyright © 2017 by Croatian Journal of Forest Engineering
doi: https://doi.org/10.5552/crojfe.2025.2505
volume: issue, issue:
pp: 16
□ Stankova Tatiana ^
□ Ogana Friday N.
□ Dimitrova Proletka
Article category:
Original scientific paper
Scots pine plantations, Black pine plantations, black locust plantations, hybrid black poplar plantations, quadratic mean diameter, height-diameter model, goodness-of-fit, regression model
Quadratic mean diameter is a widely used stand parameter present in the stand inventory summaries, while the top stand diameter is rarely reported in the literature, mainly in relation to dominant
stand height. Since the dominant stand height is usually determined from the tree height-diameter curve of the stand, it is important how the top tree assemblage, used to estimate dominant diameter,
is defined. The main objective of our study was to assess the bias between differently defined dominant diameter estimates for monospecific plantations of various species, to model the dominant
diameter as a function of quadratic mean diameter and other relevant stand variables, and to estimate its goodness-of-fit in predicting dominant diameter and dominant height.
We used data records gathered in sample plots in monospecific plantations of four tree species: Scots pine, Black pine, black locust and hybrid black poplar. We calculated the quadratic and
arithmetic mean diameters of the 20% thickest trees in the plots, and the quadratic and arithmetic mean diameters of the trees, whose number corresponded to the 100 thickest trees per hectare. For
each dataset, we analyzed the range and the distribution of the relative deviations calculated for each pair of dominant diameter estimates. For the Black pine plantations, regression models were
developed for the two dominant diameter definitions, whose values differed most. Their goodness-of-fit was assessed from model efficiency and error statistics. The same model derivation procedure,
applied to the Scots pine data, was followed by substitution of the predicted dominant diameter into a height-diameter model to assess the goodness-of-fit of the dominant height predictions.
The differences between the arithmetic and quadratic means, estimated from the same subsample of trees, did not exceed 2% in all cases. However, dominant stand diameters calculated as averages of
differently defined largest tree collectives differed by as much as 35%. Regardless of its definition, the dominant stand diameter was adequately predicted by a function of the quadratic mean
diameter alone or considering stand basal area as a second predictor. The models showed very good accuracy of model efficiency above 0.92, average absolute error below 8%, with 90% of the relative
errors less than 15%. The predicted dominant diameter value can be used in a height-diameter model to estimate with confidence the dominant stand height of a monospecific forest plantation, allowing
the forecast of the stand attributes based on dominant trees when only average stand variables are known.
Modelling Differently Defined Dominant Stand Diameters of Monospecific Forest Plantations
Tatiana Stankova, Friday N. Ogana, Proletka Dimitrova
Quadratic mean diameter is a widely used stand parameter present in the stand inventory summaries, while the top stand diameter is rarely reported in the literature, mainly in relation to dominant
stand height. Since the dominant stand height is usually determined from the tree height-diameter curve of the stand, it is important how the top tree assemblage, used to estimate dominant diameter,
is defined. The main objective of our study was to assess the bias between differently defined dominant diameter estimates for monospecific plantations of various species, to model the dominant
diameter as a function of quadratic mean diameter and other relevant stand variables, and to estimate its goodness-of-fit in predicting dominant diameter and dominant height.
We used data records gathered in sample plots in monospecific plantations of four tree species: Scots pine, Black pine, black locust and hybrid black poplar. We calculated the quadratic and
arithmetic mean diameters of the 20% thickest trees in the plots, and the quadratic and arithmetic mean diameters of the trees, whose number corresponded to the 100 thickest trees per hectare. For
each dataset, we analyzed the range and the distribution of the relative deviations calculated for each pair of dominant diameter estimates. For the Black pine plantations, regression models were
developed for the two dominant diameter definitions, whose values differed most. Their goodness-of-fit was assessed from model efficiency and error statistics. The same model derivation procedure,
applied to the Scots pine data, was followed by substitution of the predicted dominant diameter into a height-diameter model to assess the goodness-of-fit of the dominant height predictions.
The differences between the arithmetic and quadratic means, estimated from the same subsample of trees, did not exceed 2% in all cases. However, dominant stand diameters calculated as averages of
differently defined largest tree collectives differed by as much as 35%. Regardless of its definition, the dominant stand diameter was adequately predicted by a function of the quadratic mean
diameter alone or considering stand basal area as a second predictor. The models showed very good accuracy of model efficiency above 0.92, average absolute error below 8%, with 90% of the relative
errors less than 15%. The predicted dominant diameter value can be used in a height-diameter model to estimate with confidence the dominant stand height of a monospecific forest plantation, allowing
the forecast of the stand attributes based on dominant trees when only average stand variables are known.
Keywords: Scots pine plantations, Black pine plantations, black locust plantations, hybrid black poplar plantations, quadratic mean diameter, height-diameter model, goodness-of-fit, regression model
1. Introduction
Curtis and Marshall (2000) described the quadratic mean diameter of the stand (Dq) as a broadly used stand statistic that is present in practically all yield tables, stand inventory descriptions and
simulator outputs. It is preferred to the arithmetic mean diameter () because it represents the average basal area tree and therefore is closely related to the mean tree volume, particularly in
regular, even-aged stands (Curtis and Marshall 2000). At the same time, arithmetic and quadratic mean diameters are connected by the formula:
var(D)is the variance of the tree diameters used to calculate the means.
From Eq. 1 follows that quadratic mean diameter is always bigger than the arithmetic mean, but in homogenous stands, where the individual tree diameter values fluctuate within a narrow range, the
variance and, consequently, the difference between the two means will not be substantial (Curtis and Marshall 2000, Ducey and Kershaw 2023). Other representations of the average stand diameter are
scarcely found in the literature. Van Laar and Akça (2007) mentioned the basal area central diameter as another used stand statistics, while the basal area-weighted mean diameter is popular in
Finland (Pukkala and Miina 2005, Siipilehto and Mehtätalo 2013, Ruotsalainen et al. 2021).
Ducey and Kershaw (2023) generalized that another stand diameter measure – the top stand diameter, estimated as arithmetic or quadratic mean of the thickest 100 trees·ha-1 – is occasionally reported
in the scientific literature, usually with little emphasis. Its use is mostly pronounced in height-diameter modelling (Tomé 1988, Pienaar et al. 1990, Cañadas et al. 1999, Cimini et al. 2011) and in
studies on the response of dominant trees to, e. g. spacing (Gizachew et al. 2012), damage by extreme events (Albrecht et al. 2015), etc. The study by Ducey and Kershaw (2023) suggested that the top
stand diameter could be the most substantial univariate predictor of many important stand parameters. The authors recommended that it should be considered a »standard variable« when characterizing
the forest stand conditions and a valuable predictor when specifying more elaborate forest models.
Although the mean height of the 100 tallest trees per hectare is always bigger than the mean height of the 100 thickest trees (Van Laar and Akça 2007), the fraction of the largest trees that are
considered top height trees is usually that at the right tail (large end) of the tree number-diameter distribution (Pretzsch 2009). According to Pretzsch (2009), in the standard investigations, the
mean and top heights are always extracted from the tree height-diameter curve of the stand. Therefore, it is very important how the top tree collective used to estimate dominant diameter, is defined:
the 100 thickest trees per hectare (the thickest tree in a 100 m2 plot, but the 10 thickest trees in a 0.1 ha plot) or 20% of the thickest trees in the area (the 1000 thickest trees per hectare at
density 5000 trees·ha-1, but the 200 thickest trees·ha-1 at density 1000 trees·ha-1). In addition, the average dominant diameter can also be assessed as an arithmetic or a quadratic mean. Sharma et
al. (2002) compared the stand top height estimates calculated for loblolly pine plantations in 7 different ways. The authors used data from both thinned and unthinned stands that have been collected
in permanent sample plots for 15 years and reported that all differently defined top heights differed significantly from each other, with a few exceptions registered.
While the mean stand height is required to estimate the stand volume in the forest inventories, the dominant stand height is regarded as a quantity that is more appropriate in site quality
assessment, because it is less easily affected by thinnings (Van Laar and Akça 2007, Tarmu et al. 2020). Therefore, the estimation of the dominant stand height and consequently, dominant stand
diameter are important. The inventory summary of each forest stand in Bulgaria contains information on quadratic mean diameter (cm) and mean stand height (m), but no data on stand-level attributes
based on the fraction of the top trees are available. The establishment of a dominant-quadratic mean diameter relationship would be of practical importance for the estimation of both dominant stand
height and diameter if sufficient accuracy were assured. The monospecific forest plantations have relatively homogenous spatial tree dispersion, of usually unimodal diameter distribution pattern
suggesting that the high accuracy of such a relationship would be an achievable goal in this case. The main objectives of our study were:
to assess the presence of bias between differently defined dominant diameter values for monospecific plantations of various species and its magnitude
to model the dominant-quadratic mean diameter relationship, considering also multiple regression functional forms and to estimate the goodness-of-fit of the predictions
to test the accuracy of the dominant stand height predictions from height-diameter model, based on dominant diameter values predicted from a relationship to the quadratic mean diameter.
2. Materials and Methods
To achieve the study objectives, we used data records gathered in monospecific plantations of four tree species: Scots pine (Pinus sylvestris L.), Black pine (Pinus nigra L.), black locust (Robinia
pseudoacacia L.) and hybrid black poplar (Populus x euramericana (Dode) Guinier). Data collection took place in 153 sample plots in Scots pine plantations and 143 plots in Black pine plantations
(Table 1), which were of rectangular or circular form and of different sizes depending on the density and homogeneity of the stands and the purpose of plot establishment. The plots were installed
throughout the area of the distribution of these plantations, with the primary goal to encompass the range of growth stages, densities, and sites, specific for these monospecific stand types in
Bulgaria. Data records from the broadleaf species were obtained in 15 plots installed in industrial plantations of juvenile-age hybrid black poplar and 25 plots in black locust industrial and progeny
test plantations. Two measurements perpendicular to each other of the tree trunk diameter at breast height were taken with a caliper and used to calculate the breast-height diameter of each tree. The
breast-height diameters of all trees in the plots were determined and their number (PN, trees) was counted. These data were used to additionally calculate other stand variables such as density (N,
trees·ha-1), basal area (G, m2·ha-1), quadratic mean diameter (Dq, cm), number (n%20, trees), quadratic (D020q, cm) and arithmetic (D020a, cm) mean diameters of the 20% thickest trees in the plots,
the number (n100, trees) of the trees in the plots, corresponding to the 100 thickest trees per hectare and their respective quadratic (D0100q, cm) and arithmetic (D0100a, cm) mean diameters (Table
1). Information on tree age (years) was obtained from the inventory descriptions of the stands. A validation dataset from 100 sample plots in Scots pine plantations was considered to address the
third study objective (Table 1). It corresponds to the dataset denoted as »Validation Data Set 3« in the study by Stankova et al. (2022).
Table 1 Characteristics of data sets used in the analyses
Variable * Scots pine (nSP = 153) Scots pine (nSP = 100) Black pine (nSP = 143) Hybrid black poplar (nSP = 15) Black locust (nSP = 25)
PS, m2 290.2 (60–1269) 265 (85–1042) 249.2 (54.9–1358.4) 1412 (1012–2079) 555.1 (165–720)
PN, trees 57 (27–165) – 48 (20–239) 49 (37–61) 97 (40–136)
Age, years 37 (10–80) – 45 (12–85) 3 (1–5) 16 (2–20)
G, m2·ha-1 42.18 (5.54–72.27) 44.29 (6.10- 72.25) 48.72 (3.46–110.54) 3.47 (0.09–13.66) 20.65 (2.71–32.23)
N, trees·ha-1 2983 (483–12200) 2854 (825–8210) 2800 (503–8700) 356 (244–543) 1856 (1299–3576)
Dq, cm 16.0 (3.6–35.3) 15.7 (2.5–32.8) 17.3 (3.5–35.3) 9.2 (2.0–17.9) 11.7 (3.1–15.7)
n100, trees 3 (1–13) – 3 (1–14) 14 (10–21) 5 (2–7)
n%20, trees 11 (5–33) – 10 (4–48) 10 (7–12) 19 (8–27)
D0100a, cm 22.7 (7.0–47.3) – 24.4 (7.0–48.7) 11.0 (2.6–21.5) 18.7 (5.1–25.7)
D020a, cm 20.6 (5.6–45.2) – 22.1 (5.6–44.7) 11.3 (2.7–21.4) 16.5 (4.4–22.8)
D0100q, cm 22.7 (7.0–47.6) 22.3 (7.0–42.3) 24.4 (7.0 – 48.9) 11.0 (2.6–21.5) 18.8 (5.1–25.7)
D020q, cm 20.6 (5.7 – 45.6) 22.2 (5.7 – 45.0) 11.4 (2.7–21.4) 16.6 (4.4–22.9)
H0100q, m – 16.6 (4.0–27.2) – – –
Abbreviations: nSP – number of sample plots, PS – plot size (m2), PN – plot tree number (trees), Dq – quadratic mean diameter (cm), G – stand basal area (m2·ha-1), N – stand density (trees·ha-1),
n%20 – 20% of the thickest trees in the plot (trees), n100 – number of trees in the plot corresponding to the 100 thickest trees per hectare (trees), D0100a – dominant diameter, estimated as the
arithmetic mean of the diameters of the n100 thickest trees in the plot (cm), D020a – dominant diameter, estimated as the arithmetic mean of the diameters of the n%20 thickest trees in the plot
(cm), D0100q – dominant diameter, estimated as the quadratic mean of the diameters of the n100 thickest trees in the plot (cm), D020q – dominant diameter, estimated as the quadratic mean of the
diameters of the n%20 thickest trees in the plot (cm), H0100q – dominant stand height (m) estimated as the Lorey's mean of the heights of the n100 thickest trees in the plot (m).
* Average variable value is shown with minimum – maximum in parentheses
To address the first research objective, all fotour main datasets were used. For each of them, the relative deviations (Difi, %) of all 6 pairs of dominant diameter estimates were calculated and
their range (minimum–maximum) and distribution (25th, 50th, 75th percentiles) were analyzed:
Dif1=100(D0100a – D0100q)/ D0100a(2)
Dif2=100(D0100a – D020q)/ D0100a(3)
Dif3=100(D0100a – D020q)/ D0100a(4)
Dif4=100(D0100q – D0100a)/ D0100q(5)
Dif5=100(D0100q – D0100q)/ D0100q(6)
Dif6=100(D020a – D0100q)/ D020a7)
In addition, F-test was performed to examine the hypothesis that the linear regression relating the values of the compared dominant diameter estimates has a slope equal to 1 and an intercept equal to
The datasets from the coniferous plantations that were representative of the variety of these stands in Bulgaria were used to achieve the second (with the Black pine dataset) and the third (with the
Scots pine dataset) research objectives. To attain the second study objective, the two dominant diameter definitions were selected, whose values differed at most according to the results obtained for
the Black pine plantations in Objective 1 and regression models were developed for their prediction. There is a clear correlation between the dominant and the quadratic mean diameter (Dq, cm); stand
density (N, trees·ha-1) and plot size (PS, m2) affect the number of trees, used to estimate the dominant diameter according to the different definitions and, consequently, affect the dominant
diameter magnitude. Stand basal area (G, m2·ha-1) is also viewed as a measure of the stocking rate, and diameter growth is age-related (Age, years). Therefore, all these variables were examined as
predictors of the dominant diameter by stepwise multiple regression analysis and the condition number test statistics were used to control collinearity, with a reference value of a maximum of 30. The
significant predictors were selected according to the Percent Relative Standard Error statistics (PRSE% = 100·Standard Error(Parameter)/|Parameter|) that must attain values below 25%. The
Breusch-Pagan analytical test and the plot of residuals against predicted values were used to check the assumption for homoscedastic residual distribution. When heteroscedasticity of errors was
identified, the model was refitted by generalized linear least squares method. To check the assumption of normality of errors, both analytical (Anderson-Darling test of normality, test for kurtosis
of Anscombe-Glynn (1983) and test for skewness of D'Agostino (1970)) and graphical (Quantile-Quantile plot) tests were used. The presence of bias was assessed according to a t-test for zero mean
error and by F-test to examine if the observed and predicted values are related by a linear regression of slope equal to 1 and a zero intercept. When the model derived was a linear regression of the
variables quadratic mean diameter and density, an attempt to express density as a function of Dq was carried out, and its predicted value was substituted in the final regression equation, as
suggested by Van Laar and Akça (2007). The regression statistics adjusted coefficient of determination (Adj. R2) and residual standard error (RMSE, cm) were computed for the models that proved
adequate and their predictability was further assessed according to the parameters model efficiency (ME), the quartiles of the relative errors (RE%) distribution and the mean of the absolute values
of relative errors (MARE%) that characterize the spread and the size of prediction errors as compared to the observed dominant diameters:
yi measured dominant diameter value of the i-th sample plot, cm
predicted dominant diameter value of the i-th sample plot, cm
mean observed dominant diameter value, cm
ARE%absolute value of the relative error, %
To achieve the third study objective, the described model derivation procedure was applied to the variable D0100q of the Scots pine plantations dataset in this study. The selected adequate models
were then used to predict the dominant diameter values of the validation data set (Table 1), which were substituted afterwards as predictors of the dominant height according to the established
relationship (Stankova et al. 2022):
H0100qdominant stand height, m
Hmmean stand height, m
dominant diameter value, predicted from the derived regression models, cm
The accuracy of the dominant stand height predictions was assessed according to the test statistics used for verification of the predicted dominant diameters of Black pine plantations in Objective 2
(Eqs. 8–10). To examine for bias, we used simultaneous F-test for slope equal to 1 and zero intercept of the linear regression that relates the observed and predicted height values.
The data management, graphical representation and statistical analyses were performed using the packages of R software environment: tidyverse (Wickham et al. 2019), dplyr (Wickham et al. 2023b),
readxl (Wickham and Bryan 2023), plyr (Wickham 2011), purrr (Wickham and Henry 2023), tidyr (Wickham et al. 2023a), psych (Revelle 2023), janitor (Firke 2023), car (Fox and Weisberg 2019), ggplot2
(Wickham 2016), ggprubr (Kassambara 2023), ggplotify (Yu 2021), patchwork (Pedersen 2022), nlme (Pinheiro and Bates 2000, 2023), olsrr (Hebbali 2020), relaimpo (Grömping 2006), heplots (Friendly et
al. 2022), lattice (Sarkar 2008), corrplot (Wei and Simko 2021), effectsize (Ben-Shachar et al. 2020), moments (Komsta and Novomestky 2022), nortest (Gross and Ligges 2015), lmtest (Zeileis and
Hothorn 2002).
3. Results
3.1 Objective 1: Estimation of Bias and its Magnitude
The differences between arithmetic and quadratic means, estimated from the same subsamples of trees (Dif1 and Dif6) did not exceed 2% in all cases (Table 2), the quadratic mean surpassing the
arithmetic as suggested by Eq. 1. The largest deviations, up to nearly 35%, for the coniferous plantations were recorded between the arithmetic means based on differing subsamples of trees (n100 vs
n%20) – Dif2. For the smaller datasets from the broadleaves, the largest deviations, those between D0100a and D020q (Dif3), were less than 15% (Table 2). The deviations of bigger magnitude (Dif2 to
Dif5) decreased with the diameter increase for the coniferous plantations, but remained constant for the juvenile broadleaf plantations (Figs. 1b, 2b, 3b, 4b). Statistically significant bias, as
indicated by the simultaneous F-test for slope equal to 1 and zero intercept of the linear regression relating the compared values, was found in practically all cases (Figs. 1a, 2a, 3a, 4a).
Significant deviations from the reference values of both the slope and the intercept, when examined separately, were unconditionally proven for the two larger datasets (data not shown).
Table 2 Relative deviations (%) between different estimates of dominant diameter
Species Percentile Dif1 Dif2 Dif3 Dif4 Dif5 Dif6
0 –1.31 0 –0.39 0.39 0 –1.79
25 0 6.47 6.19 6.47 6.25 –0.63
Scots pine 50 0 9.33 8.95 9.43 9.19 –0.40
75 0 13.33 13.30 13.33 13.33 0
100 0 32.22 31.11 32.22 31.11 0
0 –0.98 0 –0.47 0 0 –1.79
25 0 5.30 5.10 5.48 5.28 –0.60
Black pine 50 0 10.00 10.00 10.00 10.00 –0.31
75 0 14.32 13.96 14.40 14.00 0
100 0 34.74 33.68 34.74 33.68 0
0 –1.56 –10.16 –10.94 –9.37 –9.43 –1.75
25 0 –4.12 –4.12 –4.12 –4.12 0
Hybrid black
50 0 –3.03 –3.33 –3.03 –3.33 0
75 0 –1.34 –1.34 –1.15 –1.15 0
100 0 0.47 0.47 0.47 0.47 0
0 –1.61 6.77 6.25 7.25 6.74 –1.01
25 –0.43 10.00 9.68 10.48 10.00 –0.55
Black locust 50 0 12.33 11.74 12.61 11.93 –0.50
75 0 12.99 12.60 12.99 12.61 –0.45
100 0 19.44 19.44 19.44 19.44 0
Fig. 1 Scots pine plantations
Fig. 2 Black pine plantations
Fig. 3 Hybrid black poplar plantations
Fig. 4 Black locust plantations
3.2 Objective 2: Dominant – Quadratic Mean Diameter Relationships for Black Pine Plantations – Goodness-of-fit in Dominant Diameter Predictions
There were two maximum differences of the same magnitude: D020a–D0100a and D020a–D0100q estimated for the Black pine plantations, and we chose to model the dominant diameter of both the arithmetic
(D020a) and quadratic (D0100q) mean. Plantation age and plot size were not kept as significant predictors of either of the dominant diameters, while basal area was the second significant independent
variable, beside the quadratic mean diameter, selected through the stepwise regression analysis (Table 3). Stand density (trees·ha-1) could have been included as a predictor after expressing stand
basal area as the product of density and quadratic mean diameter. An exponential power relationship of stand density to quadratic mean diameter was tested and the exponential model showed higher
accuracy and predictability. It was approximated in a log-transformed functional form and a ratio correction coefficient (Snowdon 1991) was applied to the back-transformed dependent variable values
to correct for bias (Table 3). By including the predicted density value as an independent variable, a second regression model was derived for each of the dominant diameter definitions (Table 3). To
cope with the manifested residual heteroscedasticity, all four adequate models developed in fulfillment of Objective 2 were fitted through generalized least squares method employing variance
functions of different forms. Three of the regression equations derived passed through the origin, all parameters assessed showed substantial stability (PRSE%<15%) and all fitted models showed
relatively small residual mean squared error (RMSE (D020a)<1.2cm, RMSE (D0100q)<2.2cm) (Table 3).
Considering the outcome of Objective 1, which showed neglectfully small differences between arithmetic and quadratic means based on the same tree subsample, the goodness-of-fit in predictions of all
four differently defined dominant diameters were examined (Table 4). The regression models derived for the dominant diameter, based on the 20% of the largest trees (D020a, D020q) revealed higher
accuracy than those for D0100a and D0100q. The goodness-of-fit of the predictions estimated for the quadratic and arithmetic mean by the same model were practically equivalent in all cases (Table 4).
The regression models that included basal area as a second predictor showed slightly higher predictive potential, as indicated by the range and the magnitude of the relative errors and by the model
efficiency assessed. Although manifestation of bias was registered for one of the regression equations, all models had very good accuracy, with model efficiency above 0.92, average absolute error
below 8% (Table 4), with 90% of the relative errors less than 15% (5th percentile ≥ –12%, 95th percentile ≤18%).
Table 3 Dominant diameter prediction functions for Black pine plantations
ln(N) = a0 + a1Dq (LS)
Adj. R2 RMSE CF Regression Parameters
0.848 0.259 1.042 a0 a1
Estimate 9.313 –0.091
SE 0.060 0.003
PRSE% 0.65 3.55
D0100q = a0 + a1Dq + a2G (GLS)
Adj. R2 Variance function Regression Parameters
0.951 (GθDqη)2 a0 a1 a2
Variance Function Parameters Estimate 2.852 1.034 0.075
RMSE θ η SE 0.387 0.022 0.010
1.689 0.587 –0.244 PRSE% 13.56 2.10 13.39
D0100q = a1Dq + a2 N(Dq)Dq2 (GLS)
Adj. R2 Variance function Regression Parameters
0.922 (θ1+Dqη1)2 (θ2+(Dq2 N(Dq))η2)2 a1 a2
Variance Function Parameters Estimate 0.957 1.19x10-5
RMSE θ1 η1 θ2 η2 SE 0.038 1.09x10-6
2.149 9.78x10-8 –0.340 1.51x10-21 7.487 PRSE% 4.00 9.15
D020a = a1Dq + a2G (GLS)
Adj. R2 Variance function Regression Parameters
0.986 (Dqη1)2 (θ2+Gη2)2 a1 a2
Variance Function Parameters Estimate 1.163 0.039
RMSE η1 θ2 η2 SE 0.014 0.004
0.947 0.703 0.023 –1.010 PRSE% 1.162 10.964
D020a = a1Dq + a2 N(Dq)Dq2 (GLS)
Adj. R2 Variance function Regression Parameters
0.984 (θ1+ (Dq2 N(Dq))η1)2 exp(2η2Dq)2 a1 a2
Variance Function Parameters Estimate 1.109 4.40x10-6
RMSE θ1 η1 η2 SE 0.021 5.60x10-7
1.018 1.46x10-17 -6.099 9.49x10-3 PRSE% 1.890 12.780
Abbreviations: GLS – generalized linear squares, LS – linear squares, Dq – quadratic mean diameter (cm), G – stand basal area (m2·ha-1), N – stand density (trees·ha-1), D020a – dominant diameter,
estimated as the arithmetic mean of diameters of e n%20 (20% of the thickest trees in the plot) thickest trees in the plot (cm), D0100q – dominant diameter, estimated as the quadratic mean of
diameters of n100 (number of trees in the plot corresponding to 100 thickest trees per hectare) thickest trees in the plot (cm), CF – ratio correction coefficient, Adj. R2 – adjusted coefficient of
determination, RMSE – residual standard error (cm), a0, a1, a2, θ, η, θ1, η1, θ2, η2 – model parameters, SE – standard error, PRSE% –- Parameter Relative Standard Error (%)
Table 4 Validation statistics of dominant diameter prediction functions for Black pine plantations
* Validated functions ME MARE% P0 P25 P50 P75 P100 ** F stat. (df = 2, n)
D0100q = 2.852+ 1.034Dq + 0.075G 0.952 6.0 –16% –6% –1% 4% 23% 0.020NS
D0100q = 0.957Dq + 1.19x10-5 N(Dq)Dq2 0.922 7.1 –20% –6% 0% 6% 36% 1.626NS
D0100a = 2.852+ 1.034Dq + 0.075G 0.952 6.0 –16% –6% –1% 4% 23% 0.018NS
D0100a = 0.957Dq + 1.19x10-5 N(Dq)Dq2 0.923 7.1 –20% –6% 0% 6% 36% 1.456NS
D020a = 1.163Dq + 0.039G 0.986 3.4 –8% –2% 0% 3% 25% 3.011'
D020a = 1.109Dq + 4.40x10-6N(Dq)Dq2 0.984 3.6 –9% –3% 0% 3% 23% 0.839NS
D020q = 1.163Dq + 0.039G 0.985 3.5 –8% –2% 0% 3% 26% 3.752*
D020q = 1.109Dq + 4.40x10-6N(Dq)Dq2 0.982 3.7 –9% –2% 0% 3% 24% 2.002NS
Abbreviations: Dq – quadratic mean diameter (cm), G – stand basal area (m2·ha-1), N – stand density (trees·ha-1), D020a – dominant diameter, estimated as the arithmetic mean of diameters of n%20 (20%
of the thickest trees in the plot) thickest trees in the plot (cm), D0100q – dominant diameter, estimated as the quadratic mean of diameters of n100 (number of trees in the plot corresponding to 100
thickest trees per hectare) thickest trees in the plot (cm), D020q – dominant diameter, estimated as the quadratic mean of diameters of n%20 thickest trees in the plot (cm), D0100a – dominant
diameter, estimated as the arithmetic mean of diameters of n100 thickest trees in the plot (cm), ME – model efficiency, MARE% – average of absolute values of relative errors ARE%, P0, P25, P50, P75,
P100 – 0th, 25th, 50th, 75th, and 100th percentile of relative errors RE%.
* N(Dq)=1.042exp(9.313–0.091Dq)
** F-statistics and its significance with (2, n) degrees of freedom of the simultaneous test for slope equal to 1 and zero intercept of the linear regression relating the observed and predicted
values. Levels of significance: *** – P<0.001, ** – P<0.01, * – P<0.05, ‘– P<0.1, NS – P>0.1
3.3 Objective 3: Goodness-of-fit of the Dominant Height Predictions, Based on Dominant – Quadratic Mean Diameter Relationships: Estimates for Scots Pine Plantations
Two adequate regression equations based on the quadratic mean diameter alone and two two-predictor models, including either basal area or stand density as a second independent variable, were
developed to predict the D0100q estimates for the Scots pine plantations (Table 5). The coefficients of determination assessed ranged between 0.94 and 0.955 and Root Mean Squared Errors between 1.72
and 1.99cm were estimated, with Percent Relative Standard Errors of the regression parameters being of magnitude below 17%.
When the predicted values of the dominant diameter were substituted into the height-diameter model with the validation data, all four regression equations yielded similar results in the dominant
height predictions in terms of model efficiency that was as high as 0.95 and average absolute error that ranged from 5.2 to 5.6% (Table 6). The dominant height predictions from the basal
area-including model were slightly biased, but the relative errors remained below 25% in all cases.
Table 5 Dominant diameter prediction functions for Scots pine plantations
D0100q = a1Dq + a2N (LS)
Adj. R2 Regression Parameters
0.948 a1 a2
Estimate 1.299 6.21x10-4
RMSE SE 0.011 5.13x10-5
1.849 PRSE% 0.813 8.256
D0100q = a1Dq + a2 N(~Dq) (LS)
Adj. R2 Regression Parameters
0.946 a1 a2
Variance Function Parameters Estimate 1.298 7.641
RMSE SE 0.011 0.664
1.895 PRSE% 0.848 8.686
D0100q = a0 + a1Dq + a2G (GLS)
Adj. R2 Variance function Regression Parameters
0.955 (θ1 + Dqη1)2 (θ2 + Gη2)2 a0 a1 a2
Variance Function Parameters Estimate 3.371 1.047 0.062
RMSE θ1 η1 θ2 η2 SE 0.336 0.022 0.010
1.719 0.032 –1.700 70.242 1.484 PRSE% 9.964 2.063 16.466
D0100q = a1Dq + a2Dq2 (GLS)
Adj. R2 Variance function Regression Parameters
0.940 (θ1+Dq2η1)2 (Dqη2)2 a1 a2
Variance Function Parameters Estimate 1.679 –0.014
RMSE θ1 η1 η2 SE 0.029 0.001
1.986 2.70x10-39 12.712 25.150 PRSE% 1.719 8.921
Abbreviations: GLS – generalized linear squares, LS – linear squares, Dq – quadratic mean diameter (cm), G – stand basal area (m2·ha-1), N – stand density (trees·ha-1), D0100q – dominant diameter,
estimated as the quadratic mean of diameters of n100 (number of trees in the plot corresponding to 100 thickest trees per hectare) thickest trees in the plot (cm), Adj. R2 – adjusted coefficient of
determination, RMSE – Residual Standard Error (cm), a0, a1, a2, θ, η, θ1, η1, θ2, η2 – model parameters, SE – standard error, PRSE% – Parameter Relative Standard Error (%).
* N(~Dq)=exp(-0.1Dq)
Table 6 Validation statistics for the dominant height based on the predicted dominant diameter for Scots pine plantations
Dominant height prediction function Dominant diameter prediction function ME MARE% P0 P25 P50 P75 P100 * F stat. (df = 2, n)
= 1.299Dq + 6.21x10-4N 0.946 5.6 –24.3% –5.8% –2.2% 2.1% 23.0% 2.088NS
= 1.298Dq + 7.641exp(–0.1Dq) 0.946 5.2 –16.4% –4.0% –0.5% 4.9% 24.8% 0.974NS
= 3.371 + 1.047Dq + 0.062G 0.946 5.6 –22.9% –5.9% –2.3% 1.3% 22.4% 3.653*
= 1.679Dq – 0.014Dq2 0.948 5.5 –20.3% –5.9% –2.6% 22.4% 6.9% 2.999'
Abbreviations: Dq – quadratic mean diameter (cm), G – stand basal area (m2·ha-1), N – stand density (trees·ha-1), D0100q – dominant diameter, estimated as the quadratic mean of diameters of
n100 (number of trees in the plot corresponding to 100 thickest trees per hectare) thickest trees in the plot (cm), H0100q – dominant stand height (m) estimated as the Lorey's mean of heights
of n100 thickest trees in the plot (m), ME – model efficiency, MARE% – average of absolute values of relative errors ARE%, P0, P25, P50, P75, P100 – 0th, 25th, 50th, 75th, and 100th
percentile of relative errors RE%.
* F-statistics and its significance with (2, n) degrees of freedom of the simultaneous test for slope equal to 1 and zero intercept of the linear regression relating the observed and
predicted values. Levels of significance: *** – P<0.001, ** – P<0.01, * – P<0.05, ‘– P<0.1, NS – P>0.1
4. Discussion
The way of calculation of the number of trees, which are used for dominant diameter estimation, suggests that this number depends on stand density (trees·ha-1). For densities less than 500
trees·ha-1, n%20 obtains smaller values than n100, while the opposite is true for the denser stands. The dominant diameters based on the smaller subsample of trees will exceed those calculated from
the bigger one; therefore, D0100a and D0100q will have larger values than D020a and D020q at densities higher than 500 trees·ha-1. Indeed, our results showed that the deviations Dif2–Dif5 obtained
positive values for all, but the poplar data, collected in intensively managed plantations for timber, established at low stocking rates (Table 1). The study by Sharma et al. (2002) that compared
differently defined dominant heights for loblolly pine plantations showed that the average height of the 100 thickest trees per hectare was significantly bigger than the average height of the 20%
thickest trees at the 6 consecutive measurements of the unthinned stands taken over 15 years. In addition, at the time of the last measurement, the estimates of the dominant height according to the
100 thickest trees definition exceeded the values calculated by the other 6 definitions for both the thinned and unthinned stands (Sharma et al. 2002).
The tendency of decreasing deviations Dif2–Dif5 with the increase of diameter, observed for the coniferous plantations in our study, supports the idea of equivalency between the dominant diameter
definitions based on differently defined tree subsamples at advanced age when the density decreases to around 500 trees per hectare and the 100 thickest trees account for 20% of the trees per
hectare. Such equivalence, however, can rarely be observed with the short-rotation poplar plantations, with conventionally applied planting schemes of growing space below 20 m2 per tree.
Curtis and Marshal (2000) commented that, in stands of a narrow range of tree diameters, the difference between the quadratic and the arithmetic mean diameter is slight. Consequently, this should be
even more valid when just a portion of the dominant trees is considered. Our study confirmed this conclusion since the difference between the quadratic and the arithmetic mean based on the same
sample of trees did not exceed 2%. In addition, we found that the regression models derived for one of them predicted equally well the values of the other. Our results agree with those obtained by
Ducey and Kershaw (2023), where D0100a and D0100q they investigated for complex stands were so similar that they were practically redundant. Therefore, in our opinion, quadratic and arithmetic mean
dominant diameters, based on the same sample of trees, can be equivalently used for monospecific forest plantations, giving preference to the one that is easier to calculate.
Our study showed that the dominant stand diameter defined in various ways can be adequately expressed as a function of the quadratic mean diameter. The inclusion of the basal stand area as a
predictor led to improved goodness-of-fit in some cases (e. g. with the Black pine data) but showed a tendency to produce biased estimates (see Tables 4 and 6). Fitting a simple linear regression to
the quadratic mean diameter was an attractive alternative, but the estimated regression models showed a consistent violation of the requirement for normality of errors, with severely skewed residual
distribution for all dominant diameter definitions and both datasets from pine plantations. The intercept of the line was alternatively expressed by exponential or quadratic term of the quadratic
mean diameter (with the Scots pine data) or by their product (with the Black pine data), reflecting in this way the modifying effect of stand density.
The mean stand height is a stand-level attribute commonly used in the forest management practice in Bulgaria, while for dominant height estimation there is no officially established protocol.
Duhovnikov (1972) preferred the dominant height definition as the height corresponding to the average diameter of the 20% thickest trees in the stand. Petrin (1987) and Tonchev (2022) favored the
definition of the dominant height corresponding to the quadratic mean diameter of the 100 thickest trees per hectare. Shikov (1974) and Ferezliev and Tsakov (2010), on the other hand, gave their
preference to the dominant height as an arithmetic average of the heights of the 100 thickest trees per hectare. Stankova et al. (2022) developed a height-diameter model for Scots pine plantations in
Bulgaria based on tree diameter, average stand height and diameter, and validated the model for dominant stand height prediction from differently defined dominant diameters. Our study suggested a
dominant diameter prediction from the average stand diameter, showing that its substitution into the height-diameter model yields accuracy comparable to that when the experimental dominant diameter
values were used (see Table 4 in Stankova et al. 2022 and Table 6 of this study). Consequently, it can be concluded that the stand attributes of the monospecific plantations that are based on the
dominant trees (i.e. dominant stand diameter and height) can be confidently estimated from the average stand parameters.
Wang et al. (2024) studied the influence of 37 different measures of stand height and diameter on the accuracy of estimation of stand volume from allometric equations. The authors found that, when
density was removed from the equation, the height and diameter measures that best predicted stand volume shifted from moment estimators (i. e. the arithmetic and quadratic means) to the largest tree
estimators (i. e. estimates based on largest tree collectives). They concluded that this outcome probably reflects the proportionally greater contribution of the large trees to the total volume than
the small trees, which becomes particularly important when density is not known. However, their study showed that the best performing models without density caused errors 3–6 times greater than the
best performing models with density. In line with these observations and in agreement with the notion of dominant stand height as the most common phytocentric measure of site productivity (Skovsgaard
and Vanclay 2008, Weiskittel et al. 2011), our results suggest a practically applicable approach to estimate the site index, based on dominant height growth model, in Bulgaria (e. g. Stankova et al.
2024) using data, which are readily available in the stand inventory descriptions (i. e. average stand height and diameter).
5. Conclusions
Dominant stand diameter values, estimated as either arithmetic or quadratic mean from the same portion of the largest trees in the stand, are practically equal and can be used interchangeably.
However, dominant stand diameters calculated from differently defined largest tree communities may differ by as much as 35%. The difference decreases with advancing stand growth stage and with the
progress of self-thinning, but not for intensively managed industrial plantations of density below 500 trees·ha-1. Regardless of its definition, the dominant stand diameter can be adequately
predicted from a function of the quadratic mean diameter alone or considering the stand basal area as a second independent variable. Its predicted value can be confidently used as a predictor in a
height-diameter model to estimate the dominant stand height of a monospecific forest plantation, allowing the forecast of the stand attributes based on dominant trees when only average stand
variables are known.
This research was financially supported by the Bulgarian National Science Fund under the project »Adaptive management of forest stands under climate change: a pilot study of Scots pine (Pinus
sylvestris L.) plantations« (Grant Agreement: KP-06-N51/1, 2021). Data collection in Scots pine and Black pine plantations was partially sponsored by the Bulgarian-Swiss Forestry Programme as a part
of the first author's PhD study – supporting grant (2003–2007). Data collection in poplar and black locust plantations as well as in pine plantations in Rila mountain was subsidized by the Bulgarian
National Science Fund through Grant Agreements DFNI-Е01/6, 2012 and MU-SS-1103, 2002.
7. References
Albrecht, A.T., Fortin, M., Kohnle, U., Ningre, F., 2015: Coupling a tree growth model with storm damage modeling–conceptual approach and results of scenario simulations. Envir. Model. Soft. 69:
63–76. https://doi.org/10.1016/j.envsoft.2015.03.004
Anscombe, F.J., Glynn, W.J., 1983: Distribution of kurtosis statistic for normal statistics. Biometrika 70(1): 227–234.
Ben-Shachar, M., Lüdecke, D., Makowski, D., 2020: effectsize: Estimation of Effect Size Indices and Standardized Parameters. J. Open Source Softw. 5(56): 2815. https://doi.org/10.21105/joss.02815
Cañadas, N., Garciá, C., Montero, G., 1999: Relación altura-diámetro para Pinus pinea L. en el Sistema Central. In: Actas del Congreso de Ordenación y Gestión Sostenible de Montes (Santiago de
Compostela, 4–9 October) Vol. I: 139–153.
Cimini, D., Salvati, R., 2011: ComparIson of generalized nonlinear height diameter models for Pinus halepensis Mill. and Quercus cerris L. in Sicily (southern Italy). L'Italia Forestale e Montana 66
(5): 395–400.
Curtis, R., Marshall, D.D., 2000: Why quadratic mean diameter? West. J. Appl. For. 15(3): 137–139. https://doi.org/10.1093/wjaf/15.3.137
D'Agostino, R.B., 1970: Transformation to normality of the null distribution of g1. Biometrika 57(3): 679–681. https://doi.org/10.1093/biomet/57.3.679
Ducey, M.J., Kershaw Jr., J.A., 2023: Alternative expressions for stand diameter in complex forests. For. Ecosystems 10: 100114. https://doi.org/10.1016/j.fecs.2023.100114
Duhovnikov, Y., 1972: Site index tables for Scots pine stands according to the dominant height. Gorsko stopanstvo 1: 24–28.
Ferezliev, A., Tsakov, H., 2010: Determination of mean form factors and table establishment for Pseudotsuga menziesii (Mirb.) Franco in West Rhodopes. Nauka za gorata 46(1): 15–30.
Firke, S., 2023: janitor: Simple Tools for Examining and Cleaning Dirty Data. R package version 2.2.0. https://CRAN.R-project.org/package=janitor
Fox, J., Weisberg, S., 2019: An R Companion to Applied Regression, Third edition. Sage, Thousand Oaks CA. https://socialsciences.mcmaster.ca/jfox/Books/Companion/
Friendly, M., Fox, J., Monette, G., 2022: heplots: Visualizing Tests in Multivariate Linear Models. R package version 1.4-2. https://CRAN.R-project.org/package=heplots
Gizachew, B., Brunner, A., Øyen, B.H., 2012: Stand responses to initial spacing in Norway spruce plantations in Norway. Scand. J. For. Res. 27(7): 637–648. http://dx.doi.org/10.1080/
Grömping, U., 2006: Relative Importance for Linear Regression in R: The Package relaimpo. J. Stat. Softw. 17(1): 1–27.
Gross, J., Ligges, U., 2015: nortest: Tests for Normality. R package version 1.0-4. https://CRAN.R-project.org/package=nortest
Hebbali, A., 2020: olsrr: Tools for Building OLS Regression Models. R package version 0.5.3. https://CRAN.R-project.org/package=olsrr
Kassambara, A., 2023: ggpubr: 'ggplot2' Based Publication Ready Plots. R package version 0.6.0. https://CRAN.R-project.org/package=ggpubr
Komsta, L., Novomestky, F., 2022: moments: Moments, Cumulants, Skewness, Kurtosis and Related Tests. R package version 0.14.1. https://CRAN.R-project.org/package=moments
Pedersen, T., 2022: patchwork: The Composer of Plots. R package version 1.1.2. https://CRAN.R-project.org/package=patchwork
Petrin, R., 1987: Relationship between the mean and the dominant height of the beech stands. Gorsko stopanstvo i gorska promishlenost 10: 20–22.
Pienaar, L.V., Harrison, W.M., Rheney, J.W., 1990: PMRC yield prediction system for slash pine plantations in the Atlantic coast flatwoods. PMRC Tecnical Report 1990-3. Plantation Management Research
Cooperative, Warnell School of Forestry and Natural Resources, The University of Georgia, Athens, GA, 31 p. (cited in:
Lei, X., Peng, C., Wang, H., Zhou, X. 2009: Individual height–diameter models for young black spruce (Picea mariana) and jack pine (Pinus banksiana) plantations in New Brunswick, Canada. Forest
Chron. 85(1): 43–56.) https://doi.org/10.5558/tfc85043-1
Pinheiro, J.C., Bates, D.M., 2000: Mixed-Effects Models in S and S-PLUS. Springer, New York. https://doi.org/10.1007/b98882
Pinheiro, J., Bates, D., R, Core Team, 2023: nlme: Linear and Nonlinear Mixed Effects Models. R package version 3.1-162. https://CRAN.R-project.org/package=nlme
Pretzsch, H., 2009: Forest dynamics, growth, and yield. Springer-Verlag Berlin Heidelberg, Germany, 671 p.
Pukkala, T., Miina, J., 2005: Optimising the management of a heterogeneous stand. Silva Fenn. 39(4): 252–538.
Revelle, W., 2023: psych: Procedures for Psychological, Psychometric, and Personality Research. Northwestern University, Evanston, Illinois. R package version 2.3.3. https://CRAN.R-project.org/
Ruotsalainen, R., Pukkala, T., Kangas, A., Packalen, P., 2021: Effects of errors in basal area and mean diameter on the optimality of forest management prescriptions. Ann. For. Sci. 78: article
number 18. https://doi.org/10.1007/s13595-021-01037-4
Sarkar, D., 2008: Lattice: Multivariate Data Visualization with R. Springer, New York. https://lmdvr.r-forge.r-project.org
Skovsgaard, J.A., Vanclay, J.K., 2008. Forest site productivity: a review of the evolution of dendrometric concepts for even-aged stands. Forestry 81(1): 13–31. https://doi.org/10.1093/forestry/
Sharma, M., Amateis, R.L., Burkhart, H.E., 2002: Top height definition and its effect on site index determination in thinned and unthinned loblolly pine plantations. For. Ecol. Manage. 168(1–3):
163–175. https://doi.org/10.1016/S0378-1127(01)00737-X
Shikov, K., 1974: Intensity and beginning of thinning of coniferous stands. Gorsko stopanstvo 5: 8–10.
Siipilehto, J., Mehtätalo, L., 2013: Parameter recovery vs. parameter prediction for the Weibull distribution validated for scots pine stands in Finland. Silva Fenn. 47(4): 1057. http://dx.doi.org/
Stankova, T.V., Ferezliev, A., Dimitrov, D.N., Dimitrova, P., Stefanova, P., 2022: A Parsimonious Generalised Height-Diameter Model for Scots Pine Plantations in Bulgaria: a Pragmatic Approach.
SEEFOR 13(1): 37–51. https://doi.org /10.15177/seefor. 22-04
Stankova, T.V., González-Rodríguez, M.Á., Diéguez-Aranda, U., Ferezliev, A., Dimitrova, P., Kolev, K., Stefanova, P., 2024: Productivity-environment models for Scots pine plantations in Bulgaria: an
interaction of anthropogenic origin peculiarities and climate change. Ecol. Model. 490: 110654 https://doi.org/10.1016/j.ecolmodel.2024.110654
Tarmu, T., Laarmann, D., Kiviste, A., 2020: Mean height or dominant height–what to prefer for modelling the site index of Estonian forests? For. Stud. 72(1): 121–138. https://doi.org/10.2478/
Tomé, M., 1988: Modelação Do Crescimento Da Árvore Individual Em Povoamentos De Eucalyptus globulus Labill. (1a Rotação). Região Centro De Portugal. Ph.D. Thesis, ISA, Lisbon, 256 p. (cited in:
Sánchez-González, M., Cañellas, I., Montero, G., 2007: Generalized height-diameter and crown diameter prediction models for cork oak forests in Spain. For. Syst. 16(1): 76–88). https://doi.org/
Tonchev, T., 2022: Approaches to optimizing forest management planning and uses of forests. Intel Trans, Sofia, 146 p.
Van Laar, A., Akça, A., 2007: Forest mensuration. Springer Science & Business Media, Dordrecht, The Netherlands, 389 p. https://doi.org/10.1007/978-1-4020-5991-9
Wang, Y., Kershaw, J.A., Ducey, M.J., Sun, Y., McCarter, J.B., 2024: What diameter? What height? Influence of measures of average tree size on area-based allometric volume relationships. For.
Ecosyst. 11: 100171. https://doi.org/10.1016/j.fecs.2024.100171
Wei, T., Simko, V., 2021: R package 'corrplot': Visualization of a Correlation Matrix (Version 0.92). Available from https://github.com/taiyun/corrplot
Weiskittel, A.R., Hann, D.W., Kershaw Jr, J.A., Vanclay, J.K., 2011: Forest growth and yield modeling. John Wiley & Sons, Ltd., 415 p. https://doi.org/10.1002/9781119998518
Wickham, H., 2011: The Split-Apply-Combine Strategy for Data Analysis. J. Stat. Softw. 40(1): 1–29. https://www.jstatsoft.org/v40/i01/
Wickham, H., 2016: ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag, New York.
Wickham, H., Bryan, J., 2023: _readxl: Read Excel Files. R package version 1.4.2. https://CRAN.R-project.org/package=readxl
Wickham, H., Henry, L., 2023: purrr: Functional Programming Tools. R package version 1.0.1. https://CRAN.R-project.org/package=purrr
Wickham, H., Vaughan, D., Girlich, M., 2023a: tidyr: Tidy Messy Data. R package version 1.3.0. https://CRAN.R-project.org/package=tidyr
Wickham, H., François, R., Henry, L., Müller, K., Vaughan, D., 2023b: dplyr: A Grammar of Data Manipulation. R package version 1.1.2. https://CRAN.R-project.org/package=dplyr
Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L.D., François, R., Grolemund, G., Hayes, A., Henry, L., Hester, J., Kuhn, M., Pedersen, T.L., Miller, E., Bache, S.M., Müller, K., Ooms, J.,
Robinson, D., Seidel, D. P., Spinu, V., Takahashi, K., Vaughan, D., Wilke, C., Woo, K., Yutani, H., 2019: Welcome to the tidyverse. J. Open Source Softw. 43(4): 1686. https://doi.org/10.21105/
Yu, G., 2021: ggplotify: Convert Plot to 'grob' or 'ggplot' Object. R package version 0.1.0. https://CRAN.R-project.org/package=ggplotify
Zeileis, A., Hothorn, T., 2002: Diagnostic Checking in Regression Relationships. R News 2(3): 7–10.
© 2024 by the authors. Submitted for possible open access publication under the
terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Authors' addresses:
Assoc. Prof. Tatiana Stankova, PhD *
e-mail: tatianastankova@yahoo.com; tatiana.stankova@fri.bas.bg
Assist. Prof. Proletka Dimitrova, PhD
e-mail: p_dim72@abv.bg
Bulgarian Academy of Sciences
Forest Research Institute
Dept. of forest genetics, physiology and plantation forests
132 »St. Kliment Ohridski« blvd.
1756, Sofia
Friday N. Ogana, PhD
e-mail: fnogana23@vt.edu
Virginia Polytechnic Institute and State University
College of Natural Resources and Environment
Dept. of Forest Resources and Environmental Conservation
310 West Campus Dr., 311A Cheatham Hall.
24061 Blacksburg
* Corresponding author
Received: October 09, 2023
Accepted: March 19, 2024
Original scientific paper | {"url":"https://crojfe.com/archive/upcoming-issue/modelling-differently-defined-dominant-stand-diameters-of-monospecific-forest-plantations/","timestamp":"2024-11-05T14:21:00Z","content_type":"text/html","content_length":"86280","record_id":"<urn:uuid:843fab7e-c35b-49a6-9fa3-3134faab3f30>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00605.warc.gz"} |
prime numbers - Tung M Phung's Blog
I. What is a Prime Number? A number is called prime if it satisfies all the following conditions: is a natural number, greater than 1, has only 2 divisors: 1 and itself. For example, are the first
prime numbers. If so, what is NOT a prime number? not a natural number (e.g. a fraction, like or ), less than or equal to 1 (e.g. -3, 0, 1 are NOT prime), can be formed by multiplying 2 natural
numbers less than itself (e.g. is NOT a prime number). II. Why is Prime Number important? In the modern world, prime numbers are … Continue reading Introduction to Prime Numbers | {"url":"https://tungmphung.com/tag/prime-numbers/","timestamp":"2024-11-07T23:11:55Z","content_type":"text/html","content_length":"32278","record_id":"<urn:uuid:1f0442df-e41d-45d2-b988-5882b2c2ecb2>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00465.warc.gz"} |
Improving Prime Number Calculation in SQL Server - Axial SQL
Calculating prime numbers efficiently is a common challenge in programming. In this article, we will explore different approaches to generating prime numbers using SQL Server.
The Problem
A student asked if it was possible to write an SQL statement to generate prime numbers less than a given limit, such as 1000. The challenge was to find a solution that scales well, considering the
lack of loops in SQL.
The Solution
There are several useful facts from Number Theory that can help us solve this problem:
1. The prime factors of a number cannot be greater than the square root of that number.
2. All primes are of the form (6 * n ± 1), but not all numbers of that form are primes.
Based on these facts, we can devise different SQL statements to generate prime numbers.
Solution #1
One approach is to load a table with candidate numbers using math fact #2. We can then remove the non-primes by testing if there is a factor among the numbers less than the square root of each
CREATE TABLE Primes (
p INTEGER NOT NULL PRIMARY KEY CHECK ( p > 1 )
INSERT INTO Primes (p)
SELECT (6 * seq) + 1
FROM Sequence
WHERE (6 * seq) + 1 <= 1000
UNION ALL
SELECT (6 * seq) - 1
FROM Sequence
WHERE (6 * seq) + 1 <= 1000
DELETE FROM Primes
WHERE EXISTS (
SELECT *
FROM Primes AS P1
WHERE P1.p <= CEILING(SQRT(Primes.p))
AND (Primes.p % P1.p) = 0
Solution #2
Another approach is to load the candidates into the Primes table by hardwiring the first few known primes into a query. This limits the candidate set and improves performance.
INSERT INTO Primes (p)
SELECT seq
FROM Sequence
WHERE 0 NOT IN (seq % 2, seq % 3, seq % 5, seq % 7, ..);
Solution #3
A different approach is to generate all the non-primes and remove them from the Sequence table.
INSERT INTO Primes (p)
SELECT seq
FROM Sequence
WHERE seq <= 1000
SELECT (F1.seq * F2.seq) AS composite_nbr
FROM Sequence AS F1, Sequence AS F2
WHERE F1.seq BETWEEN 2 AND CEILING(SQRT(1000))
AND F2.seq BETWEEN 2 AND CEILING(SQRT(1000))
AND F1.seq <= F2.seq
AND (F1.seq * F2.seq) <= 1000
In this article, we explored different approaches to generating prime numbers using SQL Server. By leveraging mathematical facts and SQL features, we can efficiently calculate prime numbers up to a
given limit. The solutions provided avoid proprietary features and use standard SQL Server features that are compatible across different releases.
Remember, there are more advanced algorithms like the Sieve of Atkin and the various Wheel Sieves that can be used for even better performance. However, the solutions presented here offer a good
balance between simplicity and efficiency. | {"url":"https://axial-sql.com/info/improving-prime-number-calculation-in-sql-server/","timestamp":"2024-11-03T12:54:35Z","content_type":"text/html","content_length":"106787","record_id":"<urn:uuid:7ffc4d26-67e9-4c7d-ba78-726278325ed3>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00612.warc.gz"} |
Class 12 Maths Three Dimensional Geometry - Extramind
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Introduction
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Direction Cosines & Direction Ratios of a Line
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Example-I
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Equation of a line in space
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Example-II
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Angle between two Line
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Example-III
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Shortest Distance between two Lines
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Example-IV
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Plane
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Example-V
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Example-VI
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Example-VII
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Example-VIII
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Equation of Plane Passing through 3 Non Collinear Points
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Plane Passing through Intersection of two given Planes
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Example-IX
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Coplanarity of two Lines
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Angle between two Planes
Three Dimensional Geometry
Class 12th Maths
Chapter: Three Dimensional Geometry
Topic: Distance of a Point from a plane | {"url":"https://extraminds.com/class-12-maths-three-dimensional-geometry/","timestamp":"2024-11-09T23:30:02Z","content_type":"text/html","content_length":"162866","record_id":"<urn:uuid:5b70de36-43c1-4a3a-939b-6d1d4a1c1c8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00616.warc.gz"} |
IntMath Newsletter: CSS matrix, Humble Pi, van Gogh - Interactive MathematicsIntMath Newsletter: CSS matrix, Humble Pi, van Gogh
IntMath Newsletter: CSS matrix, Humble Pi, van Gogh
By Murray Bourne, 29 Aug 2019
29 Aug 2019
In this Newsletter:
1. New on IntMath: CSS matrix math
2. Resources: Humble Pi, AnswerThePublic
3. Math in the news: Proof
4. Math movies: Parker, van Gogh
5. Math puzzle: Mystery object
6. Final thought: Dry leaves
1. New on IntMath: CSS matrix math
I recently gave a talk to a local meetup group on the mathematics behind CSS matrix. CSS stands for "cascading style sheets", and is the system where Web designers can set font sizes, colors, and
also set sizes and vary shapes for objects like images and videos.
Matrices are used to transform geometric objects (scale, skew, rotate, translate and so on.) Computer games make extensive use of matrices to simulate depth, 3D objects and so on.
Here is the content of the talk. It explores how CSS transform is the result of matrix multiplication. Even if you're not interested in Web design, it's interesting to see another real-life
application of matrices.
See CSS Matrix - a mathematical explanation
I also developed the following interactive graph applet that demonstrates the concepts in the talk.
This is an interactive graph where you can vary sliders to see how CSS matrix changes the size, shape and location of an object.
See CSS matrix interactive applet
2. Resources
(a) Humble Pi: A Comedy of Maths Errors
When I was teaching a group of engineering students some years ago, I went to my boss with an idea. I suggested we incorporate examples of cases where things went wrong, so students would learn the
importance of accuracy and the safety issues that can arise out of sloppy and inaccurate mathematics.
He wasn't enthusiastic and squashed the idea, saying it would scare off students from choosing the degree. I felt it was a lost opportunity.
So I was interested when I came across the book "Humble Pi - A Comedy of Maths Errors" by Matt Parker.
Matt Parker's Humble Pi
This readable book was exactly what I had in mind when I approached my boss. Let's learn from cases where people made math errors, and see what the consequences were – not to apportion blame, but to
learn what can go wrong.
We all make math errors, but usually the worst outcome is a drop in grade, or momentary embarrassment. The people working in science and engineering fields should be aware of why their math teachers
insisted on accuracy.
So I suggested the book for the local library, and was pleasantly surprised how long I had to wait before I could read it (it turned out to be quite popular).
I recommend this book for any math student or teacher.
One of the videos below features Parker, covering some of the same errors detailed in the book.
Disclaimer: I have no connection with Matt Parker (other than through Twitter) and receive no commission.
(b) Teachers: Address the questions students are really asking
Some teachers see their job as simply giving out information, but there is no "value add" in that approach, especially since students can easily access such information in abundance.
One thing we can do better while teaching a topic is to actually address the questions students really have about that topic. One approach is to simply ask students what their questions are, and
there are a lot of apps and sites that facilitate this process (e.g. Google Forms and Survey Monkey are both easy to use).
Another thing to consider is to look at the questions students are likely to ask, before you even start planning the lessons. AnswerThePublic is a good resource for this.
AnswerThePublic is a database of common questions that people ask about topics. It provides a rich source of ideas on how we might go about introducing a topic, and pre-empting the stumbling
See Answer the Public
Topics to try are:
• Algebra (you'll see e.g. "How is algebra used in real life?", "Who invented algebra?", etc)
• Calculus (e.g. "When does a limit exist?", "Where to start?", "How is it used in computer science?", etc)
• Matrix (e.g. "When is a matrix orthogonal?", "Matrix when a^2 = a?", etc)
You can choose the "Data" tab at the top of each visualization to get easier-to-read lists of questions, and download the lot as a CSV (for Excel).
Sometimes in this resource you see questions that may seem quite odd, like "Does calculus cause kidney stones?", but "calculus" means "stone" and in medicine, it refers to a build-up of hard
substances in the body. I get it on my teeth.
3. Math in the news
Story of the Gaussian correlation inequality proof
The title "Gaussian correlation inequality" sounds scary, but it goes something like this.
I have a dart board sitting on a rectangular shape, and assume I'm a good dart player. When I throw a lot of darts at it, I expect the accuracy of my throws to form a somewhat bell-shaped curve
distribution. That is, most of the darts land somewhere close to the middle, and there are less dart holes as I go out from the middle.
The greater the circle overlaps the rectangle, the probability of striking both goes up.
The probability that a dart lands on both the circle and the rectangle is greater than or equal to the product of the individual probabilities.
At first, they didn't believe a German retiree had actually proved it.
See A Long-Sought Proof, Found and Almost Lost
4. Math Movies
(a) What Happens When Maths Goes Wrong?
This is a one-hour presentation by Matt Parker at the Royal Institution, London. It covers some interesting examples that are worth considering.
See What Happens When Maths Goes Wrong?
Parker is the Public Engagement in Mathematics Fellow at Queen Mary University of London.
(b) The unexpected math behind Van Gogh's "Starry Night"
Turbulence is one of the most tricky phenomena to model using mathematics. It is complicated and chaotic.
This video by Natalya St. Clair explores how Van Gogh incorporated turbulence in his art to give the impression of movement.
See: The unexpected math behind Van Gogh's "Starry Night"
5. Math puzzles
The puzzle in the last IntMath Newsletter asked about radii of mutually tangent circles. In fact, it turned out to be a 3x3 system of equations - it wasn't really a geometry question.
Correct answers with sufficient reasons were submitted by Russell, Nicola, Tomas and Thomas.
New math puzzle: Mystery object
This time, some investigation may be involved.
Mystery object
The above object was used to achieve a particular mathematical outcome – one that is still vitally important to this day. What is the object, where was it used, and what was the mathematical outcome?
(If you can't actually find it or figure it out, your speculation will prove interesting!)
You can leave your response here.
6. Final thought - what it could be like
Equatorial Singapore, where I live, is normally lush and green, and doesn't experience leaf falls as is normally the case for most places in Autumn.
However, this year we've had the driest and second hottest July on record, and practically no rain so far in August causing my local park to look like this:
Dry leaves and dead grass in Singapore.
Such dry and hot conditions are caused by a positive Indian Ocean dipole, the situation where the Western Indian Ocean is hotter than the Eastern part, causing hot droughts over Australia and most SE
Asian countries.
These make ideal conditions for farmers in Indonesia to set off forest fires in order to plant more palm oil, so there's been many fire hot spots reported there.
Meanwhile, the dry season in the Amazon has been the excuse, along with President Bolsanaro's encouragement, for farmers there to burn vast amounts of the Earth's lungs for ever-expanding
methane-producing cattle farms.
In the Arctic, fires across the tundra continue to burn, spewing even more carbon into the atmosphere.
These events are giving us an insight into how things will be if governments, companies and all of us fail to address our land use, our consumption, and our "economic growth at all costs" mentality.
We can stop it, but will we?
Until next time, enjoy whatever you learn.
See the 3 Comments below.
C. Trenor says:
31 Aug 2019 at 4:09 am [Comment permalink]
This article relates to your call for a change in land management to impact weather.
See little change in human behavior on a large scale unless it can lower costs or increase profit.
Murray says:
4 Sep 2019 at 12:20 pm [Comment permalink]
@ct: Yet another example of how big business has swayed science-ignorant politicians to their "make money at all costs" mentality.
It makes a lot of sense to grow more hemp, which sadly means it's most likely never going to happen.
Nicola says:
11 Sep 2019 at 7:55 pm [Comment permalink]
I didn't find what the stone actually is.
It looks like an abacus, so maybe it was a device for counting ... | {"url":"https://www.intmath.com/blog/letters/intmath-newsletter-css-matrix-humble-pi-van-gogh-12092","timestamp":"2024-11-12T16:05:09Z","content_type":"text/html","content_length":"141380","record_id":"<urn:uuid:483fbbc1-2e31-4847-b165-1197cbca02be>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00067.warc.gz"} |
CreateRunOptions: Creation of the RunOptions object required to the RunModel... in airGR: Suite of GR Hydrological Models for Precipitation-Runoff Modelling
Creation of the RunOptions object required to the RunModel* functions.
CreateRunOptions(FUN_MOD, InputsModel, IndPeriod_WarmUp = NULL, IndPeriod_Run, IniStates = NULL, IniResLevels = NULL, Imax = NULL, Outputs_Cal = NULL, Outputs_Sim = "all", MeanAnSolidPrecip = NULL,
IsHyst = FALSE, warnings = TRUE, verbose = TRUE)
FUN_MOD [function] hydrological model function (e.g. RunModel_GR4J, RunModel_CemaNeigeGR4J)
InputsModel [object of class InputsModel] see CreateInputsModel for details
IndPeriod_WarmUp (optional) [numeric] index of period to be used for the model warm-up [-]. See details
IndPeriod_Run [numeric] index of period to be used for the model run [-]. See details
IniStates (optional) [numeric] object of class IniStates [mm and °C], see CreateIniStates for details
IniResLevels (optional) [numeric] vector of initial fillings for the GR stores (4 values; use NA when not relevant for a given model) [- and/or mm]. See details
Imax (optional) [numeric] an atomic vector of the maximum capacity of the GR5H interception store [mm]; see RunModel_GR5H
(optional) [character] vector giving the outputs needed for the calibration
Outputs_Cal (e.g. c("Qsim")), the fewer outputs the faster the calibration
(optional) [character] vector giving the requested outputs
Outputs_Sim (e.g. c("DatesR", "Qsim", "SnowPack")), default = "all"
MeanAnSolidPrecip (optional) [numeric] vector giving the annual mean of average solid precipitation for each layer (computed from InputsModel if not defined) [mm/y]
IsHyst [boolean] boolean indicating if the hysteresis version of CemaNeige is used. See details
warnings (optional) [boolean] boolean indicating if the warning messages are shown, default = TRUE
verbose (optional) [boolean] boolean indicating if the function is run in verbose mode or not, default = TRUE
(optional) [numeric] index of period to be used for the model warm-up [-]. See details
[numeric] index of period to be used for the model run [-]. See details
(optional) [numeric] object of class IniStates [mm and °C], see CreateIniStates for details
(optional) [numeric] vector of initial fillings for the GR stores (4 values; use NA when not relevant for a given model) [- and/or mm]. See details
(optional) [numeric] an atomic vector of the maximum capacity of the GR5H interception store [mm]; see RunModel_GR5H
(optional) [character] vector giving the outputs needed for the calibration (e.g. c("Qsim")), the fewer outputs the faster the calibration
(optional) [character] vector giving the requested outputs (e.g. c("DatesR", "Qsim", "SnowPack")), default = "all"
(optional) [numeric] vector giving the annual mean of average solid precipitation for each layer (computed from InputsModel if not defined) [mm/y]
[boolean] boolean indicating if the hysteresis version of CemaNeige is used. See details
(optional) [boolean] boolean indicating if the warning messages are shown, default = TRUE
(optional) [boolean] boolean indicating if the function is run in verbose mode or not, default = TRUE
Users wanting to use FUN_MOD functions that are not included in the package must create their own RunOptions object accordingly.
Since the hydrological models included in airGR are continuous models, meaning that internal states of the models are propagated to the next time step, IndPeriod_WarmUp and IndPeriod_Run must be
continuous periods, represented by continuous indices values; no gaps are allowed. To calculate criteria or to calibrate a model over discontinuous periods, please see the Bool_Crit argument of the
CreateInputsCrit function.
The model initialisation options can either be set to a default configuration or be defined by the user.
This is done via three vectors: IndPeriod_WarmUp, IniStates, IniResLevels. A default configuration is used for initialisation if these vectors are not defined.
IndPeriod_WarmUp default setting ensures a one-year warm-up using the time steps preceding the IndPeriod_Run. The actual length of this warm-up might be shorter depending on data availability (no
missing value of climate inputs being allowed in model input series).
IniStates and IniResLevels are automatically set to initialise all the model states at 0, except for the production and routing stores levels which are respectively initialised at 30 % and 50 % of
their capacity. In case GR5H is used with an interception store, the intercetion store level is initialised by default with 0 mm. In case GR6J is used, the exponential store level is initialised by
default with 0 mm. This initialisation is made at the very beginning of the model call (i.e. at the beginning of IndPeriod_WarmUp or at the beginning of IndPeriod_Run if the warm-up period is
IndPeriod_WarmUp can be used to specify the indices of the warm-up period (within the time series prepared in InputsModel).
remark 1: for most common cases, indices corresponding to one or several years preceding IndPeriod_Run are used (e.g. IndPeriod_WarmUp = 1000:1365 and IndPeriod_Run = 1366:5000). However, it is also
possible to perform a long-term initialisation if other indices than the warm-up ones are set in IndPeriod_WarmUp (e.g. IndPeriod_WarmUp = c(1:5000, 1:5000, 1:5000, 1000:1365)).
remark 2: it is also possible to completely disable the warm-up period when using IndPeriod_WarmUp = 0L. This is necessary if you want IniStates and/or IniResLevels to be the actual initial values of
the model variables from your simulation (e.g. to perform a forecast form a given initial state).
IniStates and IniResLevels can be used to specify the initial model states.
remark 1: IniStates and IniResLevels can not be used with GR1A.
remark 2: if IniStates is used, two possibilities are offered: - IniStates can be set to the $StateEnd output of a previous RunModel call, as $StateEnd already respects the correct format; -
IniStates can be created with the CreateIniStates function.
remark 3: in addition to IniStates, IniResLevels allows to set the filling rate of the production and routing stores for the GR models. For instance for GR4J and GR5J: IniResLevels = c(0.3, 0.5, NA,
NA) should be used to obtain initial fillings of 30 % and 50 % for the production and routing stores, respectively. For GR6J, IniResLevels = c(0.3, 0.5, 0, NA) should be used to obtain initial
fillings of 30 % and 50 % for the production and routing stores levels and 0 mm for the exponential store level, respectively. For GR5H with an interception store, IniResLevels = c(0.3, 0.5, NA, 0.4)
should be used to obtain initial fillings of 30 %, 50 % and 40 % for the production, routing and interception stores levels, respectively. IniResLevels is optional and can only be used if IniStates
is also defined (the state values corresponding to these two other stores in IniStates are not used in such case).
If IsHyst = FALSE, the original CemaNeige version from Valéry et al. (2014) is used. If IsHyst = TRUE, the CemaNeige version from Riboust et al. (2019) is used. Compared to the original version, this
version of CemaNeige needs two more parameters and it includes a representation of the hysteretic relationship between the Snow Cover Area (SCA) and the Snow Water Equivalent (SWE) in the catchment.
The hysteresis included in airGR is the Modified Linear hysteresis (LH*); it is represented on panel b) of Fig. 3 in Riboust et al. (2019). Riboust et al. (2019) advise to use the LH* version of
CemaNeige with parameters calibrated using an objective function combining 75 % of KGE calculated on discharge simulated from a rainfall-runoff model compared to observed discharge and 5 % of KGE
calculated on SCA on 5 CemaNeige elevation bands compared to satellite (e.g. MODIS) SCA (see Eq. (18), Table 3 and Fig. 6). Riboust et al. (2019)'s tests were realized with GR4J as the chosen
rainfall-runoff model.
[list] object of class RunOptions containing the data required to evaluate the model outputs; it can include the following:
IndPeriod_WarmUp [numeric] index of period to be used for the model warm-up [-]
IndPeriod_Run [numeric] index of period to be used for the model run [-]
IniStates [numeric] vector of initial model states [mm and °C]
IniResLevels [numeric] vector of initial filling rates for production and routing stores [-] and level for the exponential store for GR6J [mm]
Outputs_Cal [character] character vector giving only the outputs needed for the calibration
Outputs_Sim [character] character vector giving the requested outputs
Imax [numeric] vector giving the maximal capacity of the GR5H interception store
MeanAnSolidPrecip [numeric] vector giving the annual mean of average solid precipitation for each layer [mm/y]
library(airGR) ## loading catchment data data(L0123001) ## preparation of the InputsModel object InputsModel <- CreateInputsModel(FUN_MOD = RunModel_GR4J, DatesR = BasinObs$DatesR, Precip =
BasinObs$P, PotEvap = BasinObs$E) ## run period selection Ind_Run <- seq(which(format(BasinObs$DatesR, format = "%Y-%m-%d")=="1990-01-01"), which(format(BasinObs$DatesR, format = "%Y-%m-%d")==
"1999-12-31")) ## preparation of the RunOptions object RunOptions <- CreateRunOptions(FUN_MOD = RunModel_GR4J, InputsModel = InputsModel, IndPeriod_Run = Ind_Run) ## simulation Param <- c(X1 =
734.568, X2 = -0.840, X3 = 109.809, X4 = 1.971) OutputsModel <- RunModel(InputsModel = InputsModel, RunOptions = RunOptions, Param = Param, FUN_MOD = RunModel_GR4J) ## results preview plot
(OutputsModel, Qobs = BasinObs$Qmm[Ind_Run]) ## efficiency criterion: Nash-Sutcliffe Efficiency InputsCrit <- CreateInputsCrit(FUN_CRIT = ErrorCrit_NSE, InputsModel = InputsModel, RunOptions =
RunOptions, Obs = BasinObs$Qmm[Ind_Run]) OutputsCrit <- ErrorCrit_NSE(InputsCrit = InputsCrit, OutputsModel = OutputsModel)
For more information on customizing the embed code, read Embedding Snippets. | {"url":"https://rdrr.io/cran/airGR/man/CreateRunOptions.html","timestamp":"2024-11-14T13:46:28Z","content_type":"text/html","content_length":"44871","record_id":"<urn:uuid:503ab75b-abea-4199-8cc0-534c91819ea9>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00301.warc.gz"} |
Nonlinear Poisson Regression Models
Example 52.4 Nonlinear Poisson Regression Models
This example illustrates how to fit a nonlinear Poisson regression with PROC MCMC. In addition, it shows how you can improve the mixing of the Markov chain by selecting a different proposal
distribution or by sampling on the transformed scale of a parameter. This example shows how to analyze count data for calls to a technical support help line in the weeks immediately following a
product release. This information could be used to decide upon the allocation of technical support resources for new products. You can model the number of daily calls as a Poisson random variable,
with the average number of calls modeled as a nonlinear function of the number of weeks that have elapsed since the product’s release. The data are input into a SAS data set as follows:
title 'Nonlinear Poisson Regression';
data calls;
input weeks calls @@;
During the first several weeks after a new product is released, the number of questions that technical support receives concerning the product increases in a sigmoidal fashion. The expression for the
mean value in the classic Poisson regression involves the log link. There is some theoretical justification for this link, but with MCMC methodologies, you are not constrained to exploring only
models that are computationally convenient. The number of calls to technical support tapers off after the initial release, so in this example you can use a logistic-type function to model the mean
number of calls received weekly for the time period immediately following the initial release. The mean function
The likelihood for every observation
Past experience with technical support data for similar products suggests using a gamma distribution with shape and scale parameters 3.5 and 12 as the prior distribution for
The following PROC MCMC statements fit this model:
ods graphics on;
proc mcmc data=calls outpost=callout seed=53197 ntu=1000 nmc=20000
ods select TADpanel;
parms alpha -4 beta 1 gamma 2;
prior alpha ~ normal(-5, sd=0.25);
prior beta ~ normal(0.75, sd=0.5);
prior gamma ~ gamma(3.5, scale=12);
lambda = gamma*logistic(alpha+beta*weeks);
model calls ~ poisson(lambda);
The one PARMS statement defines a block of all parameters and sets their initial values individually. The PRIOR statements specify the informative prior distributions for the three parameters. The
assignment statement defines
lambda = gamma / (1 + exp(-(alpha+beta*weeks)));
Mixing is not particularly good with this run of PROC MCMC. The ODS SELECT statement displays only the diagnostic graphs while excluding all other output. The graphical output is shown in Output
By examining the trace plot of the gamma parameter, you see that the Markov chain sometimes gets stuck in the far right tail and does not travel back to the high density area quickly. This effect can
be seen around the simulations number 8000 and 18000. One possible explanation for this is that the random walk Metropolis is taking too small of steps in its proposal; therefore it takes more
iterations for the Markov chain to explore the parameter space effectively. The step size in the random walk is controlled by the normal proposal distribution (with a multiplicative scale). A (good)
proposal distribution is roughly an approximation to the joint posterior distribution at the mode. The curvature of the normal proposal distribution (the variance) does not take into account the
thickness of the tail areas. As a result, a random walk Metropolis with normal proposal can have a hard time exploring distributions that have thick tails. This appears to be the case with the
posterior distribution of the parameter gamma. You can improve the mixing by using a thicker-tailed proposal distribution, the t-distribution. The option PROPDIST controls the proposal distribution.
PROPDIST=T(3) changes the proposal from a normal distribution to a t-distribution with three degrees of freedom.
The following statements run PROC MCMC and produce Output 52.4.2:
proc mcmc data=calls outpost=callout seed=53197 ntu=1000 nmc=20000
propcov=quanew stats=none propdist=t(3);
ods select TADpanel;
parms alpha -4 beta 1 gamma 2;
prior alpha ~ normal(-5, sd=0.25);
prior beta ~ normal(0.75, sd=0.5);
prior gamma ~ gamma(3.5, scale=12);
lambda = gamma*logistic(alpha+beta*weeks);
model calls ~ poisson(lambda);
Output 52.4.2 displays the graphical output.
The trace plots are more dense and the ACF plots have faster drop-offs, and you see improved mixing by using a thicker-tailed proposal distribution. If you want to further improve the Markov chain,
you can choose to sample the gamma:
The parameter gamma has a positive support. Often in this case, it has right-skewed posterior. By taking the
The following statements produce Output 52.4.4 and Output 52.4.3:
proc mcmc data=calls outpost=callout seed=53197 ntu=1000 nmc=20000
propcov=quanew propdist=t(3)
monitor=(alpha beta lgamma gamma);
ods select PostSummaries PostIntervals TADpanel;
parms alpha -4 beta 1 lgamma 2;
prior alpha ~ normal(-5, sd=0.25);
prior beta ~ normal(0.75, sd=0.5);
prior lgamma ~ egamma(3.5, scale=12);
gamma = exp(lgamma);
lambda = gamma*logistic(alpha+beta*weeks);
model calls ~ poisson(lambda);
ods graphics off;
In the PARMS statement, instead of gamma, you have lgamma. Its prior distribution is egamma, as opposed to the gamma distribution. Note that the following two priors are equivalent to each other:
prior lgamma ~ egamma(3.5, scale=12);
prior gamma ~ gamma(3.5, scale=12);
The gamma assignment statement transforms lgamma to gamma. The lambda assignment statement calculates the mean for the Poisson by using the gamma parameter. The MODEL statement specifies a Poisson
likelihood for the calls response.
The trace plots and ACF plots in Output 52.4.3 show the best mixing seen so far in this example.
Output 52.4.4 shows the posterior summary statistics of the nonlinear Poisson regression. Note that the lgamma parameter has a more symmetric density than the skewed gamma parameter. The Metropolis
algorithm always works better if the target distribution is approximately normal.
The MCMC Procedure
20000 -4.8907 0.2160 -5.0435 -4.8872 -4.7461
20000 0.6957 0.1089 0.6163 0.6881 0.7698
20000 3.7391 0.3487 3.4728 3.7023 3.9696
20000 44.8136 17.0430 32.2263 40.5415 52.9647
This example illustrates that PROC MCMC can fit Bayesian nonlinear models just as easily as Bayesian linear models. More importantly, transformations can sometimes improve the efficiency of the
Markov chain, and that is something to always keep in mind. Also see Using a Transformation to Improve Mixing for another example of how transformations can improve mixing of the Markov chains. | {"url":"http://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/statug_mcmc_sect047.htm","timestamp":"2024-11-11T23:51:43Z","content_type":"application/xhtml+xml","content_length":"27100","record_id":"<urn:uuid:66e68ace-4366-4f85-8079-b2832979ee82>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00152.warc.gz"} |
Arimaa Forum - Print Page
Arimaa Forum (http://arimaa.com/arimaa/forum/cgi/YaBB.cgi)
Arimaa >> General Discussion >> Empirically derived material evaluators Part II
(Message started by: IdahoEv on Nov 16^th, 2006, 4:43pm)
Title: Empirically derived material evaluators Part II
Post by IdahoEv on Nov 16^th, 2006, 4:43pm I tested the ability of nine material eval functions to give a probability estimate of who was winning games in the historical database. The best performer
was DAPE, but not when using its default coefficients.
Each algorithm produces a score that is positive for a gold advantage and negative for a silver advantage. I convert this number into a probability of a gold win with this formula:
P[gold win]=1.0/(1.0+exp(-score/K))
And in each case I curve-fit the constant K to generate the best score for that particular material eval algorithm. The optimized K can be viewed as just a measure of the scale of that algorithm.
The error measured is the sum-squared-difference between estimated P[gold win] and 1 (for a gold win) or 0 (for a silver win) over all states that appear in the database subject to the same criteria
I defined in the last posting (players rated 1600+, exchanges only measured at the end of the exchange, etc.). The fit optimizer I used attempts to minimize that error. Algorithms which more
accurately predict who is winning most often will have a lower error value.
For some of the algorithms, I also performed a best-fit on the coefficients of the algorithm itself. Here's a summary of the results, from worst to best. I'll explain the particulars of each one
afterwards. The error +/- column gives the standard deviation over 5 runs the Total Error column.
Algorithm K Total Error Error +/-
Count Pieces 1.40 2223.27 0.026
Linear AB (Default coefficients) 3.13 2145.69 0.007
DAPE (Default coefficients) 3.90 2124.19 0.000
FAME (Default coefficients) 2.92 2042.81 0.002
FAME-X (Optimized Coefficients 6.06 1908.44 0.085
FAME (Optimized Coefficients) 5.58 1895.01 0.037
RabbitCurveABC (Optimized Coefficients) 2.82 1891.21 0.646
LinearAB (Optimized Coefficients) 1.77 1889.10 0.015
DAPE (Optimized Coefficients) 3.931 1871.10 1.063
The DAPE Algorithm was clearly the best performer - but only once the coefficients were optimized by gradient descent over the database. And those optimized coefficients were decidedly unusual.
The standard deviations of the errors show that even though I did 5 runs of the optimization for each algorithm often with slightly different results, the overall performance of the different
algorithms listed here is non-overlapping. There was some variance to the performance of LinearAB, for example, but it always beat RabbitCurve ABC and always lost to DAPE (Optimized Coefficients).
Count Pieces
Just what you'd expect; the score is +1.0 point for each piece gold has, -1.0 point for each piece silver has. This is here to give you a sense of the overall performance of the algorithms relative
to a baseline. Most of the time, a player with more pieces is ahead. The performance improvement of the algorithms below is a measure of their ability to correctly evaluate the relatively few cases
where a player has a numerical deficit but a functional advantage.
LinearAB (Default Coefficients)
As described in my previous post (http://arimaa.com/arimaa/forum/cgi/YaBB.cgi?board=talk;action=display;num=1163044066), this is a simple increasing value for each functional rank of pieces.
Rabbits are worth 1.0 points, A is the value of a cat and B is the ratio of each higher officer level to the last. Cats are worth A points. Dogs are worth A*B points, horses are worth A*B*B points,
etc. (With the exception that these points are actually applied to collapsed functional levels. If cats have been eliminated from the game entirely, dogs are worth A points and horses are worth A*B
points, etc.)
Here the default coefficients (my original guess last week) are:
A = 2.0 (A cat is worth two rabbits)
B = 1.5 (A dog is worth 1.5 cats, etc.)
I intended from the start for these coefficients "official version" to be optimized automatically, but I put this here as an illustration of the guessing ability of humans. As with the other human
guesses about piece value, it performed miserably. :)
DAPE (Default Coefficients)
DAPE implemented exactly as on Janzert's calculator, including the constant division to make initial rabbit worth 1.0. The only thing I varied was the K value in order to optimize the probability
The other coefficients are: A=30, S=0.375, E=0.3, AR=200.0 BR=0.4, AN=200.0, BN=0.75.
FAME (Default Coefficients)
FAME implemented exactly as on Janzert's calculator, including the constant division to make initial rabbit worth 1.0. The only thing I varied was the K value in order to optimize the probability
The coefficients are: Level_1=256.0, Level_2=87.0, A=1.5, B=600, C=2.0. (Where A is the ratio of Level 2 to Level 3, and residual rabbits are scored at B/(R+CP) for opposing rabbits R and opposing
officers P.
FAME-X (Optimized Coefficients)
FAME-X is a slight variation on FAME I tried for which the rabbit score is applied to all the rabbits a player has, instead of the "residual rabbits" after others are used to fill in vs. opposing
officers. It did not improve on FAME, so I'll spare you further analysis for now.
FAME (Optimized Coefficients)
With the values of Level 1 and Level 2 fixed at 256 and 87, I allowed the other three coefficients A, B, and C to float and the optimizer to find best values for them. The results were a fairly
consistent and dramatic improvement over the default FAME coefficients, and the resulting coefficients were also quite consistent. They were:
A = 1.249 +/- 0.005
B = 900.1 +/- 14.13
C = 0.96 +/- 0.023
The most dramatic result is that C is 2.0 in Fritzl's formulation but 0.96 when optimized. My best-guess explanation is this: C attempts to devalue rabbits in inverse proportion to the number of
opposing officers. However, in FAME, opponents with many officers already punish rabbit score by consuming some of the rabbits in the "filling in" part of the function, preventing those rabbits from
being scored entirely. It is my guess that in the average case this "filling in" effect does more than is necessary to punish a player's rabbit score for being behind in officers, and so C is
dropping below 1.0 to compensate.
FAME-X was an attempt to test this hypothesis, and while it did result in a much higher C value (~4.9) it didn't improve on FAME's ability to score the database.
Described in the previous post, this is identical to LinearAB except that the 8 points of the rabbits as a whole are distributed according to
this function: http://idahoev.com/arimaa/images/rabbitcurve.png
The coefficient A scales the value of a cat to the average rabbit, and B scales subsequent officers to cats. B was very consistent at 1.316 +/- 0.003, but A and C covaried and ranged pretty widely. C
seems to prefer values from 0.2 to 0.3, indicating some usefulness of the curve shape, but A varied inversely to C and needed to range wider to compensate from ~0.5 to ~2.5! I believe A is attempting
to scale the value of the officers relative to the value of the first rabbits lost rather than the average rabbits lost, especially given how well-represented these states are in the database. Since
the value of the initial rabbit lost is very dependent on C, A is extremely sensitive to that
The optimized values:
A=1.241 +/- 0.004
B=1.316 +/- 0.002
I'm continually surprised at how well this evaluator has performed. I originally implemented it as a toss-off algorithm just to test my optimizer. In particular, the fact that it simply values all
rabbits at 1.0 points seems incredibly naive to me, as I believe that the 8th rabbit lost should be much more expensive than the 1st rabbit lost.
The empirical optimizer seems to disagree. What can I say.
DAPE (Optimized Coefficients)
With coefficients optimized by my system, DAPE outperformed the other algorithms and not by a small amount! How much of this is due to DAPE being a better description of Arimaa's mechanics (pieces
evaluated with respect to how many other pieces they can beat) and how much is due to the fact that DAPE has seven tweakable coefficients is not clear. While the end performance was fairly
consistent, the actual coefficients varied quite a bit; each time I ran the optimizer it found a slightly different solution for DAPE. With seven interdependent coefficients, there are bound to be
many local minima in the error function.
Here are the coefficients (using Toby's terminology from this post (http://arimaa.com/arimaa/forum/cgi/YaBB.cgi?board=devTalk;action=display;num=1163129051) and the code on Janzert's calculator
(http://arimaa.janzert.com/fame.html)) at the end of the best-performing optimizing run:
AR = -0.46
BR = -1.0683
AN = 428.7
BN = 0.9755
(Error = 1870.34)
The higher values of S and E (relative to the defaults) are telling us that, like with FAME, having a larger number of pieces is more important than having stronger pieces (relative to what we think,
anyhow.) AN and AR are not described in 99of9's original post on DAPE but are alluded to later in the thread and can be found in the code on Janzert's FAME/DAPE web page.
The value of AR is the coefficient of the "Low-rabbits punisher" function. That value is fascinating, and it was not particular to this run. In every run I performed, the optimizer immediately
reduced AR to something very near zero, in fact going slightly negative in 4 out of 5 runs. Essentially, the low-rabbits optimizer with any positive value hurts the ability of DAPE to accurately
score the database, and the optimizer eliminated it. However, the overall "low piece punisher" function was reliably deemed to be more important than Toby had originally estimated it: AN optimized to
434.8 +/- 53.0 (compared to the default of 200).
So, performance-wise DAPE is the best algorithm we have so far but it needs better coefficients. The "low rabbits" function could be removed entirely without reducing DAPE's ability to predict the
What's most interesting to me is how large of an improvement algorithms like LinearAB, FAME and DAPE get when an automatic system tweaks their coefficients. What this says to me is that our intuition
about the value of the pieces, despite years of experience and thousands of games played, is still not quite correct and we as humans aren't very good at guessing.
Title: Re: Empirically derived material evaluators Part I
Post by Fritzlein on Nov 16^th, 2006, 9:56pm Thanks for running all the optimizations and giving detailed description of each. I will respond more thoughtfully soon, but my first question is: Which
of the systems is the IdahoEv System? My vote would be for the optimized linearAB to take a place next to the "official" FAME and DAPE scores on Janzert's material calculator. Or maybe you aren't
done yet?
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 16^th, 2006, 10:06pm
on 11/16/06 at 21:56:50, Fritzlein wrote:
Which of the systems is the IdahoEv System? My vote would be for the optimized linearAB to take a place next to the "official" FAME and DAPE scores on Janzert's material calculator. Or maybe you
aren't done yet?
I have an entirely separate one that I'm working on, but I haven't figured out how I want to score rabbits yet. I'll call it my official one if it outperforms LinearAB. Of course, I probably won't
stop tweaking it until it does. :-)
Title: Re: Empirically derived material evaluators Part I
Post by Fritzlein on Nov 19^th, 2006, 9:50pm Again, I want to come back to this in more detail, but I thought I would throw out an observation: Shouldn't the CurveABC always perform at least as well
as the LinearAB? If nothing else, it can set C=0 in CurveABC and then it is linear. If it is performing worse, then it means your optimizer is getting caught in local minima that aren't as good as
the ones LinearAB is finding. In other words, curving the rabbits makes it harder to make good predictions.
Now, I absolutely believe that the last few rabbits are worth more than the first few. Curving the rabbits must be better than having them linear. But since the results of curving are worse, that
casts some doubt on the whole procedure. Something I don't understand is affecting the results, and I'll be suspicious until I can think of a good explanation.
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 20^th, 2006, 3:23am Sadly, being the terrible mathematician I am, the first function I found for RabbitCurveABC that had the shape I wanted is discontinuous at C=0, has a zero
denominator somewhere I believe. (I don't have it in front of me ATM).
So actually, RabbitCurveABC can't set C=0. Stupid, I know, and if any mathematicians present want to craft a better curve function (hint) i'd be happy to run it. But I don't think that's what's going
on in any case.
Given the observed interdependence of A and C, I suspect the local minimum issue is what's really happening. I probably could have run it a dozen more times, or annealed more slowly, and found better
solutions. In "Part I", with the simpler error function, RabbitCurveABC generally did outperform LinearAB when run long enough, but also often fell into local minima that were slightly worse.
In any case I was trying to be more-or-less scientifially honest and so peformed 5 runs and for each and left it at that for now. I suspect RabbitCurveABC could slightly outperform LinearAB if given
enough time, but since optimized DAPE did such a lovely job I wanted to move on to other things rather than spend another 6 hours of CPU time trying to squeeze a few more points out of an algorithm I
didn't like much anyway.
Title: Re: Empirically derived material evaluators Part I
Post by Fritzlein on Nov 20^th, 2006, 9:41pm
on 11/20/06 at 03:23:25, IdahoEv wrote:
if any mathematicians present want to craft a better curve function (hint) i'd be happy to run it.
A pretty standard family of curves from (0,0) to (1,1) is f(x) = x^p where p>0. Since you are trying to get from (0,0) to (8,8), you could use f(x) = 8*(x/8)^p. Then f(8), i.e. the value of all 8
rabbits, is always 8. f(1) is the value of having one rabbit rather than zero, whereas 8 - f(7) is the value of having eight rabbis rather than seven.
When p=1 it is a straight line of all rabbits being equally valued; p<1 suggests the last few rabbits on the board are worth more than the first few to be captured, while p>1 suggests the reverse.
With this parameter you can start out the rabbits as linear and let the line curve in either direction. I would laugh if the optimizer got stuck in a curve of the wrong direction! But one possible
explanation is that later rabbits are already getting promoted in value when you collapse the strong pieces downward, so the optimizer has to compensate by curving the other way.
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 20^th, 2006, 11:39pm 99of9 asked me to re-run the analysis on DAPE but constraining the system to only those games where the opponents were within 100 ratings points of each
other. I did so, here are the results. I'm showing the full results for all 5 runs both without (first) and with that constraint.
Without the constraint (these are the same results summarized above, but all 5 runs are shown instead of only the best one).
51513 different states from 7913 games evaluated
Err A S E AR BR AN BN K
1870.3447 27.9307 0.7833 0.4282 -0.46 -1.0683 428.7 0.9755 4.7811
1872.9452 28.0058 0.7924 0.5608 -5.64 -0.0848 412.24 0.9746 4.5903
1871.0219 31.156 0.7874 0.5253 -0.88 -0.9979 434.96 0.974 5.0967
1870.6683 31.0662 1.1669 0.6471 55.96 0.976 377.52 0.9854 3.7867
1870.507 28.1805 0.8544 0.452 -1.74 -0.8613 520.76 0.9818 4.6596
With the additional constraint (ABS(wrating-brating)<100):
18563 different states from 2967 games evaluated
Err A S E AR BR AN BN K
1020.1945 31.2871 2.1109 -0.0433 139.74 0.9758 422.7 0.354 3.1555
1020.0298 29.3184 2.1287 -0.0221 174.32 0.9829 1195.58 0.293 2.9082
1019.6417 27.5985 0.8862 0.0631 36.14 0.9221 457.68 0.984 5.4417
1019.7013 29.1201 1.2575 0.0688 89.56 0.9708 363.58 0.9892 4.4402
1019.7303 29.3904 0.8043 0.0466 37.66 0.9267 505.8 0.9821 6.2032
The changes I see - AR/BR is now a factor, but E is not. AN/BN now varies much more widely, as does S.
I'll let 99 provide the interpretation for why this might be and/or what these numbers mean, but this does demonstrate that the empirical approach is noticeably dependent on *which* historical games
you are studying.
Title: Re: Empirically derived material evaluators Part I
Post by Fritzlein on Nov 20^th, 2006, 11:44pm Ah, constraining the games to even matches is a good idea I didn't think of. If the games are lopsided in rating it might encourage the optimizer to
over-react along the lines of predicting that whoever takes the first piece will win. If the stronger player almost always takes the first piece, it suddenly looks like a huge advantage...
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 21^st, 2006, 12:44am
on 11/20/06 at 23:44:41, Fritzlein wrote:
If the stronger player almost always takes the first piece, it suddenly looks like a huge advantage...
Do we have a sense of how true this is?
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 21^st, 2006, 3:23am
on 11/20/06 at 21:41:29, Fritzlein wrote:
A pretty standard family of curves from (0,0) to (1,1) is f(x) = x^p where p>0. Since you are trying to get from (0,0) to (8,8), you could use f(x) = 8*(x/8)^p.
Thank you, Fritz!
I wrote RabbitCurve2ABC which uses this function. Here's a plot of the function (http://idahoev.com/arimaa/images/rabbitcurve2.png), for the curious. It has a noticeably different shape than the one
I used in RabbitCurveABC (http://idahoev.com/arimaa/images/rabbitcurve.png). The new one more closely approximates a linear region with the first rabbits lost all costing a similar amount followed by
a sudden increase in value for the last rabbit. It seems reasonable that it might more closely fit the actual behavior of arimaa games.
Tested on the same data, RabbitCurve2ABC was an improvement on RabbitCurveABC but only a tiny improvement on LinearAB:
Algorithm K Total Error Error +/-
RabbitCurveABC (Optimized Coefficients) 2.82 1891.21 0.646
LinearAB (Optimized Coefficients) 1.77 1889.10 0.015
RabbitCurve2ABC (Optimized Coefficients) 1.80 1889.06 0.004
Moreover, it essentially made itself into LinearAB:
A = 1.2427 +/- .0096
B = 1.3185 +/- .0010
C = 1.0205 +/- .0092
Title: Re: Empirically derived material evaluators Part I
Post by 99of9 on Nov 21^st, 2006, 4:17am
on 11/20/06 at 23:39:40, IdahoEv wrote:
The changes I see - AR/BR is now a factor, but E is not. AN/BN now varies much more widely, as does S.
I'll let 99 provide the interpretation for why this might be and/or what these numbers mean, but this does demonstrate that the empirical approach is noticeably dependent on *which* historical games
you are studying.
Great, I like these results much more. Even the E value 0.06 is not unfeasible. Thanks for all this work Idaho, it certainly gives a different perspective on material eval.
I think the fact that DAPE can fit your data better than the other methods is mostly due to the fact that it has extra fitting parameters, and only partly due to the functional form.
This is borne out in the fact that the parameters are so sensitive to the training set, and are still so different from my intuitive values. (Not that my intuitive values are necessarily good... but
the optimized ones still don't look quite like the Arimaa I play.)
... but, maybe familiarity will convince me. I'd appreciate if Janzert could put Idaho's 3rd set (lowest error) into his calculator side by side with my coefficients. Interestingly, of the 5 you
got... I agree with your error function that the 3rd one best (because it has the lowest BR).
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 21^st, 2006, 8:15am
on 11/21/06 at 04:17:30, 99of9 wrote:
... but, maybe familiarity will convince me.
And I'll re-run them periodically as more games accumulate in the DB, and we can experiment with different constraints on the games considered. As it is, the most recent results are using a game DB
that is two weeks old. I have not updated my db since I ran the first experiment, because I wanted the number of states to remain constant so the error functions would be comparable.
I'd appreciate if Janzert could put Idaho's 3rd set (lowest error) into his calculator side by side with my coefficients.
Incidentally, by using the K values listed and the sigmoid formula above, Janzert could add a probability output to all three, so it would show for example "FAME thinks gold is ahead by 1.53 (74% win
For unaltered FAME and DAPE algorithms, use the K value for "FAME (default coefficients)" and "DAPE (default coefficients)" in the first post in this thread. In those cases K was the only variable
that was empirically fit, so it essentially functions as a measure of the scale of FAME/DAPE's output.
Title: Re: Empirically derived material evaluators Part I
Post by Janzert on Nov 21^st, 2006, 9:16pm Ok, I added DAPE with the constants 99of9 mentioned to the calculator (http://arimaa.janzert.com/fame.html).
One thing that struck me is how close it is in evaluating rabbits to FAME.
Title: Re: Empirically derived material evaluators Part I
Post by 99of9 on Nov 21^st, 2006, 9:33pm
on 11/21/06 at 21:16:36, Janzert wrote:
Ok, I added DAPE with the constants 99of9 mentioned to the calculator (http://arimaa.janzert.com/fame.html).
Thanks Janzert!
One thing that struck me is how close it is in evaluating rabbits.
Yeah, it's amazingly good agreement when all 3 methods value one rabbit as 1.000 :-).
Hey Idaho. Since optimized DAPE predicts that one H is worth almost exactly 2R out of the opening. Could you access your actual stats and tell us how often each has won (say for the Rating Diff <100
I'll make a bold prediction that the guy with the horse wins more often.
Title: Re: Empirically derived material evaluators Part I
Post by Janzert on Nov 21^st, 2006, 9:42pm hehe, oops forgot the words "to FAME" the first time. :P
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 22^nd, 2006, 2:17am
on 11/21/06 at 21:42:28, Janzert wrote:
oops forgot the words "to FAME"
I'm gonna live forever
I'm gonna learn how to fly
I feel it coming together
People will see me and cry
I'm gonna make it to heaven
Light up the sky like a flame
I'm gonna live forever
Baby remember my name
Or was that not what you meant...
Title: Re: Empirically derived material evaluators Part I
Post by Fritzlein on Nov 22^nd, 2006, 9:51am
on 11/21/06 at 03:23:26, IdahoEv wrote:
Algorithm K Total Error Error +/-
RabbitCurveABC (Optimized Coefficients) 2.82 1891.21 0.646
LinearAB (Optimized Coefficients) 1.77 1889.10 0.015
RabbitCurve2ABC (Optimized Coefficients) 1.80 1889.06 0.004
Moreover, it essentially made itself into LinearAB:
A = 1.2427 +/- .0096
B = 1.3185 +/- .0010
C = 1.0205 +/- .0092
Ha, ha! Yes, when the rabbit curve was allowed to bend itself either way, it decided to stay essentially straight, but what curvature it does have goes the wrong way! Losing the first rabbit apparent
hurts you more than losing the seventh rabbit...
This result has to make us stop and think: to what extent are the optimizations a commentary on the true value of rabbits, as opposed to a commentary on the dataset from which they are generated? The
sudden change in DAPE coefficients when the data was restricted to a subset raises the same question. It may be as valid to wonder what is biasing the data as to wonder what is biasing our judgment
about piece values.
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 22^nd, 2006, 2:26pm
on 11/22/06 at 09:51:05, Fritzlein wrote:
To what extent are the optimizations a commentary on the true value of rabbits, as opposed to a commentary on the dataset from which they are generated? The sudden change in DAPE coefficients when
the data was restricted to a subset raises the same question. It may be as valid to wonder what is biasing the data as to wonder what is biasing our judgment about piece values.
Indeed. I think the likely issue is the overrepresentation of early losses in the DB. Since there are many many examples of a single rabbit loss in the DB - and since those equate to a 55/45 win/loss
or so, the optimizer has to work very hard to generate a 0.55 output for that case in order to minimize the error.
On the other hand, there are fewer cases where someone has, say, lost five rabbits. And in most of those cases they've probably lost a few officers as well and are thoroughly losing the game. So all
the optimizer needs to do is punish them for having lost a lot of pieces and produce a probability near 1.0 or 0.0 as appropriate and very little error will accumulate for those cases.
This is the inevitable drawback of an empirical approach.
At the same time, one could argue that this is correct as far as a bot's material evaluator is concerned in any case. As with any player, once you're way ahead on material it matters less if you're
correctly evaluating an additional exchange of D vs. RC. But correctly evaluating the early exchanges is far more important.
Put differently, correctly fitting your evaluator to the cases that appear in real games may be more useful in the practical sense of winning real games than an abstract concept of "true piece
value". Especially since material evaluation alone is somewhat specious in Arimaa and is always subject to position, it may be that certain material states are more likely to be relevant than others
simply because positional constraints prevent the other from actually appearing. For example, how important is it for a material eval to "correctly" evaluate M vs. RRRR .... if that exchange never
actually occurs in practice? What if, in fact, it cannot occur because reasonable play simply will not create the positional circumstances for it? Then of course an empirically-derived evaluator will
not necessarily evaluate that state correctly ... but maybe that doesn't actually matter.
Evaluating the increased value of the seventh rabbit lost may be a moot point if the evaluator can already tell you've lost with 99% confidence three pieces earlier. Or, if losing seven rabbits has
only happened 10 times in a training set of 50,000 states. If so, we wouldn't expect it to put much effort into fitting that state well, and maybe the fact that it doesn't care about that state is
telling us that we shouldn't worry about it, either, because it's not a relevant distinction to make in terms of winning games.
I have some thoughts about the difference beween the two different DAPE optimizations that I'll share when I have a moment.
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 22^nd, 2006, 2:42pm Sudden thought: now that the optimizer is scoring win probability rather than just "who is winning", we can't consider the cat loss at the beginning of a
bait-and-tackle game as a simple "incorrect state" that no evaluator will get correct anymore, as we did in the analysis of Part I.
Instead, the optimizer will be desperately trying to reduce the probability error of the hundreds of examples of that technique, and that will definitely throw a wrench into the works.
I bet you that ABS(wrating-brating)<100 eliminates the vast majority of bait-and-tackle cases from the dataset.
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 27^th, 2006, 10:14pm Fritzl: to answer your question from chat last week, Optimized LinearAB prefers RH to M, in fact by quite a lot. Opt. Linear AB values (as optimized above,
anyway) initial pieces as:
R: 1.0
C: 1.24
D: 1.54
H: 1.91
M: 2.38
LinearAB is strictly additive, so HR = 2.91. More than half a rabbit better than M.
Title: Re: Empirically derived material evaluators Part I
Post by 99of9 on Nov 27^th, 2006, 10:35pm
on 11/21/06 at 21:33:12, 99of9 wrote:
Hey Idaho. Since optimized DAPE predicts that one H is worth almost exactly 2R out of the opening. Could you access your actual stats and tell us how often each has won (say for the Rating Diff <100
I'll make a bold prediction that the guy with the horse wins more often.
When you get a chance you could try this for M vs HR as well... but more tellingly you should try M vs DR, because Opt. linearAB even favours the DR there, which is totally whacky.
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 28^th, 2006, 12:36am The problem, 99, is that the data of exactly those specific states is pretty sparse. We can estimate the value of RR by how often it wins games, but the
specific state of H traded for RR is pretty rare. But here's what we've got.
Using the same data set I used for training (both ratings > 1600, id>5000) (only including HvH and BvB, not HvB)
112228-112226 (gold ahead two rabbits)
52 occurrences, gold wins 45 of them (86.5%)
112226-112228 (silver ahead two rabbits
49 occurrences, silver wins 37 of them (75.5%)
Combined: RR advantage wins 82/101 or 81.2%
112228-111228 (gold ahead one horse)
78 occurrences, gold wins 54 of them (69.2%)
111228-112228 (silver ahead one horse)
82 occurrences, silver wins 67 of them (81%)
Combined: H advantage wins 121/160 or 75.6%
112226-111228 (gold down RR, silver down H)
6 occurrences, gold wins 2, silver wins 4 (67% silver win)
111228-112226 (gold down H, silver down RR)
6 occurrences, gold wins 5 (83%), silver wins 1 (16%)
Repeating the analysis, additionally constraining ABS(wrating-brating)<100 (only including HvH and BvB, not HvB):
112228-112226 (gold ahead two rabbits)
16 occurrences, gold wins 14 of them (87.5%)
112226-112228 (silver ahead two rabbits
20 occurrences, silver wins 15 of them (75.0%)
Combined: RR advantage wins 29/36 or 80.6%
112228-111228 (gold ahead one horse)
31 occurrences, gold wins 21 of them (67.7%)
111228-112228 (silver ahead one horse)
27 occurrences, silver wins 19 of them (70.4%)
Combined: H advantage wins 40/58 or 69.0%
112226-111228 (gold down RR, silver down H)
1 occurrences, gold wins 0, silver wins 1
111228-112226 (gold down H, silver down RR)
3 occurrences, gold wins 2 (67%), silver wins 1 (33%)
Title: Re: Empirically derived material evaluators Part I
Post by Fritzlein on Nov 28^th, 2006, 8:23am
on 11/28/06 at 00:36:38, IdahoEv wrote:
112226-111228 (gold down RR, silver down H)
6 occurrences, gold wins 2, silver wins 4 (67% silver win)
111228-112226 (gold down H, silver down RR)
6 occurrences, gold wins 5 (83%), silver wins 1 (16%)
Based on the spreadsheet you sent me earlier, I rather expected this. If RR weren't beating H most of the time (plus other similar stuff) then LinearAB wouldn't have settled on the coefficients it
Can you post the game id's of those twelve games? It might be instructive to see exactly how the extra rabbits prove advantageous.
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 28^th, 2006, 12:57pm
on 11/27/06 at 22:35:28, 99of9 wrote:
When you get a chance you could try this for M vs HR as well... but more tellingly you should try M vs DR, because Opt. linearAB even favours the DR there, which is totally whacky.
Well, as it turns out that is in fact what the data say when considering all games with a rating over 1600 (i.e. the dataset I trained LinearAB on).
Using the larger dataset (wrating > 1600 AND brating > 1600 AND wtype=btype)
Camel advantage wins: 38/44 (86.4%)
HR advantage wins: 61/65 (91.0%)
DR advantage wins: 62/70 (88.6%)
Camel traded for HR: player with the camel wins 12/27 (44%)
Camel traded for DR: player with the camel wins 1/3 (33%)
State N gold wins silver wins
112228-102228 20 16(80%) 4(20%)
102228-112228 24 2(8.3%) 22(91.7%)
112228-111227 30 27(90%) 3(10%)
111227-112228 31 2(6.5%) 29(93.55)
112228-112127 29 24(82.8%) 5(17.24%)
112127-112228 41 3(7.32% 38(92.68%)
111227-102228 14 3(21.4%) 11(78.6%)
102228-111227 13 4(30.8%) 9(69.2%)
112127-102228 no occurrences
102228-112127 3 2 1
Using the constrained dataset (wrating > 1600 AND brating > 1600 AND ABS(wrating-brating)<100 AND wtype=btype):
Camel advantage wins: 13/15 (87%)
HR advantage wins: 22/26 (85%)
DR advantage wins: 18/23 (78%)
Camel vs. HR: player with the camel wins 3/12 (25%)
Camel vs. DR: player with the camel wins 1/1 (100%)
State N gold wins silver wins
112228-102228 6 5 1
102228-112228 9 1 8
112228-111227 15 12 3
111227-112228 11 1 10
112228-112127 9 7 2
112127-112228 14 3 11
111227-102228 9 2 7
102228-111227 3 2 1
112127-102228 no occurrences
102228-112127 1 0 1
Using the larger data set, capturing DR and HR both seem to be better than capturing M. This does reverse when using the constrained data set, but in that case the sample sizes are sufficiently small
that I don't have great confidence in the results.
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 28^th, 2006, 1:39pm
on 11/28/06 at 08:23:56, Fritzlein wrote:
Can you post the game id's of those twelve games? It might be instructive to see exactly how the extra rabbits prove advantageous.
Here you go; the matching games for H vs RR, M vs. HR, and M vs. DR. "Turn" is the turn at which my system records the material state, which is generally 1 full turn (2 ply) after the capture that
created the state. (This is how I avoid including mid-exchange states.)
State game turn winner
H for RR:
112226-111228 7756 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=7756) 34w1 Black
112226-111228 15167 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=15167) 12b1 White
112226-111228 16004 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=16004) 27b1 Black
112226-111228 32195 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=32195) 37b1 Black
112226-111228 35376 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=35376) 19b1 Black
112226-111228 36369 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=36369) 17w1 White
111228-112226 11449 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=11449) 22b1 White
111228-112226 31668 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=31668) 26w1 White
111228-112226 33165 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=33165) 29w1 White
111228-112226 34455 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=34455) 18w1 White
111228-112226 35510 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=35510) 31b1 White
111228-112226 38603 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=38603) 18w1 Black
M for HR:
111227-102228 10480 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=10480) 16w1 White
111227-102228 11235 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=11235) 29b1 Black
111227-102228 11632 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=11632) 14b1 Black
111227-102228 13030 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=13030) 23b1 Black
111227-102228 15862 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=15862) 16b1 Black
111227-102228 21919 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=21919) 43w1 Black
111227-102228 23272 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=23272) 14b1 Black
111227-102228 23287 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=23287) 52b1 Black
111227-102228 23525 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=23525) 17b1 Black
111227-102228 24623 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=24623) 21b1 White
111227-102228 27204 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=27204) 25b1 Black
111227-102228 29619 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=29619) 43b1 Black
111227-102228 31362 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=31362) 21b1 White
111227-102228 36325 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=36325) 30b1 Black
102228-111227 9235 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=9235) 18b1 White
102228-111227 15294 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=15294) 18w1 Black
102228-111227 17649 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=17649) 37w1 White
102228-111227 18824 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=18824) 23b1 Black
102228-111227 18871 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=18871) 21w1 White
102228-111227 24682 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=24682) 25w1 Black
102228-111227 28137 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=28137) 27w1 Black
102228-111227 29508 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=29508) 24w1 Black
102228-111227 32663 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=32663) 20b1 Black
102228-111227 33262 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=33262) 26b1 Black
102228-111227 33934 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=33934) 16w1 White
102228-111227 39958 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=39958) 25w1 Black
102228-111227 40846 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=40846) 20b1 Black
M for DR:
102228-112127 19073 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=19073) 32w1 White
102228-112127 27445 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=27445) 23w1 Black
102228-112127 38494 (http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=38494) 27w1 White
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 28^th, 2006, 4:02pm Argh. I just realized my last three posts seached the database with an additional constraint ... wtype=btype. So those numbers only include HvH games and
BvB games; no HvB games.
I can re-run them if y'all care.
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Nov 28^th, 2006, 5:20pm Okay, the same results (summarized only - these posts are too long!), but now including HvB games as well as HvH and BvB.
Interestingly, the results more closely match 99of9's intuition when HvB games are included. Does this this mean that our feel for the game is based on thousands of games vs. bots, and isn't quite
accurate when we are playing against a human opponent?
HvB included, ratings for both players > 1600. Positions in order of greatest to least statistical advantage.
HR advantage wins 183/211, 86%
DR advantage wins 327/383, 85%
M advantage wins 132/159, 83%
RR advantage wins 446/568, 79%
H advantage wins 459/621, 74%
Trade M for HR, player with camel wins 7/17, 41%
Trade M for DR, player with camel wins 54/95, 57%
Trade H for RR, player with horse wins 19/44, 43%
HvB included, both ratings >100 and abs(wrating-brating)<100
M advantage wins 64/72, 89%
HR advantage wins 59/71, 83%
DR advantage wins 94/117, 80%
RR advantage wins 167/214, 78%
H advantage wins 165/250, 66%
Trade M for HR, player with camel wins 18/33, 54.5%
Trade M for DR, player with camel wins 4/7, 57%
Trade H for RR, player with horse wins 10/18, 56%
So when HvB games are included AND we constrain to players with similar ratings, an M advantage becomes the strongest advantage we have. But in all other combinations considered, HR advantage beats
M, and even DR beats M for most combinations of constraints.
Also, in every combination yet considered, RR advantage leads to win more often than H advantage. (Actual trades of H for RR contraindicate, but the number of those is low)
Title: Re: Empirically derived material evaluators Part I
Post by aaaa on May 11^th, 2007, 6:22pm
on 11/16/06 at 16:43:51, IdahoEv wrote:
Count Pieces
Just what you'd expect; the score is +1.0 point for each piece gold has, -1.0 point for each piece silver has. This is here to give you a sense of the overall performance of the algorithms relative
to a baseline. Most of the time, a player with more pieces is ahead. The performance improvement of the algorithms below is a measure of their ability to correctly evaluate the relatively few cases
where a player has a numerical deficit but a functional advantage.
It's kind of a shame that you didn't use the opportunity to also see how the various algorithms would stack up against the significantly less (but still quite) naive method of simply assigning point
values to the different (collapsed) piece types à la chess, which would be similarly optimized. These numbers would also make for nice base values for bots to use.
I'm not surprised in the least that an optimized DAPE performs the best here; the seven parameters just scream "overfitting" to me. If one were to wish to discover which general method (that is,
ignoring the exact values used as parameters) would be best in capturing the intricacies of Arimaa, then cross-validation would be in order here.
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Jun 11^th, 2007, 1:20pm
on 05/11/07 at 18:22:01, aaaa wrote:
It's kind of a shame that you didn't use the opportunity to also see how the various algorithms would stack up against the significantly less (but still quite) naive method of simply assigning point
values to the different (collapsed) piece types à la chess, which would be similarly optimized.
That's not a bad idea, and if and when I re-run this analysis, I'll include something like that. The only trouble is that it would have to use collapsed piece types to be useful at all, and it's not
at all clear how to assign the values to a collapsed piece list because the number of possible pieces changes.
Do we fix the elephant value and count down in levels, or fix the cat and count up? And do we float the rabbit value as well, or leave it fixed relative to the elephant or cat? We might have to try
it a couple of different ways.
I'm not surprised in the least that an optimized DAPE performs the best here; the seven parameters just scream "overfitting" to me.
See my post in the fairy pieces thread for discussion on this. As an additional comment, though, most of the variability in the oDAPE coefficients comes because what most of mattered in some cases
was the ratio between two coefficients rather than the coefficients themselves. So, for example, AR and BR vary widely, but only because they vary inversely. The ratio AR/BR was fairly consistent
over multiple runs.
Title: Re: Empirically derived material evaluators Part I
Post by aaaa on Jun 12^th, 2007, 2:05pm A case can be made both for collapsing the pieces downwards as well as upwards. On one hand, collapsing downwards makes sense from the viewpoint that as the
board empties out, pieces and especially rabbits become more valuable simply by existing (i.e. that quantity becomes more important than quality), in which case you would want to minimize the
discrepancy in value between a rabbit and the next weakest piece (by normalizing the latter as a cat). On the other hand, by collapsing upwards you won't have the strategically indispensable elephant
inflating the base value of lower pieces after piece levels get eliminated.
This suggests adding three more systems for testing: collapsing downwards, collapsing upwards and a naive system with no collapsing at all in order to see whether it actually matters that much. I'm
guessing it doesn't really.
Title: Re: Empirically derived material evaluators Part I
Post by 99of9 on Apr 22^nd, 2008, 2:15am
on 11/16/06 at 21:56:50, Fritzlein wrote:
Which of the systems is the IdahoEv System? My vote would be for the optimized linearAB to take a place next to the "official" FAME and DAPE scores on Janzert's material calculator. Or maybe you
aren't done yet?
I'd recommend against putting your name to this method just yet :-). Reasons below.
on 11/16/06 at 16:43:51, IdahoEv wrote:
The optimized values:
A=1.241 +/- 0.004
B=1.316 +/- 0.002
I'm continually surprised at how well this evaluator has performed. I originally implemented it as a toss-off algorithm just to test my optimizer. In particular, the fact that it simply values all
rabbits at 1.0 points seems incredibly naive to me, as I believe that the 8th rabbit lost should be much more expensive than the 1st rabbit lost.
The empirical optimizer seems to disagree. What can I say.
I just discovered a fairly serious problem with linearAB (and everything based on it). Sometimes it will refuse to kill free enemy pieces!!
Say you have a full army and your opponent has d8r, the collapsed notation is:
004228-000108 linearAB_score = 12.71
Now you have the opportunity to kill the dog, leaving the board as:
000088-000008 linearAB_score = 9.93
Because it reduces your score (!) you will not make the capture. Or if the sides were reversed, this could result in deliberate suicides.
The problem is related to the fact that levels collapse down, but the value of each level remains the same. Perhaps collapsing up would be better, but this might cause other problems.
PS. Under this eval, Arimaabuff's CR_only handicap is actually more valuable than an R_only handicap!!!
Title: Re: Empirically derived material evaluators Part I
Post by aaaa on Apr 22^nd, 2008, 8:49am Best then to just forgo any collapsing at all with this system and take the lack of ability to recognize equivalent states for granted.
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Apr 22^nd, 2008, 1:03pm
on 04/22/08 at 02:15:22, 99of9 wrote:
I just discovered a fairly serious problem with linearAB (and everything based on it). Sometimes it will refuse to kill free enemy pieces!!
Say you have a full army and your opponent has d8r, the collapsed notation is:
004228-000108 linearAB_score = 12.71
Now you have the opportunity to kill the dog, leaving the board as:
000088-000008 linearAB_score = 9.93
That's definitely a very interesting point.
However, trying it out I am unable to construct any such cases (where capturing reduces your score) when both sides still possess their elephant. And even after a lost elephant, it looks like that
sort of collapse fault only occurs in extreme corner cases where one side has essentially no material left.
The primary goal of a material eval function, for me, is bot development, and I don't think cases like this one occur in competitive games.
Title: Re: Empirically derived material evaluators Part I
Post by 99of9 on Apr 22^nd, 2008, 8:34pm
on 04/22/08 at 13:03:07, IdahoEv wrote:
However, trying it out I am unable to construct any such cases (where capturing reduces your score) when both sides still possess their elephant. And even after a lost elephant, it looks like that
sort of collapse fault only occurs in extreme corner cases where one side has essentially no material left.
When you have a full army and your opponent has ed8r the score is
[013228-010108] score 10.56
when you kill his dog it becomes:
[000178-000108] score 8.69
Now I agree these anti-capture situations are only for quite unbalanced armies (or very strange trades), but the issue of collapsing may cause other more subtle problems even in more normal
Title: Re: Empirically derived material evaluators Part I
Post by 99of9 on Apr 22^nd, 2008, 8:48pm
on 04/22/08 at 20:34:38, 99of9 wrote:
the issue of collapsing may cause other more subtle problems even in more normal situations.
Here's one (which according to linearAB is almost balanced):
EMHH4R vs ed8r (silver ahead by 0.34)
EMHH4R vs e8r (silver ahead by 0.28)
In this case gold may prefer some positional advantage worth more than 0.06 rather than killing the opponent's second strongest piece!!
Title: Re: Empirically derived material evaluators Part I
Post by IdahoEv on Apr 23^rd, 2008, 4:05am Okay! You've convinced me. :-)
I still believe that understanding level collapse at some level is important, but it is clear that simple downward collapse will not suffice.
Title: Re: Empirically derived material evaluators Part I
Post by 99of9 on Apr 23^rd, 2008, 5:39am
on 04/23/08 at 04:05:57, IdahoEv wrote:
I still believe that understanding level collapse at some level is important, but it is clear that simple downward collapse will not suffice.
Yes, I agree with that. DAPE accounts for it automatically because values only depend on the number of stronger and the number of equal pieces. FAME collapses up in a fancy way from memory?
Title: Re: Empirically derived material evaluators Part I
Post by Fritzlein on Apr 23^rd, 2008, 8:09am
on 04/23/08 at 05:39:32, 99of9 wrote:
FAME collapses up in a fancy way from memory?
Yes, FAME collapses the pieces up, and the rabbit value is divided by the amount of enemy material, so it goes up too as pieces are captured. One thing that FAME does generally right (although
probably not accurately) is consider the same absolute material advantage to be worth more as the board empties out. One thing that FAME does generally wrong (and DAPE does right?) is fail to
consider the each piece relative to all opposing pieces instead of only the piece it "lines up against".
Title: Re: Empirically derived material evaluators Part I
Post by 99of9 on Apr 23^rd, 2008, 8:42am
on 04/23/08 at 08:09:04, Fritzlein wrote:
One thing that FAME does generally right (although probably not accurately) is consider the same absolute material advantage to be worth more as the board empties out.
That is only good if the pieces that are emptying are higher in value than the place where the inequality is, or if the imbalance includes an imbalance in the number of pieces. FAME and DAPE do that
right. If the pieces that are emptying are lower than the place where the inequality is, and the inequality is an equal numbered trade (say M for H), I believe the same absolute material advantage
decreases in value, but both DAPE and FAME keep it constant.
One thing that FAME does generally wrong (and DAPE does right?) is fail to consider the each piece relative to all opposing pieces instead of only the piece it "lines up against".
Yes, DAPE does this, but everything depends on the word "consider"... I'm sure there are better and worse ways to consider them.
Arimaa Forum » Powered by YaBB 1 Gold - SP 1.3.1!
YaBB © 2000-2003. All Rights Reserved. | {"url":"http://arimaa.com/arimaa/forum/cgi/YaBB.cgi?board=talk;action=print;num=1163717031","timestamp":"2024-11-15T04:20:52Z","content_type":"text/html","content_length":"77195","record_id":"<urn:uuid:e524cdc6-8dcc-4c87-9883-a9c31ad7a9ae>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00014.warc.gz"} |
Golden Beans
Counting and comparing numbers
Linking numerals and amounts
Children often enjoy counting things they have been given or things they come across.
Adults could provide items for the children to count (such as golden beans) and provide a variety of number cards for them to use in the counting.
The Activity
Leave a pile of golden beans and a range of number cards in a place for children to explore them. Some cards may have numerals on them, some may feature dots, some may have representations of
apparatus found in your setting.
Encouraging mathematical thinking and reasoning:
Tell me about your beans.
(If they've used cards) Tell me about the cards you've used.
Why did you choose that card?
What would we have to do to make sure you have the same number of beans as ...?
Opening Out
Let's look at ...'s beans too. What can you tell me about their beans?
How many do you have altogether? How do you know?
Can you write/draw/put on paper the number that you have? What would you like to take a photo of?
The Mathematical Journey
Counting and cardinality:
• using number words and language about counting e.g. none, zero
• reciting (some) number names in sequence
• cardinality - saying how many there are altogether
• showing on fingers how many there are
• progressing from knowing some number words to saying one number for each object, then knowing the number of the whole group
• selecting a small number of objects from a group when asked
• showing curiosity about numbers by offering comments or asking questions
• relative number size - comparing numbers
Linking symbols and amounts:
• finding numerals to match the number
Development and Variation
The mathematics of this activity could equally well arise from groups of objects collected by the children themselves, rather than through the beans placed in the setting by the practitioner.
You may like to encourage children to create patterns and sequences with their beans, if they do not do this naturally. Provide materials with which to record their patterns, should they wish.
You could put some beans in a box/bag and invite learners to estimate the total number before finding out the exact number for themselves.
The NRICH activity
Maths Story Time
, which focuses a little more on the early ideas associated with division, could follow on from this one.
Beans (painted gold) or other items, perhaps linked to the current theme or a recently-read story.
Cards featuring numbers in the form of dots.
Cards featuring numerals.
Cards featuring numbers in the form of any apparatus used in the setting.
Download a PDF of this resource.
Acknowledgement: Kirsty Lombari at Ludwick Nursery School | {"url":"https://nrich.maths.org/eyfs-activities/golden-beans","timestamp":"2024-11-09T04:22:37Z","content_type":"text/html","content_length":"43089","record_id":"<urn:uuid:a6b21c61-f976-4c08-8bf2-2ab8e9ee1139>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00847.warc.gz"} |
An OptimizationProblem object describes an optimization problem, including variables for the optimization, constraints, the objective function, and whether the objective is to be maximized or
minimized. Solve a complete problem using solve.
Create an OptimizationProblem object by using optimproblem.
The problem-based approach does not support complex values in the following: an objective function, nonlinear equalities, and nonlinear inequalities. If a function calculation has a complex value,
even as an intermediate value, the final result might be incorrect.
Description — Problem label
'' (default) | string | character vector
Problem label, specified as a string or character vector. The software does not use Description. It is an arbitrary label that you can use for any reason. For example, you can share, archive, or
present a model or problem, and store descriptive information about the model or problem in the Description property.
Example: "Describes a traveling salesman problem"
Data Types: char | string
ObjectiveSense — Indication to minimize or maximize
'minimize' (default) | 'min' | 'maximize' | 'max'
Indication to minimize or maximize, specified as 'minimize' or 'maximize'. This property affects how solve works.
You can use the short name 'min' for 'minimize' or 'max' for 'maximize'.
Example: 'maximize'
Data Types: char | string
Variables — Optimization variables in object
structure of OptimizationVariable objects
This property is read-only.
Optimization variables in the object, specified as a structure of OptimizationVariable objects.
Data Types: struct
Objective — Objective function
scalar OptimizationExpression | structure containing scalar OptimizationExpression
Objective function, specified as a scalar OptimizationExpression or as a structure containing a scalar OptimizationExpression. Incorporate an objective function into the problem when you create the
problem, or later by using dot notation.
prob = optimproblem('Objective',5*brownies + 2*cookies)
% or
prob = optimproblem;
prob.Objective = 5*brownies + 2*cookies
Constraints — Optimization constraints
OptimizationConstraint object | OptimizationEquality object | OptimizationInequality object | structure containing OptimizationConstraint, OptimizationEquality, or OptimizationInequality objects
Optimization constraints, specified as an OptimizationConstraint object, an OptimizationEquality object, an OptimizationInequality object, or as a structure containing one of these objects.
Incorporate constraints into the problem when you create the problem, or later by using dot notation:
constrs = struct('TrayArea',10*brownies + 20*cookies <= traysize,...
'TrayWeight',12*brownies + 18*cookies <= maxweight);
prob = optimproblem('Constraints',constrs)
% or
prob.Constraints.TrayArea = 10*brownies + 20*cookies <= traysize
prob.Constraints.TrayWeight = 12*brownies + 18*cookies <= maxweight
Remove a constraint by setting it to [].
prob.Constraints.TrayArea = [];
Object Functions
evaluate Evaluate optimization expression or objectives and constraints in problem
issatisfied Constraint satisfaction of an optimization problem at a set of points
optimoptions Create optimization options
prob2struct Convert optimization problem or equation problem to solver form
show Display information about optimization object
solve Solve optimization problem or equation problem
solvers Determine default and valid solvers for optimization problem or equation problem
varindex Map problem variables to solver-based variable index
write Save optimization object description
Create and Solve Maximization Problem
Create a linear programming problem for maximization. The problem has two positive variables and three linear inequality constraints.
prob = optimproblem('ObjectiveSense','max');
Create positive variables. Include an objective function in the problem.
x = optimvar('x',2,1,'LowerBound',0);
prob.Objective = x(1) + 2*x(2);
Create linear inequality constraints in the problem.
cons1 = x(1) + 5*x(2) <= 100;
cons2 = x(1) + x(2) <= 40;
cons3 = 2*x(1) + x(2)/2 <= 60;
prob.Constraints.cons1 = cons1;
prob.Constraints.cons2 = cons2;
prob.Constraints.cons3 = cons3;
Review the problem.
OptimizationProblem :
Solve for:
maximize :
x(1) + 2*x(2)
subject to cons1:
x(1) + 5*x(2) <= 100
subject to cons2:
x(1) + x(2) <= 40
subject to cons3:
2*x(1) + 0.5*x(2) <= 60
variable bounds:
0 <= x(1)
0 <= x(2)
Solve the problem.
Solving problem using linprog.
Optimal solution found.
Version History
Introduced in R2017b
R2024a: OptimizationProblem supports evaluate and issatisfied
You can now evaluate optimization expressions and constraints using evaluate and issatisfied for OptimizationProblem objects. Evaluate at a set of points in an OptimizationValues object, or at a
single point in a structure.
The value of a constraint depends on the constraint type. An equation is equivalent to an == constraint. For expressions L and R:
Constraint Type Value
L <= R L – R
L >= R R – L
L == R abs(L – R)
For details, see the evaluate and issatisfied reference pages. | {"url":"https://it.mathworks.com/help/optim/ug/optim.problemdef.optimizationproblem.html","timestamp":"2024-11-14T07:20:30Z","content_type":"text/html","content_length":"95454","record_id":"<urn:uuid:00403f95-3732-41b0-b2b4-82c73828327c>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00408.warc.gz"} |
Lower bound confidence interval NaN from time to time
I was wondering about some lower bound NaN values I've come across (or also when, for example the AE model has C set to 0, but the variance component has a lower bound CI NaN value even though all of
them should have a value of 0 (lower bound, estimate, upper bound)). Specifically, I have had NaN values for the A standardized variance component lower bound of the confidence interval from time to
time. Does anyone know why this may be? Is there a work-around on how to adjust it? It's for relatively few estimates, but it does come up.
I am using the latest OpenMx version and it seems to be running fine otherwise in general (on a univariate model, etc.).
I know there is also calculating a CI via bootstrapping, but I am not entirely sure how to do that as of yet (but maybe that would lend itself toward resolving this issue).
Definitely appreciate any advice!
Replied on Tue, 06/15/2021 - 11:59
Have you looked at the `summary(..., verbose=TRUE)` output?
Replied on Tue, 06/15/2021 - 14:15
In reply to verbose output? by AdminJosh
Thanks for the reply! I just included it and it lists the following:
R[write to console]: OpenMx version: 2.19.5 [GIT v2.19.5]
R version: R version 4.1.0 (2021-05-18)
Platform: x86_64-conda-linux-gnu
Default optimizer: SLSQP
NPSOL-enabled?: No
OpenMP-enabled?: Yes
I thought NPSOL was the default optimizer (at some point, though maybe I am mistaken)? Not sure if that may have to do with it. It also gives NaN values for the upper CI bound for the C estimate
(when C is fixed to 0) at times as well.
Replied on Tue, 06/15/2021 - 18:18
In reply to Thanks for the reply! I just by mirusem
That looks fine. I think SLSQP is used for confidence intervals regardless of which optimizer you select. The output I was after is the detailed CI output. Here's what it outputs by default,
confidence intervals:
lbound estimate ubound note
common.A[1,1] 0.5566175 6.173024e-01 0.68400870
common.C[1,1] NA 2.406416e-13 0.05269798 !!!
common.E[1,1] 0.1537491 1.730463e-01 0.19563705
and here's the detailed output with `summary(model, verbose=TRUE)`:
CI details:
parameter side value fit diagnostic statusCode method a c e mean
1 common.A[1,1] lower 0.55661745 4071.507 success OK neale-miller-1997 0.7460680 5.399907e-04 0.4244517 21.39288
2 common.A[1,1] upper 0.68400870 4071.519 success OK neale-miller-1997 0.8270482 4.229033e-06 0.4095962 21.39341
3 common.C[1,1] lower 0.00000000 4067.663 alpha level not reached infeasible non-linear constraint neale-miller-1997 0.7856859 0.000000e+00 0.4159883 21.39293
4 common.C[1,1] upper 0.05269798 4071.549 success infeasible non-linear constraint neale-miller-1997 0.7560895 2.295604e-01 0.4181163 21.39237
5 common.E[1,1] lower 0.15374906 4071.505 success infeasible non-linear constraint neale-miller-1997 0.7968068 2.489554e-08 0.3921085 21.39306
6 common.E[1,1] upper 0.19563705 4071.512 success infeasible non-linear constraint neale-miller-1997 0.7729641 9.786281e-08 0.4423088 21.39289
Replied on Tue, 06/15/2021 - 18:59
In reply to verbose CI output by AdminJosh
Oh I see, yeah that looks very in-depth, I didn't print it out, so now I see it. Thanks a lot for the clarification. Here are two outputs (for a C and A lower bound respectively where it's NaN)
(hopefully the formatting is okay):
1) (for C lower bound NaN)
confidence intervals:
lbound estimate ubound note
MZ.StdVarComp[1,1] 0.07422233 0.3950067 0.6354808
MZ.StdVarComp[2,1] NA 0.0000000 0.0000000 !!!
MZ.StdVarComp[3,1] 0.36451917 0.6049933 0.9257777
CI details:
parameter side value fit diagnostic
1 MZ.StdVarComp[1,1] lower 0.07422233 -37.95743 success
2 MZ.StdVarComp[1,1] upper 0.63548083 -37.95903 success
3 MZ.StdVarComp[2,1] lower 0.00000000 -41.80563 alpha level not reached
4 MZ.StdVarComp[2,1] upper 0.00000000 -37.93677 success
5 MZ.StdVarComp[3,1] lower 0.36451917 -37.95903 success
6 MZ.StdVarComp[3,1] upper 0.92577767 -37.95743 success
statusCode method a e
1 OK neale-miller-1997 0.05779659 0.2041213
2 OK neale-miller-1997 0.18255742 0.1382638
3 infeasible non-linear constraint neale-miller-1997 0.13495746 0.1670206
4 infeasible non-linear constraint neale-miller-1997 0.17279164 0.1425522
5 infeasible non-linear constraint neale-miller-1997 0.18255742 0.1382638
6 infeasible non-linear constraint neale-miller-1997 0.05779659 0.2041213
2) for the A component case:
confidence intervals:
lbound estimate ubound note
MZ.StdVarComp[1,1] NA 4.752728e-19 0.184599 !!!
MZ.StdVarComp[2,1] 0.0000000 0.000000e+00 0.000000 !!!
MZ.StdVarComp[3,1] 0.8153433 1.000000e+00 1.000000
CI details:
parameter side value fit diagnostic
1 MZ.StdVarComp[1,1] lower 2.842761e-41 -252.6202 alpha level not reached
2 MZ.StdVarComp[1,1] upper 1.845990e-01 -248.7728 success
3 MZ.StdVarComp[2,1] lower 0.000000e+00 -248.7756 success
4 MZ.StdVarComp[2,1] upper 0.000000e+00 -248.7447 success
5 MZ.StdVarComp[3,1] lower 8.153433e-01 -248.7712 success
6 MZ.StdVarComp[3,1] upper 1.000000e+00 -248.8087 success
statusCode method a e
1 infeasible non-linear constraint neale-miller-1997 -5.366488e-22 0.10065144
2 infeasible non-linear constraint neale-miller-1997 -4.407308e-02 0.09262844
3 infeasible non-linear constraint neale-miller-1997 -2.513257e-02 0.10927582
4 infeasible non-linear constraint neale-miller-1997 -2.463694e-02 0.10243448
5 infeasible non-linear constraint neale-miller-1997 -4.407965e-02 0.09262449
6 infeasible non-linear constraint neale-miller-1997 -3.161669e-10 0.09930529
Replied on Tue, 06/15/2021 - 19:37
In reply to Oh I see, yeah that looks by mirusem
> 1) (for C lower bound NaN)
If the upper bound is zero then you can probably just regard the lower bound as zero. The algorithm is very particular and wants to find the correct amount of misfit, but the model is already backed
up into a corner and the optimizer gets stuck.
> 2) for the A component case:
This is similar to the first case. Here, the optimizer got closer the target fit of -248.77, but didn't quite make it because the parameters got cornered again. 2.842761e-41 can be regarded as zero.
It looks like you're using the ACE model that does not allow variance components to go negative. This model is miscalibrated and will result in biased intervals. For better inference, you should use
the model that allows the variance components to go negative. You can truncate the intervals at the [0,1] interpretable region for reporting.
Replied on Wed, 06/16/2021 - 20:17
In reply to CI interpretation by AdminJosh
Got it. That makes a lot of sense. I guess I can go with that in this case. I really appreciate the clarification and suggestion. Is that normal practice to go ahead and just turn NaN values to zero
(in that circumstance), etc.? Otherwise, I can just list that as being what was done in this circumstance (unless there are other workarounds).
Do you have any references on setting it so the variance components are allowed to go negative? I am not too familiar with this, and have seen some posts, but am not entirely sure where would be the
best place to look.
Replied on Thu, 06/17/2021 - 11:59
In reply to Got it. That makes a lot of by mirusem
Got it. That makes a lot of sense. I guess I can go with that in this case. I really appreciate the clarification and suggestion. Is that normal practice to go ahead and just turn NaN values to
zero (in that circumstance), etc.? Otherwise, I can just list that as being what was done in this circumstance (unless there are other workarounds).
If a model fixes the shared-environmental component to zero, then under that model, the lower confidence limit for the shared-environmental component is trivially zero (as is the upper confidence
Do you have any references on setting it so the variance components are allowed to go negative? I am not too familiar with this, and have seen some posts, but am not entirely sure where would be
the best place to look.
See here.
Replied on Mon, 06/21/2021 - 13:24
In reply to Got it. That makes a lot of by mirusem
There's a paper in Behavior Genetics about estimating unbounded A C and E variance components, instead of the usual implicitly-bounded path coefficient specification, which constrains variance
components to be non-negative. It's here: https://pubmed.ncbi.nlm.nih.gov/30569348/
Please let us know what you think, and if there are any remaining questions we could answer that would help you further.
Replied on Fri, 06/25/2021 - 05:33
In reply to Got it. That makes a lot of by mirusem
Thank you both (and I apologize for the delay in response). That definitely makes sense. I still need to read the paper more carefully, but I understand the need / benefits for it. Ideally I would
like to use it given its benefits, so I am trying to figure out what best way to incorporate it may be. I usually have been using path based implementations of the univariate ACE, and from what I've
seen the direct variance approach can either be done in umx or there is a script which is more non-path based (which would require more familiarity, though both do).
As for umx (I am new to this), would it be sufficient to use the umxACEv function, along with the umxConfint function (to get the standardized estimates + standardized CIs)? Or is there some other
pointer to look at? That's what I've gathered from one of the tutorials and also the documentation of it.
And if I am to go with this other (direct variance approach) approach, in order to limit the range of the bounds, would I just do that after obtaining the estimates and associated CIs?
And, lastly, it is okay to just approximate the bounds which are very close to 0 but the optimizer fails (like was said above, outside of the triviality of it already being constrained to 0)? So for
example the A estimate?
I really appreciate all the great amount of help you all have given!
Replied on Tue, 07/06/2021 - 09:16
In reply to Thank you both (and I by mirusem
So, I figured out most of these answers. One question that is coming up, is I am using umx (just for convenience at this point), and I run into an issue as in this thread: https://openmx.ssri.psu.edu
/node/4593 for at least one of the estimates. Is it possible to adjust either using umxModify or umxACE the lower bound for the C parameter as in that post? Or is that strictly limited to doing it
directly without umx (because it cannot be adjusted with umx as that's not a feature)? I know there is a umx_scale() function, which maybe can help (to a degree with this)--on that note, though, does
that not affect the ultimate result and bias it in ways?
I appreciate it. If not, I hope there is a way for umx to skip the estimate instead of crashing, but I am not sure that is feasible, also.
Replied on Tue, 07/06/2021 - 11:52
In reply to So, I figured out most of by mirusem
So it looks like this crashing issue ends up being an issue a lot when it comes to submodels (AE/CE). For now, since I am not sure how that would be addressed consistently using the direct variance
approach (without biasing it in some sense), I will just stick with the older approach that is biased since it doesn't throw any errors. But I would definitely be curious if there is a consistent /
fair work-around for the above which still is using the direct variance approach based on any of your advice, since that would ultimately be unbiased relative the older approach which would be ideal.
If anyone has any advice, I appreciate it as always!
Replied on Tue, 07/06/2021 - 12:02
In reply to So, I figured out most of by mirusem
In the thread you reference, the problem is starting values. If you use a large enough error variance then you should be able to get the model to optimize. In contrast to the referenced thread, I
would not add lower or upper bounds since the whole point of the variance parameterization is to not have these bounds. When you report the confidence interval, you can apply the bounds. For example,
if the A variance is estimated as -.2 to .2 then you can report the interval [0,.2].
Replied on Tue, 07/06/2021 - 12:34
In reply to crashing? by jpritikin
Hi jpritikin, thank you for the response! That is helpful (as to how to apply the bounds after the fact).
Hmm, if I am using umx, I am not sure of how to get large enough error variance. By error, do you mean the environmental variance in this case (or just in general the variance that needs to be large
for the MZ/DZ twins)? Or would this be something to explore by multiplying the actual values of the data by 100 or so (and would that be okay to do with no other alterations, or would that cause
issues elsewhere if it's not renormalized, so to speak)?
I am not sure if the freeToStart, or value variables of umxModify are relevant in this instance (with the freeToStart parameter, for example, or tryHard which didn't seem to work too well). Also
maybe xmuValues or xmu_starts would be relevant in this case?
For reference, here are two errors I am explicitly getting:
Error: The job for model 'AE' exited abnormally with the error message: fit is not finite (The continuous part of the model implied covariance (loc2) is not positive definite in data 'MZ.data' row
20. Detail:
covariance = matrix(c( # 2x2
0.0132678091792278, -0.0151264409931152
, -0.0151264409931152, 0.0132678091792278), byrow=TRUE, nrow=2, ncol=2)
Error: The job for model 'CE' exited abnormally with the error message: fit is not finite (The continuous part of the model implied covariance (loc2) is not positive definite in data 'DZ.data' row
53. Detail:
covariance = matrix(c( # 2x2
0.142828309693162, 0.18385055831094
, 0.18385055831094, 0.142828309693162), byrow=TRUE, nrow=2, ncol=2)
Replied on Tue, 07/06/2021 - 13:11
In reply to Hi jpritikin, thank you for by mirusem
I just used the umxModify function to set the starting value for E (just as you would for any fixed parameter, before running, and it seems to work for CE (I will try also for AE). If that sounds
right to you, let me know. Otherwise, for now it sounds about right to me. Thanks so much!!
Replied on Tue, 07/06/2021 - 13:30
In reply to I just used the umxModify by mirusem
Yeah, that sounds like the correct approach. Based on what you wrote, I'm not sure that you realize that umx models are OpenMx models. If you do `umxACEv(..., autoRun=FALSE)` then you get the OpenMx
model which you can adjust before running.
Replied on Tue, 07/06/2021 - 13:38
In reply to starting values by jpritikin
That's actually exactly what I did (so I had two lines one with FALSE and the other with not) :) but I am sure my understanding could always be better (umx is still quite new to me).
One question that came to mind is, does it matter if one identifies the regex parameter for any given component of interest as A_r.c. vs A_r1c1? I saw this post https://openmx.ssri.psu.edu/node/4229
where it's used to drop the entire free parameter set, but I am not sure if it makes any technical difference.
Replied on Tue, 07/06/2021 - 14:14
Replied on Tue, 07/06/2021 - 19:00
In reply to regex by jpritikin
I see. I was actually wondering if there is any difference between the notation for removing all of C (C_r.c.) versus just one part of the matrix (C_r1c1), but I looked into it and I guess that since
it is symmetric it is okay (is my guess)?
Also, in the path version of OpenMx the starting values I did by taking, say V = sqrt(phenotypic variance across all twin pairs)/3. Should this equivalently be set for all the parameters A, C and E
in umx/is it possible/needed, with the direct variance approach? Right now I have only directly set the E parameter as needed for the subnested models for AE/CE to 0.5 in umxmodify, but am wondering
if there is a more systematic approach (i.e. should the 0.5 just be replaced with the V listed above)? Since I know the start value can affect the ultimate CI bounds at the very least.
And, can this be set prior to running ACEv? I am not sure if the xmu_start_value_list() function is relevant.
Finally, is there a way to suppress warnings, or the attempt to print to browser for umx?
Replied on Tue, 07/06/2021 - 19:14
In reply to I see. I was actually by mirusem
My mistake, from here it looks like since it is concerned directly with variance the square root shouldn't be taken, right? https://github.com/tbates/umx/issues/38
But does this mean that it's already included directly in umx internally? If not, is there a rule of thumb then for making E large or small (when it comes to doing the subnested models), or any
starting values in general when doing ACEv?
And the warning suppression + browser suppression may still be helpful (since it prints out browser related information even if I have options(Browser = 'NULL').
Replied on Tue, 07/06/2021 - 19:15
In reply to I see. I was actually by mirusem
Regarding starting values, the critical thing is to start with a larger enough error/environmental variance. If your error variance is too small then residuals will get ridiculously large Z scores
which can cause optimization failure.
Replied on Tue, 07/06/2021 - 19:22
In reply to starting values by jpritikin
Got it, I see. So just arbitrarily set it high I am guessing (when it comes to say, umxModify since that's where the issue comes up). The only other question I have related to this is can this be
done prior to running umxACEv/is it needed, or should it only be done using umxModify() when it comes to the submodels (using regex, etc. for example)?
Thanks so much for the quick responses!
And I will try and submit a bug within the week for sure regarding the output issue.
Replied on Tue, 07/06/2021 - 19:17
In reply to I see. I was actually by mirusem
If you can't figure out how to suppress umx output then you might want to file a bug in the [umx](https://github.com/tbates/umx) github.
Replied on Tue, 07/06/2021 - 19:44
In reply to I see. I was actually by mirusem
> Is there any difference in `umxModify` between regex = "C_r.c." and update="C_r1c1"
As the matrix is symmetric these are equivalent. It's always easy to check what you've done with
`m1$top$C`, or `parameters(m1)` , or, for path-based models, try `tmx_show(m1)` - it shows all the matrix properties in nice browser tables with roll-overs for properties.
> Do I need to set start values in umx models?
No - umx takes care this for you. But if you want to, you can set them directly. They are just parameters, so just set them: for instance if you wondered about sensitivity to the start value for C,
just set the C values quite high to start, e.g. see what the parameters are with `parameters(m1)`, and set with, e.g. `umxSetParameters(m1, "C_r1c1", values=1)`
> Finally, is there a way to suppress warnings, or the attempt to print to browser for umx?
Replied on Tue, 07/06/2021 - 20:44
In reply to umxModify, umx_set_silent, umx_set_auto_plot, umxSetParameters by tbates
Thanks so much for the response!
I just figured out about the auto_plot before I saw this, that is exactly what one of them was (and I had umx_set_silent prior too). The only thing that is left at the moment seems to be as a result
of the xmu functions (as I found online to match what I am getting). One of them isn't a warning, but the other is. I am wondering if it might be possible to disable these kinds of messages since I
am looking into quite a few phenotypes.
The specific popups are from: xmu_show_fit_or_comparison which automatically outputs the log likelihood estimate (this isn't as major but everything adds into computation in terms of print out), and
more apparently from
Polite note: Variance of variable(s) '' and '' is < 0.1.
You might want to express the variable in smaller units, e.g. multiply to use cm instead of metres.
Alternatively umx_scale() for data already in long-format, or umx_scale_wide_twin_data for wide data might be useful.
Given that the phenotypes I am working with are already in their native form, I see this note a lot, and am not sure if it could be suppressed.
And thanks a lot for those clarifications--that all makes plenty of sense.
Replied on Tue, 07/06/2021 - 14:29
In reply to crashing? by jpritikin
In contrast to the referenced thread, I would not add lower or upper bounds since the whole point of the variance parameterization is to not have these bounds.
I partially disagree, in that it's not a bad idea to use a bound to ensure that the _E_ variance is strictly positive.
For example, if the A variance is estimated as -.2 to .2 then you can report the interval [0,.2].
Won't that cause CIs to have smaller coverage probability than they're supposed to?
Replied on Tue, 07/06/2021 - 15:57
In reply to I partially disagree by AdminRobK
> > For example, if the A variance is estimated as -.2 to .2 then you can report the interval [0,.2].
> Won't that cause CIs to have smaller coverage probability than they're supposed to?
No because variance proportions are proportions. The true values are always between 0 and 1. Or you could regard values outside of 0 and 1 as rejections of the model. For example, if DZ twins are
more correlated than MZ twins then there is something else going on besides genetic effects. Hence, it is inappropriate to use the classical ACE model to analyze such data.
Replied on Tue, 07/20/2021 - 03:12
Is it normal to get NaN values (instead of 1) for the upper/lower bound of the E model? This comes up frequently, with Code 3. I am not sure if this is the same case as mentioned by AdminRobK where C
is trivially 0, etc. I am guessing this is the case, though and can be safely regarded as 1 (upper or lower) with corresponding log likelihood that is produced?
Replied on Tue, 07/20/2021 - 03:29
In reply to Lower/Upper bound for E with direct variance approach by mirusem
As an example:
lbound estimate ubound lbound Code ubound Code
top.A_std[1,1] NA 0 0 NA 3
top.C_std[1,1] NA 0 0 NA 3
top.E_std[1,1] 1 1 NA 3 NA
Replied on Tue, 07/20/2021 - 04:11
And, one last question hopefully (outside of the E bound question):
I get very few code 3 NaNs for the lower bound of the E estimate in any model in general (AE, etc.) of the ones I select for. Sometimes this is fixable by a change in the E starting value (a bit
higher than what I had already set and not too high in certain cases, though this doesn't always work), and sometimes it is fixable by changing the seed. Are these alterations okay to do in this
circumstance, even though it's not necessarily consistent with the rest of what I would be using for the rest of the phenotypes? I definitely appreciate it!
Replied on Tue, 07/20/2021 - 05:01
In reply to Phenotypes where E lower bound gets code 3 in alternative models by mirusem
I also had type 6 error codes in a few cases (absent upper bound of an A estimate, for example), and along with the type 3 error codes, I could fix these by switching the optimizer to CSOLNP. I
guess, again, going to the other question, would this be acceptable even if the vast majority of estimates involve SLSQP as the primary optimizer (it's a bit inconsistent across phenotypes)?
The packages are really nice :)
Replied on Tue, 07/20/2021 - 12:34
Is it normal to get NaN values (instead of 1) for the upper/lower bound of the E model? This comes up frequently, with Code 3. I am not sure if this is the same case as mentioned by AdminRobK
where C is trivially 0, etc. I am guessing this is the case, though and can be safely regarded as 1 (upper or lower) with corresponding log likelihood that is produced?
Yes, in an _E_-only model, the upper and lower limits of the confidence interval for the standardized _E_ variance component are trivially 1 (because the standardized _E_ component is fixed to 1
under that model).
And, one last question hopefully (outside of the E bound question):
I get very few code 3 NaNs for the lower bound of the E estimate in any model in general (AE, etc.) of the ones I select for. Sometimes this is fixable by a change in the E starting value (a bit
higher than what I had already set and not too high in certain cases, though this doesn't always work), and sometimes it is fixable by changing the seed. Are these alterations okay to do in this
circumstance, even though it's not necessarily consistent with the rest of what I would be using for the rest of the phenotypes? I definitely appreciate it!
I also had type 6 error codes in a few cases (absent upper bound of an A estimate, for example), and along with the type 3 error codes, I could fix these by switching the optimizer to CSOLNP. I
guess, again, going to the other question, would this be acceptable even if the vast majority of estimates involve SLSQP as the primary optimizer (it's a bit inconsistent across phenotypes)?
If you're getting different results for your CIs by changing the start values, the RNG seed, and/or the optimizer, then I'm concerned that you're also getting a different solution in the primary
optimization (i.e., to find the MLE). The fact that changing the start values apparently affects your CI results is especially concerning, since every confidence-limit search begins at the MLE and
not at the initial start values. Have you checked whether or not you're getting substantially equivalent point estimates, standard errors, and -2logL each time you try? You might want to first run
your MxModel with `intervals=FALSE` to get a good initial solution, and then use omxRunCI() to subsequently get confidence intervals.
Replied on Tue, 07/20/2021 - 12:58
In reply to confidence intervals by AdminRobK
Thanks a lot for the reply. Definitely helpful.
From what I can tell, it looks like the point estimates are stable (I will look into this more, though). The - 2logL seems consistent as well. The only thing that seems to change is, for example,
when changing the E start value, I will get a different lower bound CI (not upper bound, for example) in the cases in which this did not succeed for say, the AE model. Here is an example where it
doesn't succeed for the upper bound of the ACE model (info column is -2logLL top row, and AIC second row). This is with no change in the starting value.
lbound estimate ubound note info
top.A_std[1,1] -1.062900 -0.245001 NaN !!! 139.933203
top.C_std[1,1] -0.182275 0.492349 1.059306 -271.866406
top.E_std[1,1] 0.489673 0.752652 1.078756 0.000000
This is the case when I change the optimizer to CSOLNP
lbound estimate ubound note info
top.A_std[1,1] -1.064765 -0.245001 0.571626 139.933203
top.C_std[1,1] -0.186698 0.492349 1.060668 -271.866406
top.E_std[1,1] 0.489466 0.752652 1.078679 0.000000
When I look into the AE model with the error and change the starting value for E (from 0.5 to 0.8) for an estimate I get this:
lbound estimate ubound note info
top.A_std[1,1] 0.179462 0.430153 0.625025 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] NaN 0.569847 0.820538 !!! 0.000000
to this:
lbound estimate ubound note info
top.A_std[1,1] 0.179462 0.430153 0.625025 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] 0.393639 0.569847 0.820538 0.000000
and if I change the above (same AE model case) to the CSOLNP optimizer (instead of the starting value to 0.8, so keep that at 0.5), I get:
lbound estimate ubound note info
top.A_std[1,1] 0.178853 0.430153 0.624640 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] 0.375360 0.569847 0.821147 0.000000
Curious about your insight on this.
Replied on Tue, 07/20/2021 - 16:13
Replied on Tue, 07/20/2021 - 21:03
In reply to confidence intervals by AdminRobK
Outside of the rest of the examples I posted, etc. is there an equivalent option for what you are suggesting specifically in umx? All I know of is the addCI parameter when running the ACE model /
variants, so I am not sure how that method translates from OpenMx to the umx interface, etc. So far I haven't changed anything in that respect. The optimizer seems to fail for very few cases,
specifically with respect to the Cis (usually it's just one bound of one parameter estimate that it has difficulty with). But out of the phenotypes I am checking, it's relatively few where this
occurs (though ideally there would be none).
Thanks a lot for all of the help--I really appreciate it.
Replied on Tue, 07/20/2021 - 13:21
For another estimate (which didn't have any errors), these are the following differences it has if I change the optimizer/starting E value/seed (below with a hashtag):
lbound estimate ubound note info
top.A_std[1,1] 0.063975 0.342624 0.565213 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434787 0.657376 0.936025 0.000000
#E_start = 0.5, CSOLNP
lbound estimate ubound note info
top.A_std[1,1] 0.063876 0.342624 0.565174 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434826 0.657376 0.936090 0.000000
#E_start = 0.5, default optimizer
lbound estimate ubound note info
top.A_std[1,1] 0.063897 0.342624 0.565175 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434825 0.657376 0.936082 0.000000
#E_start = 0.8, default optimizer
lbound estimate ubound note info
top.A_std[1,1] 0.063876 0.342624 0.565174 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434826 0.657376 0.936090 0.000000
#E_Start = 0.5, default optimizer, different seed
Replied on Tue, 07/20/2021 - 17:36
In reply to Another phenotype example if perturbed by mirusem
Here is a more comprehensive version (with standard errors). It does look like it's relatively pretty consistent even with SEs (if I am not mistaken).
Here is an example of an estimate where there is a CI bound NaN error.
#Estimate with error (NaN lowerbound, AE).
# seed = first, optimizer = SLSQP, E = 0.5
free parameters:
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.60409619 0.018456489
2 A_r1c1 top.A 1 1 0.01609547 0.005227711
3 E_r1c1 top.E 1 1 0.02132252 0.004326436
lbound estimate ubound note info
top.A_std[1,1] 0.179462 0.430153 0.625025 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] NaN 0.569847 0.820538 !!! 0.000000
# seed = first, optimizer = SLSQP, E = 0.8
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.60409620 0.018456490
2 A_r1c1 top.A 1 1 0.01609547 0.005227712
3 E_r1c1 top.E 1 1 0.02132252 0.004326436
lbound estimate ubound note info
top.A_std[1,1] 0.179462 0.430153 0.625025 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] 0.393639 0.569847 0.820538 0.000000
# seed = second, optimizer = SLSQP, E = 0.5
free parameters:
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.60409619 0.018456489
2 A_r1c1 top.A 1 1 0.01609547 0.005227711
3 E_r1c1 top.E 1 1 0.02132252 0.004326436
lbound estimate ubound note info
top.A_std[1,1] 0.179462 0.430153 0.625016 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] 0.375239 0.569847 0.820538 0.000000
# seed = first, optimizer = CSOLNP, E = 0.5
free parameters:
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.60409614 0.018456474
2 A_r1c1 top.A 1 1 0.01609544 0.005227694
3 E_r1c1 top.E 1 1 0.02132248 0.004326420
lbound estimate ubound note info
top.A_std[1,1] 0.179462 0.430153 0.624903 36.693141
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -67.386281
top.E_std[1,1] 0.375201 0.569847 0.820538 0.000000
Here is an example of the working case:
#Working estimate (AE).
# seed = first, optimizer = SLSQP, E = 0.5
ree parameters:
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.42486799 0.016192053
2 A_r1c1 top.A 1 1 0.01035623 0.004402426
3 E_r1c1 top.E 1 1 0.01986998 0.004057314
lbound estimate ubound note info
top.A_std[1,1] 0.063876 0.342624 0.565174 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434826 0.657376 0.936090 0.000000
# seed = first, optimizer = SLSQP, E = 0.8
free parameters:
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.42486797 0.016192049
2 A_r1c1 top.A 1 1 0.01035622 0.004402423
3 E_r1c1 top.E 1 1 0.01986997 0.004057314
lbound estimate ubound note info
top.A_std[1,1] 0.063897 0.342624 0.565175 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434825 0.657376 0.936082 0.000000
# seed = second, optimizer = SLSQP, E = 0.5
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.42486799 0.016192053
2 A_r1c1 top.A 1 1 0.01035623 0.004402426
3 E_r1c1 top.E 1 1 0.01986998 0.004057314
lbound estimate ubound note info
top.A_std[1,1] 0.063876 0.342624 0.565174 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434826 0.657376 0.936090 0.000000
# seed = first, optimizer = CSOLNP, E = 0.5
name matrix row col Estimate Std.Error A
1 expMean_var1 top.expMean means var1 0.42486797 0.016192034
2 A_r1c1 top.A 1 1 0.01035620 0.004402407
3 E_r1c1 top.E 1 1 0.01986994 0.004057299
lbound estimate ubound note info
top.A_std[1,1] 0.063819 0.342624 0.565174 50.340513
top.C_std[1,1] 0.000000 0.000000 0.000000 !!! -94.681026
top.E_std[1,1] 0.434826 0.657376 0.936118 0.000000
Based on this, should there be any concerns as to switching the optimizer, or maybe an idea as to what really is going on in the few cases in which one of the CI bounds are not calculated?
Replied on Wed, 07/21/2021 - 10:55
One miscellaneous hint I forgot to mention yesterday: if you're using SLSQP, try setting
"Feasibility tolerance" to a smaller value than its on-load default (say, 1e-3 or 1e-4). That might help, at least as of OpenMx v2.19. What is your `mxVersion()` output, come to think of it?
**Edit:** Never mind. I just saw your `mxVersion()` output upthread.
Based on this, should there be any concerns as to switching the optimizer, or maybe an idea as to what really is going on in the few cases in which one of the CI bounds are not calculated?
I do not think there are any concerns relating to switching the optimizer or changing the RNG seed. Neither of those things is considerably changing the point estimates or standard errors, right?
But, from what you've included in your posts, I can't really tell what's going on when a confidence limit is reported as `NaN`. I would need to see at least the first seven columns of the 'CI
details' table (which prints when you use `summary()` with argument `verbose=TRUE`). I would also need the -2logL at the MLE (which you seem to have intended to include in your posts?). The
information in your posts is not very easy to read, either. The tables would be easier to read if they displayed in a fixed-width font, which can be done with Markdown or with HTML tags.
Outside of the rest of the examples I posted, etc. is there an equivalent option for what you are suggesting specifically in umx?
I don't know. Sorry.
Replied on Wed, 07/21/2021 - 15:46
In reply to One miscellaneous hint I by AdminRobK
That sounds promising, and it does work with the alternate optimizer. I have tried changing the tolerance using mxOptions() but it doesn't look like that makes a difference (setting it as such
mxOption(key="Feasibility tolerance", value=1e-3)), unless umx might be overriding this, etc (which I wouldn't think it would, but I don't know).
I've never really typed in html, takes more time, but yeah I agree it looks much nicer (and I personally also didn't like how it was saving prior).
confidence intervals:
lbound estimate ubound note
top.A_std[1,1] 0.1794616 0.4301532 0.6244231
top.C_std[1,1] 0.0000000 0.0000000 0.0000000 !!!
top.E_std[1,1] NA 0.5698468 0.8205384 !!!
CI Details:
parameter side value fit diagnostic
1 top.A_std[1,1] lower 0.1794616 -69.54434 success
2 top.A_std[1,1] upper 0.6244231 -69.54440 success
1 top.C_std[1,1] lower 0.0000000 -69.54471 success
1 top.C_std[1,1] upper 0.0000000 -69.54464 success
1 top.E_std[1,1] lower 0.4012878 -69.82063 alpha level not reached
1 top.E_std[1,1] upper 0.8205384 -69.54434 success
StatusCode method expMean_var1 A_r1c1 E_r1c1
1 OK neale-miller-1997 0.6039050 0.006549955 0.02994786
2 OK neale-miller-1997 0.6042243 0.026113065 0.01570644
3 OK neale-miller-1997 0.6330415 0.010966001 0.02533962
4 OK neale-miller-1997 0.6384895 0.018864979 0.02289753
5 infeasible non-linear constraint neale-miller-1997 0.6040962 0.022071510 0.01479346
6 infeasible non-linear constraint neale-miller-1997 0.6039050 0.006549955 0.02994786
This is when it fails.
Also there is this information:
Model Statistics:
Parameters Degrees of Freedom Fit (-2lnL units)
3 141 -73.38628
And, outside of that, it looks like the std errors are NA under the "free parameters" column, specifically when I run summary(verbose = TRUE) on the umxConfint result (which is what gives the results
/CIs above). But when I run summary(verbose=TRUE) on just the model prior to umxConfint, the SEs (albeit no CIs) are stable (and can be referred to in the other post, etc.).
And no worries--you've helped me a lot! Hopefully this last bit will give a bit of closure. I will say switching the optimizer fixes the issue though for those few estimates this occurs.
Replied on Wed, 07/21/2021 - 20:38
It looks like I can change the optimizer specifically for the CI or also in general for the model estimate / calculation itself, and maybe these two would give slightly different output. Is there a
preference of doing either, or is there a benefit, etc. since I would only be doing this for a few phenotypes out of the majority, anyway?
Replied on Thu, 07/22/2021 - 12:04
I've never really typed in html, takes more time, but yeah I agree it looks much nicer (and I personally also didn't like how it was saving prior).
OK, now I can see what's wrong with the lower confidence limit for _E_. The optimizer was unable to adequately worsen the -2logL. For a 95% CI, the target worsening of fit at both limits is about
3.841. For _E_'s lower limit, the worsening was only by about 3.566, which isn't too far off.
It looks like I can change the optimizer specifically for the CI or also in general for the model estimate / calculation itself, and maybe these two would give slightly different output. Is there
a preference of doing either, or is there a benefit, etc. since I would only be doing this for a few phenotypes out of the majority, anyway?
It's basically impossible to give a general recommendation about that. Do whatever seems to work best for you.
Replied on Thu, 07/22/2021 - 14:13
Replied on Thu, 07/22/2021 - 15:26 | {"url":"https://openmx.ssri.psu.edu/comment/9346","timestamp":"2024-11-08T08:32:46Z","content_type":"text/html","content_length":"145627","record_id":"<urn:uuid:dbb09e9b-68f8-418c-8aac-fa7ad7adb931>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00391.warc.gz"} |
A company manufactures two types of sweaters: type A and type B. It costs Rs 360 to make a type A sweater and Rs 120 to make a tvpe B sweater. The companv can make at most 300 sweaters and spend at most Rs 72000 a day. The number of sweaters of type B cannot exceed the number of sweaters of type A by more than 100 . The company makes a profit of Rs 200 for each sweater of type A and Rs 120 for every sweater of type B. Formulate this problem as a LPP to maximize the profit to the company. - Noon Academy
A company manufactures two types of sweaters: type A and type B. It costs Rs 360 to make a type A sweater and Rs 120 to make a tvpe B sweater. The companv can make at most 300 sweaters and spend at
most Rs 72000 a day. The number of sweaters of type B cannot exceed the number of sweaters of type A by more than 100 . The company makes a profit of Rs 200 for each sweater of type A and Rs 120 for
every sweater of type B. Formulate this problem as a LPP to maximize the profit to the company.
A company manufactures two types of sweaters: type A and type B. It costs Rs 360 to make a type A sweater and Rs 120 to make a tvpe B sweater. The companv can make at most 300 sweaters and spend at
most Rs 72000 a day. The number of sweaters of type B cannot exceed the number of sweaters of type A by more than 100 . The company makes a profit of Rs 200 for each sweater of type A and Rs 120 for
every sweater of type B. Formulate this problem as a LPP to maximize the profit to the company.
The following constraints are:
So, the required LPP to maximize the profit is | {"url":"https://www.learnatnoon.com/s/a-company-manufactures-two-types-of-sweaters-type-2/65417/","timestamp":"2024-11-10T12:52:01Z","content_type":"text/html","content_length":"115469","record_id":"<urn:uuid:894b2587-8e36-4211-84ab-19ea0377012c>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00443.warc.gz"} |
Volume of a bag of grout in cubic feet and cubic yard - Civil Sir
Volume of a bag of grout in cubic feet and cubic yard
The volume of a bag of grout can vary depending on the size of the bag and the specific product. However, a common size for a bag of grout is 50 pounds, and it typically yields approximately 0.5
cubic feet or 0.0185 cubic yards when mixed with water according to standard mixing ratios.
Grout is one of the important binding material used in construction line, material used for filling the space or gap between the base, top surface of foundation, between the tiles is called grout and
their process of feeling grout application between the gap is known as grouting.
Grout is prepared from mixing of water cement and sand, it is generally mix in the ratio of one part cement to 1- 2 parts sand, are represented as 1:1 (1 parts cement: 1 parts sand) and 1:2 (1 parts
cement: 2 parts sand). It has low viscosity, thin paste, readily self flows in the gaps.
Volume of a bag of grout in cubic feet and cubic yard
Portland cement is one of the most common cementing material presence in the grout, but also other some thermoset polymer matrix grouts based on thermosets properties such as urethane and epoxies are
also popular, non-shrink, non- metallic grout.
Structural grout is often used in reinforced masonry work to fills voids in masonry housing reinforcing Steel, and securing the Steel in place and bonding it to the masonry structure safely, it is
also used for fill the gaps between tiles that’s why it is also known as tiling grout.
Weight of grout:- typically, a grout weighs around approximately 2700 pounds per cubic yard, 100 pounds per cubic foot, 1600 kgs per cubic metre, 1.6kg/ litre, 16 kN/m3, 1.6 g/cm3, or which is
approximately equal as 6kg or 13.37 lb per gallon.
Volume of a bag of grout in cubic feet and cubic yard
Typically, Volume of a bag (50lb) of grout is around 0.5 cubic feet, 14 litres, 3.75 gallons, 0.0142 cubic metre, or which is approximately equal as 0.0185 cubic yard and a bag of 25 lb yields 0.25
cubic feet, 2 gallons, 7 litres, 0.00925 cubic yard, or which is approximately equal as 0.00709 cubic metre.
Volume of a 50 lb bag of grout in cubic feet
A bag of 50 lb grout is premixed of cement and sand as grout, as we know 1 cubic foot grout weighs around 100 pounds, so volume of a 50 lb bag of grout = 50/100 = 0.5 cubic foot, therefore volume of
50 lb bag of grout in cubic feet is about 0.50.
Regarding this, how many cubic feet are in a 50 lb bag of grout, as 100 lb grout = 1 CF, so a bag of 50 lb grout = 50/00 = 0.50 CF, so approximately there are 0.50 cubic feet in a 50 lb bag of grout.
Volume of a 50 lb bag of grout in cubic yard
A bag of 50 lb grout is premixed of cement and sand as grout, as we know 1 cubic yard grout weighs around 2700 pounds, so volume of a 50 lb bag of grout = 50/2700 = 0.0185 cubic yard, therefore
volume of 50lb bag of grout in cubic yard is about 0.0185.
Regarding this, how many cubic yard are in a 50 lb bag of grout, as 2700 lb grout = 1 CY, so a bag of 50 lb grout = 50/2700 = 0.0185 CY, so approximately there are 0.0185 cubic yard in a 50 lb bag of
Volume of a 50 lb bag of grout in litres
A bag of 50 lb or 22.7kg grout is premixed of cement and sand as grout, as we know 1 litre grout weighs around 1.6 kg, so volume of a 50 lb bag of grout = 22.7/1.6 = 14 litres, therefore volume of
50lb bag of grout is about 14 litres.
Regarding this, how many litres are in a 50 lb bag of grout, as 1.6kg grout = 1 litre, so a bag of 50 lb grout = 22.7/1.6 = 14 litres, so approximately there are 14 litres in a 50 lb bag of grout.
Volume of a 50 lb bag of grout in gallons
A bag of 50 lb or 22.7kg grout is premixed of cement and sand as grout, as we know 1 gallon grout weighs around 13.37 pounds, so volume of a 50 lb bag of grout = 50/13.37 = 3.75 gallons, therefore
volume of 50lb bag of grout is about 3.75 gallons.
Regarding this, how many gallons are in a 50 lb bag of grout, as 13.37 lb grout = 1 gallon, so a bag of 50 lb grout = 50/13.37 = 3.75 gallons, so approximately there are 3.75 gallons in a 50 lb bag
of grout.
Volume of a 50 lb bag of grout in cubic metre
A bag of 50 lb or 22.70 kg grout is premixed of cement and sand as grout, as we know one cubic metre of ready mix grout weighs around 1600kg, so volume of a 50 lb or 22.70kg bag of grout = 22.70/1600
= 0.0142 m3, therefore volume of 50lb bag of grout is about 0.0142 cubic metre.
Regarding this, how many cubic meter are in a 50 lb bag of grout, as 1600kg grout = 1 cubic meter, so a bag of 50 lb grout = 22.70/1600 = 0.0142 cubic metre, so approximately there are 0.0142 cubic
metre in a 50 lb bag of grout.
Volume of a 25 lb bag of grout in cubic feet
A bag of 25 lb grout is premixed of cement and sand as grout, as we know 1 cubic foot grout weighs around 100 pounds, so volume of a 25 lb bag of grout = 25/100 = 0.25 cubic foot, therefore volume of
25 lb bag of grout in cubic feet is about 0.25.
Regarding this, how many cubic feet are in a 25 lb bag of grout, as 100 lb grout = 1 CF, so a bag of 25 lb grout = 25/100 = 0.25 CF, so approximately there are 0.25 cubic feet in a 25 lb bag of
Volume of a 25 lb bag of grout in cubic yard
A bag of 25 lb grout is premixed of cement and sand as grout, as we know 1 cubic yard grout weighs around 2700 pounds, so volume of a 25 lb bag of grout = 25/2700 = 0.00925 cubic yard, therefore
volume of 25lb bag of grout in cubic yard is about 0.00925.
Regarding this, how many cubic yard are in a 25 lb bag of grout, as 2700 lb grout = 1 CY, so a bag of 24 lb grout = 25/2700 = 0.00925 CY, so approximately there are 0.00925 cubic yard in a 25 lb bag
of grout.
Volume of a 25 lb bag of grout in litres
A bag of 25 lb or 11.35kg grout is premixed of cement and sand as grout, as we know 1 litre grout weighs around 1.6 kg, so volume of a 25lb bag of grout = 11.35/1.6 = 7 litres, therefore volume of
25lb bag of grout is about 7 litres.
Regarding this, how many litres are in a 25 lb bag of grout, as 1.6kg grout = 1 litre, so a bag of 25 lb grout = 11.35/1.6 = 7 litres, so approximately there are 7 litres in a 25 lb bag of grout.
Volume of a 25 lb bag of grout in gallons
A bag of 25 lb or 11.35kg grout is premixed of cement and sand as grout, as we know 1 gallon grout weighs around 13.37 pounds, so volume of a 25 lb bag of grout = 25/13.37 = approx 2 gallons,
therefore volume of 25lb bag of grout is about 2 gallons.
Regarding this, how many gallons are in a 25 lb bag of grout, as 13.37 lb grout = 1 gallon, so a bag of 25 lb grout = 25/13.37 = 2 gallons, so approximately there are 2 gallons in a 25 lb bag of
Volume of a 25 lb bag of grout in cubic metre
A bag of 25 lb or 11.35 kg grout is premixed of cement and sand as grout, as we know one cubic metre of ready mix grout weighs around 1600kg, so volume of a 25 lb or 11.35kg bag of grout = 11.35/1600
= 0.00709 m3, therefore volume of 25lb bag of grout is about 0.00709 cubic metre.
Regarding this, how many cubic meter are in a 25 lb bag of grout, as 1600kg grout = 1 cubic meter, so a bag of 25 lb grout = 11.35/1600 = 0.00709 cubic metre, so approximately there are 0.00709 cubic
metre in a 25 lb bag of grout.
How do you calculate grout volume
Typically, weight of a bag of grout is about 50 pounds and their density is about 100 lb/ft^3 or 2700lb/yd^3, regarding this, How do you calculate grout volume, generally you calculate grout volume
by dividing mass of one bag of grout to their density, such as 50lb/100lb/ft^3 = 0.5 CF, or 50lb/2700lb/yd^3 = 0.0185 CY, therefore volume of 1 bag of grout is about 0.5 cubic feet, or which is
approximately equal as 0.0185 cubic yard.
ALSO READ :-
How to calculate cubic yards of concrete
Cubic yards into square feet | square feet in cubic yards
Cubic yards into tons | tons into cubic yards
How many cubic yards are in a dump truck?
How much area does a cubic yard of concrete cover?
Volume of grout per bag
Typically, weight of a bag of grout is about 50 pounds and their density is about 100 lb/ft^3 or 2700lb/yd^3, regarding this, volume of grout per bag, such as volume of grout per bag = 50lb/100lb/ft^
3 = 0.5 CF, or 50lb/2700lb/yd^3 = 0.0185 CY, therefore volume of grout per bag is about 0.5 cubic feet, or which is approximately equal as 0.0185 cubic yard.
Weight of grout per cubic foot
Typically, one cubic yard of grout yields about 2700 pounds weight, and 1 cubic yard is equal as 27 cubic feet, so weight of grout per cubic foot = 2700/27 = 100 pounds, therefore weight of grout per
cubic foot is about 100 pounds.
◆You Can Follow me on Facebook and
Subscribe our Youtube Channel
How much grout per m3
A bag of 50 lb or 22.70 kg grout is premixed of cement and sand as grout, as we know one cubic metre of ready mix grout weighs around 1600kg, regarding this, How much grout per m3, generally grout
per m3 is weight around 1600kg or approx 70 bags of 50 lb grout. | {"url":"https://civilsir.com/volume-of-a-bag-of-grout-in-cubic-feet-and-cubic-yard/","timestamp":"2024-11-06T20:59:50Z","content_type":"text/html","content_length":"99457","record_id":"<urn:uuid:42bb694f-7cb7-40f7-96ff-2f03da2c1813>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00507.warc.gz"} |
The 'double-double' type
Many scientific applications require accuracy greater than that provided by ‘double precision’. However, no hardware implementation of ‘quadruple precision’ exists, and the cost of implementation in
software is often prohibitive. This article describes an effective compromise, implemented using the ‘double-precision’ type.
The current implementation has not been thoroughly tested. While every attempt has been made to ensure accuracy, there are probably cases in which the code does not behave as expected. Do not trust
your career or reputation to this code without thorough testing!
Version 1.1
1. Accuracy improvements for most functions (the acos(), atan(), asinh() functions still need improvement).
2. A test program, TestQD, has been added to the solution in order to provide basic "sanity checks" of the functions.
3. Bug fixes - fmod(), round(), stream output.
4. The fma() function has been renamed Fma() in order to avoid incompatibility with the VS2013 run-time library.
Version 1.1.1
1. Bug fix - pow(negative, integer) now returns the correct value
2. Bug fix - 0.0 / 0.0 == NaN
3. The Visual Studio solution provided uses Visual Studio 2013. For those still using VS 2012/2010, the solution and project files from version 1.1 should still work (no file names have been changed
Version 1.1.3
1. Bug fixes - (unsigned) int, (unsigned) long, and (unsigned) long long constructors
2. Bug fixes - toInt(), toLong(), and toLongLong() converters
I came across this solution for extending precision when performing a simulation of Brownian movement. The program required updating the positions of atoms in the simulation by extremely small
amounts, which meant that the increment was rounded (and sometimes - shifted completely out). The only way to ensure that the calculations were accurate was to use higher precision.
Using the code
The code is provided as a Visual Studio 2010 project. The files qd\config.h and include\qd\qd_config.h contain macros that should be modified in order to support different compilers. In particular,
some of these APIs are present in the standard C++11 library (but not in earlier versions).
With one exception, the code is designed as a drop-in replacement for the 'double' type. For example, if your original code is:
#include <iostream>
#include <math>
int main(void)
double two = 2.0;
std::cout << std::sqrt( two );
return 0;
then the code could be rewritten to use the 'double-double' type as follows:
#include <iostream>
#include "dd_real.h"
int main(void)
dd_real two = "2.0";
std::cout << std::sqrt( two );
return 0;
The exception is initialization to a constant. In order to initialize a variable to full 'double-double' precision, the value must be passed as a string. This is because a value that is not passed as
a string will be rounded at compilation time to 'double' precision, which misses the point.
The IEEE 754 Standard for Floating-Point Arithmetic vs 'double-double'
The IEEE 754-2008 Standard for Floating-Point Arithmetic ^[1] defines three types for binary arithmetic – binary32, binary64, and binary128. Their properties are summarized in table 1.
Table 1 - IEEE 754 binary types
│ │Sign │Exponent│Mantissa│ Hardware? │
│ binary32 │1 bit│8 bits │24 bits │Y - 'float' │
│ binary64 │1 bit│11 bits │53 bits │Y - 'double' │
│binary128 │1 bit│15 bits │113 bits│N │
1. In the binary representations, the MSB of the mantissa is implicit. This accounts for the ‘extra’ bit in the above representations.
2. While the binary32 and binary64 types are typically implemented in hardware, binary128 currently has no hardware implementation. Given that software implementations of binary64 are typically 10
to 100 times slower than hardware implementations, this implies that a software implementation of binary128 will be prohibitively slow.
The double-double type is a compromise between the extra precision of binary128 and the speed of binary64. It relies on properties of IEEE 754 floating point numbers to represent numbers in pairs of
binary64 values, such that the sum of the pair represents the number. For best results, there should be no overlap between the bits in the pair, i.e. the absolute value of the smaller number should
be no larger than 2^-53 of the absolute value of the larger.
The precision achieved in this manner is slightly lower than binary128 (106 bits rather than 113), but the exponent range is slightly smaller than that of binary64. The reason for this is that many
operations will only work if neither number in the pair underflows. This means that the larger number must be at least 2^53*epsilon (the smallest binary64 number).
In this article, I present the basic algorithms without proof. Those interested in the proofs may find them in the references.
The implementation of double-double relies on the following conditions:
1. All rounding is to “nearest or even”
2. No “hidden bits” are involved in the calculation – operations are always rounded to binary64 with no intermediate step(s)
[2] implies that using an 80x87 in its standard mode (64-bit precision) won’t work; it has to be set to 53-bit precision. On the other hand, a program using SSE2 instructions will work fine. Note
that Windows by default places the 80x87 in 53-bit mode, in an attempt to be compatible with the SSE/SSE2 instructions.
Basic Operations ^[2]
The implementation of ‘double-double’ arithmetic relies on the ability to perform certain floating-point operations exactly. In all of the following, the notation RN( ) implies that the operation
should be performed with rounding to nearest or even. It must be emphasized that these operations are only valid if neither overflow nor underflow occur during the operations. For example, addition
of two like-signed infinities will produce a result of (inf, NaN), rather than the expected (inf, 0).
It is possible to make these basic operations behave as expected, but only at the cost of an additional test and (possible) branch.
The basic operations are as follows:
Addition/Subtraction ^[3, 4, 5]
Adds/subtracts two binary64 numbers, returning an exact result – the larger in magnitude is the rounded result of the addition/subtraction, and the smaller number is the residue from the exact
Two variants exist:
(s, t) = Fast2Sum(a, b)^[3] – requires that exponent(a) >= exponent(b)
s = RN( a + b )
t1 = RN( s - a )
t = RN( b - t1 )
(s, t) = 2Sum(a, b)^[4,5] – no preconditions on a, b
s = RN( a + b )
t1 = RN( s - b )
t2 = RN( s - t1 )
t3 = RN( a - t1 )
t4 = RN( b - t2 )
t = RN( t3 + t4 )
Veltkamp Splitting ^[3]
Takes a floating-point number x of precision p, and splits it into two floating-point numbers (x[h], x[l]) so that:
• x[h] has p-s significant bits
• x[l] has s significant bits
This assumes that no overflow occurs during the calculations. A quirk of binary floating-point is that x[l] will actually fit into s-1 bits (the sign bit of x[l] is used as an additional bit). For
binary64 (p=53), using s=27 means that both halves of the number will fit into 26 bits.
(x[h], x[l]) = Split(x, s)
Define: C = 2^s + 1
t1 = RN( C * x )
t2 = RN( x – t1 )
xh = RN( t1 + t2 )
xl = RN( x – xh )
Multiplication ^[2, 3]
Multiplies two binary64 numbers, returning an exact result - the larger in magnitude is the rounded result of the multiplication, while the smaller is the resiudue from the exact result.
(r1, r2) = 2Mult(x, y) – AKA Dekker Product
Define: s = 27
(xh, xl) = Split( x, s )
(yh, yl ) = Split( y, s )
r1 = RN( x * y )
t1 = RN( -r1 + RN( xh * yh ) )
t2 = RN( t1 + RN( xh * yl ) )
t3 = RN( t2 + RN( xl * yh ) )
r2 = RN( t3 + RN( xl * yl ) )
If a hardware fused-multiply-accumulate instruction is available, the following, much faster, implementation is possible:
(r1, r2) = 2MultFMA(x, y)
r1 = RN( x * y )
r2 = fma( x, y, -r1 )
The ‘double-double’ Type
In the following, I base my examples on the code published by Bailey et al. in their QD library ^[6,7]. While the basic implementation is theirs, I have made the following enhancements to the code:
• The implementations of trigonometric and hyperbolic functions have been re-written for greater accuracy and speed. In particular, the reduction of trigonometric function arguments is now
performed using the excellent code in fdlibm ^[8]
• Additional functions, not defined in the original QD library, have been added (e.g. expm1, logp1, cbrt, etc.)
Code organization:
• The original code may be configured to use faster, but less accurate basic functions (addition, multiplication, division). The less accurate versions of these functions have been removed
• All code for the ‘double-double’ type (with exception of class dd_real) has been moved into the qd namespace
• All helper functions have been moved into the dd_real class, so as not to pollute the global namespace
• If a mathematical function is defined in the C++ standard library for the ‘double’ type, it is also defined in the std namespace for the dd_real type
It must be emphasized that the ‘double-double’ type is an approximation to a 106-bit floating-point type. It differs from the IEEE 754 binary128 (113 bits) type in many ways:
• Correct rounding is not guaranteed
While every attempt is made to provide ‘faithful’ rounding (i.e. return one of the values closest to the infinitely-precise value), no guarantee is made that the returned value is actually the
closest of the two.
A number such as 1 + 2^-200 may easily be represented in ‘double-double’ format, but is not a valid IEEE 754 binary128 number. This ‘wobbling’ precision complicates error analysis and other issues.
Full implementation of infinity and NaN arithmetic requires testing for normal operands and for normal results at all stages of the operation. It is possible to write a cheap test that will work for
non-normal (infinite / nan ) and for most normal operands, but it is possible that results within (1 - 2-^53) of the largest magnitude will trigger an erroneous overflow condition.
• Theorems and lemmas relating to floating-point
The IEEE 754 floating point type obeys a large number of theorems that make error analysis etc. more tractable. Most of these theorems do not apply to 'double-double'
Basic Arithmetic
The basic idea is to use the exact operations defined above to calculate an approximation to the exact operation. Note that only the sum of the two components of the result is meaningful. In
particular, the high part of the result does not necessarily equal the result of operating on the high parts, e.g. z[h] <> x[h] + y[h]!
Note that optimized versions of the operators exist, e.g. for adding a 'double' to a 'double-double', etc.
Addition / Subtraction
(z[h], z[l]) = ddAdd( (x[h], x[l]), (y[h], y[l]) )
(sh, sl) = 2Sum(xh, yh)
(th, tl) = 2Sum(xl, yl)
c = RN(sl + th)
(vh, vl) = Fast2Sum(sh, c)
w = RN( tl + vl )
(zh, zl) = Fast2Sum(vh, w)
(p[h], p[l]) = ddMul( (x[h], x[l]), (y[h], y[l]) )
Note that the product of x[l] and y[l] is not calculated. This is because it can never contribute to the lower half of the result.
(ph, pl) = 2Prod(xh, yh)
pl = RN(pl + RN(xh * yl))
pl = RN(pl + RN(xl * yh))
(p[h], p[l]) = ddDiv( (x[h], x[l]), (y[h], y[l]) )
No exact method for performing division exists. Instead, a first approximation is calculated as x[h] / y[h], which is then refined by Newton-Raphson iteration.
Polynomial Functions
sqrt() and cbrt() are calculated using variants of Newton-Raphson iteration.
hypot() is calculated using an algorithm that ensures accuracy for all values.
pow() is calculated as follows:
1. If the exponent is integral and smaller than LLONG_MAX, the value is calculated using successive squaring and multiplications.
2. If the exponent is non-integral, but smaller than LLONG_MAX, the exponent is split into integral and non-integral components. The integral exponent is calculated as above, and the fractional
exponent is calculated as exp( x* log( frac(y) ).
3. If the exponent is larger than LLONG_MAX, it is calculated as exp( x * log( y ) )
Trigonometric Functions
For trigonometric functions, the argument is first reduced to the range [-pi/4, pi/4] using the pi/2 remainder algorithm provided in fdlibm ^[8]. This is then divided by 8 in order to get a value in
the range [-pi/32, pi/32]. The sine and cosine are calculated as a polynomial, and the tangent is calculated as a ratio between the sine and cosine.
For the inverse functions, Newton-Raphson iterations are performed in order to calculate, e.g. y where tan( y ) – x = 0.
The logarithm functions (log, log10, log2) are calculated by multiplying x by the appropriate constant, and then calculating log2(C * x). We use log2() as the base function because it allows us to
easily extract the integer component of the result (using the frexp() API), leaving only the logarithm of the fraction to be calculated.
The functions exp, exp2, expm1 are all calculated by reducing the argument to less than 1, and calculating the Taylor series. For expm1, the zeroth term of the Taylor series is not added.
Note that if the argument of expm1 is large enough, the result is identical to exp(x) - 1.0.
Hyperbolic Functions
With the exception of sinh(x) where x is close to zero, these functions are calculated by using the exp() function as appropriate. For sinh(x) where x is close to zero, the Taylor series for the
function is used in an attempt to reduce catastrophic cancellation.
Utility Functions
ldexp, frexp, fmod, modf, fma, copysign, etc. all exist and are defined in a manner identical to that of the similar functions in the C++ Standard.
std::numeric_limits is fully defined.
The functions dd_pi(), dd_e(), dd_ln10(), etc. return the expected values, rounded to nearest or even.
Known Bugs
1. The accuracy of some of the inverse trigonometric (acos(), atan()) and inverse hyperbolic functions (asin()) needs to be improved.
1. IEEE Standard for Floating-Point Arithmetic. IEEE Standard 754-2008, Aug. 2008
2. Handbook of Floating-Point Arithmetic. J.M. Muller et al., Birkhauser 201x
3. A floating-point technique for extending the available precision. T.J. Dekker, Numerische Mathematik 18(3):224-242, 1971
4. The Art of Computer Programming vol. 2. D. Knuth, Addison-Wesley Reading MA, 3^rd Edition, 1998
5. Quasi-double precision in floating-point addition. O. Moller. BIT 5:37-50, 1965
6. Algorithms for quad-double precision floating-point arithmetic. Y. Hida, X.S. Li, D.H. Bailey, Proceedings of the 15th IEEE Symposium on Computer Arithmetic pp 155-162, 2001
7. The original QD library is available at http://crd.lbl.gov/~dhbailey/mpdist
8. The fdlibm library is available at http://www.netlib.org/fdlibm
9. Software Manual for the Elementary Functions. W. J. Cody Jr. & W. Waite, Prentice-Hall, 1980
2015-03-14 Original version
2015-03-31 Bug fixes, basic test program | {"url":"https://codeproject.global.ssl.fastly.net/Articles/884606/The-double-double-type?display=Print","timestamp":"2024-11-01T23:15:24Z","content_type":"text/html","content_length":"50861","record_id":"<urn:uuid:9ecb94c0-45d8-4163-8633-b81f379302fc>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00218.warc.gz"} |
How do I find the maximum value of two columns in SQL?
How do I find the maximum value of two columns in SQL?
In SQL Server there are several ways to get the MIN or MAX of multiple columns including methods using UNPIVOT, UNION, CASE, etc… However, the simplest method is by using FROM … VALUES i.e. table
value constructor. Let’s see an example. In this example, there is a table for items with five columns for prices.
Can you select from two tables?
A simple SELECT statement is the most basic way to query multiple tables. You can call more than one table in the FROM clause to combine results from multiple tables. Here’s an example of how this
works: SELECT table1.
Can we select data from multiple tables in SQL?
In SQL we can retrieve data from multiple tables also by using SELECT with multiple tables which actually results in CROSS JOIN of all the tables. The resulting table occurring from CROSS JOIN of two
contains all the row combinations of the 2nd table which is a Cartesian product of tables.
How do you select Max from a table?
To find the max value of a column, use the MAX() aggregate function; it takes as its argument the name of the column for which you want to find the maximum value. If you have not specified any other
columns in the SELECT clause, the maximum will be calculated for all records in the table.
How do I find the maximum of two columns in Excel?
Here’s how: Select the cells with your numbers. On the Home tab, in the Formats group, click AutoSum and pick Max from the drop-down list….How to make a MAX formula in Excel
1. In a cell, type =MAX(
2. Select a range of numbers using the mouse.
3. Type the closing parenthesis.
4. Press the Enter key to complete your formula.
How do I SELECT a value from two tables in SQL?
Example syntax to select from multiple tables:
1. SELECT p. p_id, p. cus_id, p. p_name, c1. name1, c2. name2.
2. FROM product AS p.
3. LEFT JOIN customer1 AS c1.
4. ON p. cus_id=c1. cus_id.
5. LEFT JOIN customer2 AS c2.
6. ON p. cus_id = c2. cus_id.
How can I fetch data from two tables in SQL without joining?
Yes, Tables Can Be Joined Without the JOIN Keyword As you have just seen, it’s not always necessary to use the JOIN keyword to combine two tables in SQL. You can replace it with a comma in the FROM
clause then state your joining condition in the WHERE clause. The other method is to write two SELECT statements.
How do you SELECT from multiple tables in SQL without join?
Yes, Tables Can Be Joined Without the JOIN Keyword You can replace it with a comma in the FROM clause then state your joining condition in the WHERE clause. The other method is to write two SELECT
Can I use Max in where clause?
MAX() function with Having The SQL HAVING CLAUSE is reserved for aggregate function. The usage of WHERE clause along with SQL MAX() have also described in this page. The SQL IN OPERATOR which checks
a value within a set of values and retrieve the rows from the table can also be used with MAX function.
How do I find the maximum value of a group in Excel?
Finding max value in a group with formula
1. Type this formula =IF(A2=A3,””,”1″) ( A2 and A3 are in a group) into the cell C2 which is adjacent to your data, and then press Enter key and drag Auto fill to copy the formula to the range you
2. Select C1:C12 and click Data > Filter to add Filter button into Cell C1. | {"url":"https://thecrucibleonscreen.com/how-do-i-find-the-maximum-value-of-two-columns-in-sql/","timestamp":"2024-11-13T06:22:18Z","content_type":"text/html","content_length":"55405","record_id":"<urn:uuid:5b71cd3a-15ee-4ff7-a365-b976bff0e3af>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00755.warc.gz"} |
Physics - Kepler's Laws
What do Kepler’s laws describe?
What is Kepler’s First Law?
All of the planets move in elliptical orbits with the sun at one focus.
What is Kepler’s Second Law?
The radius vector sweeps out equal areas in equal times.
What is the “radius vector” for Kepler’s Second Law?
The imaginary line between the sun and the planet.
What is Kepler’s Third Law?
The square of the period of a planets orbit is directly proportional to the cube of its mean distance from the sun.
How can you derive Kepler’s Third Law?
\[F = \frac{mv^2}{r}\]
\[F = \frac{GMm}{r^2}\]
where $v = \omega r$
How would you write Kepler’s Third Law in terms of direct proportionality?
\[T^2 \propto r^3\]
What is the constant of proportionality for Kepler’s Third Law?
What is the equation form of Kepler’s Third Law?
\[T^2 = \left( \frac{4\pi^2}{GM} \right) r^3\]
How many seconds are there in a year?
Where will a planet’s orbit be the fastest, close to the sun or far away from the sun?
Close to the sun due to Kepler’s Second Law.
Related posts | {"url":"https://ollybritton.com/notes/a-level/physics/topics/keplers-laws/","timestamp":"2024-11-07T10:52:59Z","content_type":"text/html","content_length":"505666","record_id":"<urn:uuid:bdb3a8f7-e081-42b8-90a9-455af5d60e00>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00337.warc.gz"} |
Rectified Hebrew calendar
The Rectified Hebrew calendar is a proposal for calendar reform intended to replace the traditional fixed arithmetic Hebrew Calendar.
It was developed, validated, and placed in the public domain in Hebrew year 5766 (Gregorian year 2006) by Dr. Irv Bromberg of the University of Toronto, Canada.
Hebrew calendar solar drift[ ]
The Traditional Hebrew calendar is a lunisolar calendar that employs a fixed arithmetic leap cycle for its solar component, and a fixed arithmetic constant-interval cycle for its lunar component (
molad). Its calendar mean year is exactly 365 days 5 hours 55 minutes and 25 + ^25/[57] seconds, but that is 6 minutes and 25 + ^25/[57] seconds too long compared to the present era mean northward
equinoctial year of 365 days 5 hours 49 minutes 0 seconds (mean solar time). Thus the Traditional Hebrew calendar drifts with respect to the northward equinoctial year, presently at the relatively
rapid rate of 1 / (^6/[1440] + (25+^25/[57]) / 86400) = about 224 years per day of drift. For comparison, the Julian calendar presently drifts at the rate of 1 / (^11/[1440]) = about 130.9 years per
day of drift. The fixed arithmetic Hebrew calendar started in Hebrew year 4119, so since then it has accumulated a drift of about (5767-4119)/224 = slightly more than 7 days late (on average).
Note, however, that it is not possible for any individual Hebrew date to be "7 days late", because each month starts within a day or two of its molad moment. Instead, presently about 80% of
Traditional Hebrew calendar months are "on time" and about 20% of months are "one month late". Typically, the Traditional Hebrew calendar enters the "one month late" state after it "prematurely"
inserts a leap month, then it remains "one month late" until a year later when the leap month "should have been" inserted. The current pattern of this alternating state is such that it amounts to the
Traditional Hebrew calendar having drifted 7 days later than it was in the era of Hillel II (Hebrew year 4119, Gregorian year 359 AD), as judged by the average timing of either the start of the month
of Nisan or the end of the first day of Passover relative to the timing of the northward equinox on a Jerusalem mean solar time clock.
Hebrew calendar lunar drift[ ]
The lunar component of the Traditional Hebrew calendar employs a constant interval (molad interval) of 29 days 12 hours and 44 + ^1/[18] minutes to account for each mean lunar month. Due to tidal
forces slowing the Earth rotation rate, however, the length of the mean lunar cycle gets progressively shorter. Consequently, in the present era the traditional molad interval is about ^3/[5] second
too long, but this discrepancy is growing at a progressively faster rate (quadratically). On average the Hebrew calendar's estimate of the mean lunar conjunction is currently about 2 hours late, as
judged by the reading on a Jerusalem mean solar time clock.
Rectified leap rule[ ]
The cycle of leap years is the primary difference between the Traditional and Rectified Hebrew calendars.
The Traditional Hebrew calendar has 7 leap years per 19-year cycle, and its leap rule is:
It is a leap year only if the remainder of ( 7 × Year + 1 ) / 19 is less than 7.
This expression inherently causes leap year intervals to fall into uniformly spread sub-cycle patterns of (3+3+2) = 8 years or (3+3+3+2) = 11 years, which alternate 8+11=19 years per cycle.
In 1931 Dr. William Moses Feldman (1880-1939, originator of the term "biomathematics") briefly proposed an improved Hebrew calendar leap cycle of 334 years having 123 leap years and a total of 4131
months per cycle. He derived that improved leap cycle by using a continued fraction approximation of the ratio of the "tropical year" (365 days 5 hours 48 minutes 46 seconds, which is about 14
seconds too short relative to the present-era mean northward equinoctial year) to the "lunar year" (354 days 8 hours 48 minutes 36 seconds, which corresponds to 29 days 12 hours 44 minutes 3 seconds
per lunar cycle = about ^1/[3] second shorter than the traditional molad interval, yet almost ^1/[3] second too long relative to the present era mean synodic month) ^[1]. Although he only described
his leap rule qualitatively, the arithmetic that exactly reproduces Feldman's leap cycle is:
It is a leap year only if the remainder of ( 123 × Year + 369 ) / 334 is less than 123.
This expression inherently causes leap year intervals to fall into uniformly spread sub-cycle patterns of (3+3+2) = 8 years or (3+3+3+2) = 11 years, which further group to: 17×(8+11)+11 = 17×19+11 =
334 years. In other words, each Feldman cycle has 17 repeats of the traditional 19-year cycle and one truncated 11-year subcycle. The much shorter mean year of the 11-year subcycle offsets the
excessively long mean year of the 19-year subcycles, yielding a net calendar mean year of 365 days 5 hours 48 minutes and 39 + ^1/[3] seconds (calculated using the mean synodic month that Feldman
used). Although the Feldman leap cycle is a substantial improvement over the excessively long Metonic 19-year cycle, it "over-corrects" the drift, being about 20 + ^2/[3] seconds per year too short
for the present era.
Rather than the mean tropical year (which applies only to atomic time), it is the mean northward equinoctial year (measured in terms of mean solar time, as given above) that is the appropriate year
length to keep the Hebrew month of Nisan aligned relative to the northward equinox. The use of the correct present-era ratio of the mean northward equinoctial year to the mean synodic month applied
to the continued fraction method yields the even more accurate leap cycle of the Rectified Hebrew calendar, which has 130 leap years per 353-year cycle of 4366 lunar months:
It is a leap year only if the remainder of ( 130 × Year + 268 ) / 353 is less than 130.
This expression inherently causes leap year intervals to fall into uniformly spread sub-cycle patterns of (3+3+2) = 8 years or (3+3+3+2) = 11 years, which further group to: 18×(8+11)+11 = 18×19+11 =
353 years. In other words, the Rectified cycle is the same as the Feldman cycle except that it has one more 19-year subcycle per cycle, making the net calendar mean year 365 days 5 hours 48 minutes
and almost 58 seconds, or only about 2 seconds too short per year (calculated using a mean synodic month that is about ^3/[5] second shorter than the traditional molad interval).
Progressive molad[ ]
The Rectified Hebrew Calendar also has a progressively shorter molad interval, which closely matches the actual length of the mean lunation interval (mean synodic month), and which explicitly refers
its progressive molad moments to Jerusalem's longitude (mean solar time).
On the first day of Tishrei 5766 (autumn 2005) the mean synodic month was about ^3/[5] second shorter than the traditional molad interval, and was getting progressively shorter by about 27
microseconds per lunar month.
Like the Traditional Hebrew calendar, the mean year of the Rectified Hebrew calendar depends on the sum of the molad intervals, and in the present era amounts to about 365 days 5 hours 48 minutes and
57.6 seconds. This is intentionally slightly shorter than the present era northward equinoctial mean year of 365 days 5 hours 49 minutes 0 seconds, to allow for future tidal slowing of the Earth
rotation rate. The Rectified Hebrew calendar mean year will continue to progressively shorten by about ^3/[2] second per 353-year cycle.
Astronomical calculations suggest that this seemingly minor "tweak" of the molad interval will actually extend the future useful range of the Rectified Hebrew calendar by about 3 millennia!
Rosh HaShanah Postponements[ ]
The use of progressively shorter molad intervals necessitates a modification to the way that the Rosh HaShanah postponement rules are handled. The Rectified Hebrew calendar employs novel postponement
rules that are logically equivalent to those of the Traditional Hebrew calendar. (When applied to the arithmetic of the Traditional Hebrew calendar with its fixed molad intervals, the modified
postponement rules yield identical dates for the full 689472-year repeat cycle of the Traditional Hebrew calendar.) For further information please see the discussion of the Rosh HaShanah postponement
rules on the Rectified Hebrew calendar web site.
Reference Meridian of Longitude[ ]
The moments of astronomical events such as equinoxes, solstices, and lunar conjunctions must be referred to a specified meridian of longitude. In other words, for the quoted moments to be meaningful
and unambiguous, the time zone of the clock of the observer must be specified. Conventionally, the moments of such celestial events are calculated for Universal Time at the Prime Meridian, which is
the meridian of longitude that passes through Greenwich, England. Obviously, the Prime Meridian was never the reference meridian for the Traditional Hebrew calendar, but the actual original reference
meridian was never specified in the classical rabbinic sources.
It is quite widely assumed, or at least implicitly assumed, that the reference meridian for the Hebrew calendar is the longitude of Jerusalem, Israel, which is about 2 hours and 37 minutes ahead of
Universal Time. Consequently, Jerusalem mean solar time was explicitly used for the development and evaluation of the Rectified Hebrew calendar.
More recent astronomical historical evaluations, however, have suggested that the original meridian, in the era of the Second Temple, was midway between the Nile River and the end of the Euphrates
River, a longitude that is about 4° east of Jerusalem, or 16 minutes of time ahead of Jerusalem mean solar time. There could have been at least two reasons for choosing that meridian. In the era of
the Second Temple that meridian was generally considered to be the center of the civilized world, and it served as the reference meridian for many astronomical calculations, even those that Ptolemy
published several centuries later. More importantly for Jews, however, may have been the promise of HaShem to Abram in the Torah, Genesis chapter 15 verse 18: "On that day HaShem made a covenant with
Abram, saying To your descendants have I given this land, from the river of Egypt to the great river, the Euphrates River." This territory also corresponds to the full range of the patriarch's
travels during his lifetime, as described in the Torah, from Ur to Egypt.
References[ ]
1. ↑ Feldman, WM. Rabbinical Mathematics and Astronomy, page 208. Hermon Press, New York, 1931.
External Links[ ]
This page uses content from the English Wikipedia. The original article was at Rectified Hebrew calendar. The list of authors can be seen in the page history. As with the Calendar Wikia, the text
of Wikipedia is available under Creative Commons License. See Wikia:Licensing. | {"url":"https://calendars.fandom.com/wiki/Rectified_Hebrew_calendar","timestamp":"2024-11-06T02:54:29Z","content_type":"text/html","content_length":"170633","record_id":"<urn:uuid:c2ece49b-58a1-4f25-9b34-03c715593e66>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00443.warc.gz"} |
Must-Read Tutorial to Learn Sequence Modeling (deeplearning.ai Course #5)
The ability to predict what comes next in a sequence is fascinating. It’s one of the reasons I became interested in data science! Interestingly – human mind is really good at it, but that is not the
case with machines. Given a mysterious plot in a book, the human brain will start creating outcomes. But, how to teach machines to do something similar?
Thanks to Deep Learning – we can do lot more today than what was possible a few years back. The ability to work with sequence data, like music lyrics, sentence translation, understanding reviews or
building chatbots – all this is now possible thanks to sequence modeling.
And that’s what we will learn in this article. Since this is part of our deeplearning.ai specialization series, I expect that the reader will be aware of certain concepts. In case you haven’t yet
gone through the previous articles or just need a quick refresher, here are the links:
In this final part, we will see how sequence models can be applied in different real-world applications like sentiment classification, image captioning, and many other scenarios.
Table of Contents
1. Course Structure
2. Course 5: Sequence Models
1. Module 1: Recurrent Neural Networks (RNNs)
2. Module 2: Natural Language Processing (NLP) and Word Embeddings
1. Introduction to Word Embeddings
2. Learning Word Embeddings: Word2vec & GloVe
3. Applications using Word Embeddings
3. Module 3: Sequence models & Attention mechanism
Course Structure
We have covered quite a lot in this series so far. Below is a quick recap of the concepts we have learned:
• Basics of deep learning and neural networks
• How a shallow and a deep neural network works
• How the performance of a deep neural network can be improved by hyperparameter tuning, regularization and optimization
• Working and implementations of Convolutional Neural Network from scratch
It’s time to turn our focus to sequence modeling. This course (officially labelled course #5 of the deep learning specialization taught by Andrew Ng) is divided into three modules:
1. In module 1, we will learn about Recurrent Neural Networks and how they work. We will also cover GRUs and LSTMs in this module
2. In module 2, our focus will be on Natural Language Processing and word embeddings. We will see how Word2Vec and GloVe frameworks can be used for learning word embeddings
3. Finally, module 3 will cover the concept of Attention models. We will see how to translate big and complex sentences from one language to another
Ready? Let’s jump into module 1!
Module 1: Recurrent Neural Networks
The objectives behind the first module of course 5 are:
• To learn what recurrent neural networks (RNNs) are
• To learn several variants including LSTMs, GRUs and Bidirectional RNNs
Don’t worry if these abbreviations sound daunting – we’ll clear them up in no time.
But first, why sequence models?
To answer this question, I’ll show you a few examples where sequence models are used in real-world scenarios.
Speech recognition:
Quite a common application these days (everyone with a smartphone will know about this). Here, the input is an audio clip and the model has to produce the text transcript. The audio is considered a
sequence as it plays over time. Also, the transcript is a sequence of words.
Sentiment Classification:
Another popular application of sequence models. We pass a text sentence as input and the model has to predict the sentiment of the sentence (positive, negative, angry, elated, etc.). The output can
also be in the form of ratings or stars.
DNA sequence analysis:
Given a DNA sequence as input, we want our model to predict which part of the DNA belongs to which protein.
Machine Translation:
We input a sentence in one language, say French, and we want our model to convert it into another language, say English. Here, both the input and the output are sequences:
Video activity recognition:
This is actually a very upcoming (and current trending) use of sequence models. The model predicts what activity is going on in a given video. Here, the input is a sequence of frames.
Name entity recognition:
Definitely my favorite sequence model use case. As shown below, we pass a sentence as input and want our model to identify the people in that sentence:
Now before we go further, we need to discuss a few important notations that you will see throughout the article.
Notations we’ll use in this article
We represent a sentence with ‘X’. To understand further notifications, let’s take a sample sentence:
X : Harry and Hermione invented a new spell.
Now, to represent each word of the sentence, we use x<t>:
• x^<1> = Harry
• x^<2> = Hermoine, and so on
For the above sentence, the output will be:
y = 1 0 1 0 0 0 0
Here, 1 represents that the word represents a person’s name (and 0 means it’s anything but). Below are a few common notations we generally use:
• T[x] = length of input sentence
• T[y] = length of output sentence
• x^(i) = i^th training example
• x^(i)<t> = t^th training of i^th training example
• T[x]^(i) = length of i^th input sentence
At this point it’s fair to wonder – how do we represent n individual word in a sequence? Well, this is where we lean on a vocabulary, or a dictionary. This is a list of words that we use in our
representations. A vocabulary might look like this:
The size of the vocabulary might vary depending on the application. One potential way of making a vocabulary is by picking up the most frequently occurring words from the training set.
Now, suppose we want to represent the word ‘harry’ which is in the 4075^th position in our vocabulary. We one-hot encode this vocabulary to represent ‘harry’:
To generalize, x^<t> is an one-hot encoded vector. We will put 1 in the 4075^th position and all the remaining words will be represented as 0.
If the word is not in our vocabulary, we create an unknown <UNK> tag and add it in the vocabulary. As simple as that!
Recurrent Neural Network (RNN) Model
We use Recurrent Neural Networks to learn mapping from X to Y, when either X or Y, or both X and Y, are some sequences. But why can’t we just use a standard neural network for these sequence
I’m glad you asked! Let me explain using an example. Suppose we build the below neural network:
There are primarily two problems with this:
1. Inputs and outputs do not have a fixed length, i.e., some input sentences can be of 10 words while others could be <> 10. The same is true for the eventual output
2. We will not be able to share features learned across different positions of text if we use a standard neural network
We need a representation that will help us to parse through different sentence lengths as well as reduce the number of parameters in the model. This is where we use a recurrent neural network. This
is how a typical RNN looks like:
A RNN takes the first word (x^<1>) and feeds it into a neural network layer which predicts an output (y’^<1>). This process is repeated until the last time step x^<Tx> which generates the last output
y’^<Ty>. This is the network where the number of words in input as well as the output are same.
The RNN scans through the data in a left to right sequence. Note that the parameters that the RNN uses for each time step are shared. We will have parameters shared between each input and hidden
layer (W[ax]), every timestep (W[aa]) and between the hidden layer and the output (W[ya]).
So if we are making predictions for x^<3>, we will also have information about x^<1> and x^<2>. A potential weakness of RNN is that it only takes information from the previous timesteps and not from
the ones that come later. This problem can be solved using bi-directional RNNs which we will discuss later. For now, let’s look at forward propagation steps in a RNN model:
a^<0> is a vector of all zeros and we calculate the further activations similar to that of a standard neural network:
• a^<0> = 0
• a^<1> = g(W[aa] * a^<0> + W[ax] * x^<1> + b[a])
• y^<1> = g’(W[ya] * a^<1> + b[y])
Similarly, we can calculate the output at each time step. The generalized form of these formulae can be written as:
We can write these equations in an even more simpler way:
We horizontally stack W[aa] and W[ya] to get W[a]. a^<t-1> and x^<t> are stacked vertically. Rather than carrying around 2 parameter matrices, we now have just 1 matrix. And that, in a nutshell, is
how forward propagation works for recurrent neural networks.
Backpropagation through time
You might see this coming – the backpropagation steps work in the opposite direction to forward propagation. We have a loss function which we need to minimize in order to generate accurate
predictions. The loss function is given by:
We calculate the loss at every timestep and finally sum all these losses to calculate the final loss for a sequence:
In forward propagation, we move from left to right, i.e., increasing the indices of time t. In backpropagation, we are going from right to left, i.e., going backward in time (hence the name
backpropagation through time).
So far, we have seen scenarios where the length of input and output sequences was equal. But what if the length differs? Let’s see these different scenarios in the next section.
Different types of RNNs
We can have different types of RNNs to deal with use cases where the sequence length differs. These problems can be classified into the following categories:
The name entity recognition examples we saw earlier fall under this category. We have a sequence of words, and for each word, we have to predict whether it is a name or not. The RNN architecture for
such a problem looks like this:
For every input word, we predict a corresponding output word.
Consider the sentiment classification problem. We pass a sentence to the model and it returns the sentiment or rating corresponding to that sentence. This is a many-to-one problem where the input
sequence can have varied length, whereas there will only be a single output. The RNN architecture for such problems will look something like this:
Here, we get a single output at the end of the sentence.
Consider the example of music generation where we want to predict the lyrics using the music as input. In such scenarios, the input is just a single word (or a single integer), and the output can be
of varied length. The RNN architecture for this type of problems looks like the below:
There is one more type of RNN which is popularly used in the industry. Consider the machine translation application where we take an input sentence in one language and translate it into another
language. It is a many-to-many problem but the length of the input sequence might or might not be equal to the length of output sequence.
In such cases, we have an encoder part and a decoder part. The encoder part reads the input sentence and the decoder translates it to the output sentence:
Language model and sequence generation
Suppose we are building a speech recognition system and we hear the sentence “the apple and pear salad was delicious”. What will the model predict – “the apple and pair salad was delicious” or “the
apple and pear salad was delicious”?
I would hope the second sentence! The speech recognition system picks this sentence by using a language model which predicts the probability of each sentence.
But how do we build a language model?
Suppose we have an input sentence:
Cats average 15 hours of sleep a day.
The steps to build a language model will be:
• Step 1 – Tokenize the input, i.e. create a dictionary
• Step 2 – Map these words to a one-hot encode vector. We can add <EOS> tag which represents the End Of Sentence
• Step 3 – Build an RNN model
We take the first input word and make a prediction for that. The output here tells us what is the probability of any word in the dictionary. The second output tells us the probability of the
predicted word given the first input word:
Each step in our RNN model looks at some set of preceding words to predict the next word. There are various challenges associated with training an RNN model and we will discuss them in the next
Vanishing gradients with RNNs
One of the biggest problems with a recurrent neural network is that it runs into vanishing gradients. How? Consider the two sentences:
The cat, which already ate a bunch of food, was full.
The cat, which already ate a bunch of food, were full.
Which of the above two sentences is grammatically correct? It’s the first one (read it again in case you missed it!).
Basic RNNs are not good at capturing long term dependencies. This is because during backpropagation, gradients from an output y would have a hard time propagating back to affect the weights of
earlier layers. So, in basic RNNs, the output is highly affected by inputs closer to that word.
If the gradients are exploding, we can clip them by setting a pre-defined threshold.
Gated Recurrent Units (GRUs)
GRUs are a modified form of RNNs. They are highly effective at capturing much longer range dependencies and also help with the vanishing gradient problem. The formula to calculate activation at
timestep t is:
A hidden unit of RNN looks like the below image:
The inputs for a unit are the activations from the previous unit and the input word of that timestep. We calculate the activations and the output at that step. We add a memory cell to this RNN in
order to remember the words present far away from the current word. Let’s look at the equations for GRU:
c^<t> = a^<t>
where c is a memory cell.
At every timestep, we overwrite the c^<t> value as:
This acts as a candidate for updating the c^<t> value. We also define an update gate which decides whether or not we update the memory cell. The equation for update gate is:
Notice that we are using sigmoid to calculate the update value. Hence, the output of update gate will always be between 0 and 1. We use the previous memory cell value and the update gate output to
update the memory cell. The update equation for c^<t> is given by:
When the gate value is 0, c^<t> = c^<t-1>, i.e., we do not update c^<t>. When the gate value is 1, c^<t> = c^<t> and the value is updated. Let’s understand this mind-bending concept using an example:
We have the gate value as 1 when we are at the word cat. For all other words in the sequence, the gate value is 0 and hence the information of the cat will be carried till the word ‘was’. We expect
the model to predict was in place of were.
This is how GRUs helps to memorize long term dependencies. Here is a visualization to help you understand the working of a GRU:
For every unit, we have three inputs : a^<t-1>, c^<t-1> and x^<t> and three outputs : a^<t>, ^c<t> and y(hat)^<t>.
Long Short Term Memory (LSTM)
LSTMs are all the rage in deep learning these days. They might not have a lot of industry applications right now because of their complexity but trust me, they will very, very soon. It’s worth taking
out time to learn this concept – it will come in handy in the future.
Now to understand LSTMs, let’s recall all the equations we saw for GRU:
We have just added one more gate while calculating the relevance for c^<t> and this gate tells us how relevant c^<t-1> is for updating c^<t>. For GRUs, a^<t> = c^<t>.
LSTM is a more generalized and powerful version of GRU. The equation for LSTM is:
This is similar to that of GRU, right? We are just using a^<t-1> instead of c^<t-1>. We also have an update gate:
We also have a forget gate and an output gate in LSTM. The equations for these gates are similar to that of the update gate:
Finally, we update the c^<t> value as:
And the activations for the next layer will be:
So which algorithm should you use – GRU or LSTM?
Each algorithm has it
s advantages. You’ll find that their accuracy varies depending on the kind of problem you’re trying to solve. The advantage with GRU is that it has a simpler architecture and hence we can build
bigger models, but LSTM is more powerful and effective as it has 3 gates instead of 2.
Bidirectional RNN
The RNN architectures we have seen so far focus only on the previous information in a sequence. How awesome would it be if our model can take into account both the previous as well as the later
information of the sequence while making predictions at a particular timestep?
Yes, that’s possible! Welcome to the world of bidirectional RNNs. But before I introduce you to what Bidirectional RNNs are and how they work, let’s first look at why we need them.
Consider the named entity recognition problem where we want to know whether a word in a sequence represents a person. We have the following example:
He said, “Teddy bears are on sale!”
If we feed this sentence to a simple RNN, the model will predict “Teddy” to be the name of a person. It just doesn’t take into account what comes after that word. We can fix this issue with the help
of bidirectional RNNs.
Now suppose we have an input sequence of 4 words. A bidirectional RNN will look like:
To calculate the output from a RNN unit, we use the following formula:
Similarly, we can have bidirectional GRUs and bidirectional LSTMs. One disadvantage of using bidirectional RNNs is that we have to look at the entire sequence of data before making any prediction.
But the standard B-RNN algorithm is actually very effective for building and designing most NLP applications.
Deep RNNs
Remember what a deep neural network looks like?
We have an input layer, some hidden layers and finally an output layer. A deep RNN also looks like this. We take a similar network and unroll that in time:
Here, the generalized notation for activations is given as:
a^[l]<t> = activation of l^th later at time t
Suppose we want to calculate a^[2]<3> :
That’s it for deep RNNs. Take a deep breath, that was quite a handful to digest in one go. And now, time to move to module 2!
Module 2: Natural Language Processing & Word Embeddings
The objectives for studying the second module are:
• To learn natural language processing using deep learning techniques
• To understand how to use word vector representations and embedding layers
• To embellish our learning with a look at the various applications of NLP, like sentiment analysis, named entity recognition and machine translation
Part 1 – Introduction to Word Embeddings
Word Representation
Up to this point, we have used a vocabulary to represent words:
To represent a single word, we created a one-hot vector:
Now, suppose we want our model to generalize between different words. We train the model on the following sentence:
I want a glass of Apple juice.
We have given “I want a glass of Apple” as the training sequence and “juice” as the target. We want our model to generalize the solution of, say:
I want a glass of Orange ____ .
Why won’t our previous vocabulary approach work? Because it lacks the flexibility to generalize. We will try to calculate the similarity between the vector representing the words Apple and Orange.
We’ll inevitably get zero as output as the product of any two one-hot vectors is always zero.
Instead of having a one-hot vector for representations, what if we represent each word with a set of features? Check out this table:
Say, we have 300 features for each word. So, for example, the word ‘Man’ will be represented by a 300 dimensional vector named e[5391].
We can also use these representations for visualization purposes. Convert the 300 dimensional vector into a 2-d vector and then plot it. Quite a few algorithms exist for doing this but my favorite
one is easily t-SNE.
Using word embeddings
Word embeddings really help us to generalize well when working with word representations.
Suppose you are performing a named entity recognition task and only have a few records in the training set. In such a case, you can either take pretrained word embeddings available online or create
your own embeddings. These embeddings will have features for all the words in that vocabulary.
Here are the two primary steps for replacing one-hot encoded representations with word embeddings:
1. Learn word embeddings from a large text corpus (or download pretrained word embeddings)
2. Transfer these embeddings to a new task with a smaller training set
Next, we will look at the properties of word embeddings.
Properties of word embeddings
Suppose you get a question – “If Man is to Woman, then King is to?”. Most keen puzzle solvers will have seen these kind of questions before!
The likely answer to this question is Queen. But how will the model decide that? This is actually one of the most widely used examples to understand word embeddings. We have embeddings for Man,
Woman, King and Queen. The embedding vector of Man will be similar to that of Woman and the embedding vector of King will be similar to that of Queen.
We can use the following equation:
e[man] – e[woman] = e[king] – ?
Solving this gives us a 300 dimensional vector with a value equal to the embeddings of queen. We can use a similarity function to determine the similarity between two word embeddings as well. The
similarity function is given by:
This is a cosine similarity. We can also use the Euclidean distance formula:
There are a few other different types of similarity measures which you’ll find in core recommendation systems.
Embedding matrix
We actually end up learning an embedding matrix when we implement a word embeddings algorithm. If we’re given a vocabulary of 10,000 words and each word has 300 features, the embedding matrix,
represented as E, will look like this:
To find the embeddings of the word ‘orange’ which is at the 6257^th position, we multiply the above embedding matrix with the one-hot vector of orange:
E . O[6257] = e[6257]
The shape of E is (300, 10k), and of O is (10k, 1). Hence, the embedding vector e will be of the shape (300, 1).
Part 2 – Learning Word Embeddings: Word2Vec & GloVe
Learning Word Embeddings
Consider we are building a language model using a neural network. The input to the model is “I want a glass of orange” and we want the model to predict the next word.
We will first learn the embeddings of each of the words in the sequence using a pre trained word embedding matrix and then pass those embeddings to a neural network which will have a softmax
classifier at the end to predict the next word.
This is how the architecture will look like. In this example we have 6 input words, each word is represented by a 300 dimensional vector and hence the input of the sequence will be 6*300 = 1800
dimensional. The parameters for this model are:
• Embedding matrix (E)
• W^[1], b^[1]
• W^[2], b^[2]
We can reduce the number of input words, to decrease the input dimensions. We can say that we want our model to use previous 4 words only to make prediction. In this case the input will be 1200
dimensional. The input can also be referred as context and there can be various ways to select the context. Few possible ways are:
• Take last 4 words
• Take 4 words from left and 4 words from right
• Last 1 word
• We can also take one nearby word
This is how we can solve language modeling problem where we input the context and predict some target words. In the next section, we will look at how Word2Vec can be applied for learning word
It is a simple and more efficient way to learn word embeddings. Consider we have a sentence in our training set:
I want a glass of orange juice to go along with my cereal.
We use a skip gram model to pick a few context and target words. In this way we create a supervised learning problem where we have an input and its corresponding output. For context, instead of
having only last 4 words or last 1 word, we randomly pick a word to be the context word and then randomly pick another word within some window (say 5 to the left and right) and set that as the target
word. Some of the possible context – target pairs could be:
Context Target
orange juice
orange glass
orange my
These are only few pairs, we can have many more pairs as well. Below are the details of the model:
Vocab size = 10,000k
Now, we want to learn a mapping from some context (c) to some target (t). This is how we do the mapping:
O[c] -> E -> e[c] -> softmax -> y(hat)
Here, e[c] = E.O[c]
Here softmax is calculating the probability of getting the target word (t) as output given the context word (c).
Finally, we calculate the loss as:
Using a softmax function creates a couple of problems to the algorithm, one of them is computational cost. Everytime we calculate the probability:
We have to carry out the sum over all 10,000 words in the vocabulary. If we use a larger vocabulary of say 100,000 words or even more the computation gets really slow. Few solutions to this problem
Using a hierarchical softmax classifier. So, instead of classifying some word into 10,000 categories in one go, we first classify it into either first 5000 categories or later 5000 categories, and so
on. In this way we do not have to compute the sum over all 10,000 words every time. The flow of hierarchical softmax classifier looks like:
One question that might arise in your mind is how to choose the context c? One way could be to sample the context word at random. The problem with random sampling is that the common words like is,
the will appear more frequently whereas the unique words like orange, apple might not even appear once. So, we try to choose a method which gives more weightage to less frequent words and less
weightage to more frequent words.
In the next section we will see a technique that helps us to reduce the computation cost and learn much better word embeddings.
Negative Sampling
In the skip gram models, as we have seen earlier, we map context words to target words which allows us to learn word embeddings. One downside of that model was high computational cost due to softmax.
Consider the same example that we took earlier:
I want a glass of orange juice to go along with my cereal.
What negative sampling will do is, it creates a new supervised learning problem, where given a pair of words say “orange” and “juice”, we will predict whether it is a context-target pair? For the
above example, the new supervised learning problem will look like:
Context (c) Word (t) Target (y)
orange juice 1
orange king 0
orange book 0
orange the 0
Since orange-juice is a context-target pair, we set the Target value as 1, whereas, orange-king is not a pair for above example, and hence Target is 0. These 0 values represent that it is a negative
sample. We now apply a logistic regression to calculate the probability of whether the pair is a context-target pair or not. The probability is given by:
We can have k pair of words for training the model. k can range between 5-20 for smaller dataset while for larger dataset, we choose smaller k (2-5). So, if we build a neural network and the input is
orange (one hot vector of orange):
We will have 10,000 possible classification problems each corresponding to different words from the vocabulary. So, this network will tell all the possible target words corresponding to the context
word orange. Here, instead of having one giant 10,000 way softmax, which is computationally very slow, we have 10,000 binary classification problems which is comparatively faster as compared to the
Context word is chosen from the sequence and once it is chosen, we randomly pick another word from the sequence to be a positive sample and then pick few of the other random words from the vocabulary
as negative samples. In this way, we can learn word embeddings using simple binary classification problems. Next we will see even simpler algorithm for learning word embeddings.
GloVe word vectors
We will work on the same example:
I want a glass of orange juice to go along with my cereal.
Previously, we were sampling pairs of words (context and target) by picking two words that appears in close proximity to each other from our text corpus. GloVe or Global Vectors for word
representation makes it more explicit. Let’s say:
X[ij] = number of times i appears in context of j
Here, i is similar to the target (t) and j is similar to the context (c). GloVe minimizes the following:
Here, f(X[ij]) is the weighing term. It gives less weightage to more frequent words (such as stop words like this, is, of, a, ..) and more weightage to less frequent words. Also, f(X[ij]) = 0 when (X
[ij]) = 0. It has been found that minimizing the above equation finally leads to a good word embeddings. Now, we have seen many algorithms for learning word embeddings. Next we will see the
application using word embeddings.
Part 3 – Applications using Word Embeddings
Sentiment Classification
You must already be well aware of what sentiment classification is so I’ll make this quick. Check out the below table which contains some text and its corresponding sentiment:
X (text) y (sentiment)
The dessert is excellent. ****
Service was quite slow. **
Good for a quick meal, but nothing special. ***
Completely lacking in good taste *
The applications of sentiment classification are varied, diverse and HUGE. But In most cases you’ll encounter, the training doesn’t come labelled. This is where word embeddings come to the rescue.
Let’s see how we can use word embeddings to build a sentiment classification model.
We have the input as: “The dessert is excellent”.
Here, E is the pretrained embedding matrix of, say, 100 billion words. We multiple the one-hot encoded vectors of each word with the embedding matrix to get the word representations. Next, we sum up
all these embeddings and apply a softmax classifier to decide what should be the rating of that review.
It only takes the mean of all the words, so if the review is negative but it has more positive words, then the model might give it a higher rating. Not a great idea. So instead of just summing up the
embedding to get the output, we can use an RNN for sentiment classification.
This is a many-to-one problem where we have a sequence of inputs and a single output. You are now well equipped to solve this problem. 🙂
Module 3: Sequence Models & Attention Mechanism
Welcome to the final module of the series! Below are the two objectives we will primarily achieve in this module:
• Understanding the attention mechanism
• To understand where the model should focus its attention given an input sequence
Basic Models
I’m going to keep this section industry relevant, so we’ll cover models which are useful for applications like machine translation, speech recognition, etc. Consider this example – we are tasked with
building a sequence-to-sequence model where we want to input a French sentence and translate it into English. The problem will look like:
Here x^<1>, x^<2> are the inputs and y^<1>, Y^<2> are outputs. To build a model for this, we have an encoder part which takes an input sequence. The encoder is built as an RNN, or LSTM, or GRU. After
the encoder part, we build a decoder network which takes the encoding output as input and is trained to generate the translation of the sentence.
This network is popularly used for Image Captioning as well. As input, we have the image’s features (generated using a convolutional neural network).
Picking the most likely sentence
The decoder model of a machine translation system is quite similar to that of a language model. But there is one key difference between the two. In a language model, we start with a vector of all
zeros, whereas in machine translation, we have an encoder network:
The encoder part of the machine translation model is a conditional language model where we are calculating the probability of outputs given an input:
Now, for the input sentence:
We can have multiple translations like:
We want the best translation out of all the above sentences.
The good news? There is an algorithm that helps us choose the most likely translation.
Beam Search
This is one of the most commonly used algorithms for generating the most likely translations. The algorithm can be understood using the below 3 steps:
Step 1: It picks the first translated word and calculates its probability:
Instead of just picking one word, we can set a bean width (B) to say B=3. It will pick the top 3 words that can possibly be the first translated word. These three words are then stored in the
computer’s memory.
Step 2: Now, for each selected word in step 1, this algorithm calculates the probability of what the second word could be:
If the beam width is 3 and there are 10,000 words in the vocabulary, the total number of possible combinations will be 3 * 10,000 = 30,000. We evaluate all these 30,000 combinations and pick the top
3 combinations.
Step 3: We repeat this process until we get to the end of the sentence.
By adding one word at a time, beam search decides the most likely translation for any given sentence. Let’s look at some of the refinements we can do to beam search in order to make it more
Refinements to Beam Search
Beam search maximizes this probability:
This probability is calculated by multiplying probabilities of different words. Since particular probabilities are very tiny numbers (between 0 and 1), if we multiply such small numbers multiple
time, final output is very small which creates problem in computations. So, instead we can use the following formula to calculate the probabilities:
So, instead of maximizing the products, we maximize the log of a product. Even using this objective function, if the translated sentence has more words, their product will go down to more negative
values, and hence we can normalize the function as:
So, for all the sentences selected using beam search, we calculate this normalized log likelihood and then pick the sentence which gives the highest value. There is one more detail that I would like
to share and it is how to decide the beam width B?
If the beam width is more, we will have better results but the algorithm will become slow. On the other hand, choosing smaller B will make the algorithm run faster but the results will not be
accurate. There is no hard rule to choose beam width and it can vary according to the applications. We can try different values and then choose the one that gives the best results.
Error analysis in beam search
Beam search is an approximation algorithm which outputs the most likely translations based on the beam width. But it is not always necessary that it will generate the correct translation everytime.
If we are not getting the correct translations, we have to analyse whether it is due to the beam search or our RNN model is causing problems. If we find that the beam search is causing the problem,
we can increase the beam width and hopefully we will get better results. How to decide whether we should focus on improving the beam search or the model?
Suppose the actual translation is:
Jane visits Africa in September (y*)
And the translation that we got from the algorithm is:
Jane visited Africa last September (y(hat))
RNN will compute P(y* | x) and P(y(hat) | x)
Case 1: P(y* | x) > P(y(hat) | x)
This means beam search chose y(hat) but y* attains higher probability. So, beam search is at fault and we might consider increasing the beam width.
Case 2: P(y* | x) <= P(y(hat) | x)
This means that y* is better translation than y(hat) but RNN predicted the opposite. Here, RNN is at fault and we have to improve the model.
So, for each translation, we decide whether RNN is at fault or the beam search. Finally we figure out what fraction of errors are caused due to beam search vs RNN model and update either beam search
or RNN model based on which one is more at fault. In this way we can improve the translations.
Attention Model
Up to this point, we have seen the encoder-decoder architecture for machine translation where one RNN reads the input and the other one outputs a sentence. But when we get very long sentences as
input, it becomes very hard for the model to memorize the entire sentence.
What attention models do is they take small samples from the long sentence and translate them, then take another sample and translate them, and so on.
We use an alpha parameter to decide how much attention should be given to a particular input word while we generate the output.
⍺^<1,2> = For generating the first word, how much attention should be given to the second input word
Let’s understand this with an example:
So, for generating the first output y^<1>, we take attention weights for each word. This is how we compute the attention:
If we have T[x] input words and T[y] output words, then the total attention parameters will be T[x] * T[y].
You might already have gathered this – Attention models are one of the most powerful ideas in deep learning.
End Notes
Sequence models are awesome, aren’t they? They have a ton of practical applications – we just need to know the right technique to use in specific situations. And my hope is that you will have learned
those techniques in this guide.
Word embeddings are a great way to represent words and we saw how these word embeddings can be built and use. We have gone through different applications of word embeddings and finally we covered
attention models as well which are one of the most powerful ideas for building sequence models.
If you have any query or feedback related to the article, feel free to share them in the comments section below. Looking forward to your responses!
Responses From Readers
Hi Pulkit, As Always, a good work .. Thanks for sharing such a wonderful article.. Please share your knowledge like that, so that we can learn something from you.
pretty interesting post
How can you just take all of the deeplearning.ai course content and share it here on your website as your own? Have you taken permission from Andrew Ng?
"Here, instead of having one giant 10,000 way softmax, which is computationally very slow, we have 10,000 binary classification problems which is comparatively very slow as compared to the softmax."
- Do you mean comparitively fast? Are you advocating this as a solution for the softmax problem or not? | {"url":"https://www.analyticsvidhya.com/blog/2019/01/sequence-models-deeplearning/?utm_source=reading_list&utm_medium=https://www.analyticsvidhya.com/blog/2022/08/on-device-personalization-of-asr-models-for-disordered-speech/","timestamp":"2024-11-08T15:32:33Z","content_type":"text/html","content_length":"433823","record_id":"<urn:uuid:ed9b3172-b035-4c3a-b02b-1a1832a136c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00697.warc.gz"} |
a data.frame with row names in the first column and numeric values in all other columns. Usually the piped-in result of a call to crosstab that included the argument percent = "none".
the denominator to use for calculating percentages. One of "row", "col", or "all".
should counts be displayed alongside the percentages?
how many digits should be displayed after the decimal point?
display a totals summary? Will be a row, column, or both depending on the value of denom.
method to use for truncating percentages - either "half to even", the base R default method, or "half up", where 14.5 rounds up to 15. | {"url":"https://www.rdocumentation.org/packages/janitor/versions/2.2.0/topics/adorn_crosstab","timestamp":"2024-11-06T21:37:44Z","content_type":"text/html","content_length":"89060","record_id":"<urn:uuid:29710447-b7aa-450b-89db-e400025e1628>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00846.warc.gz"} |
Columnar Transposition Cipher
Columnar Transposition involves writing the plaintext out in rows, and then reading the ciphertext off in columns. In its simplest form, it is the
Route Cipher
where the route is to read down each column in order. For example, the plaintext "a simple transposition" with 5 columns looks like the grid below
So far this is no different to a specific route cipher. Columnar Transposition builds in a keyword to order the way we read the columns, as well as to ascertain how many columns to use.
We first pick a keyword for our encryption. We write the plaintext out in a grid where the number of columns is the number of letters in the keyword. We then title each column with the respective
letter from the keyword. We take the letters in the keyword in alphabetical order, and read down the columns in this order. If a letter is repeated, we do the one that appears first, then the next
and so on.
As an example, let's encrypt the message "The tomato is a plant in the nightshade family" using the keyword tomato. We get the grid given below.
We have written the keyword above the grid of the plaintext, and also the numbers telling us which order to read
The plaintext is written in a grid beneath the keyword. The numbers represent the the columns in. Notice that the first "O" is 3 and the second "O" is 4, and the same thing for the two "T"s.
alphabetical order of the keyword, and so the order in which the columns will be read.
Starting with the column headed by "A", our ciphertext begins "TINESAX" from this column. We now move to the column headed by "M", and so on through the letters of the keyword in alphabetical order
to get the ciphertext "TINESAX / EOAHTFX / HTLTHEY / MAIIAIX / TAPNGDL / OSTNHMX" (where the / tells you where a new column starts). The final ciphertext is thus "TINES AXEOA HTFXH TLTHE YMAII AIXTA
PNGDL OSTNH MX".
The decryption process is significantly easier if nulls have been used to pad out the message in the encryption process. Below we shall talk about how to go about decrypting a message in both
Firstly, if nulls have been used, then you start by writing out the keyword and the alphabetical order of the letters of the keyword. You must then divide the length of the ciphertext by the length
of the keyword. The answer to this is the number of rows you need to add to the grid. You then write the ciphertext down the first column until you reach the last row. The next letter becomes the
first letter in the second column (by the alphabetical order of the keyword), and so on.
As an example, we shall decrypt the ciphertext "ARESA SXOST HEYLO IIAIE XPENG DLLTA HTFAX TENHM WX" given the keyword potato. We start by writing out the keyword and the order of the letters. There
are 42 letters in the ciphertext, and the keyword has six letters, so we need 42 ÷ 6 = 7 rows.
Now we read off the plaintext row at a time to get "potatoes are in the nightshade family as well".
When no nulls have been used we have to do a slightly different calculation. We divide the length of the ciphertext by the length of the keyword, but this is likely to not be a whole number. If this
is the case, then we round the answer up to the next whole number. We then multiply this number by the length of the keyword, to find out how many boxes there are in total in the grid. Finally, we
take the length of the ciphertext away from this answer. Thie number (which should be less than the length of the key) is how many nulls there would have been if used, so we need to black out these
last few boxes, so we don't put letters in them whilst decrypting.
To decrypt the ciphertext "ARESA SOSTH EYLOI IAIEP ENGDL LTAHT FATEN HMW", we start similarly to above, by heading the columns with the keyword potato. This time, to find how many rows we need, we do
38 ÷ 6 = 6.3333. We round this up to the next number, which is 7, so we need 7 rows. Whe we multiply 6 x 7 we get 42, and 42 - 38 = 4. Hence we need 4 placeholders in the last row. We get the grid
below to the left. After pluggin the ciphertext letters in, in the same way as above, we get the grid on the right.
Finally, we read off the plaintext in rows, to reveal the same plaintext as the other example, "potatoes are in the nightshade family as well".
Columnar Transposition has the security of a transposition cipher with the extra befefit of utilizing a keyword. This is easier to remember than some complex route, and provides a better mixing
effect than the railfence cipher.
One of the key benefits of a transposition cipher over a substitution cipher is that they can be applied more than once. For example, the Columnar Transposition cipher could be applied twice on the
plaintext. This is done by following the process above to produce some ciphertext, but then to use the same (or a different) keyword and to plug this ciphertext into the grid and read off the rows
again. Our example above would give us
After this double transposition, we get the ciphertext "EATMX DHNOH YIGNI EXEAN TATTI AOXTX FHIPS SHLAT LM".
This double transposition increases the security of the cipher significantly. It could also be implemented with a different keyword for the second iteration of the cipher. In fact, until the
invention of the VIC Cipher, Double Transposition was seen as the most secure cipher for a field agent to use reliably under difficult circumstances. | {"url":"https://crypto.interactive-maths.com/columnar-transposition-cipher.html","timestamp":"2024-11-09T06:12:30Z","content_type":"text/html","content_length":"96111","record_id":"<urn:uuid:2211ef28-81c8-46af-9af6-9fcf73cf6cea>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00085.warc.gz"} |
Show change in de Broglie wavelength from change in speed
• Thread starter Feynman.12
• Start date
In summary, the conversation discusses a question from a book regarding a nonrelativistic particle and the change in de Broglie wavelength. The individual attempted to solve the equation but found an
incorrect answer. They were then given guidance on their mistake and were able to correct their solution, leading to a mindblowing realization about differentiation and error propagation.
Homework Statement
Show that for a nonrelativistic particle, a small change in speed leads to a change in de Broglie wavelength given from
The Attempt at a Solution
I have tried to expand the left hand side of the equation, but found that it gave the answer of v0/delta v. My definition of delta lambda is the final wavelength minus the initial wavelength.
Staff Emeritus
Science Advisor
Homework Helper
Gold Member
Feynman.12 said:
The Attempt at a Solution
I have tried to expand the left hand side of the equation, but found that it gave the answer of v0/delta v. My definition of delta lambda is the final wavelength minus the initial wavelength.
You need to actually show us what you did. How else can we find out where and if you went wrong?
Orodruin said:
You need to actually show us what you did. How else can we find out where and if you went wrong?
Sorry, my attempt is as follows.
In the book (Eisberg, Resnick - quantum physics of atoms, molecules, solids, nuclei and particles, pg. 82, question 10) it has the answer as that given above, however, my attachment proves that
wrong. Is there anywhere I may have made a mistake?
Science Advisor
Homework Helper
In your first step you write something that looks like ##{1\over a -b} = {1\over a} - {1\over b}## to me
BvU said:
In your first step you write something that looks like ##{1\over a -b} = {1\over a} - {1\over b}## to me
I can't find where I have done this. How would you do this question?
Staff Emeritus
Science Advisor
Homework Helper
Gold Member
Feynman.12 said:
I can't find where I have done this. How would you do this question?
What you did is equivalent to that.
You had ##\displaystyle \ \Delta\lambda=\frac{h}{mv_f}-\frac{h}{mv_i} \ .##
Then you did this:
##\displaystyle \ \frac1{\Delta\lambda}=\frac{mv_f}{h}-\frac{mv_i}{h} \ .##
However, ##\displaystyle \ \frac1{\displaystyle\frac{h}{mv_f}-\frac{h}{mv_i}}\ne\frac{mv_f}{h}-\frac{mv_i}{h} \ .##
SammyS said:
What you did is equivalent to that.
You had ##\displaystyle \ \Delta\lambda=\frac{h}{mv_f}-\frac{h}{mv_i} \ .##
Then you did this:
##\displaystyle \ \frac1{\Delta\lambda}=\frac{mv_f}{h}-\frac{mv_i}{h} \ .##
However, ##\displaystyle \ \frac1{\displaystyle\frac{h}{mv_f}-\frac{h}{mv_i}}\ne\frac{mv_f}{h}-\frac{mv_i}{h} \ .##
Okay, I understand that! I was able to try and attempt to solve this with the new knowledge, however I got stuck. I derive an answer that is
##\displaystyle \ \frac{\Delta \lambda}{\lambda_0}=\frac{-\Delta v}{v_f}##
If my mathematics is correct, that would mean that
##\displaystyle \ \frac{-\Delta v}{v_f}=\frac{\Delta v}{v_0}##
But I can't think of any relations that would make the above true?
Science Advisor
Homework Helper
##\displaystyle \ \frac{\Delta \lambda}{\lambda_0}=\frac{-\Delta v}{v_f}\ \ ## is correct. The book means to say ##
\displaystyle \ \frac{\Delta \lambda}{\lambda}=\frac{|\Delta v|}{v}\ \ ## but finds the sign so trivial that it leaves out the ##|\ |##.
And for a small change ##v = v_0 \approx v_f## in the denominator -- NOT, of course in the difference.
This reminds me of differentiation and error propagation:
With ##y = 1/x## you have ##dy = -1/x^2 \; dx## so ##dy/y = -dx/x ## !
BvU said:
##\displaystyle \ \frac{\Delta \lambda}{\lambda_0}=\frac{-\Delta v}{v_f}\ \ ## is correct. The book means to say ##
\displaystyle \ \frac{\Delta \lambda}{\lambda}=\frac{|\Delta v|}{v}\ \ ## but finds the sign so trivial that it leaves out the ##|\ |##.
And for a small change ##v = v_0 \approx v_f## in the denominator -- NOT, of course in the difference.
This reminds me of differentiation and error propagation:
With ##y = 1/x## you have ##dy = -1/x^2 \; dx## so ##dy/y = -dx/x ## !
mindblow moment. Thankyou for your help!
FAQ: Show change in de Broglie wavelength from change in speed
1. How is de Broglie wavelength related to speed?
De Broglie wavelength is inversely proportional to the speed of an object. This means that as the speed of an object increases, its de Broglie wavelength decreases.
2. How can the change in de Broglie wavelength be calculated from a change in speed?
The change in de Broglie wavelength can be calculated using the de Broglie equation: λ = h/mv, where λ is the de Broglie wavelength, h is Planck's constant, m is the mass of the object, and v is its
3. What is the significance of a change in de Broglie wavelength?
A change in de Broglie wavelength can indicate a change in the momentum of an object. This can be useful in understanding the behavior of microscopic particles, such as electrons, which exhibit
wave-particle duality.
4. Can the de Broglie wavelength be observed experimentally?
Yes, the de Broglie wavelength has been observed experimentally using diffraction and interference techniques. These experiments have confirmed the wave-like nature of particles and the validity of
the de Broglie equation.
5. How does the de Broglie wavelength change for different types of particles?
The de Broglie wavelength is inversely proportional to the mass of the particle. This means that lighter particles, such as electrons, have a larger de Broglie wavelength than heavier particles, such
as protons. Additionally, the de Broglie wavelength also changes with the speed of the particle. | {"url":"https://www.physicsforums.com/threads/show-change-in-de-broglie-wavelength-from-change-in-speed.823273/","timestamp":"2024-11-03T13:07:19Z","content_type":"text/html","content_length":"118719","record_id":"<urn:uuid:90e59f5a-afd9-4b34-93aa-259a7098fe9d>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00677.warc.gz"} |
What's the difference between Machine Learning, Artificial Neural Networks, and Deep Learning?
What is actually the difference between the Machine Learning, Artificial Neural Networks, and Deep Learning?
Machine Learning collocates a number of procedures and methods for the automated generation of mathematical models from examples and data (socalled example bases or data based models). Hereby Machine
Learning comprizes the mathematical content of Data Mining, which deals with the extraction of knowledge and mathematical models from given data.
Most essential Machine Learning model types are Support Vector Machines, Classification and Regression Trees, Artificial Neural Networks, amongst others. That way Artifical Neural Networks are a part
of Machine Learning.
Deep Learning are a special forms of Artificial Neural Networks, which are recently extremely successful in the fields of image and speech recognition.
Summing up, Deep Leaning is a subfield of Artificial Neural Networks, which themselves are a branch of Machine Learning.
Last update on 2022-02-20 by Andreas Kuhn. | {"url":"https://www.andata.at/en/answer/whats-the-difference-between-machine-learning-artificial-neural-networks-and-deep-learning.html","timestamp":"2024-11-13T15:33:00Z","content_type":"text/html","content_length":"34704","record_id":"<urn:uuid:bdc74fad-100e-419a-92ad-88cfd0e84bdd>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00180.warc.gz"} |
The base of a triangular pyramid is a triangle with corners at (6 ,7 ), (2 ,5 ), and (3 ,1 ). If the pyramid has a height of 5 , what is the pyramid's volume? | HIX Tutor
The base of a triangular pyramid is a triangle with corners at #(6 ,7 )#, #(2 ,5 )#, and #(3 ,1 )#. If the pyramid has a height of #5 #, what is the pyramid's volume?
Answer 1
Volume of pyramid ${V}_{p} = \left(\frac{1}{3}\right) \left({A}_{t} \cdot h\right) = \textcolor{b l u e}{15}$
Volume of pyramid ${V}_{p} = \left(\frac{1}{3}\right) \left({A}_{t} \cdot h\right)$
Where ${A}_{t}$ is the base area of the triangle and h is the height of the pyramid.
Using shoelace formula,
${A}_{t} = \left(\frac{1}{2}\right) \left[\left(x 1 - x 3\right) \left(y 2 - y 1\right) - \left(x 1 - x 2\right) \left(y 3 - y 1\right)\right]$
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer 2
To find the volume of a triangular pyramid, you can use the formula:
[ V = \frac{1}{3} \times \text{base area} \times \text{height} ]
• Base of the triangular pyramid with vertices at ( (6, 7) ), ( (2, 5) ), and ( (3, 1) ).
• Height of the pyramid is ( h = 5 ).
First, calculate the area of the base triangle using the coordinates provided and the formula for the area of a triangle given its vertices.
Then, substitute the calculated base area and the given height into the formula for the volume of a pyramid.
Calculate the volume using these values.
Alternatively, you can use the method of vectors to find the area of the base triangle and then proceed with the volume calculation as described above.
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/the-base-of-a-triangular-pyramid-is-a-triangle-with-corners-at-6-7-2-5-and-3-1-i-8f9afa3f57","timestamp":"2024-11-06T23:57:50Z","content_type":"text/html","content_length":"578233","record_id":"<urn:uuid:1ffd324d-2329-4013-a7cc-763ef3ab10c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00716.warc.gz"} |
Unlocking the Measure of Angle CAB Within Circle O: A Guide to Arc Intercepts
Unlocking The Measure Of Angle Cab Within Circle O: A Guide To Arc Intercepts
To measure the angle CAB in circle O, determine if CAB is a central angle, intercepted by arc AC, or an inscribed angle, formed by chords CA and CB. If it’s a central angle, its measure is calculated
as the fraction of the circle it intercepts, or 360° multiplied by the ratio of the length of arc AC to the circumference of the circle. If CAB is an inscribed angle, its measure is half the measure
of its intercepted central angle, which can be determined using the same formula. Identify the supplementary angle to CAB, which will have a measure of 180° minus the measure of CAB.
Central Angles: Unveiling the Secrets of Angles within Circles
In the realm of geometry, circles hold a special place, not only for their captivating symmetry but also for their captivating angles. Among these is the central angle, a special type of angle that
resides at the heart of a circle, connecting its two radii.
Understanding Central Angles
Imagine a circle as a miniature solar system, with its center as the radiant sun and its radii as the orbiting planets. A central angle is formed when two radii meet at the center, creating an arc
that defines a sector of the circle.
Just like the slices of a pie, the size of a central angle is determined by the fraction of the circle it intercepts. A full circle consists of 360 degrees; a central angle that covers half the
circle measures 180 degrees, while one that covers just a quarter measures 90 degrees.
Related Concepts: Inscribed Angles and Tangents
• Inscribed angles: These special angles are tucked away inside circles, formed by two chords that intersect within the circle. Central angles have a close relationship with inscribed angles; the
measure of an inscribed angle is exactly half the measure of its intercepted central angle.
• Tangents: Lines that only touch a circle at a single point are known as tangents. The tangent to a circle at a point is perpendicular to the radius drawn to that point. This perpendicularity
plays a crucial role in understanding central angles.
By unraveling the intricate connections between central angles and these related concepts, we unlock the secrets of geometrical relationships within circles.
Measuring Central Angles: Unlocking the Secrets of Circular Geometry
Imagine yourself as a master architect tasked with designing a magnificent circular structure. Central angles play a pivotal role in shaping the form and function of your creation. To master this
essential concept, let’s embark on a mathematical exploration!
Central angles, like sentinels guarding the circular realm, are formed when two radii extend from the center point to the perimeter of a circle. The measure of a central angle reveals the fraction of
the circle it intercepts.
To calculate central angle measures, we employ a simple yet powerful formula:
**Central Angle Measure = (Arc Length / Circumference) x 360°**
Let’s navigate through a practical example:
Consider a circle with a circumference of 360 cm. If an arc within the circle spans 120 cm, the central angle intercepting this arc can be calculated as:
Central Angle Measure = (120 cm / 360 cm) x 360°
= **120°**
This means that the central angle covers one-third of the circle’s circumference.
By understanding central angles, architects, engineers, and mathematicians can design and analyze structures with precision and confidence. From towering skyscrapers to intricate bridges, central
angles are the invisible threads that weave together the fabric of our built environment.
Inscribed Angles and Their Connection to Central Angles
In the fascinating world of geometry, circles play a pivotal role, harboring within them intriguing angles known as inscribed angles. These special angles share an intimate relationship with their
majestic counterparts, central angles, forming an intricate web of connections that will unravel before our very eyes.
As we embark on this geometric adventure, let us first define an inscribed angle as an angle whose vertex lies on the circle and whose sides are chords of the circle. Now, we set our sights on
central angles, which are angles whose vertex lies at the center of the circle.
What makes these angles kindred spirits is their shared connection: every inscribed angle is subtended by a central angle. This celestial bond means that the measure of an inscribed angle is directly
proportional to the measure of its intercepted central angle.
More precisely, the measure of an inscribed angle is exactly half the measure of its intercepted central angle. In other words, if a central angle measures 120 degrees, its intercepted inscribed
angle will measure a modest 60 degrees.
This profound relationship between inscribed and central angles empowers us with a potent tool to ascertain the hidden measures of angles within the confines of a circle. By mastering this geometric
dance, we unlock the secrets of circles and empower ourselves with a deeper understanding of this captivating shape.
Measuring Inscribed Angles: A Guide for Geometric Exploration
In the world of geometry, angles hold a significant place, providing insights into the relationships between lines, shapes, and circles. Among the diverse types of angles, inscribed angles stand out
as unique entities nestled within the embrace of circles. To unravel their secrets, let’s delve into the art of measuring inscribed angles.
Formulaic Insight
The measure of an inscribed angle, denoted as ∠CAB, is determined by a simple yet elegant formula:
∠CAB = (1/2)∠ACB
where ∠ACB represents the measure of the central angle intercepted by the same arc AB that subtends the inscribed angle ∠CAB.
Visualizing the Relationship
Imagine a circle with an inscribed angle ∠CAB. Draw a central angle ∠ACB that intercepts the same arc AB. Notice how the inscribed angle appears as the smaller, nestled angle within the larger
central angle. The key to understanding their relationship lies in the fact that the inscribed angle ∠CAB measures exactly half the central angle ∠ACB.
Examples in Action
Let’s put the formula to the test with a practical example. Suppose we have an inscribed angle ∠CAB and we know that the central angle ∠ACB measures 120 degrees. Using our formula:
∠CAB = (1/2)∠ACB
∠CAB = (1/2)120 degrees
∠CAB = 60 degrees
Therefore, the inscribed angle ∠CAB measures 60 degrees.
Applications in Geometry
Measuring inscribed angles has far-reaching applications in geometry, including:
• Determining the measure of other angles: Knowing the measure of one inscribed angle can help determine the measures of other inscribed angles in the same circle.
• Finding arc lengths: The measures of inscribed angles can be used to find the lengths of arcs in circles.
• Exploring geometric patterns: Inscribed angles play a crucial role in understanding geometric patterns and relationships.
By mastering the art of measuring inscribed angles, you unlock a powerful tool for exploring the intricate world of geometry. So, the next time you encounter an inscribed angle, remember the formula,
visualize the relationship, and embark on a geometric adventure filled with discovery.
Supplementary Angles: Unraveling the Secret of Angle CAB
Understanding Supplementary Angles
Imagine being in a boxing ring, facing your opponent across a straight line. The total angle you can see from your position (180 degrees) is divided into two angles, each known as a supplementary
angle. These angles add up to 180 degrees, making them a perfect pair.
Complementary Angles, Vertical Angles, and Linear Pairs
Supplementary angles are not the only stars of the angle show. Two angles that add up to 90 degrees are called complementary angles. Think of them as two pieces of a puzzle that fit together
Vertical angles are opposite angles formed by two intersecting lines. They’re always supplementary, meaning they’re like Siamese twins stuck at 180 degrees.
Lastly, linear pairs are adjacent angles that form a straight line. They’re just two complementary angles hanging out together.
Unveiling the Measure of Angle CAB
Now, let’s say you’re standing inside a circle, staring at an angle called CAB. To find out its measure, you need to determine if it’s a central angle (formed by two radii) or an inscribed angle
(formed by a chord and a tangent).
For Central Angles:
• Imagine slicing the pizza circle into two pieces.
• The measure of the central angle is proportional to the fraction of the circle it intercepts.
• So, if the angle covers half the circle, its measure will be 180 degrees.
For Inscribed Angles:
• An inscribed angle is like a humble servant to its central counterpart.
• Its measure is always half the measure of the central angle that intercepts the same arc.
• So, if the central angle is 120 degrees, the inscribed angle will measure 60 degrees.
Utilizing Supplementary Angles
Understanding supplementary angles is key to figuring out the measure of CAB. If you know that angle CAB is supplementary to another angle (let’s call it ABC) and you can determine the measure of
angle ABC, you can simply subtract that value from 180 degrees to get the measure of CAB.
• Supplementary angles are like two halves of a whole, always adding up to 180 degrees.
• Inscribed angles are shy creatures, always half the size of their central counterparts.
• Understanding supplementary angles empowers you to unlock the secret of angle CAB, whether it’s a central or inscribed angle.
Calculating the Measure of Angle CAB: A Comprehensive Guide
When delving into the world of geometry, understanding the intricacies of angles is crucial. This article will guide you through the process of determining the measure of angle CAB in a circle, a
common task in mathematical problem-solving.
Central Angles vs. Inscribed Angles
The first step in this endeavor is to identify whether angle CAB is a central angle or an inscribed angle.
Central angles are formed by two radii of the circle that intersect at the center. The measure of a central angle is determined by the fraction of the circumference it intercepts.
Inscribed angles, on the other hand, are formed by two chords that intersect inside the circle. The measure of an inscribed angle is half the measure of its intercepted central angle.
Steps for Determining Angle CAB
For Central Angles:
1. Identify the central angle formed by radii OC and OB.
2. Measure the arc intercepted by the central angle.
3. Calculate the central angle measure by dividing the length of the intercepted arc by the circumference of the circle.
For Inscribed Angles:
1. Identify the inscribed angle formed by chords AC and BC.
2. Find the corresponding central angle formed by radii OC and OB.
3. Calculate the inscribed angle measure by dividing the measure of the central angle by 2.
Importance of Identifying Angle Type
Knowing the type of angle is crucial for accurate measurement. If you mistakenly treat a central angle as an inscribed angle (or vice versa), you will obtain an incorrect measure.
Determining the measure of angle CAB in a circle requires a clear understanding of central and inscribed angles. By following the steps outlined above and correctly identifying the angle type, you
can confidently solve any problem involving angle measurement in circles. | {"url":"https://www.pattontheedge.ca/unlock-angle-cab-circle-o-arc-intercepts/","timestamp":"2024-11-04T11:14:27Z","content_type":"text/html","content_length":"159722","record_id":"<urn:uuid:5f235e47-2509-4a0d-bd3b-e5b06b689395>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00382.warc.gz"} |
picture changing operator
added pointer to Catenacci-Grassi-Noja 18
diff, v3, current
hyperlinked this reference:
and added this one:
diff, v4, current
added pointer to today’s
• Andrei Mikhailov, Dennis Zavaleta, Geometrical framework for picture changing operators in the pure spinor formalism (arXiv:2003.13995)
diff, v5, current
added pointer to:
• Ashoke Sen, Edward Witten. Filling the gaps with PCO’s. JHEP 09 004 (2015) [doi, arXiv:1504.00609]
diff, v8, current | {"url":"https://nforum.ncatlab.org/discussion/8772/","timestamp":"2024-11-07T06:19:57Z","content_type":"application/xhtml+xml","content_length":"45447","record_id":"<urn:uuid:9f32e0e5-e734-4e07-a189-4bf4661ecff1>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00085.warc.gz"} |
Using NVDA Screenreading Software With Equatio
Every Equatio insert contains the Alt Text behind each digital representation of the Math or STEM item. This is imperative for screenreader users. On the Google Document I have placed 4 singular math
items. I have also placed a fully solved logarithmic equation along with the solution for this problem. It is 9 lines of math. Take a listen and hear how NVDA is able to read aloud the Math for the
following problems and the logarithmic equation which is solved. In order to use and make math with Equatio, one must be able to navigate from the Google Document to the Equatio toolbar. Use the
keyboard shortcut ALT + Shft + Q to put the focus on your toolbar. Then navigate to the editor, type in a formula. To insert content, one may use the keyboard shortcut CTRL + ALT + Enter and the
content you made with Equatio will insert right into the platform you are using. The ability to level the playing field for all users is behind many of the things we do here at Texthelp and making
math and STEM content as accessible as possible is very important to us on the Equatio Team. | {"url":"https://academy.texthelp.com/equatio/howtos/using-nvda-screenreading-software-with-equatio/","timestamp":"2024-11-05T11:56:00Z","content_type":"text/html","content_length":"51951","record_id":"<urn:uuid:87853049-4b2d-4ae9-982c-587ad71d9c31>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00228.warc.gz"} |
P * t
30 Aug 2024
P * t & Analysis of variables
Equation: P * t
Variable: P
Impact of Power on Energy Function
X-Axis: -999900.4677314063to1000095.6417319027
Y-Axis: Energy Function
The Impact of Power (P) on the Energy Function: An Analytical Exploration
In various engineering disciplines, the relationship between power (P) and time (t) is fundamental to understanding energy functions. This article delves into the mathematical equation P * t and its
implications when the variable P represents power in different contexts. By analyzing this simple yet powerful equation, we aim to provide insights into how power affects energy functions, shedding
light on the underlying physics and mathematics.
The product of power (P) and time (t), denoted as P * t, is a ubiquitous equation in engineering and science. This relationship underlies many energy-related functions, including electrical power
consumption, mechanical work done by machines, and thermodynamic processes. In this article, we focus on the implications of P = P, where P represents power.
Mathematical Background
To understand the impact of power on energy functions, let us first recall the basic mathematical principles governing energy transformations. The work-energy theorem states that the net work (W)
done on a system is equal to its change in kinetic energy (ΔKE):
W = ΔKE
Expressing work as the product of power and time, we get:
P * t = W = ΔKE
Equation: P * t
Now, let us analyze the equation P * t in the context of power being a variable.
When P represents power, the equation P * t implies that energy (E) is proportional to both power and time. This relationship holds true for many real-world scenarios:
1. Electrical Power: The energy consumed by an electrical circuit over a given period is directly proportional to the product of its power rating and the duration of operation.
2. Mechanical Work: The work done by a machine or engine is equivalent to the product of its power output and the time for which it operates.
The equation P * t has significant implications for energy function analysis:
1. Energy Conservation: This relationship underscores the fundamental principle of energy conservation, where energy cannot be created or destroyed, only transformed from one form to another.
2. Power-Time Tradeoffs: The product of power and time highlights the tradeoff between these two variables in various energy-related applications. Higher power consumption often requires less time,
while lower power consumption may necessitate longer durations.
In conclusion, the equation P * t is a fundamental building block for understanding energy functions when power (P) is a variable. By analyzing this simple yet powerful relationship, we have gained
insights into how power affects energy transformations and the underlying physics that govern these processes. As engineers and scientists, recognizing the impact of power on energy functions will
enable us to design more efficient systems, optimize performance, and make informed decisions in various fields.
• To further explore the implications of P * t, consider analyzing this equation in different contexts, such as thermodynamics, electromagnetism, or mechanical engineering.
• Investigate how power-time tradeoffs affect energy-related applications, including renewable energy sources, energy storage systems, and electric vehicles.
• Develop mathematical models to quantify the relationship between power and time in specific scenarios, providing a deeper understanding of energy functions.
Related topics
Academic Chapters on the topic
Information on this page is moderated by llama3.1 | {"url":"https://blog.truegeometry.com/engineering/Analytics_Impact_of_Power_on_Energy_FunctionP_t.html","timestamp":"2024-11-03T03:22:25Z","content_type":"text/html","content_length":"17897","record_id":"<urn:uuid:2b01be0b-430d-40f7-8d13-9625a65214d2>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00277.warc.gz"} |
CBSE Class 10 Maths Term 1 Question Paper 2022| Download Free PDF
CBSE Class 10 Maths Term 1 Question Paper 2022 | Download Free PDF with Solutions
CBSE Class 10 Maths Term 1 Previous Year 2022 Question Paper with solution has been compiled by the top Maths experts of Vedantu to offer the best platform for practice. The crucial questions are all
covered with proper answers following the CBSE format.
To find out how to answer CBSE exam questions, download and practice the solution to the Maths Previous Year Question Paper Class 10 Term 1. Find out how the experts have followed a specific format
by following this file and become better at solving conceptual mathematics questions.
FAQs on CBSE Class 10 Maths Term 1 Question Paper 2022 with Solutions
1. How can you practice solving Class 10 Maths question papers?
You can download the previous years’ question papers for free from the website of Vedantu. These question papers come with solutions formulated by the top Maths experts.
2. Why should you practice solving Maths problems more?
Experts suggest solving Maths problems more often as it helps you develop a strong idea of the fundamental concepts taught in the textbook. When you solve questions more, these mathematical
principles of the important chapters imbibe well in your mind.
3. Why should you follow the solutions framed by the experts?
The experts of Vedantu follow the prescribed CBSE format to frame the answers. They also simplify the answers to a considerable level so that you can grab the context well. The solutions are designed
as a reference for the Class 10 students to follow.
4. What will happen if you follow the answering format in the Class 10 maths solutions?
By following the stepwise answering format, you will commit no mistakes and can score better in the board exam. | {"url":"https://www.vedantu.com/previous-year-question-paper/cbse-class-10-maths-term1-solved-question-paper-2022","timestamp":"2024-11-09T15:44:11Z","content_type":"text/html","content_length":"203540","record_id":"<urn:uuid:425a9cbe-23b9-47e8-96cf-7aaa8f2aa3a6>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00489.warc.gz"} |
How to find exact value COS(SIN^-1 4/5+TAN^-1 5/12) ? | HIX Tutor
How to find exact value COS(SIN^-1 4/5+TAN^-1 5/12) ?
Answer 1
$\rightarrow \cos \left({\sin}^{- 1} \left(\frac{4}{5}\right) + {\tan}^{- 1} \left(\frac{5}{12}\right)\right) = \frac{16}{65}$
Let #sin^(-1)(4/5)=x# then
Now, #rarrcos(sin^(-1)(4/5)+tan^(-1)(5/12))#
Let #tan^(-1)(63/16)=A# then
Sign up to view the whole answer
By signing up, you agree to our Terms of Service and Privacy Policy
Answer from HIX Tutor
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some
Not the question you need?
HIX Tutor
Solve ANY homework problem with a smart AI
• 98% accuracy study help
• Covers math, physics, chemistry, biology, and more
• Step-by-step, in-depth guides
• Readily available 24/7 | {"url":"https://tutor.hix.ai/question/how-to-find-exact-value-cos-sin-1-4-5-tan-1-5-12-44068a5646","timestamp":"2024-11-02T21:50:18Z","content_type":"text/html","content_length":"566230","record_id":"<urn:uuid:50b1339f-e9e2-42f2-9596-24f2d0eed390>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00204.warc.gz"} |
Haskell Fast & Hard (Part 5)
Congratulations for getting so far! Now, some of the really hardcore stuff can start.
If you are like me, you should get the functional style. You should also understand a bit more the advantages of laziness by default. But you also don't really understand where to start in order to
make a real program. And in particular:
• How do you deal with effects?
• Why is there a strange imperative-like notation for dealing with IO?
Be prepared, the answers might be complex. But they all be very rewarding.
Too long; didn't read:
A typical function doing IO looks a lot like an imperative program:
f :: IO a
f = do
x <- action1
action2 x
y <- action3
action4 x y
• To set a value to an object we use <- .
• The type of each line is IO *; in this example:
□ action1 :: IO b
□ action2 x :: IO ()
□ action3 :: IO c
□ action4 x y :: IO a
□ x :: b, y :: c
• Few objects have the type IO a, this should help you choose. In particular you cannot use pure functions directly here. To use pure functions you could do action2 (purefunction x) for example.
In this section, I will explain how to use IO, not how it works. You'll see how Haskell separates the pure from the impure parts of the program.
Don't stop because you're trying to understand the details of the syntax. Answers will come in the next section.
What to achieve?
Ask a user to enter a list of numbers. Print the sum of the numbers
toList :: String -> [Integer]
toList input = read ("[" ++ input ++ "]")
main = do
putStrLn "Enter a list of numbers (separated by comma):"
input <- getLine
print $ sum (toList input)
It should be straightforward to understand the behavior of this program. Let's analyze the types in more detail.
putStrLn :: String -> IO ()
getLine :: IO String
print :: Show a => a -> IO ()
Or more interestingly, we note that each expression in the do block has a type of IO a.
main = do
putStrLn "Enter ... " :: IO ()
getLine :: IO String
print Something :: IO ()
We should also pay attention to the effect of the <- symbol.
x <- something
If something :: IO a then x :: a.
Another important note about using IO. All lines in a do block must be of one of the two forms:
action1 :: IO a
-- in this case, generally a = ()
value <- action2 -- where
-- bar z t :: IO b
-- value :: b
These two kinds of line will correspond to two different ways of sequencing actions. The meaning of this sentence should be clearer by the end of the next section.
Now let's see how this program behaves. For example, what occur if the user enter something strange? Try to write foo instead of a list of integer:
toList :: String -> [Integer]
toList input = read ("[" ++ input ++ "]")
main = do
putStrLn "Enter a list of numbers (separated by comma):"
input <- getLine
print $ sum (toList input)
Argh! An evil error message and a crash! The first evolution will be to answer with a more friendly message.
In order to do this, we must detect that something went wrong. Here is one way to do this. Use the type Maybe. It is a very common type in Haskell.
import Data.Maybe
What is this thing? Maybe is a type which takes one parameter. Its definition is:
data Maybe a = Nothing | Just a
This is a nice way to tell there was an error while trying to create/compute a value. The maybeRead function is a great example of this. This is a function similar to the function read[^1], but if
something goes wrong the returned value is Nothing. If the value is right, it returns Just <the value>. Don't try to understand too much of this function. I use a lower level function than read;
maybeRead :: Read a => String -> Maybe a
maybeRead s = case reads s of
[(x,"")] -> Just x
_ -> Nothing
Now to be a bit more readable, we define a function which goes like this: If the string has the wrong format, it will return Nothing. Otherwise, for example for "1,2,3", it will return Just [1,2,3].
getListFromString :: String -> Maybe [Integer]
getListFromString str = maybeRead $ "[" ++ str ++ "]"
We simply have to test the value in our main function.
import Data.Maybe
maybeRead :: Read a => String -> Maybe a
maybeRead s = case reads s of
[(x,"")] -> Just x
_ -> Nothing
getListFromString :: String -> Maybe [Integer]
getListFromString str = maybeRead $ "[" ++ str ++ "]"
main :: IO ()
-- show
main = do
putStrLn "Enter a list of numbers (separated by comma):"
input <- getLine
let maybeList = getListFromString input in
case maybeList of
Just l -> print (sum l)
Nothing -> error "Bad format. Good Bye."
-- /show
In case of error, we display a nice error message.
Note that the type of each expression in the main's do block remains of the form IO a. The only strange construction is error. I'll say error msg will simply take the needed type (here IO ()).
One very important thing to note is the type of all the functions defined so far. There is only one function which contains IO in its type: main. This means main is impure. But main uses
getListFromString which is pure. It is then clear just by looking at declared types which functions are pure and which are impure.
Why does purity matter? I certainly forget many advantages, but the three main reasons are:
• It is far easier to think about pure code than impure one.
• Purity protects you from all the hard to reproduce bugs due to side effects.
• You can evaluate pure functions in any order or in parallel without risk.
This is why you should generally put as most code as possible inside pure functions.
Our next evolution will be to prompt the user again and again until she enters a valid answer. We create a function which will ask the user for an list of integers until the input is right.
askUser :: IO [Integer]
askUser = do
putStrLn "Enter a list of numbers (separated by comma):"
input <- getLine
let maybeList = getListFromString input in
case maybeList of
Just l -> return l
Nothing -> askUser
This function is of type IO [Integer]. Such a type means that we retrieved a value of type [Integer] through some IO actions. Some people might explain while waving their hands:
«This is an [Integer] inside an IO»
If you want to understand the details behind all of this, you'll have to read the next section. But sincerely, if you just want to use IO. Just practice a little and remember to think about the type.
Finally our main function is quite simpler:
import Data.Maybe
maybeRead :: Read a => String -> Maybe a
maybeRead s = case reads s of
[(x,"")] -> Just x
_ -> Nothing
getListFromString :: String -> Maybe [Integer]
getListFromString str = maybeRead $ "[" ++ str ++ "]"
askUser :: IO [Integer]
askUser = do
putStrLn "Enter a list of numbers (separated by comma):"
input <- getLine
let maybeList = getListFromString input in
case maybeList of
Just l -> return l
Nothing -> askUser
-- show
main :: IO ()
main = do
list <- askUser
print $ sum list
-- /show
We have finished with our introduction to IO. This was quite fast. Here are the main things to remember:
• in the do bloc, each expression must have the type IO a. You are then limited in the number of expressions available. For example, getLine, print, putStrLn, etc...
• Try to externalize the pure functions as much as possible.
• the IO a type means: an IO action which returns an element of type a. IO represents actions; under the hood, IO a is the type of a function. Read the next section if you are curious.
If you practice a bit, you should be able to use IO.
□ Make a program that sums all of its arguments. Hint: use the function getArgs.
Here is a tldr for this section.
To separate pure and impure parts, main is defined as a function which modifies the state of the world
main :: World -> World
A function is guaranteed to have side effects only if it has this type. But look at a typical main function:
main w0 =
let (v1,w1) = action1 w0 in
let (v2,w2) = action2 v1 w1 in
let (v3,w3) = action3 v2 w2 in
action4 v3 w3
We have a lot of temporary elements (here w1, w2 and w3) which must be passed on to the next action.
We create a function bind or (>>=). With bind we don't need temporary names anymore.
main =
action1 >>= action2 >>= action3 >>= action4
Bonus: Haskell has syntactical sugar for us:
main = do
v1 <- action1
v2 <- action2 v1
v3 <- action3 v2
action4 v3
Why did we use this strange syntax, and what exactly is this IO type? It looks a bit like magic.
For now let's just forget all about the pure parts of our program, and focus on the impure parts:
askUser :: IO [Integer]
askUser = do
putStrLn "Enter a list of numbers (separated by commas):"
input <- getLine
let maybeList = getListFromString input in
case maybeList of
Just l -> return l
Nothing -> askUser
main :: IO ()
main = do
list <- askUser
print $ sum list
First remark; it looks like an imperative structure. Haskell is powerful enough to make impure code look imperative. For example, if you wish you could create a while in Haskell. In fact, for dealing
with IO, imperative style is generally more appropriate.
But you should had noticed the notation is a bit unusual. Here is why, in detail.
In an impure language, the state of the world can be seen as a huge hidden global variable. This hidden variable is accessible by all functions of your language. For example, you can read and write a
file in any function. The fact that a file exists or not can be seen as different states of the world.
For Haskell this state is not hidden. It is explicitly said main is a function that potentially changes the state of the world. Its type is then something like:
main :: World -> World
Not all functions may have access to this variable. Those which have access to this variable are impure. Functions to which the world variable isn't provided are pure[2].
Haskell considers the state of the world as an input variable to main. But the real type of main is closer to this one[3]:
main :: World -> ((),World)
The () type is the null type. Nothing to see here.
Now let's rewrite our main function with this in mind:
main w0 =
let (list,w1) = askUser w0 in
let (x,w2) = print (sum list,w1) in
First, we note that all functions which have side effects must have the type:
World -> (a,World)
Where a is the type of the result. For example, a getChar function should have the type World -> (Char,World).
Another thing to note is the trick to fix the order of evaluation. In Haskell, in order to evaluate f a b, you have many choices:
• first eval a then b then f a b
• first eval b then a then f a b.
• eval a and b in parallel then f a b
This is true, because we should work in a pure language.
Now, if you look at the main function, it is clear you must eval the first line before the second one since, to evaluate the second line you have to get a parameter given by the evaluation of the
first line.
Such trick works nicely. The compiler will at each step provide a pointer to a new real world id. Under the hood, print will evaluate as:
• print something on the screen
• modify the id of the world
• evaluate as ((),new world id).
Now, if you look at the style of the main function, it is clearly awkward. Let's try to do the same to the askUser function:
askUser :: World -> ([Integer],World)
askUser :: IO [Integer]
askUser = do
putStrLn "Enter a list of numbers:"
input <- getLine
let maybeList = getListFromString input in
case maybeList of
Just l -> return l
Nothing -> askUser
askUser w0 =
let (_,w1) = putStrLn "Enter a list of numbers:" in
let (input,w2) = getLine w1 in
let (l,w3) = case getListFromString input of
Just l -> (l,w2)
Nothing -> askUser w2
This is similar, but awkward. Look at all these temporary w? names.
The lesson, is, naive IO implementation in Pure functional languages is awkward!
Fortunately, there is a better way to handle this problem. We see a pattern. Each line is of the form:
let (y,w') = action x w in
Even if for some line the first x argument isn't needed. The output type is a couple, (answer, newWorldValue). Each function f must have a type similar to:
f :: World -> (a,World)
Not only this, but we can also note that we always follow the same usage pattern:
let (y,w1) = action1 w0 in
let (z,w2) = action2 w1 in
let (t,w3) = action3 w2 in
Each action can take from 0 to n parameters. And in particular, each action can take a parameter from the result of a line above.
For example, we could also have:
let (_,w1) = action1 x w0 in
let (z,w2) = action2 w1 in
let (_,w3) = action3 x z w2 in
And of course actionN w :: (World) -> (a,World).
IMPORTANT, there are only two important patterns to consider:
let (x,w1) = action1 w0 in
let (y,w2) = action2 x w1 in
let (_,w1) = action1 w0 in
let (y,w2) = action2 w1 in
Now, we will do a magic trick. We will make the temporary world symbol "disappear". We will bind the two lines. Let's define the bind function. Its type is quite intimidating at first:
bind :: (World -> (a,World))
-> (a -> (World -> (b,World)))
-> (World -> (b,World))
But remember that (World -> (a,World)) is the type for an IO action. Now let's rename it for clarity:
type IO a = World -> (a, World)
Some example of functions:
getLine :: IO String
print :: Show a => a -> IO ()
getLine is an IO action which takes a world as parameter and returns a couple (String,World). Which can be summarized as: getLine is of type IO String. Which we also see as, an IO action which will
return a String "embeded inside an IO".
The function print is also interesting. It takes one argument which can be shown. In fact it takes two arguments. The first is the value to print and the other is the state of world. It then returns
a couple of type ((),World). This means it changes the state of the world, but doesn't yield anymore data.
This type helps us simplify the type of bind:
bind :: IO a
-> (a -> IO b)
-> IO b
It says that bind takes two IO actions as parameter and return another IO action.
Now, remember the important patterns. The first was:
let (x,w1) = action1 w0 in
let (y,w2) = action2 x w1 in
Look at the types:
action1 :: IO a
action2 :: a -> IO b
(y,w2) :: IO b
Doesn't it seem familiar?
(bind action1 action2) w0 =
let (x, w1) = action1 w0
(y, w2) = action2 x w1
in (y, w2)
The idea is to hide the World argument with this function. Let's go: As an example imagine if we wanted to simulate:
let (line1,w1) = getLine w0 in
let ((),w2) = print line1 in
Now, using the bind function:
(res,w2) = (bind getLine (\l -> print l)) w0
As print is of type (World -> ((),World)), we know res = () (null type). If you didn't see what was magic here, let's try with three lines this time.
let (line1,w1) = getLine w0 in
let (line2,w2) = getLine w1 in
let ((),w3) = print (line1 ++ line2) in
Which is equivalent to:
(res,w3) = bind getLine (\line1 ->
bind getLine (\line2 ->
print (line1 ++ line2)))
Didn't you notice something? Yes, no temporary World variables are used anywhere! This is MA. GIC.
We can use a better notation. Let's use (>>=) instead of bind. (>>=) is an infix function like (+); reminder 3 + 4 ⇔ (+) 3 4
(res,w3) = getLine >>=
\line1 -> getLine >>=
\line2 -> print (line1 ++ line2)
Ho Ho Ho! Happy Christmas Everyone! Haskell has made syntactical sugar for us:
x <- action1
y <- action2
z <- action3
Is replaced by:
action1 >>= \x ->
action2 >>= \y ->
action3 >>= \z ->
Note you can use x in action2 and x and y in action3.
But what about the lines not using the <-? Easy, another function blindBind:
blindBind :: IO a -> IO b -> IO b
blindBind action1 action2 w0 =
bind action (\_ -> action2) w0
I didn't simplify this definition for clarity purpose. Of course we can use a better notation, we'll use the (>>) operator.
Is transformed into
action1 >>
action2 >>
Also, another function is quite useful.
putInIO :: a -> IO a
putInIO x = IO (\w -> (x,w))
This is the general way to put pure values inside the "IO context". The general name for putInIO is return. This is quite a bad name when you learn Haskell. return is very different from what you
might be used to.
To finish, let's translate our example:
import Data.Maybe
maybeRead :: Read a => String -> Maybe a
maybeRead s = case reads s of
[(x,"")] -> Just x
_ -> Nothing
getListFromString :: String -> Maybe [Integer]
getListFromString str = maybeRead $ "[" ++ str ++ "]"
-- show
askUser :: IO [Integer]
askUser = do
putStrLn "Enter a list of numbers (separated by commas):"
input <- getLine
let maybeList = getListFromString input in
case maybeList of
Just l -> return l
Nothing -> askUser
main :: IO ()
main = do
list <- askUser
print $ sum list
-- /show
Is translated into:
import Data.Maybe
maybeRead :: Read a => String -> Maybe a
maybeRead s = case reads s of
[(x,"")] -> Just x
_ -> Nothing
getListFromString :: String -> Maybe [Integer]
getListFromString str = maybeRead $ "[" ++ str ++ "]"
-- show
askUser :: IO [Integer]
askUser =
putStrLn "Enter a list of numbers (sep. by commas):" >>
getLine >>= \input ->
let maybeList = getListFromString input in
case maybeList of
Just l -> return l
Nothing -> askUser
main :: IO ()
main = askUser >>=
\list -> print $ sum list
-- /show
You can compile this code to verify it keeps working.
Imagine what it would look like without the (>>) and (>>=).
Now the secret can be revealed: IO is a monad. Being a monad means you have access to some syntactical sugar with the do notation. But mainly, you have access to a coding pattern which will ease the
flow of your code.
Important remarks:
□ Monad are not necessarily about effects! There are a lot of pure monads.
□ Monad are more about sequencing
For the Haskell language Monad is a type class. To be an instance of this type class, you must provide the functions (>>=) and return. The function (>>) will be derived from (>>=). Here is how the
type class Monad is declared (mostly):
class Monad m where
(>>=) :: m a -> (a -> m b) -> m b
return :: a -> m a
(>>) :: m a -> m b -> m b
f >> g = f >>= \_ -> g
-- You should generally safely ignore this function
-- which I believe exists for historical reason
fail :: String -> m a
fail = error
□ the keyword class is not your friend. A Haskell class is not a class like in object model. A Haskell class has a lot of similarities with Java interfaces. A better word should have been
typeclass. That means a set of types. For a type to belong to a class, all functions of the class must be provided for this type.
□ In this particular example of type class, the type m must be a type that takes an argument. for example IO a, but also Maybe a, [a], etc...
□ To be a useful monad, your function must obey some rules. If your construction does not obey these rules strange things might happens:
return a >>= k == k a
m >>= return == m
m >>= (\x -> k x >>= h) == (m >>= k) >>= h
There are a lot of different types that are instance of Monad. One of the easiest to describe is Maybe. If you have a sequence of Maybe values, you can use monads to manipulate them. It is
particularly useful to remove very deep if..then..else.. constructions.
Imagine a complex bank operation. You are eligible to gain about 700€ only if you can afford to follow a list of operations without being negative.
deposit value account = account + value
withdraw value account = account - value
eligible :: (Num a,Ord a) => a -> Bool
eligible account =
let account1 = deposit 100 account in
if (account1 < 0)
then False
let account2 = withdraw 200 account1 in
if (account2 < 0)
then False
let account3 = deposit 100 account2 in
if (account3 < 0)
then False
let account4 = withdraw 300 account3 in
if (account4 < 0)
then False
let account5 = deposit 1000 account4 in
if (account5 < 0)
then False
main = do
print $ eligible 300 -- True
print $ eligible 299 -- False
Now, let's make it better using Maybe and the fact that it is a Monad
deposit :: (Num a) => a -> a -> Maybe a
deposit value account = Just (account + value)
withdraw :: (Num a,Ord a) => a -> a -> Maybe a
withdraw value account = if (account < value)
then Nothing
else Just (account - value)
eligible :: (Num a, Ord a) => a -> Maybe Bool
eligible account = do
account1 <- deposit 100 account
account2 <- withdraw 200 account1
account3 <- deposit 100 account2
account4 <- withdraw 300 account3
account5 <- deposit 1000 account4
Just True
main = do
print $ eligible 300 -- Just True
print $ eligible 299 -- Nothing
Not bad, but we can make it even better:
deposit :: (Num a) => a -> a -> Maybe a
deposit value account = Just (account + value)
withdraw :: (Num a,Ord a) => a -> a -> Maybe a
withdraw value account = if (account < value)
then Nothing
else Just (account - value)
eligible :: (Num a, Ord a) => a -> Maybe Bool
eligible account =
deposit 100 account >>=
withdraw 200 >>=
deposit 100 >>=
withdraw 300 >>=
deposit 1000 >>
return True
main = do
print $ eligible 300 -- Just True
print $ eligible 299 -- Nothing
We have proven that Monads are a good way to make our code more elegant. Note this idea of code organization, in particular for Maybe can be used in most imperative language. In fact, this is the
kind of construction we make naturally.
An important remark:
The first element in the sequence being evaluated to Nothing will stop the complete evaluation. This means you don't execute all lines. You have this for free, thanks to laziness.
You could also replay these example with the definition of (>>=) for Maybe in mind:
instance Monad Maybe where
(>>=) :: Maybe a -> (a -> Maybe b) -> Maybe b
Nothing >>= _ = Nothing
(Just x) >>= f = f x
return x = Just x
The Maybe monad proved to be useful while being a very simple example. We saw the utility of the IO monad. But now a cooler example, lists.
The list monad helps us to simulate non deterministic computations. Here we go:
import Control.Monad (guard)
allCases = [1..10]
resolve :: [(Int,Int,Int)]
resolve = do
x <- allCases
y <- allCases
z <- allCases
guard $ 4*x + 2*y < z
return (x,y,z)
main = do
print resolve
MA. GIC.
For the list monad, there is also a syntactical sugar:
print $ [ (x,y,z) | x <- allCases,
y <- allCases,
z <- allCases,
4*x + 2*y < z ]
I won't list all the monads, but there are many monads. Using monads simplifies the manipulation of several notions in pure languages. In particular, monad are very useful for:
• IO,
• non deterministic computation,
• generating pseudo random numbers,
• keeping configuration state,
• writing state,
• ...
If you have followed me until here, then you've done it! You know monads (Well, you'll certainly need to practice a bit to get used to them and to understand when you can use them and create your
own. But you already made a big step in this direction.)!
This section is not so much about learning Haskell. It is just here to discuss some details further.
In the section Infinite Structures we saw some simple constructions. Unfortunately we removed two properties from our tree:
1. no duplicate node value
2. well ordered tree
In this section we will try to keep the first property. Concerning the second one, we must relax it but we'll discuss how to keep it as much as possible.
Our first step is to create some pseudo-random number list:
shuffle = map (\x -> (x*3123) `mod` 4331) [1..]
Just as a reminder, here is the definition of treeFromList
treeFromList :: (Ord a) => [a] -> BinTree a
treeFromList [] = Empty
treeFromList (x:xs) = Node x (treeFromList (filter (<x) xs))
(treeFromList (filter (>x) xs))
and treeTakeDepth:
treeTakeDepth _ Empty = Empty
treeTakeDepth 0 _ = Empty
treeTakeDepth n (Node x left right) = let
nl = treeTakeDepth (n-1) left
nr = treeTakeDepth (n-1) right
Node x nl nr
See the result of:
import Data.List
data BinTree a = Empty
| Node a (BinTree a) (BinTree a)
deriving (Eq,Ord)
-- declare BinTree a to be an instance of Show
instance (Show a) => Show (BinTree a) where
-- will start by a '<' before the root
-- and put a : a begining of line
show t = "< " ++ replace '\n' "\n: " (treeshow "" t)
treeshow pref Empty = ""
treeshow pref (Node x Empty Empty) =
(pshow pref x)
treeshow pref (Node x left Empty) =
(pshow pref x) ++ "\n" ++
(showSon pref "`--" " " left)
treeshow pref (Node x Empty right) =
(pshow pref x) ++ "\n" ++
(showSon pref "`--" " " right)
treeshow pref (Node x left right) =
(pshow pref x) ++ "\n" ++
(showSon pref "|--" "| " left) ++ "\n" ++
(showSon pref "`--" " " right)
-- show a tree using some prefixes to make it nice
showSon pref before next t =
pref ++ before ++ treeshow (pref ++ next) t
-- pshow replace "\n" by "\n"++pref
pshow pref x = replace '\n' ("\n"++pref) (show x)
-- replace on char by another string
replace c new string =
concatMap (change c new) string
change c new x
| x == c = new
| otherwise = x:[] -- "x"
shuffle = map (\x -> (x*3123) `mod` 4331) [1..]
treeFromList :: (Ord a) => [a] -> BinTree a
treeFromList [] = Empty
treeFromList (x:xs) = Node x (treeFromList (filter (<x) xs))
(treeFromList (filter (>x) xs))
treeTakeDepth _ Empty = Empty
treeTakeDepth 0 _ = Empty
treeTakeDepth n (Node x left right) = let
nl = treeTakeDepth (n-1) left
nr = treeTakeDepth (n-1) right
Node x nl nr
-- show
main = do
putStrLn "take 10 shuffle"
print $ take 10 shuffle
putStrLn "\ntreeTakeDepth 4 (treeFromList shuffle)"
print $ treeTakeDepth 4 (treeFromList shuffle)
-- /show
Yay! It ends! Beware though, it will only work if you always have something to put into a branch.
For example
treeTakeDepth 4 (treeFromList [1..])
will loop forever. Simply because it will try to access the head of filter (<1) [2..]. But filter is not smart enought to understand that the result is the empty list.
Nonetheless, it is still a very cool example of what non strict programs have to offer.
Left as an exercise to the reader:
• Prove the existence of a number n so that treeTakeDepth n (treeFromList shuffle) will enter an infinite loop.
• Find an upper bound for n.
• Prove there is no shuffle list so that, for any depth, the program ends.
In order to resolve these problem we will modify slightly our treeFromList and shuffle function.
A first problem, is the lack of infinite different number in our implementation of shuffle. We generated only 4331 different numbers. To resolve this we make a slightly better shuffle function.
shuffle = map rand [1..]
rand x = ((p x) `mod` (x+c)) - ((x+c) `div` 2)
p x = m*x^2 + n*x + o -- some polynome
m = 3123
n = 31
o = 7641
c = 1237
This shuffle function has the property (hopefully) not to have an upper nor lower bound. But having a better shuffle list isn't enough not to enter an infinite loop.
Generally, we cannot decide whether filter (<x) xs is empty. Then to resolve this problem, I'll authorize some error in the creation of our binary tree. This new version of code can create binary
tree which don't have the following property for some of its nodes:
Any element of the left (resp. right) branch must all be strictly inferior (resp. superior) to the label of the root.
Remark it will remains mostly an ordered binary tree. Furthermore, by construction, each node value is unique in the tree.
Here is our new version of treeFromList. We simply have replaced filter by safefilter.
treeFromList :: (Ord a, Show a) => [a] -> BinTree a
treeFromList [] = Empty
treeFromList (x:xs) = Node x left right
left = treeFromList $ safefilter (<x) xs
right = treeFromList $ safefilter (>x) xs
This new function safefilter is almost equivalent to filter but don't enter infinite loop if the result is a finite list. If it cannot find an element for which the test is true after 10000
consecutive steps, then it considers to be the end of the search.
safefilter :: (a -> Bool) -> [a] -> [a]
safefilter f l = safefilter' f l nbTry
nbTry = 10000
safefilter' _ _ 0 = []
safefilter' _ [] _ = []
safefilter' f (x:xs) n =
if f x
then x : safefilter' f xs nbTry
else safefilter' f xs (n-1)
Now run the program and be happy:
import Debug.Trace (trace)
import Data.List
data BinTree a = Empty
| Node a (BinTree a) (BinTree a)
deriving (Eq,Ord)
instance (Show a) => Show (BinTree a) where
-- will start by a '<' before the root
-- and put a : a begining of line
show t = "< " ++ replace '\n' "\n: " (treeshow "" t)
treeshow pref Empty = ""
treeshow pref (Node x Empty Empty) =
(pshow pref x)
treeshow pref (Node x left Empty) =
(pshow pref x) ++ "\n" ++
(showSon pref "`--" " " left)
treeshow pref (Node x Empty right) =
(pshow pref x) ++ "\n" ++
(showSon pref "`--" " " right)
treeshow pref (Node x left right) =
(pshow pref x) ++ "\n" ++
(showSon pref "|--" "| " left) ++ "\n" ++
(showSon pref "`--" " " right)
-- show a tree using some prefixes to make it nice
showSon pref before next t =
pref ++ before ++ treeshow (pref ++ next) t
-- pshow replace "\n" by "\n"++pref
pshow pref x = replace '\n' ("\n"++pref) (" " ++ show x)
-- replace on char by another string
replace c new string =
concatMap (change c new) string
change c new x
| x == c = new
| otherwise = x:[] -- "x"
treeTakeDepth _ Empty = Empty
treeTakeDepth 0 _ = Empty
treeTakeDepth n (Node x left right) = let
nl = treeTakeDepth (n-1) left
nr = treeTakeDepth (n-1) right
Node x nl nr
shuffle = map rand [1..]
rand x = ((p x) `mod` (x+c)) - ((x+c) `div` 2)
p x = m*x^2 + n*x + o -- some polynome
m = 3123
n = 31
o = 7641
c = 1237
treeFromList :: (Ord a, Show a) => [a] -> BinTree a
treeFromList [] = Empty
treeFromList (x:xs) = Node x left right
left = treeFromList $ safefilter (<x) xs
right = treeFromList $ safefilter (>x) xs
safefilter :: (a -> Bool) -> [a] -> [a]
safefilter f l = safefilter' f l nbTry
nbTry = 10000
safefilter' _ _ 0 = []
safefilter' _ [] _ = []
safefilter' f (x:xs) n =
if f x
then x : safefilter' f xs nbTry
else safefilter' f xs (n-1)
-- show
main = do
putStrLn "take 10 shuffle"
print $ take 10 shuffle
putStrLn "\ntreeTakeDepth 8 (treeFromList shuffle)"
print $ treeTakeDepth 8 (treeFromList $ shuffle)
-- /show
You should realize the time to print each value is different. This is because Haskell compute each value when it needs it. And in this case, this is when asked to print it on the screen.
Impressively enough, try to replace the depth from 8 to 100. It will work without killing your RAM! The flow and the memory management is done naturally by Haskell.
Left as an exercise to the reader:
• Even with large constant value for deep and nbTry, it seems to work nicely. But in the worst case, it can be exponential. Create a worst case list to give as parameter to treeFromList. hint:
think about ([0,-1,-1,....,-1,1,-1,...,-1,1,...]).
• I first tried to implement safefilter as follow:
safefilter' f l = if filter f (take 10000 l) == []
then []
else filter f l
Explain why it doesn't work and can enter into an infinite loop.
• Suppose that shuffle is real random list with growing bounds. If you study a bit this structure, you'll discover that with probability 1, this structure is finite. Using the following code
(suppose we could use safefilter' directly as if was not in the where of safefilter) find a definition of f such that with probability 1, treeFromList' shuffle is infinite. And prove it.
Disclaimer, this is only a conjecture.
treeFromList' [] n = Empty
treeFromList' (x:xs) n = Node x left right
left = treeFromList' (safefilter' (<x) xs (f n)
right = treeFromList' (safefilter' (>x) xs (f n)
f = ???
Thanks to /r/haskell and /r/programming. Your comment were most than welcome.
Particularly, I want to thank Emm a thousand times for the time he spent on correcting my English. Thank you man. | {"url":"https://www.schoolofhaskell.com/school/to-infinity-and-beyond/pick-of-the-week/haskell-fast-hard/haskell-fast-hard-part-5","timestamp":"2024-11-03T01:02:32Z","content_type":"text/html","content_length":"68100","record_id":"<urn:uuid:27a0c598-b3e4-47c5-b2d5-6a8923e7bce1>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00032.warc.gz"} |
IIFT 2015 QA questions and solutions | Apti4All
IIFT 2015 QA | Previous Year IIFT Paper
1. IIFT 2015 QA | Modern Math - Probability
The internal evaluation for Economics course in an Engineering programme is based on the score of four quizzes. Rahul has secured 70, 90 and 80 in the first three quizzes. The fourth quiz has ten
True-False type questions, each carrying 10 marks. What is the probability that Rahul’s average internal marks for the Economics course is more than 80, given that he decides to guess randomly on the
final quiz?
• A.
• B.
• C.
• D.
2. IIFT 2015 QA | Algebra - Simple Equations
In 2004, Rohini was thrice as old as her brother Arvind. In 2014, Rohini was only six years older than her brother. In which year was Rohini born?
• A.
• B.
• C.
• D.
3. IIFT 2015 QA | Algebra - Progressions
If p, q and r are three unequal numbers such that p, q and r are in A.P., and p, r-q and q-p are in G.P., then p : q : r is equal to:
• A.
1 : 2 : 3
• B.
2 : 3 : 4
• C.
3 : 2 : 1
• D.
1 : 3 : 4
4. IIFT 2015 QA | Algebra - Logarithms
If log[25]5 = a and log[25]15 = b, then the value of log[25]27 is:
• A.
3(b + a)
• B.
3(1 - b - a)
• C.
3(a + b - 1)
• D.
3(1 - b + a)
5. IIFT 2015 QA | Modern Math - Permutation & Combination
During the essay writing stage of MBA admission process in a reputed B-School, each group consists of 10 students. In one such group, two students are batchmates from the same IIT department.
Assuming that the students are sitting in a row, the number of ways in which the students can sit so that the two batchmates are not sitting next to each other, is:
• A.
• B.
• C.
• D.
None of the above
6. IIFT 2015 QA | Arithmetic - Percentage
The pre-paid recharge of Airtel gives 21% less talktime than the same price pre-paid recharge of Vodafone. The post-paid talktime of Airtel is 12% more than its pre-paid recharge, having the same
price. Further, the post-paid talktime of same price of Vodafone is 15% less than its pre-paid recharge. How much percent less / more talktime can one get from the Airtel post-paid service compared
to the post-paid service of Vodafone?
• A.
3.9% more
• B.
4.7% less
• C.
4.7% more
• D.
2.8% less
7. IIFT 2015 QA | Arithmetic - Profit & Loss
As a strategy towards retention of customers, the service centre of a split AC machine manufacturer offers discount as per the following rule: for the second service in a year, the customer can avail
of a 10% discount; for the third and fourth servicing within a year, the customer can avail of 11% and 12% discounts respectively of the previous amount paid, Finally, if a customer gets more than
four services within a year, he has to pay just 55% of the original servicing charges. If Rohan has availed 5 services from the same service centre in a given year, the total percentage discount
availed by him is approximately:
• A.
• B.
• C.
• D.
8. IIFT 2015 QA | Arithmetic - Time & Work
A tank is connected with both inlet pipes and outlet pipes. Individually, an inlet pipe can fill the tank in 7 hours and an outlet pipe can empty it in 5 hours. If all the pipes are kept open, it
takes exactly 7 hours for a completely filled-in tank to empty. If the total number of pipes connected to the tank is 11, how many of these are inlet pipes?
9. IIFT 2015 QA | Venn Diagram
In a certain village, 22% of the families own agricultural land, 18% own a mobile phone and 1600 families own both agricultural land and a mobile phone. If 68% of the families neither own
agricultural land nor a mobile phone, then the total number of families living in the village is:
• A.
• B.
• C.
• D.
10. IIFT 2015 QA | Modern Math - Permutation & Combination
In the board meeting of a FMCG Company, everybody present in the meeting shakes hand with everybody else. If the total number of handshakes is 78, the number of members who attended the board meeting
11. IIFT 2015 QA | Algebra - Simple Equations
A firm is thinking of buying a printer for its office use for the next one year. The criterion for choosing is based on the least per-page printing cost. It can choose between an inkjet printer which
costs Rs. 5000 and a laser printer which costs Rs. 8000. The per-page printing cost for an inkjet is Rs. 1.80 and that for a laser printer is Rs. 1.50. The firm should purchase the laser printer, if
the minimum number of a pages to be printed in the year exceeds
• A.
• B.
• C.
• D.
12. IIFT 2015 QA | Geometry - Circles
If in the figure below, angle XYZ=90° and the length of the arc XZ=10π, then the area of the sector XYZ is:
• A.
• B.
• C.
• D.
None of the above
13. IIFT 2015 QA | Arithmetic - Time, Speed & Distance
A chartered bus carrying office employees travels everyday in two shifts- morning and evening. In the evening, the bus travels at an average speed which is 50% greater than the morning average speed;
but takes 50% more time than the amount of time it takes in the morning. The average speed of the chartered bus for the entire journey is greater/less than its average speed in the morning by:
• A.
18% less
• B.
30% greater
• C.
37.5% greater
• D.
50% less
14. IIFT 2015 QA | Geometry - Mensuration
If a right circular cylinder of height 14 is inscribed in a sphere of radius 8, then the volume of the cylinder is:
• A.
• B.
• C.
• D.
15. IIFT 2015 QA | Algebra - Progressions
Seema has joined a new Company after the completion of her B.Tech from a reputed engineering college in Chennai. She saves 10% of her income in each of the first three months of her service and for
every subsequent month, her savings are Rs. 50 more than the savings of the immediate previous month. If her joining income was Rs. 3000, her total savings from the start of the service will be
Rs.11400 in:
• A.
6 months
• B.
12 months
• C.
18 months
• D.
24 months
16. IIFT 2015 QA | Arithmetic - Ratio, Proportion & Variation
Sailesh is working as a sales executive with a reputed FMCG Company in Hyderabad. As per the Company’s policy, Sailesh gets a commission of 6% on all sales upto Rs.1,00,000 and 5% on all sales in
excess of this amount. If Sailesh remits Rs. 2,65,000 to the FMCG company after deducting his commission, his total slaes were worth:
• A.
Rs. 1,20,000
• B.
Rs. 2,90,526
• C.
Rs. 2,21,054
• D.
Rs. 2,80,000
17. IIFT 2015 QA | Arithmetic - Time & Work
Three carpenters P, Q and R are entrusted with office furniture work. P can do a job in 42 days. If Q is 26% more efficient than P and R is 50% more efficient than Q, then Q and R together can finish
the job in approximately:
• A.
11 days
• B.
13 days
• C.
15 days
• D.
17 days
18. IIFT 2015 QA | Arithmetic - Mixture, Alligation, Removal & Replacement
There are two alloys P and Q made up of silver, copper and aluminium. Alloy P contains 45% silver and rest aluminum. Alloy Q contains 30% silver, 35% copper and rest aluminium. Alloys P and Q are
mixed in the ratio of 1:4.5. The approximate percentages of silver and copper in the newly formed alloy is:
• A.
33% and 29%
• B.
29% and 26%
• C.
35% and 30%
• D.
None of the above
19. IIFT 2015 QA | Geometry - Triangles
A ladder of 7.6 m long is standing against a wall and the difference between the wall and the base of the ladder is 6.4 m. If the top of the aldder now slips by 1.2m, then the foot of the ladder
shifts by approximately:
• A.
0.4 m
• B.
0.6 m
• C.
0.8 m
• D.
1.2 m
20. IIFT 2015 QA | Algebra - Surds & Indices
The value of x for which the equation $\sqrt{4x-9}+\sqrt{4x+9}=5+\sqrt{7}$will be satisfied is:
21. IIFT 2015 QA | Algebra - Surds & Indices
The simplest value of the expression ${\left\{\frac{{4}^{p+\frac{1}{4}}×\sqrt{2×{2}^{p}}}{2×\sqrt{{2}^{-p}}}\right\}}^{\frac{1}{p}}$ is:
22. IIFT 2015 QA | Modern Math - Probability
In a reputed engineering college in Delhi, students are evaluated based on trimesters. The probability that an Engineering student fails in the first trimester is 0.08. If he does not fail in the
first trimester, the probability that he is promoted to the second year is 0.87. The probability that the student will complete the first year in the Engineering College is approximately:
• A.
• B.
• C.
• D.
Help us build a Free and Comprehensive Preparation portal for various competitive exams by providing us your valuable feedback about Apti4All and how it can be improved.
© 2024 | All Rights Reserved | Apti4All | {"url":"https://www.apti4all.com/cat-mba/previous-year-papers/iift/iift-2015-qa","timestamp":"2024-11-11T02:58:14Z","content_type":"text/html","content_length":"113405","record_id":"<urn:uuid:694b8d8f-26ee-4193-aecd-e83e3a9f4585>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00137.warc.gz"} |
Math that feels good
Creating learning resources for blind students (nontechnical version)
A longer and more technical version of this story is available.
San Jose, Calif., January 16, 2020 — Martha Siegel, Professor Emerita from Towson University in Maryland, was working with a blind student who needed a statistics textbook for a required course. The
Braille version of the textbook required six months to prepare, a delay which caused the student a significant delay in her studies. Siegel reached out to Al Maneki, a retired NSA mathematician who
is blind, and the two of them decided to do something about it.
Focusing on math textbooks initially, Siegel and Maneki pulled together a collaborative team intent on solving the problem. “We were shocked to realize there did not already exist an automated method
for producing mathematics Braille textbooks,” said Alexei Kolesnikov, a colleague of Siegel at Towson University and member of the team.
Technical hurdles
It wasn’t simply an issue of converting text to braille; several structural issues needed to be addressed. A typical textbook uses visual clues to indicate chapters, sections, captions, and other
landmarks. In Braille, all the letters are the same size and shape so these structural elements must be described with special symbols. Additionally, complicated math formulas must be accurately
conveyed, and graphs and diagrams need to be represented in a non-visual way. All of these issues have to be solved for when creating a Braille textbook.
Rob Beezer, a math professor at the University of Puget Sound in Washington, has developed a system for writing textbooks called PreTeXt which automatically produces print versions as well as online,
EPUB, Jupyter, and other formats as well. “Our mantra is Write once, read anywhere” he says. In some sense, Braille is just one more output format.
As for the math formulas, they are represented using the Nemeth Braille Code which are produced by MathJax, a software package originally designed to display math formulas on web pages. Team member
Volker Sorge noted, “We have made great progress in having MathJax produce accessible math content on the web by making it audible, so the conversion to braille was a natural extension of that
work.” Although online versions and screen readers can help, they don’t eliminate the need for braille formulas. “Having the computer pronounce a formula is not adequate for a blind reader, any more
than it would be adequate for a sighted reader” commented project co-leader Al Maneki. Combining the new braille version of a formula with the existing audible online version provides a more robust
learning experience for the student.
Free textbooks
This work is part of a larger, growing effort to create high-quality free textbooks for both sighted and blind students. Indeed, the books in this project, the braille versions, and the software used
to produce the print, online, and braille versions, are all available for free. “This project is about equity and equal access to knowledge,” said Siegel.
Announcement in Denver
The official announcement of this work will be made at the Joint Mathematics Meetings in Denver, Colorado during the following talks on Thursday January 16, 2020, at the Colorado Convention Center,
room 506:
11:00am Transforming Math Documents with MathJax Version 3, by Volker Sorge – Computer Science professor at the University of Birmingham in the UK, is the lead developer for adding accessibility
features to MathJax, including the recent enhancements for producing Nemeth Braille.
11:20am The PreTeXt-Nemeth Connection: Enabling Sighted and Blind People to Share the Mathematical Experience, by Al P. Maneki, senior STEM advisor to the National Federation of the Blind.
11:40am Automated transcription of a mathematics textbook into Nemeth Braille, by Alexei Kolesnikov – math professor at Towson University in Maryland, and lead developer for the image processing in
this project. Ongoing work will create new ways for describing images with the goal of automating the production of non-visual representations.
Contact information
Media Contact
Genelle Heim
American Institute of Mathematics
Technical Contacts
Robert Beezer
University of Puget Sound
Alexei Kolesnikov
Towson University
Funding information
This work was supported by National Science Foundation grants DUE-1821706 and DMS-1638535, and a grant from the American Action Fund for Blind Children and Adults, affiliated with The National
Federation of the Blind. The Big Ten Academic Alliance supported the implementation of Nemeth in the Speech Rule Engine and its integration into MathJax. MathJax work was supported in part by Simons
Foundation Grant, No.514521 | {"url":"https://aimath.org/aimnews/braille/","timestamp":"2024-11-03T07:13:40Z","content_type":"text/html","content_length":"45286","record_id":"<urn:uuid:5fe9dc0d-58f8-4376-95ab-309116f2f3c2>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00226.warc.gz"} |
Evaluation Model and Application of the Implementation Effectiveness of the River Chief System (RCS)—Taking Henan Province as an Example
College of Mathematics and Statistics, North China University of Water Resources and Electric Power, Zhengzhou 450046, China
College of Water Resources Engineering, North China University of Water Resources and Electric Power, Zhengzhou 450046, China
Chinese Academy of Environmental Planning, MEP, Beijing 100012, China
Department of Hydraulic Engineering, Tsinghua University, Beijing 100084, China
Author to whom correspondence should be addressed.
Submission received: 25 July 2023 / Revised: 12 September 2023 / Accepted: 18 September 2023 / Published: 20 September 2023
To scientifically evaluate the implementation of the River Chief System (RCS), accelerate the overall improvement of the water ecological environment, and promote the sustainable development of river
and lake functions, this study selects 26 evaluation indicators from six aspects, including the effectiveness of organization and management, the effectiveness of water resources protection, the
effectiveness of water environment management, the effectiveness of water pollution prevention and control, the effectiveness of water ecological restoration, and the effectiveness of the management
of the waterside shoreline, and establishes an evaluation system for the effectiveness of the implementation of the RCS. Among the 26 indicators, data for the qualitative indicators mainly come from
a series of statistical yearbooks and RCS reports, while data for the quantitative indicators are obtained through scoring by more than 20 experts and calculating the average. The CRITIC objective
weighting method is improved from three aspects of comparison intensity, correlation coefficient, and degree of variation, and the subjective weighting of indicators is carried out using the AHP 1–5
scale method. The optimal linear combination of subjective and objective weighting results is obtained using the combination weighting method with game theory, which is auxiliary to the set pair
analysis. Considering the “certainty” and “uncertainty” in the evaluation process, the four-element connection number model of set pair analysis is established to evaluate the implementation effect
of the RCS in Henan Province from 2018 to 2021. The results show that the implementation effect of the RCS in Henan Province improves year by year and reaches excellent in 2019. The results of this
study can be used as a reference for evaluating the work of the RCS in other regions and can also provide a reference for the study of evaluation problems in other fields.
1. Introduction
With its large population and severe resource constraints, China has been facing great challenges in terms of water environment issues. In the course of past development, a large amount of
wastewater, sewage, pesticides, and fertilizers discharged from industry, urban life, and agricultural activities have directly or indirectly led to the pollution of the water environment [
]. In addition, rapid economic development, increased water use in industry and agriculture, and climate change have also caused water scarcity in some parts of China [
]. Water pollution and water shortage have not only caused very serious impacts on Chinese society but also caused great damage to the ecological environment and biodiversity [
]. As the water problem is becoming more and more obvious, the development of Chinese society has been hindered to a great extent. In response to China’s high demand for pollution control and water
resource protection and to effectively improve the water environment, China has begun to implement the River Chief System (RCS) in different regions [
]. With the gradual popularization of the RCS, how to improve the efficiency of water environment management has become the main problem facing the construction of water ecological civilization in
China [
In 2018, Henan, Jiangsu, Shanghai, and other provinces and municipalities became the first pilot areas for the full implementation of the RCS, followed by Guangdong, Zhejiang, Fujian, Hunan, Anhui,
and other provinces. After practicing in various regions, the General Office of the CPC Central Committee and the General Office of the State Council put forward the opinion of comprehensively
implementing the RCS, which has been actively implemented in various regions and is now being carried out nationwide, participating in the governance of coastal areas and large and medium-sized
cities [
]. So far, about 300,000 river chiefs have been set up nationwide, covering 15 types of rivers, lakes, and reservoirs to ensure comprehensive management and integrated governance of rivers. The
Essentials of River and Lake Management in 2022 proposes to strictly evaluate the assessment and evaluation of the RCS and evaluate the fulfillment of the objectives of the work of river chiefs [
]. How to scientifically evaluate the effectiveness of the implementation of the RCS and improve the efficiency of water environment management is extremely important for the improvement of water
environment management policies and in-depth study of China’s water environment management.
In the existing literature, most research studies focus on policy, and few scholars use mathematical modeling to evaluate the effectiveness of the implementation of the RCS. Longfei Wang et al. [
] conducted a study on the development of the RCS in China over the past decade. They elucidated the advantages of the RCS in terms of responsibility, authority, and interdepartmental collaboration
while also highlighting some remaining issues. Their research findings provide new insights for the design of river management systems in other developing countries. Wang Juan et al. [
] conducted a study on the game theory between enterprise pollution management and the implementation strategies of local government pollution permits, as well as the game theory between different
strategies of pollution permit implementation among local governments, and provided positive recommendations for the evolution direction of the RCS. Zhang Zihao et al. [
] conducted a study on water quality data in the Huai River Basin over the past 5 years and analyzed the effectiveness of the RCS based on the “embeddedness theory”. In addition, many other scholars
have put forward many different suggestions for the policy on the RCS.
The use of appropriate methods to assess the effectiveness of the implementation of the RCS will help to identify problems promptly and summarize experiences and lessons learned in response to them.
According to the evaluation results, targeted improvement measures can be put forward to optimize the river and lake governance model, which can make it more adaptable to the actual situation and
needs, which is very necessary for the implementation of scientific governance and the promotion of river protection and restoration, and also one of the necessary means to promote the development of
the system of river chiefs and enhance the effectiveness of governance. Therefore, it is of great significance to use appropriate methods to judge the implementation effect of the RCS to maintain the
healthy life of rivers and lakes and to realize the sustainable use and development of river and lake functions.
In this study, the effectiveness of implementing the RCS was investigated using mathematical modeling. The study of implementation effectiveness is typically considered a problem of multi-indicator
comprehensive evaluation. This method comprehensively evaluates phenomena by synthesizing multiple indicators, enabling the rational integration of evaluation factors from diverse perspectives and
fields, thus providing an objective and comprehensive reflection of their essence and characteristics [
]. In domains such as environmental protection and water resource management, this evaluation method is frequently employed to assess the functionality of rivers [
] and the effectiveness of policy implementation [
]. Similarly, applying research on the effectiveness of implementation to the RCS can offer vital foundations for scientific management and decision-making.
The evaluation of multiple indicators often encounters conflicts, uncertainties, and incompatibilities. To address these challenges, it is crucial to first acknowledge the existence of uncertainties
and systematically characterize the objects being evaluated. Subsequently, specific analysis and distinct dialectical evaluations can be conducted on these objects. Set pair analysis is an evaluation
method that possesses the aforementioned advantages. It studies the uncertainties of two entities from the perspectives of “identity, diversity, and opposition”, treating certainty and uncertainty as
a determinate–uncertain system and comprehensively depicts the correlation between these two entities [
]. In this system, certainty and uncertainty are interconnected, mutually influencing and constraining each other, and under certain conditions, they can transform into one another [
Traditional set pair analysis uses three-element connection numbers [
] for evaluation, neglecting the uncertainties in the “degree of difference”, which leads to imprecise evaluation results. In this study, we expand on the concept of the “degree of difference” and
extend the traditional set pair analysis to a four-element connection number model. This model is suitable for comprehensive evaluation and allows for qualitative and quantitative analysis of the
information reflected in the four-element connection numbers. Furthermore, set pair analysis itself cannot determine the weights of the evaluation criteria and requires the assistance of appropriate
weighting methods. However, to date, there is no unified method for assigning weights in set pair analysis. This study aims to improve the traditional CRITIC objective weighting method in three
aspects: the intensity of comparison, correlation coefficient, and degree of variation. In addition, a subjective weighting of indicators is performed using the Analytic Hierarchy Process (AHP) 1–5
scale method. The average scores of over 20 experts’ rating results on the importance of indicators were calculated and sorted. This process was used to construct a judgment matrix and examine the
consistency of the matrix to determine the usability of the scoring results. The combination weighting method based on game theory is then employed to optimize the results of subjective and objective
weighting, thereby reducing the impact of data fluctuations and subjective judgments on the weighting results and assisting set pair analysis in evaluating the indicators. By establishing the game
theory combination weighting-set pair analysis model, the comprehensive evaluation results of the implementation effectiveness of the RCS in the evaluated region can be determined, resulting in more
accurate and reasonable evaluation outcomes. These indicator data used in this study were sourced from various statistical yearbooks and reports. The quantitative indicators were scored by more than
20 experts, and the average scores were calculated to obtain the final values. Additionally, this approach aims to ensure that provincial governments pay more attention to improving water
environmental governance efficiency, making water environmental investments achieve higher standards of return.
2. Literature Review
The purpose of the literature review in this study is to provide a deeper understanding of the relevant research on which this study is based.
2.1. CRITIC, AHP, and Game Theory for Weighting Method
In the evaluation process, determining the weights of selected indicators is crucial for generating the final results. To address the issue of the inability of set pair analysis methods to determine
weights, appropriate weighting methods are required as assistance. The weighting determination methods can be categorized into two major types: subjective weighting methods and objective weighting
methods [
]. The CRITIC method, proposed by Diakoulaki et al. [
], is a method for determining objective weights. This method mainly utilizes the conflict and contrast intensity among indicators to obtain the information contained in the indicators, thus
determining the objective weights of the evaluation indicators [
]. Regarding the CRITIC method, many studies have combined it with other multi-attribute evaluation methods for application, making it a more comprehensive objective weighting method [
The Analytic Hierarchy Process (AHP) is a typical subjective weighting method proposed by American operations researcher T. L. Saaty [
] in the early 1970s. AHP combines quantitative and qualitative analysis by expressing subjective judgments in a numerical way. The AHP has garnered increasing recognition among scholars worldwide
owing to its broad applicability and growing popularity [
]. Due to its wide applicability, scholars have continually extended and improved the AHP method, leading to the development of methods such as Exponential Scale Analytic Hierarchy Process (ESAHP) [
], Fuzzy Analytic Hierarchy Process (FAHP) [
], and AHP three-scale method [
], which have enhanced the effectiveness and feasibility of the AHP method. Since its inception, AHP has been widely applied in economics [
], mathematics [
], energy science [
], computer software applications [
], and architecture [
], among other fields.
For objective weighting methods, the fluctuation and errors in data may unavoidably affect the weighting results. Subjective weighting methods, on the other hand, can lead to irrational weight
allocation due to overly subjective judgments based on expert experiences. The game theory-based combination weighting method addresses these issues by optimizing the linear relationship between
different types of weighting methods, combining two or more weighting results to obtain more effective indicator weights. As a result, this method has been continuously validated for its scientific
and rational results [
2.2. River Chief System and Set Pair Analysis Methodology
As a new water environment management system, the RCS has an indispensable positive force in water pollution prevention, water resources protection, and water ecological restoration [
]. Due to the relatively late implementation of the RCS, the current research literature is still very limited. From the perspective of RCS research, the majority of the relevant literature focuses
on the implementation of the system nationwide [
]. In terms of research content, it primarily includes studies on the current state of implementation [
], analysis of the effects of the RCS [
], the impact of the system on enterprise development [
], evaluation of river chief work [
], and the control of agricultural pollution under the RCS [
]. A few scholars have also explored the effectiveness of the implementation of the RCS, and the identified evaluation factors are mainly related to water pollution, water ecology, and organizational
supervision. Up to now, the research on the implementation status quo and regional differences of the RCS in some regions, the audit evaluation system of river chiefs, the performance evaluation
system of river chiefs, and the effectiveness of the implementation of the RCS in some provinces and municipalities [
] has produced some results. However, there is no unified method to evaluate the effectiveness of the implementation of the RCS. The aims of the RCS are many, the task is heavy, and the impact on the
ecology of rivers and lakes, as well as the development of the environment, is very significant, so it is very necessary to use a suitable method to evaluate the effectiveness of the implementation
of the RCS in Henan Province. Through the judgment of the operation of the RCS, it can be targeted to put forward relatively scientific solutions.
From the perspective of comprehensive evaluation, most scholars apply hierarchical analysis [
], gray correlation analysis [
], fuzzy comprehensive evaluation [
], the TOPSIS method [
], and other methods to solve practical problems, and they have all produced very good results. Compared with the aforementioned evaluation methods, the set pair analysis method analyzes the
systematic certainty and uncertainty of a practical problem from three different aspects: “identity, diversity, and opposition”. This allows for a comprehensive and three-dimensional evaluation of
the problem from multiple perspectives [
]. Set pair analysis was proposed by Zhao Keqin in 1989, and its application is becoming increasingly widespread [
]. By considering the evaluation indexes and evaluation levels in the comprehensive evaluation problem as two different sets and systematically analyzing them, the analysis exhibits a distinct
dialectical nature. This approach yields results closer to the actual outcomes after this study. Therefore, set pair analysis has become an effective method among the various approaches for solving
problems related to multi-objective decision-making and multi-attribute evaluation in uncertain systems. In recent years, set pair theory has been successfully applied to various fields, including
evaluation [
], management [
], decision-making [
], forecasting [
], planning [
], and artificial intelligence [
To fill the gap in the research direction of RCS, this study establishes a reasonable evaluation model for the implementation effectiveness of RCS. First, use the combination weighting method with
game theory in combination with the weighting results of the improved CRITIC method and the AHP 1–5 scale method to determine the final weight, and then establish a set pair analysis four-element
connection number framework to evaluate the selected indicators, obtain scientific evaluation results, and give reasonable suggestions for the evaluation results.
3. Determination of Indicator Weights
Considering the characteristics of fuzziness and uncertainty associated with the evaluation indicators for assessing the effectiveness of the implementation of the RCS, as well as the need to
distinguish the importance of these indicators in the evaluation process, we employ a combined weighting method that integrates the improved CRITIC method and the AHP 1–5 scale method. This approach
allows us to calculate the weights of the indicators accurately. The objective weight and subjective weight of each indicator are obtained accordingly. Additionally, the combination weighting method
with game theory is utilized to optimize the weighting results and determine the combined weight.
3.1. AHP for Determining Subjective Weight of Indicators
The Analytic Hierarchy Process (AHP) is a multi-level analytical structural model based on the construction of evaluation indicators. It comprehensively calculates the subjective weights of the
indicators by considering the relative importance between levels and between indicators [
]. The steps of calculation are as follows.
According to the selected set of indicators, construct the corresponding hierarchical structure. Using the RCS as the evaluation object, establish a three-level hierarchy, including the goal
level, criterion level, and indicator level. The goal level represents the effectiveness of implementing the RCS, and the criterion level consists of six aspects: organizational management
effectiveness, water resource protection effectiveness, water environment governance effectiveness, water pollution prevention and control effectiveness, water ecological restoration
effectiveness, and water area shoreline management effectiveness. The goal level provides a detailed description and explanation of the criterion level.
Construct the judgment matrix. To reduce errors caused by subjective judgments during the evaluation process, a 1–5 scale method is used. By consulting expert opinions, scores are given to each
indicator based on their importance to the effectiveness of implementing the RCS, and the judgment matrix is constructed.
Calculation of eigenvectors, eigenvalues, and weights. Using the maximum eigenvalue method, calculate the maximum eigenvalue and the corresponding normalized eigenvector for the judgment matrix.
Consistency test. If the consistency test is passed, it indicates that the judgment matrix is reasonable and has an explanatory value.
If the consistency test is not passed, the judgment matrix needs to be reviewed and modified, and if the revised judgment matrix satisfies the consistency test, subjective weights for evaluating the
RCS can be determined. Further analysis can then be carried out based on the derived weights.
3.2. Improved CRITIC Method for Determining Objective Weights of Indicators
The CRITIC method assigns objective weights to indicators by considering both the relative importance and conflicts among them [
]. This study proposes the following improvements to the traditional CRITIC method:
For measuring the conflicts among indicators, the traditional CRITIC method typically uses the Pearson correlation coefficient. However, the Pearson coefficient may not accurately represent the
correlation when data do not follow a normal distribution, or the sample size is less than 30 [
]. Therefore, this study replaces the Pearson coefficient with the Spearman correlation coefficient.
The correlation coefficient can be positive or negative, but in this study, absolute values of the correlation coefficients are used to avoid unnecessary biases during the calculation.
The traditional CRITIC method uses the standard deviation to measure the relative importance of indicators. However, in practical applications, it has been found that using the standard deviation
is not sufficient. Instead, using the average deviation can provide better reliability and reduce errors caused by non-normality and skewness in these data. Therefore, this study replaces the
standard deviation with the average deviation to measure the relative importance of indicators.
Considering the above improvements, the main steps of the revised CRITIC weighting method are as follows:
Step 1. Firstly, use the collected sample data to establish a decision matrix for evaluating the effectiveness of the RCS implementation
$X = ( x i j ) m × n = [ x 11 x 12 ⋯ x 1 n x 21 x 22 ⋯ x 2 n ⋮ ⋮ ⋱ ⋮ x m 1 x m 2 ⋯ x n n ] ,$
$x i j$
represents the indicator value of the
-th indicator in the
-th year.
Step 2. Normalize these data to obtain the standardized decision matrix
$Z = ( z i j ) m × n = [ z 11 z 12 ⋯ z 1 n z 21 z 22 ⋯ z 2 n ⋮ ⋮ ⋱ ⋮ z m 1 z m 2 ⋯ z n n ] .$
If the indicator is a benefit-type indicator, the element
$z i j$
of the standardized decision matrix after normalization is given by
$z i j = x − min i ( x i j ) max i ( x i j ) − min i ( x i j ) .$
If the indicator is a cost-type indicator, the element of the standardized decision matrix after normalization is given by
$z i j = max i ( x i j ) − x max i ( x i j ) − min i ( x i j ) .$
Step 3. Calculate the average deviation of these processed data
$σ j = 1 m ∑ i = 1 m ( z i j − z ¯ j ) ,$
$z ¯ j$
represents the average value of
evaluation criteria, and
$σ j$
represents the average deviation of the evaluation criteria.
Step 4. Calculate the coefficient matrix of indicator correlation based on the Spearman correlation coefficient
$R = ( r i j ) m × n = [ r 11 r 12 ⋯ r 1 n r 21 r 22 ⋯ r 2 n ⋮ ⋮ ⋱ ⋮ r m 1 r m 2 ⋯ r n n ] .$
In this case, the symbol
$r i j$
represents the correlation coefficient between the indicators, and its expression is denoted as
$r i j = ∑ i = 1 n ( p i − p ¯ ) ( q i − q ¯ ) ∑ i = 1 n ( p i − p ¯ ) 2 ∑ i = 1 n ( q i − q ¯ ) 2 ,$
$p i$
$q i$
represent the rankings of different elements in the normalized decision matrix after sorting the indicators in descending or ascending order.
Step 5. Calculate the amount of information contained in the evaluation indicators.
$C j = σ j ∑ i = 1 n 1 − r i j .$
Step 6. Calculate the objective weights (
$W j$
) of each evaluation indicator.
$W j = C j ∑ j = 1 n C j .$
3.3. Game Theory-Based Weight Optimization
To avoid the subjective bias of expert judgments and the over-reliance on data in determining weight allocation, this study adopts a combination weighting method to determine the weights of
indicators, aiming to achieve a more reasonable weight distribution [
Assuming there are
different weighting methods, the weight vector
$W l = ( w 1 , l , w 2 , l , w 3 , l , … , w n , l )$$( l = 1 , 2 , … L )$
represents the weights of
indicators [
]. Through calculations, the basic weight set
$W l = { W 1 , W 2 , … , W L }$
is obtained, and the combined weight
$W int e$
is defined as the linear combination of these
basic weights, i.e.,
$W int e = ∑ l = 1 L α l W l .$
Among them,
$α l$
is the allocation coefficient for the
-th basic weight. Since there are
different weighting methods, there are infinitely many linear combinations of these basic weights, denoted as
$W int e *$
, where
$W int e *$
represents the optimal combination weight. This study utilizes game theory principles to optimize the allocation coefficients of the basic weights in the formula to minimize the discrepancy between
the optimal combination weight and all the basic weights. This can be expressed as
$min ∑ l = 1 L ‖ ( ∑ l = 1 L α l W l ) − W l ‖ 2 .$
$‖ U ‖ 2$
represents the second norm of vector
$α l$
is the variable to be determined and
$∑ α l = 1$
Using the MATLAB solver for calculations, obtaining the optimal value of
$α l *$
, the combination weights based on game theory can be represented as
$W int e * = ∑ l = 1 L α l * × W l .$
4. Model for Evaluating the Effectiveness of River Chief System
The core idea of set pair analysis is to consider certainty and uncertainty as a deterministic–uncertain system, recognizing that the “identity, diversity, and opposition” in things are interrelated,
interdependent, and mutually constrained. Under certain conditions, they can also transform into each other, and this relationship is described using the degree of association [
]. The mathematical expression for the four-element connection number model of set pair analysis is denoted as
$μ = a + b i 1 + c i 2 + d j .$
In the equation, $a$, $b$, $c$, and $d$ represent the components of the connection numbers. $a$ denotes the degree of identity, $b$ and $c$ represents the degree of diversity, and $i 1$ and $i 2$
represent the coefficient of diversity degree with the relationship $i 1 , i 2 ∈ [ − 1 , 1 ]$. $j$ is the coefficient of opposition degree and satisfies the constraint $j ≡ − 1$. Following the
principle of equal division, the range of values for the connection numbers is evenly divided into three parts, corresponding to the values of $i 1$, $i 2$, and $j$, resulting in $i 1 = 0.33$, $i 2 =
− 0.33$, and $j = − 1$. The symbol $d$ represents the degree of opposition in set pair analysis, and the components of connection numbers satisfy the equation $a + b + c + d = 1$, with the additional
conditions that $a ∈ [ − 1 , 1 ]$, $b ∈ [ − 1 , 1 ]$, $c ∈ [ − 1 , 1 ]$, and $d ∈ [ − 1 , 1 ]$.
Based on the fundamental concept of set pair analysis, an evaluation model for assessing the effectiveness of the RCS implementation is constructed. The specific steps are as follows.
Step 1. Calculate the four-element connection number
$μ i j$
between the sample value
$x i j$
and the evaluation level
based on the evaluation index standards. Evaluation indices that increase (decrease) as the evaluation level increases are referred to as benefit-type (cost-type) indices. Taking the benefit-type
indices in the evaluation index system of the effectiveness of the implementation of the RCS as an example, the formulas for calculating the four-element connection numbers of set pair analysis
corresponding to different levels of the indices are as follows.
$u i j 1 = { 1 1 − 2 ( x i j − s 1 j ) / ( s 2 j − s 1 j ) − 1 x i j ≥ s 1 j s 1 j > x i j ≥ s 2 j x i j < s 2 j$
$u i j 2 = { 1 − 2 ( s 1 j − x i j ) / ( s 1 j − s 0 j ) 1 1 − 2 ( x i j − s 2 j ) / ( s 3 j − s 2 j ) − 1 x i j < s 1 j s 1 j > x i j ≥ s 2 j s 2 j > x i j ≥ s 3 j x i j < s 3 j$
$u i j 3 = { − 1 1 − 2 ( s 2 j − x i j ) / ( s 2 j − s 1 j ) 1 1 − 2 ( x i j − s 3 j ) / ( s 4 j − s 3 j ) − 1 x i j ≥ s 1 j s 1 j > x i j ≥ s 2 j s 2 j > x i j ≥ s 3 j s 3 j > x i j ≥ s 4 j x i j <
s 4 j$
$u i j 4 = { − 1 1 − 2 ( s 3 j − x i j ) / ( s 3 j − s 2 j ) 1 x i j ≥ s 2 j s 2 j > x i j ≥ s 3 j s 3 j > x i j ≥ s 4 j$
$i = 1 , 2 , … , m ; j = 1 , 2 , … , n ; k = 1 , 2 , 3 , 4$
$s 1 j ∼ s 4 j$
represent the thresholds between adjacent evaluation criterion levels, and
$s 0 j$
represents another critical value for the first-level evaluation criterion.
Step 2. Calculate the relative membership degree
$v i j k$
of the single index
$x i j$
belonging to the standard level
through normalization and then obtain the single index connection number
$u i j$
$v * i j k = 0.5 + 0.5 u i j k i = 1 , 2 , … , m ; j = 1 , 2 , … , n ; k = 1 , 2 , 3 , 4$
$v i j k = v i j k * / ∑ k = 1 4 v i j k *$
$u i j = v i j 1 + v i j 2 i 1 + v i j 3 i 2 + v i j 4 j$
Step 3. The calculation of the evaluation coefficient
$u i$
for the implementation effectiveness of the RCS corresponding to the sample
$u i = v i 1 + v i 2 i 1 + v i 3 i 2 + v i 4 j .$
In the equation, $v i 1 = ∑ j = 1 n w j v i j 1$, $v i 2 = ∑ j = 1 n w j v i j 2$, $v i 3 = ∑ j = 1 n w j v i j 3$, $v i 4 = ∑ j = 1 n w j v i j 4$, and the $w j$ represents the weight of the
evaluation indicator, which satisfies the condition $∑ j = 1 n w j = 1$ and $∑ k = 1 4 v i j k = 1$.
Step 4. Determine the evaluation level of the effectiveness of the implementation of the RCS. By using the principle of equal division, divide the interval $[ − 1 , 1 ]$ into $n$ equal parts to
obtain the standard intervals for the evaluation levels of the sample I~IV. Specifically, when the value falls within interval $u > 0.5$, it is classified as Level I; when it falls within interval $0
< u ≤ 0.5$, it is classified as Level II; when it falls within interval $− 0.5 < u ≤ 0$, it is classified as Level III; when it falls within interval $u ≤ − 0.5$, it is classified as Level IV.
5. Application Case
This study takes Henan Province as an example and combines the actual provincial situation to study the implementation of the RCS in Henan Province from six aspects: organizational and management
effectiveness, water resource protection effectiveness, water environment governance effectiveness, water pollution prevention, and control effectiveness, water ecological restoration effectiveness,
and water area shoreline management effectiveness. A total of 26 evaluation indicators were selected from 2018 to 2021. The set pair analysis was expanded from the three-element connection number to
the four -element connection number. The game theory combination weighting method was adopted to optimize the weighting results of the 1–5 scale AHP method and the improved CRITIC method. This was
performed to assist in the set pair analysis by incorporating the game theory approach. Through this process, an evaluation model for the implementation effectiveness of the RCS was established. The
established model was then utilized to evaluate the implementation effectiveness of the RCS in Henan Province.
5.1. Study Area Overview
Henan Province is located in the Central Plains of China. It borders Anhui and Shandong to the east, Hebei and Shanxi to the north, Shaanxi to the west, and Hubei to the south. As shown in
Figure 1
, it spans across four major river basins: the Yellow River, Yangtze River, Huai River, and Hai River. Henan Province is an important transportation hub and center for the flow of people, goods, and
information, earning the reputation of being the “heartland of the nine provinces and a thoroughfare of ten provinces”. The province has a complex topography, with higher elevations in the west and
lower elevations in the east. There are numerous rivers, lakes, and canals, providing a rich and diverse water ecosystem. The total area of the province is 167,000 square kilometers, with 493 rivers
larger than 100 square kilometers in their basin area. The rivers crisscross the province in an east-west and north-south pattern, connecting all four major river basins. To fully implement the
concept of green development, promote ecological civilization construction, and respond to the call of the Central Committee of the Communist Party of China and the State Council, Henan Province
established a comprehensive RCS at the end of 2017, creating a five-level RCS that covers the provincial, municipal, county, township, and village levels.
5.2. System of Indicators
Evaluating the effectiveness of the RCS cannot rely solely on a single indicator; instead, multiple indicators should be carefully selected to provide a comprehensive analysis and yield reasonable
results. The selection of these evaluation indicators should adhere to principles of scientific rigor, representativeness, rationality, accessibility, and feasibility. Currently, the choice of
evaluation indicators for assessing the implementation of the RCS primarily focuses on factors such as river and lake health, water quality, and the ecological environment. Building upon prior
research, this study focuses on the comprehensive governance effectiveness of river and lake areas since the implementation of the RCS in Henan Province. It takes into account the “Provincial
Assessment Plan for the Integrated Governance of Four Waters under the River Chief System for the Year 2021” issued by Henan Province, public announcements, press conferences held since the
implementation of the RCS, expert consultations, and the practical situation and challenges in the region. A total of 26 representative indicators were selected from six aspects: organizational and
managerial effectiveness, water resource protection effectiveness, water environment governance effectiveness, water pollution control effectiveness, water ecological restoration effectiveness, and
water area shoreline management effectiveness. This selection was used to construct an evaluation indicator system for assessing the implementation effectiveness of the RCS in Henan Province, which
includes a goal layer, criterion layer, and indicator layer. The criterion layer reflects the focal points and challenges of the work of the RCS, while the indicator layer provides explanations and
further elaboration of the criterion layer. For a detailed indicator system, please refer to
Table 1
in this study.
5.3. Data Sources and Indicator Values
Data used in this study are sourced from “China Statistical Yearbook [
]”, “Henan Statistical Yearbook [
]”, “Henan Water Resources Bulletin [
]”, “Henan Ecological and Environmental Status Bulletin [
]”, “China Water Resources Statistical Yearbook [
]”, “China Urban Construction Statistical Yearbook [
]”, official announcements and bulletins from Henan Water Resources Department, as well as reports, documents, and plans related to water pollution prevention, ecological environment protection, and
released by Henan Water Resources Department, Ecological Environment Department, Agriculture and Rural Affairs Department, and other relevant units. The satisfaction results were obtained through
survey questionnaires, while other qualitative indicators were scored by invited experts. Performing a reliability analysis on the obtained indicator data reveals an Alpha coefficient of 0.99,
indicating a very high level of internal consistency in the numerical values of the indicators. This suggests that the data source exhibits high reliability and accuracy. Please refer to
Table 2
for the specific indicator system.
Sensitivity analysis of the indicators was conducted using Stata’s stepwise exclusion method, and the results indicate that even after excluding any single indicator out of the 26, the results still
exhibit a high level of stability and consistency. This implies that the variation in the results is nearly independent of the number of indicators, indicating that the current number of indicators
is reasonable and statistically significant. Please refer to
Figure 2
for specific results.
5.4. Evaluation Grading Criteria
Drawing from the grading standards of the Chinese water conservancy modernization index system and the “Implementation Plan for the 2022 Henan Province’s Battle against Air, Water, and Soil Pollution
and Agricultural and Rural Pollution Control”, and considering the target values and actual values of selected indicators in the regional development plan, in conjunction with expert opinions, the
evaluation indicators have been categorized into four levels: Level I (Excellent), Level II (Good), Level III (Qualified), and Level IV (Not Qualified). Specific criteria for indicator evaluation can
be found in
Table 3
5.5. Evaluation Results of River Chief System in Henan Province
5.5.1. Weighting Results of Evaluation Indicators
The basic weights to be combined in this study are the objective weights based on improved CRITIC and the subjective weights based on the 1–5 scale AHP. The combined weights minimize the deviation
between the combined weights and the subjective and objective weights. This balances the importance of indicators reflected by subjective and objective weights, allowing the combined weights to
reflect both the attributes of the indicators themselves and effectively utilize the information from these original data of the indicators. This study uses game theory to combine weights for the
improved CRITIC method and the 1–5 scale AHP method. The objective weighting result is denoted as
, the subjective weighting result is denoted as
, and the optimal linear combination obtained is denoted as
$w = 0.4632 α + 0.5368 β$
. The final weighting results can be found in
Table 4
5.5.2. Comprehensive Evaluation Result and Analysis
Using the four-element connection number model of set pair analysis, the effectiveness of the RCS implementation in Henan Province from 2018 to 2021 was evaluated. By calculating the single-indicator
connection numbers, the final connection numbers for the effectiveness of the RCS implementation in different years in Henan Province were obtained. Based on the principle of average score, the
evaluation grade of the RCS implementation in Henan Province from 2018 to 2021 was determined. Additionally, the entropy weight TOPSIS method was used to evaluate the indicators with both positive
and negative directions (process omitted). The evaluation results from this method were compared with the results derived from the set pair analysis model, and the outcomes of both evaluation models
were presented in
Table 5
According to the four-element connection number model of set pair analysis, the evaluation of the implementation effectiveness of the RCS in Henan Province from 2018 to 2021 revealed correlation
coefficients of 0.4454, 0.5289, 0.5734, and 0.6455, corresponding to the grades of “good”, “excellent”, “excellent”, and “excellent”, respectively. The increasing connection numbers indicate the
gradual popularization and improvement of the RCS. Among them, the highest connection number in 2021 indicates that the river and lake management mechanism has gradually improved, achieving good
results in river and lake protection. The connection number in 2020 is also relatively high, which is in line with the policies implemented by Henan Province. In 2020, Henan Province started the
“Clearing-Up the Four Chaotic Practices” campaign and fully implemented the “River Chief+” mechanism. Building on the foundation of 2020, in 2021, the “River Chief+” mechanism was further
implemented, integrating the work of River Chiefs with that of the Procurator, Police Chief, River Custodian, Civilian River Chiefs, and Grid Chiefs. Additionally, the implementation was improved
through the utilization of the Internet with the establishment of the “River Chief+ Internet” system, greatly enhancing the effectiveness of the RCS. In 2018, the evaluation grade for the
implementation effectiveness of the RCS in Henan Province was “good”, as it was just getting started, with the establishment of five levels of River Chiefs at the end of 2017. After a year of effort,
the RCS in Henan Province transitioned from being “in name” to “in practice”. As the RCS was further promoted in 2019, Henan Province embarked on a new model of ecological river governance, achieving
an evaluation grade of “excellent”. Overall, the implementation effectiveness of the RCS in Henan Province is significant. The evaluation scores have been increasing year by year, indicating the
deepening of the RCS work. The evaluation results are consistent with the actual situation, demonstrating the feasibility of the evaluation model. Overall, the use of the four-element connection
number model of set pair analysis for evaluating the effectiveness of the RCS implementation in Henan Province yielded consistent final ranking results with the entropy weight TOPSIS method. However,
the entropy weight TOPSIS method did not consider the relative proximity of different indicators to different levels. Compared with the set pair analysis model, the evaluation of the ranking results
using the entropy weight TOPSIS method was slightly rough and deviated from the actual situation. In contrast, the four-element connection number model of set pair analysis is more accurate,
reasonable, feasible, and in line with the actual circumstances.
5.5.3. Analysis of Single Indicator Evaluation Results
Based on the four-element connection numbers of the set pair analysis model, the coefficients for individual indicators were calculated. The calculation results can be found in
Table 6
Table 7
From the evaluation results, it can be seen that the organizational system of the RCS at all levels in Henan Province has been continuously improved. The implementation of the RCS has been
strengthened, and the institutional framework has become increasingly sound. Various systems have been effectively implemented. The work mechanism has been continuously optimized, leading to
improvements in the ecological environment of rivers and lakes. The conditions for guaranteeing the implementation of the RCS have been continuously strengthened. The construction of organizational
systems and institutional development of the RCS at all levels in Henan Province has achieved significant results. There has been a noticeable improvement in the ecological environment of rivers and
lakes, and the water quality of rivers and lakes has been effectively enhanced.
In terms of organizational management, unlike in 2018, the evaluation grades for the informatization of supervision and the soundness of laws and regulations for the years 2019–2021 were rated as
“excellent”. This is because, over the past three years, Henan Province has been dedicated to the development of informatization in supervision while incorporating different provisions based on the
actual provincial situation. In terms of informatization supervision, in 2019, Henan Province began promoting the RCS information system throughout the city, establishing the “Smart River Chief”
river monitoring system, enabling important rivers and lakes to be “instantly visible and traceable” throughout the entire process. In 2020, drones were utilized for river patrols, and the use of the
Henan Province River Chief App was actively promoted, resulting in the construction of a municipal-level smart river and lake management platform. In 2021, Henan Province actively employed means such
as satellite remote sensing and drones to investigate and identify issues related to “disorderly” practices, followed by appropriate rectification. Regarding laws and regulations, in 2019, the Henan
Provincial Procuratorate and the Provincial River Chief Office jointly issued the “Interim Measures for Establishing the Henan Provincial People’s Procuratorate River Chief Office”, becoming one of
the leading provinces along the Yellow River to establish procuratorate river chief offices. From 2020 to 2021, Henan Province has made great efforts to implement the “River Chief+” system,
facilitating joint law enforcement.
In terms of water resources conservation, at the end of 2019, to implement the “National Water Conservation Action Plan” issued by the National Development and Reform Commission and the Ministry of
Water Resources, as well as the “Notice on the Division of Work for Implementing the National Water Conservation Action Plan” issued by the Office of the National Development and Reform Commission
and the Office of the Ministry of Water Resources, Henan Province formulated the “Henan Province Water Conservation Action Implementation Plan” based on local conditions and began its implementation.
The evaluation results indicated that the total groundwater resources were rated as “pass” in 2019 and remained consistently rated as “excellent” in the following two years.
In terms of water environment management and aquatic ecological restoration, in 2021, the General Office of the People’s Government of Henan Province issued the “Henan Province Integrated Water
Governance Plan (2021–2035)”, which includes specific plans for water environment management, aquatic ecological restoration, water disaster prevention and control, and comprehensive groundwater
management. The plan was diligently implemented throughout the province, leading to significant improvements in the compliance rate of centralized drinking water sources, surface water environmental
quality, and soil and water conservation rate.
In terms of water pollution prevention and control, starting in 2018, Henan Province has made great efforts to promote the treatment and resource utilization of waste from livestock and poultry
farming. By 2021, the comprehensive utilization rate of livestock and poultry manure had reached the level of “excellent”. To fully implement the decisions and deployments of the CPC Central
Committee, the State Council, the Provincial Party Committee, and the Provincial Government regarding the pollution prevention and control battle, various levels of river chiefs have been actively
engaged in the implementation of the “2021 Water Pollution Prevention and Control Battle Implementation Plan”. This has resulted in significant progress in the battle against water pollution, as
indicated by the transformation of the harmless treatment rate of sludge from a “passing” grade in 2020 to an “excellent” grade. However, due to Henan Province being an agricultural province, it is
challenging to effectively control the use of fertilizers and pesticides. Continuous efforts are still needed in this regard.
In terms of waterway and shoreline management, Henan Province has carried out extensive activities to rectify the “disorderly” practices in rivers and lakes. River chiefs at all levels have actively
conducted river patrols and, with the assistance of various departments, implemented joint law enforcement measures to address numerous instances of “disorderly” practices. As a result, significant
improvements have been made in the treatment of black and odorous water bodies, with a consistently high level of rectification achieved in addressing the “disorderly” practices. The compliance rate
of embankments has also improved significantly, shifting from a “failing” grade in 2018 to an “excellent” grade. Additionally, river chiefs at various levels have coordinated efforts to promote
comprehensive basin management.
In addition, the treatment rate of rural domestic sewage has shifted from unqualified to qualified, indicating an increasing emphasis on the construction of river chief organizational systems in
rural areas. Higher-level authorities should increase financial support for rural RCS work, expand the team of river chief offices, and provide strong support for the implementation of the rural RCS.
This collective effort aims to protect the health and well-being of rivers and lakes by ensuring effective management and conservation of water environments.
Overall, the majority of indicators are rated as “excellent”, reflecting the effective implementation of responsibilities by river chiefs at all levels, the strengthening of targeted problem
rectification, and the integral role of innovative work systems and mechanisms. This is inextricably linked to the fact that the people of Henan have kept in mind General Secretary Xi Jinping’s
ardent wish of “guarding the blue water of a river”. Henan Province has consistently implemented the national “14th Five-Year Plan”, reinforcing the RCS and constructing happy rivers and lakes.
Efforts are being made to comprehensively upgrade river and lake management and protection, following the working arrangements of the Provincial Party Committee, Provincial Government, and Ministry
of Water Resources while considering the reality of Henan Province. Steady progress is being made toward the goal of “clean water, unobstructed rivers, green shores, and ecological balance”. However,
continuous efforts are still needed to advance the RCS. There is still significant room for improvement in the protection of water ecological environment and water environmental governance in Henan
Province. It is necessary to persistently strive for improvement and sustain innovation, and these experiences are also valuable for other regions as a reference.
6. Conclusions
The implementation of the RCS is not only aimed at improving the ecological environment of river basins but also at enhancing the quality of life for residents, improving people’s happiness index,
and promoting harmonious coexistence between humans and nature. Based on this, this study constructs an evaluation index system for the implementation effectiveness of the RCS. It includes six
aspects with a total of 26 evaluation indicators: organizational management effectiveness, water resource protection effectiveness, water environmental governance effectiveness, water pollution
prevention and control effectiveness, water ecological restoration effectiveness, and water body and shoreline management effectiveness. This index system can provide a reference for evaluating the
implementation effectiveness of the RCS in different regions.
Due to the limitations of the traditional CRITIC method in the process of weighting, this study proposes improvements to the traditional CRITIC objective weighting method from three aspects: contrast
intensity, correlation coefficient, and variation degree. These improvements aim to achieve objective weighting of evaluation indicators. Additionally, considering the significant errors associated
with the use of the 1–9 scale, the AHP 1–5 scale method is used for the subjective weighting of indicators. The combination weighting method with game theory is employed to optimize the results
obtained from the subjective and objective weighting methods, thus obtaining the optimal weighting results and determining the weights of evaluation indicators. Taking Henan Province as an example,
the traditional set pair analysis coefficient is expanded to the four-element connection numbers, and an evaluation model for the implementation effectiveness of the RCS is constructed.
The evaluation results show that the implementation effectiveness of the RCS in Henan Province from 2018 to 2021 was rated as “good” and “excellent”, with the evaluation scores increasing year by
year. Based on the evaluation results, it can be seen that by the end of 2021, the RCS in Henan Province has been established, and a working mechanism has been formed. The organizational system is
continuously improving, and the integration and development of river and lake management and protection are gradually being realized. With the continuous improvement of the river chief responsibility
system, significant achievements have been made in river and lake management and protection. Public participation has been strengthened, and social supervision mechanisms have been continuously
improved. These evident achievements are consistent with the actual situation, which validates the scientific and accurate nature of this method.
However, there are still some significant issues in river and lake management and protection that cannot be ignored. For example, the task of river pollution control remains challenging, the
enforcement and supervision of river and lake regulations need to be strengthened, and there is a weak capacity for grassroots governance. Based on these issues, the comprehensive implementation of
the RCS in Henan Province should focus on the following aspects: First, it is necessary to increase the promotion and training of policies and regulations related to the RCS to enhance public
awareness and involvement in the system. Second, it is important to strengthen administrative law enforcement and supervision and intensify efforts to combat illegal activities in rivers and lakes.
Finally, it is crucial to enhance the capacity building for grassroots river and lake management and protection, increase financial investment in the RCS, and further promote the standardization and
information management of the system. By addressing these issues and strengthening the key aspects mentioned above, Henan Province can make further progress in the comprehensive implementation of the
RCS and improve the effectiveness of river and lake management and protection.
Using this model to evaluate the effectiveness of the RCS has multiple advantages. Firstly, the model allows for year-by-year research and analysis of the implementation effectiveness in the same
region, providing insights into the progress and trends. Secondly, it enables comparative analysis of the implementation effectiveness among different regions during the same period, facilitating the
exchange of experiences and lessons learned. Additionally, the model can identify weak areas and influencing factors in the implementation process, helping implementers to provide targeted
recommendations and take appropriate measures to enhance river and lake management capabilities more efficiently. Most importantly, this evaluation model is of great significance in maintaining the
health of rivers and lakes and achieving the sustainable utilization of their functions. Through a scientific assessment of the implementation effectiveness of the RCS, it can drive improvements and
optimization in river and lake management, leading to more sustainable water resource management.
Author Contributions
Methodology, J.L. and L.S.; Software, X.C.; Formal analysis, L.S.; Investigation, L.Q.; Data curation, L.Q.; Writing—original draft, X.C.; Writing—review & editing, J.L. and L.S.; Supervision, J.L.;
Project administration, Y.L. and Y.X.; Funding acquisition, Y.L. and Y.X. All authors have read and agreed to the published version of the manuscript.
National Natural Science Foundation of China (No.52179015); Key Science and Technology Projects in Henan Province (No.201300311400); Key Science and Technology Projects in Henan Province
(No.232102321114); Ministry of Education “Chunhui plan” cooperative scientific research project (No.HZKY20220268).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Data used in this study are sourced from “China Statistical Yearbook”, “Henan Statistical Yearbook”, “Henan Water Resources Bulletin”, “Henan Ecological and Environmental Status Bulletin”, “China
Water Resources Statistical Yearbook”, “China Urban Construction Statistical Yearbook”, official announcements and bulletins from Henan Water Resources Department, as well as reports, documents, and
plans related to water pollution prevention, ecological environment protection, and released by Henan Water Resources Department, Ecological Environment Department, Agriculture and Rural Affairs
Department, and other relevant units. The satisfaction results were obtained through survey questionnaires, while other qualitative indicators were scored by invited experts.
Conflicts of Interest
The authors declare no conflict of interest.
1. Bashir, I.; Lone, F.A.; Bhat, R.A.; Mir, S.A.; Dar, Z.A.; Dar, S.A. Concerns and threats of contamination on aquatic ecosystems. In Bioremediation and Biotechnology: Sustainable Approaches to
Pollution Degradation; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1–26. [Google Scholar]
2. Zhou, H.; Zhang, X.; Xu, H.; Ling, H.; Yu, P. Influences of climate change and human activities on Tarim River runoffs in China over the past half century. Environ. Earth Sci. 2012, 67, 231–241.
[Google Scholar] [CrossRef]
3. Deng, X.P.; Shan, L.; Zhang, H.; Turner, N.C. Improving agricultural water use efficiency in arid and semiarid areas of China. Agric. Water Manag. 2006, 80, 23–40. [Google Scholar] [CrossRef]
4. Luo, P.; Mu, Y.; Wang, S.; Zhu, W.; Mishra, B.K.; Huo, A.; Zhou, M.; Lyu, J.; Hu, M.; Duan, W.; et al. Exploring sustainable solutions for the water environment in Chinese and Southeast Asian
cities. Ambio 2021, 51, 1199–1218. [Google Scholar] [CrossRef]
5. Li, Y.; Tong, J.; Wang, L. Full implementation of the river chief system in China: Outcome and weakness. Sustainability 2020, 12, 3754. [Google Scholar] [CrossRef]
6. Hao, Y.; Wan, T. Top-Down Mechanisms: Governmental River Chiefs. In The River Chief System and an Ecological Initiative for Public Participation in China; Springer Nature Singapore: Singapore,
2023; pp. 17–45. [Google Scholar]
7. Ouyang, J.; Zhang, K.; Wen, B.; Lu, Y. Top-down and bottom-up approaches to environmental governance in China: Evidence from the River Chief System (RCS). Int. J. Environ. Res. Public Health 2020
, 17, 7058. [Google Scholar] [CrossRef]
8. Ding, R.; Sun, F. Impact of River Chief System on Green Technology Innovation: Empirical Evidence from the Yangtze River Economic Belt. Sustainability 2023, 15, 6575. [Google Scholar] [CrossRef]
9. Zhang, L. Implementation effectiveness evaluation of grass-roots river chief system based on fuzzy evaluation. Yangtze River 2022, 53, 8–13. [Google Scholar]
10. Wang, L.; Tong, J.; Li, Y. River Chief System (RCS): An experiment on cross-sectoral coordination of watershed governance. Front. Environ. Sci. Eng. 2019, 13, 64. [Google Scholar] [CrossRef]
11. Wang, J.; Wan, X.; Tu, R. Game Analysis of the Evolution of Local Government’s River Chief System Implementation Strategy. Int. J. Environ. Res. Public Health 2022, 19, 1961. [Google Scholar] [
12. Zhang, Z.; Xiong, C.; Yang, Y.; Liang, C.; Jiang, S. What Makes the River Chief System in China Viable? Examples from the Huaihe River Basin. Sustainability 2022, 14, 6329. [Google Scholar] [
13. Wang, C.; Ding, Z.; Jiang, P.; Pan, X. Systematic review and perspective of the application of multi-index comprehensive evaluation method. Plant Prot. 2022, 48, 187–192+206. [Google Scholar]
14. Fu, Y.; Liu, Y.; Xu, S.; Xu, Z. Assessment of a Multifunctional River Using Fuzzy Comprehensive Evaluation Model in Xiaoqing River, Eastern China. Int. J. Environ. Res. Public Health 2022, 19,
12264. [Google Scholar] [CrossRef]
15. Li, C.; Peng, W.; Shen, X.; Gu, J.; Zhang, Y.; Li, M. Comprehensive Evaluation of the High-Quality Development of the Ecological and Economic Belt along the Yellow River in Ningxia.
Sustainability 2023, 15, 11486. [Google Scholar] [CrossRef]
16. Wang, F.; Zhang, Z.; Xu, J. Study on the Multifactor Degree Set Pair Analysis in Guizhou Drought Early Warning. Math. Pract. Theory 2015, 45, 199–206. [Google Scholar]
17. Chen, W.; Yang, Y. Safety evaluation of coal enterprise based on comprehensive weighting and set pair analysis. Min. Saf. Environ. Prot. 2020, 47, 105–109. [Google Scholar]
18. Zhao, J.; Gao, H.; Cheng, J. Research on Evolution of Participation Willingness in Network Public Opinion: Based on Three-element Connection Number. Inf. Sci. 2017, 35, 118–120+140. [Google
19. Wang, T.C.; Lee, H.D. Developing a fuzzy TOPSIS approach based on subjective weights and objective weights. Expert Syst. Appl. 2009, 36, 8980–8985. [Google Scholar] [CrossRef]
20. Diakoulaki, D.; Mavrotas, G.; Papayannakis, L. Determining objective weights in multiple criteria problems: The critic method. Comput. Oper. Res. 1995, 22, 763–770. [Google Scholar] [CrossRef]
21. Ocampo, L.; Aro, J.L.; Evangelista, S.S.; Maturan, F.; Casinillo, L.; Yamagishi, K.; Selerio, E., Jr. Composite ecotourism potential index based on an integrated stochastic CRITIC-weighted sum
method. Curr. Issues Tour. 2023, 26, 2513–2542. [Google Scholar] [CrossRef]
22. Su, J.; Su, K.; Wang, S. Evaluation of digital economy development level based on multi-attribute decision theory. PLoS ONE 2022, 17, e0270859. [Google Scholar] [CrossRef]
23. Saaty, T.L. The Analytic Hierarchy Process [M]; Mc Graw Hill Company: New York, NY, USA, 1980. [Google Scholar]
24. Yu, D.; Kou, G.; Xu, Z.; Shi, S. Analysis of collaboration evolution in AHP research: 1982–2018. Int. J. Inf. Technol. Decis. Mak. 2021, 20, 7–36. [Google Scholar] [CrossRef]
25. Xiang, M.; Huang, H.; He, C.; Li, R.; Zeng, L. Comprehensive Performance Evaluation Strategy for Communication Networks Selection in Smart Grid. Cybern. Inf. Technol. 2016, 16, 39–56. [Google
Scholar] [CrossRef]
26. Wang, Y.; Hou, L.; Li, M.; Zheng, R. A Novel Fire Risk Assessment Approach for Large-Scale Commercial and High-Rise Buildings Based on Fuzzy Analytic Hierarchy Process (FAHP) and Coupling
Revision. Int. J. Environ. Res. Public Health 2021, 18, 7187. [Google Scholar] [CrossRef]
27. Haoyu, Z.; Wensheng, W.; Hao, W. TOC prediction using a gradient boosting decision tree method: A case study of shale reservoirs in Qinshui Basin. J. Pet. Sci. Eng. 2023, 221, 111271. [Google
28. Saraswat, S.; Sharma, V.; Khan, M.A. Finding the target customers through new approach in Analytic Hierarchy Process (AHP) using Big 5 personality traits. Int. J. Syst. Assur. Eng. Manag. 2023,
14, 1028–1039. [Google Scholar] [CrossRef]
29. Chen, C.Y.; Huang, J.J. Deriving Fuzzy Weights from the Consistent Fuzzy Analytic Hierarchy Process. Mathematics 2022, 10, 3499. [Google Scholar] [CrossRef]
30. Lobzang, C.; Kumar, S.G. Evaluation of groundwater heavy metal pollution index through analytical hierarchy process and its health risk assessment via Monte Carlo simulation. Process Saf.
Environ. Prot. 2023, 170, 855–864. [Google Scholar]
31. Pascoe, S. A Simplified Algorithm for Dealing with Inconsistencies Using the Analytic Hierarchy Process. Algorithms 2022, 15, 442. [Google Scholar] [CrossRef]
32. Kara, C.; Iranmanesh, A. Modelling and Assessing Sustainable Urban Regeneration for Historic Urban Quarters via Analytical Hierarchy Process. Land 2022, 12, 72. [Google Scholar] [CrossRef]
33. Huang, W.G.; Zhang, S.W.; Wang, G.Z.; Huang, J.; Lu, X.; Wu, S.L.; Wang, Z.T. Modeling Methodology for Site Selection Evaluation of Underground Coal Gasification Based on Combination Weighting
Method with Game Theory. ACS Omega 2023, 8, 11544–11555. [Google Scholar] [CrossRef]
34. Wang, B.; Wan, J.; Zhu, Y. River Chief System: An institutional analysis to address watershed governance in China. Water Policy 2021, 23, 1435–1444. [Google Scholar] [CrossRef]
35. Li, J.; Shi, X.; Wu, H.; Liu, L. Trade-off between economic development and environmental governance in China: An analysis based on the effect of River Chief System. China Econ. Rev. 2020, 60,
101403. [Google Scholar] [CrossRef]
36. Xu, X.; Cheng, Y.; Meng, X. River Chief System, Emission Abatement, and Firms’ Profits: Evidence from China’s Polluting Firms. Sustainability 2022, 14, 3418. [Google Scholar] [CrossRef]
37. Gao, D.; Liu, C.; Wei, X.; Liu, Y. Can River Chief System Policy Improve Enterprises’ Energy Efficiency? Evidence from China. Int. J. Environ. Res. Public Health 2023, 20, 2882. [Google Scholar]
38. Li, Z.; Ling-zhi, L.; Ji-kun, H. The River Chief System and agricultural non-point source water pollution control in China. J. Integr. Agric. 2021, 20, 1382–1395. [Google Scholar]
39. Li, W.; Zhou, Y.; Deng, Z. The Effectiveness of “River Chief System” Policy: An Empirical Study Based on Environmental Monitoring Samples of China. Water 2021, 13, 1988. [Google Scholar] [
40. Vahedi, N.; Ghassemieh, M. Preference of Hybrid Steel Frame with Exclusive Seismic Performance Using the Analytic Hierarchy Process. J. Earthq. Eng. 2022, 26, 5425–5446. [Google Scholar] [
41. Zhu, H.; Liao, Q.; Qu, B.; Hu, L.; Wang, H.; Gao, R.; Zhang, Y. Relationship between the main functional groups and complex permittivity in pre-oxidised lignite at terahertz frequencies based on
grey correlation analysis. Energy 2023, 278, 127821. [Google Scholar] [CrossRef]
42. Chen, H.; Xu, Z.; Liu, Y.; Huang, Y.; Yang, F. Urban Flood Risk Assessment Based on Dynamic Population Distribution and Fuzzy Comprehensive Evaluation. Int. J. Environ. Res. Public Health 2022,
19, 16406. [Google Scholar] [CrossRef]
43. de Lima Silva, D.F.; Ferreira, L.; de Almeida Filho, A.T. Preference disaggregation on TOPSIS for sorting applied to an economic freedom assessment. Expert Syst. Appl. 2023, 215, 119341. [Google
Scholar] [CrossRef]
44. Zhao, K.Q.; Xuan, A.L. Set Pair Theory—A New Theory Method of Non-Define and Its Applications. Syst. Eng. 1996, 14, 18–23+72. [Google Scholar]
45. Xiang, W.; Yang, X.; Babuna, P.; Bian, D. Development, Application and Challenges of Set Pair Analysis in Environmental Science from 1989 to 2020: A Bibliometric Review. Sustainability 2021, 14,
153. [Google Scholar] [CrossRef]
46. Li, B.; Zhang, W.; Liu, Z.; Meng, H. Groundwater Quality Evaluation Model Based on Set Pair Analysis and Its Application. Proc. Inst. Civ. Eng.-Water Manag. 2021, 176, 212–222. [Google Scholar] [
47. Zhang, L.; Li, H. Construction Risk Assessment of Deep Foundation Pit Projects Based on the Projection Pursuit Method and Improved Set Pair Analysis. Appl. Sci. 2022, 12, 1922. [Google Scholar] [
48. Shen, Q.; Lou, J.; Liu, Y.; Jiang, Y. Hesitant fuzzy multi-attribute decision making based on binary connection number of set pair analysis. Soft Comput. 2021, 25, 14797–14807. [Google Scholar] [
49. Yuan, J.; Wang, L.; Li, Y.; Wang, Y.; Ma, T.; Luo, X. Set pair prediction for Chinese natural gas energy security based on higher-order Markov chain with risk attitude. Resour. Policy 2022, 77,
102741. [Google Scholar] [CrossRef]
50. Han, Z.; Meng, J.; Zhu, L.; Cheng, H.; Wu, Y.; Wei, C. Quantifying trade-offs of land multifunctionality evaluated by set pair analysis in ecologically vulnerable areas of northwestern China.
Land Degrad. Dev. 2022, 33, 1999–2013. [Google Scholar] [CrossRef]
51. Jiang, Y.L.; Zhao, K.Q. Application and development of set pair analysis inartificial intelligence: A survey. CAAI Trans. Intell. Syst. 2019, 14, 28–43. [Google Scholar]
52. Bojan, S.; Zorica, S. Prioritisation in the analytic hierarchy process for real and generated comparison matrices. Expert Syst. Appl. 2023, 225, 120015. [Google Scholar]
53. Sun, H.; Wei, G.W.; Chen, X.D.; Mo, Z.W. Extended EDAS method for multiple attribute decision making in mixture z-number environment based on CRITIC method. J. Intell. Fuzzy Syst. 2022, 43,
2777–2788. [Google Scholar] [CrossRef]
54. Zhang, D.; Yang, Y.; Liu, Y.; He, J. Hybrid HVDC line pilot protection method based on EMD and Spearman correlation coefficient. Power Syst. Prot. Control. 2021, 49, 1–11. [Google Scholar]
55. Yang, Y.; Sun, H.; Dai, Z.; Wu, M.; Fu, S. Comprehensive evaluation of majors offered by universities based on combination weighting. Eval. Program Plan. 2023, 97, 102202. [Google Scholar] [
CrossRef] [PubMed]
56. Zhao, K.Q. Disposal and Description of Uncertainties Basedon The Set Pair Analysis. Inf. Control. 1995, 24, 162–166. [Google Scholar]
57. National Bureau of Statistics of the People’s Republic of China. China Statistical Yearbook 2019; Statistics Press: Beijing, China, 2020.
58. National Bureau of Statistics of the People’s Republic of China. China Statistical Yearbook 2020; Statistics Press: Beijing, China, 2021.
59. National Bureau of Statistics of the People’s Republic of China. China Statistical Yearbook 2021; Statistics Press: Beijing, China, 2022.
60. National Bureau of Statistics of the People’s Republic of China. China Statistical Yearbook 2022; Statistics Press: Beijing, China, 2023.
61. National Bureau of Statistics of the People’s Republic of China. Henan Statistical Yearbook 2019; Statistics Press: Beijing, China, 2020.
62. National Bureau of Statistics of the People’s Republic of China. Henan Statistical Yearbook 2020; Statistics Press: Beijing, China, 2021.
63. National Bureau of Statistics of the People’s Republic of China. Henan Statistical Yearbook 2021; Statistics Press: Beijing, China, 2022.
64. National Bureau of Statistics of the People’s Republic of China. Henan Statistical Yearbook 2022; Statistics Press: Beijing, China, 2023.
65. Henan Provincial Department of Water Resources, China. Henan Province Water Resources Bulletin 2018; Henan Hydrological and Water Resources Survey and Evaluation Center: Zhengzhou, China, 2019.
66. Henan Provincial Department of Water Resources, China. Henan Province Water Resources Bulletin 2019; Henan Hydrological and Water Resources Survey and Evaluation Center: Zhengzhou, China, 2020.
67. Henan Provincial Department of Water Resources, China. Henan Province Water Resources Bulletin 2020; Henan Hydrological and Water Resources Survey and Evaluation Center: Zhengzhou, China, 2021.
68. Henan Provincial Department of Water Resources, China. Henan Province Water Resources Bulletin 2021; Henan Hydrological and Water Resources Survey and Evaluation Center: Zhengzhou, China, 2022.
69. Henan Provincial Department of Ecology and Environment, China. Henan Province Ecological and Environmental Status Bulletin 2018; Henan Provincial Department of Ecology and Environment: Zhengzhou,
China, 2019.
70. Henan Provincial Department of Ecology and Environment, China. Henan Province Ecological and Environmental Status Bulletin 2019; Henan Provincial Department of Ecology and Environment: Zhengzhou,
China, 2020.
71. Henan Provincial Department of Ecology and Environment, China. Henan Province Ecological and Environmental Status Bulletin 2020; Henan Provincial Department of Ecology and Environment: Zhengzhou,
China, 2021.
72. Henan Provincial Department of Ecology and Environment, China. Henan Province Ecological and Environmental Status Bulletin 2021; Henan Provincial Department of Ecology and Environment: Zhengzhou,
China, 2022.
73. Ministry of Water Resources of the People’s Republic of China. China Water Resources Statistical Yearbook 2019; Water Resources and Hydropower Press: Beijing, China, 2020.
74. Ministry of Water Resources of the People’s Republic of China. China Water Resources Statistical Yearbook 2020; Water Resources and Hydropower Press: Beijing, China, 2021.
75. Ministry of Water Resources of the People’s Republic of China. China Water Resources Statistical Yearbook 2021; Water Resources and Hydropower Press: Beijing, China, 2022.
76. Ministry of Water Resources of the People’s Republic of China. China Water Resources Statistical Yearbook 2022; Water Resources and Hydropower Press: Beijing, China, 2023.
77. Ministry of Housing and Urban-Rural Development of the People’s Republic of China. China Urban Construction Statistical Yearbook 2018; China Architecture & Building Press: Beijing, China, 2019.
78. Ministry of Housing and Urban-Rural Development of the People’s Republic of China. China Urban Construction Statistical Yearbook 2019; China Architecture & Building Press: Beijing, China, 2020.
79. Ministry of Housing and Urban-Rural Development of the People’s Republic of China. China Urban Construction Statistical Yearbook 2020; China Architecture & Building Press: Beijing, China, 2021.
80. Ministry of Housing and Urban-Rural Development of the People’s Republic of China. China Urban Construction Statistical Yearbook 2021; China Architecture & Building Press: Beijing, China, 2022.
Table 1. Evaluation Index System for Assessing the Implementation Effectiveness of the RCS in Henan Province.
Goal Layer Criterion Layer Indicator Layer
Establishment and Operation of Working Mechanism (C1)
River Inspection Task Implementation (C2)
organizational and managerial effectiveness (S1) Regulatory Digitization (C3)
Enforcement Oversight (C4)
Problem Rectification Status (C5)
Public Satisfaction Level (C6)
Underground Water Resources Total Volume (C7)
water resource protection effectiveness (S2) Efficient Utilization Coefficient of Irrigation Water in Farmland (C8)
Recycling Rate of Industrial Water (C9)
Rural Tap Water Coverage Rate (C10)
Compliance Rate of Centralized Drinking Water Source Water Quality (C11)
water environment governance effectiveness (S3) Compliance Rate of Surface Water Environmental Quality (C12)
Effectiveness Percentage of Surface Water in Class V Poor Quality (C13)
Comprehensive Utilization Rate of Livestock and Poultry Manure in Animal Farming (C14)
Harmless Treatment Rate of Sludge (C15)
Fertilizer Application Intensity (C16)
water pollution control effectiveness (S4) Pesticide Application Intensity (C17)
Urban Wastewater Centralized Treatment Rate (C18)
Rural Domestic Wastewater Treatment Rate (C19)
Rate of Harmless Treatment of Household Waste (C20)
Rate of Soil and Water Conservation (C21)
water ecological restoration effectiveness (S5) Rate of wetland conservation (C22)
Construction Status of Wetland Parks (C23)
Rate of Compliance with Levee Standards (C24)
water area shoreline management effectiveness (S6) Rate of Rectification of Four Disorderly Issues (C25)
Elimination Rate of Black and Odorous Water Bodies in Built-up Areas (C26)
Table 2. Values of Indicators for Evaluating the Implementation Effectiveness of the RCS in Henan Province.
Indicators Year 2018 Year 2019 Year 2020 Year 2021
C1 (Score) 85 90 90 90
C2 (Score) 100 100 100 100
C3 (Score) 80 85 90 90
C4 (Score) 80 85 90 90
C5 (Score) 88 90 95 96
C6 (Score) 83 85 88 88
C7 (1 × 10^8 m^3) 188 119.1 185.8 257
C8 0.614 0.615 0.617 0.62
C9 (%) 94.83 95.93 96.61 90.07
C10 (%) 87 91 91 91
C11 (%) 96.80 100 100 100
C12 (%) 60.40 64 73.70 79.90
C13 (%) 3.50 0 0 0
C14 (%) 75 77.20 79.50% 82
C15 (%) 93.75 94.64 79.20 99.37
C16 (kg/hm^2) 1310 1251 1186 1143
C17 (kg/hm^2) 21.48 20.12 18.74 17.83
C18 (%) 97.30 97.72 98.30 99.21
C19 (%) 20.00 24.70 30 33.40
C20 (%) 99.71 99.65 99.94 100
C21 (%) 87.05 87.27 87.36 87.27
C22 (%) 47.80 47.80 52.19 56.58
C23 (Score) 88 91 95 90
C24 (%) 66.56 66.86 67.5 68.36
C25 (%) 20.21 99.20 100 100
C26 (%) 92.00 100.00 100.00 100.00
Table 3. Grading Criteria for Indicators Used to Evaluate the Implementation Effectiveness of the RCS in Henan Province.
Indicators Classification Levels and Standards
Level I Level II Level III Level IV
C1 (Score) [85, 100] [75, 85] [60, 75] [0, 60]
C2 (Score) [95, 100] [80, 95] [60, 80] [0, 60]
C3 (Score) [85, 100] [60, 85] [40, 60] [0, 40]
C4 (Score) [85, 100] [75, 85] [60, 75] [0, 60]
C5 (Score) [85, 100] [75, 85] [60, 75] [0, 60]
C6 (Score) [80, 100] [70, 80] [60, 70] [0, 60]
C7 (1 × 10^8 m^3) [180, +∞) [120, 180] [100, 120] [0, 100]
C8 [0.6, 1] [0.5, 0.6] [0.4, 0.5] [0, 0.4]
C9 (%) [91, 100] [81, 91] [71, 81] [0, 71]
C10 (%) [85, 100] [75, 85] [60, 75] [0, 60]
C11 (%) [97.7, 100] [75, 97.7] [60, 75] [0, 60]
C12 (%) [80, 100] [70, 80] [56.4, 70] [0, 56.4]
C13 (%) [0, 9.6] [9.6, 20] [20, 50] [50, 100]
C14 (%) [80, 100] [70, 80] [60, 70] [0, 60]
C15 (%) [90, 100] [80, 90] [60, 80] [0, 60]
C16 (kg/hm^2) [0, 225] [225, 240] [240, 250] (−∞, 250]
C17 (kg/hm^2) [0, 10] [10, 20] [20, 25] (−∞, 25]
C18 (%) [95, 100] [75, 95] [50, 75] [0, 50]
C19 (%) [75, 100] [50, 75] [25, 50] [0, 25]
C20 (%) [85, 100] [75, 85] [60, 75] [0, 60]
C21 (%) [80, 100] [70, 80] [60, 70] [0, 60]
C22 (%) [70, 100] [50, 70] [40, 50] [0, 40]
C23 (Score) [85, 100] [75, 85] [60, 75] [0, 60]
C24 (%) [70, 100] [60, 70] [40, 60] [0, 40]
C25 (%) [90, 100] [70, 90] [50, 70] [0, 50]
C26 (%) [90, 100] [80, 90] [70, 80] [0, 70]
Table 4. Weights of Indicators for Evaluating the Implementation Effectiveness of the RCS in Henan Province.
Indicators AHP CRITIC Game Theory
C1 0.0248 0.0398 0.0318
C2 0.0097 0.0286 0.0184
C3 0.0059 0.0286 0.0164
C4 0.0059 0.0286 0.0164
C5 0.0157 0.0293 0.0220
C6 0.0381 0.0305 0.0346
C7 0.0245 0.0617 0.0417
C8 0.0454 0.0241 0.0355
C9 0.0847 0.0907 0.0875
C10 0.0454 0.0398 0.0428
C11 0.1094 0.0398 0.0772
C12 0.0379 0.0270 0.0329
C13 0.0526 0.0398 0.0467
C14 0.0083 0.0240 0.0156
C15 0.0152 0.0836 0.0469
C16 0.0163 0.0251 0.0206
C17 0.0163 0.0249 0.0203
C18 0.0546 0.0235 0.0402
C19 0.0546 0.0252 0.0410
C20 0.0346 0.0660 0.0491
C21 0.0163 0.0381 0.0264
C22 0.0297 0.0375 0.0333
C23 0.0540 0.0511 0.0526
C24 0.0286 0.0245 0.0267
C25 0.0857 0.0285 0.0592
C26 0.0857 0.0398 0.0645
Time Period Components of Connection Numbers Connection Numbers Ranking Results Evaluation Results
Level I Level II Level III Level IV
2018 0.4525 0.3539 0.1041 0.0895 0.4454 4 Level II
Set Pair Analysis Model 2019 0.5289 0.2931 0.1213 0.0567 0.5289 3 Level I
2020 0.5524 0.2934 0.1170 0.0372 0.5734 2 Level I
2021 0.6035 0.2948 0.0694 0.0324 0.6455 1 Level I
Time Period Positive Ideal Solution Distance(D+) Negative Ideal Solution Distance(D−) Composite Scores Ranking Results Evaluation Results
2018 0.4809 0.7483 0.6088 4 Level II
Entropy-based Weighted TOPSIS 2019 0.4245 0.8257 0.6604 3 Level II
2020 0.3365 0.8699 0.721 2 Level II
2021 0.2944 0.888 0.751 1 Level II
Table 6. Evaluation Results of Single Indicator for the Implementation Effectiveness of the RCS in Henan Province, 2018–2019.
Indicators Year 2018 Year 2019
Level I Level II Level III Level IV Evaluation Result Level I Level II Level III Level IV Evaluation Result
C1 1 1 −1 −1 Level I 1 0.3333 −1 −1 Level I
C2 1 −1 −1 −1 Level I 1 −1 −1 −1 Level I
C3 0.6 1 −0.6 −1 Level II 1 1 −1 −1 Level I
C4 0 1 0 −1 Level II 1 1 −1 −1 Level I
C5 1 0.6 −1 −1 Level I 1 0.3333 −1 −1 Level I
C6 1 0.7 −1 −1 Level I 1 0.5 −1 −1 Level I
C7 1 0.8667 −1 −1 Level I −1 0.91 1 −0.91 Level III
C8 1 0.93 −1 −1 Level I 1 0.925 −1 −1 Level I
C9 1 0.1489 −1 −1 Level I 1 −0.0956 −1 −1 Level I
C10 1 0.7333 −1 −1 Level I 1 0.2 −1 −1 Level I
C11 0.9207 1 −0.9207 −1 Level II 1 −1 −1 −1 Level I
C12 1 −0.4118 −1 0.4118 Level I −1 0.1176 1 −0.1176 Level III
C13 1 −0.2708 −1 −1 Level I 1 −1 −1 −1 Level I
C14 0 1 0 −1 Level II 0.44 1 −0.44 −1 Level II
C15 1 0.25 −1 −1 Level I 1 0.072 −1 −1 Level I
C16 −1 −1 −0.2114 1 Level IV −1 −1 −0.144 1 Level IV
C17 −1 0.408 1 −0.408 Level III −1 0.952 1 −0.952 Level III
C18 1 0.08 −1 −1 Level I 1 0.12 −1 −1 Level I
C19 −1 −1 0.6 1 Level IV −1 −1 0.624 1 Level IV
C20 1 −0.9613 −1 −1 Level I 1 −0.9533 −1 −1 Level I
C21 1 0.295 −1 −1 Level I 1 0.273 −1 −1 Level I
C22 −1 0.56 1 −0.56 Level III −1 0.56 1 −0.56 Level III
C23 1 0.6 −1 −1 Level I 1 0.2 −1 −1 Level I
C24 0.312 1 −0.312 −1 Level II 0.372 1 −0.372 −1 Level II
C25 −1 −1 −0.1916 1 Level IV 1 −0.84 −1 −1 Level I
C26 1 0.6 −1 −1 Level I 1 −1 −1 −1 Level I
Table 7. Evaluation Results of Single Indicator for the Implementation Effectiveness of the RCS in Henan Province, 2020–2021.
Indicators Year 2020 Year 2021
Level I Level II Level III Level IV Evaluation Result Level I Level II Level III Level IV Evaluation Result
C1 1 0.3333 −1 −1 Level I 1 0.3333 −1 −1 Level I
C2 1 −1 −1 −1 Level I 1 −1 −1 −1 Level I
C3 1 0.3333 −1 −1 Level I 1 0.3333 −1 −1 Level I
C4 1 0.3333 −1 −1 Level I 1 0.3333 −1 −1 Level I
C5 1 −0.3333 −1 −1 Level I 1 −0.4667 −1 −1 Level I
C6 1 0.2 −1 −1 Level I 1 0.2 −1 −1 Level I
C7 1 0.9033 −1 −1 Level I 1 −0.2833 −1 −1 Level I
C8 1 0.915 −1 −1 Level I 1 0.9 −1 −1 Level I
C9 1 −0.2467 −1 −1 Level I 0.814 1 −0.814 −1 Level II
C10 1 0.2 −1 −1 Level I 1 0.2 −1 −1 Level I
C11 1 −1 −1 −1 Level I 1 −1 −1 −1 Level I
C12 −0.26 1 0.26 −1 Level II 0.98 1 −0.98 −1 Level II
C13 1 −1 −1 −1 Level I 1 −1 −1 −1 Level I
C14 0.9 1 −0.9 −1 Level II 1 0.8 −1 −1 Level I
C15 −1 0.92 1 −0.92 Level III 1 −0.874 −1 −1 Level I
C16 −1 −1 −0.0697 1 Level IV −1 −1 −0.0206 1 Level IV
C17 −0.748 1 0.748 −1 Level II −0.566 1 0.566 −1 Level II
C18 1 −0.32 −1 −1 Level I 1 −0.684 −1 −1 Level I
C19 −1 −0.6 1 0.6 Level III −1 −0.328 1 0.328 Level III
C20 1 −0.992 −1 −1 Level I 1 −1 −1 −1 Level I
C21 1 0.264 −1 −1 Level I 1 0.273 −1 −1 Level I
C22 −0.781 1 0.781 −1 Level II −0.342 1 0.342 −1 Level II
C23 1 −0.333 −1 −1 Level I 1 0.3333 −1 −1 Level I
C24 0.5 1 −0.5 −1 Level II 0.672 1 −0.672 −1 Level II
C25 1 −1 −1 −1 Level I 1 −1 −1 −1 Level I
C26 1 −1 −1 −1 Level I 1 −1 −1 −1 Level I
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Liu, J.; Chen, X.; Su, L.; Li, Y.; Xu, Y.; Qi, L. Evaluation Model and Application of the Implementation Effectiveness of the River Chief System (RCS)—Taking Henan Province as an Example. Systems
2023, 11, 481. https://doi.org/10.3390/systems11090481
AMA Style
Liu J, Chen X, Su L, Li Y, Xu Y, Qi L. Evaluation Model and Application of the Implementation Effectiveness of the River Chief System (RCS)—Taking Henan Province as an Example. Systems. 2023; 11
(9):481. https://doi.org/10.3390/systems11090481
Chicago/Turabian Style
Liu, Jianting, Xuanyu Chen, Limin Su, Yanbin Li, Yanxue Xu, and Lei Qi. 2023. "Evaluation Model and Application of the Implementation Effectiveness of the River Chief System (RCS)—Taking Henan
Province as an Example" Systems 11, no. 9: 481. https://doi.org/10.3390/systems11090481
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics | {"url":"https://www.mdpi.com/2079-8954/11/9/481","timestamp":"2024-11-14T15:10:03Z","content_type":"text/html","content_length":"650304","record_id":"<urn:uuid:4bdd1d9f-7014-4417-a108-f068fbf3ff76>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00067.warc.gz"} |
Calculate the grid convergence for a spatial reference at a given point. The grid convergence is the angle between True North and Grid North at a point on a map. The grid convergence can be used to
convert a horizontal direction expressed as an azimuth in a geographic coordinate system (relative to True North) to a direction expressed as a bearing in a projected coordinate system (relative to
Grid North), and vice versa.
Sign convention
The grid convergence returned by this method is positive when Grid North lies east of True North. The following formula demonstrates how to obtain a bearing (b) from an azimuth (a) using the grid
convergence (c) returned by this method:
b = a - c
This sign convention is sometimes named the Gauss-Bomford convention.
Other notes:
• Returns 0 if the spatial reference is a geographic coordinate system
• Returns NAN if the point is outside the projection's horizon or on error
• If the point has no spatial reference, it is assumed to be in the given spatial reference
• If the point's spatial reference differs from the spatial reference given, its location is transformed automatically to the given spatial reference
The grid convergence in degrees. | {"url":"https://developers.arcgis.com/kotlin/api-reference/arcgis-maps-kotlin/com.arcgismaps.geometry/-spatial-reference/get-convergence-angle.html","timestamp":"2024-11-12T12:07:30Z","content_type":"text/html","content_length":"7811","record_id":"<urn:uuid:573d8503-9b13-4885-a45d-b156b77af5c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00092.warc.gz"} |
Smooth Infinitesimal Analysis II
Add to your list(s) Download to your calendar using vCal
If you have a question about this talk, please contact Filip Bár.
We continue where we left off last time and proof the second derivative factoring through the R-module of symmetric bilinear maps. We finish the section on derivatives in arbitrary dimensions with a
Taylor theorem and with exhibiting that homogeneity of a map implies its linearity in K-L R-modules.
Our final chapter on SIA concerns the integration axiom and its implications. The elementary integral calculus will be obtained as easily as the elementary differential calculus. Higherdimensional
integrals will be constructed via Fubini’s theorem as iterated integrals. As applications we will discuss the Fermat-Reyes axiom and a proof of the reflexivity of R^n.
The corresponding sections in Lavendhomme’s book are the second half of 1.2.3, 1.3 and p. 84/85 of section 3.3.2
This talk is part of the Synthetic Differential Geometry Seminar series.
This talk is included in these lists:
Note that ex-directory lists are not shown. | {"url":"https://talks.cam.ac.uk/talk/index/36439","timestamp":"2024-11-13T15:04:35Z","content_type":"application/xhtml+xml","content_length":"11889","record_id":"<urn:uuid:d1f4793b-4813-4f22-8527-e410db9c6951>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00014.warc.gz"} |
Difference Between Photon and Electron | Compare the Difference Between Similar Terms
The key difference between photon and electron is that photon is a packet of energy while the electron is a mass.
An electron is a subatomic particle that plays a vital role in almost everything. The photon is a conceptual packet of energy, which is very important in quantum mechanics. Electron and photon are
two concepts that developed greatly with the development of quantum mechanics. It is vital to have a proper understanding of these concepts, to understand the field of quantum mechanics, classical
mechanics and related fields properly.
1. Overview and Key Difference
2. What is Photon
3. What is Electron
4. Side by Side Comparison – Photon vs Electron in Tabular Form
5. Summary
What is Photon?
Photon is a topic that we discuss in wave mechanics. In quantum theory, we can observe that waves also have particle properties. The photon is the particle of the wave. It is a fixed amount of energy
depending only on the frequency of the wave. We can give the energy of the photon by the equation E = hf, where E is the energy of the photon, h is the Plank constant, and f is the frequency of the
We can consider photons as packets of energy. With the development of relativity, scientists discovered that waves also have a mass. It is because waves behave as particles on interactions with
matter. However, the rest mass of a photon is zero. When a photon is moving with the speed of light, it has a relativistic mass of E/C^2, where E is the energy of the photon and C is the speed of
light in a vacuum.
What is Electron?
An atom consists of a nucleus that has a positive charge, and it contains almost all of the mass and electrons orbiting around the nucleus. These electrons have a negative charge, and they contain a
very small amount of mass compared to the nucleus. An electron has a rest mass of 9.11 x 10^-31 kilograms.
The electron falls into the subatomic particle family fermions. Moreover, they have half-integer values as spin. The spin is a property describing the angular momentum of the electron. The classical
theory of electron described the electron as a particle orbiting around the nucleus. However, with the development of quantum mechanics, we can see that the electron also can behave as a wave.
Further, the electron has specific energy levels. Now, we can define the orbit of the electron as the probability function of finding the electron around the nucleus. Scientists conclude that
electron behaves as both a wave and a particle. When we consider a travelling electron, some of the wave properties become prominent than the particle properties. When we consider the interactions,
particle properties are more prominent than the wave properties. The electron has a charge of – 1.602 x 10^-19 C. It is the smallest amount of charge any system can obtain. Moreover, all other
charges are multiplications of the unit charge of the electron.
What is the Difference Between Photon and Electron?
Photon is a type of elementary particle which acts as a carrier of energy, but the electron is a subatomic particle which occurs in all the atoms. The key difference between photon and electron is
that the photon is a packet of energy while the electron is a mass. Moreover, the photon does not have a rest mass, but an electron has a rest mass. As another significant difference between photon
and electron, the photon can go at the speed of light, but for an electron, it is theoretically impossible to obtain the speed of light.
Moreover, a further difference between photon and electron is that the photon displays more wave properties whereas the electron displays more particle properties. Below is an infographic on the
difference between photon and electron.
Summary – Photon vs Electron
Photon is an elementary particle, and we can describe it as a packet of energy while the electron is a subatomic particle having a mass. Therefore, we can say that the key difference between photon
and electron is that the photon is a packet of energy while the electron is a mass.
1. Jones, Andrew Zimmerman. “What Is a Photon in Physics?” ThoughtCo, Sep. 3, 2018. Available here
2. Britannica, The Editors of Encyclopaedia. “Photon.” Encyclopædia Britannica, Encyclopædia Britannica, Inc., 7 Feb. 2018. Available here
Image Courtesy:
1.”The Photon”By Illusterati (CC BY-SA 3.0) via Commons Wikimedia
2.”2750576″ by sjeiti (CC0) via pixabay
1. Christopher says
Electron rest mass is 9.11 x 10^-31 (not 10^-3)!!!
□ Write4U says
I believe that you meant to say; 9.11 x 10^-31 (not 9.11 x 10^31 as stated in the OP)
This is only intended as a literal correction of your correction….:)
Leave a Reply Cancel reply | {"url":"https://www.differencebetween.com/difference-between-photon-and-vs-electron/","timestamp":"2024-11-02T14:31:39Z","content_type":"text/html","content_length":"69541","record_id":"<urn:uuid:37d4b06d-8285-4359-99e9-49c7a306dbe1>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00183.warc.gz"} |
Using MKL DataFitting cubic spline routines
01-10-2022 02:02 AM
Dear reader,
I'm in the process of evaluating whether it is a good idea to switch from IMSL to MKL. Since I have been assisted quite well with my previous queries regarding MKL routines, I will give it another
try with my next problem:
I'm trying to replace the IMSL routine DCSIEZ(..) which does the most basic cubic spline interpolation using the "not-a-knot" boundary condition. I'm trying to mimic this routine using the MKL
DataFitting routines. To this purpose I wrote a little wrapper routine (MKL_C_SPLINE(..)) that exposes exactly the same input/output as my DCSIEZ() routine. In this routine I use a modified version
of your cubic spline-based C example as presented in the MKL reference manual. You find the routine at the bottom of this mail. When I replace DCSIEZ() by MKL_C_SPLINE(), the code still compiles and
links without any problems. However, at run-time the code crashes. I have a strong impression that this might have something to do with a possible data type conflict. I'm using the standard C data
types (int, float and double) but your DataFitting example uses generic types (e.g., MKL_INT). Frankly, I do not know/understand how this works exactly, even after reading some of the documentation.
Can you please have a look at my wrapper routine below and give some explanations as to what I'm doing wrong??? Your help is very much appreciated!
Best rgrds,
#include "mkl.h"
/* This routine replaces IMSL cubic spline routine DCSIEZ()
This (wrapper) routine calls the appropriate MKL DataFitting
routines as explained in the MKL reference manual.
#define SPLINE_ORDER DF_PP_CUBIC /* A cubic spline to construct */
void MKL_C_SPLINE(int *N, double *F_TMP, double *S_TMP, int *N_OUT, double *F_NEW, double *S_NEW)
int NX, NSITE;
int status; /* Status of a Data Fitting operation */
double *scoeff;
DFTaskPtr task; /* Data Fitting operations are task based */
/* Parameters describing the partition */
MKL_INT nx; /* The size of partition x */
/*double x[NX]; Partition x; This refers to the input array F_TMP of size N */
MKL_INT xhint; /* Additional information about the structure of breakpoints */
/* Parameters describing the function */
MKL_INT ny; /* Function dimension */
/*double y[NX]; Function values at the breakpoints; This refers to the input array S_TMP of size N */
MKL_INT yhint; /* Additional information about the function */
/* Parameters describing the spline */
MKL_INT s_order; /* Spline order */
MKL_INT s_type; /* Spline type */
MKL_INT ic_type; /* Type of internal conditions */
double* ic; /* Array of internal conditions */
MKL_INT bc_type; /* Type of boundary conditions */
double* bc; /* Array of boundary conditions */
/*double scoeff[(NX - 1)* SPLINE_ORDER]; Array of spline coefficients */
MKL_INT scoeffhint; /* Additional information about the coefficients */
/* Parameters describing interpolation computations */
MKL_INT nsite; /* Number of interpolation sites */
/*double site[NSITE]; Array of interpolation sites */
MKL_INT sitehint; /* Additional information about the structure of
interpolation sites */
MKL_INT ndorder, dorder; /* Parameters defining the type of interpolation */
double* datahint; /* Additional information on partition and interpolation sites */
/* double r[NSITE]; Array of interpolation results */
MKL_INT rhint; /* Additional information on the structure of the results */
MKL_INT* cell; /* Array of cell indices */
NX = *N;
NSITE = *N_OUT;
scoeff = (double*)calloc((NX - 1)* SPLINE_ORDER, sizeof(double));
/* Initialize the partition (The partition is input F_TMP) */
nx = NX;
xhint = DF_NON_UNIFORM_PARTITION; /* The partition is non-uniform. */
/* Initialize the function */
ny = 1; /* The function is scalar. */
/* Set function values (The function is input S_TMP) */
yhint = DF_NO_HINT; /* No additional information about the function is provided. */
/* Create a Data Fitting task */
status = dfdNewTask1D(&task, nx, F_TMP, xhint, ny, S_TMP, yhint);
/* Check the Data Fitting operation status (TO BE IMPLEMENTED!!) */
/* Initialize spline parameters */
s_order = DF_PP_CUBIC; /* Spline is of the fourth order (cubic spline). */
s_type = DF_PP_BESSEL; /* Spline is of the Bessel cubic type. */
/* Define internal conditions for cubic spline construction (none in this example) */
ic_type = DF_NO_IC;
ic = NULL;
/* Use not-a-knot boundary conditions. In this case, the is first and the last
interior breakpoints are inactive, no additional values are provided. */
bc_type = DF_BC_NOT_A_KNOT;
bc = NULL;
scoeffhint = DF_NO_HINT; /* No additional information about the spline. */
/* Set spline parameters in the Data Fitting task */
status = dfdEditPPSpline1D(task, s_order, s_type, bc_type, bc, ic_type,
ic, scoeff, scoeffhint);
/* Check the Data Fitting operation status (TO BE IMPLEMENTED!!) */
/* Use a standard method to construct a cubic Bessel spline: */
/* Pi(x) = ci,0 + ci,1(x - xi) + ci,2(x - xi)2 + ci,3(x - xi)3, */
/* The library packs spline coefficients to array scoeff: */
/* scoeff[4*i+0] = ci,0, scoef[4*i+1] = ci,1, */
/* scoeff[4*i+2] = ci,2, scoef[4*i+1] = ci,3, */
/* i=0,...,N-2 */
status = dfdConstruct1D(task, DF_PP_SPLINE, DF_METHOD_STD);
/* Check the Data Fitting operation status (TO BE IMPLEMENTED!!) */
/* Initialize interpolation parameters */
nsite = NSITE;
/* Set site values, i.e., values at which spline needs to be evaluated. These are contained in F_NEW (input of this routine) */
sitehint = DF_NON_UNIFORM_PARTITION; /* Partition of sites is non-uniform */
/* Request to compute spline values */
ndorder = 1;
dorder = 1;
datahint = DF_NO_APRIORI_INFO; /* No additional information about breakpoints or
sites is provided. */
rhint = DF_MATRIX_STORAGE_ROWS; /* The library packs interpolation results
in row-major format. */
cell = NULL; /* Cell indices are not required. */
/* Solve interpolation problem using the default method: compute the spline values
at the points site(i), i=0,..., nsite-1 and place the results to array r */
status = dfdInterpolate1D(task, DF_INTERP, DF_METHOD_STD, nsite, F_NEW,
sitehint, ndorder, &dorder, datahint, S_NEW,
rhint, cell);
/* Check Data Fitting operation status */
/* De-allocate Data Fitting task resources */
status = dfDeleteTask(&task);
/* Check Data Fitting operation status(TO BE IMPLEMENTED!!) */
01-18-2022 08:31 PM
01-10-2022 07:53 PM
01-11-2022 03:06 AM
01-11-2022 06:05 AM
01-11-2022 06:20 AM
01-11-2022 06:35 AM
01-11-2022 11:35 PM
01-12-2022 12:39 AM
01-12-2022 05:28 AM
01-12-2022 07:17 AM
01-13-2022 01:02 AM
01-13-2022 02:01 AM
01-13-2022 02:50 AM
01-13-2022 05:08 AM
01-13-2022 05:52 AM
01-12-2022 05:45 AM
01-14-2022 09:54 PM
01-18-2022 08:31 PM | {"url":"https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Using-MKL-DataFitting-cubic-spline-routines/td-p/1349970","timestamp":"2024-11-05T18:42:39Z","content_type":"text/html","content_length":"622039","record_id":"<urn:uuid:240aa4a2-20ac-4af5-b541-e50a408d56c5>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00356.warc.gz"} |
How to estimate a smooth transition regression model in time series modeling? | Hire Some To Take My Statistics Exam
How to estimate a smooth transition regression model in time series modeling? Learning about time series models is a powerful tool to help understanding the mechanism of interest during a time series
analysis. In particular, it is well known that in many systems there is a range of regression models that have different weights to balance the effects of each component of the time series. This
paper focuses on these types of models. Given a model in it’s original format, you can now use one or more of these models as: Example 1 As you can see, there are different regressor weights used in
time series models for modelling the time series in this example. We can say that one regressor weight is “1”, where “1” means being logged in and “1” means not logging in with the time series.
Example 2 Suppose the time series sample has a subset of 24,000 realisations, which are then used to model the data in this example. Since there are six types of visit this site series that appear in
the data, you can use the data modeler to log the values over each 24,000 data set in order to compute a one dimensional linear least square regression model. This is done using the normalisation
procedure standard: for (int x; for (map-name (intersection).rwdGraph (intersection$x)) x I (intersection$x x); k < 1 ; for y (to-z (map-name:intersection:))) { y v (map-name:log($y); ((v -
data-modeler-y ~ v)); v ~ v ^ v | v²) v = ((v - input$v); ((v - input$v) ~ (v - v)); ((v ^ v) (g) 3, (v - data-modeler-v) 3.14 ))) } Since there are six types of time series, this is bothHow to
estimate a smooth transition regression model in time series modeling? Good research questions to fit model estimation algorithms. The data of the European National Hospital Network illustrates that
traditional regression models do not predict time-varying probability at a certain index factor. The transition probability has a frequency error window of 2 Hz … 32 32 32 32 One example is that of
finding those components (e.g., temperature and pH) that correlate with risk factor, and they will interact with the latent model. This work is used in developing the “Linear Regression model”
algorithm to estimate time course effects of a time-varying probabilistic hop over to these guys The algorithm is used with a regression equation whose specification relies upon a latent,
time-varying distribution. Once the model has been fitted via these steps, > use it that your method looks pretty good, but you only need to guess those components (e.g., K and K/m for pH) and one of
them should have high degree of predictibility, something I don’t think the model should be fitted to. Keep in mind if you have bad weather or a poorly adapted climate model, you can’t look at that
Paymetodoyourhomework Reddit
My question is “how to estimate a smooth transition model like exponential for a discrete population model”. For example, a time-varying distribution, say, k, there’s a transition probability with
high degree of predictibility, which means k/m has high degree of predictability. You want a certain component in the predictive equations; which is probably the problem. go to website lot of the
time-injecting algorithm (as opposed to linear regression) that I am having trouble with uses these mathematical principles. They are not very rigorous. It was not known that they’re a valid
approach. You could say that your model has predictive properties like smoothness, stability, etc (as opposed to predictiveness or linearization)How to estimate a smooth transition regression model
in time series modeling? A time series model has been established to describe the temporal properties of complex biological events. It is commonly referred to as “splines”. Methods for estimating
time series models depend upon whether the output log-likelihood function is smooth or not. The chosen range includes smooth transitions, logistic transitions, and multiple log-likelihood structures.
A number of approaches exist to estimate smooth transitions in time series models. One simple method relies upon the growth of the regression model over time. However, this fails due to the small
error associated with estimating a smooth transition. Step 1: Estimate smooth transition. Many researchers estimate smooth transitions in real time as time series models with four moving parts and
exponential smoothing (e.g., the transition on a additional hints curve and the model smoothing process is called a gamma factored model). Any such model will typically be expressed in quadrature in
terms of sample values for each set of transformed time series. The average of the sample values is then estimated from the transformation coefficient tensor. The process of estimating a transition
can be expressed in cubic or discrete polynomials as follows read this the article “Optimum of Integration Using the Continuum Model”, Fourth Congress of Mathematical Engineering No.
Do My Homework For Me Online
1, Academic Press, Hoboken, USA, 2006): (t) s x ( ) = -m (( ) – (p) t)(s x ( ) + (p_m(x)t) ( -t)) (t) x ( ) = 2 ( ( ) – ( ) ) = 2 (( ( ) – ( ) ) / ( (t) ).(1) If for a smooth transition t=t(1) then a
cubic polynomial with a double integral of 1 is given as the parameter estimator for the transition equation. A discrete polynomial estimator of 0.09 will satisfy the condition for smooth transition
(see formula (4), (3)) but will | {"url":"https://hireforstatisticsexam.com/how-to-estimate-a-smooth-transition-regression-model-in-time-series-modeling-2","timestamp":"2024-11-11T16:17:04Z","content_type":"text/html","content_length":"168261","record_id":"<urn:uuid:6005fd26-a39f-4262-9027-39f57dddd8bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00550.warc.gz"} |
Central Tendency
In statistics, the central tendency is the descriptive summary of a data set. Through the single value from the dataset, it reflects the centre of the data distribution. Moreover, it does not provide
information regarding individual data from the dataset, where it gives a summary of the dataset. Generally, the central tendency of a dataset can be defined using some of the measures in statistics.
The central tendency is stated as the statistical measure that represents the single value of the entire distribution or a dataset. It aims to provide an accurate description of the entire data in
the distribution.
Measures of Central Tendency
The central tendency of the dataset can be found out using the three important measures namely mean, median and mode.
The mean represents the average value of the dataset. It can be calculated as the sum of all the values in the dataset divided by the number of values. In general, it is considered as the arithmetic
mean. Some other measures of mean used to find the central tendency are as follows:
• Geometric Mean
• Harmonic Mean
• Weighted Mean
It is observed that if all the values in the dataset are the same, then all geometric, arithmetic and harmonic mean values are the same. If there is variability in the data, then the mean value
differs. Calculating the mean value is completely easy. The formula to calculate the mean value is given as
The histogram given below shows that the mean value of symmetric continuous data and the skewed continuous data.
In symmetric data distribution, the mean value is located accurately at the centre. But in the skewed continuous data distribution, the extreme values in the extended tail pull the mean value away
from the centre. So it is recommended that the mean can be used for the symmetric distributions.
Median is the middle value of the dataset in which the dataset is arranged in the ascending order or in descending order. When the dataset contains an even number of values, then the median value of
the dataset can be found by taking the mean of the middle two values.
Consider the given dataset with the odd number of observations arranged in descending order – 23, 21, 18, 16, 15, 13, 12, 10, 9, 7, 6, 5, and 2
Here 12 is the middle or median number that has 6 values above it and 6 values below it.
Now, consider another example with an even number of observations that are arranged in descending order – 40, 38, 35, 33, 32, 30, 29, 27, 26, 24, 23, 22, 19, and 17
When you look at the given dataset, the two middle values obtained are 27 and 29.
Now, find out the mean value for these two numbers.
i.e.,(27+29)/2 =28
Therefore, the median for the given data distribution is 28.
The mode represents the frequently occurring value in the dataset. Sometimes the dataset may contain multiple modes and in some cases, it does not contain any mode at all.
Consider the given dataset 5, 4, 2, 3, 2, 1, 5, 4, 5
Since the mode represents the most common value. Hence, the most frequently repeated value in the given dataset is 5.
Based on the properties of the data, the measures of central tendency are selected.
• If you have a symmetrical distribution of continuous data, all the three measures of central tendency hold good. But most of the times, the analyst uses the mean because it involves all the
values in the distribution or dataset.
• If you have skewed distribution, the best measure of finding the central tendency is the median.
• If you have the original data, then both the median and mode are the best choice of measuring the central tendency.
• If you have categorical data, the mode is the best choice to find the central tendency.
Video Lesson
Measure of Central Tendency
Measures of Central Tendency and Dispersion
The central tendency measure is defined as the number used to represent the center or middle of a set of data values. The three commonly used measures of central tendency are the mean, median, and
A statistic that tells us how the data values are dispersed or spread out is called the measure of dispersion. A simple measure of dispersion is the range. The range is equivalent to the difference
between the highest and least data values. Another measure of dispersion is the standard deviation, representing the expected difference (or deviation) among a data value and the mean.
For more information on statistics, register with BYJU’S – The Learning App and also watch videos to learn with ease.
Frequently Asked Questions – FAQs
What are the 4 measures of central tendency?
The four measures of central tendency are mean, median, mode and the midrange. Here, mid-range or mid-extreme of a set of statistical data values is the arithmetic mean of the maximum and minimum
values in a data set.
What are central tendency examples?
Central tendency is a statistic that represents the single value of the entire population or a dataset. Some of the important examples of central tendency include mode, median, arithmetic mean and
geometric mean, etc.
How do you find the central tendency?
The central tendency can be found using the formulas of mean, median or mode in most of the cases. As we know, mean is the average of a given data set, median is the middlemost data value and the
mode represents the most frequently occurring data value in the set.
What is the purpose of central tendency?
The purpose of the central tendency is to provide an exact representation of the entire collected data. It is often defined as the single value that is representative of the data.
What is the difference between mean and median?
Mean is the average (or arithmetic mean) of the values of a data set, whereas median is the middlemost value of the data.
Which is the best central tendency measure?
The mean is considered the best measure of central tendency to use if the data distribution is continuous and symmetrical. | {"url":"https://mathlake.com/Central-Tendency","timestamp":"2024-11-06T02:04:20Z","content_type":"text/html","content_length":"16427","record_id":"<urn:uuid:c004e93d-c2e4-41e4-98fb-b778aaec3668>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00819.warc.gz"} |
Formula regarding parent / child rows
I'm trying to create a formula to automatically populate a column based on if the row's primary column cell is a "parent" or "child."
The column I want to automate is titled "Level." If the primary column cell at that row is a "parent", I want the "Level" column cell at that row to automatically populate "1." If it's a "child", I
want it to automatically populate "2."
Is there a formula for this?
Best Answer
• Hey @Kaitlyn Carroll
Sorry, I missed you had replied. Try this
=IF(OR(COUNT(ANCESTORS([your primary column]@row)) = 0, COUNT(CHILDREN([your primary column]@row)) > 0), 1, IF(COUNT(ANCESTORS([your primary column]@row)) > 0, 2))
Insert the real name of your primary column if the open parentheses give you a circular error
Does this work for you
• Hey @Kaitlyn Carroll
This formula will indicate if a row is a Parent or Non-Parent
=IF(COUNT(CHILDREN())>0, 1, 2)
If your sheet also has rows that are neither Parents or Children, it will also indicate these rows as a 2
If this is a concern, you must add an additional condition
=IF(AND(COUNT(ANCESTORS())=0, COUNT(CHILDREN())>0), 1, IF(AND(COUNT(ANCESTORS())>0, COUNT(CHILDREN())=0), 2))
Will either of these work for you?
• Hi Kelly,
Thank you for the formulas!
The first formula would be great. However, as you stated, if it's neither a Parent or Child it's marked as "2" and that won't work.
When I use the second formula you shared, it gives me the following errors:
Any ideas? Also, I noticed the cell is blank when it's not a "Parent." I would need those "non-parent" rows to be "1" still. Is that possible?
• Hey @Kaitlyn Carroll
Sorry, I missed you had replied. Try this
=IF(OR(COUNT(ANCESTORS([your primary column]@row)) = 0, COUNT(CHILDREN([your primary column]@row)) > 0), 1, IF(COUNT(ANCESTORS([your primary column]@row)) > 0, 2))
Insert the real name of your primary column if the open parentheses give you a circular error
Does this work for you
• That worked! I'm so excited!! Thank you so much!
• So I would like to use this formula to provide info at 4 levels of the hierachy, see example below. I've messed with this formula for the last 30 minutes and now have completely confused myself.
Any ideas?
• I have a need for the same exact formula type with multiple level of hierarchy
Sherry Fox
Data Science & Reporting Specialist | PA Performance & Data Insights
UnitedHealth Group | OptumRx
EAP | Mobilizer | Automagician | Superstar | Community Champion
• @Sherry Fox , this has worked for my needs:
=IF(COUNT(ANCESTORS([PRIMARY]@row)) > 1, COUNT(ANCESTORS([PRIMARY]@row)), IF(COUNT(ANCESTORS([PRIMARY]@row)) = 1, 1, 0))
Change the [PRIMARY] to your primary column. Let me know if this works for you. It's not a complex formula, but I was just looking for calculating the top level to zero, and then primary items to
be level 1, secondary as 2, and continuing.
Help Article Resources | {"url":"https://community.smartsheet.com/discussion/101018/formula-regarding-parent-child-rows","timestamp":"2024-11-02T20:25:10Z","content_type":"text/html","content_length":"456033","record_id":"<urn:uuid:a7f43601-c577-4bf1-ada0-be84872cec5e>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00024.warc.gz"} |
Design robust, configurable, parallel gates for large trapped-ion arrays | Trapped-ion quantum computing | Apply | Boulder Opal | Q-CTRL Documentation
Design robust, configurable, parallel gates for large trapped-ion arrays
Obtaining control solutions for parallel and specifiable multi-qubit gates using Boulder Opal pulses
Boulder Opal provides a broad toolkit enabling the generation of high-fidelity entangling gates with trapped ions. In particular, it is possible to engineer parallel optimized gates with arbitrary
user-defined phases on large ion arrays. This comes from adding independent addressing and individual tuning of control parameters in order to simultaneously manipulate the phase space trajectory of
multiple motional modes.
In this notebook, we demonstrate control solutions for trapped ions robust to frequency-detuning and pulse-timing errors. We will cover:
• Creating optimized, robust, parallel Mølmer–Sørensen gates over multiple ions
• Calculating phase space trajectories for multiple modes
• Simulating entangling phase accumulation in the gate between arbitrary ion pairs
• Validating performance by calculating quasi-static noise susceptibility
Ultimately we will demonstrate how to parallelize gates in order to speed-up quantum circuit execution in trapped-ion quantum computers, while adding robustness against common hardware error sources.
For further discussion see Numeric optimization for configurable, parallel, error-robust entangling gates in large ion registers, published as Advanced Quantum Technology 3, 11 (2020).
Using Boulder Opal, you can obtain control solutions for configurable multi-qubit gates. These gates can also be performed simultaneously (in parallel). In this example, you'll learn to perform
simultaneous two-qubit and three-qubit entangling operations.
import numpy as np
import matplotlib.pyplot as plt
import qctrlvisualizer as qv
import boulderopal as bo
We start by specifying the ion, trap, and laser characteristics, and obtaining the Lamb–Dicke parameters and relative detunings. These will be used in the phase and displacement calculations for
ions. For more information, see the How to optimize error-robust Mølmer–Sørensen gates for trapped ions user guide.
# Define trap and laser characteristics.
ion_count = 5
center_of_mass_frequencies = [1.6e6, 1.5e6, 0.3e6]
wavevector = [(2 * np.pi) / 355e-9, (2 * np.pi) / 355e-9, 0]
maximum_rabi_rate = 2 * np.pi * 100e3
laser_detuning = 1.6e6 + 4.7e3
# Collect Lamb–Dicke parameters as an array of shape [<axis>, <collective_mode>, <ion>]
# and relative detunings as an array of shape [<axis>, <collective_mode>].
ion_chain_properties = bo.ions.obtain_ion_chain_properties(
atomic_mass=171, # Yb ions
lamb_dicke_parameters = ion_chain_properties["lamb_dicke_parameters"]
relative_detunings = ion_chain_properties["relative_detunings"]
Your task (action_id="1827932") is queued.
Your task (action_id="1827932") has started.
Your task (action_id="1827932") has completed.
To demonstrate simultaneous two- and three-qubit gates, configure the target operation by specifying target relative phases between each ion pair. Element ($j$, $k$) of the target phase array is the
relative phase between ions $j$ and $k$, with $k<j$. In the cell below, the target relative phase for the ion pair (1, 0) is set to $\pi/4$ for maximal two-qubit entanglement. For the three-qubit
gate, the same relative phase $\psi = \pi / 4$ is specified for ion pairs (3, 2), (4, 2) and (4, 3).
# Operation duration.
duration = 3e-4 # s
# Target phases: element (j,k) is the relative entanglement for ions j and k (k<j).
psi = np.pi / 4
target = np.array(
[0.0, 0.0, 0.0, 0.0, 0.0],
[psi, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, psi, 0.0, 0.0],
[0.0, 0.0, psi, psi, 0.0],
To control the system, consider separately-tunable complex drives $\gamma_j (t) = \Omega_j e^{i \phi_j}$ that address each ion $j$ in the trap. The drives are piecewise-constant with amplitude and
phase modulation. In the optimization, the gate infidelity quantifies the solution performance.
def reflect_signal(graph, moduli, phases, even_total_segment_count):
Reflect a drive signal about its temporal midpoint
and return the combined signal.
(Milne et al., Phys. Rev. Applied, 2020)
if even_total_segment_count:
moduli_refl = graph.reverse(moduli, [0])
phases_refl = graph.reverse(phases, [0])
moduli_refl = graph.reverse(moduli[:-1], [0])
phases_refl = graph.reverse(phases[:-1], [0])
moduli_comb = graph.concatenate([moduli, moduli_refl], 0)
phases_comb = graph.concatenate([phases, 2 * phases[-1] - phases_refl], 0)
return moduli_comb, phases_comb
# Helper function for optimization with different drives for each ion.
def optimization_with_different_drives(
graph = bo.Graph()
# Specification of free variables and combination with reflected signal.
free_segment_count = segment_count
if robust:
free_segment_count = (segment_count + 1) // 2
drives = []
for drive_name in drive_names:
# The drive moduli and phases are free variables here.
# They could also be restricted or fixed.
moduli = graph.optimization_variable(
count=free_segment_count, lower_bound=0, upper_bound=maximum_rabi_rate
phases = graph.optimization_variable(
upper_bound=2 * np.pi,
if robust:
moduli, phases = reflect_signal(
graph, moduli, phases, segment_count % 2 == 0
moduli=moduli, phases=phases, duration=duration, name=drive_name
ms_phases = graph.ions.ms_phases(
ms_displacements = graph.ions.ms_displacements(
infidelity = graph.ions.ms_infidelity(
if not robust:
cost = infidelity + 0
robust_cost_term = graph.ions.ms_dephasing_robust_cost(
cost = infidelity + robust_cost_term
cost.name = "cost"
return bo.run_optimization(
output_node_names=["displacements", "infidelity", "phases"] + drive_names,
# Helper function for plotting phase space trajectories.
def plot_phase_space_trajectories(total_displacement):
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
plot_range = 1.1 * np.max(np.abs(total_displacement))
fig.suptitle("Phase space trajectories")
for k in range(2):
for mode in range(ion_count):
np.real(total_displacement[:, k, mode]),
np.imag(total_displacement[:, k, mode]),
label=f"mode {mode % ion_count}",
np.real(total_displacement[-1, k, mode]),
np.imag(total_displacement[-1, k, mode]),
axs[k].set_xlim(-plot_range, plot_range)
axs[k].set_ylim(-plot_range, plot_range)
hs, ls = axs[0].get_legend_handles_labels()
fig.legend(handles=hs, labels=ls, loc="center left", bbox_to_anchor=(1, 0.5))
segment_count = 128
sample_times = np.linspace(0, duration, segment_count)
drive_names = [f"ion_drive_{number}" for number in range(ion_count)]
result_optimal = optimization_with_different_drives(
print(f"Cost = Infidelity = {result_optimal['output']['infidelity']['value']:.3e}")
qv.plot_controls({"$\\gamma$": result_optimal["output"][drive_names[0]]})
Your task (action_id="1827945") is queued.
Your task (action_id="1827945") has started.
Your task (action_id="1827945") has completed.
Cost = Infidelity = 9.473e-09
The figure displays the optimized pulse modulus and phase dynamics for ion 0.
To obtain robust controls, we impose symmetry on each drive and optimize such that the centre of mass of each mode's trajectory is at zero. This is the same procedure as for the two-qubit case
described here. In this case, the solution performance is given by the sum of the gate infidelity and the robustness cost. The robust option is enabled by setting the flag robust to True.
result_robust = optimization_with_different_drives(
print(f"Cost = {result_robust['cost']:.3e}")
print(f"Infidelity = {result_robust['output']['infidelity']['value']:.3e}")
qv.plot_controls({"$\\gamma$": result_robust["output"][drive_names[0]]})
Your task (action_id="1827960") is queued.
Your task (action_id="1827960") has started.
Your task (action_id="1827960") has completed.
Cost = 7.460e-08
Infidelity = 9.690e-09
The above figure displays the dynamics of the modulus and angle of the robust optimized pulse for ion 0. The symmetry in time of the modulus values can be observed.
Next, we visualize the trajectory of the center of a coherent state in (rotating) optical phase space for each mode. Note that there are $3N$ modes in general, where $N$ is the ion number. We address
only the $2N$ transverse modes in the trap. The closure of these trajectories is a necessary condition for an effective operation. We first examine the (non-robust) standard optimized control,
followed by the robust optimized control.
# Sum over ion index to obtain total displacement of the mode.
np.sum(result_optimal["output"]["displacements"]["value"], axis=3)
The black cross marks the final displacement for each mode. These are overlapping at zero, indicating no residual state-motional entanglement and no motional heating caused by the operations.
Now we visualize the phase space trajectories for the robust optimized control.
# Sum over ion index to obtain total displacement of the mode.
np.sum(result_robust["output"]["displacements"]["value"], axis=3)
Again, the black crosses at the origin indicate no residual state-motional entanglement, which arises from satisfying the center of mass and symmetry conditions.
We can also obtain the phase accumulation for each pair of ions. The target phases for each pair of ions should be achieved at the end of a successful operation. First consider the standard optimized
times = sample_times * 1e3
phases = result_optimal["output"]["phases"]["value"]
fig, axs = plt.subplots(2, 1, figsize=(10, 10))
fig.suptitle("Relative phase dynamics")
target_phases = [psi, 0]
target_phase_names = ["\\pi/4", "0"]
for k in range(2):
target_phase = target_phases[k]
for ion1 in range(ion_count):
for ion2 in range(ion1):
if target[ion1][ion2] == target_phase:
axs[k].plot(times, phases[:, ion1, ion2], label=f"{ion2, ion1}")
axs[k].set_yticks([0, psi])
axs[k].set_yticklabels([0, "$\\pi/4$"])
axs[k].set_ylim((-0.1, 0.9))
axs[k].set_ylabel("Relative phase")
axs[k].legend(title="Ion pair")
axs[k].plot([0, times[-1]], [target_phase, target_phase], "k--")
axs[k].set_title(f"Pairs with target phase $\\psi = {target\_phase\_names[k]}$")
axs[1].set_xlabel("Time (ms)")
These figures display the relative phase dynamics for the duration of the gate operation. For clarity, different plots display the ion pairs with different target relative phases. The two plots are
for target relative phases of $\pi/4$ and 0, marked with black horizontal dashed lines.
Next consider the phase accumulation for the robust optimized control.
times = sample_times * 1e3
phases = result_robust["output"]["phases"]["value"]
fig, axs = plt.subplots(2, 1, figsize=(10, 9))
fig.suptitle("Relative phase dynamics")
target_phases = [psi, 0]
target_phase_names = ["\\pi/4", "0"]
for k in range(2):
target_phase = target_phases[k]
for ion1 in range(ion_count):
for ion2 in range(ion1):
if target[ion1][ion2] == target_phase:
axs[k].plot(times, phases[:, ion1, ion2], label=f"{ion2, ion1}")
axs[k].set_yticks([0, psi])
axs[k].set_yticklabels([0, "$\\pi/4$"])
axs[k].set_ylim((-0.1, 0.9))
axs[k].set_ylabel("Relative phase")
axs[k].legend(title="Ion pair")
axs[k].plot([0, times[-1]], [target_phase, target_phase], "k--")
axs[k].set_title(f"Pairs with target phase $\\psi = {target\_phase\_names[k]}$")
axs[1].set_xlabel("Time (ms)")
As above, these figures display the relative phase dynamics for the duration of the gate operation. The two plots are for target relative phases of $\pi/4$ and 0, and these values are marked with
black horizontal dashed lines.
The optimized drives produce nontrivial relative phase dynamics for each ion pair since the ions are individually addressed. The entangling phase is mediated by coupling the ion pairs to shared
motional modes, and the optimized drives (both standard and robust) provide the necessary degrees of freedom to achieve the different target relative phases for each ion pair.
We can assess the robustness of the 'Robust' optimized pulses in comparison to the 'Standard' optimized pulses using 1D quasi-static scans.
First, we calculate a scan of scaling factors for the pulse timings. The scaling factors are applied uniformly: they scale the timing for the entire operation by the same factor.
scan_point_count = 21
optimal_infidelity_names = [
f"infidelity_{number}" for number in range(scan_point_count)
robust_infidelity_names = [
f"robust_infidelity_{number}" for number in range(scan_point_count)
max_timing_shift = 0.004
time_shifts = np.linspace(1 - max_timing_shift, 1 + max_timing_shift, scan_point_count)
graph = bo.Graph()
for result, infidelity_names in zip(
[result_optimal, result_robust], [optimal_infidelity_names, robust_infidelity_names]
for time_shift, infidelity_name in zip(time_shifts, infidelity_names):
drives = [
durations=time_shift * result["output"][drive_name]["durations"],
for drive_name in drive_names
ms_phases = graph.ions.ms_phases(
ms_displacements = graph.ions.ms_displacements(
ms_infidelity = graph.ions.ms_infidelity(
timing_scan = bo.execute_graph(
graph, output_node_names=optimal_infidelity_names + robust_infidelity_names
optimal_infidelities_timing = [
timing_scan["output"][name]["value"] for name in optimal_infidelity_names
robust_infidelities_timing = [
timing_scan["output"][name]["value"] for name in robust_infidelity_names
Your task (action_id="1827973") is queued.
Your task (action_id="1827973") has started.
Your task (action_id="1827973") has completed.
Next, calculate the robustness of the optimized pulses using a systematic scan of the relative detunings (which corresponds to shifting the laser detuning).
max_dephasing_shift = 0.04 * np.min(np.abs(relative_detunings))
dephasing_shifts = np.linspace(
-max_dephasing_shift, max_dephasing_shift, scan_point_count
graph = bo.Graph()
for result, infidelity_names in zip(
[result_optimal, result_robust], [optimal_infidelity_names, robust_infidelity_names]
for dephasing_shift, infidelity_name in zip(dephasing_shifts, infidelity_names):
drives = [
graph.pwc(**result["output"][drive_name]) for drive_name in drive_names
ms_phases = graph.ions.ms_phases(
relative_detunings=relative_detunings + dephasing_shift,
ms_displacements = graph.ions.ms_displacements(
relative_detunings=relative_detunings + dephasing_shift,
ms_infidelity = graph.ions.ms_infidelity(
dephasing_scan = bo.execute_graph(
graph, output_node_names=optimal_infidelity_names + robust_infidelity_names
optimal_infidelities_dephasing = [
dephasing_scan["output"][name]["value"] for name in optimal_infidelity_names
robust_infidelities_dephasing = [
dephasing_scan["output"][name]["value"] for name in robust_infidelity_names
Your task (action_id="1827976") has started.
Your task (action_id="1827976") has completed.
fig, axs = plt.subplots(1, 2, figsize=(15, 5))
fig.suptitle("Quasi-static scans", y=1.1)
axs[0].set_title("Timing noise")
axs[0].plot(time_shifts, robust_infidelities_timing, label="Robust")
axs[0].plot(time_shifts, optimal_infidelities_timing, label="Standard")
axs[0].set_xlabel("Pulse timing scaling factor")
axs[1].set_title("Dephasing noise")
axs[1].plot(dephasing_shifts / 1e3, robust_infidelities_dephasing, label="Robust")
axs[1].plot(dephasing_shifts / 1e3, optimal_infidelities_dephasing, label="Standard")
axs[1].set_xlabel("Laser detuning shift $\\Delta \\delta$ (kHz)")
hs, ls = axs[0].get_legend_handles_labels()
fig.legend(handles=hs, labels=ls, loc="center", bbox_to_anchor=(0.5, 1.0), ncol=2)
The broader high-fidelity regions indicate the benefit of the robust optimized pulses when there is quasi-static dephasing noise or noise in the control pulse timings. The additional robustness for
this more general gate is in agreement with published experimental results, using related robustness methodology.
In this example we highlight the flexibility of our graph-based computational framework to add several substantial constraints on top of our optimization:
• We generate six parallel pairwise "2-of-12" entangling operations on a 12-ion chain
• Each pair is assigned a different target entangling phase, increasing in steps of 0.2 for each pair
• The high-frequency components of the drive are removed using a sinc filter within the optimizer
# Define trap and laser characteristics.
ion_count = 12
center_of_mass_frequencies = [1.6e6, 1.5e6, 0.25e6]
wavevector = [(2 * np.pi) / 355e-9, (2 * np.pi) / 355e-9, 0]
maximum_rabi_rate = 2 * np.pi * 100e3
laser_detuning = 1.6e6 + 4.7e3
# Calculate Lamb–Dicke parameters and relative detunings.
ion_chain_properties = bo.ions.obtain_ion_chain_properties(
atomic_mass=171, # Yb ions
lamb_dicke_parameters = ion_chain_properties["lamb_dicke_parameters"]
relative_detunings = ion_chain_properties["relative_detunings"]
Your task (action_id="1827980") has started.
Your task (action_id="1827980") has completed.
# Operation duration.
duration = 3e-4 # s
# Define target phases: element (j,k) is the relative entanglement for ions j and k (k<j).
target = np.zeros((ion_count, ion_count))
for ion1 in range(1, ion_count, 2):
ion2 = ion1 - 1
target[ion1][ion2] = ion1 / 10
Consider separately-tunable complex drives $\gamma_j (t) = \Omega_j e^{i \phi_j}$ that address each ion $j$ in the trap. The drives are piecewise-constant with amplitude and phase modulation.
Additionally, a low-pass (sinc) filter is incorporated into the optimizer to smooth the pulse, as required for many practical implementations.
You can set the number of optimized pulse segments and resampling segments for the smoothed pulse, as well as the sinc cutoff frequency in the cell below.
segment_count = 32
sample_count = 128
sample_times = np.linspace(0, duration, sample_count)
cutoff_frequency = 2 * np.pi * 0.05e6
drive_names = [f"ion_drive_{number}" for number in range(ion_count)]
graph = bo.Graph()
drives = []
for drive_name in drive_names:
drive_raw_real = graph.real_optimizable_pwc_signal(
drive_raw_imag = graph.real_optimizable_pwc_signal(
pwc=drive_raw_real + 1j * drive_raw_imag,
ms_phases = graph.ions.ms_phases(
ms_displacements = graph.ions.ms_displacements(
infidelity = graph.ions.ms_infidelity(
result = bo.run_optimization(
output_node_names=["infidelity"] + drive_names,
print(f"Cost = Infidelity = {result['output']['infidelity']['value']:.3e}")
qv.plot_controls({"$\\gamma$": result["output"]["ion_drive_0"]})
Your task (action_id="1827983") has started.
Your task (action_id="1827983") has completed.
Cost = Infidelity = 4.023e-06
The above figure displays the optimized smooth pulse modulus and phase dynamics for ion 0, as an example of the ion-specific drives.
For a system as large as this, you can separate the simulation of the displacement from the optimization to increase the efficiency. This procedure is demonstrated in the How to calculate system
dynamics for arbitrary Mølmer–Sørensen gates user guide.
graph = bo.Graph()
drives = [graph.pwc(**result["output"][drive_name]) for drive_name in drive_names]
ms_displacements = graph.ions.ms_displacements(
displacement_simulation = bo.execute_graph(graph, "displacements")
Your task (action_id="1827986") has started.
Your task (action_id="1827986") has completed.
# Sum over ion index to obtain total displacement of the mode.
np.sum(displacement_simulation["output"]["displacements"]["value"], axis=3)
The black cross marks the final displacement for each mode. These are overlapping at zero, indicating no residual state-motional entanglement and no motional heating caused by the operations.
The target phases for each pair of ions should be achieved at the end of a successful operation. Again, for a system as large as this, you can separate the simulation of the phases from the
optimization, to increase the efficiency.
graph = bo.Graph()
drives = [graph.pwc(**result["output"][drive_name]) for drive_name in drive_names]
ms_phases = graph.ions.ms_phases(
phase_simulation = bo.execute_graph(graph, "phases")
phases = phase_simulation["output"]["phases"]["value"]
Your task (action_id="1827989") is queued.
Your task (action_id="1827989") has started.
Your task (action_id="1827989") has completed.
times = sample_times * 1e3
for ion1, color in zip(range(1, ion_count, 2), qv.QCTRL_STYLE_COLORS):
ion2 = ion1 - 1
plt.plot(times, phases[:, ion1, ion2], label=f"{ion2, ion1}", color=color)
[0, times[-1]], [target[ion1][ion2], target[ion1][ion2]], "--", color=color
plt.xlabel("Time (ms)")
plt.ylabel("Relative phase")
plt.legend(title="Ion pair", loc="center left", bbox_to_anchor=(1, 0.5))
The figure displays the relative phase dynamics for each entangled ion pair, with a color matched to the pair's target relative phase (dashed). Note that by the end of the operation, each ion pair
achieves its specified, distinct relative phase target. | {"url":"https://docs.q-ctrl.com/boulder-opal/apply/trapped-ion-quantum-computing/design-robust-configurable-parallel-gates-for-large-trapped-ion-arrays","timestamp":"2024-11-13T09:48:17Z","content_type":"text/html","content_length":"208532","record_id":"<urn:uuid:ded4e977-f267-46a9-9522-5882273af669>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00339.warc.gz"} |
Register - I Hate MathsRegister
• Introductory Offer - Sign up for our Algebra Subscription and get one book of your choice
For anyone struggling with Algebra. Suitable for all levels, from first year to sixth year
MCM Grinds School is fast developing a reputation for being one of the best schools around teaching Maths. This Subscription is aimed at Second and Third Year Students preparing for the Junior
Cert Exam. Subscribers will have access to online tutor videos on all topics relating to Junior Cert Maths.
MCM Grinds School is fast developing a reputation for being one of the best schools around teaching Maths. This Subscription is aimed at Fourth, Fifth and Sixth Year Students preparing for the
Leaving Cert Ordinary Level Exam. Subscribers will have access to online tutor videos on all topics relating to Leaving Cert Ordinary Level Maths.
MCM Grinds School is fast developing a reputation for being one of the best schools around teaching Maths. This Subscription is aimed at Fourth, Fifth and Sixth Year Students preparing for the
Leaving Cert Higher Level Exam. Subscribers will have access to online tutor videos on all topics relating to Leaving Cert Higher Level Maths.
For anyone struggling with Algebra. Suitable for all levels, from first year to sixth year
MCM Grinds School is fast developing a reputation for being one of the best schools around teaching Maths. This Subscription is aimed at Second and Third Year Students preparing for the Junior
Cert Exam. Subscribers will have access to online tutor videos on all topics relating to Junior Cert Maths.
MCM Grinds School is fast developing a reputation for being one of the best schools around teaching Maths. This Subscription is aimed at Fourth, Fifth and Sixth Year Students preparing for the
Leaving Cert Ordinary Level Exam. Subscribers will have access to online tutor videos on all topics relating to Leaving Cert Ordinary Level Maths.
MCM Grinds School is fast developing a reputation for being one of the best schools around teaching Maths. This Subscription is aimed at Fourth, Fifth and Sixth Year Students preparing for the
Leaving Cert Higher Level Exam. Subscribers will have access to online tutor videos on all topics relating to Leaving Cert Higher Level Maths. | {"url":"https://ihatemaths.ie/register/","timestamp":"2024-11-04T20:04:07Z","content_type":"text/html","content_length":"103308","record_id":"<urn:uuid:c5b4358f-30ba-4039-95fc-fb08d40041f2>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00895.warc.gz"} |
Reluctance Quotes (6 quotes)
Facts were never pleasing to him. He acquired them with reluctance and got rid of them with relief. He was never on terms with them until he had stood them on their heads.
I’ve met a lot of people in important positions, and he [Wernher von Braun] was one that I never had any reluctance to give him whatever kind of credit they deserve. He owned his spot, he knew what
he was doing, and he was very impressive when you met with him. He understood the problems. He could come back and straighten things out. He moved with sureness whenever he came up with a decision.
Of all the people, as I think back on it now, all of the top management that I met at NASA, many of them are very, very good. But Wernher, relative to the position he had and what he had to do, I
think was the best of the bunch.
Men in general are very slow to enter into what is reckoned a new thing; and there seems to be a very universal as well as great reluctance to undergo the drudgery of acquiring information that seems
not to be absolutely necessary.
The great testimony of history shows how often in fact the development of science has emerged in response to technological and even economic needs, and how in the economy of social effort, science,
even of the most abstract and recondite kind, pays for itself again and again in providing the basis for radically new technological developments. In fact, most people—when they think of science as a
good thing, when they think of it as worthy of encouragement, when they are willing to see their governments spend substance upon it, when they greatly do honor to men who in science have attained
some eminence—have in mind that the conditions of their life have been altered just by such technology, of which they may be reluctant to be deprived.
Two extreme views have always been held as to the use of mathematics. To some, mathematics is only measuring and calculating instruments, and their interest ceases as soon as discussions arise which
cannot benefit those who use the instruments for the purposes of application in mechanics, astronomy, physics, statistics, and other sciences. At the other extreme we have those who are animated
exclusively by the love of pure science. To them pure mathematics, with the theory of numbers at the head, is the only real and genuine science, and the applications have only an interest in so far
as they contain or suggest problems in pure mathematics.
Of the two greatest mathematicians of modern tunes, Newton and Gauss, the former can be considered as a representative of the first, the latter of the second class; neither of them was exclusively
so, and Newton’s inventions in the science of pure mathematics were probably equal to Gauss’s work in applied mathematics. Newton’s reluctance to publish the method of fluxions invented and used by
him may perhaps be attributed to the fact that he was not satisfied with the logical foundations of the Calculus; and Gauss is known to have abandoned his electro-dynamic speculations, as he could
not find a satisfying physical basis. …
Newton’s greatest work, the Principia, laid the foundation of mathematical physics; Gauss’s greatest work, the Disquisitiones Arithmeticae, that of higher arithmetic as distinguished from algebra.
Both works, written in the synthetic style of the ancients, are difficult, if not deterrent, in their form, neither of them leading the reader by easy steps to the results. It took twenty or more
years before either of these works received due recognition; neither found favour at once before that great tribunal of mathematical thought, the Paris Academy of Sciences. …
The country of Newton is still pre-eminent for its culture of mathematical physics, that of Gauss for the most abstract work in mathematics.
When Archimedes jumped out of his bath one morning and cried Eureka he obviously had not worked out the whole principle on which the specific gravity of various bodies could be determined j and
undoubtedly there were people who laughed at his first attempts. That is perhaps why most scientific pioneers are so slow to disclose the nature of their first insights when they believe themselves
to be on a track of a new discovery. | {"url":"https://todayinsci.com/QuotationsCategories/R_Cat/Reluctance-Quotations.htm","timestamp":"2024-11-07T03:00:06Z","content_type":"text/html","content_length":"93666","record_id":"<urn:uuid:ceb3aa3e-d7be-4fd0-b44c-e45fe19614fc>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00546.warc.gz"} |
Compare Numbers Upto 100,000 (WIZ Math)
• Look at the numbers carefully.
• Rule 1: If a number has more digits than another, it is greater of the two.
• Rule 2: If two numbers have the same number of digits, we compare them by their extreme left-most digits. The number with the greater digit is greater.
• If the extreme left-most digits are the same, we compare them by their next digits to the right and so on.
Example: 95,356 ? 9,558
Here the first number 95,356 has more digits than the second number 9,558
Therefore 95,356 > 9,558
Answer: >
Example: 95,356 ? 95,578
1. Start comparing from ten thousands place, both the numbers contain 9.
2. In thousands place both contain 5 .
3. In hundreds place the first number contains 3 and the second contains 5.
4. Hence first number is lesser than second.
Answer: <
Directions: Compare the numbers and use appropriate sign. Also write at least ten examples of your own. | {"url":"http://kwiznet.com/p/takeQuiz.php?ChapterID=1271&CurriculumID=3&NQ=4&Num=3.17","timestamp":"2024-11-03T18:35:36Z","content_type":"text/html","content_length":"10876","record_id":"<urn:uuid:b2f63afa-768a-4bf9-b8e0-8811c2043dd6>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00487.warc.gz"} |
Equations learn online | sofatutor.com
Easy learning with videos, exercises, tasks, and worksheets
Solving Equations
Equations containing one or more variables are algebra equations. Variables represent unknown amounts, and any letter or symbol can be used as a variable.
Algebra equations may need one-step, two-steps, or multiple-steps to solve for the value of the variable. Equations may have variables on one or both sides of the equal sign. To solve algebra
equations, students combine like terms and use inverse operations to isolate the variable. Equations in the format of variable word problems may help students learn to apply algebra skills to real
world situations.
One-Step Equation
To isolate the variable to solve these one-step equations, undo the equation:
Example 1: Undo the constant by subtracting from both sides of the equation.
$\begin{array}{lcl} x + 3 & = & 8 \\ x + 3 - 3 & = & 8 - 3 \\ x & = & 5 \end{array}$
Example 2: Divide both sides of the equation by the coefficient.
$\begin{array}{lcl} 2x & = & 16\\ \frac{2}{2}x & = & \frac{16}{2} \\ x& = &8 \end{array}$
Two-Step Equation
Use reverse PEMDAS to solve this two-step equation.
Step 1: Subtract the constant from both sides of the equation.
Step 2: Divide both sides of the equation by the coefficient.
$\begin{array}{lcl} 3x + 5 &= & 17 \\ 3x + 5 -5 &= &17 -5\\ 3x&=&12 \\ \frac{3}{3}x&=&\frac{12}{3} \\ x&=&4 \end{array}$
Multi-Step Equation with Variable on One Side
Use the Distributive Property to solve this multi-step equation:
Step 1: Apply the Distributive Property.
Step 2: Add the constant to both sides of the equation.
Step 3: Divide both sides of the equation by the coefficient.
$\begin{array}{lcl} 8 &= &2(x - 2)\\ 8&=&2x -4 \\ 8 +4&=&2x -4 +4 \\ 12&=&2x \\ \frac{12}{2}&=&\frac{2}{2}x \\ x&=&6 \end{array}$
Multi-Step Equation with Variables on Both Sides
To solve multi-step equations such as this, combine like terms to make the problem easier to solve:
Step 1: Use opposite operations to combine like terms.
Step 2: Divide both sides of the equation by the coefficient.
$\begin{array}{lcl} 11x&=& -16 + 3x\\ 11x - 3x &=& -16 + 3x- 3x \\ 8x &=& -16 \\ \frac{8}{8}x&=&\frac{-16}{8}\\ x&=& -2 \end{array}$
Word Problems
Write a variable equation to solve this word problem. Let x represent the number of people attending the party.
Husni bakes cakes for a party. He doesn’t know how many people will attend, but he does know he needs 3 eggs per cake and a cake serves 8 people. Write an expression to determine the number of eggs
he will need.
If 24 people attend the party, how many eggs will he need?
$(\frac{24}{8})\times3 = 3\times3=9$
He needs 9 eggs.
Distance Rate Time
Use the Distance Rate Time (DRT) formula to solve problems of how far, how long, or how fast. Using the formula, calculate problems for travel in the same direction or different directions. The
following triangles will help you to remember:
Distance Rate Time – Same Direction
Example 1: Use the DRT formula to solve this problem: Alex rode his bike for 2 hours travelling 23 miles. What is his rate?
D = 23 miles
T = 2 hours
R = x
$\begin{array}{lcl} 23&=&2x\\ x&=&11.5 \end{array}$
Alex’s rate is 11.5 miles per hour.
Example 2: After school, Kara and Cyrus ride bikes with the riding club. Kara leaves school at 3:00 pm, riding 10 miles per hour. Cyrus stayed a few minutes late to clean out his locker and left at
3:10 pm, riding at a rate of 12 miles per hour. When will he catch up with Kara?
$\begin{array}{lcl} 10t &=& 12(t-\frac{1}{6}) \\ 10t&=&12t -2 \\ 10t -12t&=&12t -12t -2 \\ -2t&=&-2 \\ \frac{-2}{-2}t&=&\frac{-2}{-2} \\ 2t&=&2 \\ t&=&1 \end{array}$
After one hour, Cyrus will catch up with Kara.
Distance Rate Time – Different Directions
For this problem, the travel is from two different directions. Use the DRT formula to solve: Carly and Pete leave school traveling in opposite directions on a straight road. Pete rides his electric
bike 12 mi/h faster than Carly walks. After 2 hours, they are 36 miles apart. Find Carly’s rate and Pete’s rate.
$\begin{array}{lcl} 2(12+x) + 2x &=&36\\ 24 +2x +2x&=&36\\ 24 +4x&=&36\\ 24 -24 +4x&=&36 -24\\ 4x&=&12\\ \frac{4}{4}x&=&\frac{12}{4} \\ x&=&3 \end{array}$
Carly’s rate is 3 mph, and Pete’s rate is 15.
All videos on the topic
Videos on the topic
Equations (7 videos) | {"url":"https://us.sofatutor.com/math/algebra-1/equations","timestamp":"2024-11-12T22:16:48Z","content_type":"text/html","content_length":"98826","record_id":"<urn:uuid:b53cf59a-79d8-4bf0-88cd-f6f090a96324>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00434.warc.gz"} |
engineering maths solutions
Applications of differential equations of first order. Understanding Advanced Engineering Mathematics 5th Edition homework has never been easier than with Chegg Study. Free textbook, Matlab notes,
past examination papers and solutions! Understanding Advanced Engineering Mathematics 10th Edition homework has never been easier than with Chegg Study. Equations of first order and higher degree
(p-y-x equations), Equations solvable for p, y, x. PART A Unit-I: Differential Equations-1 . Solutions Manual to Advanced Modern Engineering Mathematics, 4th Edition General and singular solutions,
Clarauit’s equation. Solving mathematics and engineering sciences problems is one of my favorite hobbies, and I decided to share with everyone this passion with the hope of helping learners from
around the world. ENGINEERING MATHEMATICS-II : SUBJECT CODE: 10 MAT 21 . Learn engineering mathematics. Solutions Manuals are available for thousands of the most popular college and high school
textbooks in subjects such as Math, Science (Physics, Chemistry, Biology), Engineering (Mechanical, Electrical, Civil), Business and more. Solutions Manuals are available for thousands of the most
popular college and high school textbooks in subjects such as Math, Science (Physics, Chemistry, Biology), Engineering (Mechanical, Electrical, Civil), Business and more. | {"url":"http://tns.com.pe/wiki/page.php?id=engineering-maths-solutions-2b95e5","timestamp":"2024-11-12T16:39:15Z","content_type":"text/html","content_length":"9637","record_id":"<urn:uuid:2c99d438-6d73-47af-872d-da602093e668>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00131.warc.gz"} |
Scientific Computing Project
WavES (Wave Equations Solutions) is a combined theoretical and practical tool for the numerical solution of different types of time-dependent Wave Equations (acoustic, elastic and electromagnetic).
The theoretical part consists of published books, papers, courses and presentations, where new efficient numerical methods and strategies for the solution of time-dependent wave equations are
presented. The practical part is represented by the C++ program library WavES for the computational solution of time-dependent wave equations (acoustic, elastic and electromagnetic) using three
different methods: Finite Element Method (FEM), Finite Difference Method (FDM), Hybrid FEM/FDM method.
WavES started to be developed in 2000-2003 at Finite Element Center in Gothenburg, Sweden, at Chalmers University of Technology and Gothenburg University. In 2003-2005, Dr. Larisa Beilina continued
to work with WavES at Basel University (Switzerland) and in 2007-2008 at NTNU-Trondheim (Norway).
She has extended WavES with classes for the solution of the forward problem for the electromagnetic wave equation in 2D and 3D using FDTD, stabilized FEM and hybrid FEM/FDM method. In addition, there
were developed a new set of classes for the solution of coefficient inverse problems for all above named equations using the adaptive finite element method (see section Applications). Nowadays WavES
Project is hosted at the Department of Mathematical Sciences of Chalmers University of Technology and Gothenburg University. Dr. Vladimir Timonov from SibSUTIS and Novosibirsk State University
(Novosibirsk, Russia) joined the project in December 2011.
WavES uses PETSc 2 and PETSc 3 libraries which are a suite of data structures and routines for the scalable (parallel) solution of scientific applications modelled by partial differential equations.
It employs the MPI standard for parallel implementation.
The main aim of WavES Project is to enable efficient development of modern finite element/difference codes for solution of forward problems for different types of wave equation (acoustic, elastic and
electromagnetic) using structured FDM meshes, adaptively refined FEM meshes and hybrid FEM/FDM meshes. The hybrid FEM/FDM method means that different numerical methods, finite elements and finite
differences, are used in different subdomains. The purpose is to combine the flexibility of finite elements with the efficiency of finite differences. The hybrid approach may be an important tool to
reduce the execution time and memory requirement for large scale computations.
Computations in WavES are performed on a parallel infrastructure in the center for scientific and technical computing C3SE at Chalmers University of Technology, Gothenburg, Sweden.
WavES Project has been successfully used for the solution of Partial Differential Equations (PDE) in various fields of computational mathematics. Some of these applications are:
1. Scanning acoustic microscopy to reconstruct elastic properties of the bones.
2. Reconstruction of dielectrics on transmitted experimental data in approximate globally convergent algorithm.
3. Solution of 3D coefficient inverse problem for the Maxwell’s system in time domain.
4. Subsurface imaging. | {"url":"https://waves24.com/","timestamp":"2024-11-07T16:14:11Z","content_type":"text/html","content_length":"13995","record_id":"<urn:uuid:818042fe-f256-4856-be42-dfedb8132e19>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00724.warc.gz"} |
M-theory / String Theory is the Only Game in Town
M-theory / String Theory is the Only Game in Town
If one views science as an economist, it would stand to reason that the scientific theory that should be first retired would be the one that offers the greatest opportunity for arbitrage in the
market place of ideas. Thus it is not sufficient to look for ideas which are merely wrong, as we should instead look for troubled scientific ideas that block progress by inspiring zeal, devotion, and
what biologists politely term ‘interference competition’ all out of proportion to their history of achievement. Here it is hard to find a better candidate for an intellectual bubble than that which
has formed around the quest for a consistent theory of everything physical, reinterpreted as if it were synonymous with ‘quantum gravity’. If nature were trying to send a polite message that there is
other preliminary work to be done first before we quantize gravity, it is hard to see how she could send a clearer message than dashing the Nobel dreams for two successive generations of Bohr’s
brilliant descendants.
To recall, modern physics rests on a stool with three classical geometric legs first fashioned individually by Einstein, Maxwell, and Dirac. The last two of those legs can be together retrofitted to
a quantum theory of force and matter known as the ‘standard model’, while the first stubbornly resists any such attempt at an upgrade, rendering the semi-quantum stool unstable and useless. It is
from this that the children of Bohr have derived the need to convert the children of Einstein to the quantum religion at all costs so that the stool can balance.
But, to be fair to those who insist that Einstein must be now made to bow to Bohr, the most strident of those enthusiasts have offered a fair challenge. Quantum exceptionalists claim, despite an
unparalleled history of non-success, that string theory (now rebranded as M-theory for matrix, magic or membrane) remains literally ‘the only game in town’ because fundamental physics has gotten so
hard that no one can think of a credible alternate unification program. If we are to dispel this as a canard, we must make a good faith effort to answer the challenge by providing interesting
alternatives, lest we be left with nothing at all.
My reason for believing that there is a better route to the truth is that we have, out of what seems to be misplaced love for our beloved Einstein, been too reverential to the exact form of general
relativity. For example, if before retrofitting we look closely at the curvature and geometry of the legs, we can see something striking, in that they are subtly incompatible at a classical geometric
level before any notion of a quantum is introduced. Einstein’s leg seems the sparest and sturdiest as it clearly shows the attention to function found in the school of ‘intrinsic geometry’ founded by
the German Bernhard Riemann. The Maxwell and Dirac legs are somewhat more festive and ornamented as they explore the freedom of form which is the raison d’etre for a more whimsical school of
‘auxiliary geometry’ pioneered by Alsatian Charles Ehresmann. This leads one naturally to a very different question: what if the quantum incompatibility of the existing theories is really a red
herring with respect to unification and the real sticking point is a geometric conflict between the mathematicians Ehresmann and Riemann rather than an incompatibility between the physicists Einstein
and Bohr? Even worse, it could be that none of the foundations are ready to be quantized. What if all three theories are subtly incomplete at a geometric level and that the quantum will follow once,
and only once, all three are retired and replaced with a unified geometry?
If such an answer exists, it cannot be expected to be a generic geometric theory as all three of the existing theories are each, in some sense, the simplest possible in their respective domains. Such
a unified approach might instead involve a new kind of mathematical toolkit combining elements of the two major geometric schools, which would only be relevant to physics if the observed world can be
shown to be of a very particular subtype. Happily, with the discoveries of neutrino mass, non-trivial dark energy, and dark matter, the world we see looks increasingly to be of the special class that
could accommodate such a hybrid theory.
One could go on in this way, but it is not the only interesting line of thinking. While, ultimately, there may be a single unified theory to summit, there are few such intellectual peaks that can
only be climbed from one face. We thus need to return physics to its natural state of individualism so that independent researchers need not fear large research communities who, in the quest for
mindshare and resources, would crowd out isolated rivals pursuing genuinely interesting inchoate ideas that head in new directions. Unfortunately it is difficult to responsibly encourage theorists
without independent wealth to develop truly speculative theories in a community which has come to apply artificially strict standards to new programs and voices while letting M-theory stand, year
after year, for mulligan and mañana.
Established string theorists may, with a twinkle in the eye, shout, ‘predictions!’, ‘falsifiability!’ or ‘peer review!’ at younger competitors in jest. Yet potentially rival ‘infant industry’
research programs, as the saying goes, do not die in jest but in earnest. Given the history of scientific exceptionalism surrounding quantum gravity research, it is neither desirable nor necessary to
retire M-theory explicitly, as it contains many fascinating ideas. Instead, one need only insist that the training wheels that were once customarily circulated to new entrants to reinvigorate the
community, be transferred to emerging candidates from those who have now monopolized them for decades at a time. We can then wait at long last to see if ‘the only game in town’, when denied the
luxury of special pleading by senior boosters, has the support from nature to stay upright.
This essay was originally published in 2014 at https://www.edge.org/response-detail/25547
https://theportal.group/wp-content/uploads/2020/07/edge-2.jpg 720 1280 The Portal Group https://theportal.group/wp-content/uploads/2021/02/the-portal-group-logo-black.png The Portal Group2014-12-31
22:36:002022-03-16 19:49:38M-theory / String Theory is the Only Game in Town | {"url":"https://theportal.group/2014-edge-string-theory/","timestamp":"2024-11-05T15:24:04Z","content_type":"text/html","content_length":"115965","record_id":"<urn:uuid:c1a4f1de-790e-4e70-931a-d4dd72d7ae6a>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00615.warc.gz"} |
Solving More Elasticity Problems
This tutorial is automatically generated from TestSolvingMoreElasticityProblemsTutorial.hpp at revision 3c544f98da9c. Note that the code is given in full at the bottom of the page.
In this second solid mechanics tutorial, we illustrate some other possibilities: using tractions that are defined with a function, or tractions that depend on the deformed body (eg normal pressure
boundary conditions), specifying non-zero displacement boundary conditions, and then displacement boundary conditions only in some directions, and doing compressible solves.
These includes are the same as before
#include <cxxtest/TestSuite.h>
#include "TrianglesMeshReader.hpp"
The incompressible solver
#include "IncompressibleNonlinearElasticitySolver.hpp"
An incompressible material law
#include "ExponentialMaterialLaw.hpp"
These two are specific to compressible problems
#include "CompressibleNonlinearElasticitySolver.hpp"
#include "CompressibleMooneyRivlinMaterialLaw.hpp"
This include should generally go last to avoid issues on old library versions
#include "PetscSetupAndFinalize.hpp"
This function is used in the first test
c_vector<double,2> MyTraction(c_vector<double,2>& rX, double time)
c_vector<double,2> traction = zero_vector<double>(2);
traction(0) = rX(0);
return traction;
class TestSolvingMoreElasticityProblemsTutorial : public CxxTest::TestSuite
Incompressible deformation: non-zero displacement boundary conditions, functional tractions
We now consider a more complicated example. We prescribe particular new locations for the nodes on the Dirichlet boundary, and also show how to prescribe a traction that is given in functional form
rather than prescribed for each boundary element.
void TestIncompressibleProblemMoreComplicatedExample()
Create a mesh
QuadraticMesh<2> mesh;
mesh.ConstructRegularSlabMesh(0.1 /*stepsize*/, 1.0 /*width*/, 1.0 /*height*/);
Use a different material law this time, an exponential material law. The material law needs to inherit from AbstractIncompressibleMaterialLaw, and there are a few implemented, see continuum_mechanics
ExponentialMaterialLaw<2> law(1.0, 0.5); // First parameter is 'a', second 'b', in W=a*exp(b(I1-3))
Now specify the fixed nodes, and their new locations. Create std::vectors for each.
std::vector<unsigned> fixed_nodes;
std::vector<c_vector<double,2> > locations;
Loop over the mesh nodes
for (unsigned i=0; i<mesh.GetNumNodes(); i++)
If the node is on the Y=0 surface (the LHS)
if (fabs(mesh.GetNode(i)->rGetLocation()[1]) < 1e-6)
Add it to the list of fixed nodes
and define a new position x=(X,0.1*X^2^)
c_vector<double,2> new_location;
double X = mesh.GetNode(i)->rGetLocation()[0];
new_location(0) = X;
new_location(1) = 0.1*X*X;
Now collect all the boundary elements on the top surface, as before, except here we don’t create the tractions for each element
std::vector<BoundaryElement<1,2>*> boundary_elems;
for (TetrahedralMesh<2,2>::BoundaryElementIterator iter = mesh.GetBoundaryElementIteratorBegin();
iter != mesh.GetBoundaryElementIteratorEnd();
If Y=1, have found a boundary element
if (fabs((*iter)->CalculateCentroid()[1] - 1.0)<1e-6)
BoundaryElement<1,2>* p_element = *iter;
Create a problem definition object, and this time calling SetFixedNodes which takes in the new locations of the fixed nodes.
SolidMechanicsProblemDefinition<2> problem_defn(mesh);
problem_defn.SetFixedNodes(fixed_nodes, locations);
Now call SetTractionBoundaryConditions, which takes in a vector of boundary elements as in the previous test. However this time the second argument is a function pointer (just the name of the
function) to a function returning traction in terms of position (and time [see below]). This function is defined above, before the tests. It has to take in a c_vector (position) and a double (time),
and returns a c_vector (traction), and will only be called using points in the boundary elements being passed in. The function MyTraction (defined above, before the tests) above defines a horizontal
traction (ie a shear stress, since it is applied to the top surface) which increases in magnitude across the object.
problem_defn.SetTractionBoundaryConditions(boundary_elems, MyTraction);
Note: You can also call problem_defn.SetBodyForce(MyBodyForce), passing in a function instead of a vector, although isn’t really physically useful, it is only really useful for constructing problems
with exact solutions.
Create the solver as before
IncompressibleNonlinearElasticitySolver<2> solver(mesh,
Call Solve()
Another quick check
TS_ASSERT_EQUALS(solver.GetNumNewtonIterations(), 6u);
Visualise as before.
Advanced: Note that the function MyTraction takes in time, which it didn’t use. In the above it would have been called with t=0. The current time can be set using SetCurrentTime(). The idea is that
the user may want to solve a sequence of static problems with time-dependent tractions (say), for which they should allow MyTraction to depend on time, and put the solve inside a time-loop, for
//for (double t=0; t<T; t+=dt)
// solver.SetCurrentTime(t);
// solver.Solve();
In this the current time would be passed through to MyTraction
Create Cmgui output
This is just to check that nothing has been accidentally changed in this test
TS_ASSERT_DELTA(solver.rGetDeformedPosition()[98](0), 1.4543, 1e-3);
TS_ASSERT_DELTA(solver.rGetDeformedPosition()[98](1), 0.5638, 1e-3);
Sliding boundary conditions
It is common to require a Dirichlet boundary condition where the displacement/position in one dimension is fixed, but the displacement/position in the others are free. This can be easily done when
collecting the new locations for the fixed nodes, as shown in the following example. Here, we take a unit square, apply gravity downward, and suppose the Y=0 surface is like a frictionless boundary,
so that, for the nodes on Y=0, we specify y=0 but leave x free (Here (X,Y)=old position, (x,y)=new position). (Note though that this wouldn’t be enough to uniquely specify the final solution - an
arbitrary translation in the Y direction could be added a solution to obtain another valid solution, so we fully fix the node at the origin.)
void TestWithSlidingDirichletBoundaryConditions()
QuadraticMesh<2> mesh;
mesh.ConstructRegularSlabMesh(0.1 /*stepsize*/, 1.0 /*width*/, 1.0 /*height*/);
ExponentialMaterialLaw<2> law(1.0, 0.5); // First parameter is 'a', second 'b', in W=a*exp(b(I1-3))
Create fixed nodes and locations…
std::vector<unsigned> fixed_nodes;
std::vector<c_vector<double,2> > locations;
Fix node 0 (the node at the origin)
For the rest, if the node is on the Y=0 surface..
for (unsigned i=1; i<mesh.GetNumNodes(); i++)
if (fabs(mesh.GetNode(i)->rGetLocation()[1]) < 1e-6)
..add it to the list of fixed nodes..
..and define y to be 0 but x is fixed
c_vector<double,2> new_location;
new_location(0) = SolidMechanicsProblemDefinition<2>::FREE;
new_location(1) = 0.0;
Set the material law and fixed nodes, add some gravity, and solve
SolidMechanicsProblemDefinition<2> problem_defn(mesh);
problem_defn.SetFixedNodes(fixed_nodes, locations);
c_vector<double,2> gravity = zero_vector<double>(2);
gravity(1) = -0.5;
IncompressibleNonlinearElasticitySolver<2> solver(mesh,
Check the node at (1,0) has moved but has stayed on Y=0
TS_ASSERT_LESS_THAN(1.0, solver.rGetDeformedPosition()[10](0));
TS_ASSERT_DELTA(solver.rGetDeformedPosition()[10](1), 0.0, 1e-3);
Compressible deformation, and other bits and pieces
In this test, we will show the (very minor) changes required to solve a compressible nonlinear elasticity problem, we will describe and show how to specify ‘pressure on deformed body’ boundary
conditions, we illustrate how a quadratic mesh can be generated using a linear mesh input files, and we also illustrate how Solve() can be called repeatedly, with loading changing between the solves.
Note: for other examples of compressible solves, including problems with an exact solution, see the file continuum_mechanics/test/TestCompressibleNonlinearElasticitySolver.hpp
void TestSolvingCompressibleProblem()
All mechanics problems must take in quadratic meshes, but the mesh files for (standard) linear meshes in Triangles/Tetgen can be automatically converted to quadratic meshes, by simply doing the
following. (The mesh loaded here is a disk centred at the origin with radius 1).
QuadraticMesh<2> mesh;
TrianglesMeshReader<2,2> reader("mesh/test/data/disk_522_elements");
Compressible problems require a compressible material law, ie one that inherits from AbstractCompressibleMaterialLaw. The CompressibleMooneyRivlinMaterialLaw is one such example; instantiate one of
CompressibleMooneyRivlinMaterialLaw<2> law(1.0, 0.5);
For this problem, we fix the nodes on the surface for which Y < -0.9
std::vector<unsigned> fixed_nodes;
for ( TetrahedralMesh<2,2>::BoundaryNodeIterator iter = mesh.GetBoundaryNodeIteratorBegin();
iter != mesh.GetBoundaryNodeIteratorEnd();
double Y = (*iter)->rGetLocation()[1];
if (Y < -0.9)
We will (later) apply Neumann boundary conditions to surface elements which lie below Y=0, and these are collected here. (Minor, subtle, comment: we don’t bother here checking Y>-0.9, so the surface
elements collected here include the ones on the Dirichlet boundary. This doesn’t matter as the Dirichlet boundary conditions to the nonlinear system essentially overwrite an Neumann-related effects).
std::vector<BoundaryElement<1,2>*> boundary_elems;
for (TetrahedralMesh<2,2>::BoundaryElementIterator iter
= mesh.GetBoundaryElementIteratorBegin();
iter != mesh.GetBoundaryElementIteratorEnd();
BoundaryElement<1,2>* p_element = *iter;
if (p_element->CalculateCentroid()[1]<0.0)
Create the problem definition class, and set the law again, this time stating that the law is compressible
SolidMechanicsProblemDefinition<2> problem_defn(mesh);
Set the fixed nodes and gravity
The elasticity solvers have two nonlinear solvers implemented, one hand-coded and one which uses PETSc’s SNES solver. The latter is not the default but can be more robust (and will probably be the
default in later versions). This is how it can be used. (This option can also be called if the compiled binary is run from the command line (see Running Binaries From Command Line) using the option
This line tells the solver to output info about the nonlinear solve as it progresses, and can be used with or without the SNES option above. The corresponding command line option is “-mech_verbose”
c_vector<double,2> gravity;
gravity(0) = 0;
gravity(1) = 0.1;
Declare the compressible solver, which has the same interface as the incompressible one, and call Solve()
CompressibleNonlinearElasticitySolver<2> solver(mesh,
Now we call add additional boundary conditions, and call `Solve() again. Firstly: these Neumann conditions here are not specified traction boundary conditions (such BCs are specified on the
undeformed body), but instead, the (more natural) specification of a pressure exactly in the normal direction on the deformed body. We have to provide a set of boundary elements of the mesh, and a
pressure to act on those elements. The solver will automatically compute the deformed normal directions on which the pressure acts. Note: with this type of BC, the ordering of the nodes on the
boundary elements needs to be consistent, otherwise some normals will be computed to be inward and others outward. The solver will check this on the original mesh and throw an exception if this is
not the case. (The required ordering is such that: in 2D, surface element nodes are ordered anticlockwise (looking at the whole mesh); in 3D, looking at any surface element from outside the mesh, the
three nodes are ordered anticlockwise. (Triangle/tetgen automatically create boundary elements like this)).
double external_pressure = -0.4; // negative sign => inward pressure
problem_defn.SetApplyNormalPressureOnDeformedSurface(boundary_elems, external_pressure);
Call Solve() again, so now solving with fixed nodes, gravity, and pressure. The solution from the previous solve will be used as the initial guess. Although at the moment the solution from the
previous call to Solve() will be over-written, calling Solve() repeatedly may be useful for some problems: sometimes, Newton’s method will fail to converge for given force/pressures etc, and it can
be (very) helpful to increment the loading. For example, set the gravity to be (0,-9.81/3), solve, then set it to be (0,-2*9.81/3), solve again, and finally set it to be (0,-9.81) and solve for a
final time
Full code
#include <cxxtest/TestSuite.h>
#include "TrianglesMeshReader.hpp"
#include "IncompressibleNonlinearElasticitySolver.hpp"
#include "ExponentialMaterialLaw.hpp"
#include "CompressibleNonlinearElasticitySolver.hpp"
#include "CompressibleMooneyRivlinMaterialLaw.hpp"
#include "PetscSetupAndFinalize.hpp"
c_vector<double,2> MyTraction(c_vector<double,2>& rX, double time)
c_vector<double,2> traction = zero_vector<double>(2);
traction(0) = rX(0);
return traction;
class TestSolvingMoreElasticityProblemsTutorial : public CxxTest::TestSuite
void TestIncompressibleProblemMoreComplicatedExample()
QuadraticMesh<2> mesh;
mesh.ConstructRegularSlabMesh(0.1 /*stepsize*/, 1.0 /*width*/, 1.0 /*height*/);
ExponentialMaterialLaw<2> law(1.0, 0.5); // First parameter is 'a', second 'b', in W=a*exp(b(I1-3))
std::vector<unsigned> fixed_nodes;
std::vector<c_vector<double,2> > locations;
for (unsigned i=0; i<mesh.GetNumNodes(); i++)
if (fabs(mesh.GetNode(i)->rGetLocation()[1]) < 1e-6)
c_vector<double,2> new_location;
double X = mesh.GetNode(i)->rGetLocation()[0];
new_location(0) = X;
new_location(1) = 0.1*X*X;
std::vector<BoundaryElement<1,2>*> boundary_elems;
for (TetrahedralMesh<2,2>::BoundaryElementIterator iter = mesh.GetBoundaryElementIteratorBegin();
iter != mesh.GetBoundaryElementIteratorEnd();
if (fabs((*iter)->CalculateCentroid()[1] - 1.0)<1e-6)
BoundaryElement<1,2>* p_element = *iter;
SolidMechanicsProblemDefinition<2> problem_defn(mesh);
problem_defn.SetFixedNodes(fixed_nodes, locations);
problem_defn.SetTractionBoundaryConditions(boundary_elems, MyTraction);
IncompressibleNonlinearElasticitySolver<2> solver(mesh,
TS_ASSERT_EQUALS(solver.GetNumNewtonIterations(), 6u);
//for (double t=0; t<T; t+=dt)
// solver.SetCurrentTime(t);
// solver.Solve();
TS_ASSERT_DELTA(solver.rGetDeformedPosition()[98](0), 1.4543, 1e-3);
TS_ASSERT_DELTA(solver.rGetDeformedPosition()[98](1), 0.5638, 1e-3);
void TestWithSlidingDirichletBoundaryConditions()
QuadraticMesh<2> mesh;
mesh.ConstructRegularSlabMesh(0.1 /*stepsize*/, 1.0 /*width*/, 1.0 /*height*/);
ExponentialMaterialLaw<2> law(1.0, 0.5); // First parameter is 'a', second 'b', in W=a*exp(b(I1-3))
std::vector<unsigned> fixed_nodes;
std::vector<c_vector<double,2> > locations;
for (unsigned i=1; i<mesh.GetNumNodes(); i++)
if (fabs(mesh.GetNode(i)->rGetLocation()[1]) < 1e-6)
c_vector<double,2> new_location;
new_location(0) = SolidMechanicsProblemDefinition<2>::FREE;
new_location(1) = 0.0;
SolidMechanicsProblemDefinition<2> problem_defn(mesh);
problem_defn.SetFixedNodes(fixed_nodes, locations);
c_vector<double,2> gravity = zero_vector<double>(2);
gravity(1) = -0.5;
IncompressibleNonlinearElasticitySolver<2> solver(mesh,
TS_ASSERT_LESS_THAN(1.0, solver.rGetDeformedPosition()[10](0));
TS_ASSERT_DELTA(solver.rGetDeformedPosition()[10](1), 0.0, 1e-3);
void TestSolvingCompressibleProblem()
QuadraticMesh<2> mesh;
TrianglesMeshReader<2,2> reader("mesh/test/data/disk_522_elements");
CompressibleMooneyRivlinMaterialLaw<2> law(1.0, 0.5);
std::vector<unsigned> fixed_nodes;
for ( TetrahedralMesh<2,2>::BoundaryNodeIterator iter = mesh.GetBoundaryNodeIteratorBegin();
iter != mesh.GetBoundaryNodeIteratorEnd();
double Y = (*iter)->rGetLocation()[1];
if (Y < -0.9)
std::vector<BoundaryElement<1,2>*> boundary_elems;
for (TetrahedralMesh<2,2>::BoundaryElementIterator iter
= mesh.GetBoundaryElementIteratorBegin();
iter != mesh.GetBoundaryElementIteratorEnd();
BoundaryElement<1,2>* p_element = *iter;
if (p_element->CalculateCentroid()[1]<0.0)
SolidMechanicsProblemDefinition<2> problem_defn(mesh);
c_vector<double,2> gravity;
gravity(0) = 0;
gravity(1) = 0.1;
CompressibleNonlinearElasticitySolver<2> solver(mesh,
double external_pressure = -0.4; // negative sign => inward pressure
problem_defn.SetApplyNormalPressureOnDeformedSurface(boundary_elems, external_pressure); | {"url":"https://chaste.github.io/releases/2024.2/user-tutorials/solvingmoreelasticityproblems/","timestamp":"2024-11-05T16:05:56Z","content_type":"text/html","content_length":"113191","record_id":"<urn:uuid:03225862-4254-47b2-ba90-704f16688906>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00213.warc.gz"} |
Acceleration wave propagation in an inhomogeneous heat conducting elastic rod of slowly varying cross-section
Fu, Y. B. and Scott, N. H. (1988) Acceleration wave propagation in an inhomogeneous heat conducting elastic rod of slowly varying cross-section. Journal of Thermal Stresses, 11 (2). pp. 127-140. ISSN
Full text not available from this repository.
Acceleration wave propagation in an inhomogeneous, heat-conducting elastic rod of slowly varying cross-sectional area is treated as a problem in one-dimensional wave propagation. The wave speed is
found to be independent of the varying cross section. The equation governing the growth of the wave amplitude is found to be of the Bernoulli type. The varying cross section affects only the
coefficient of the linear term. The term contributed by the varying cross section is exactly the same as that contributed by the varying area of a ray tube due to geometric spreading in the theory of
three-dimensional acceleration wave propagation. In a sense to be made precise later, a cross-sectional area that is increasing as the wave propagates renders an acceleration wave less likely to
build up into a shock after a finite distance of propagation, while a decreasing cross-sectional area renders an acceleration wave more likely to build up into a shock. The inclusion of thermal
effects leads to a damping of the waves independently of the effect of varying cross section.
Actions (login required) | {"url":"https://ueaeprints.uea.ac.uk/id/eprint/20857/","timestamp":"2024-11-02T13:48:34Z","content_type":"application/xhtml+xml","content_length":"22590","record_id":"<urn:uuid:0c502d54-85b5-414e-a703-cff2cf93d84c>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00700.warc.gz"} |
4 step equations
Solving Multi-Step Equations
Solving multi-step equations (solutions, examples, videos)
Multi-Step Equations
Multi Step Equations Worksheets - Math Monks
Solving Multi-Step Equations
Solving Multi-Step Equations Educational Resources K12 Learning ...
Multi Step Equations - Distributive Property
Multi-step equations
Solving Multi-Step Equations | ChiliMath
Solving Multi-Step Equations (Practice Question Video)
Solve Multi-Step Equations #3 | Interactive Worksheet | Education.com
Solving Multi-Step Equations - Peers and Pedagogy
50+ Multi-Step Equations worksheets for 8th Class on Quizizz ...
Multi-step equations activity | Live Worksheets
Multi Step Equations - Distributive Property - YouTube
Solving Multi-Step Equations Error Analysis Google Form - Absolute ...
50+ Two-Step Equations worksheets for 4th Class on Quizizz | Free ...
How Do You Solve a Multi-Step Equation with Fractions by ...
Solving Multiple Step Equations | Explanation, Steps & Examples Video
Solving Multi-Step Equations | ChiliMath
Solving Multi-Step Equations
Solving Equations Quiz | Math Resource | Twinkl USA - Twinkl
Video Definition 21--Equation Concepts--Multi-Step Equation ...
Multi-Step Equations - Kuta Software
Multi-Step Equations | CK-12 Foundation
Multi-Step Equation Notes and Worksheets - Lindsay Bowden
PPT - Solving Multi-Step Equations PowerPoint Presentation, free ...
How to Solve Multi Step Equations | Mathcation
Multi-Step Equations: Crack The Code Worksheet
Multi-Step Equations | OER Commons
11-4 Algebra 1 - 3.4 3 Multi-Step Equations Practice pd. 2.notebook
Solve Multi-Step Equations | Math, Algebra, Linear Equations, 8th ...
Solving Linear Equations
Multi-Step Equations Practice Problems with Answers | ChiliMath | {"url":"https://worksheets.clipart-library.com/4-step-equations.html","timestamp":"2024-11-14T10:30:04Z","content_type":"text/html","content_length":"24364","record_id":"<urn:uuid:a2a743a1-ac44-40e8-8ab5-2a03150db7ca>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00274.warc.gz"} |
Overall Status as a percent score (factoring in percent complete, weighted average and due dates)
I'm trying to calculate a department's overall status based on a percent score. I think I need to use percent of completed deliverables, weighted average of deliverables and deliverable due dates
within that area. Any words of wisdom?
I'm working on a project with various departments that each provide an arbitrary status of either "On Track," "Off Track" or "At Risk." My team would like to construct a formula that will yield a
certain percent that we can then translate ourselves to a mathematically-founded status. Similar to a grade scale (70-100% is an On Track, 40-69% is Off Track, etc.) or a RAID log scale.
Each department is listed as a parent row and has child rows/tasks that roll up. The percent complete, weighted average and due dates are accounted for. Is it possible to calculate this? Thank you!!
Best Answer
• @Jaykel T. @Bassam Khalil thank you both and apologies for the delay. This function did not give me what I was trying to find, but I reconstructed my sheet to go about it a different way. I
appreciate it.
• Hope you are fine if it's possible could you share me as an admin on a copy of your sheet contains sample data (after removing or replacing any sensitive information). and i will create the
formula for you using the same column names then you can copy it if it's work and paste it in your main sheet.
My Email for sharing : Bassam.k@mobilproject.it
☑️ Are you satisfied with my answer to your question? Please help the Community by marking it as an ( Accepted Answer), and I will be grateful for your "Vote Up" or "Insightful"
• Hey @Christine Berger (Walsh),
It may be helpful to get an average from all the variables by turning them into a percentage. In your scenario, it looks like most are already percentages except the start/end date. By utilizing
the SUM, TODAY, NETDAYS, Nested IF Functions and simple math, you may be able to get an overall average. I created the example below on how this may look like:
Formula: =IF(SUM([%a]@row:[%c]@row) / 3 = 1, 1, IF(SUM([%a]@row:[%c]@row) / 3 = 0, 0, IF(startDate@row > TODAY(), SUM([%a]@row:[%c]@row) / 3, SUM([%a]@row:[%c]@row) + (NETDAYS(startDate@row,
TODAY()) / NETDAYS(startDate@row, endDate@row))) / 4))
This formula's actions are as follows:
□ If the Sum of % a, b, and c divided by 3 is equal to 100%, return 100%
□ If the Sum of % a, b, and c divided by 3 is equal to 0%, return 0%
□ If the Start Date is greater than Today's date, return the Sum of % a, b, and c divided by 3
□ If the Start Date is less than Today's date, return the Sum of % a, b, c and (elapsed days/total days) divided by 4
☆ This takes into account of the start/end date and elapsed days so far
Please note, you can also use the AVG Function instead of the SUM Function, but you may need to use the IFERROR Function to handle the #DIVIDE BY ZERO error.
I hope this helps!
• @Jaykel T. @Bassam Khalil thank you both and apologies for the delay. This function did not give me what I was trying to find, but I reconstructed my sheet to go about it a different way. I
appreciate it.
Help Article Resources | {"url":"https://community.smartsheet.com/discussion/80615/overall-status-as-a-percent-score-factoring-in-percent-complete-weighted-average-and-due-dates","timestamp":"2024-11-07T00:22:36Z","content_type":"text/html","content_length":"413808","record_id":"<urn:uuid:611da6c6-e7a2-4062-8616-b08bd84b42ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00424.warc.gz"} |
Convert Vori to Milligram (vori to mg) - Pyron Converter
Result: Vori = Milligram
i.e. vori = mg
What does Vori mean?
Vori (Vhori, Bhori, ভরি) is an a traditional South Asian (specially in India) unit of mass of ornaments like gold, silver etc.
In this unit converter website, we have converter from Vori (vori) to some other Mass unit.
What does Milligram mean?
Milligram (mg) is a unit of mass in the metric system, commonly used in many fields such as medicine, chemistry, and pharmacology. It is defined as one-thousandth (1/1000) of a gram or 0.001
The milligram is a small unit of measurement, often used to describe the mass of small objects or substances. For example, a typical aspirin tablet weighs around 325 milligrams, while a typical
grain of table salt weighs around 580 milligrams.
The milligram is also commonly used to measure the dosage of medications, with many medications being prescribed in milligram quantities. This is because the milligram is a convenient size for
administering precise doses of drugs.
To convert milligrams to other units of mass, such as grams or kilograms, conversion factors must be used. One milligram is equivalent to 0.001 grams or 1.0 x 10^-6 kilograms. Conversely, one
gram is equal to 1000 milligrams, and one kilogram is equal to 1,000,000 milligrams.
In summary, the milligram is a widely used unit of mass, particularly in fields such as medicine and pharmacology. Its small size and convenient conversion factors make it an ideal unit for
describing the mass of small objects or substances and for administering precise doses of medications.
In this unit converter website, we have converter from Milligram (mg) to some other Mass unit.
What does Mass mean?
A basic theoretical concept of mass in physics. Mass is a basic property of an object that measures the impedance of the acceleration created on the object by the application of force. Newtonian
mechanics relates the force and acceleration of a mass of matter. The practical concept of mass is the weight of the object. So, the total amount of matter is called mass.
The mass of the object never changes. But the weight of the same object may be different due to positional reasons because the weight is the result of gravity. So even if the mass of the object
is variable, its weight will be different in the center of the earth, on the surface of the earth, and in space.
Mass is a physical quantity that does not change with the change of position of the object above the surface of the earth. An astronaut with a mass of 75 kg will have a mass of 75 kg on the moon
or in the orbit of the earth or the moon. No matter how much space the astronaut is made of, its mass remains unchanged everywhere.
Mass level:
The mass level is [M]
Unit of Mass: :
The unit of mass in the international system is the kilogram (kg), and the unit of mass in the C.G.S method is the gram (g), and the unit of mass in the British system is the pound.
Characteristic of Mass:
The mass characteristic depends on an object. The mass does not change even if an object is taken to the earth or anywhere in the universe. Moreover, the mass of an object does not depend
on anything, such as motion, temperature, magnetism, electric current, etc. So it is said that mass is the feature of the object.
Thanks for reading the article! Hope that this article helps to understand about mass and how we measure mass from an object.
How to convert Vori to Milligram : Detailed Description
Vori (vori) and Milligram (mg) are both units of Mass. On this page, we provide a handy tool for converting between vori and mg. To perform the conversion from vori to mg, follow these two simple
Steps to solve
Have you ever needed to or wanted to convert Vori to Milligram for anything? It's not hard at all:
Step 1
• Find out how many Milligram are in one Vori. The conversion factor is 116600.0 mg per vori.
Step 2
• Let's illustrate with an example. If you want to convert 10 Vori to Milligram, follow this formula: 10 vori x 116600.0 mg per vori = mg. So, 10 vori is equal to mg.
• To convert any vori measurement to mg, use this formula: vori = mg x 116600.0. The Mass in Vori is equal to the Milligram multiplied by 116600.0. With these simple steps, you can easily and
accurately convert Mass measurements between vori and mg using our tool at Pyron Converter.
FAQ regarding the conversion between vori and mg
Question: How many Milligram are there in 1 Vori ?
Answer: There are 116600.0 Milligram in 1 Vori. To convert from vori to mg, multiply your figure by 116600.0 (or divide by 0.000008576329331046313).
Question: How many Vori are there in 1 mg ?
Answer: There are 0.000008576329331046313 Vori in 1 Milligram. To convert from mg to vori, multiply your figure by 0.000008576329331046313 (or divide by 116600.0).
Question: What is 1 vori equal to in mg ?
Answer: 1 vori (Vori) is equal to 116600.0 in mg (Milligram).
Question: What is the difference between vori and mg ?
Answer: 1 vori is equal to 116600.0 in mg. That means that vori is more than a 116600.0 times bigger unit of Mass than mg. To calculate vori from mg, you only need to divide the mg Mass value by
Question: What does 5 vori mean ?
Answer: As one vori (Vori) equals 116600.0 mg, therefore, 5 vori means mg of Mass.
Question: How do you convert the vori to mg ?
Answer: If we multiply the vori value by 116600.0, we will get the mg amount i.e; 1 vori = 116600.0 mg.
Question: How much mg is the vori ?
Answer: 1 Vori equals 116600.0 mg i.e; 1 Vori = 116600.0 mg.
Question: Are vori and mg the same ?
Answer: No. The vori is a bigger unit. The vori unit is 116600.0 times bigger than the mg unit.
Question: How many vori is one mg ?
Answer: One mg equals 0.000008576329331046313 vori i.e. 1 mg = 0.000008576329331046313 vori.
Question: How do you convert mg to vori ?
Answer: If we multiply the mg value by 0.000008576329331046313, we will get the vori amount i.e; 1 mg = 0.000008576329331046313 Vori.
Question: What is the mg value of one Vori ?
Answer: 1 Vori to mg = 116600.0. | {"url":"https://pyronconverter.com/unit/mass/vori-mg","timestamp":"2024-11-05T10:17:07Z","content_type":"text/html","content_length":"116701","record_id":"<urn:uuid:782b381f-9ab7-44d2-b4ac-6f7430e6c65f>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00167.warc.gz"} |
This number is a composite.
The magic constant for the smallest magic square composed only of prime numbers if 1 were counted as a prime.
111^322 + 322^111 is prime. [Kulsha]
The number of consecutive composites that follow the prime 370261, the first run of more than 100.
Nayan Hajratwala's home computer, a 350 MHz IBM Aptiva, used 111 days of idle time to find the thirty-eighth Mersenne prime.
111 equals the sum of 2 + 3 + 4 + ... + 17 minus the sum of primes less than 17. [Trotter]
10^(2*55)+111*10^(55-1)+1 is a palindromic prime with 111 digits and 111 as the "center nut" of the number. [Fougeron]
111*10^100+1 is the smallest prime which is one more than a multiple of googol. [Wu]
The smallest palindromic number whose English names contain a prime number of letters. [Russo]
The smallest multidigit palindromic number n such that the number of primes below n^2 does not exceed n times the number of primes below n. [Russo]
The smallest palindromic number n such that the sum of its digits is a prime factor. [Russo]
111 x 2^111 - 111 - 2^111 is prime. [Noll]
The smallest repunit semiprime. [Gupta]
The smallest tetradic semiprime. Note that this number contains the smallest tetradic prime. [Capelle]
The smallest nontrivial palindromic semiprime. [Silva]
The smallest repunit divisible by an emirp. [Wilson]
The sum of the emirp pair 37 and 73 plus 1 is a repunit, i.e., 37 + 73 + 1 = 111. Additionally, the sum of n of these emirp pairs plus n is also a repdigit, for n = 2 thru 9. [Wiszowaty]
First occurrence of a repunit semiprime (i.e., 111) starts at 153rd position after the decimal point in π. [Gupta]
(There are 5 curios for this number that have not yet been approved by an editor.)
Printed from the PrimePages <t5k.org> © G. L. Honaker and Chris K. Caldwell | {"url":"https://t5k.org/curios/page.php/111.html","timestamp":"2024-11-03T01:17:31Z","content_type":"text/html","content_length":"13429","record_id":"<urn:uuid:14baf20f-bbdc-4980-8b7d-f40c4df45284>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00339.warc.gz"} |
Moov Physics
Moov Physics modifier
Moov Physics is our in-house physics engine designed to provide the artist and TDs with different algorithms to simulate hair movement. This operator provides a simple interface to work with hair
simulation in real-time and a set of Python scripts that offer different functionalities. TDs have the possibilities to expand MoovPhysics developing their own Python scripts to achieve even more
complex and accurate simulation.
Loading the player ...
Moov Hair Simulation
Moov Simulation Parameters
In Moov Physics all the parameters are divided in different groups related to its functionality. In this section we will define every parameter found in each group.
Solver Parameters
This group includes the global parameters used for tuning up the particle solver. These global parameters influence the whole simulation and the behavior of all the other parameters.
Note: These parameters are temporarily exposed to the user for feedback and development but all tuning of the hair simulation should be done without tweaking the Solver Parameters. If these are
changed the entire behavior of the simulation will change and the rest of the parameters would have to be adjusted from the start.
• Substep Count
Number of simulator steps within a single animation frame. Larger values lead to slower simulation, more accurate collisions, and stiffer constraints (except in compliant models, which are less
affected by this parameter).
• Iteration Count
Number of iterations in constraint projections. The difference with the previous parameter is that this one only affects the constraint response (not collision detection). More iterations means
slower simulation (although the effect is smaller than the previous parameter) and stiffer constraints (up to a certain number of iterations; after that the constraint response is saturated).
• Velocity Limit
Maximum speed of particles in the solver. Use to impose a speed limit on fast-moving hair simulations.
• Collision Tolerance
Defines the collision distance between the hair and the mesh objects in the Collision Meshes list. This parameter affects all collisions in the simulation uniformly. Using values that are too low
could give unpredictable results, the default values should be enough in most cases.
Hair Model
• Lattice Count
Number of simulation particles corresponding to each hair vertex. Lattice Count = 1 corresponds to simple longitudinal particle chains. A lattice can be used to simulate some twist response in
models with only distance and bending constraints (i.e., non-Cosserat and non-elaston models). Cosserat models are preferred for applications where twist is important (e.g. propagated strands).
• Lattice Size
Distances between the particles of a lattice node (corresponding to a single hair vertex). To have meaningful lattice twist, the Lattice Size value has to be comparable to the distance between
neighbour strand vertices.
• Lattice Stiffness
Stiffness parameter of lattice nodes (used for constraints within a single lattice node).
• Use Compliant Constraints
Uses compliant versions of the constraints needed by the hair model. Compliant constraints have a slight computational overhead, but are more stable, so their use is recommended when simulation
stability is a concern. There are several important differences in the behaviour of compliant vs. non-compliant constraints:
compliant constraints have unbounded stiffness (i.e. the stiffness can be arbitrarily large). The stiffness response is limited, however, so increasing the stiffness beyond a certain value will
have no further effect. The stiffness response of compliant constraints depends on particle mass.
• Limit Stretch
Creates Long-Range-Attachment (LRA) constraints to limit strand stretching. LRA constraints are distance constraints linking all strand particles to the root. They react only to stretching (not
compression) and allow for a 10% length increase before they kick in. The constraint stiffness is taken from the Stretching Stiffness parameter.
Hair Properties
• Stretching Stiffness
Stiffness parameter controlling longitudinal strand stretching. Should be between 0 and 1.
• Bending Stiffness
Stiffness parameter controlling strand bending. This is applied to any constraints that serve to limit strand bending, depending on the model. In Cosserat models, this is also the twisting
Forces and Fields
• Gravity
Vertical gravitational acceleration. Use to impose appropriate length units in the scene: e.g. if the scene units are centimeters, set this parameter to -981 (cm/s^2); if the scene units are
meters, set it to -9.81 (m/s^2).
• Gravity Scale
Scales the gravitational field felt by the hair. Use this parameter to modify the stiffness response when the StretchingStiffness and BendingStiffness do not allow sufficient flexibility in hair
setup. For example, if the hair is too stretchy even at maximum stiffness, increasing this parameter will lead to stiffer strands. Note that this parameter affects only the stretch of
free-falling hair. Inertial stretch (i.e. stretch due to a fast-moving base mesh) is not affected.
• Drag
Multiplies the velocity of each particle at each simulation step. Should be smaller than 1 to avoid instabilities. Use to introduce damping (energy loss) in the system; the smaller the value, the
larger the damping.
• Attract To Initial Shape
Creates springs that attract each strand to its initial shape.
• Attract To Initial Shape Stiffness
Determines the spring stiffness. Unlike constraint stiffness, this value can be arbitrarily large.
For consistency with other stiffness parameters, internal scaling is applied to make sure that values of 0-1 are meaningful. The scaling constant can be changed to obtain satisfactory defaults.
• Collide With Base Mesh
Turns on collisions between hair particles and the base mesh.
• Collide With Meshes
Turns on collisions between hair particles and collision meshes added in the Collision Meshes box.
• Collide With Hair
Turns on hair-hair collisions. These are implemented by creating capsules for each hair segment. Capsules can only collide with each other.
• Collide With Hair Radius
Defines the radius of hair strands for hair-hair collisions.
Group Holder
The group holder creates constraints to hold together each group of strands. The constraints are created between randomly selected particle pairs within a certain range of strand vertex indices (min
and max position).
• Group Channel
Selects a channel containing strand groups.
• Group Holder Pos Min
First strand vertex index (from root to tip) of the group holder.
• Group Holder Pos Max
Last strand vertex index (from root to tip) of the group holder.
• Group Stiffness
Stiffness of the group holder constraints.
• Group Random Seed
Random seed for choosing the particle pairs for the group holder.
• Group Max Constraints
Maximum number of constraints per strand group.
Overview of hair model types
Here is a basic benchmark for a hair simulation (rev. 13883) with the main hair models. The simulation has 300 guides with 10 points per guide and runs over 100 frames in the Cinema 4D GUI; all
parameters are script defaults except the model type.
The total time difference between the fastest to the slowest model is about 13%; the difference in solver running time is more substantial (about 25%). The differences between compliant and
non-compliant versions of different models are negligible; using the compliant versions is recommended.
Distance Only
A model using only distance (spring) constraints. Constraints are created along each strand between first neighbours (to control stretching) and second neighbours (to control bending), with the
corresponding stretching and bending stiffnesses.
This model is the fastest, but has somewhat unrealistic bending (strands tend to buckle). To add some degree of twisting response, it is necessary to create a lattice.
Distance Bending
This model uses distance constraints between first neighbours to control stretching and three-particle bending constraints at each strand vertex for bending control.
Bending is more realistic than the DistanceOnly model, but hair tends to be too stretchy. Similar to the previous model, a lattice is needed for twisting response.
Uses two-particle stretch-shear and three-particle bend-twist Cosserat rod constraints, with the corresponding stretching and bending stiffnesses. Slower than the previous models, but the strands
have full elastic response.
Cosserat Distance
Like Cosserat, but creates additional first-neighbour distance constraints to control overstretching.
Missing Something?
Let us know
if this page needs more information about the topic. | {"url":"https://ephere.com/plugins/maxon/c4d/ornatrix/docs/3/Moov_Physics_Modifier.html","timestamp":"2024-11-05T22:09:18Z","content_type":"application/xhtml+xml","content_length":"35039","record_id":"<urn:uuid:81ddafee-516b-474f-a436-7ec5d51e07a0>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00824.warc.gz"} |
Pierre-Simon Laplace | Famous Mathematicians
Pierre-Simon Laplace
Fast Facts
Born: 23rd March, 1749
Died: 5th March, 1827 aged 77
Nationality: French
Most Famous for: Laplace Equation and Laplace Transforms
Pierre-Simon Laplace was a mathematician and astronomer of French origin. His pioneering work included the theory of probability and statistics. This study can be defined as the use of numbers and
mathematical formulae to calculate the chance of an event occurring. The ability to get such numbers helps in predicting trends and outcomes. Laplace’s work also included various studies on the solar
system, with the focus being on its stability. His work earned him the title the ‘French Newton.’ Pierre was born into a low-income family, prompting the need for neighbors to fund his education. He
was sent to study theology at 16, but developed interest at mathematics and his journey in science began.
Early life
Pierre-Simon Laplace was born in Normandy, France, on the 23rd of March, 1749. His father was a farmer. His educational costs were footed by wealthy neighbors who stepped in to intervene in the
schooling of this brilliant mind. It was his father’s idea to send him to study theology at 16, as he wanted his son to be part of the Roman Catholic Church. With the development of his interest in
mathematics, at 19 he quit the Caen University where he was studying theology.
His next major step in life was to become a mathematics professor at the Ecole Militaire, a position he would hold for seven years, between 1769 and 1776.
It was during his time at Ecole Militaire that he wrote his first publications. His papers covered subjects such as mechanics, calculus, and physical astronomy, which earned him considerable
recognition across France.
Between the years of 1784 and 1787, he studied attraction between spheroids. This pursuit laid a foundation in mathematics for further studies about heat, electricity, and magnetism. Pierre was a
great mathematician who found great joy in pioneering studies, laying the groundwork for greater understanding of mathematics and the solar system.
In the years after that, Laplace would go on to contribute and publish extensively about gravity and the theory of probability. His methods suggested the existence of black holes – massive stars with
incredibly strong gravitational pulls that prevented even light from leaving the surface. He was also of the opinion that sound traveled at different speeds in the air depending on heat (which we now
know is true).
In his time as a mathematician, he was regarded as the most brilliant mind in the field, someone with phenomenal ability. His perspective on probability, in particular, was that it was simple, common
sense, enable him to reduce it to calculations.
Laplace’s Personal life
In 1788, Pierre married Marie-Charlotte, who was from Besancon. She was 20 years younger than him. They had two children, a son named Charles-Emile, and a daughter, Sophie-Suzanne.
Pierre-Simon Laplace died at the age of 77, on the 5th of March, 1827. He was originally buried in Pere Lachaise, France. In 1888, his remains were moved to his family estate in Saint Julien.
At the time of his death, Pierre had contributed a lot in both mathematics and astronomy. He chose to explore the unknown and lay avenues for a new generation of scientists.
Awards and Achievements by Pierre Laplace
The Royal Swedish Academy of Sciences elected him a foreign member in 1806. Association and partnership with an exceptional mind will always improve the general image of such a big institution.
The American Academy of Arts and Sciences appointed Laplace an Honorary Member in 1822, furthering his institutional connections outside France.
Fun fact about Laplace
After Laplace’s death, his physician, Francois Magendie, removed his brain. The brain was on display for many years, and in the years after his death, it was in a museum of anatomy in Britain. It was
reported to be smaller in size than the average brain.
Many mathematicians have been part of the evolution of the science over time. Some names, however, stand out. The most memorable of these names discovered formulae and laid the foundation for the
study of different branches in mathematics, such as algebra, geometry, and statistics. In Pierre’s case, his studies were the origin of fundamental theories.
Still today, probability and statistics are a major branch of mathematics and used heavily in science.
Laplace gained fame through the ventures he took into undiscovered areas of science, and a mind that was brave enough to cultivate thoughts beyond what was known to scientists at the time. This
mathematician, more than anything, showed the value of developing mathematics as a science, not just getting conversant and working with existing knowledge. | {"url":"https://famous-mathematicians.com/pierre-simon-laplace/","timestamp":"2024-11-13T18:22:18Z","content_type":"text/html","content_length":"74062","record_id":"<urn:uuid:f575ec67-5a97-432a-ba7a-f1ad9bf749f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00092.warc.gz"} |
How do you identify cluster gaps and outliers?
Cluster: A group of values sticks together away from other groups. Outliers: Some Minority values much away from the crowd (Majority). Peaks: Highest value in the distribution. Gaps: The ”large” open
space between some data points.
What does it mean when a distribution has a gap?
From the providers’ perspective, the distribution gap is defined as the gap between the actual effi- ciency of distribution process and the optimal efficiency. From the customers’ perspective, the
distribution gap primarily represents unmet expectations.
What are gaps in statistics?
Statistics Dictionary Gaps refer to areas of a graphic display where there are no observations. The figure below shows a distribution with a gap. There are no observations in the middle of the
Can a symmetric distribution have gaps?
Use clusters, gaps, peaks, outliers, and symmetry to describe the shape of the distribution. The left side of the data looks like the right side, so the shape of the distribution is symmetric. There
are no gaps or outliers.
How do you identify a cluster?
Clusters are identified by applying a mathematical algorithm that assigns vertices (i.e., users) to subgroups of relatively more connected groups of vertices in the network. The Clauset-Newman-Moore
algorithm [8], used in NodeXL, enables you to analyze large network datasets to efficiently find subgroups.
What causes gaps in histograms?
Some histograms have a gap, a space between two bars where there are no data points. For example, if some students in a class have 7 or more siblings, but the rest of the students have 0, 1, or 2
siblings, the histogram for this data set would show gaps between the bars because no students have 3, 4, 5, or 6 siblings.
How do you handle gaps and outliers in a set of data?
5 ways to deal with outliers in data
1. Set up a filter in your testing tool. Even though this has a little cost, filtering out outliers is worth it.
2. Remove or change outliers during post-test analysis.
3. Change the value of outliers.
4. Consider the underlying distribution.
5. Consider the value of mild outliers.
How do you know if a distribution is bimodal?
A data set is bimodal if it has two modes. This means that there is not a single data value that occurs with the highest frequency. Instead, there are two data values that tie for having the highest
Can a uniform distribution be skewed?
The skew-uniform distributions have been introduced by many authors, e.g. Gupta et al., Aryal, G. and Nadarajah, S., Nadarajah, S. and Kotz, S.. This class of distributions includes the uniform
distribution and possesses several properties which coincide or are close to the properties of the uniform family.
How do you classify after clustering?
Classification requires labels. Therefore you first cluster your data and save the resulting cluster labels. Then you train a classifier using these labels as a target variable. By saving the labels
you effectively seperate the steps of clustering and classification.
How do you cluster variables?
Cluster variables uses a hierarchical procedure to form the clusters. Variables are grouped together that are similar (correlated) with each other. At each step, two clusters are joined, until just
one cluster is formed at the final step.
What do clusters look like?
In a telescope, a globular cluster looks like a fuzzy ball, with individual stars at the periphery merging into a solid ball of light towards the center. However, this is simply because the stars are
so close together that they can’t be resolved individually telescopically.
What are examples of clusters, peaks and outliers?
Closes this module. Examples looking at different features of distributions, such as clusters, gaps, peaks, and outliers for distributions. This is the currently selected item.
Do you need a gap to have an outlier?
No, because an outlier is a group of data that is much bigger or smaller than the rest of the data and to have an outlier, there must be a gap in the data. A big gap that is like 2 or more gaps from
the data set. Comment on lyds’s post “At 6:07. No, because an outlier is a group of dat…” Posted 5 years ago.
Are there different measures of center and spread?
Note, there are several different measures of center and several different measures of spread that one can use — one must be careful to use appropriate measures given the shape of the data’s
distribution, the presence of extreme values, and the nature and level of the data involved.
How can we tell the shape of a distribution?
The Shape of a Distribution We can characterize the shape of a data set by looking at its histogram. First, if the data values seem to pile up into a single “mound”, we say the distribution is
unimodal. If there appear to be two “mounds”, we say the distribution is bimodal. | {"url":"https://www.raiseupwa.com/users-questions/how-do-you-identify-cluster-gaps-and-outliers/","timestamp":"2024-11-08T11:06:37Z","content_type":"text/html","content_length":"106723","record_id":"<urn:uuid:0674a82b-c522-4fb3-bbc7-c7bb3c10d4e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00384.warc.gz"} |
Transfer values of a SpatRaster to another one with a different geometry — resample
Transfer values of a SpatRaster to another one with a different geometry
resample transfers values between SpatRaster objects that do not align (have a different origin and/or resolution). See project to change the coordinate reference system (crs).
If the origin and extent of the input and output are the same, you should consider using these other functions instead: aggregate, disagg, extend or crop.
# S4 method for class 'SpatRaster,SpatRaster'
resample(x, y, method, threads=FALSE, filename="", ...)
SpatRaster to be resampled
SpatRaster with the geometry that x should be resampled to
character. Method used for estimating the new cell values. One of:
near: nearest neighbor. This method is fast, and it can be the preferred method if the cell values represent classes. It is not a good choice for continuous values. This is used by default if the
first layer of x is categorical.
bilinear: bilinear interpolation. This is the default if the first layer of x is numeric (not categorical). (3x3 cell window).
cubic: cubic interpolation. (5x5 cell window).
cubicspline: cubic B-spline interpolation. (5x5 cell window).
lanczos: Lanczos windowed sinc resampling. (7x7 cell window).
sum: the weighted sum of all non-NA contributing grid cells.
min, q1, med, q3, max, average, mode, rms: the minimum, first quartile, median, third quartile, maximum, mean, mode, or root-mean-square value of all non-NA contributing grid cells.
logical. If TRUE multiple threads are used (faster for large files)
character. Output filename
additional arguments for writing files as in writeRaster
r <- rast(nrows=3, ncols=3, xmin=0, xmax=10, ymin=0, ymax=10)
values(r) <- 1:ncell(r)
s <- rast(nrows=25, ncols=30, xmin=1, xmax=11, ymin=-1, ymax=11)
x <- resample(r, s, method="bilinear")
opar <- par(no.readonly =TRUE) | {"url":"https://rspatial.github.io/terra/reference/resample.html","timestamp":"2024-11-06T21:08:40Z","content_type":"text/html","content_length":"12792","record_id":"<urn:uuid:e08e3eb7-7b84-4d99-89d1-9f9f8aaabf6f>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00387.warc.gz"} |
MSc project ideas
MSc project ideas - to be supervised by Keith Briggs 2010
I work at BT Research in Martlesham Heath. The ideal arrangement is for you, the MSc student, to work here for all or most of your project. We are at a very lively technology park, and you would have
at least daily meetings with me, plenty of desk space and very good computing facilities (i.e. linux). But if this is not feasible, we can negotiate another arrangement.
These are only rough sketches for projects. I find that it is better to start this way, and then fine-tune the detail to suit the student's abilities and interests. I prefer to leave some
open-endedness, to provide a research challenge. A good MSc project can result in a publication.
I like mathematical computing. Most of these projects have that theme. The emphasis is on the concrete. This way of working often involves experimental mathematics, in which we treat the computer as
a laboratory for developing and testing conjectures. This does not mean the work lacks rigour - most algorithms on which we work would involve only discrete mathematics, and thus the results are
exact. These projects do not generally involve conventional numerical analysis (floating-point errors, and all that). Most of the programming would be in python and C.
I can probably only supervise 2 (maybe 3) of this projects at one time. So if you are interested, contact me asap!
For past projects I have supervised, see here.
1. Distributed algorithms for graph coloring and wireless channel assignment
The world of mobile devices is moving towards autonomous behaviour. This means that no central authority decides things like wireless channel allocation and transmission power level. Devices must
behave according to rules which ensure peaceful co-operation. To design these rules is a hard challenge. We will start by investigating experimentally (by computer simulation) two basic problems: to
choose channels so that the whole system operates without interference; and, if this is not possible, to choose channels so that total interference is minimized. The overall theme is asynchronous
distributed algorithms.
2. Statistics of train arrival delays
A previous project looked at statistical models in the delay to departure time of trains. An interesting result was obtained - that the delays follow a q-exponential distribution. This project would
do the same thing for arrival delays, and, by extension, we can then study the distribution of trip delays. These statistical models have important applications to algorithms for optimal trip
planning in the presence of random delays. I already have developed software for this problem, and part of the project would be to add the new arrival time model to the software. The figure shows a
typical output of the current program (written by my student Kin Po Tam).
NB: if you are interested in this project, please tell me asap, because we should start collecting data well before the project starts.
3. Computational geometry for wireless homehub applications
Wireless systems are evolving towards a smaller cell-size, so that we now think of house-sized cells. To model and simulate such systems requires models of house-density distributions. This
information is not readily available, but a good proxy is the postcode database. This project would look at the Voronoi geometry of the postcode distribution (the figure shows an example from the
Ipswich area), and calculate statistics relevant to wireless systems, such as distribution of housing density, and fractional coverage as a function of transmit power.
4. Sensing techniques for estimating the number of transmitters
The world of mobile devices is moving towards autonomous behaviour. It would be useful in practice if a device knew the number of other transmitters in its neighbourhood, so that it can avoid
interfering with them (this is part of what is called cognitive radio). This estimation can be attempted by a small group of devices co-operating, so that they all measure received signal strengths
and share their readings. This amount to estimating the power profile as a function of position, and from this optimization methods can be used to find the number of transmitters.
This project is very speculative and it is quite uncertain whether any method will work well in practice. Anyone selecting this project will need lots of imagination and willingness to try many
alternative algorithms!
5. Satisfiability algorithms (SAT)
The SAT (satisfiability) problem is a very basic problem in logic: to determine whether a given Boolean function has an input which makes the output true. There are many good heuristics now available
to solve SAT problems, and many practical problems are equivalent to, and can be translated into SAT.
This project would make a comparison of available solvers, and study their application to some practical problems in optimization, scheduling, and graph coloring. The fun part would be writing
programs to translate these into SAT!
6. Maximum-likelihood period estimation
This project would be to study the paper (only 4 pages!):
Maximum-likelihood period estimation from sparse, noisy timing data, by Robby McKilliam and I. Vaughan L. Clarkson
and write a program to implement the method. This is an important problem area with applications in telecoms.
7. Haartsen's Novel wireless modulation technique based on noise
In this paper, Haartsen has proposed a wireless modulation technique based on noise. This project would build software simulators for this system, and investigate its performance, which would be
compared with the theory.
8. Fast counting
This project is inspired by the paper HyperLoglog: the analysis of a near-optimal cardinality estimation algorithm of Flajolet et al. How do we estimate (rapidly, and with minimal storage) the number
of distinct items in a very large set (or data stream)? In other words, the problem concerns estimating the cardinality of a multiset. Flajolet et al. previously developed their loglog algorithm, and
I worked on an efficient C implementation of this. They have now improved this with the hyperloglog algorithm, and this project would be to implement this and compare its performance in practice with
The algorithm should have practical applications in informatics; for example, counting the number of different packet types in a network.
9. Fast random selection
Consider the problem: a large number N of objects are presented to us one by one, and we wish to either:
Select exactly some specified number n≤N of them
Select each with some specified probability p
The problem becomes hard to solve efficiently when p is small, because to generate a random number which usually results in a rejection is inefficient. It would be better to skip over a block of
items. Ways to do these were proposed by Vitter in ACM Trans Math Soft 13, 58 (1987).
This project would investigate these methods and check their efficiency in practice. There are applications to the generation of large random graphs.
This website uses no cookies. This page was last modified 2024-01-21 10:57 by | {"url":"http://keithbriggs.info/MSc_project_ideas_2010.html","timestamp":"2024-11-08T08:46:04Z","content_type":"text/html","content_length":"22122","record_id":"<urn:uuid:1170269d-6b6d-4eef-8a94-0617a546d2e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00530.warc.gz"} |
Homemade Edible Finger Paint Recipe - The Imagination Tree
We made some home made finger paint today and had great fun with our little group of Mums and tots with babies as young as 6 months getting involved in the action! It’s totally edible (though not
that delicious!) and completely non-toxic, and the best part is it was so easy to make and will last!
This is the recipe ( I googled a few, found the common denominator and went from there):
* 2 cups of corn flour (corn starch in the US I think)
* 1 cup of cold water
* 4.5 cups of boiling water
* Liquid food colouring
Mix the cornflour with the cold water and stir together. Pour in the boiling water and stir between each cup. It goes really strange (you are basically mixing a hot oobleck goop) but keep stirring
and it literally seems to “melt” into a wonderful, custard-like consistency. We then separated it into individual jam jars before adding colouring, but you can do it however you like and this is the
stage to add colour.
Edited to add:
Some people have found that the paint remains liquid and doesn’t thicken up as it should. I have no idea why this should be, but I have two possible solutions, based on the fabulous commenters below!
1. Try simply adding up to 1 more cup of cornflour/ cornstarch and see if that helps to thicken it.
2. Try mixing the paint in a pan on a medium heat instead of just in a bowl, as that will help to bring it together.
It’s always frustrating posting recipes that work brilliantly when you try them yourself, but for some reason don’t work for everyone! I can only assume it’s down to slight change in ingredients used
and perhaps how the directions are followed. Do try it as it is LOVELY stuff! Thanks π
C helped me to spoon this into the jars and she absolutely LOVED every minute of the whole process!
I added a squeeze of colouring to each jar and then between us we mixed them up.
During mixing they looked fabulous!
And the finished paints look like a little work of art π Almost too good to paint with…but not quite.
All lined up and ready for action. I put in some thick paint brushes for the toddlers but expected babies to use their fingers. They seemed to understand that perfectly π
Baby Boy is 6 months and this was his first little painting. We weren’t sure how impressed he was!
Kiddies getting stuck in and a couple more crawling on the floor, waiting for their turns!
That’s more like it baby Boy, get those fingers in and give it a good squish!
Β It was a bit like painting with coloured, waxy custard! Very strange yet extremely pleasing to touch!
K experimenting with double paint-brushing.
J having a whale of a time!
Big boy N knows how to paint properly!
Someone got a tad possessive of all “her” paints. “Dey Mines!”
And then we introduced edible finger paint number 2! Chocolate and strawberry Angel Delight pudding mixes (although these were actually a Sainsbury’s Basics range for 7p each!) We just mixed the
powder with milk and whisked it until lovely and thick, then put it on the table for them to touch and add to their paintings. There was no added food colouring, but lovely brown, chocolatey
messiness everywhere nonetheless π
Baby J was very interested in the chocolate pudding goo! Who can blame him?!
It’s important to use ALL of the senses when exploring! Yum yum!
And C did a little bit of mark-making with a fork through the lovely, thick, gloopy mess.
Little Pop found the brush very tasty and had her fair share of pudding paint too.
Overall verdict? Very easy to make and extremely satisfying results in terms of texture and consistency. Lovely to know that it is non-toxic and edible, therefore safe for even the really tiddly
ones. Colours are quite light and therefore don’t make a bold mark on paper, but I’m sure if you used a lot more colouring that could be fixed. The paint is thick and gel-like and so takes a long
time to dry, but when it does it makes a great, almost 3-D effect on the paper! I have put the lids on our jam jars and will try storing them in the fridge and see how long they last. Hope hubby
doesn’t spread them on his toast by mistake!
This activity is good for:
* involving all ages of children
* creativty and expression
* using fingers and tools to do mark-making
* exploring the senses and discovering new textures
* knowledge and understanding of the world: following a recipe, mixing and stirring, combining materials and mixing colours
* gross and fine motor skills (mixing the colours into the paint was hard work!)
We have another home made paint recipe to share tomorrow! Enjoy messy, creative, fun!
1. Monkeying Around says
Great messy fun! Love the fact that u dont need to worry about it going in their mouths too.
Adele x
2. Mama Pea Pod says
Love it! A great (outdoor?) birthday party activity idea!
3. Anonymous says
What is cornflour to you? Is it ground corn, which we in the states call corn meal? Or corn starch? I think you mean corn starch, which is a little different. Is it a very powdery white substance
with a gritty feel?
□ Anonymous says
The recipe I’m reading calls for corn starch and that would produce the consistency, look, and feeling described in this activity. Corn starch has a powdery thick feeling not gritty.
4. Hi Anon! I think it is corn starch although on Googling it it seems they are slightly different. But both work as a thickening agent so I’m guessing the starch will work in exactly the same way.
Our corn flour has the exact appearance of ordinary flour. Hope it works and I’m sure it will!
5. GianneCurry says
How fun! I have been looking for something to do with my 2 year old while my 4 year old is at school! Thanks for the idea!
6. Elle Belles Bows says
What great fun! I will definitely give it a go! Love all the pics too! Kerri
7. SarahF says
Anonymous, cornflour is cornmeal in the US.
8. SarahF says
Anna, those paints look delicious! What lovely bright colours and I can see from the photos what a fab consistency they were. How great for the babies to get stuck in with the older kids. Katy is
often muscling in on Max’s play, and she isn’t always welcome! This is a nice idea for encouraging Max to include his sis.
9. Nooooo, cornmeal is polenta! π Cornflour is the same as cornstarch.
Hope you make some Sarah! It’s great for messy play
□ Anonymous says
The words vary a lot by region. In Australia the cornflour sold in supermarkets can be made of wheat or corn. When it is made of corn it is the equivalent of what US recipes call corn starch.
Usually you have to look at the ingredients to know if it is wheat or corn based cornflour. I think this might go some way to explaining why the recipe works for some and not others, they may
actually be using a different product. I think the wheat cornflour needs a bit more heating before it thickens up.
10. bella:) says
I did love doing this activity it was great fun. However, how long do you feel you should stay on this activity for the only reason I ask is because my daughter appears to have a short
concentration span is this normal for one year olds? your daughter looked like she would be happy doing this all day.
11. Hi Bella! Thanks for your comment and I’m glad she enjoyed it. `i would say that a 1-2 year old has a very short attention span and probably will only want to do things for 5-10 minutes at the
most. But they tend to like to come back to the same activity and repeat it over and over. As adults we find this frustrating but it’s how they learn! My girl is 2 and a half and is more
interested in some things than others. She could probably paint/ stick/ play with play dough for well over half an hour, but that’s only a recent development and she tends to flit between things
to try them out.
12. Alison says
Exactly what I needed for my 7 m/o nephew, thanks!
Found via http://www.notimeforflashcards.com
13. RedTedArt says
Oh my oh my!! How fun!!! Looks just perfect for the little ones! Clever idea!
(thanks for linking up to Kids Get Crafty)
14. Michelle says
oh we can go through the finger paint. How fun that I can make some new paint with stuff I have in my cabinets. Thanks for the recipe!
15. Yummy! ha ha… what a Fab idea!
I’d love to have you share at my For the Kids Friday Link Party! I’m sure you’ll find some fun ideas while you are there! Come join the fun!
16. Jess says
What a GREAT idea! I have been letting my 17 month old paint lately and he ALWAYS wants to eat it – I will be trying this out today! Thanks for the wonderful idea!!
17. I just can’t get over the pics. To cute!
Thanks so much for sharing this at For the Kids Friday at Sun Scholars!
π rachel @ SunScholars.blogspot.com
18. Anonymous says
How long will this last?
19. Hello “Anon” I think out of the fridge no more than a couple of days. In the fridge maybe a week at a guess? I think ours was best the day we made it, and it was best at being a sensory play
material rather than long-lasting paint. Lots of fun though!
20. Mercedes says
Cute Idea! I repinned it on pintrest.
21. ibakewithout.com says
I love this! all those cute chubby hands in the squishy paint is too cute! I am going to do this with my 2 year old soon!
22. Anonymous says
hello! thanks for the post. i’d need some follow-up, finally could you store it in the fridge? and for how long?
23. Anonymous says
sorry, i didn’t make it through with the comments.. now i see, you already answered to the same question
24. No problem at all! Ours didn’t keep very long in the end. Maybe only a day or two? Use it up with lots of messy play!
25. sandy's crafty bits says
wow great idea thank you my grandchildren will love this and very welcome as half term is coming up … happy crafting and love sandy xx
26. ConsciousMama says
Great idea, do you think the food colouring stains? i’m thinking in the bath or on the kitchen floor… thx
27. turkeymama says
I love this idea. I really want to try finger painting with my 13 month old and this recipe sounds great. Any as ConsciousMama asked, does it stain? He doesn’t really have any “grungy” clothes so
I’m a little worried about the food colouring dying his clothes? Has anyone found out if it stains yet?
28. Lauren says
I tried making this, following the directions exactly, but for some reason the consistency was pure liquid…I tried adding more corn starch, but that didn’t help. Suggestions? Thanks so much!
□ Anonymous says
Hi Lauren, that is really weird! You did use hot water right? It should thicken just like it would if you added the cornflour to stock to thicken it for gravy or to a stew to thicken it.
□ Anonymous says
Mine was the same, just zap it in the microwave for 30-40 secs and give it a good stir before it becomes a big lump!
29. Stephanie D says
We tried this today with our mothers group of 8 month olds. It was a moderate success, but still a little too advanced. My little boy did enjoy eating it and throwing it however! :o)
Lauren – mine was liquid until about 2 mins of stirring and then it turned thick and custard like all of a sudden. I thought I had stuffed up but then it turned.
The food colouring did stain – skin, clothing everything! – but we thought this would be the case and had all the bubs in disposable nappies and naked – it is summer here in Australia!
Storage: I made it last night and stored in the fridge overnight. It went solid overnight but all I needed to do was add a little more boiling water and give it a good stir and it went back
again. It wasn’t as good as the fresh stuff however. I would advise making this the same day as you need it and not storing it.
30. Lauren I’m sorry to hear that! I don’t know what to say as ours was so waxy and custard-like. If anything I’d have thought people may find it to thick, not too thin π Perhaps follow
Stephanie’s (great!) advice and try stirring it for longer? It feels like you are mixing up goop, but with hot water. AS your stir it begins to get more and more thick. Were you using a flour
like substance ? Corn starch? Over here it is Corn flour but in the US it;s cornstarch? Sorry not to help more!
31. Jana Burrow says
I used a cup less boiling water and a cup more corn starch (or there about) and it was perfect. I know it’s no longer edible, but to color it I used the ends of my tubes of crayola finger paint.
I had just a little left of each color. No staining and way cheaper!! Great idea.
□ Brilliant and well done for making it work!
32. Esther says
Thank you for this! Mine turned out a bit on te runny side, so I just shook a bit more cornflour into it, it got thicker as it cooled too. I made a stencil with a dozen little Christmas trees on
it, coloured the paint a darkish green, and stirred in some gold edible lustre dust to give it a sheen… Paint smeared over the stencil onto card underneath, and my baby girlhas made her first
Christmas cards! Thank you for showing me that messy is doable with a baby!
□ that sounds SO wonderful!!
33. Anonymous says
My solution to a runny mixture was to cook it on the stove for a few minutes, just like custard π
□ perfect solution!
34. Unknown says
This comment has been removed by a blog administrator.
35. Nicole says
Thanks for the great post!
I made this just now. It was briljant! Allthough it was less silky than yours. I used a cup more corn flour and it was perfect! Our 12 month old was particularly keen on eating it.. So we ended
up with a blueish, cyanotic lipped boy;)
□ Oops!! But glad it worked out well!
36. Anonymous says
Mr 1 loved this goopy paint. We only used one cup cornflour, one cup cold water, then just added boiling water and whisked until it seemed a good consistency. We used food colouring for some
colours, cocoa for brown, and a few drops of non-toxic acrylic paint for others. Looked like a ridiculous amount for one boy, but he used it all up smearing on paper, cardboard, himself…… Very
cheap entertainment.
□ So glad it worked out for you and very good ideas for the other colourings!
37. Shawnna says
This is a really great idea!
38. We’ll have to try this recipe next. We just posted about fingerpainting with banana pudding. My little ones loved it!
39. Anonymous says
http://theinsanehousewife.wordpress.com..check us out! I linked your website on our blog since we tried out one of your finger paint recipes! Thank you for the fun idea π
□ thank you!
40. crafty elsie says
I did some yoghurt finger paiting and linked to this post as I mentioned your recipe, thanks http://elsiesnortherngarden.blogspot.com/2012/01/fingerpainting-and-first-time-crayons.html
□ Thanks Elsie!
41. Anonymous says
I just tried this and it’s pure liquid π I followed the instructions exactly and even used less water…darn!!
□ darn it indeed!! so frustrating. Sorry. Did you cook it for longer to see if it would come together?
42. goldie616 says
I started making this by following the exact recipe but by the 2nd cup of boiling water it was pure liquid so I stopped the water, added more corn starch and some flour until thicker and then let
it cool in the fridge for a couple hours. Then added more flour to thicken. Still wasn’t custardy all the way through but was thick enough to paint and my 7month old LOVED the feel of the cool
mushy paint. She cried when I tried to end the activity so I let her paint her high chair tray for awhile too. It all washed off (no stains) and she had a blast!! Thanks for posting this recipe!!
Loved seeing your pictures too!
□ Glad you were able to adapt it to make it work! This is the one recipe that people have had to do that with the most. I think cornstarch/ cornflour is partly to blame! (but wish i could work
out why!)
43. Anonymous says
yeah this stuff leaves stains on everything it want come off!!!
□ Oh no! π Very sorry to hear it. Too much colouring? That didn’t happen to us at all, so sorry!
44. Anonymous says
This may be my all time favorite paint to use with the kids. Super easy to make. It’s edible (and non-toxic). A great tactile experience. By far the easiest paint project to clean up on the boys
and everything else. The boys played with this stuff forever. Will be making this a lot!
45. Unknown says
I didn’t read through all of the comments so this may have been asked already. Does the food dye not stain their hands?
46. Anonymous says
What a fabulous website with great ideas. Going to try this one today! π
47. Mini Piccolini says
We made edible finger paint using plain yogurt for our 14-month old. Worked really well: http://minipiccolini.com/2012/02/edible-finger-paint-for-valentines-day/
48. Jamy says
I never thought of involving babies to paint! I think I will add some food coloring to my 6 mo. old’s rice cereal and let her go to town!!!
49. Tiffany says
Can you all share what type of paper you used with the paints? I am so excited to try this with my 9 mo old, who LOVES touching everything!
50. Loren.75 says
This comment has been removed by the author.
51. Loren.75 says
Hi, I just tried this awesome ricipe out with my 1yr old daughter. She spent most of the time with her fingers in her mouth and was able to explore the wonderful world of colour without me
worrying about poisonous substances Thank you!
Oh, I have answered the staining and cleaning up issues – She was wearing a disposable nappy (as mentioned above) And I used an old inflatable pool sans water as the painting area. Clean up was a
52. Anonymous says
I’ve made this twice now. The first time it worked perfectly. The second time it didn’t. I don’t know if it made the difference or not, but when I made it this time I mixed the cornstarch into
the cold water instead of the water into the cornstarch. I just cooked it for a bit on the stove and it thickened after a few minutes. Thanks for a great recipe!
53. Lael says
I’ve made this before and my son loved it, but I was just wondering how long the paints last in closed jars before they start growing stuff?
□ Anonymous says
I wondered how to store it too?!?
I sealed the jars & put it in the pantry & within a week it absolutely stank π I was so disappointed! The batch made more than I needed, so the uncoloured batch I had kept in the fridge, &
that’s find weeks later!
54. KKD says
I’ve just made this and the recipe worked a treat! Although, instead of stirring lots I left it for a few minutes then started stirring and it became gloopy and custard-like. Love the recipe,
thank you so much! We’re going to have fund tomorrow π
55. Lucy says
wow this looks fab, and it does actually look tasty even if its not, lol π
56. Jazzy S says
Corn flour and corn starch are not the same. Corn flour is milled from the whole kernel, while cornstarch is obtained from the endosperm portion of the kernel. Corn starch is just that – starch.
It is chemically separated from the protein and other components of corn flour.
The confusion stems in that they can SOMETIMES be used interchangeably, such as in soups and stews as a thickening agent. However, for bread baking and deep frying, you cannot substitute corn
starch for corn flour.
Corn flour is available in the US, but it is typically located with the other specialty grains. My local grocery store carries it in Bob’s Red Mill brand.
That being said, when I decided to attempt this “recipe,” I didn’t have any corn flour handy. However, as I am located in the southwest US, I did have masa harina, a flour made from lime soaked
corn, which is most commonly used to make tortillas. So, I decided to try using it instead.
While I cannot comment on substituting corn starch or US corn flour, I can attest that the masa harina worked.
My 12 month old daughter wasn’t initially impressed by this project, but, with a little encouragement, she quickly became thrilled. I’ve already made it three times this week! It seems to
entertain her in about 30 minute bursts.
Thank you for providing such a great “recipe” π I’ve tried multiple versions of edible finger paint and have found this version to be the best yet. The finished product had a nice, slightly
thick, consistency. I have since recommended it to all of my friends with children who still taste everything.
I almost forgot to mention…it did stain her skin in a few spots, but it was easily removed with a little soap and some unappreciated scrubbing.
Good luck to everyone else!
57. Anonymous says
I made this for my 14 month daughter as her first experience with paint. She had a great time. Thank you π
58. Anne Marie says
If it helps anyone, a cup in the UK is actually slightly less than a cup in the US. That may have affected how some of the batches turned out for people. :o) Thank you for posting. I’m trying it
with my 7 month old today!
59. Two of Everything says
I made this today for my 18 month old twins and they loved it. I cooked it in the microwave for a couple of minutes after I’d put the hot water in – no different to making custard really!! Think
I might add some glitter next time too π I’ve linked you up in my blog post about it too, if that’s ok!
60. Anonymous says
My kids loved this!! Although mine was very watery :S but it was still fun gloop! My 5 month old and 2 1/2 yr old loved it!! ( especially sliding with his feet :)) thank you!!
61. Amy G says
Did this with my 10 month old today and had a blast!
62. Anonymous says
the consistency of mine was great, even better after I cooked it for a couple of minutes. My 4.5yo sensory seeker and 2.5yo had a ball! I gave them an assortment of edible sprinkles to throw in,
along with chopped up apples, sultana’s and dried paw paw. My 2yo thought this was fantastic and everything went straight in the mouth. The older one knew best, despite me telling him this was
special paint he could eat, he said ‘I’m worried mummy, you shouldn’t eat paint!’ Will definitely do this again! I used the food colouring sparingly and only had a little staining on their hands,
sure it will be gone after bath time π
63. Christine says
What an amazingly simple yet awesome idea!! I just made a whole batch and they turned out GREAT. I’m waiting for the little one to wake up so I can try this out! I think I might add a bit of
flavoring to the dyes next time! (vanilla, orange zest etc!). Thanks for sharing!
64. Jane says
Note to anyone making this – put it in the fridge! I left mine out and opened it 4 days later and boy did it stink!!
65. Anni says
Saw this on Pinterest and tried it out. It didn’t work that well. I think my water was not hot enough. After i put it on the stove and stired it, it was getting better. And I had to add tons of
food colouring. I think I am going to try it again π Anyway, if it doesn’t work again I will put it on the stove again. Doesn’t matter π
66. T says
I tried this today with my 7mo and it was fabulous. I did use corn starch instead of corn flour (they are similar, but definitely not the same thing), so mine turned out a little more
translucent. Next time I might try adding a little flour or more dye to turn out deeper colours. After reading all the comments I went straight to mixing it on medium heat on the stove, which
worked beautifully. I don’t know how the original instructions with the boiling water would have turned out, but I definitely had success on the stove. The mixture gelled together in less than 2
minutes, though I had to stir it constantly.
Thanks, Anna, for this wonderful site! I have 1 baby so far (only a small bit older than Baby Bean), so I can’t use everything on here yet, but I love all the ideas. Things that I can’t try yet
I’m filing away in my brain for later!!! I’m ALL about sensory play for my 7mo right now. I want him to play in the dirt, splash in water, taste everything that’s non-toxic, and generally explore
the world as much as possible! I am having so much fun already, introducing the world to him and letting him make discoveries. And he’s young – I know there is so much more ahead of us!
67. Timarie says
I did this last night and it never thickened. I live in the US and think that you might have meant oz instead of cups? Because I added 2 additional cups of corn starch to the mixture and it still
didn’t thicken. It got to a milky consistency so I had the kids use it anyway and it stained them from head to toe lol. It was a mess!
68. Nic says
I haven’t tried this yet, but I wonder if it is the different size of UK “cups” vs. US “cups” is causing the inconsistent results. My husband is Australian and they have their own size “cups” as
well. I have had many many many recipes turn out poorly because the ratio of dry to liquid isn’t the same. I’m going to try making paint this afternoon and see how it goes!
69. Anonymous says
Last time I checked babies 6 months old shouldn’t have corn starch and why you would want them to have food coloring which is linked to cancer starting out, is beyond my level of
comprehension….Parents, if you are using food dyes go all natural, regular food dyes are not good for babies or toddlers.
□ Anonymous says
Last time I checked the article was not suggesting that you feed this to your baby as food. Get a grip. The obvious interpretation of the activity and resources prepared here is that,
although this is not food, if it gets in your baby’s mouth it isn’t going to harm them. Small quantities of corn starch or food colourings are unlikely to do any lasting harm when used on an
occasional basis. Also, you are clearly guilty of a typical America-centric assumption here, perhaps being aware that there is a world outside the US is beyond your comprehension too (I’m
assuming by your spelling and your choice of CBS as your news source that you are probably from the US). The food dyes referred to in your linked article have been phased out in the UK (and
Europe in general) where this blog is written. In fact, it even says as much in the article. Perhaps parents such as yourselves in the US need to be as proactive at getting companies to
change their products as we have been in the UK (http://www.allergykids.com/blog/serving-up-food-dyes-uk-style/). It is quite possible to pick up natural food colourings off the shelf in UK
supermarkets unlike in the USA (http://www.tesco.com/groceries/product/browse/default.aspx?N=4294792885) (I live in Texas, currently, so am well aware of how hard they are to come by here).
Granted, there is still some chemical sounding stuff in those natural ones, but at a drop or two in a whole batch of paint that they are going to get a small fraction of in their mouth on
maybe a day or two a month, let’s keep it in perspective!
□ Anna Ranson says
Thank you for that excellent comment anon! Thank you x
70. Anonymous says
Corn (because it’s a starchy carb) is kind of an empty calorie food and not a huge nutritional value for baby therefore some people opt not to give their baby cornstarch until after 12 months but
as early as 10 months. It can cause gas/diarrhea and can be hard to digest. Having said that, some infant formulas have derivitives of cornstarch such as corn syrup. Further to that, jarred baby
food will sometimes have cornstarch especially if it’s a gravy type mix. I’ve recently started giving my 11 mth old creamed corn (homemade) and she seems to have no problem with it. On the dye
side of things, try something for red such as beet juice/beet powder (straight from the beet or the water), blue and purple from dark berries, carrots for orange, saffron for yellow and those are
just a few suggestions. A quick google search will help with that.
□ Oh my gosh what a great recipe! Thank you so much for sharing this. I am going to try this out with my two little ones. I know they are going to have so much fun!
□ Anonymous says
Here in the UK the majority of our food colourings ARE natural.
□ Anonymous says
The finger paint is not a food. It is a paint that if it gets into babies mouth then baby will be okay. Therefore it is not necessary for it to be highly calorific or nutritional as that is
not it’s primary function.
□ Anna Ranson says
Thank you very much for that measured and intelligent response! π
71. Kimmi Irvin says
Hello, I love the paint, anything I have ever tried simply turns to liquid, or just doesn’t have the right consistency. I credited your page on my blog today, for making the edible paint. Thanks
for such a wonderful recipe. niftythriftymom.1.blogspot.com
□ Anna Ranson says
Thank you!
72. Hayleypearl says
I made this today and added a small amount of vanilla essence into it and added some food colouring, not much, but it came out really vibrant! http://i947.photobucket.com/albums/ad317/hayleypearl
□ Anna Ranson says
Ooo what a lovely idea to add the essence! Thanks!
73. Natalie A. says
I love the idea of this. I’m always worried about what’s in regular finger paints. This takes all of that worry away. I’m eager to try these with my charges this week.
74. Emmlou24 says
I have been looking for finger paints to do with my daugther who is just 10 months she loves messy play now I am happy in knowlege that I can now make safe paint for her and we can now make
christmas cards togethers for her grandparents π
75. Anonymous says
Hey I think this is a great idea..
Just wondering would it work on canvas?
76. jen jen says
Another way of making edible finger paint is usung vanilla pudding or plain, or vanilla yogurt. And mixing in food color.
77. Todd and Kristy says
Worked great for me, and my 1 year old preferred to use a brush (she’s a bit princessy like that – didn’t want to get her hands dirty, but was happy to have it everywhere if she used a brush) –
how do you store the leftover paint and for how long?
78. Anonymous says
I’ve just made this and it was incredibly easy! Really looking forward to letting my 16 month old get stuck in.
79. KingsLangleySP says
Great activity. Just a warning to be careful with the mix if letting little ones gets in and help make it, as it can get pretty hot… I know that is an obvious point- but I’m all for protecting
the little ones, and we all forgot common sense at times. My tip for this recipe is to stir it a lot! When you think it is perfect, stir some more! We kept stirring and stirring, and ours
eventually became a very similar texture to the paints we used to use at school! We used HEAPS of colour- we got some great brights, but yes beware of staining. My little one has gone down for
her afternoon nap a little more yellow than usual (sort of like a Simpsons character). Thanks for this post… I have it earmarked this task for a group activity for one of my Speech Pathology
language development groups. I can imagine it will be a hit!
80. KingsLangleySP says
Oh one more thing…. old baby custard glass jars (the heinz branded ones in Australia) are a perfect size for paint pots…. and I feel all green for reusing someonthing π
81. Fleur says
Just done this with my 6 month old, she loved it! It kept her attention for half an hour, which for her is an achievement! The recipe worked great for us, we did need to put in the microwave for
a minute, but it came together brilliantly after that. Will be doing it again with big brother later – thanks for the idea!
82. crack3dup says
Can this be stored room temp? Or does it need stored in the fridge? Thanks!
83. Anonymous says
Great so when you give it to the kids they will eat it and love it. But when they go to school and they see paint and they eat it they r going to die. Teaching kids bad things. I don’t allow this
□ Helen Cross says
By the time children are school age they are able to understand about what they can and cannot eat. The whole point of this is that babies too young to understand they can’t eat paint can
join in.
84. Rachel Windsor says
Wow! Thanks so much for this wonderful recipe! Just a quick question: how long do I need to fridge it after adding colouring?
85. Claudia says
Just shared this idea: it is so clever!
I wish all preschool teachers took the time to make such a child friendly paint and to raise awareness about the many chemicals that are present in many things that are labelled “non-toxic”.
Thanks for sharing this recipe!
86. susan gregory says
just a thought, powdered coloring would give a more vibrant color
87. juliet46151 says
Cute idea.when I was a kid we used pudding as finger paint.
88. Momma Sandra says
Just made this recipe for my kids and they are painting right now! I used cornstarch and it turned out slightly lumpy but it adds to the texture experience π Mixing the cornstarch with the
cold water until smooth before adding the hot water would fix this problem but I let my 3 year old mix up that part… I didn’t have much food coloring in my cupboard so I used things like coffee
grounds, curry powder, paprika, dill, etc… to make the colors. Worked great and added some scent and extra texture π
89. Anonymous says
I am going to make this for my daughter’s b-day party. Just thought that I would share what I read on FOOD.com:
“Corn Flour: A powdery flour made of finely ground cornmeal, NOT to be confused with cornstarch. The exception is in British and Australian recipes where the term “cornflour” is used synonymously
with the U.S. word cornstarch. Corn flour comes in yellow and white and is used for breading and in combination with other flours in baked goods. Corn flour is milled from the whole kernel, while
cornstarch is obtained from the endosperm portion of the kernel. Substitutions: cornmeal pulverized in a food processor.”
I’m sure any whole foods or health foods store would carry it.
Thanks for the fun looking recipe!
90. Anonymous says
Tried this. Got soup. Transferred to pot, added more cornstarch and cooked. Ended up with lumpy soup. Very disappointing.
91. BAMANDIA says
can i store this for later use? how long does it last? Fridge or room temp?
92. SOS says
I tried mixing the cornstarch in cold water first, and then instead of adding boiling water, I just added 4 and a half cups of cold water, mixed it again and microwaved it. Super consistency!
93. Ziggy Γ ren says
I just made this for the first time today for our mums and babies (ages from 6 months to 3 years) and we LOVED it! Thank you so so much for this great recipe. I’m never buying store-made paint
again! I loved the consistency of it. I only put in 3.5 cups of boiling water as it seemed enough. The colours turned out beautifully too. It was great that we could hang them up to dry without
the paint dripping on to the floor because it’s nice and thick. Will DEFINITELY be making this on a regular basis!
94. Ziggy Γ ren says
By the way, I’ve linked up to you on my blog, hope that’s ok. Have a look at our homemade paint results. http://www.zigzagkidsclub.com/1/post/2013/03/its-raining-rainbows.html | {"url":"https://theimaginationtree.com/homemade-edible-finger-paint-recipe/","timestamp":"2024-11-05T23:21:39Z","content_type":"text/html","content_length":"266351","record_id":"<urn:uuid:b5fd47dc-bbe8-4519-b1e1-f9e526425486>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00538.warc.gz"} |
Reciprocity laws through formal groups
A relation between formal groups and reciprocity laws is studied following the approach initiated by Honda. Let ξ denote an mth primitive root of unity. For a character χ of order m, we define two
one-dimensional formal groups over Z[ξ] and prove the existence of an integral homomorphism between them with linear coefficient equal to the Gauss sum of χ. This allows us to deduce a reciprocity
formula for the mth residue symbol which, in particular, implies the cubic reciprocity law.
Dive into the research topics of 'Reciprocity laws through formal groups'. Together they form a unique fingerprint. | {"url":"https://cris.huji.ac.il/en/publications/reciprocity-laws-through-formal-groups","timestamp":"2024-11-14T02:01:28Z","content_type":"text/html","content_length":"44977","record_id":"<urn:uuid:7172aa3e-8962-4481-baed-fa998a826f75>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00303.warc.gz"} |
The Basel Ii Risk Parameters: Estimation, Validation, And Stress Testing [PDF] [1613hgv5n198]
A critical problem in the practice of banking risk assessment is the estimation and validation of the Basel II risk parameters PD (default probability), LGD (loss given default), and EAD (exposure at
default). These sophisticated parameters are used on the one hand as analysis tools in the modeling of credit portfolio, and are also used to compute regulatory capital according to the new Basel
This book offers comprehensive coverage of the state-of-the-art in designing and validating rating systems and default probability estimations. Furthermore, it presents techniques to estimate LGD and
EAD. This timely and practical monograph concludes with a chapter on stress testing of the Basel II risk parameters.
The Basel II Risk Parameters
Bernd Engelmann Robert Rauhmeier (Editors)
The Basel II Risk Parameters Estimation, Validation, and Stress Testing With 7 Figures and 58 Tables
Dr. Bernd Engelmann Quanteam Dr. Bernd Engelmann und Særen Gerlach GbR Basaltstraûe 28 60487 Frankfurt
[email protected]
Dr. Robert Rauhmeier Dresdner Bank AG Risk Instruments ± Methods Gallusanlage 7 60329 Frankfurt
[email protected]
ISBN-10 3-540-33085-2 Springer Berlin Heidelberg New York ISBN-13 978-3-540-33085-1 Springer Berlin Heidelberg New York Cataloging-in-Publication Data Library of Congress Control Number: 2006929673
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the
German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German
Copyright Law. Springer is a part of Springer Science+Business Media springeronline.com ° Springer Berlin ´ Heidelberg 2006 Printed in Germany The use of general descriptive names, registered names,
trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for
general use. Softcover-Design: Design & Production, Heidelberg SPIN 11693062
43/3100-5 4 3 2 1 0 ± Printed on acid-free paper
Preface In the last decade the banking industry has experienced a significant development in the understanding of credit risk. Refined methods were proposed concerning the estimation of key risk
parameters like default probabilities. Further, a large volume of literature on the pricing and measurement of credit risk in a portfolio context has evolved. This development was partly reflected by
supervisors when they agreed on the new revised capital adequacy framework, Basel II. Under Basel II, the level of regulatory capital depends on the risk characteristics of each credit while a
portfolio context is still neglected. The focus of this book is on the estimation and validation of the three key Basel II risk parameters, probability of default (PD), loss given default (LGD), and
exposure at default (EAD). Since the new regulatory framework will become operative in January 2007 (at least in Europe), many banks are in the final stages of implementation. Many questions have
arisen during the implementation phase and are discussed by practitioners, supervisors, and academics. A ‘best practice’ approach has to be formed and will be refined in the future even beyond 2007.
With this book we aim to contribute to this process. Although the book is inspired by the new capital framework, we hope that it is valuable in a broader context. The three risk parameters are
central inputs to credit portfolio models or credit pricing algorithms and their correct estimation is therefore essential for internal bank controlling and management. This is not a book about the
Basel II framework. There is already a large volume of literature explaining the new regulation at length. Rather, we attend to the current state-of-the-art of quantitative and qualitative
approaches. The book is a combination of coordinated stand-alone articles, arranged into fifteen chapters so that each chapter can be read exclusively. The authors are all experts from science,
supervisory authorities, and banking practice. The book is divided into three main parts: Estimation techniques for the parameters PD, LGD and EAD, validation of these parameters, and stress testing.
The first part begins with an overview of the popular and established methods for estimating PD. Chapter II focuses on methods for PD estimation for small and medium sized corporations while Chapter
III treats the PD estimation for the retail segment. Chapters IV and V deal with those segments with only a few or even no default data, as it is often the case in the large corporate, financial
institutions, or sovereign segment. Chapter IV illustrates how PD can be estimated with the shadow rating approach while Chapter V uses techniques from probability theory. Chapter VI describes how
PDs and Recovery Rates could be estimated under considerations of systematic and idiosyncratic risk factors simultaneously. This is a perfect changeover to the chapters VII to X dealing with LGD and
EAD estimation which is quite new in practice compared to ratings and PD estimation. Chap-
ter VII describes how LGD could be modelled in a point-in-time framework as a function of risk drivers, supported by an empirical study on bond data. Chapter VIII provides a general survey of LGD
estimation from a practical point of view. Chapters IX and X are concerned with the modelling of EAD. Chapter IX provides a general overview of EAD estimation techniques while Chapter X focuses on
the estimation of EAD for facilities with explicit limits. The second part of the book consists of four chapters about validation and statistical back-testing of rating systems. Chapter XI deals with
the perspective of the supervisory authorities and gives a glance as to what is expected when rating systems will be used under the Basel II framework. Chapter XII has a critical discussion on
measuring the discriminatory power of rating systems. Chapter XIII gives an overview of statistical tests for the dimension calibration, i.e. the accuracy of PD estimations. In Chapter XIV these
methods are enhanced by techniques of Monte-Carlo-Simulations which allows e.g. for integration of correlation assumptions as is also illustrated within a back-testing study on a real-life rating
data sample. The final part consists of Chapter XV, which is on stress testing. The purpose of stress testing is to detect limitations of models for the risk parameters and to analyse effects of
(extreme) worse scenarios in the future on a bank’s portfolio. Concepts and implementation strategies of stress test are explained and a simulation study reveals amazing effects of stress scenarios
when calculating economic capital with a portfolio model. All articles set great value on practical applicability and mostly include empirical studies or work with examples. Therefore we regard this
book as a valuable contribution towards modern risk management in every financial institution, whereas we steadily keep track on the requirements of Basel II. The book is addressed to risk managers,
rating analyst and in general quantitative analysts who work in the credit risk area or on regulatory issues. Furthermore, we target internal auditors and supervisors who have to evaluate the quality
of rating systems and risk parameter estimations. We hope that this book will deepen their understanding and will be useful for their daily work. Last but not least we hope this book will also be of
interest to academics or students in finance or economics who want to get an overview of the state-of-the-art of a currently important topic in the banking industry. Finally, we have to thank all the
people who made this book possible. Our sincere acknowledgements go to all the contributors of this book for their work, their enthusiasm, their reliability, and their cooperation. We know that most
of the writing had to be done in valuable spare time. We are glad that all of them were willing to make such sacrifices for the sake of this book. Special thank goes to Walter Gruber for bringing us
on the idea to edit this book. We are grateful to Martina Bihn from Spinger-Verlag who welcomed our idea for this book and supported our work on it.
We thank Dresdner Bank AG, especially Peter Gassmann and Dirk Thomas, and Quanteam AG for supporting our book. Moreover we are grateful to all our colleagues and friends who agreed to work as
referees or discussion partners. Finally we would like to thank our families for their continued support and understanding. Frankurt am Main
Bernd Engelmann Robert Rauhmeier June 2006
Contents I. Statistical Methods to Develop Rating Models ................................................1 Evelyn Hayden and Daniel Porath 1. Introduction
...............................................................................................1 2. Statistical Methods for Risk Classification ...............................................1 3.
Regression Analysis ..................................................................................2 4. Discriminant Analysis
...............................................................................3 5. Logit and Probit Models............................................................................4 6. Panel
Models .............................................................................................7 7. Hazard Models
..........................................................................................8 8. Neural Networks .......................................................................................9
9. Decision Trees.........................................................................................10 10. Statistical Models and Basel II
..............................................................11 References ...................................................................................................12 II. Estimation of a
Rating Model for Corporate Exposures ............................13 Evelyn Hayden 1. Introduction .............................................................................................13 2.
Model Selection.......................................................................................13 3. The Data Set
............................................................................................14 4. Data Processing
.......................................................................................15 4.1. Data Cleaning ..................................................................................15 4.2.
Calculation of Financial Ratios........................................................16 4.3. Test of Linearity Assumption ..........................................................17 5. Model
Building .......................................................................................19 5.1. Pre-selection of Input Ratios............................................................19
5.2. Derivation of the Final Default Prediction Model ...........................21 5.3. Model Validation .............................................................................22 6.
Conclusions .............................................................................................24 References
...................................................................................................24 III. Scoring Models for Retail Exposures
..........................................................25 Daniel Porath 1. Introduction .............................................................................................25 2. The
Concept of Scoring...........................................................................26 2.1. What is Scoring?..............................................................................26
2.2. Classing and Recoding.....................................................................27 2.3. Different Scoring Models ................................................................29 3.
Scoring and the IRBA Minimum Requirements .....................................30 3.1. Rating System Design......................................................................30 3.2. Rating
3.3. Risk Drivers..................................................................................... 31 3.4. Risk
Quantification.......................................................................... 31 3.5. Special Requirements for Scoring Models....................................... 32 4. Methods for
Estimating Scoring Models................................................. 32 5. Summary ................................................................................................. 36
References................................................................................................... 37 IV. The Shadow Rating Approach – Experience from Banking Practice ...... 39 Ulrich
Erlenmaier 1. Introduction............................................................................................. 39 2. Calibration of External
Ratings............................................................... 42 2.1. Introduction ..................................................................................... 42 2.2. External
Rating Agencies and Rating Types ................................... 43 2.3. Definitions of the Default Event and Default Rates......................... 44 2.4. Sample for PD Estimation
............................................................... 45 2.5. PD Estimation Techniques............................................................... 46 2.6. Adjustments
..................................................................................... 47 2.7. Point-in-Time Adaptation ................................................................ 48 3. Sample
Construction for the SRA Model................................................ 50 3.3. External PDs and Default Indicator ................................................. 54 4. Univariate Risk
Factor Analysis.............................................................. 57 4.1. Introduction ..................................................................................... 57 4.2.
Discriminatory Power...................................................................... 58 4.3. Transformation ................................................................................ 59
4.4. Representativeness........................................................................... 62 4.5. Missing Values
................................................................................ 63 4.6. Summary.......................................................................................... 65 5.
Multi-factor Model and Validation ......................................................... 66 5.1. Introduction .....................................................................................
66 5.2. Model Selection ............................................................................... 66 5.3. Model Assumptions
......................................................................... 67 5.4. Measuring Influence ........................................................................ 70 5.5. Manual
Adjustments and Calibration .............................................. 72 5.6. Two-step Regression ....................................................................... 73 5.7. Corporate Groups
and Sovereign Support ....................................... 73 5.8. Validation ........................................................................................ 74 6.
Conclusions............................................................................................. 75
References................................................................................................... 76 V. Estimating Probabilities of Default for Low Default Portfolios ................. 79
Katja Pluto and Dirk Tasche 1. Introduction............................................................................................. 79 2. Example: No Defaults, Assumption of
Independence............................. 81 3. Example: Few Defaults, Assumption of Independence........................... 83 4. Example: Correlated Default
Events....................................................... 86 5. Potential Extension: Calibration by Scaling Factors ............................... 89
6. Potential Extension: The Multi-period case ............................................92 7. Potential Applications .............................................................................97
8. Open Issues .............................................................................................97 9. Conclusions
.............................................................................................98 References
...................................................................................................99 Appendix A
...............................................................................................100 Appendix B
...............................................................................................102 VI. A Multi-Factor Approach for Systematic Default and Recovery Risk...105 Daniel Rösch and Harald
Scheule 1. Modelling Default and Recovery Risk ..................................................105 2. Model and Estimation
...........................................................................106 2.1. The Model for the Default Process ................................................106 2.2. The Model for the
Recovery ..........................................................107 2.3. A Multi-Factor Model Extension...................................................108 2.4. Model
Estimation...........................................................................110 3. Data and Results....................................................................................111
3.1. The Data.........................................................................................111 3.2. Estimation Results
.........................................................................114 4. Implications for Economic and Regulatory Capital ..............................118 5. Discussion
.............................................................................................122 References
.................................................................................................123 Appendix: Results of Monte-Carlo Simulations .......................................124 VII.
Modelling Loss Given Default: A “Point in Time”-Approach...............127 Alfred Hamerle, Michael Knapp, Nicole Wildenauer 1. Introduction
...........................................................................................127 2. Statistical
Modelling..............................................................................129 3. Empirical Analysis ................................................................................131
3.1. The Data.........................................................................................131 3.2.
Results............................................................................................134 4. Conclusions
...........................................................................................138 References
.................................................................................................139 Appendix: Macroeconomic Variables .......................................................140 VIII.
Estimating Loss Given Default – Experiences from Banking Practice...............................................................................................................143 Christian Peter 1.
Introduction ...........................................................................................143 2. LGD Estimates in Risk Management ....................................................144
2.1. Basel II Requirements on LGD Estimates – a Short Survey..........144 2.2. LGD in Internal Risk Management and Other Applications..........145 3. Definition of Economic Loss and
LGD.................................................147 4. A Short Survey of Different LGD Estimation Methods ........................149 5. A Model for Workout
6. Direct Estimation Approaches for LGD................................................ 153 6.1. Collecting Loss Data – the Credit Loss Database.......................... 154 6.2. Model Design and
Estimation........................................................ 156 7. LGD Estimation for Defaulted Exposures ............................................ 170 8. Concluding Remarks
............................................................................. 173 References................................................................................................. 174 IX.
Overview of EAD Estimation Concepts .................................................... 177 Walter Gruber and Ronny Parchert 1. EAD Estimation from a Regulatory
Perspective................................... 177 1.1. Definition of Terms ....................................................................... 177 1.2. Regulatory Prescriptions Concerning the
EAD Estimation ........... 178 1.3. Delimitation to Other Loss Parameters.......................................... 179 1.4. EAD Estimation for Derivative Products
...................................... 181 2. Internal Methods of EAD Estimation.................................................... 184 2.1. Empirical Models
........................................................................ 184 2.2. Internal Approaches for EAD Estimation for Derivative Products 186 3. Conclusion
............................................................................................ 195
References................................................................................................. 195 X. EAD Estimates for Facilities with Explicit
Limits..................................... 197 Gregorio Moral 1. Introduction........................................................................................... 197 2. Definition of Realised
Conversion Factors ........................................... 198 3. How to Obtain a Set of Realised Conversion Factors ........................... 201 3.1. Fixed Time
Horizon....................................................................... 201 3.2. Cohort Method............................................................................... 202 3.3. Variable
Time Horizon .................................................................. 203 4. Data Sets (RDS) for Estimation Procedures.......................................... 205 4.1. Structure and Scope
of the Reference Data Set ............................. 206 4.2. Data Cleaning ................................................................................ 207 4.3. EAD Risk
Drivers.......................................................................... 211 5. EAD Estimates ...................................................................................... 213 5.1.
Relationship Between Observations in the RDS and the Current Portfolio................................................................................................ 213 5.2. Equivalence between EAD
Estimates and CF Estimates............... 213 5.3. Modelling Conversion Factors from the Reference Data Set ........ 214 5.4. LEQ = Constant
............................................................................. 217 5.5. Usage at Default Method with CCF = Constant (Simplified Momentum
Method)............................................................................. 218 6. How to Assess the Optimality of the Estimates .................................... 219 6.1. Type of
Estimates .......................................................................... 219 6.2. A Suitable Class of Loss Functions ............................................... 220 6.3. The Objective
Function ................................................................. 221 7. Example 1 ............................................................................................. 223 7.1. RDS
............................................................................................... 223
7.2. Estimation Procedures ...................................................................228 8. Summary and Conclusions....................................................................235
References .................................................................................................236 Appendix A. Equivalence between two Minimisation Problems ..............237 Appendix B.
Optimal Solutions of Certain Regression and Optimization Problems....................................................................................................238 Appendix C. Diagnostics of
Regressions Models .....................................239 Appendix D. Abbreviations.......................................................................242 XI. Validation of Banks’ Internal
Rating Systems - A Supervisory Perspective .........................................................................................................243 Stefan Blochwitz and Stefan Hohl 1. Basel II
and Validating IRB Systems ....................................................243 1.1. Basel’s New Framework (Basel II)................................................243 1.2. Some
Challenges............................................................................244 1.3. Provisions by the BCBS.................................................................247 2. Validation
of Internal Rating Systems in Detail....................................250 2.1. Component-based Validation.........................................................250 2.2. Result-based Validation
.................................................................256 2.3. Process-based Validation ...............................................................259 3. Concluding Remarks
.............................................................................261 References .................................................................................................262 XII.
Measures of a Rating’s Discriminative Power – Applications and Limitations .........................................................................................................263 Bernd Engelmann
1. Introduction ...........................................................................................263 2. Measures of a Rating System’s Discriminative Power..........................265 2.1.
Cumulative Accuracy Profile.........................................................266 2.2. Receiver Operating Characteristic .................................................268 2.3.
Extensions......................................................................................272 3. Statistical Properties of AUROC...........................................................275
3.1. Probabilistic Interpretation of AUROC .........................................275 3.2. Computing Confidence Intervals for AUROC...............................277 3.3. Testing for Discriminative
Power ..................................................279 3.4. Testing for the Difference of two AUROCs ..................................280 4. Correct Interpretation of AUROC
.........................................................283 References .................................................................................................285 Appendix A. Proof of (2)
..........................................................................285 Appendix B. Proof of (7)...........................................................................286 XIII. Statistical
Approaches to PD Validation................................................289 Stefan Blochwitz, Marcus R. W. Martin, and Carsten S. Wehn 1. Introduction
...........................................................................................289 2. PDs, Default Rates, and Rating Philosophy ..........................................289
3. Tools for Validating PDs....................................................................... 291 3.1. Statistical Tests for a Single Time Period...................................... 292 3.2.
Statistical Multi-period Tests......................................................... 298 3.3. Discussion and Conclusion............................................................ 303 4. Practical
Limitations to PD Validation ................................................. 303 References................................................................................................. 305 XIV.
PD-Validation – Experience from Banking Practice ............................ 307 Robert Rauhmeier 1.
Introduction........................................................................................... 307 2. Rating Systems in Banking Practice .....................................................
308 2.1. Definition of Rating Systems......................................................... 308 2.2. Modular Design of Rating Systems ............................................... 308 2.3. Scope
of Rating Systems ............................................................... 310 2.4. Rating Scales and Master Scales ................................................... 310 2.5. Parties
Concerned by the Quality of Rating Systems .................... 312 3. Statistical Framework............................................................................ 313 4. Central Statistical
Hypothesis Tests Regarding Calibration.................. 316 4.1. Binomial Test ................................................................................ 317 4.2. Spiegelhalter Test
(SPGH)............................................................. 319 4.3. Hosmer-Lemeshow-F2 Test (HSLS).............................................. 320 4.4. A Test for Comparing Two Rating
Systems: The Redelmeier Test ....................................................................................................... 321 5. The Use of Monte-Carlo Simulation Technique
................................... 323 5.1. Monte-Carlo-Simulation and Test Statistic: Correction of Finite Sample Size and Integration of Asset Correlation ................................ 323 5.2.
Assessing the Test Power by Means of Monte-Carlo-Simulation . 329 6. Creating Backtesting Data Sets – The Concept of the Rolling 12-Month-Windows
.................................................................................. 333 7. Empirical Results .................................................................................. 336 7.1.
Data Description ............................................................................ 336 7.2. The First Glance: Forecast vs. Realised Default Rates .................. 337 7.3. Results of the
Hypothesis Tests for all Slices ................................ 337 7.4. Detailed Analysis of Slice ‘Jan2005’............................................. 339 8. Conclusion
............................................................................................ 341
References................................................................................................. 342 Appendix A
............................................................................................... 344 Appendix B
............................................................................................... 345 XV. Development of Stress Tests for Credit Portfolios.................................. 347 Volker
Matthias Gundlach 1. Introduction........................................................................................... 347 2. The Purpose of Stress Testing
............................................................... 348 3. Regulatory Requirements...................................................................... 349 4. Risk Parameters for Stress
Testing........................................................ 351 5. Evaluating Stress Tests.......................................................................... 353
6. Classifying Stress Tests.........................................................................354 7. Conducting Stress Tests
........................................................................358 7.1. Uniform Stress Tests......................................................................358 7.2. Sensitivity
Analysis for Risk Factors.............................................360 7.3. Scenario Analysis ..........................................................................360 8. Examples
...............................................................................................363 9.
Conclusion.............................................................................................366 References
Contributors.......................................................................................................369 Index
I. Statistical Methods to Develop Rating Models Evelyn Hayden and Daniel Porath Österreichische Nationalbank1 and University of Applied Sciences at Mainz
1. Introduction The Internal Rating Based Approach (IRBA) of the New Basel Capital Accord allows banks to use their own rating models for the estimation of probabilities of default (PD) as long as
the systems meet specified minimum requirements. Statistical theory offers a variety of methods for building and estimation rating models. This chapter gives an overview of these methods. The
overview is focused on statistical methods and includes parametric models like linear regression analysis, discriminant analysis, binary response analysis, time-discrete panel methods, hazard models
and nonparametric models like neural networks and decision trees. We also highlight the benefits and the drawbacks of the various approaches. We conclude by interpreting the models in light of the
minimum requirements of the IRBA.
2. Statistical Methods for Risk Classification In the following we define statistical models as the class of approach which uses econometric methods to classify borrowers according to their risk.
Statistical rating systems primarily involve a search for explanatory variables which provide as sound and reliable a forecast of the deterioration of a borrower's situation as possible. In contrast,
structural models explain the threats to a borrower based on an economic model and thus use clear causal connections instead of the mere correlation of variables. The following sections offer an
overview of parametric and nonparametric models generally considered for statistical risk assessment. Furthermore, we discuss the advantages and disadvantages of each approach. Many of the methods
are described in more detail in standard econometric textbooks, like Greene (2003).
The opinions expressed in this chapter are those of the author and do not necessarily reflect views of the Österreischische Nationalbank.
Evelyn Hayden and Daniel Porath
In general, a statistical model may be described as follows: As a starting point, every statistical model uses the borrower’s characteristic indicators and (possibly) macroeconomic variables which
were collected historically and are available for defaulting (or troubled) and non-defaulting borrowers. Let the borrower’s characteristics be defined by a vector of n separate variables (also called
covariates) x = x1,...,xn observed at time t - L. The state of default is indicated by a binary performance variable y observed at time t. The variable y is defined as y = 1 for a default and y = 0
for a non-default. The sample of borrowers now includes a number of individuals or firms that defaulted in the past, while (typically) the majority did not default. Depending on the statistical
application of this data, a variety of methods can be used to predict the performance. A common feature of the methods is that they estimate the correlation between the borrowers’ characteristics and
the state of default in the past and use this information to build a forecasting model. The forecasting model is designed to assess the creditworthiness of borrowers with unknown performance. This
can be done by inputting the characteristics x into the model. The output of the model is the estimated performance. The time lag L between x and y determines the forecast horizon.
3. Regression Analysis As a starting point we consider the classical regression model. The regression model establishes a linear relationship between the borrowers’ characteristics and the default
variable: yi
ȕ' x i u i
Again, yi indicates whether borrower i has defaulted (yi = 1) or not (yi = 0). In period t, xi is a column vector of the borrowers’ characteristics observed in period t – L and E is a column vector
of parameters which capture the impact of a change in the characteristics on the default variable. Finally, ui is the residual variable which contains the variation not captured by the
characteristics xi. The standard procedure is to estimate (1) with the ordinary least squares (OLS) estimators of E which in the following are denoted by b. The estimated result is the borrower’s
score Si. This can be calculated by Si
E yi | x i b' x i .
Equation (2) shows that a borrower’s score represents the expected value of the performance variable when his or her individual characteristics are known. The score can be calculated by inputting the
values for the borrower’s characteristics into the linear function given in (2).
I. Statistical Methods to Develop Rating Models
Note that Si is continuous (while yi is a binary variable), hence the output of the model will generally be different from 0 or 1. In addition, the prediction can take on values larger than 1 or
smaller than 0. As a consequence, the outcome of the model cannot be interpreted as a probability level. However, the score Si, can be used for the purpose of comparison between different borrowers,
where higher values of Si correlate with a higher default risk. The benefits and drawbacks from model (1) and (2) are the following: x OLS estimators are well-known and easily available. x The
forecasting model is a linear model and therefore easy to compute and to understand. x The random variable ui is heteroscedastic (i.e. the variance of ui is not constant for all i) since Var u i Var
E yi | x i >1 E yi | x i @ b' x i 1 b' x i .
As a consequence, the estimation of E is inefficient and additionally, the standard errors of the estimated coefficients b are biased. An efficient way to estimate E is to apply the Weighted Least
Squares (WLS) estimator. x WLS estimation of E is efficient, but the estimation of the standard errors of b still remains biased. This happens due to the fact that the residuals are not normally
distributed as they can only take on the values b’xi (if the borrower does not default and y therefore equals 0) or (1 – b’xi) (if the borrower does default and y therefore equals 1). This implies
that there is no reliable way to assess the significance of the coefficients b and it remains unknown whether the estimated values represent precise estimations of significant relationships or
whether they are just caused by spurious correlations. Inputting characteristics which are not significant into the model can seriously harm the model’s stability when used to predict borrowers’ risk
for new data. A way to cope with this problem is to split the sample into two parts, where one part (the training sample) is used to estimate the model and the other part (the hold-out sample) is
used to validate the results. The consistency of the results of both samples is then taken as an indicator for the stability of the model. x The absolute value of Si cannot be interpreted.
4. Discriminant Analysis Discriminant analysis is a classification technique applied to corporate bankruptcies by Altman as early as 1968 (see Altman, 1968). Linear discriminant analysis is based on
the estimation of a linear discriminant function with the task of separating individual groups (in this case of defaulting and non-defaulting borrowers) according to specific characteristics. The
discriminant function is Si
ȕ' x i .
Evelyn Hayden and Daniel Porath
The Score Si is also called the discriminant variable. The estimation of the discriminant function adheres to the following principle: Maximization of the spread between the groups (good and bad
borrowers) and minimization of the spread within individual groups Maximization only determines the optimal proportions among the coefficients of the vector E. Usually (but arbitrarily), coefficients
are normalized by choosing the pooled within-group variance to take the value 1. As a consequence, the absolute level of Si is arbitrary as well and cannot be interpreted on a stand-alone basis. As
in linear regression analysis, Si can only be used to compare the prediction for different borrowers (“higher score, higher risk”). Discriminant analysis is similar to the linear regression model
given in equations (1) and (2). In fact, the proportions among the coefficients of the regression model are equal to the optimal proportion according to the discriminant analysis. The difference
between the two methods is a theoretical one: Whereas in the regression model the characteristics are deterministic and the default state is the realization of a random variable, for discriminant
analysis the opposite is true. Here the groups (default or non-default) are deterministic and the characteristics of the discriminant function are realizations from a random variable. For practical
use this difference is virtually irrelevant. Therefore, the benefits and drawbacks of discriminant analysis are similar to those of the regression model: x Discriminant analysis is a widely known
method with estimation algorithms that are easily available. x Once the coefficients are estimated, the scores can be calculated in a straightforward way with a linear function. x Since the
characteristics xi are assumed to be realizations of random variables, the statistical tests for the significance of the model and the coefficients rely on the assumption of multivariate normality.
This is, however, unrealistic for the variables typically used in rating models as for example financial ratios from the balance-sheet. Hence, the methods for analyzing the stability of the model and
the plausibility of the coefficients are limited to a comparison between training and hold-out sample. x The absolute value of the discriminant function cannot be interpreted in levels.
5. Logit and Probit Models Logit and probit models are econometric techniques designed for analyzing binary dependent variables. There are two alternative theoretical foundations. The latent-variable
approach assumes an unobservable (latent) variable y* which is related to the borrower’s characteristics in the following way:
I. Statistical Methods to Develop Rating Models
y i*
ȕ' x i u i
Here E, xi and ui are defined as above. The variable yi* is metrically scaled and triggers the value of the binary default variable yi:
°1 if y*i ! 0 ® °¯0 otherwise
This means that the default event sets in when the latent variable exceeds the threshold zero. Therefore, the probability for the occurrence of the default event equals: P yi
Pu i ! ȕ' x i 1 F ȕ' x i
F ȕ' x i .
Here F(.) denotes the (unknown) distribution function. The last step in (7) assumes that the distribution function has a symmetric density around zero. The choice of the distribution function F(.)
depends on the distributional assumptions about the residuals (ui). If a normal distribution is assumed, we are faced with the probit model: ȕ' xi
F( ȕ' x i )
t 2 e 2 dt
If instead the residuals are assumed to follow a logistic distribution, the result is the logit model: F( ȕ' x i )
e ȕ' xi 1 e ȕ' xi
The second way to motivate logit and probit models starts from the aim of estimating default probabilities. For single borrowers, default probabilities cannot be observed as realizations of default
probabilities. However, for groups of borrowers the observed default frequencies can be interpreted as default probabilities. As a starting point consider the OLS estimation of the following
regression: pi
b' x i u i
In (10) the index i denotes the group formed by a number of individuals, pi is the default frequency observed in group i and xi are the characteristics observed for group i. The model, however, is
inadequate. To see this consider that the outcome (which is E(yi|xi) = b’xi) is not bounded to values between zero and one and therefore cannot be interpreted as a probability. As it is generally
implausible to assume that a probability can be calculated by a linear function, in a second step the linear expression b’xi is transformed by a nonlinear function (link function) F:
Evelyn Hayden and Daniel Porath
F b' x i .
An appropriate link function transforms the values of b’xi to a scale within the interval [0,1]. This can be achieved by any distribution function. The choice of the link function determines the type
of model: with a logistic link function equation (11) becomes a logit model, while with the normal distribution (11) results in the probit model. However, when estimating (10) with OLS, the
coefficients will be heteroscedastic, because Var(ui) = Var(pi) = p(xi) (1-p(xi)). A possible way to achieve homoscedasticity would be to compute the WLS estimators of b in (10). However, albeit
possible, this is not common practice. The reason is that in order to observe default frequencies, the data has to be grouped before estimation. Grouping involves considerable practical problems like
defining the size and number of the groups and the treatment of different covariates within the single groups. A better way to estimate logit and probit models, which does not require grouping, is
the Maximum-Likelihood (ML) method. For a binary dependent variable the likelihood function looks like:
Pb' x >1 Pb' x @. i
1 yi
For the probit model P(.) is the normal density function and for the logit model P(.) is the logistic density function. With equation (12) the estimation of the model is theoretically convincing and
also easy to handle. Furthermore, the ML-approach lends itself for a broad set of tests to evaluate the model and its single variables (see Hosmer and Lemeshow (2000) for a comprehensive
introduction). Usually, the choice of the link function is not theoretically driven. Users familiar with the normal distribution will opt for the probit model. Indeed, the differences in the results
of both classes of models are often negligible. This is due to the fact that both distribution functions have a similar form except for the tails, which are heavier for the logit model. The logit
model is easier to handle, though. First of all, the computation of the estimators is easier. However, today computational complexity is often irrelevant as most users apply statistical software
where the estimation algorithms are integrated. What is more important is the fact that the coefficients of the logit model can be more easily interpreted. To see this we transform the logit model
given in (9) in the following way: Pi 1 Pi
e ȕ' xi
The left-hand side of (13) are the odds, i.e. the relation between the default probability and the probability of survival. Now it can be easily seen that a variation of a single variable xk of one
unit has an impact of eEk on the odds, when Ek denotes the coefficient of the variable xk. Hence, the transformed coefficients eE are called
I. Statistical Methods to Develop Rating Models
odds-ratios. They represent the multiplicative impact of a borrower’s characteristic on the odds. Therefore, for the logit model, the coefficients can be interpreted in a plausible way, which is not
possible for the probit model. Indeed, the most important weakness of binary models is the fact that the interpretation of the coefficients is not straightforward. The strengths of logit and probit
models can be summarized as: x The methods are theoretically sound x The results generated can be interpreted directly as default probabilities x The significance of the model and the individual
coefficients can be tested. Therefore, the stability of the model can be assessed more effectively than in the previous cases.
6. Panel Models The methods discussed so far are all cross-sectional methods because all covariates are related to the same period. However, typically banks dispose of a set of covariates for more
than one period for each borrower. In this case it is possible to expand the cross-sectional input data to a panel dataset. The main motivation is to enlarge the number of available observations for
the estimation and therefore enhance the stability and the precision of the rating model. Additionally, panel models can integrate macroeconomic variables into the model. Macroeconomic variables can
improve the model for several reasons. First, many macroeconomic data sources are more up-to-date than the borrowers’ characteristics. For example, financial ratios calculated from balance sheet
information are usually updated only once a year and are often up to two years old when used for risk assessment. The oil price, instead, is available on a daily frequency. Secondly, by stressing the
macroeconomic input factors, the model can be used for a form of stress-testing credit risk. However, as macroeconomic variables primarily affect the absolute value of the default probability, it is
only reasonable to incorporate macroeconomic input factors into those classes of models that estimate default probabilities. In principle, the structure of, for example, a panel logit or probit model
remains the same as given in the equations of the previous section. The only difference is that now the covariates are taken from a panel of data and have to be indexed by an additional time series
indicator, i.e. we observe xit instead of xi. At first glance panel models seem similar to cross-sectional models. In fact, many developers ignore the dynamic pattern of the covariates and simply fit
logit or probit models. However, logit and probit models rely on the assumption of independent observations. Generally, cross-sectional data meets this requirement, but panel data does not. The
reason is that observations from the same period and observations from the same borrower should be correlated. Introducing this correlation in the estimation procedure is cumbersome. For example, the
fixed-effects estimator known from panel analysis for continuous dependent variables is not available for the
Evelyn Hayden and Daniel Porath
probit model. Besides, the modified fixed-effects estimator for logit models proposed by Chamberlain (1980) excludes all non-defaulting borrowers from the analysis and therefore seems inappropriate.
Finally, the random-effects estimators proposed in the literature are computationally extensive and can only be computed with specialized software. For an econometric discussion of binary panel
analysis, refer to Hosmer and Lemeshow (2000).
7. Hazard Models All methods discussed so far try to assess the riskiness of borrowers by estimating a certain type of score that indicates whether or not a borrower is likely to default within the
specified forecast horizon. However, no prediction about the exact default point in time is made. Besides, these approaches do not allow the evaluation of the borrowers’ risk for future time periods
given they should not default within the reference time horizon. These disadvantages can be remedied by means of hazard models, which explicitly take the survival function and thus the time at which
a borrower's default occurs into account. Within this class of models, the Cox proportional hazard model (cf. Cox, 1972) is the most general regression model, as it is not based on any assumptions
concerning the nature or shape of the underlying survival distribution. The model assumes that the underlying hazard rate (rather than survival time) is a function of the independent variables; no
assumptions are made about the nature or shape of the hazard function. Thus, the Cox’s regression model is a semiparametric model. The model can be written as: hi t | x i h0 t e ȕ' xi ,
where hi(t|xi) denotes the resultant hazard, given the covariates for the respective borrower and the respective survival time t. The term h0(t) is called the baseline hazard; it is the hazard when
all independent variable values are equal to zero. If the covariates are measured as deviations from their respective means, h0(t) can be interpreted as the hazard rate of the average borrower. While
no assumptions are made about the underlying hazard function, the model equation shown above implies important assumptions. First, it specifies a multiplicative relationship between the hazard
function and the log-linear function of the explanatory variables, which implies that the ratio of the hazards of two borrowers does not depend on time, i.e. the relative riskiness of the borrowers
is constant, hence the name Cox proportional hazard model. Besides, the model assumes that the default point in time is a continuous random variable. However, often the borrowers’ financial
conditions are not observed continuously but rather at discrete points in time. What’s more, the covariates are
I. Statistical Methods to Develop Rating Models
treated as if they were constant over time, while typical explanatory variables like financial ratios change with time. Although there are some advanced models to incorporate the above mentioned
features, the estimation of these models becomes complex. The strengths and weaknesses of hazard models can be summarized as follows: x Hazard models allow for the estimation of a survival function
for all borrowers from the time structure of historical defaults, which implies that default probabilities can be calculated for different time horizons. x Estimating these models under realistic
assumptions is not straightforward.
8. Neural Networks In recent years, neural networks have been discussed extensively as an alternative to the (parametric) models discussed above. They offer a more flexible design to represent the
connections between independent and dependent variables. Neural networks belong to the class of non-parametrical methods. Unlike the methods discussed so far they do not estimate parameters of a
well-specified model. Instead, they are inspired by the way biological nervous systems, such as the brain, process information. They typically consist of many nodes that send a certain output if they
receive a specific input from the other nodes to which they are connected. Like parametric models, neural networks are trained by a training sample to classify borrowers correctly. The final network
is found by adjusting the connections between the input, output and any potential intermediary nodes. The strengths and weaknesses of neural networks can be summarized as: x Neural networks easily
model highly complex, nonlinear relationships between the input and the output variables. x They are free from any distributional assumptions. x These models can be quickly adapted to new information
(depending on the training algorithm). x There is no formal procedure to determine the optimum network topology for a specific problem, i.e. the number of the layers of nodes connecting the input
with the output variables. x Neural networks are black boxes, hence they are difficult to interpret. x Calculating default probabilities is possible only to a limited extent and with considerable
extra effort. In summary, neural networks are particularly suitable when there are no expectations (based on experience or theoretical arguments) on the relationship between the input factors and the
default event and the economic interpretation of the resulting models is of inferior importance.
Evelyn Hayden and Daniel Porath
9. Decision Trees A further category of non-parametric methods comprises decision trees, also called classification trees. Trees are models which consist of a set of if-then split conditions for
classifying cases into two (or more) different groups. Under these methods, the base sample is subdivided into groups according to the covariates. In the case of binary classification trees, for
example, each tree node is assigned by (usually univariate) decision rules, which describe the sample accordingly and subdivide it into two subgroups each. New observations are processed down the
tree in accordance with the decision rules' values until the end node is reached, which then represents the classification of this observation. An example is given in Figure 1. Sector Construction
Years in business
Less than 2
…. ….
Less than 15%
Risk class 2
Equity ratio
More than 15%
Risk class 3
Figure 1. Decision Tree
One of the most striking differences of the parametric models is that all covariates are grouped and treated as categorical variables. Furthermore, whether a specific variable or category becomes
relevant depends on the categories of the variables in the upper level. For example, in Figure 1 the variable “years in business” is only relevant for companies which operate in the construction
sector. This kind of dependence between variables is called interaction. The most important algorithms for building decision trees are the Classification and Regression Trees algorithms (C&RT)
popularized by Breiman et al. (1984) and the CHAID algorithm (Chi-square Automatic Interaction Detector, see Kass, 1978). Both algorithms use different criteria to identify the best splits in the
data and to collapse the categories which are not significantly different in outcome. The general strengths and weaknesses of trees are:
I. Statistical Methods to Develop Rating Models
x Through categorization, nonlinear relationships between the variables and the score can be easily modelled. x Interactions present in the data can be identified. Parametric methods can model
interactions only to a limited extent (by introducing dummy variables). x As with neural networks, decision trees are free from distributional assumptions. x The output is easy to understand. x
Probabilities of default have to be calculated in a separate step. x The output is (a few) risk categories and not a continuous score variable. Consequently, decision trees only calculate default
probabilities for the final node in a tree, but not for individual borrowers. x Compared to other models, trees contain fewer variables and categories. The reason is that in each node the sample is
successively partitioned and therefore continuously diminishes. x The stability of the model cannot be assessed with statistical procedures. The strategy is to work with a training sample and a
hold-out sample. In summary, trees are particularly suited when the data is characterized by a limited number of predictive variables which are known to be interactive.
10. Statistical Models and Basel II Finally, we ask the question whether the models discussed in this chapter are in line with the IRB Approach of Basel II. Prior to the discussion, it should be
mentioned that in the Basel documents, rating systems are defined in a broader sense than in this chapter. Following § 394 of the Revised Framework from June 2004 (cf. BIS, 2004) a rating system
“comprises all the methods, processes, controls, and data collection and IT systems that support the assessment of credit risk, the assignment of internal ratings, and the quantification of default
and loss estimates”. Compared to this definition, these methods provide one component, namely the assignment of internal ratings. The minimum requirements for internal rating systems are treated in
part II, section III, H of the Revised Framework. A few passages of the text concern the assignment of internal ratings, and the requirements are general. They mainly concern the rating structure and
the input data, examples being: x x x x x
a minimum of 7 rating classes of non-defaulted borrowers (§ 404) no undue or excessive concentrations in single rating classes (§§ 403, 406) a meaningful differentiation of risk between the classes
(§ 410) plausible, intuitive and current input data (§§ 410, 411) all relevant information must be taken into account (§ 411).
The requirements do not reveal any preference for a certain method. It is indeed one of the central ideas of the IRBA that the banks are free in the choice of the
Evelyn Hayden and Daniel Porath
method. Therefore the models discussed here are all possible candidates for the IRB Approach. The strengths and weaknesses of the single methods concern some of the minimum requirements. For example,
hazard rate or logit panel models are especially suited for stress testing (as required by §§ 434, 345) since they contain a timeseries dimension. Methods which allow for the statistical testing of
the individual input factors (e.g. the logit model) provide a straightforward way to demonstrate the plausibility of the input factors (as required by § 410). When the outcome of the model is a
continuous variable, the rating classes can be defined in a more flexible way (§§ 403, 404, 406). On the other hand, none of the drawbacks of the models considered here excludes a specific method.
For example, a bank may have a preference for linear regression analysis. In this case the plausibility of the input factors cannot be verified by statistical tests and as a consequence the bank will
have to search for alternative ways to meet the requirements of § 410. In summary, the minimum requirements are not intended as a guideline for the choice of a specific model. Banks should rather
base their choice on their internal aims and restrictions. If necessary, those components that are only needed for the purpose to satisfy the criteria of the IRBA should be added in a second step.
All models discussed in this chapter allow for this.
References Altman EI (1968), Financial Indicators, Discriminant Analysis, and the Prediction of Corporate Bankruptcy, Journal of Finance. BIS (2004), International Convergence of Capital Measurement
and Capital Standards, Basel Committee on Banking Supervision, June 2004. Breiman L, Friedman JH, Olshen RA, Stone SJ (1984), Classification and Regression Trees, Wadsworth. Chamberlain G (1980),
Analysis of Covariance with Qualitative Data, Review of Economic Studies 47, 225-238. Cox DR (1972), Regression Models and Life Tables (with Discussion), Journal of Royal Statistical Society, Series
B. Greene W (2003), Econometric Analysis, 5th ed., Prentice-Hall New Jersey. Hosmer W, Lemeshow S (2000), Applied Logistic Regression, New York, Wiley. Kass GV (1978), An Exploratory Technique for
Investigating Large Quantities of Categorical Data, Applied Statistics 29 (2), pp. 119-127.
II. Estimation of a Rating Model for Corporate Exposures Evelyn Hayden Österreichische Nationalbank1
1. Introduction This chapter focuses on the particular difficulties encountered when developing internal rating models for corporate exposures. The main characteristic of these internal rating models
is that they mainly rely on financial ratios. Hence, the aim is to demonstrate how financial ratios can be used for statistical risk assessment. The chapter is organised as follows: Section 2
describes some of the issues concerning model selection, while Section 3 presents data from Austrian companies that will illustrate the theoretical concepts. Section 4 discusses data processing,
which includes the calculation of financial ratios, their transformation to establish linearity, the identification of outliers and the handling of missing values. Section 5 describes the actual
estimation of the rating model, i.e. univariate and multivariate analyses, multicollinearity issues and performance measurement. Finally, Section 6 concludes.
2. Model Selection Chapter I presents several statistical methods for building and estimating rating models. The most popular of these model types – in the academic literature as well as in practice
- is the logit model, mainly for two reasons. Firstly, the output from the logit model can be directly interpreted as default probability, and secondly, the model allows an easy check as to whether
the empirical dependence between the potential explanatory variables and default risk is economically meaningful (see Section 4). Hence, a logit model is chosen to demonstrate the estimation of
internal rating models for corporate exposures. Next, the default event must be defined. Historically, rating models were developed using mostly the default criterion bankruptcy, as this information
was rela1
The opinions expressed in this chapter are those of the author and do not necessarily reflect views of the Österreischische Nationalbank.
Evelyn Hayden
tively easily observable. However, banks also incur losses before the event of bankruptcy, for example, when they allow debtors to defer payments without compensation in hopes that later on, the
troubled borrowers will be able to repay their debt. Therefore, the Basel Committee on Banking Supervision (2001) defined a reference definition of default that includes all those situations where a
bank looses money and declared that banks would have to use this regulatory reference definition of default for estimating internal rating-based models. However, as demonstrated in Hayden (2003),
rating models developed by exclusively relying on bankruptcy as the default criterion can be equally powerful in predicting the comprising credit loss events provided in the new Basel capital accord
as models estimated on these default criteria. In any case, when developing rating models one has to guarantee that the default event used to estimate the model is comparable to the event the model
shall be capable to predict. Finally, a forecast horizon must be chosen. As illustrated by the Basel Committee on Banking Supervision (1999), even before Basel II for most banks it was common habit
to use a modelling horizon of one year, as this time horizon is on the one hand long enough to allow banks to take some action to avert predicted defaults, and on the other hand the time lag is short
enough to guarantee the timeliness of the data input into the rating model.
3. The Data Set The theoretical concepts discussed in this chapter will be illustrated by application to a data set of Austrian companies, which represents a small sample of the credit portfolio of
an Austrian bank. The original data, which was supplied by a major commercial Austrian bank for the research project described in Hayden (2002), consisted of about 5,000 firm-year observations of
balance sheets and gain and loss accounts from 1,500 individual companies spanning 1994 to 1999. However, due to obvious mistakes in the data, such as assets being different from liabilities or
negative sales, the data set had to be reduced to about 4,500 observations. Besides, certain firm types were excluded, i.e. all public firms including large international corporations that do not
represent the typical Austrian company and rather small single owner firms with a turnover of less than 5m ATS (about 0.36m EUR), whose credit quality often depends as much on the finances of a key
individual as on the firm itself. After eliminating financial statements covering a period of less than twelve months and checking for observations that were included twice or more in the data set,
almost 3,900 firm-years were left. Finally, observations were dropped where the default information (bankruptcy) was missing or dubious. Table 1 shows the total number of observed companies per year
and splits the sample into defaulting and non-defaulting firms. However, the data for 1994 is not depicted, as we are going to calculate dynamic financial ratios (which compare
II. Estimation of a Rating Model for Corporate Exposures
current to past levels of certain balance sheet items) later on, and these ratios cannot be calculated for 1994 as the first period in the sample. Table 1. Number of observations and defaults per
Year 1995 1996 1997 1998 1999 Total
Non-Defaulting Firms 1,185 616 261 27 23 2,112
Defaulting Firms 54 68 46 2 1 171
Total 1,239 684 307 29 24 2,283
4. Data Processing Section 4 discusses the major preparatory operations necessary before the model estimation can be conducted. They include the cleaning of the data, the calculation of financial
ratios, and their transformation to establish linearity. 4.1. Data Cleaning Some of the important issues with respect to data cleaning were mentioned in Section 3 when the Austrian data set was
presented. As described, it was guaranteed that: x the sample data was free of (obvious) mistakes, x the data set comprised only homogeneous observations, where the relationship between the financial
ratios and the default event could be expected to be comparable, and x the default information was available (and reliable) for all borrowers. In addition, missing information with respect to the
financial input data must be properly managed. Typically, at least for some borrowers, part of the financial information is missing. If the number of the observations concerned is rather low, the
easiest way to handle the problem is to eliminate the respective observations completely from the data set (as implemented for the Austrian data). If, however, this would result in too many
observations being lost, it is preferable to exclude all variables with high numbers of missing values from the analysis. Once the model has been developed and is in use, the missing information
needed to calculate the model output can be handled by substituting the missing financial ratios with the corresponding mean or median values over all observations for the respective time period
(i.e. practically “neutral” values) in order to create as undistorted an assessment as possible using the remaining input factors.
Evelyn Hayden
4.2. Calculation of Financial Ratios Once the quality of the basic financial data is guaranteed, potential explanatory variables have to be selected. Typically, ratios are formed to standardise the
available information. For example, the ratio “Earnings per Total Assets” enables a comparison of the profitability of firms of different size. In addition to considering ratios that reflect
different financial aspects of the borrowers, dynamic ratios that compare current to past levels of certain balance sheet items can be very useful for predicting default events. Overall, the selected
input ratios should represent the most important credit risk factors, i.e. leverage, liquidity, productivity, turnover, activity, profitability, firm size, growth rates and leverage development.
Table2. Selected input ratios Financial Ratio
Risk Factor
Stand. Dev.
Total Liabilities / Total Assets
Equity / Total Assets
- 0.04
- 0.92
Bank Debt / T. Assets
Short Term Debt / Total Assets
Current Assets / Current Liabilities
Accounts Receivable / Net Sales
Accounts Payable / Net Sales
(Net Sales – Material Costs) / Person. Costs
Net Sales / Total Assets
EBIT / Total Assets
- 0.18
Ordinary Business Income / Total Assets
- 0.19
Total Assets (in 1 Mio. EUR)
Net Sales / Net Sales Last Year
Total Liabilities / Liabilities Last Year
Leverage Growth
II. Estimation of a Rating Model for Corporate Exposures
After the calculation of the financial input ratios, it is necessary to identify and eliminate potential outliers, because they can and do severely distort the estimated model parameters. Outliers in
the ratios might exist even if the underlying financial data is absolutely clean, for example, when the denominator of a ratio is allowed to take on values close to zero. To avoid the need to
eliminate the affected observations a typical procedure is to replace the extreme data points by the 1% respectively the 99% percentile of the according ratio. Table 2 portrays the explanatory
variables selected for use for the Austrian data and presents some descriptive statistics. The indicators chosen comprise a small set of typical business ratios. A broader overview over potential
input ratios as well as a detailed discussion can be found in Hayden (2002). The last column in Table 2 depicts the expected dependence between the accounting ratio and the default probability, where
+ symbolises that an increase in the ratio leads to an increase in the default probability and - symbolises a decrease in the default probability given an increase in the explanatory variable.
Whenever a certain ratio is selected as a potential input variable for a rating model, it should be assured that a clear hypothesis can be formulated about this dependence to guarantee that the
resulting model is economically plausible. Note, however, that the hypothesis chosen can also be rather complex; for example, for the indicator sales growth, the hypothesis formulated is “-/+“. This
takes into account that the relationship between the rate at which companies grow and the rate at which they default is not as simple as that between other ratios and default. While it is generally
better for a firm to grow than to shrink, companies that grow very quickly often find themselves unable to meet the management challenges presented by such growth - especially within smaller firms.
Furthermore, this quick growth is unlikely to be financed out of profits, resulting in a possible build up of debt and the associated risks. Therefore, one should expect that the relationship between
sales growth and default is non-monotone, what will be examined in detail in the next section. 4.3. Test of Linearity Assumption After having selected the candidate input ratios, the next step is to
check whether the underlying assumptions of the logit model apply to the data. As explained in Chapter I, the logit model can be written as Pi
P(y i
F( ȕ' x i )
e ȕ' xi 1 e
ȕ' x i
which implies a linear relationship between the log odd and the input variables: Log odd
§ P ln¨¨ i © 1 Pi
· ¸¸ ¹
ȕ' x i
Evelyn Hayden
This linearity assumption can be easily tested by dividing the indicators into groups that all contain the same number of observations, calculating the historical default rate respectively the
empirical log odd within each group, and estimating a linear regression of the log odds on the mean values of the ratio intervals. When applied to the Austrian data (by forming 50 groups), this
procedure permits the conclusion that for most accounting ratios, the linearity assumption is indeed valid. As an example the relationship between the variable “EBIT / Total Assets” and the empirical
log odd as well as the estimated linear regression is depicted in Figure 1. The regression fit is as high as 78.02%.
Figure 1. Relationship between “EBIT / Total Assets” and log odd
Figure 2. Relationship between “Sales Growth” and log odd
II. Estimation of a Rating Model for Corporate Exposures
However, one explanatory variable, namely sales growth, shows a non-linear and even non-monotone behaviour, just as was expected. Hence, as portrayed in Figure 2, due to the linearity assumption
inherent in the logit model, the relationship between the original ratio sales growth and the default event cannot be correctly captured by such a model. Therefore, to enable the inclusion of the
indicator sales growth into the rating model, the ratio has to be linearised before logit regressions can be estimated. This can be done in the following way: the points obtained from dividing the
variable sales growth into groups and plotting them against the respective empirical log odds are smoothed by a filter, for example the one proposed in Hodrick and Prescott (1997), to reduce noise.
Then the original values of sales growth are transformed to log odds according to this smoothed relationship, and in any further analysis the transformed log odd values replace the original ratio as
input variable. This test for the appropriateness of the linearity assumption also allows for a first check as to whether the univariate dependence between the considered explanatory variables and
the default probability is as expected. For the Austrian data the univariate relationships between the investigated indicators and the default event coincide with the hypotheses postulated in Table
2, i.e. all ratios behave in an economically meaningful way.
5. Model Building 5.1. Pre-selection of Input Ratios After verifying that the underlying assumptions of a logistic regression are valid, the model building process can be started. However, although
typically a huge number of potential input ratios are available when developing a rating model, from a statistical point of view it is not advisable to enter all these variables into the logit
regression. If, for example, some highly correlated indicators are included in the model, the estimated coefficients will be significantly and systematically biased. Hence, it is preferable to
pre-select the most promising explanatory variables by means of the univariate power of and the correlation between the individual input ratios. To do so, given the data set at hand is large enough
to allow for it, the available data should be divided into one development and one validation sample by randomly splitting the whole data into two sub-samples. The first one, which typically contains
the bulk of all observations, is used to estimate rating models, while the remaining data is left for an out-of-sample evaluation. When splitting the data, it should be ensured that all observations
of one firm belong exclusively to one of the two sub-samples and that the ratio of defaulting to non-defaulting firms is similar in both data sets. For the Austrian data, about 70% of all
observations are chosen for the training sample as depicted in Table 3.
Evelyn Hayden
Table 3. Division of the data into in- and out-of-sample subsets
Year 1995 1996 1997 1998 1999
Training Sample Non-Defaulting Defaulting 828 43 429 44 187 25 20 2 17 1
Validation Sample Non-Defaulting Defaulting 357 11 187 24 74 21 7 0 6 0
Table 4. Pairwise correlation of all potential input ratios Ra tio
AR in %
+ .49 .42
+ .50 .40 .03
.48 + .39 .33 .32
+ .05 + .10 + .02 + .13 .12
+ .25 .21 + .01 + .20 .17 + .29
.05 + .10 + .06 .09 + .09 + .02 + .11
.05 + .13 .30 + .20 + .14 .21 .32 .05
.25 + .28 .10 .16 + .14 .03 .24 + .28 + .25
.36 + .38 .24 .18 + .20 .02 .24 + .25 + .25 + .96
.17 + .22 .07 .15 + .04 .01 + .02 .01 .19 .08 .02
+ .08 .12 + .06 + .07 .01 + .10 + .18 + .02 .12 .18 .18 .06
+ .38 .25 + .14 + .24 .14 + .03 + .10 .11 .05 .25 .28 + .00 .01
The concrete pre-selection process now looks as follows: At first, univariate logit models are estimated in-sample for all potential input ratios, whose power to identify defaults in the development
sample is evaluated via the criterion of the accuracy ratio (AR), a concept discussed in detail in Chapter XII. Afterwards, the pairwise correlation between all explanatory variables is computed to
identify sub-
II. Estimation of a Rating Model for Corporate Exposures
groups of highly correlated indicators, where by rule of thumb ratios with absolute correlation values of above 50% are pooled into one group. Finally, from each correlation sub-group (that usually
contains only ratios from one specific credit risk category) that explanatory variable is selected for the multivariate model building process that has got the highest and hence best accuracy ratio
in the univariate analysis. Table 4 displays the accuracy ratios of and the correlation between the financial ratios calculated for the Austrian data set. As can be seen, explanatory variable 1 is
highly correlated with indicator 2 (both measuring leverage) and ratio 10 with variable 11 (both reflecting profitability). Besides, the input ratios 2 and 11 have got better (higher) accuracy ratios
than the indicators 1 respectively 10, hence, the latter ones are dropped from the list of explanatory variables for the multivariate analysis.
5.2. Derivation of the Final Default Prediction Model Those ratios pre-selected in the previous step are now used to derive the final multivariate logit model. Usually, however, the number of
potential explanatory variables is still too high to specify a logit model that contains all of them, because the optimal model should contain only a few, highly significant input ratios to avoid
overfitting. Thus, even in our small example with only 12 indicators being left, we would have to construct and compare 212 = 4,096 models in order to determine the “best” econometric model and to
entirely resolve model uncertainty. This is, of course, a tough task, which becomes infeasible for typical short lists of about 30 to 60 pre-selected input ratios. Therefore, the standard procedure
is to use forward/backward selection to identify the final model (see Hosmer and Lemenshow, 2000). For the Austrian data set backward elimination, one possible method of these statistical stepwise
variable selection procedures that is implemented in most statistical software packages, was applied to derive the final logit model. This method starts by estimating the full model (with all
potential input ratios) and continues by eliminating the worst covariates one by one until the significance level of all remaining explanatory variables is below the chosen critical level, usually
set at 90% or 95%. Table 5 describes two logit models derived by backward elimination for the Austrian data. It depicts the constants of the logit models and the estimated coefficients for all those
financial ratios that enter into the respective model. The stars represent the significance level of the estimated coefficients and indicate that the true parameters are different from zero with a
probability of 90% (*), 95% (**) or 99% (***).
Evelyn Hayden
Table 5. Estimates of multivariate logit models Financial Ratio
Risk Factor
Equity / Total Assets
Bank Debt / Total Assets
Short Term Debt / Total Assets
Accounts Receivable / Net Sales
Accounts Payable / Net Sales
(Net Sales - Material Costs) / Personnel Costs
Net Sales / Total Assets
Model 2 (Final M.)
Model 1
-0.98 ** 1.55 *** 1.30 ** 1.71 * 2.31 **
-0.85 ** 1.21 *** 1.56 *** *
-0.23 ***
-0.23 ***
0.26 ** -1.18
+ + +
Model 1 arises if all 12 pre-selected variables are entered into the backward elimination process. Detailed analysis of this model shows that most signs of the estimated coefficients correspond to
the postulated hypotheses, however, the model specifies a positive relationship between the ratio number 9 “Net Sales / Total Assets”, while most empirical studies find that larger firms default less
frequently. What’s more, even for our data sample a negative coefficient was estimated in the univariate analysis. For this reason, a closer inspection of input ratio 9 seems appropriate. Although
the variable “Net Sales / Total Assets” does not exhibit a pairwise correlation of more than 50%, it shows absolute correlation levels of about 30% with several other covariates. This indicates that
this particular ratio is too highly correlated (on a multivariate basis) with the other explanatory variables and has to be removed from the list of variables entering the backward elimination
process. Model 2 in Table 5 depicts the resulting logit model. Here all coefficients are of comparable magnitude to those of model 1, except that the ratio “Accounts Receivable / Net Sales” becomes
highly insignificant and is therefore excluded from the model. As a consequence, all estimated coefficients are now economically plausible, and we accept model 2 as our (preliminary) final model
version. 5.3. Model Validation Finally, the derived logit model has to be validated. In a first step, some statistical tests should be conducted in order to verify the model’s robustness and goodness
of fit in-sample, and in a second step the estimated model should be applied to the validation sample to produce out-of-sample forecasts, whose quality can be evaluated with the concept of the
accuracy ratio and other methods depicted in Chapter XII.
II. Estimation of a Rating Model for Corporate Exposures
The goodness-of-fit of a logit model can be assessed in two ways: first, on the basis of some test statistics that use various approaches to measure the distance between the estimated probabilities
and the actual defaults, and second, by analysing individual observations which can each have a certain strong impact on the estimated coefficients (for details see Hosmer and Lemenshow, 2000). One
very popular goodness-of-fit test statistic is the Hosmer-Lemenshow test statistic that measures how well a logit model represents the actual probability of default for groups of firms of differently
perceived riskiness. Here, the observations are grouped based on percentiles of the estimated default probabilities. For the Austrian data 10% intervals were used i.e. 10 groups were formed. Now for
every group the average estimated default probability is calculated and used to derive the expected number of defaults per group. Next, this number is compared with the amount of realised defaults in
the respective group. The Hosmer-Lemenshow test statistic then summarises this information for all groups. In our case of 10 groups the test statistic for the estimation sample is chi-square
distributed with 8 degrees of freedom, and the corresponding p-value for the rating model can then be calculated as 79.91%, which indicates that the model fits quite well. However, the
Hosmer-Lemenshow goodness-of-fit test can also be regarded from another point of view for the application at hand. Until now we only dealt with the development of a model that assigns each
corporation a certain default probability or credit score, which leads towards a ranking between the contemplated firms. However, in practice banks usually want to use this ranking to map the
companies to an internal rating scheme that typically is divided into about ten to twenty rating grades. The easiest way to do so would be to use the percentiles of the predicted default
probabilities to build groups. If for example 10 rating classes shall be formed, then from all observations the 10% with the smallest default probabilities would be assigned the best rating grade,
the next 10% the second and so on till the last 10% with the highest estimated default probabilities would enter into the worst rating class. The Hosmer-Lemenshow test now tells us that, given one
would apply the concept described above to form rating categories, overall the average expected default probability per rating grade would fit with the observed default experience per rating class.
Table 6. Validation results of the final logit model Final Model (Model 2)
Accuracy Ratio
95% Conf. Interval
[0.3574, 0.5288]
[0.2741, 0.5438]
Hosmer-Lemenshow Test Statistic p-Value
79.91% 68.59%
What’s more, as depicted in Table 6, the in-sample accuracy ratio is about 44%, which is a reasonable number. Usually the rating models for corporate exposures presented in the literature have an
accuracy ratio between 40% and 70%. As discussed in Chapter XII in detail, AR can only be compared reliably for models that
Evelyn Hayden
are applied to the same data set, because differences in the data set such as varying relative amounts of defaulters or non-equal data reliability drives this measure heavily, hence, an AR of about
44% seems satisfactory. Finally, the out-of-sample accuracy ratio amounts to about 41%, which is almost as high as the in-sample AR. This implies that the derived rating model is stable and powerful
also in the sense that it produces accurate default predictions for new data that was not used to develop the model. Therefore, we can now eventually accept the derived logit model as our final
rating tool.
6. Conclusions This chapter focused on the special difficulties that are encountered when developing internal rating models for corporate exposures. Although the whole process with data collection
and processing, model building and validation usually takes quite some time and effort, the job is not yet completed with the implementation of the derived rating model. The predictive power of all
statistical models depends heavily on the assumption that the historical relationship between the model’s covariates and the default event will remain unchanged in the future. Given the wide range of
possible events such as changes in firms’ accounting policies or structural disruptions in certain industries, this assumption is not guaranteed over longer periods of time. Hence, it is necessary to
revalidate and eventually recalibrate the model regularly in order to ensure that its predictive power does not diminish.
References Basel Committee on Banking Supervision (1999), Credit Risk Modelling: Current Practices and Applications, Bank for International Settlements. Basel Committee on Banking Supervision (2001),
The Internal Ratings-Based Approach, Bank for International Settlements. Hayden E (2002), Modelling an Accounting-Based Rating Model for Austrian Firms, unpublished PhD dissertation, University of
Vienna. Hayden E (2003), Are Credit Scoring Models Sensitive to Alternative Default Definitions? Evidence from the Austrian Market, Working Paper, University of Vienna. Hodrick R, Prescott C (1997),
Post-War U.S. Business Cycles: An Empirical Investigation, Journal of Money, Credit and Banking 29, pp. 1-16. Hosmer W, Lemenshow S (2000), Applied Logistic Regression, John Wiley & Sons, New York.
III. Scoring Models for Retail Exposures Daniel Porath University of Applied Sciences at Mainz
1. Introduction Rating models for retail portfolios deserve a more detailed examination because they differ from other bank portfolios. The differences can mainly be attributed to the specific data
structure encountered when analyzing retail exposures. One implication is that different statistical tools have to be used when creating the model. Most of these statistical tools do not belong to
the banker’s standard toolbox. At the same time – and strictly speaking for the same reason – the banks’ risk management standards for retail exposures are not comparable to those of other
portfolios. Banks often use scoring models for managing the risk of their retail portfolios. Scoring models are statistical risk assessment tools especially designed for retail exposures. They were
initially introduced to standardize the decision and monitoring process. With respect to scoring, the industry had established rating standards for retail exposures long before the discussion about
the IRBA emerged. The Basel Committee acknowledged these standards and has modified the minimum requirements for the internal rating models of retail exposures. The aim of this chapter is to discuss
scoring models in the light of the minimum requirements and to introduce the non-standard statistical modelling techniques which are usually used for building scoring tables. The discussion starts
with an introduction to scoring models comprising a general description of scoring, a distinction of different kinds of scoring models and an exposure of the theoretical differences compared to other
parametric rating models. In Section 3, we extract the most important minimum requirements for retail portfolios from the New Basel Capital Framework and consider their relevance for scoring models.
Section 4 is dedicated to modelling techniques. Here, special focus is placed on the preliminary univariate analysis because it is completely different from other portfolios. We conclude with a short
Daniel Porath
2. The Concept of Scoring
2.1. What is Scoring? Like any rating tool, a scoring model assesses a borrower’s creditworthiness. The outcome of the model is expressed in terms of a number called “score”. Increasing scores
usually indicate declining risk, so that a borrower with a score of 210 is more risky than a borrower with a score of 350. A comprehensive overview about scoring can be found in Thomas et al. (2002).
The model which calculates the score is often referred to as a scoring table, because it can be easily displayed in a table. Table 1 shows an extract of two variables from a scoring model (usually
scoring models consist of about 7 up to 15 variables): Table 1. Extract from a scoring table
Score of the variables’ attributes Marital status of borrower
unmarried married or widowed divorced or separated no answer neutral
20 24 16 16 19 Age of borrower
18 d 24
24 d 32 32 d 38
38 d 50
50 d 65 65 or older neutral
The total customer score can be calculated by adding the scores of the borrower’s several characteristics. Each variable contains the category “neutral”. The score of this category represents the
portfolio mean of the scores for a variable and therewith constitutes a benchmark when evaluating the risk of a specific category. Categories with higher scores than “neutral” are below the average
portfolio risk and categories with lower scores are more risky than the average. For example,
III. Scoring Models for Retail Exposures
divorced borrowers display increased risk compared to the whole portfolio, because for the variable “marital status” the score of a divorced borrower (16) is lower than the score for the category
“neutral” (19). Scoring models usually are estimated with historical data and statistical methods. The historical data involves information about the performance of a loan (“good” or “bad”) and about
the characteristics of the loan some time before. The time span between the measurement of the characteristic on the one hand and the performance on the other hand determines the forecast horizon of
the model. Estimation procedures for scoring models are logistic regression, discriminant analysis or similar methods. The estimation results are the scores of the single characteristics. Usually the
scores are rescaled after estimation in order to obtain round numbers as in the example shown in Table 1. More details regarding estimation of the scores are shown in Section 4. 2.2. Classing and
Recoding Scoring is a parametric rating model. This means that modelling involves the estimation of the parameters E0,…,EN in a general model Si
E 0 E1 xi1 E 2 xi 2 E N xiN .
Here Si denotes the Score of the loan i = 1,…,I and x1, …,xN are the input parameters or variables for the loan i. The parameters En (n = 0,…,N) reflect the impact of a variation of the input factors
on the score. Scoring differs from other parametric rating models in the treatment of the input variables. As can be seen from Table 1, the variable “marital status” is a qualitative variable,
therefore it enters the model categorically. Some values of the variable have been grouped into the same category, like for example “married” and “widowed” in order to increase the number of
borrowers within each class. The grouping of the values of a variable is a separate preliminary step before estimation and is called “classing”. The general approach in (1) cannot manage categorical
variables and therefore has to be modified. To this end, the (categorical) variable xn has to be recoded. An adequate recoding procedure for scoring is to add the category “neutral” to the existing
number of C categories and replace xn by a set of dummy variables dxn(c), c = 1,…,C which are defined in the following way:
d xn(c)
1 for x n c °° ® 1 for x n " neutral" ° °¯ 0 else.
Daniel Porath
The recoding given in (2) is called effect coding and differs from the standard dummy variable approach where the dummies only take the values 0 and 1. The benefit from using (2) is that it allows
for the estimation of a variable-specific mean which is the score of the category “neutral”. As can be seen from (2), the value of the category “neutral” is implicitly given by the vector of dummy
values (-1,},-1). The coefficients of the other categories then represent the deviation from the variable-specific mean. This can be illustrated by recoding and replacing the first variable xi1 in
(1). Model (1) then becomes Si
E 0 E 01 E 11 d x11,i E12 d x12,i E1C d x1C ,i E 2 xi 2 E N xiN
Here (E10 – E11 – E12 – … – E1C) is the variable-specific average (“neutral”) and the coefficients E11,…,E1C represent the deviation of the individual categories from the average. The scores of the
single categories (see Table 1) are given by the sums E0 + E11, E0 + E12, …, E0 + E1C. Apart from the special recoding function (2), the procedure discussed so far is the standard procedure for
handling categorical variables. The major characteristic of scoring is that the same procedure is conducted for the quantitative variables. This means that all variables are classed and recoded prior
to estimation and therefore are treated as categorical variables. As a consequence, the overall mean E0 in (3) disappears and the model can be rewritten as: Si
ȕ11d x11,i ȕ1C d x1C,i ȕ N 0 ȕ N 1d xN 1,i ȕ NC d xNC,i
With an increasing number of variables and categories, equation (4) soon becomes unmanageable. This is why scoring models are usually displayed in tables. The effect of classing and recoding is
twofold: On the one hand, the information about the interclass variation of the quantitative variable disappears. As can be seen from Figure 1, an increasing age reduces risk. The model, however,
does not indicate any difference between the age of 39 and 49, because the same score is attributed to both ages. If the variable age entered the model as a quantitative variable with the estimated
coefficient Eage, any difference in age ('age) would be captured by the model (its effect on risk, i.e. the score, ceteris paribus, being Eage 'age). On the other hand, categorization allows for
flexible risk patterns. Referring again to the example of age, the impact on risk may be strong for the lower age categories while diminishing for increasing ages. Such a nonlinear impact on the
score Si can be modelled by selecting narrow classes for lower ages and broad classes for higher ages. The quantitative model, on the contrary, attributes the same impact of Eage to a one-year change
in age starting from any level. Thus, classing and recoding is an easy way to introduce nonlinearities in the model.
III. Scoring Models for Retail Exposures
The theoretical merits from classing and recoding, however, were not pivotal for the wide use of scoring models. The more important reason for classing and recoding is that most of the risk-relevant
input variables for retail customers are qualitative. These are demographic characteristics of the borrower (like marital status, gender, or home ownership), the type of profession, information about
the loan (type of loan, intended use) and information about the payment behaviour in the past (due payment or not). The reason for transforming the remaining quantitative variables (like age or
income) into categorical variables is to obtain a uniform model. 2.3. Different Scoring Models Banks use different scoring models according to the type of loan. The reason is that the data which is
available for risk assessment is loan-specific. For example, the scoring of a mortgage loan can make use of all the information about the real estate whereas there is no comparable information for
the scoring model of a current account. On the other hand, models for current accounts involve much information about the past payments observed on the account (income, drawings, balance) which are
not available for mortgage loans. For mortgage loans, payment information generally is restricted to whether the monthly instalment has been paid or not. As a consequence, there are different models
for different products and when the same person has two different loans at the same bank, he or she generally will have two different scores. This is a crucial difference to the general rating
principles of Basel II. Scoring models which are primarily based on payment information are called behavioural scoring. The prerequisite for using a behavioural score is that the bank observes
information about the payment behaviour on a monthly basis, so that the score changes monthly. Furthermore, in order to obtain meaningful results, at least several monthly payment transactions should
be observed for each customer. Since the behavioural score is dynamic, it can be used for risk monitoring. Additionally, banks use the score for risk segmentation when defining strategies for retail
customers, like for example cross-selling strategies or the organization of the dunning process (“different risk, different treatment”). When payment information is sporadic, it is usually not
implemented in the scoring model. The score then involves static information which has been queried in the application form. This score is called an application score. In contrast to the behavioural
score, the application score is static, i.e. once calculated it remains constant over time. It is normally calculated when a borrower applies for a loan and helps the bank to decide whether it should
accept or refuse the application. Additionally, by combining the score with dynamic information it can be used as a part of a monitoring process.
Daniel Porath
3. Scoring and the IRBA Minimum Requirements Internal Rating systems for retail customers were in use long before Basel II. The reason is that statistical models for risk assessment are especially
advantageous for the retail sector: on the one hand, the high granularity of a retail portfolio allows banks to realize economies of scale by standardization of the decision and monitoring processes.
On the other hand, the database generally consists of a broad number of homogenous data. Homogeneity is owed to standardized forms for application and monitoring. As a consequence, the database is
particularly suited for modelling. In fact, statistical procedures for risk forecasting of retail loans have a history of several decades (cf. Hand, 2001), starting with the first attempts in the
1960s and coming into wide use in the 1980s. Today, scoring is the industrial standard for the rating of retail customers. Since these standards have developed independently from the New Basel
Capital Approach, there are some differences to the IRBA minimum requirements. The Capital Accord has acknowledged these differences and consequently modified the rules for retail portfolios. Hence
most banks will meet the minimum requirements, possibly after some slight modifications of their existing scoring systems. In the following subsections we discuss the meaning of some selected minimum
requirements for scoring and therewith give some suggestions about possible modifications. The discussion is restricted to the minimum requirements, which according to our view, are the most relevant
for scoring. We refer to the Revised Framework of the Basel Committee on Banking Supervision from June 2004 (cf. BIS, 2004) which for convenience in the following is called Capital Framework. 3.1.
Rating System Design Following § 394 of the Capital Framework, a rating system comprises the assignment of a rating to credit risk and the quantification of default and loss estimates. However,
scoring models only provide the first component, which is the score Si. The default and loss estimates (which in the Capital Framework are PD, LGD, and EAD) usually are not determined by the scoring
model. When a bank intends to use a scoring model for the IRBA, these components have to be assessed separately. 3.2. Rating Dimensions Generally, the IRBA requires a rating system to be separated by
a borrowerspecific component and a transaction-specific component (see § 396 of the Capital Framework). However, in the previous section we have seen that scoring models typically mix variables about
the borrower and the type of loan. In order to render scoring models eligible to the IRBA, the Basel Committee has modified the general approach on the rating dimensions for retail portfolios.
According to § 401 of the Capital Framework both components should be present in the scoring model,
III. Scoring Models for Retail Exposures
but need not be separated. Consequently, when referring to the risk classification of retail portfolios, the Capital Framework uses the term pool instead of rating grade. With § 401, banks have
greater flexibility when defining pools, as long as the pooling is based on all risk-relevant information. Pools can be customer-specific or loan-specific (like in a scoring model) or a mixture of
both. A further consequence of § 401 is that one the same borrower is allowed to have two different scores. 3.3. Risk Drivers Paragraph 402 of the Capital Framework specifies the risk drivers banks
should use in a scoring model. These cover borrower characteristics, transaction characteristics and delinquency. As seen in the previous section, borrower and transaction characteristics are
integral parts of a scoring table. Delinquency, on the other hand, is not usually integrated in a scoring model. The rationale is that scoring aims at predicting delinquency and that therefore no
forecast is needed for a delinquent account. However, a correct implementation of a scoring model implies that delinquent accounts are separated (and therefore identified), so that the calculation of
the score can be suppressed. Hence, when using a scoring model, normally all risk drivers mentioned in § 402 of the Capital Framework are integrated. 3.4. Risk Quantification Risk quantification in
terms of Basel II is the assessment of expected loss as the product from PD, LGD and EAD. Since the expected loss of a loan determines the risk weight for the capital requirement, the regulatory
capital framework contains precise definitions for the quantification of these components. This means that the underlying time horizon is fixed to one year and that the underlying default event is
explicitly defined. Scoring models generally do not follow these definitions since their primary aim is not to fulfil the supervisory requirements but to provide internal decision support. The
application score, for example, tells whether an application for a loan should be accepted or refused and for this decision it would not suffice to know whether the loan will default in the following
year only. Instead, the bank is interested to know whether the loan will default in the long run, and therefore scoring models generally provide long-run predictions. Additionally, the default event
sets as soon as the loan becomes no longer profitable for the bank and this is usually not the case when the loan defaults according the Basel definition. It depends, instead, on the bank’s internal
calculation. To sum up, scoring models used for internal decision support generally will not comply with the requirements about risk quantification. A strategy to conserve the
Daniel Porath
power of an internal decision tool and at the same time achieve compliance with the minimum requirements is: x Develop the scoring model with the internal time-horizons and definitions of default x
Define the pools according to § 401 of the Capital Framework x Estimate the pool-specific PD, LGD and EAD following the Basel definitions in a separate step. Finally, it should be noted that the time
horizon for assigning scores is not specified in the Basel Accord. In paragraph 414 of the Capital Framework it is stated that the horizon should be generally longer than one year. The long-term
horizon normally used by scoring systems therefore is conforming to the minimum requirements. 3.5. Special Requirements for Scoring Models In § 417 the Capital Framework explicitly refers to scoring
models (and other statistical models) and specifies some additional requirements. The rationale is that the implementation of a scoring model leads to highly standardized decision and monitoring
processes where failures may be overlooked or detected too late. Therefore, the requirements given in § 417 refer to special qualitative features of the model and special control mechanisms. These
requirements will generally be met when banks follow the industrial standards for the development and implementation of scoring models. The most important standards which have to be mentioned in this
context are: x x x x
the use of a representative database for the development of the model documentation about the development including univariate analysis preparation of a user’s guide implementation of a monitoring
4. Methods for Estimating Scoring Models The statistical methods which are suitable for estimating scoring models comprise the techniques introduced in Chapter I, e.g. logit analysis, or discriminant
analysis, with the special feature that all input variables enter the model as categorical variables. This requires an extensive preliminary data analysis which is referred to as “univariate
analysis”. Univariate analysis generally is interesting for rating analysis because it serves to detect problems concerning the data and helps to identify the most important risk-drivers. However,
for retail portfolios, univariate analysis is more complex and more important than in the general case. There are several reasons for this:
III. Scoring Models for Retail Exposures
x Univariate analysis determines the classes on which the recoding is based (see Section 2) and hereby becomes an integral part of the model-building process. x In retail portfolios, qualitative
information is predominant (e.g. a person’s profession, marital status). x In retail portfolios, many qualitative variables are hard factors and do not involve human judgement. Examples include a
person’s profession, marital status and gender. Note that qualitative information encountered in rating systems for corporate loans, often require personal judgement on part of the analyst (e.g. a
company’s management, the position in the market or the future development of the sector where the company operates). x For retail portfolios, a priori, it is often unknown whether a variable is
relevant for the risk assessment. For example, there is no theory which tells whether a borrower’s profession, gender or domicile helps in predicting default. This is different for the corporate
sector where the main information consists of financial ratios taken from the balance sheet. For example, EBIT ratios measure the profitability of a firm and since profitability is linked to the
firm’s financial health, it can be classified as a potential risk factor prior to the analysis. For retail portfolios, univariate analysis replaces a priori knowledge and therefore helps to identify
variables with a high discriminatory power. x Often, the risk distribution of a variable is unknown a priori. This means that before analyzing a variable, it is not clear which outcomes correlate
with high risks and which outcomes correlate with low risks. This is completely different from the corporate sector, where for many financial ratios, the risk patterns are well-known. For example, it
is a priori known that ceteris paribus, high profitability leads to low risk and vice versa. For retail portfolios, the risk distribution has to be determined with the help of univariate analysis.
The consequences are two-fold: On one hand, univariate analysis is particularly important for replacing a priori knowledge. On the other hand, the statistical methods applied in the univariate
analysis should be designed to handle qualitative hard factors. The basic technique for creating a scoring model is crosstabulation. Crosstabs display the data in a two-dimensional frequency table,
where the rows c = 1,…,C are the categories of the variable and the columns are the performance of the loan. The cells contain the absolute number of loans included in the analysis. Crosstabulation
is flexible because it works with qualitative data as well as quantitative data - quantitative information simply has to be grouped beforehand. A simple example for the variable “marital status” is
displayed in Table 2:
Daniel Porath
Table 2. Crosstab for the variable “Marital status”
Marital status of borrower No. of good loans unmarried 700 married or widowed 850 divorced or separated 450
No. of bad loans 500 350 650
The crosstab is used to assess the discriminative power. The discriminative power of a variable or characteristic can be described as its power to discriminate between good and bad loans. However, it
is difficult to compare the absolute figures in the table. In Table 2, the bank has drawn a sample of the good loans. This is a common procedure, because often it is difficult to retrieve historical
data. As a consequence, in the crosstab, the number of good loans cannot be compared to the number of bad loans of the same category. It is therefore reasonable to replace the absolute values by the
column percentages for the good loans P(c|Good) and for the bad loans P(c|Bad), see Table 3: Table 3. Column percentages, WoE and IV
Unmarried married or widowed divorced or separated
Marital status of borrower P(c|Good) P(c|Bad) 0.3500 0.3333 0.4250 0.2333 0.2250 0.4333 IV
WoE 0.0488 0.5996 -0.6554 0.2523
The discriminative power can be assessed by regarding the risk distribution of the variable which is shown by the Weight of Evidence WoEc (see Good, 1950). The Weight of Evidence can be calculated
from the column percentages with the following formula: WoE
ln Pc | Good ln Pc | Bad
The interpretation of WoE is straightforward: Increasing values of the Weight of Evidence indicate decreasing risk. A value of WoEc > 0 (WoEc < 0) means that in category c good (bad) loans are
over-represented. In the above example, the Weight of Evidence shows that loans granted to married or widowed customers have defaulted with a lower frequency than those granted to divorced or
separated customers. The value of WoE close to 0 for unmarried customers displays that the risk of this group is similar to the average portfolio risk.
III. Scoring Models for Retail Exposures
The Weight of Evidence can also be interpreted in terms of the Bayes theorem. The Bayes theorem expressed in log odds is ln
P(Good|c) P(Bad|c)
P(c|Good) P(Good) ln . P(c|Bad) P(Bad)
Since the first term on the right hand of Equation 6 is the Weight of Evidence, it represents the difference between the a posteriori log odds and the a priori log odds. The value of WoEc therefore
measures the improvement of the forecast through the information of category c. Hence it is a performance measure for category c. A comprehensive performance measure for all categories of an
individual variable can be calculated as a weighted average of the Weights of Evidence for all categories c = 1,…,C. The result is called Information Value, IV (cf. Kullback, 1959) and can be
calculated by: IV
¦ Woec P(c|Good) P(c|Bad)
c 1
A high value of IV indicates a high discriminatory power of a specific variable. The Information Value has a lower bound of zero but no upper bound. In the example of Figure 3, the Information Value
is 0.2523. Since there is no upper bound, from the absolute value we cannot tell whether the discriminatory power is satisfactory or not. In fact, the Information Value is primarily calculated for
the purpose of comparison to other variables or alternative classings of the same variable and the same portfolio. The Information Value has the great advantage of being independent from the order of
the categories of the variable. This is an extremely important feature when analyzing data with unknown risk distribution. It should be noted that most of the better-known performance measures like
the Gini coefficient or the power curve do not share this feature and therefore are of limited relevance only for the univariate analysis of retail portfolios. Crosstabulation is a means to generate
classings which are needed for the recoding and estimation procedures. There are three requirements for a good classing. First, each class should contain a minimum number of good and bad loans,
otherwise the estimation of the coefficients E in (4) tend to be imprecise. Following a rule of thumb there should be at least 50 good loans and 50 bad loans in each class. Probably this is why in
the above example there is no separate category “widowed”. Second, the categories grouped in each class should display a similar risk profile. Therefore, it is feasible to combine the categories
“separated” and “divorced” to one single class. Third, the resulting classing should reveal a plausible risk pattern (as indicated by the Weight of Evidence) and a high performance (as indicated by a
high Information Value).
Daniel Porath
Fixing a classing is complex, because there is a trade-off between the requirements. On one hand, the Information Value tends to increase with an increasing number of classes, on the other hand,
estimation of the coefficients ȕ tends to improve when the number of classes decreases. In order to fix the final classing analysts produce a series of different crosstabs and calculate the
corresponding Weights of Evidence and Information Values. Finally, the best classing is selected according to the criteria above. The final classing therefore is the result of a heuristic process
which is strongly determined by the analyst’s know-how and experience.
5. Summary In this section, we briefly summarise the ideas discussed here. We have started from the observation that for retail portfolios, the methods for developing rating models are different from
those applied to other portfolios. This is mainly due to the different type of data typically encountered when dealing with retail loans: First, there is a predominance of hard qualitative
information which allows the integration of a high portion of qualitative data in the model. Second, there is little theoretical knowledge about the risk relevance and risk distribution of the input
variables. Therefore, analyzing the data requires special tools. Finally, there is a high amount of comparably homogenous data. As a consequence, statistical risk assessment tools were developed long
before rating models for other banks’ portfolios have boosted and the standards have been settled independently from Basel II. The standard models for the rating of retail portfolios are scoring
models. Generally, scoring models comply with the IRBA minimum requirements as long as they fulfill the industrial standards. However, usually they only constitute risk classification systems in
terms of the IRBA and it will be necessary to add a component which estimates PD, EAD and LGD. The estimation of a scoring model requires the classing of all individual variables. This is done in a
preliminary step called univariate analysis. The classings can be defined by comparing the performance of different alternatives. Since risk distribution of the variables is often completely unknown,
the univariate analysis should rely on performance measures which are independent from the ordering of the single classes, like for example the Weight of Evidence and the Information Value. Once the
classing is settled the variables have to be recoded in order to build the model. Finally, the model can be estimated with standard techniques like logit analysis or discriminant analysis.
III. Scoring Models for Retail Exposures
References BIS (2004), International Convergence of Capital Measurement and Capital Standards, Basel Committee on Banking Supervision, June 2004. Good IJ (1950); Probability and the Weighing of
Evidences, Charles Griffin, London. Hand DJ (2001), Modelling consumer credit risk, IMA Journal of Management Mathematics 12, pp. 139-155. Kullback S (1959), Information Theory and Statistics, Wiley,
New York. Thomas, LC, Crook J, Edelman D (2002), Credit Scoring and its applications, Siam Monographs on Mathematical Modeling and Computation
IV. The Shadow Rating Approach – Experience from Banking Practice Ulrich Erlenmaier KfW Bankengruppe1
1. Introduction In this article we will report on some aspects of the development of shadow rating systems found to be important when re-devising the rating system for large corporations of KfW
Bankengruppe (KfW banking group). The article focuses on general methodological issues and does not necessarily describe how these issues are dealt with by KfW Bankengruppe. Moreover, due to
confidentiality we do not report estimation results that have been derived. In this introductory section we want to describe briefly the basic idea of the shadow rating approach (SRA), then summarise
the typical steps of SRA rating development and finally set out the scope of this article. The shadow rating approach is typically employed when default data are rare and external ratings from the
three major rating agencies (Standard & Poor’s, Moody’s or Fitch) are available for a significant and representative part of the portfolio. As with other approaches to the development of rating
systems, the first modelling step is to identify risk factors – such as balance sheet ratios or qualitative information about a company – that are supposed to be good predictors of future defaults.
The SRA’s objective is to choose and weight the risk factors in such a way as to mimic external ratings as closely as possible when there is insufficient data to build an explicit default prediction
model (the latter type of model is e.g. described in Chapter I. To make the resulting rating function usable for the bank’s internal risk management as well as for regulatory capital calculation, the
external rating grades (AAA, AA, etc.) have to be calibrated, i.e. a probability of default (PD) has to be attached to them. With these PDs, the external grades can then be mapped to the bank’s
internal rating scale. The following modular architecture is typical for SRA but also for other types of rating systems: 1
The opinions expressed in this article are those of the author and do not reflect views of KfW Bankengruppe (or models applied by the bank).
1. 2. 3. 4.
Ulrich Erlenmaier
Statistical model Expert-guided adjustments Corporate group influences / Sovereign Support Override
The statistical model constitutes the basis of the rating system and will most likely include balance sheet ratios, macroeconomic variables (such as country ratings or business cycle indicators) and
qualitative information about the company (such as quality of management or the company’s competitive position). The statistical model will be estimated from empirical data that bring together
companies’ risk factors on the one hand and their external ratings on the other hand. The model is set up to predict external ratings – more precisely, external PDs – as efficiently as possible from
the selected risk factors. The second modelling layer of the rating system, that we have termed “Expertguided adjustments” will typically include risk factors for which either no historical
information is available or for which the influence on external ratings is difficult to estimate empirically.2 Consequently, these risk factors will enter the model in the form of adjustments that
are not estimated empirically but that are determined by credit experts. The third modelling layer will take into account the corporate group to which the company belongs or probably some kind of
government support.3 This is typically done by rating both the obligor on a standalone basis and the entity that is supposed to influence the obligor’s rating. Both ratings are then aggregated into
the obligor’s overall rating where the aggregation mechanism will depend on the degree of influence that the corporate group / sovereign support are assessed to have. Finally, the rating analyst will
have the ability to override the results as derived by steps 1 to 3 if she thinks that – due to very specific circumstances – the rating system does not produce appropriate results for a particular
obligor. This article will focus on the development of the rating system’s first module, the statistical model.4 The major steps in the development of the statistical model are: 1. Deployment of
software tools for all stages of the rating development process 2. Preparation and validation of the data needed for rating development (typically external as well as internal data sets)5 2
This occurs e.g. when a new risk factor has been introduced or when a risk factor is relevant only for a small sub-sample of obligors. There also might be other types of corporate relationships that
can induce the support of one company for another one. For example, a company might try to bail out an important supplier which is in financial distress. However, since this issue is only a minor
aspect of this article we will concentrate on the most common supporter-relationship in rating practice, i.e. corporate groups and sovereign support. We will, however, also include a short proposal
for the empirical estimation of corporate group influences / sovereign support (step 3).
IV. The Shadow Rating Approach – Experience from Banking Practice
3. 4. 5. 6. 7. 8.
Calibration of external ratings Sample construction for the internal rating model Single (univariate) factor analysis Multi factor analysis and validation Impact analysis Documentation
This article deals with steps 3 to 6, each of which will be presented in one separate section. Nevertheless, we want to provide comments on the other steps and emphasise their relative importance
both in qualitative as in quantitative terms for the success of a rating development project: x Initial project costs (i.e. internal resources and time spent for the initial development project) will
be very high and mainly driven by steps 1 to 3 (but also 8) with step 1 being the single biggest contributor. In contrast, follow-up costs (future refinement projects related to the same rating
system) can be expected to be much lower and more equally distributed across all steps with step 2 most likely being the single biggest contributor. x The importance of step 2 for the statistical
analyses that build on it must be stressed. Moreover, this step will be even more important when external data sets are employed. In this case, it will also be necessary to establish compatibility
with the internal data set. x Step 7: Once a new rating system has been developed and validated, it will be important to assess the impact of a change to the new system on key internal and regulatory
portfolio risk measures, including for example, expected loss or regulatory and economic capital. x Regarding step 8 we found it very helpful and time saving to transfer a number of the results from
statistical analyses to appendices that are automatically generated by software tools. Finally, we want to conclude the introduction with some comments on step 1, the deployment of software tools.
The objective should be to automate the complex rating development process as completely as possible through all the necessary steps, in order to reduce the manpower and a-priori know how required to
conduct a development project. Therefore, different, inter-connected tools are needed, including:
In this article, the term “external data sets” or “external data” will always refer to a situation where – additional to internally rated companies – a typically much larger sample of not
internally-rated companies is employed for rating development. This external data set will often come from an external data provider such as e.g. Bureau van Dijk but can also be the master sample of
a data-pooling initiative. In such a situation, usually only quantitative risk factors will be available for both, the internal and the external data set while qualitative risk factors tend to be
confined to the internal data set. In this situation, a number of specific problems arise that have to be taken into account. The problems we found most relevant will be dealt with in this article.
Ulrich Erlenmaier
x Datamarts: Standardised reports from the bank’s operating systems or data warehouse covering all information relevant for rating development / validation on a historical basis x Data set
management: to make external data compatible with internal data, for sample construction, etc. x Statistical analysis tools: tailor made for rating development and validation purposes. These tools
produce documents that can be used for the rating system’s documentation (step 8). These documents comprise all major analyses as well as all relevant parameters for the new rating algorithm. x
Generic rating algorithm tool: Allows the application of new rating algorithms to the relevant samples. It should be possible to customise the tool with the results from the statistical analyses and
to build completely new types of rating algorithms.
2. Calibration of External Ratings
2.1. Introduction The first step in building an SRA model is to calibrate the external agencies’ rating grades, i.e. to attach a PD to them. The following list summarises the issues we found
important in this context: x External rating types: which types of ratings should be employed? Probability of default (PD) / Expected loss (EL) ratings, Long- / Short-term ratings, Foreign / Local
currency ratings x External rating agencies: pros and cons of the different agencies’ ratings with respect to the shadow rating approach x Default definition / Default rates: differences between
external and internal definitions of the default event and of default rates will be discussed x Samples for external PD estimation: which time period should be included, are there certain obligor
types that should be excluded? x PD estimation technique: discussion of the pros and cons of the two major approaches, the cohort and the duration-based approach x Adjustments of PD estimates: if PD
estimates do not have the desired properties (e.g. monotonicity in rating grades), some adjustments are required x Point-in-time adjustment: external rating agencies tend to follow a
through-thecycle-rating philosophy. If a bank’s internal rating philosophy is point-in-time then either the external through-the-cycle ratings must be adjusted to make them sensitive to changes in
macroeconomic conditions or,
IV. The Shadow Rating Approach – Experience from Banking Practice
the effects of developing on a through-the-cycle benchmark must be taken into account The above mentioned issues will be addressed in the following sections. 2.2. External Rating Agencies and Rating
Types For SRA ratings systems, typically the ratings of the three major ratings agencies – Standard & Poors (S&P), Moody’s and Fitch – are employed. Two questions arise: 1. For each rating agency,
which type of rating most closely matches the bank’s internal rating definition? 2. Which rating agencies are particularly well suited for the purpose of SRA development? Regarding question 1 issuer
credit ratings for S&P and Fitch and issuer ratings for Moody’s were found to be most suitable since these ratings assess the obligor and not an obligor’s individual security. Moreover, it will
usually make sense to choose the long-term, local currency versions for all rating agencies and rating types.6 Regarding question 2 the major pro and cons were found to be the following: x Length of
rating history and track record: S&P and Moody’s dominate Fitch. See e.g. Standard & Poor’s (2005), Moody’s (2005), and Fitch (2005). x Rating scope: while both S&P and Fitch rate an obligor with
respect to its probability of default (PD), which is consistent with banks’ internal ratings as required by Basel II, Moody’s assesses its expected loss (EL).This conclusion draws on the rating
agencies’ rating definitions (cf. Standard and Poor’s (2002), Moody’s (2004), and Fitch, 2006), discussions with rating agency representatives and the academic literature (cf. Güttler, 2004). x Are
differences between local and foreign currency ratings (LC and FC) always identifiable? While S&P attaches a local and foreign currency rating to almost every issuer rating, this is not always the
case for Moody’s and Fitch. Based on an assessment of these pros and cons it has to be decided whether one agency will be preferred when more than one external rating is available for one obligor.
Long-term ratings because of the Basel II requirements that banks are expected to use a time horizon longer than one year in assigning ratings (BCBS (2004), § 414) and because almost all analyses of
external ratings are conducted with long-term ratings. Local currency ratings are needed when a bank measures transfer risk separately from an obligor’s credit rating.
Ulrich Erlenmaier
The following sections will deal with PD estimations for external rating grades. In this context we will – for the sake of simplicity – focus on the agencies S&P and Moody’s. 2.3. Definitions of the
Default Event and Default Rates For the PD estimates from external rating data to be consistent with internal PD estimates, a) the definition of the default event and b) the resulting definition of
default rates (default counts in relation to obligor counts) must be similar. While there might be some minor differences regarding the calculation of default rates7, the most important differences
in our opinion stem from different definitions of the default event. Here are the most important deviations:8 x Different types of defaults (bank defaults vs. bond market defaults): a company that
has problems meeting its obligations might e.g. first try to negotiate with its bank before exposing it to a potential default in the bond market. x Differences in qualitative default criteria:
according to Basel II, a company is to be classified as default when a bank considers that the obligor is unlikely to pay its credit obligations in full. This could easily apply to companies that are
in the lowest external non-default rating grades.9 x Number of days of delayed payment that will lead to default Basel II: 90 days S&P: default when payments are not made within grace period which
typically ranges from 10 to 30 days Moody’s: 1 day x Materiality: While external agencies will measure defaults without respect to the size of the amount due, under Basel II, payment delays that are
small with respect to the company’s overall exposure will not be counted as defaults. In order to assess the effects of these and other differences in default definition on estimated PDs, the default
measurement of S&P and Moody’s has to be compared with the bank’s internal default measurement. In a first step S&P and Moody’s could be compared with each other (a). If the differences between the
two external agencies are not significant, internal defaults can be compared with the pooled ex7
Examples: a) While the external agencies count the number of obligors only at the beginning of the year and then the resulting defaults from these obligors over the year, a bank might count on a
finer basis (e.g. monthly) in order to track as many obligors as possible; b) defaults that occur because of foreign currency controls and not because the individual obligor is not able to meet its
obligations should not be counted as default for the purpose of PD-estimation if a bank quantifies transfer risk separately. The Basel II default definition is given in (BCBS (2004), § 452). The
rating agencies’ default definitions are described in their respective default reports (cf. Standard & Poor’s (2005), Moody’s (2005), and Fitch, 2005). This assessment draws on external agencies’
verbal definitions of those rating grades (cf. Standard and Poor’s (2002), Moody’s (2004), and Fitch, 2006).
IV. The Shadow Rating Approach – Experience from Banking Practice
ternal defaults of S&P and Moody’s (b). The following technique might be useful for steps a) and b): 1. Estimation of the ratio of Moody’s defaults for each S&P default and the ratio of external
defaults for each internal default respectively. 2. This ratio can be interpreted as an adjustment factor with which a) PDs derived for Moody’s have to be scaled in order to arrive at PDs compatible
with S&P and b) with which external PDs have to be adjusted in order to be comparable with internally derived PDs. 3. Calculation of confidence intervals for the resulting estimators using a
multinomial model and a Chi-square-type test statistic10 Depending on the estimation results it has to be decided whether an adjustment factor should be applied. If estimators prove to be very
volatile, additional default data (e.g. form data pooling initiatives) might be needed to arrive at more confident estimates. 2.4. Sample for PD Estimation For the estimation of external PDs the
obligor samples of S&P and Moody’s as used by these agencies to derive default rates in their annual default reports can be employed.11 The following two dimensions of sample construction should in
our opinion be closely analysed: 1. Obligor sector and country: should all obligor types be included irrespective of industry sector and country? 2. Length of time series With respect to 1) one can
start with the hypotheses that – as ratings agencies claim – external ratings are comparable across industry sectors and countries.12 Consequently, for those rating types (S&P and Fitch) that aim to
measure an obligor’s PD, PD estimates would only have to be conditional on an obligor’s rating grade, not its industry sector or country. Where ratings measure an obligor’s EL for senior unsecured
obligations (Moody’s), however, PD estimates would also have to be conditional on all obligor characteristics that affect the LGD on these obligations, as could – for example – be the case for a
company’s industry sector 10
For example, for the comparison of external and internal defaults, the multinomial random variable would for each defaulted company indicate one of three potential outcomes: 1) External and internal
default, 2) External default but no internal default, 3) Internal default but no external default. Moreover, due to the typically small amount of data, no large-sample approximation but the exact
Chi-square distribution should be employed. Confidence limits can be estimated by applying the test statistic on a sufficiently fine grid for the parameters of the multinomial distribution. 11 See
Standard and Poor’s (2005) and Moody’s (2005). 12 See agencies’ rating definitions: Standard and Poor’s (2002) and Moody’s (2004) respectively.
Ulrich Erlenmaier
or home country. But if LGD differences across obligors are small compared to PD differences between rating grades, estimates based only on the rating grade might be tolerable for pragmatic reasons.
To address the first issues (comparability of ratings across countries/sectors), the literature on differences between external default rates across industry sectors and countries should be reviewed.
We found only three papers on the default rate issue.13 None identified country specific differences while they were inconclusive with respect to sector specific differences.14 Regarding the second
issue (relative size of the LGD effect), the bank’s internal LGD methodology should be analysed with respect to differences between senior unsecured LGDs across industries and countries.15 Based on
the assessment of both issues it should be decided as to whether country or industry sector specific estimates are needed. We now turn to the second dimension of sample construction, i.e. the length
of the time series. On the one hand, a long time series will reduce statistical uncertainty and include different states of the business cycle. On the other hand, there is the problem that because of
structural changes, data collected earlier, might not reflect current and future business conditions. A sensible starting point will be the time horizon that is most often used by both the rating
agencies and the academic literature (starting with the years 1981 and 1983 respectively). One can then analyse changes in rating grade default rates over time and assess whether structural changes
in the default rate behaviour can be identified or whether most of the variability can be explained by business cycle fluctuations. 2.5. PD Estimation Techniques Once the sample for PD estimation has
been derived, the estimation technique must be specified. Typically, the so called cohort method (CM) is applied where the number of obligors at the beginning of each year in each rating grade and
the number of obligors that have defaulted in this year are counted respectively. Both figures are then summed over all years within the time horizon. The resulting PD estimate is arrived at by
dividing the overall number of defaults by the overall number of obligors.16
See Ammer and Packer (2000), Cantor and Falkenstein (2001), and Cantor (2004) Ammer and Packer (2000) found default-rate differences between banks and non-banks. However, they pointed out that these
differences are most likely attributable to a specific historic event, the US Savings and Loans crisis, and should therefore not be extrapolated to future default rates. Cantor and Falkenstein
(2001), in contrast, found no differences in the default rates of banks and non-banks once one controls for macroeconomic effects. 15 For a discussion of LGD-estimation methods we refer to Chapter
VIII of this book. 16 This method can be improved on by counting on a monthly or even finer base. 14
IV. The Shadow Rating Approach – Experience from Banking Practice
The duration-based (DB) approach aims to improve on the cohort-method by including information on rating migration in the estimation process. The underlying idea is to interpret default events as the
result of a migration process. In the simplest setting where the migration process can be assumed to follow a stationary Markov process, a T-year migration matrix MT can be derived by applying the
one year migration matrix MT T times: MT
M 1T
The continuous time analogue of (1) is Mt
Exp(m t ) ,
where m is the marginal migration matrix, t the time index and Exp(.) the matrix exponential.17 Hence, M1 (including in particular 1-year default probabilities) can be derived by first estimating m
from transition counts and then applying the matrix exponential to the estimated marginal transition matrix. A detailed description of the duration-based approach (DB) and the cohort method (CM) can
be found in Schuermann and Hanson (2004). They also state the major differences between CM and DB estimates, in particular, that the latter produce PDs that spread more widely across the rating
scale, i.e. PDs for good rating grades will be much lower and PDs for bad ratings will be much higher under DB than under CM. Both estimation techniques have their pros and cons: x DB makes more use
of the available information by also taking into account rating migrations. For this reason, the DB method can also produce positive PD estimates for the best rating grades where no default
observations are available. x CM is more transparent and does not rely on as many modelling assumptions as the DB method. As long as there is no clear-cut empirical evidence on the relative
performance of both methods, it seems therefore sensible to apply both techniques and compare the resulting estimates. However, it is likely that in the future such comparisons will become available
and therefore it will be helpful to keep an eye on the corresponding regulatory and academic discussion. 2.6. Adjustments Because the PD estimates resulting from the application of the estimation
methods as described in the previous section will not always be monotonic (i.e. not always will PD estimates for better rating grades be lower than for worse rating grades), the estimates have to be
adapted in these non-monotonic areas. One option is to 17
The matrix exponential applies the exponential series to matrices: exp (m) = I + m1/1! + m2/2! + ... , where I is the identity matrix.
Ulrich Erlenmaier
regress the logarithm of the PD estimates on the rating grades and to check whether the interpolations that result for the non-monotonic areas are within confidence limits. Here are some comments on
the underlying techniques: x Regression In order to perform the regression, a metric interpretation has to be given to the ordinal rating grades. Plots of PD estimates against rating grades on a
logarithmic scale suggest that this approach is sensible from a pragmatic point of view (cf. Altman and Rijken, 2004). It may make sense to weight the regression by the number of observations
available for each rating grade since the precision of PD estimates is dependent on it. x Confidence intervals (CI) For the cohort approach, confidence intervals can be derived from the binomial
distribution by assuming independent observations.18 It is usually assumed that default observations are correlated because of macroeconomic default drivers that affect the default behaviour of
different obligors. Hence, binomial confidence intervals will be a conservative estimate (they are tighter then they would be under correlated defaults). CIs derived from a Merton style simulation
model (cf. Chapter XIV of this book) could be the logical next step. In the context of the duration-based method, CIs are typically derived via Bootstrap methods (cf. Schuermann and Hansen, 2004).
These tend to be even tighter. The topic of correlated defaults/migrations has to our knowledge not yet been addressed in this context. 2.7. Point-in-Time Adaptation In the context of Basel II, a
bank’s rating system is supposed to measure an obligor’s probability of default (PD) over a specific time horizon (the next T years). In practice, the objective of rating systems differs,
particularly with respect to: 1. The time horizon chosen by a bank 2. Whether PDs are conditional on the state of the business cycle (through-thecycle philosophy, TTC) or not (point-in-time
philosophy, PIT) While the first point can be taken into account by correspondingly adjusting the time horizon for default rate estimation, a bank that follows a PIT approach will have to apply
PIT-adjustments to the PD estimates derived for external rating grades since external rating agencies tend to follow a TTC-approach.19
For an efficient derivation and implementation of exact confidence limits for the binomial distribution see Daly (1992). 19 The TTC-property of external ratings has been observed in the academic
literature (cf. Löffler, 2004) and has also been proved to be significant by our own empirical investiga-
IV. The Shadow Rating Approach – Experience from Banking Practice
In the remainder of this section we will a) analyse the effects resulting from the development of ratings systems on TTC-PDs and b) outline a technique for PIT adjustments of external rating grades.
To address both points, we first summarise the most important properties of PIT and TTC rating systems in Table 1. These properties follow straightforwardly from the above definitions. A detailed
discussion can be found in Heitfield (2004). Table 1. Comparison of point-in-time and through-the-cycle rating systems Issue Point-in-Time (PIT) What does the rating system Unconditional PD measure?
Through-the-cycle (TTC) PD conditional on the state of the business cycle. The PD estimate might be either conditional on a worst case (“bottom of the cycle 20 scenario”) or on an average business
cycle scenario
Stability of an obligor’s rat- Pro-cyclical: ing grade over the cycle Rating improves during expansions and deteriorates in recessions
Stable: Rating grades tend to be unaffected by changes in the business cycle
Stability of a rating grade’s Stable: unconditional PD Unconditional PDs of ratings grades do not change. Obligor’s higher unconditional PDs during recession are accounted for by migrations to lower
rating grades and vice versa
Pro-cyclical: PDs improve during expansions and deteriorate during recessions
Turning to the first point of investigation, we now list the most important consequences when developing a rating system on a TTC-PD benchmark: x Pure macroeconomic risk factors that focus on
business cycle information will explain only the (typically quite small) PIT-part of external ratings and will therefore tend to receive very low weights in statistical models. x This effect should
be less pronounced for “mixed factors” that contain both business cycle information and non-business cycle elements, for example balance sheet ratios or country ratings. A bank that follows a PIT
rating approach but has not yet finalised a fully-fledged PIT-adaptation of external ratings might therefore manually adjust regression results in order to attach higher weights to pure
business-cycle risk factors. For tions. It must, however, be stressed that in practice rating systems will neither be completely TTC or PIT but somewhere in between. 20 This has for example been
suggested by a survey of bank rating practices by the Basel Committee’s Model Task Force (cf. BCBS, 2000).
Ulrich Erlenmaier
banks that already want to implement a statistically founded PIT-adaptation of external ratings, the following approach could be considered: x Estimation of a classic default prediction model, for
example via logistic regression (see Chapter I), with external PDs and business cycle factors (on a regional, country or industry level) as risk factors x The dependent variable is the company’s
default indicator as measured by the external ratings agencies’ default definition (or, where available, the bank’s own default definition). Accordingly, data from external rating agencies will be
needed on a single obligor level while for TTC-PD estimation, aggregate obligor and default counts are sufficient. When estimating such a model, the following challenges are pertinent: x x x
Different countries have different macroeconomic indicators that might not be comparable. Because estimating separate models for separate countries will not be feasible due to data restrictions, it
will be important to use indicators that are approximately comparable across countries. To get a picture of the related effects, it might be sensible to start by building a model for the US (where
data availability is high) and see how parameter estimates change when other countries are added. Probably separate regional models can help.
An alternative approach would be to use external point-in-time rating systems for the PIT-adaptation of through-the-cycle agency ratings. An example of a point-intime external rating is Moody’s KMV’s
EDF credit risk measure that builds on a Merton style causal default prediction model.21 Analysis is then required as to whether it would not be better to skip the through-the-cycle agency ratings
altogether and replace them with the external point-in-time ratings. In deciding on which approach to take, a bank must trade off the associated costs with the availability of the respective
3. Sample Construction for the SRA Model
3.1. Introduction Once external PDs have been calibrated, and all internal and external data required for the development of the SRA model have been compiled, it is necessary to construct samples
from this data. As we will see, different samples will be needed for 21 22
See http://www.moodyskmv.com/. For example, market-based measures such as Moody’s KMV’s EDF are only available for public companies.
IV. The Shadow Rating Approach – Experience from Banking Practice
different types of statistical analysis. In this section we mention these analysis techniques in order to map them to the corresponding samples. The techniques will be described in Section 4 and
Section 5. In this section, the following issues will be dealt with: x Which types of samples are needed? x How can these samples be constructed? x Weighted observations: If the information content
of different observations differs significantly, it might be necessary to allow for this by attaching different weights to each observation. x Correlated observations: We discuss the correlation
structure that may result from the described sample construction technique and discuss the consequences. It should be noted that some parts of the sample construction approach described in this
section might be too time consuming for an initial development project. Nevertheless, it can serve as a benchmark for simpler methods of sample construction and could be gradually implemented during
future refinements of the initial model. 3.2. Sample Types The samples relevant for the development of SRA rating systems can be classified by the following dimensions: x Samples for single
(univariate) factor analysis (e.g. univariate discriminatory power, transformation of risk factors) vs. multi factor analysis samples (e.g. regression analysis, validation) x Samples that include
only externally rated obligor vs. samples that include externally and only internally rated obligors x External data vs. internal data23 x Development vs. validation sample We will start with the
first dimension. Univariate analysis investigates the properties of each single risk factor separately. Therefore, for this type of analysis each change of the one analysed factor will generate a new
observation in the data set; for the multi factor analysis, each change of any risk factor will produce a new observation. This can be taken into account by the following approach to sample
construction: 1. Risk factors are divided in different categories. All factors for which changes are triggered by the same event are summarised into the same risk factor category.24 23
External data are often employed for the development of SRA rating systems in order to increase the number of obligors and the number of points in time available for each obligor. See section 1 for
more details.
Ulrich Erlenmaier
2. The samples for the univariate risk factor analysis are constructed separately for each category. A complete series of time intervals is build that indicates which risk factor combination is valid
for the category in each time interval or whether no observation was available in the interval. The time intervals are determined by the points in time where the risk factors of the category under
consideration change. This is done separately for each obligor. 3. All single category samples from step 2 are merged into a new series of time intervals. Each interval in the series is defined as
the largest interval for which the risk factors in each category remain constant. This is done separately for each obligor. In the following table we give an example comprising two risk factor
categories (balance sheet data and qualitative factors) and hence two different samples for univariate factor analysis. The table displays the observations for one single obligor: Table 2. Stylised
example for different samples and observations involved in rating development. Sample
ID Valid from
Valid until
Balance sheet data
Jan 03 Jan 04
Dec 03 Dec 04
Qualitative factors
Internal rating
May 03
March 04
April 04
Dec 04
Jan 03
April 03
Internal rating
May 03
Dec 03
Jan 04
March 04
Internal rating
April 04
Dec 04
Multi factor (merged)
For each of the sample types described above, two sub-types will be needed, one that includes only externally rated obligors and one that contains all obligors. The first sub-type will be needed e.g.
for discriminatory power analysis, the second e.g. for risk factor transformation or validation. A third dimension is added when external as well as internal data are employed. Typically, for SRA
models, external data will be used to estimate the quantitative model (comprising balance sheet factors as well as macroeconomic indicators) while the complete model, consisting of both, quantitative
and qualitative risk factors will be calculated on the internal data set because qualitative risk factors are not available for the external data set. 24
One category might for example include all balance sheet factors (triggered by the release of a company’s accounts). Another category will be qualitative factors as assessed by the bank’s loan
manger. They are triggered by the internal rating event. A third category might be macroeconomic indicators.
IV. The Shadow Rating Approach – Experience from Banking Practice
A fourth dimension comes with the need to distinguish between development and validation samples. Moreover, validation should not only rely on the external PD but should also include default
indicator information, i.e. the information whether a company has or has not defaulted within a specific period of time after its rating has been compiled. When validating with respect to the default
indicator, the need for the separation of development and validation samples is not so pressing since the benchmarks employed for development and validation are different. Due to the typical scarcity
of internal default data (the rationale for the SRA approach), it is sensible to perform this type of validation on the complete internal data set. However, when validating with respect to external
PDs, a separation between development and validation sample is desirable. If the quantitative model has been developed on external data, the internal data set should typically be an appropriate
validation sample.25 For the validation of the complete model, – depending on the number of observations available relative to the number of risk factors, the following options can be considered: x
Constructing two completely different samples (preferably out-of-time26) x Developing on the complete internal sample and validating on a subset of this sample, e.g. the most recent observations for
each obligor or some randomly drawn sub-sample x Application of bootstrap methods27 Summarising the issues raised in this section, the following table gives a simple example of the different samples
involved in SRA rating development and the types of statistical analysis performed on these samples. For simplicity, our example comprises only two input categories of which only one (balance sheet
data) is available for the external and the internal data set and the other (qualitative factors) is only available for the internal data set:
Note that the external sample will typically also include some or almost all internal obligors. To construct two completely different sets, internal obligors would have to be excluded from the
external data. However, if the external data set is much larger than the internal data set, such exclusion might not be judged necessary. 26 “Out-of-time” means that development and validation are
based on disjoint time intervals. 27 For an application of bootstrap methods in the context of rating validation see Appasamy et al. (2004). A good introduction to and overview over bootstrap methods
can be found in Davison and Hinkley (1997).
Ulrich Erlenmaier
Table 3. Stylised example for the different samples and corresponding types of analysis that are needed for the development of SRA type rating systems ID28 E1
Input categories Balance sheet data
Balance sheet data
Sample type29 SC EX
Qualitative factors
VAL-E / DEV
I2b I3a
EX Balance sheet data and qualitative factors
EX ALL DEV / VAL-D EX
Type of analysis Representativeness, Fillers for missing values, Univariate discriminatory power, Estimation of the quantitative multi factor model30 Representativeness, Truncation and
standardisation of risk factors, Fillers for missing values Univariate discriminatory power, Validation of the quantitative multi factor model developed on sample E1 Standardisation of risk factors,
Fillers for missing values Score calculation, Univariate discriminatory power Risk factor correlations / multicollinearity, Estimation of the complete multi factor model (quantitative and
qualitative) Risk factor correlations / multicollinearity, default indicator validation of the complete multi factor model developed on sample I3a Separate validation sample, for example most recent
observations for all obligors from sample I3a or a randomly drawn subsample
3.3. External PDs and Default Indicator For those samples consisting only of externally rated obligors (EX) and for those samples that are employed for validation on the default indicator (VAL-D), an
external PD or the default indicator have to be attached to each line of input vari-
E denotes external and I denotes internal data. We write SC for single-category samples and M for merged samples. ALL and EX are standing for “all obligors” and “only externally rated obligors”
respectively. DEV denotes development sample, VAL-E and VAL-D denote validation samples where validation is performed on external PDs and on the default indicator respectively. 30 Note that in this
case it is not necessary to merge different single-factor samples in order to perform the multi-factor analysis, because only one input-category exists. Moreover, a separate validation sample for the
external data is not necessary since validation is performed on the internal data set. 29
IV. The Shadow Rating Approach – Experience from Banking Practice
ables respectively. At least two different approaches to achieve this can be considered: 1. External PDs / the default indicator are treated as yet another risk factor category, i.e. a series of time
intervals is constructed for each external rating agency / for the default indicator indicating the time spans for which a specific external rating / default indicator realisation had been valid.
These intervals are then merged with the relevant single factor or merged factor samples in the same way as single factor samples are merged with each other.31 If there are competing PDs from
different external agencies at the same time, an aggregation rule will be applied. We will discuss this rule in the second part of this section. 2. For each risk factor time interval, a weighted
average is determined for each external agency PD and for the default indicator respectively. The weights are chosen proportionally to the length of the time interval for which the external rating /
the default indicator has been valid. As under 1), an aggregation rule is applied to translate the PDs of different external agencies into one single external PD. For the default indicator the first
approach seems to be more adequate, since with the second approach the 0/1 indicator variable would be transformed into a continuous variable on the interval [0,1] and many important analytical tools
(e.g. the ROC curve) would not be directly applicable. This argument, obviously does not apply to the external PDs since they are already measured on the interval [0,1]. Moreover, external PDs tend
to change more frequently than the default indicator and hence the number of observations would increase markedly compared to the corresponding risk factor samples. Additionally, the PDs of not only
one but three different rating agencies would have to be merged, further increasing the number of observations. Since the information content of different observations belonging to the same risk
factor combination will tend to differ only slightly, such a procedure will produce many highly correlated observations which is not desirable (see Section 3.5). Consequently the second approach
appears to be more adequate for external PDs. As mentioned above, an aggregation rule has to be devised for cases where more than one external rating is valid at some point in time. The most
straightforward choice will be weighted averages of the different external PDs with a preferential treatment of those rating agencies that are assessed to be most suitable for SRA development (see
Section 2.2).
Note that the time intervals of input factors and default indicator are shifted against each other by the length of the time horizon for which the rating system is developed. For example, if the
horizon is one year and the default indicator is equal to zero from Jan 2003 to Dec 2004 then this value will be mapped to the risk-factor interval from Jan 2002 to Dec 2003.
Ulrich Erlenmaier
3.4. Weighting Observations The information content of a single observation in the different samples depends on the length of the time interval it is associated with. If, for example, a particular
balance sheet B is valid from Jan 04 to Dec 04 and we observe two corresponding sets of qualitative factors, Q1 (valid until Feb 04) followed by Q2 (valid from Feb 04 until Dec 04) we would obviously
like to put a much higher weight on the observation (B, Q2) than on (B, Q1). The most straightforward way is to choose weights that are proportional to the length of the time interval associated with
a specific observation. In this context, the following issues are of particular interest: x Stochastic interpretation of weighted observations: The weight attached is a measure for the size of the
error term associated with each observation, i.e. its standard deviation: the lower the weight, the higher the standard deviation. x Practical implementation: Most statistics software packages
include options to perform statistical computations with weighted observations. This usually applies for all techniques mentioned in this article. 3.5. Correlated Observations Correlated observations
(or, more precisely, correlated error terms) are a general problem in single and multi factor analysis. Basic techniques assume independence. Using these techniques with correlated observations will
affect the validity of statistical tests and confidence intervals, probably also reducing the efficiency of estimators. To resolve this problem, information about the structure of the correlations is
necessary. In this article, the correlation issue will be dealt with in two steps: 1. In this section we will address the specific correlations structure that may arise from the method of sample
construction described above 2. In Section 5.3 we will analyse the statistical techniques that can be used to address this or other correlation structures in the context of multi factor analysis.
When constructing samples according to the method described above, the degree of correlation in the data will rise when the time intervals associated with each observation become smaller. It will
also depend on the frequency and intensity of changes in the risk factor and the external rating information employed. It is worth noting that the resulting type of correlation structure can be best
described within a panel data setting where the correlations within the time series observations for each single obligor will be different to the cross-sectional correlation between two obligors.
Cross-sectional correlations in SRA development may result from country or industry sector dependencies. Time series correlations will typically be due to the fact that there are structural
similarities in the relationship between a single company’s risk factors and its external rating over time. Since models for cross-
IV. The Shadow Rating Approach – Experience from Banking Practice
sectional correlations are widely applied in credit portfolio models32, we will focus on time series correlations in this article. In what follows we propose some options for dealing with
correlations in the time series parts. The options are listed in order of rising complexity: x For simplicity, basic statistical techniques are employed that do not account for correlated error
terms. With this option, as much correlation as possible can be eliminated by dropping observations with small weights. If all observations have approximately the same weight, a sub-sample can be
drawn. Here, the appropriate balance has to be found between losing too much information in the sample and retaining a degree of correlation that still appears to be compatible with not modelling
these correlations explicitly. In any case, the remaining correlation in the data should be measured and the modeller should be aware of the resulting consequences, in particular with respect to
confidence intervals (they will tend to be too narrow) and with respect to statistical tests (they will tend to be too conservative, rejecting the null too often). x Simple models of autocorrelation
in the time series data are employed, the most obvious being a first order autoregressive process (AR1) for the time series error terms. Of course, higher order AR processes or more complex
correlation models might also be considered appropriate.33 x A continuous time model for the relation between risk factors and external ratings is built (e.g. Brownian motion or Poison process type
models) and the resulting correlation structure of the discrete observations’ error terms is derived from this model. This of course is the most complex option and will most probably be seen as too
time consuming to be applied by most practitioners. It might, however, be a road for academic researchers that in turn could make the method available for practitioners in the future.
4. Univariate Risk Factor Analysis
4.1. Introduction Before building a multi factor model, each risk factor has to be analysed separately in order to determine whether and in which form it should enter the multi factor model. This
type of analysis is referred to as univariate risk factor analysis. The following issues should be dealt with in this context: x Measurement of a risk factor’s univariate discriminatory power
See Erlenmaier (2001). For an introduction to such models and further references see Greene (2003).
Ulrich Erlenmaier
x Transformation of risk factors to a) improve their linear correlation – as assumed by the multi factor regression model – with the log external PD34 or b) to make different risk factors comparable
with each other x Checking whether the samples on which the rating system is developed are representative for the samples to which the rating system will be applied (development vs. “target” sample)
x Treatment of missing values Each of these issues will be dealt with separately in the following sections. 4.2. Discriminatory Power A rating system is defined as having a high discriminatory power
if good rating grades have a comparatively low share of obligors that will default later on and vice versa. Accordingly, its discriminatory power will deteriorate with an increase in the relative
share of later on defaulted obligors in good rating grades. There are several statistical measures for this important attribute of a rating system, the Gini coefficient being the most popular.35 Due
to the lack of a sufficient number of default observations in SRA models, these types of discriminatory power measurement will usually only be applied as an additional validation measure. In the
development stage, discriminatory power will be defined in terms of the usefulness of the rating system or – in the context of univariate factor analysis – a single risk factor in predicting an
obligor’s external PD: The better a rating system or a risk factor can be used to predict an obligor’s external PD, the higher its discriminator power for the SRA approach.36 The following techniques
can be helpful to measure a risk factor’s discriminatory power for the SRA approach: x Linear and rank-order correlations of the risk factors with the log external PD37 x Bucket plots 34
See Section 5.2. Throughout this article we will use the term “log external PD” to denote the natural logarithm of the PD of an obligor’s external rating grade. How PDs are derived for each external
rating grade has been described in Section 2. 35 For an overview on measures of discriminatory power see Deutsche Bundesbank (2003) or Chapter XII 36 A good discriminatory power of the internal
rating system in terms of predicting external ratings and a good discriminatory power of the external ratings in terms of predicting future defaults will then establish a good discriminatory power of
the internal rating system in terms of predicting future defaults. 37 Linear correlations are typically termed Pearson correlations while rank-order correlations are associated with Spearman. Linear
correlations are important since they measure the degree of linear relationship which corresponds with the linear model employed for the multi-factor analysis. Rank-order correlations can be compared
with linear correlations in order to identify potential scope for risk factor transformation.
IV. The Shadow Rating Approach – Experience from Banking Practice
While the correlation measures are straightforward, the bucket plots require further comment. The underlying rationale for applying bucket plots is to visualise the complete functional form of the
relationship between the risk factor and the external PD – in contrast to the correlation measures that aggregate this information into a single number. This is done to make sure that the risk
factors indeed display an approximately linear relationship with external PDs as is required by the multi factor model. Bucket plots for continuous risk factors can for example be constructed in the
following way: x Each risk factor range was divided into n separate buckets, where we chose the 0, 1/n, 2/n, ...., (n-1)/n, 1 quantiles of each risk factor’s distribution as interval boarders. x For
each bucket we calculated the average associated external PD. By constructing the bucket boarders using quantiles it can be made sure that each interval contains the same number of observations. x
The number n of intervals has to be chosen with regard to the overall number of PD observations available for each risk factor: with increasing n it will be possible to observe the functional form of
the relationship on an ever finer scale. However, the precision of the associated PD estimates for each bucket will decrease and their volatility will increase. x In order to quantify the degree of
uncertainty, confidence intervals for the PD estimates of each bucket can be calculated. x The resulting PD estimates and confidence intervals are then plotted against the mean risk factor value of
each bucket. If a logarithmic scale is used for the PD axis, an approximately linear relationship should result when the risk factor has been appropriately transformed. Figure 1 shows an example of a
bucket plot for a continuous risk factor. Bucket plots for discrete risk factors can be devised according to the same method as described above with only one difference: for discrete factors, each
realisation should represent one bucket irrespective of the number of observations available. 4.3. Transformation The following types of transformation typical for the development of rating models
will be considered in this section: x Truncation x Other non-linear transformations of continuous risk factors (e.g. taking a risk factor’s logarithm) x Attaching a score to discrete risk factors x
Standardisation, i.e. a linear transformation in order to achieve the same mean and standard deviation for each risk factor
Ulrich Erlenmaier
Fig. 1. Example of a bucket plot. It illustrates the functional relationship between a risk factor and corresponding external PDs where the latter are measured on a logarithmic scale. The
relationship on this scale should be approximately linear.
We will discuss each of these types of transformations in turn. Truncation means that continuous risk factors will be cut off at some point on the left and right, more precisely, x trunc
min^x u , max^x l ,x``
where xu is the upper and xl the lower border at which the risk factor x is truncated. Note that the truncation function described above can be smoothed by applying a logit-type transformation
instead. Truncation is done mainly for the following reasons: x To reduce the impact of outliers and to concentrate the analysis on a risk factor’s typical range38 x To reduce a risk factor to the
range on which it has discriminatory power Other types of non-linear transformations are typically applied to continuous risk factors to achieve an approximately linear relationship with the log
external PD.
This is often necessary for sensible visualization of the risk factor’s distribution.
IV. The Shadow Rating Approach – Experience from Banking Practice
An overview of methods to achieve linearity can be found in Chapter II. These methods will therefore not be discussed here. In contrast to continuous risk factors, discrete factors (such as
qualitative information about the obligor, e.g. its quality of management or competitive position) do not have an a priori metric interpretation. Therefore, a score has to be attached to each of the
discrete risk factor’s potential realisations (e.g., excellent, good, medium or poor quality management). As with the non-linear transformation for the continuous risk factors, the scores have to be
chosen in such a way as to achieve the linear relationship of risk factors with log PDs. This can typically be achieved by calculating the mean external PD for each risk factor realisation and then
applying the logarithm to arrive at the final score. However, the resulting scores will not always be monotonic in the underlying risk factor (i.e. the average PD may not always decrease when the
assessment with respect to this risk factor improves). In such cases it has to be decided whether the effect is within statistical confidence levels or indeed indicates a problem with the underlying
risk factor. If the first holds true (typically for risk factor realisations where only very few observations are available), interpolation techniques can be applied augmented by expert judgements.
In the second case, depending on the severity of the effects identified, it may be necessary a) to analyse the reasons for this effect, or b) to merge different realisations of the risk factor to a
single score, or c) to eliminate the risk factor from subsequent analysis. All transformations that have been described up to now have been performed in order to improve the risk factor’s linear
correlation with log external PDs. The remaining transformation (standardisation) has a linear functional form and will therefore not alter linear correlations. It is performed in order to unify the
different risk factor’s scales and, accordingly, improve their comparability, primarily in the following two respects: x How good or bad is a risk factor realisation compared with the portfolio
average? x Interpretability of the coefficients resulting from the linear regression as weights for the influence of one particular risk factor on the rating result Typically, the risk factors are
standardised to the same mean and standard deviation. This transformation only makes sure that the risk factors are comparable with respect to the first and second moment of the distribution. Perfect
comparability will only be achieved when all moments of the standardised risk factor’s distribution will be roughly the same, i.e. if they follow a similar probability distribution. This will
typically not be the case, in particular since there are risk factors with continuous and discrete distributions respectively. However, some degree of overall distributional similarity should be
achieved by the need to establish an approximately linear relationship between each risk factor and the log external PD. Moreover, we will comment on the rationale of and the potential problems with
Ulrich Erlenmaier
the interpretation of regression estimates as weights of influence in Section 5.4 where we deal with multi factor analysis. 4.4. Representativeness Representativeness, while important for other types
of rating systems, should be treated with particular care when developing SRA rating systems.39 The following two types of comparisons are of specific interest: x Comparison of the internal samples
types IE (including only externally rated obligors) and IA (comprising all internal obligors) with each other. This comparison is necessary since SRA rating systems are developed on samples that
include only externally rated obligors but are also applied to obligors without external ratings. x Comparison of the external data set (E) with the internal data set IA. This comparison arises from
the need to increase the available number of observations for rating development by including external data. Representativeness can be analysed by comparing the distribution of the risk factors and
some other key factors (such as countries/regions, industry sectors, company type, obligor size, etc.) on each sample. In this context frequency plots (for continuous factors, see Figure 2) and
tables ordered by the frequency of each realisation (for discrete factors) can be particularly useful. These tools can be supplemented with basic descriptive statistics (e.g. difference of the
medians of both samples relative to their standard deviation or the ratio of the standard deviations on both samples). Formal statistical tests on the identity of distributions across samples were
not found to be useful since the question is not whether distributions are identical (typically they are not) but whether they are sufficiently similar for the extrapolation of results and estimates
derived on one sample to the other sample.
An SRA-rating system will always face the problem that – due to the relative rareness of default data – it is difficult to validate it for obligors that are not externally rated. While some
validation techniques are available (see Section 5.8), showing that the data for externally rated obligors is comparable with that of non-externally rated obligors will be one of the major steps to
make sure that the derived rating system will not only perform well for the former but also for the latter.
IV. The Shadow Rating Approach – Experience from Banking Practice
Fig. 2. Example for a frequency plot that compares a risk factor’s distribution on the external data set with its distribution on the internal data set.
What can be done when data is found to be unrepresentative? x First, it has to be ascertained whether the problem occurs only for a few risk factors / key figures or for the majority. x In the first
case, the reasons for the differences have to be analysed and the development samples adjusted accordingly. One reason, for example, might be that the distribution of obligors across regions or
industry sectors is extremely different. The development sample can then be adjusted by reducing the amount of obligors in those regions / industry sectors that are over-represented in the
development sample. x In the second case, a variety of approaches can be considered, depending on the specific situation. Examples include: The range of the risk factors can be reduced so that it
only includes areas that are observable on both the development and the target sample. The weight of a risk factor found to be insufficiently representative can be reduced manually or it can be
excluded from the analysis. 4.5. Missing Values A missing value analysis typically includes the following steps:
Ulrich Erlenmaier
x Decision as to whether a risk factor will be classified as missing for a particular observation x Calculation of fillers for missing values / exclusion of observations with missing values While for
some risk factors such as qualitative assessments (e.g. management quality), the first issue can be decided immediately, it is not always that clear-cut for quantitative risk factors such as balance
sheet ratios that may be calculated from a number of different single positions. Typical examples are balance sheet ratios that include a company’s cashflow that in turn is the sum of various single
balance sheet items. The problem – typically arising on the external data set – is that for a large proportion of observations at least one of these items will be missing. Hence, in a first step the
relative sizes of the balance sheet items have to be compared with each other and based on this comparison, rules must be devised as to which combination of missing values will trigger the overall
position to be classified as missing: if components with a large absolute size are missing, the risk factors should be set to missing; if not, the aggregate position can be calculated by either
omitting the missing items or using fillers which, however, should be chosen conditional on the size of the largest components. We now come back to the second issue raised at the beginning of this
section, i.e. the calculation of fillers for missing values on the risk factor level. It is, of course, related to the issue of calculating fillers on the component level. However, the need to employ
conditional estimates is not so severe. Typically, there will be quite a lot of risk factors that are correlated with each other. Hence, making estimates for missing values of one risk factor
conditional on other risk factors should produce more accurate fillers. However, it will also be time consuming. Therefore, in practice, only some very simple bits of information will typically be
used for conditioning, e.g. the portfolio to which an obligor belongs (external or internal data set). Moreover, different quantiles of the distribution might be employed for the calculation of
fillers on the external and internal data set respectively. For the external sample, a missing value may not constitute a significant negative signal in itself. For the internal sample, on the other
hand, missing values usually are negative signals, since a company could be expected to provide to the bank the information it needs to complete its internal rating assessment. Therefore, missing
values on the internal sample will typically be substituted by more conservative quantiles than missing values on the external data set. Finally, depending on the relative frequency of missing values
in the sample, it might be necessary to exclude some observations with missing values to avoid biases in statistical estimates.
IV. The Shadow Rating Approach – Experience from Banking Practice
4.6. Summary Concluding this section we want to summarise the techniques that we have presented for univariate risk factor analysis and map them to the samples on which they should be performed.
Since we have already dealt with the sample issue in Section 3, here we will focus on those two sample dimensions that we think are most important for univariate factor analysis, i.e. externally
rated obligors vs. all obligors and external vs. internal data set. As in Section 4.4 we use the following shortcuts for these sample types: x E: External data set, only externally rated obligors, x
IE: Internal data set, only externally rated obligors. x IA: Internal data set, all obligors, The univariate analysis techniques and corresponding sample types are summarised in Table 4. Table 4.
Univariate analysis techniques and corresponding sample types Type of univariate analysis
Sample type
Factor transformation
a) Truncation b) other non-linear transformations of continuous risk factors (e.g. taking a risk factor’s logarithm) c) calculating scores for discrete risk factors d) standardisation: linear
transformation in order to achieve the same median (mean) and standard deviation for all risk factors
Discriminatory power
a) Correlation (rank order and linear) with external PD b) Bucket plots
IE,IA E,IA
Missing values
a) Comparison of internal samples with each other (IE and IA) b) Comparison of external sample (E) with internal sample (IA) Fillers for missing values in the external and internal samples
IE is only needed to derive the scores for the qualitative risk factors. All other types of analysis are performed on IA.
Ulrich Erlenmaier
5. Multi-factor Model and Validation
5.1. Introduction Once the univariate analysis described in Section 4 has been completed, the multifactor model has to be estimated and the estimation results communicated, adjusted (if necessary),
and validated. These issues will be dealt with in Section 5 in this order: x Model selection: which type of model is chosen and which risk factors will enter the model? x Model assumptions:
Statistical models typically come with quite a few modelling assumptions that guarantee that estimation results are efficient and valid. Therefore, it has to be analysed whether the most important
assumptions of the selected model are valid for the data and if not, how any violations of modelling assumptions can be dealt with. x Measuring the influence of risk factors: We will discuss how the
relative influence of single risk factors on the rating result can be expressed in terms of weights to facilitate the interpretation of the estimated model. In a second step, we comment on the
problems associated with the calculation and interpretation of these weights. x Manual adjustments and calibration: We discuss the rationale and the most important issues that must be dealt with when
model estimates are adjusted manually and describe how the resulting model can be calibrated. x Two-step regression: It is briefly noted that with external data the regression model will typically
have to be estimated in two steps. x Corporate groups and government support: We propose a simple method to produce an empirical estimate for the optimal absolute influence of supporters on an
obligor’s rating. x Validation: We briefly itemise the validation measures that we found most useful for a short-cut validation in the context of rating development. 5.2. Model Selection The issue of
model selection primarily has two dimensions. First, the model type has to be chosen and then it has to be decided which risk factors will be included in the model. Regarding the first question the
most simple and most frequently used model in multi factor analysis is linear regression. A typical linear regression models for SRA type rating systems will have the following form:41
Throughout this article Log denotes the natural logarithm with base e.
IV. The Shadow Rating Approach – Experience from Banking Practice
Log( PDi )
b0 b1 xi1 bm xim H i
(i 1, , n) ,
where PDi denotes the external PD, xij the value of risk factor j, Hi the regression model’s error term for observation i, and b0,…, bm are the regression coefficients that must be estimated from the
data. Note that each observation i describes a specific firm over a specific time span. Risk factors are regressed on log PDs because on the one hand, this scale is typically most compatible with the
linear relationship assumed by the regression model and on the other hand, because internal master scales that translate PDs into rating grades, are often logarithmic in PDs. We now turn to the
second issue in this section, the selection of those risk factors that will constitute the final regression model employed for the rating system. The following types of analysis are useful for risk
factor selection: x x x x
Univariate discriminatory power (on internal and external data set) Representativeness Correlations / multicollinearity between risk factors Formal model selection tools
We have already dealt with the issues of discriminatory power and representativeness in Section 4. For correlations between risk factors and multicollinearity we refer the reader to Chapter II. In
this section we will add some comments on typical formal model selection tools in the context of linear regression: x Formal model selection tools are no substitute for a careful single factor and
correlation analysis. x There are quite a variety of formal model selection methods.42 We found the R2 maximisation method that finds the model with the best R2 for each given number of risk factors
particularly useful for the following reasons: It allows to trade off the reduction in multicollinearity against the associated loss in the model’s R2 on the development sample. The R2 measure is
consistent with the linear correlation measure employed in the single factor analysis.43 5.3. Model Assumptions Three crucial stochastic assumptions about the error terms H constitute the basis of
linear regression models:44 42
For reviews on formal model-selection methods see Hocking (1976) or Judge et al. (1980). 43 R2 is the square of the linear correlation between the dependent variable (the log external PD) and the
model prediction for this variable. 44 For a comprehensive overview on applied linear regression see Greene (2003).
Ulrich Erlenmaier
x Normal distribution (of error terms) x Independence (of all error terms from each other) x Homoscedasticity (all error terms have the same standard deviation) For all three issues there are a
variety of statistical tests (e.g. Greene, 2003). If these tests reject the above hypotheses, it is up to the modeller to decide on the severity of these effects, i.e. whether they can be accepted
from a practical point of view or not. As for normality, looking at distribution plots of the residuals45 we found that they often came very close to a normal distribution even in cases where
statistical tests reject this hypothesis. Moreover, even under the violation of the normality assumption, estimators are still efficient (or, more precisely, BLUE).46 Only the related statistical
tests and confidence intervals are no longer valid. But even here convergence is achieved for large sample size. Violations of the two other assumptions (independence and homoscedasticity) tend to be
more severe. They can be summarised as deviations from the regression model’s error term covariance matrix which is assumed to have identical values for each entry of the diagonal (homoscedasticity)
and zeros for each entry that is not on the diagonal (independence). If statistical tests reject the hypotheses of independence / homoscedasticity, this problem can be dealt with when a) plausible
assumptions about the structure of the covariance matrix can be made and b) when this structure can be described with a sufficiently small set of parameters. If this is the case these parameters and
hence the covariance matrix can be estimated from the data (or, more precisely, from the residuals). The least square method employed for parameter estimation in the regression model can then be
adjusted in such a way that the original desirable properties of the ordinary least square estimators (OLS) can be restored. In the literature (e.g. Greene 2003) this method is referred to as
generalised least square (GLS). In order to proceed, hypotheses on the structure of the covariance matrix have to be derived. In Section 3 dealing with sample construction, we have already described
one possible source of heteroscedasticity47 and correlation in the data respectively.
Residuals (e) are the typical estimators for the (unobservable) theoretical error terms (H). They are defined as the difference between the dependent variable and the model predictions of this
variable. 46 BLUE stands for best linear unbiased estimator. 47 The term heteroscedasticity refers to cases where standard deviations of error terms are different as opposed to the assumption of
identical standard deviations (homoscedasticity).
IV. The Shadow Rating Approach – Experience from Banking Practice
We argued that the size (i.e. the standard deviation) of the error term might sensibly be assumed to be proportional to the length of the time interval to which the observation is attached. Hence, we
proposed to weight each observation with the length of the corresponding time interval. In the context of regression analysis, weighting observations exactly means to assume a specific type of
heteroscedastic covariance matrix and application of the corresponding GLS estimation. We also concluded that autocorrelation in the time series part of the data might well increase when time
intervals become smaller and smaller. One of the simplest and most commonly employed structures for correlated error terms assumes an AR(1) correlation structure between subsequent error terms:
UH t u t (t 1, , T ) ,
where the variables ut are independent of each other. Hence, the issue could be dealt with by estimating the parameter U from the data, deriving the correlation matrix and applying GLS.48 There is,
however, one crucial problem with this procedure: it is not logical to assume this correlation structure for the complete data set as would be done in a standard time series regression setting.
Rather, the rating development data set at hand will typically have a panel data structure where the correlation structure of the cross section’s error terms (different obligors) will most likely be
different from the correlation structure of the time series part (different points in time for the same obligor). Applying a panel data model with an AR(1) structure in the time series part could be
a sensible first approximation. Corresponding error term models offered by statistics software packages are often of the type
H it
U i H i ,t u it (t 1, , T ; i 1, , n) .
Note that the AR parameter U is estimated separately for each cross section (i.e. firm): U = Ui. Therefore, quite a few time series observations are required for each single obligor to make confident
estimates, which often will not be feasible for rating development data. A more practicable model would estimate an average AR parameter U for all obligors:
H it
UH i ,t u it (t 1, , T ; i 1, , n) .
There might be other sources of correlation or heteroscedasticity in the data requiring a different structure for the covariance matrix than the one described above. If no specific reasons can be
thought of from a theoretical point of view, one will usually look at residual plots to identify some patterns. Typically, residuals will be plotted a) against the independent variable (log PD in our
case), b) 48
Indeed, a standard procedure for dealing with autocorrelated error terms in the way described above is implemented in most statistical software packages.
Ulrich Erlenmaier
against those dependent variables (risk factors) with the highest weights or c) against some other structural variable, such as the length of the time interval associated with each observation. If
effects can be identified, first a parametrical model has to be devised and then the associated parameters can be estimated from the residuals. That will give a rough picture of the severity of the
effects and can hence provide the basis for the decision as to whether to assess the deviations from the model assumptions as acceptable or whether to incorporate these effects into the model –
either by weighting observation (in the case of heteroscedasticity) or by devising a specific correlation model (in the case of deviations from independence). 5.4. Measuring Influence Once a specific
regression model has been chosen and estimated, one of the most important aspects of the model for practitioners will be each risk factor’s influence on an obligor’s rating. Hence, a measure of
influence has to be chosen that can also be used for potential manual adjustments of the derived model. To our knowledge, the most widely applied method is to adjust for the typically different
scales on which the risk factors are measured by multiplying the estimator for the risk factor’s coefficient in the regression model by the risk factor’s standard deviation and then deriving weights
by mapping these adjusted coefficients to the interval [0,1] so that the absolute values of all coefficients add up to 1.49 What is the interpretation of this approach to the calculation of weights?
It defines the weight of a risk factor xj by the degree to which the log PD predicted by the regression model will fluctuate when all other risk factors (xk)kzj are kept constant: the more log PD
fluctuates, the higher the risk factor’s influence. As a measure for the degree of fluctuation, the predictor‘s standard deviation is used. Hence, the weight wj of a risk factor xj with coefficient
bj can be calculated as wj
w*j w1* wm*
where w* j
STD Log( PD ) ( x k ) k z j
STD(b j x j )
b j STD( x j ) ,
Note that this method is also suggested by standard regression outputs. The associated estimates are typically termed “standardized coefficients”. Moreover, if the risk factors have already been
standardized to a common standard deviation – as described in section 4 – they already have the same scale and coefficients only have to be mapped to [0,1] in order to add up to 1.
IV. The Shadow Rating Approach – Experience from Banking Practice
and STD denotes the standard deviation operator. However, when using this type of influence measure, the following aspects have to be taken into account: x The standard deviation should be calculated
on the internal data set containing all obligors, not only the externally rated obligors. x The master rating scale will typically be logarithmic in PDs. Therefore, measuring the risk factor’s
influence on predicted log PDs is approximately equivalent to measuring its influence on the obligor’s rating. This should usually be what practitioners are interested in. However, if the influence
on an obligor’s predicted PD is to be measured, the above logic will not apply anymore since predicted PDs are an exponential function of the risk factor and hence their standard deviation cannot be
factored in the same fashion as described above. Moreover, the standard deviation of the external PD will depend on the realisations of the other risk factors (xk)k j that are kept constant. x The
problems described in the previous point also arise for the log-PD influence when risk factors are transformed in a non-linear fashion, e.g. when a risk factor’s logarithm is taken. In this case, the
above interpretation of influence can only be applied to the transformed risk factors which usually have no sensible economic interpretation. x Also, the above mentioned interpretation does not take
into account the risk factor’s correlation structure. The correlation between risk factors is usually not negligible. In this case the conditional distribution (in particular, the conditional
standard deviation) of the log-PD predictor, given that the other risk factors are constant, will depend on the particular values at which the other risk factors are kept constant. x Making the risk
factor’s distributions comparable only by adjusting for their standard deviation might be a crude measure if their distributional forms differ a lot (e.g. continuous vs. discrete risk factors).50 x
The weights described above measure a risk factor’s average influence over the sample. While this may be suitable in the model development stage when deciding, e.g., about whether the resulting
weights are appropriate, it may not be appropriate for practitioners interested in the influence that the risk factors have for a specific obligor. Other tools can be applied here, e.g. plotting how
a change in one risk factor over a specified range will affect an obligor’s rating. Despite the above cited theoretical problems standard deviation based measures of influence have proved to work
quite well in practice. However, there appears to be some scope for further research on alternative measures of influence. Moreover, it should be noted that, when correlations between risk factors
are non-negligible, a risk factor’s correlation with predicted log PDs can be quite high, even if the 50
Additionally, the standard deviation tends to be a very unstable statistical measure that can be very sensitive to changes in the risk factor’s distribution. However, this problem should be reduced
significantly by the truncation of the risk factors which reduces the influence of outliers.
Ulrich Erlenmaier
weight as defined above is not. We therefore found it important for the interpretation of the derived regression model, to evaluate these correlations for all risk factors and report them together
with the weights. 5.5. Manual Adjustments and Calibration There may be quite a variety of rationales for manually adjusting the estimation results derived from the statistical model, for instance,
expert judgements that deviate significantly from those estimations, insufficient empirical basis for specific portfolio segments, insufficient representativeness of the development sample, or
excessively high weights of qualitative as opposed to quantitative risk factors.51 When manual adjustments are made, the following subsequent analyses are important: 1. Ensuring that the ratings
system’s discriminatory power is not reduced too much 2. Re-establishing the calibration that statistical models provide automatically in the SRA context Regarding the first issue, the standard
validation measures – as briefly described in Section 5.8 – will be applied. The second issue can be addressed by regressing the score resulting from the manually adjusted weights Z1,..,Zn against
log PDs: Log( PDi )
c0 c1[Z xi1 Z m xim ] H L
(i 1, n).
Note that c0 and c1 are the coefficients that must be estimated in this second regression. The parameter c0 is related to the average PD in the portfolio while c1 controls the rating system’s
implicit discriminatory power, i.e. the degree to which predicted PDs vary across the obligors in the portfolio.52 The estimates for c0 and c1 will give additional evidence for the degree to which
the manual adjustments have changed the rating system’s overall properties: If changes are not too big, then c0 should not differ much from b0 and c1 should be close to b6 >| b1 | | bm |@ if all risk
factors have been standardised to the same standard deviation.53
With the SRA approach to rating development, there is the problem that the loan manager may use qualitative risk factors in order to make internal and external ratings match. If that is the case, the
relative weight of qualitative factors as estimated by the statistical model will typically be too high compared to the weights of quantitative risk factors. The validation measures that are not
linked to external ratings (see Section 5.8) and also expert judgement may then help to readjust those weights appropriately. 52 More formally, the implicit discriminatory power is defined as the
expected value of the (explicit) discriminatory power – as measured by the Gini coefficient (cf. Chapter XII). 53 This can be derived from equations (7) and (8).
IV. The Shadow Rating Approach – Experience from Banking Practice
Finally, for each observation i, a PD estimate can be derived from the above regression results by the following formulas: E[ PDi | X i ]
exp( P i V i 2 / 2) (i
E[log( PDi ) | X i ]
1,..., n), where
c 0 c1[Z xi1 Z m xim ] and
Var(H L ).
(10a) (10b) (10c)
Note that Xi denotes the vector of all risk factor realisations for observation i and E[.] is the expectation operator. The result is derived from the formula for the mean of log-normally distributed
random variables.54 For the formula to be valid, the error terms Hi have to be approximately normally distributed which we found typically to be the case (see Section 5.3). Moreover, the most
straightforward way to estimate Vi from the residuals would be to assume homoscedasticity, i.e. Vi V(i=1,…,n). If homoscedasticity cannot be achieved, the estimates for Vi will have to be conditional
on the structural variables that describe the sources of heteroscedasticity. 5.6. Two-step Regression In this section we note that – when external data are employed – it will typically be necessary
to estimate two models and, therefore, go through the process described in the previous sections twice. If, for example, only balance sheet ratios and macroeconomic risk factors are available for the
external data set, then a first quantitative model will have to be estimated on the external data set. As a result, a quantitative score and corresponding PD can be calculated from this model that in
turn can be used as an input factor for the final model. The final model will then include the quantitative score as one aggregated independent variable and the qualitative risk factors (not
available for the external data set) as the other independent variables. 5.7. Corporate Groups and Sovereign Support When rating a company, it is very important to take into account the corporate
group to which the company belongs or probably some kind of government support (be it on the federal, state or local government level). This is typically done by rating both the obligor on a
standalone basis (= standalone rating) and the en54
If X is normally distributed with mean P and standard deviation V, then E [exp (X)] = exp(P + V2/2), where E is the expectation operator (Limpert, 2001).
Ulrich Erlenmaier
tity that is supposed to influence the obligor’s rating (=supporter rating).55 The obligor’s rating is then usually derived by some type of weighted average of the associated PDs. The weight will
depend on the degree of influence as assessed by the loan manager according to the rating system’s guidelines. Due to the huge variety and often idiosyncratic nature of corporate group or sovereign
support cases, it will be very difficult to statistically derive the correct individual weight of each supporter, the average weight, however, could well be validated by estimates from the data. More
precisely, consider that for the development sample we have i = 1,...,n obligors with PDs PDi, corresponding supporters with PDs PDiS and associated supporter weights wi > 0 as derived by the rating
analyst’s assessment.56 Then, a regression model with [(1wi) PDi ] and [wi PDiS ] as independent variables and PDiex (the obligor’s external PD) as dependent variable can be estimated to
determine as to whether the average size of the supporter weights wi is appropriate or whether it should be increased or decreased. 5.8. Validation The validation of rating systems is discussed at
length in Chapter XI, Chapter XII, and Chapter XIV. Specific validation techniques that are valuable in a low default context (of which SRA portfolios are a typical example) are discussed in BCBS
(2005) and in Chapter V. During rating development it will typically not be possible to run through a fully-fledged validation process. Rather, it will be necessary to concentrate on the most
important measures. We will therefore briefly itemise those issues that we found important for a short-cut validation of SRA rating systems in the context of rating development: x Validation on
external ratings / external PDs Correlations of internal and external PDs (for all modules of the rating system57) Case-wise analysis of those companies with the largest differences between internal
and external ratings Comparison of average external and internal PDs across the entire portfolio and across sub-portfolios (such as regions, rating grades, etc.) x Validation on default indicators
Gini coefficient (for all modules of the rating system) 55
Note that for the sake of simplicity, the expression “supporter” is used for all entities that influence an obligor’s rating, be it in a positive or negative way. 56 The standalone and supporter PDs
have of course been derived from the regression model of the previous sections, probably, after manual adjustments. 57 The typical modules of a SRA-rating system (statistical model, expert-guided
adjustments, corporate-group influence / government support, override) have been discussed in Section 1.
IV. The Shadow Rating Approach – Experience from Banking Practice
Comparison of default rates and corresponding confidence intervals with average internal PDs. This is done separately for all rating grades and also across all rating grades Formal statistical tests
of the rating system’s calibration (such as e.g. Spiegelhalter, see Chapter XIV) x Comparison of the new rating system with its predecessor (if available) Comparison of both rating system’s
validation results on external ratings and the default indicator Case-wise analysis of those companies with the largest differences between old and new rating system There are also some other
validation techniques not yet discussed but that could enter a short-cut validation process in the rating development context, in particular addressing the relative rareness of default data in SRA
portfolios (see BCBS 2005): x Using the lowest non-default rating grades as default proxies x Comparison of SRA obligors with the obligors from other rating segments that have the same rating x
Estimation of internal PDs with the duration-based approach, i.e. including information on rating migration into the internal PD estimation process x Data pooling
6. Conclusions In this article we have reported on some aspects of the development of shadow rating (SRA) systems found to be important for practitioners. The article focused on the statistical model
that typically forms the basis of such rating systems. In this section we want to summarise the major issues that we have dealt with: x We have stressed the importance both, in terms of the quality
of the resulting rating system and in terms of initial development costs of the deployment of sophisticated software tools that automate the development process as much as possible and the careful
preparation and validation of the data that are employed. x External PDs form the basis of SRA type models. We have outlined some major issues that we found to be important in this context: Which
external rating types / agencies should be used? Comparison between bank internal and external default definitions and consequences for resulting PD estimates Sample construction for the estimation
of external PDs (which time period, which obligor types?) PD estimation techniques (cohort method vs. duration-based approach) Point-in-time adjustment of external through-the-cycle ratings
Ulrich Erlenmaier
x In Section 3 we pointed out that different samples will be needed for different types of analysis and made a proposal for the construction of such samples. In this context we also dealt with the
issues of weighted and correlated observations. x Univariate risk factor analysis is the next development step. In Section 4 we have described the typical types of analysis required – measurement of
a risk factor’s discriminatory power, transformation of risk factors, representativeness, fillers for missing values – and have mapped them to the samples on which they should be performed. x In
Section 5 we dealt with multi factor modelling, in particular with model selection the violation of model assumptions (non-normality, heteroscedasticity, error term correlations) the measurement of
risk factor influence (weights) manual adjustments of empirical estimates and calibration a method to empirically validate the average influence of corporate groups or sovereign supporters on an
obligor’s rating x Finally, in the same section, we gave a brief overview over the validation measures that we found most useful for a short-cut validation in the context of SRA rating development.
While for most modelling steps one can observe the emergence of best practice tools, we think that in particular in the following areas further research is desirable to sharpen the instruments
available for SRA rating development: x Data pooling in order to arrive at more confident estimates for adjustment factors of external PDs that account for the differences between bank internal and
external default measurement x Empirical comparisons of the relative performance of cohort-based vs. duration-based PD estimates and related confidence intervals x Point-in-time adjustments of
external through-the-cycle ratings x Panel type correlation models for SRA samples and software implementations of these models x Measurement of risk factor influence (weights)
References Altman E, Rijken H (2004), How Rating Agencies achieve Rating Stability, Journal of Banking & Finance, 2004, vol. 28, issue 11, pp. 2679-2714. Ammer J, Packer F (2000), How Consistent Are
Credit Ratings? A Geographic and Sectoral Analysis of Default Risk, FRB International Finance Discussion Paper No. 668. Appasamy B, Hengstmann S, Stapper G, Schark E (2004), Validation of Rating
Models, Wilmott magazine, May 2004, pp. 70-74 Basel Committee on Banking Supervision (BCBS) (2005), Validation of low-default portfolios in the Basel II Framework, Basel Committee Newsletter No. 6.
IV. The Shadow Rating Approach – Experience from Banking Practice
Basel Committee on Banking Supervision (BCBS) (2004), International Convergence of Capital Measurement and Capital Standards, Bank for International Settlements, Basel. Basel Committee on Banking
Supervision (BCBS) (2000), Range of Practice in Banks’ Internal Ratings Systems, Bank for International Settlements, Basel. Cantor R, Falkenstein E (2001), Testing for Rating Consistency in Annual
Default Rates, Moody’s Investors Service, New York. Cantor R (2004), Measuring the Quality and Consistency of Corporate Ratings across Regions, Moody’s Investors Service, New York. Daly L (1992),
Simple SAS macros for the calculation of exact binomial and Poisson confidence limits, Computers in biology and medicine 22(5), pp. 351-361. Davison A, Hinkley D (1997), Bootstrap Methods and their
Application, Cambridge University Press, Cambridge. Deutsche Bundesbank (2003), Validierungsansätze für interne Ratingsysteme, Monatsbericht September 2003, pp. 61-74. Erlenmaier U (2001), Models of
Joint Defaults in Credit Risk Management: An Assessment, University of Heidelberg Working Paper No. 358. Fitch (2005), Fitch Ratings Global Corporate Finance 2004 Transition and Default Study. Fitch
Ratings Credit Market Research. Fitch (2006), Fitch Ratings Definitions. http://www.fitchratings.com/corporate/fitchResources.cfm?detail=1 [as at 18/02/06] Greene W (2003), Econometric Analysis,
Pearson Education, Inc., New Jersey. Güttler A (2004), Using a Bootstrap Approach to Rate the Raters, Financial Markets and Portfolio Management 19, pp. 277-295. Heitfield E (2004), Rating System
Dynamics and Bank-Reported Default Probabilities under the New Basel Capital Accord, Working Paper, Board of Governors of the Federal Reserve System, Washington. Hocking R (1976), The Analysis and
Selection of Variables in Linear Regression, Biometrics, 32, pp. 1-50. Judge G, Griffiths W, Hill R, Lee T (1980), The Theory and Practice of Econometrics, John Wiley & Sons, Inc., New York. Limpert
E, Stahl W, Abbt M (2001), Lognormal distributions across the sciences: keys and clues, BioScience, 51(5), pp. 341-352. Löffler G (2004), An Anatomy of Rating Through the Cycle, Journal of Banking
and Finance 28 (3), pp. 695–720. Moody’s (2004), Moody’s Rating Symbols & Definitions. Moody’s Investors Service, New York. Moody’s (2005), Default and Recovery Rates of Corporate Bond Issuers,
1920-2004, Special Comment, New York. Schuermann T, Hanson S (2004), Estimating Probabilities of Default, FRB of New York Staff Report No. 190. Standard & Poor’s (2002), S&P Long-Term Issuer Credit
Ratings Definitions. http://www2.standardandpoors.com/servlet/Satellite?pagename=sp/Page/FixedIncome RatingsCriteriaPg&r=1&l=EN&b=2&s=21&ig=1&ft=26 [as at 18/02/06] Standard & Poor’s (2005), Annual
Global Corporate Default Study: Corporate Defaults Poised to Rise in 2005, Global Fixed Income Research, New York.
V. Estimating Probabilities of Default for Low Default Portfolios Katja Pluto and Dirk Tasche Deutsche Bundesbank1
1. Introduction A core input to modern credit risk modelling and managing techniques is probabilities of default (PD) per borrower. As such, the accuracy of the PD estimations will determine the
quality of the results of credit risk models. One of the obstacles connected with PD estimations can be the low number of defaults, especially in the higher rating grades. These good rating grades
might enjoy many years without any defaults. Even if some defaults occur in a given year, the observed default rates might exhibit a high degree of volatility due to the relatively low number of
borrowers in that grade. Even entire portfolios with low or zero defaults are not uncommon. Examples include portfolios with an overall good quality of borrowers (e.g. sovereign or bank portfolios)
as well as high-volume low-number portfolios (e.g. specialized lending). Usual banking practices for deriving PD values in such exposures often focus on qualitative mapping mechanisms to bank-wide
master scales or external ratings. These practices, while widespread in the industry, do not entirely satisfy the desire for a statistical foundation of the assumed PD values. One might “believe”
that the PDs per rating grade appear correct, as well as thinking that the ordinal ranking and the relative spread between the PDs of two grades is right, but find that there is insufficient
information about the absolute PD figures. Lastly, it could be questioned whether these rather qualitative methods of PD calibration fulfil the minimum requirements set out in BCBS (2004a). This
issue, amongst others, has recently been raised in BBA (2004). In that paper, applications of causal default models and of exogenous distribution assumptions on the PDs across the grades have been
proposed as solutions. Schuermann and Hanson (2004) present the “duration method” of estimating PDs by means of migration matrices (see also Jafry and Schuermann, 2004). This way, nonzero PDs 1
The opinions expressed in this chapter are those of the authors and do not necessarily reflect views of the Deutsche Bundesbank.
Katja Pluto and Dirk Tasche
for high-quality rating grades can be estimated more precisely by both counting the borrower migrations through the lower grades to eventual default and using Markov chain properties. This paper
focuses on a different issue of PD estimations in low default portfolios. We present a methodology to estimate PDs for portfolios without any defaults, or a very low number of defaults in the overall
portfolio. The proposal by Schuermann and Hanson (2004) does not provide a solution for such cases, because the duration method requires a certain number of defaults in at least some (usually the
low-quality) rating grades. For estimating PDs, we use all available quantitative information of the rating system and its grades. Moreover, we assume that the ordinal borrower ranking is correct. We
do not use any additional assumptions or information. Every additional input would be on the assumption side, as the low default property of these portfolios does not provide us with more reliable
quantitative information. Our methodology delivers confidence intervals for the PDs of each rating grade. The PD range can be adjusted by the choice of an appropriate confidence level. Moreover, by
the most prudent estimation principle our methodology yields monotone PD estimates. We look both at the cases of uncorrelated and correlated default events, in the latter case under assumptions
consistent with the Basel risk weight model. Moreover, we extend the most prudent estimation by two application variants: First we scale our results to overall portfolio central tendencies. Second,
we apply our methodology to multi-period data and extend our model by time dependencies of the Basel systematic factor. Both variants should help to align our principle to realistic data sets and to
a range of assumptions that can be set according to the specific issues in question when applying our methodology. The paper is structured as follows: The two main concepts underlying the methodology
– estimating PDs as upper confidence bounds and guaranteeing their monotony by the most prudent estimation principle – are introduced by two examples that assume independence of the default events.
The first example deals with a portfolio without any observed defaults. For the second example, we modify the first example by assuming that a few defaults have been observed. In a further section,
we show how the methodology can be modified in order to take into account non-zero correlation of default events. This is followed by two sections discussing potential extensions of our methodology,
in particular the scaling to the overall portfolio central tendency and an extension of our model to the multi-period case. The last two sections are devoted to discussions of the potential scope of
application and of open questions. We conclude with a summary of our proposal. In Appendix A, we provide information on the numerics that is needed to implement the estimation approach we suggest.
Appendix B provides additional numerical results to Section 5.
V. Estimating Probabilities of Default for Low Default Portfolios
2. Example: No Defaults, Assumption of Independence The obligors are distributed to rating grades A, B, and C, with frequencies nA, nB, and nC. The grade with the highest credit-worthiness is denoted
by A, the grade with the lowest credit-worthiness is denoted by C. No defaults occurred in A, B or C during the last observation period. We assume that the – still to be estimated – PDs pA of grade
A, pB of grade B, and pC of grade C reflect the decreasing credit-worthiness of the grades, in the sense of the following inequality: p A d p B d pC
The inequality implies that we assume the ordinal borrower ranking to be correct. According to (1), the PD pA of grade A cannot be greater than the PD pC of grade C. As a consequence, the most
prudent estimate of the value pA is obtained under the assumption that the probabilities pA and pC are equal. Then, from (1) even follows pA = pB = pC. Assuming this relation, we now proceed in
determining a confidence region for pA at confidence level Ȗ. This confidence region2 can be described as the set of all admissible values of pA with the property that the probability of not
observing any default during the observation period is not less than 1-Ȗ (for instance for Ȗ = 90%). If we have got pA = pB = pC, then the three rating grades A, B, and C do not differ in their
respective riskiness. Hence we have to deal with a homogeneous sample of size nA + nB + nC without any default during the observation period. Assuming unconditional independence of the default
events, the probability of observing no defaults turns out to be (1 p A ) nA nB nC . Consequently, we have to solve the inequality 1 J d 1 p A n A nB nC
for pA in order to obtain the confidence region at level Ȗ for pA as the set of all the values of pA such that p A d 1 (1 J )1 ( n A nB nC )
If we choose for the sake of illustration nA
100, n B
400, nC
Table 1 exhibits some values of confidence levels Ȗ with the corresponding maximum values (upper confidence bounds) pˆ A of pA such that (2) is still satisfied. 2
For any value of pA not belonging to this region, the hypothesis that the true PD takes on this value would have to be rejected at a type I error level of 1-Ȗ.
Katja Pluto and Dirk Tasche
Table 1. Upper confidence bound pˆ A of pA as a function of the confidence level Ȗ. No defaults observed, frequencies of obligors in grades given in (4).
Ȗ pˆ A
50% 0.09%
75% 0.17%
90% 0.29%
95% 0.37%
99% 0.57%
99.9% 0.86%
According to Table 1, there is a strong dependence of the upper confidence bound pˆ A on the confidence level Ȗ. Intuitively, values of Ȗ smaller than 95% seem more appropriate for estimating the PD
by pˆ A . By inequality (1), the PD pB of grade B cannot be greater than the PD pC of grade C either. Consequently, the most prudent estimate of pB is obtained by assuming pB = pC. Assuming
additional equality with the PD pA of the best grade A would violate the most prudent estimation principle, because pA is a lower bound of pB. If we have got pB = pC, then B and C do not differ in
their respective riskiness and may be considered a homogeneous sample of size nB + nC. Therefore, the confidence region at level Ȗ for pB is obtained from the inequality 1 J d 1 pC nB nC
(5) implies that the confidence region for pB consists of all the values of pB that satisfy p B d 1 1 J 1 / nB nC
If we again take up the example described by (4), Table 2 exhibits some values of confidence levels Ȗ with the corresponding maximum values (upper confidence bounds) pˆ B of pB such that (6) is still
fulfilled. Table 2. Upper confidence bound pˆ B of pB as a function of the confidence level Ȗ. No defaults observed, frequencies of obligors in grades given in (4).
Ȗ ˆp B
50% 0.10%
75% 0.20%
90% 0.33%
95% 0.43%
99% 0.66%
99.9% 0.98%
For determining the confidence region at level Ȗ for pC we only make use of the observations in grade C because by (1) there is no obvious upper bound for pC. Hence the confidence region at level Ȗ
for pC consists of those values of pC that satisfy the inequality 1 J d 1 pC nC
Equivalently, the confidence region for pC can be described by
V. Estimating Probabilities of Default for Low Default Portfolios
pC d 1 1 J 1 / nC
Coming back to our example (4), Table 3 lists some values of confidence levels Ȗ with the corresponding maximum values (upper confidence bounds) pˆ C of pC such that (8) is still fulfilled. Table 3.
Upper confidence bound pˆ C of pC as a function of the confidence level Ȗ. No defaults observed, frequencies of obligors in grades given in (4).
Ȗ pˆ C
50% 0.23%
75% 0.46%
90% 0.76%
95% 0.99%
99% 1.52%
99.9% 2.28%
Comparison of Tables 1, 2 and 3 shows that – besides the confidence level Ȗ – the applicable sample size is a main driver of the upper confidence bound. The smaller the sample size, the greater will
be the upper confidence bound. This is not an undesirable effect, because intuitively the credit-worthiness ought to be the better, the greater the number of obligors in a portfolio without any
default observation. As the results presented so far seem plausible, we suggest using upper confidence bounds as described by (3), (6) and (8) as estimates for the PDs in portfolios without observed
defaults. The case of three rating grades we have considered in this section can readily be generalized to an arbitrary number of grades. We do not present the details here. However, the larger the
number of obligors in the entire portfolio, the more often some defaults will occur in some grades at least, even if the general quality of the portfolio is very high. This case is not covered by
(3), (6) and (8). In the following section, we will show – still keeping the assumption of independence of the default events – how the most prudent estimation methodology can be adapted to the case
of a non-zero but still low number of defaults.
3. Example: Few Defaults, Assumption of Independence We consider again the portfolio from Section 2 with the frequencies nA, nB, and nC. In contrast to Section 2, this time we assume that during the
last period no default was observed in grade A, two defaults were observed in grade B, and one default was observed in grade C. As in Section 2, we determine a most prudent confidence region for the
PD pA of A. Again, we do so by assuming that the PDs of the three grades are equal. This allows us to treat the entire portfolio as a homogeneous sample of size nA + nB + nC. Then the probability of
observing not more than three defaults is given by the expression
Katja Pluto and Dirk Tasche 3
§ n A n B nC · i ¸¸ p A (1 p A ) n A nB nC i i ¹ 0
¦ ¨¨© i
Expression (9) follows from the fact that the number of defaults in the portfolio is binomially distributed as long as the default events are independent. As a consequence of (9), the confidence
region3 at level Ȗ for pA is given as the set of all the values of pA that satisfy the inequality 3
1 J d
§ n A n B nC · i ¸¸ p A (1 p A ) n A nB nC i i ¹ 0
¦ ¨¨© i
The tail distribution of a binomial distribution can be expressed in terms of an appropriate beta distribution function. Thus, inequality (10) can be solved analytically4 for pA. For details, see
Appendix A. If we assume again that the obligors' numbers per grade are as in (4), Table 4 shows maximum solutions pˆ A of (10) for different confidence levels Ȗ. Table 4. Upper confidence bound pˆ A
of pA as a function of the confidence level Ȗ. No default observed in grade A, two defaults observed in grade B, one default observed in grade C, frequencies of obligors in grades given in (4).
Ȗ pˆ A
50% 0.46%
75% 0.65%
90% 0.83%
95% 0.97%
99% 1.25%
99.9% 1.62%
Although in grade A no defaults were observed, the three defaults that occurred during the observation period enter the calculation. They affect the upper confidence bounds, which are much higher
than those in Table 1. This is a consequence of the precautionary assumption pA = pB = pC. However, if we alternatively considered grade A alone (by re-evaluating (8) with nA = 100 instead of nC), we
would obtain an upper confidence bound of 1.38% at level Ȗ = 75%. This value is still much higher than the one that has been calculated under the precautionary assumption pA = pB = pC – a consequence
of the low frequency of obligors in grade A in this example. Nevertheless, we see that the methodology described by (10) yields fairly reasonable results. In order to determine the confidence region
at level Ȗ for pB, as in Section 2, we assume that pB takes its greatest possible value according to (1), i.e. that we have pB = pC. In this situation, we have a homogeneous portfolio with nB + nC
obligors, PD pB, and three observed defaults. Analogous to (9), the probability of observing no more than three defaults in one period then can be written as: 3
We calculate the simple and intuitive exact Clopper-Pearson interval. For an overview of this approach, as well as potential alternatives, see Brown et al. (2001). Alternatively, solving directly
(10) for pA by means of numerical tools is not too difficult either (see Appendix A, Proposition A.1, for additional information).
V. Estimating Probabilities of Default for Low Default Portfolios
§ n B nC · i ¸¸ p B (1 p B ) nB nC i i ¹ 0
¦ ¨¨© i
Hence, the confidence region at level Ȗ for pB turns out to be the set of all the admissible values of pB which satisfy the inequality 3
1 J d
§ n B nC · i ¸¸ p B (1 p B ) nB nC i i ¹ 0
¦ ¨¨© i
By analytically or numerically solving (12) for pB – with frequencies of obligors in the grades as in (4) – we obtain Table 5 with some maximum solutions pˆ B of (12) for different confidence levels
Ȗ. Table 5. Upper confidence bound pˆ B of pB as a function of the confidence level Ȗ. No default observed in grade A, two defaults observed in grade B, one default observed in grade C, frequencies
of obligors in grades given in (4).
50% 0.52%
Ȗ pˆ B
75% 0.73%
90% 0.95%
95% 1.10%
99% 1.43%
99.9% 1.85%
From the given numbers of defaults in the different grades, it becomes clear that a stand-alone treatment of grade B would yield still much higher values5 for the upper confidence bounds. The upper
confidence bound 0.52% of the confidence region at level 50% is almost identical with the naïve frequency based PD estimate 2/400 = 0.5% that could alternatively have been calculated for grade B in
this example. For determining the confidence region at level J for the PD pC, by the same rationale as in Section 2, the grade C must be considered a stand-alone portfolio. According to the
assumption made in the beginning of this section, one default occurred among the nC obligors in C. Hence we see that the confidence region for pC is the set of all admissible values of pC that
satisfy the inequality 1
1 J d
§ nC ·
i C (1
¦ ¨¨© i ¸¸¹ p
pC ) nC i
(1 pC ) nC nC pC 1 pC nC 1
i 0
For obligor frequencies as assumed in example (4), Table 6 exhibits some maximum solutions6 pˆ C of (13) for different confidence levels J.
At level 99.9%, e.g., 2.78% would be the value of the upper confidence bound. If we had assumed that two defaults occurred in grade B but no default was observed in grade C, then we would have
obtained smaller upper bounds for pC than for pB. As this is not a desirable effect, a possible - conservative - work-around could be to increment the number of defaults in grade C up to the point
where pC would take on a greater value
Katja Pluto and Dirk Tasche
Table 6. Upper confidence bound pˆ C of pC as a function of the confidence level Ȗ. No default observed in grade A, two defaults observed in grade B, one default observed in grade C, frequencies of
obligors in grades given in (4).
Ȗ pˆ C
50% 0.56%
75% 0.90%
90% 1.29%
95% 1.57%
99% 2.19%
99.9% 3.04%
So far, we have described how to generalize the methodology from Section 2 to the case where non-zero default frequencies have been recorded. In the following section we investigate the impact of
non-zero default correlation on the PD estimates that are effected by applying the most prudent estimation methodology.
4. Example: Correlated Default Events In this section, we describe the dependence of the default events with the onefactor probit model7 that was the starting point for developing the risk weight
functions given in BCBS (2004a)8. First, we use the example from Section 2 and assume that no default at all was observed in the whole portfolio during the last period. In order to illustrate the
effects of correlation, we apply the minimum value of the asset correlation that appears in the Basel II corporate risk weight function. This minimum value is 12% (see BCBS, 2004a, § 272). Our model,
however, works with any other correlation assumption as well. Likewise, the most prudent estimation principle could potentially be applied to other models than the Basel II type credit risk model as
long as the inequalities can be solved for pA, pB and pC, respectively. Under the assumptions of this section, the confidence region at level J for pA is represented as the set of all admissible
values of pA that satisfy the inequality (cf. Bluhm et al., 2003, Sections 2.1.2 and 2.5.1 for the derivation) § § ) 1 p A U y · · ¸¸ 1 J d M ( y )¨1 )¨ ¨ ¸¸ ¨ f 1 U ¹¹ © ©
n A nB nC
d y,
where M and ) stand for the standard normal density and standard normal distribution function, respectively. )--1 denotes the inverse function of ) and U is the asset correlation (here U is chosen as
U = 12%). Similarly to (2), the right-hand side
than pB. Nevertheless, in this case one would have to make sure that the applied rating system yields indeed a correct ranking of the obligors. According to De Finetti’s theorem (see, e.g., Durrett
(1996), Theorem 6.8), assuming one systematic factor only is not very restrictive. See Gordy (2003) and BCBS (2004b) for the background of the risk weight functions. In the case of non-zero realized
default rates Balthazar (2004) uses the one-factor model for deriving confidence intervals of the PDs.
V. Estimating Probabilities of Default for Low Default Portfolios
of inequality (14) tells us the one-period probability of not observing any default among nA+ nA + nA obligors with average PD pA. Solving9 Equation (14) numerically10 for the frequencies as given in
(4) leads to Table 7 with maximum solutions pˆ A of (14) for different confidence levels J. Table 7. Upper confidence bounds pˆ A of pA, pˆ B of pB and pˆ C of pC as a function of the confidence
level Ȗ. No defaults observed, frequencies of obligors in grades given in (4). Correlated default events.
Ȗ pˆ A pˆ B
50% 0.15%
75% 0.40%
90% 0.86%
95% 1.31%
99% 2.65%
99.9% 5.29%
pˆ C
Comparing the values from the first line of Table 7 with Table 1 shows that the impact of taking care of correlations is moderate for the low confidence levels 50% and 75%. The impact is much higher
for the levels higher than 90% (for the confidence level 99.9% the bound is even six times larger). This observation reflects the general fact that introducing unidirectional stochastic dependence in
a sum of random variables entails a redistribution of probability mass from the centre of the distribution towards its lower and upper limits. The formulae for the estimations of upper confidence
bounds for pB and pC can be derived analogously to (14) (in combination with (5) and (7)). This yields the inequalities § § ) 1 p B U y · · ¸¸ 1 J d M ( y )¨1 )¨ ¨ ¸¸ ¨ f 1 U ¹¹ © ©
nB nC
and n
C § § ) 1 p C U y · · ¨ ¸ ¸ d y, 1 J d M ( y ) 1 )¨ ¨ ¨ ¸¸ f U 1 © ¹¹ ©
See Appendix A, Proposition A.2, for additional information. Taking into account correlations entails an increase in numerical complexity. Therefore, it might seem to be more efficient to deal with
the correlation problem by choosing an appropriately enlarged confidence level in the independent default events approach as described in Sections 2 and 3. However, it remains open how a confidence
level for the uncorrelated case, that “appropriately” adjustments for the correlations, can be derived. 10 The more intricate calculations for this paper were conducted by means of the software
package R (cf. R Development Core Team, 2003).
Katja Pluto and Dirk Tasche
to be solved for pB and pC respectively. The numerical calculations with (15) and (16) do not deliver additional qualitative insights. For the sake of completeness, however, the maximum solutions pˆ
B of (15) and pˆ C of (16) for different confidence levels J are listed in lines 2 and 3 of Table 7, respectively. Secondly, we apply our correlated model to the example from Section 3 and assume
that three defaults were observed during the last period. Analogous to Equations (9), (10) and (14), the confidence region at level J for pA is represented as the set of all values of pA that satisfy
the inequality 1 J d z y
M ( y ) z y dy,
§ n A n B nC · ¨¨ ¸¸G ( p A , U , y ) i 1 G ( p A , U , y ) n A nB nC i , i ¹ 0©
¦ i
where the function G is defined by
G ( p, U , y )
§ ) 1 ( p) U y · ¸. )¨ ¸ ¨ 1 U ¹ ©
Solving (17) for pˆ A with obligor frequencies as given in (4), and the respective modified equations for pˆ B and pˆ C yields the results presented in Table 8. Table 8. Upper confidence bounds pˆ A
of pA, pˆ B of pB and pˆ C of pC as a function of the confidence level Ȗ. No default observed in grade A, two defaults observed in grade B, one default observed in grade C, frequencies of obligors in
grades given in (4). Correlated default events.
Ȗ pˆ A pˆ B
50% 0.72%
75% 1.42%
90% 2.50%
95% 3.42%
99% 5.88%
99.9% 10.08%
pˆ C
Not surprisingly, as shown in Table 8 the maximum solutions for pˆ A , pˆ B and pˆ C increase if we introduce defaults in our example. Other than that, the results do not deliver essential additional
V. Estimating Probabilities of Default for Low Default Portfolios
5. Potential Extension: Calibration by Scaling Factors One of the drawbacks of the most prudent estimation principle is that in the case of few defaults, the upper confidence bound PD estimates for
all grades are higher than the average default rate of the overall portfolio. This phenomenon is not surprising, given that we include all defaults of the overall portfolio in the upper confidence
bound estimation even for the highest rating grade. However, these estimates might be regarded as too conservative by some practitioners. A potential remedy would be a scaling11 of all of our
estimates towards the central tendency (the average portfolio default rate). We introduce a scaling factor K to our estimates such that the overall portfolio default rate is exactly met, i.e. pˆ A n
A pˆ B n B pˆ C nC K n A n B nC
PDPortfolio .
A, B, C.
The new, scaled PD estimates will then be pˆ X , scaled
K pˆ X ,
The results of the application of such a scaling factor to our “few defaults” examples of Sections 3 and 4 are shown in Tables 9 and 10, respectively. Table 9. Upper confidence bound pˆ A,scaled of
pA, pˆ B ,scaled of pB and pˆ C ,scaled of pC as a function of the confidence level Ȗ after scaling to the central tendency. No default observed in grade A, two defaults observed in grade B, one
default observed in grade C, frequencies of obligors in grades given in (4). Uncorrelated default events.
Ȗ Central Tendency K pˆ A pˆ B pˆ C
0.71 0.33%
0.48 0.31%
0.35 0.29%
0.30 0.29%
0.22 0.28%
0.17 0.27%
The average estimated portfolio PD will now fit exactly the overall portfolio central tendency. Thus, we remove all conservatism from our estimations. Given the poor default data base in typical
applications of our methodology, this might be seen as a disadvantage rather than an advantage. By using the most prudent estimation principle to derive “relative” PDs before scaling them down to the
final re-
A similar scaling procedure has recently been suggested by Cathcart and Benjamin (2005).
Katja Pluto and Dirk Tasche
sults, we preserve the sole dependence of the PD estimates upon the borrower frequencies in the respective rating grades, as well as the monotony of the PDs. Table 10. Upper confidence bound pˆ
A,scaled of pA, pˆ B ,scaled of pB and pˆ C ,scaled of pC as a function of the confidence level Ȗ after scaling to the central tendency. No default observed in grade A, two defaults observed in grade
B, one default observed in grade C, frequencies of obligors in grades given in (4). Correlated default events.
Ȗ Central Tendency K pˆ A pˆ B pˆ C
0.46 0.33%
0.23 0.33%
0.13 0.32%
0.09 0.32%
0.05 0.32%
0.03 0.32%
The question of the appropriate confidence level for the above calculations remains. Although the average estimated portfolio PD now always fits the overall portfolio default rate, the confidence
level determines the “distribution” of that rate over the rating grades. In the above example, though, the differences in distribution appear small, especially in the correlated case, such that we
would not explore this issue further. The confidence level could, in practice, be used to control the spread of PD estimates over the rating grades – the higher the confidence level, the higher the
spread. Table 11. Upper confidence bound pˆ A,scaled of pA, pˆ B ,scaled of pB and pˆ C ,scaled of pC as a function of the confidence level Ȗ after scaling to the upper confidence bound of the
overall portfolio PD. No default observed in grade A, two defaults observed in grade B, one default observed in grade C, frequencies of obligors in grades given in (4). Uncorrelated default events.
Ȗ Upper bound for portfolio PD K ˆp A pˆ B pˆ C
0.87 0.40%
0.83 0.54%
0.78 0.65%
0.77 0.74%
0.74 0.92%
0.71 1.16%
However, the above scaling works only if there is a nonzero number of defaults in the overall portfolio. Zero default portfolios would indeed be treated more severely if we continue to apply our
original proposal to them, compared to using scaled PDs for low default portfolios.
V. Estimating Probabilities of Default for Low Default Portfolios
A variant of the above scaling proposal that takes care of both issues is the use of an upper confidence bound for the overall portfolio PD in lieu of the actual default rate. This upper confidence
bound for the overall portfolio PD, incidentally, equals the most prudent estimate for the highest rating grade. Then, the same scaling methodology as described above can be applied. The results of
its application to the few defaults examples as in Tables 9 and 10 are presented in Tables 11 and 12. Table 12. Upper confidence bound pˆ A,scaled of pA, pˆ B ,scaled of pB and pˆ C ,scaled of pC as
a function of the confidence level Ȗ after scaling to the upper confidence bound of the overall portfolio PD. No default observed in grade A, two defaults observed in grade B, one default observed in
grade C, frequencies of obligors in grades given in (4). Correlated default events.
Ȗ Upper bound for portfolio PD K pˆ A pˆ B pˆ C
0.89 0.64%
0.87 1.24%
0.86 2.16%
0.86 2.95%
0.86 5.06%
0.87 8.72%
In contrast to the situation of Tables 9 and 10, in Tables 11 and 12 the overall default rate in the portfolio depends on the confidence level, and we observe scaled PD estimates for the grades that
increase with growing levels. Nevertheless, the scaled PD estimates for the better grades are still considerably lower than the corresponding unscaled estimates from Sections 3 and 4, respectively.
For the sake of comparison, we provide in Appendix B the analogous numerical results for the no default case. The advantage of this latter variant of the scaling approach is that the degree of
conservatism is actively manageable by the appropriate choice of the confidence level for the estimation of the upper confidence bound of the overall portfolio PD. Moreover, it works in the case of
zero defaults and few defaults, and thus does not produce a structural break between both scenarios. Lastly, the results are less conservative than those of our original methodology. Consequently, we
propose to use the most prudent estimation principle to derive “relative” PDs over the rating grades, and subsequently scale them down according to the upper bound of the overall portfolio PD, which
is once more determined by the most prudent estimation principle with an appropriate confidence level.
Katja Pluto and Dirk Tasche
6. Potential Extension: The Multi-period case So far, we have only considered the situation where estimations are carried out on a one year (or one observation period) data sample. In case of a time
series with data from several years, the PDs (per rating grade) for the single years could be estimated and could then be used for calculating weighted averages of the PDs in order to make more
efficient use of the data. By doing so, however, the interpretation of the estimates as upper confidence bounds at some pre-defined level would be lost. Alternatively, the data of all years could be
pooled and tackled as in the one-year case. When assuming cross-sectional and inter-temporal independence of the default events, the methodology as presented in Sections 2 and 3 can be applied to the
data pool by replacing the one-year frequency of a grade with the sum of the frequencies of this grade over the years (analogous for the numbers of defaulted obligors). This way, the interpretation
of the results as upper confidence bounds as well as the frequency-dependent degree of conservatism of the estimates will be preserved. However, when turning to the case of default events which are
cross-sectionally and inter-temporally correlated, pooling does not allow for an adequate modelling. An example would be a portfolio of long-term loans, where in the inter-temporal pool every obligor
would appear several times. As a consequence, the dependence structure of the pool would have to be specified very carefully, as the structure of correlation over time and of cross-sectional
correlation are likely to differ. In this section, we present a multi-period extension of the cross-sectional onefactor correlation model that has been introduced in Section 4. We will take the
perspective of an observer of a cohort of obligors over a fixed interval of time. The advantage of such a view arises from the conceptual separation of time and cross-section effects. Again, we do
not present the methodology in full generality but rather introduce it by way of an example. As in Section 4, we assume that, at the beginning of the observation period, we have got nA obligors in
grade A, nB obligors in grade B, and nC obligors in grade C. In contrast to Section 4, the length of the observation period this time is T >1. We consider only the obligors that were present at the
beginning of the observation period. Any obligors entering the portfolio afterwards are neglected for the purpose of our estimation exercise. Nevertheless, the number of observed obligors may vary
from year to year as soon as any defaults occur. As in the previous sections, we first consider the estimation of the PD pA for grade A. PD in this section denotes a long-term average one-year
probability of default. Working again with the most prudent estimation principle, we assume that the PDs pA, pB, and pC are equal, i.e. pA = pB = pC = p. We assume, similar to Gordy (2003), that a
default of obligor i 1, , N n A n B n C in year t 1, , T is
V. Estimating Probabilities of Default for Low Default Portfolios
triggered if the change in value of their assets results in a value lower than some default threshold c as described below (Equation 22). Specifically, if Vi,t denotes the change in value of obligor
i's assets, Vi,t is given by Vi ,t
U S t 1 U [ i ,t ,
where U stands for the asset correlation as introduced in Section 4, St is the realisation of the systematic factor in year t, and [i,t denotes the idiosyncratic component of the change in value. The
cross-sectional dependence of the default events stems from the presence of the systematic factor St in all the obligors' change in value variables. Obligor i's default occurs in year t if Vi ,1 ! c,
, Vi ,t 1 ! c, Vi ,t d c.
The probability P[Vi, t d c]
pi ,t
is the parameter we are interested to estimate: It describes the long-term average one-year probability of default among the obligors that have not defaulted before. The indices i and t at pi,t can
be dropped because by the assumptions we are going to specify below pi,t will neither depend on i nor on t. To some extent, therefore, p may be considered a through-the-cycle PD. For the sake of
computational feasibility, and in order to keep as close as possible to the Basel II risk weight model, we specify the factor variables St, t = 1,},T, and [i,t, i = 1,},N , t = 1,},T as standard
normally distributed (cf. Bluhm et al., 2003). Moreover, we assume that the random vector (S1,},ST) and the random variables [i,t, i = 1,},N , t = 1,},T are independent. As a consequence, from (21)
it follows that the change in value variables Vi,t are all standard-normally distributed. Therefore, (23) implies that the default threshold12 c is determined by c
) 1 ( p ),
with ) denoting the standard normal distribution function. While the single components St of the vector of systematic factors, generate the cross-sectional correlation of the default events at time
t, their inter-temporal correlation is affected by the dependence structure of the factors S1,…,ST. We further 12
At first sight, the fact that in our model the default threshold is constant over time seems to imply that the model does not reflect the possibility of rating migrations. However, by construction of
the model, the conditional default threshold at time t given the value Vi,t-1 will in general differ from c. As we make use of the joint distribution of the Vi,t, therefore rating migrations are
implicitly taken into account.
Katja Pluto and Dirk Tasche
assume that not only the components but also the vector as a whole is normally distributed. Since the components of the vector are standardized, its joint distribution is completely determined by the
correlation matrix § 1 r1, 2 ¨ ¨ r2,1 1 ¨ ¨ ¨ ¨ © rT ,1
r1,3 r2,3
rT ,T 1
r1,T · ¸ r2,T ¸ ¸. ¸ ¸ ¸ 1 ¹
Whereas the cross-sectional correlation within one year is constant for any pair of obligors, empirical observation indicates that the effect of inter-temporal correlation becomes weaker with
increasing distance in time. We express this distancedependent behaviour13 of correlations by setting in (25) rs ,t
s t
, s, t
1,, T , s z t ,
for some appropriate 0 - 1 to be specified below. Let us assume that within the T years observation period kA defaults were observed among the obligors that were initially graded A, kB defaults among
the initially graded B obligors and kC defaults among the initially graded C obligors. For the estimation of pA according to the most prudent estimation principle, therefore we have to take into
account k= kA+ kB+ kC defaults among N obligors over T years. ˆ of For any given confidence level J, we have to determine the maximum value p all the parameters p such that the inequality 1 J d P>No
more than k defaults observed@
is satisfied – note that the right-hand side of (27) depends on the one-period probability of default p. In order to derive a formulation that is accessible to numerical calculation, we have to
rewrite the right-hand side of (27). The first step is to develop an expression for obligor i's conditional probability to default during the observation period, given a realization of the systematic
factors S1,},ST. From (21), (22), (24) and by using the conditional independence of the Vi ,1 , , Vi ,T given the systematic factors, we obtain
Blochwitz et al. (2004) proposed the specification of the inter-temporal dependence structure according to (26) for the purpose of default probability estimation.
V. Estimating Probabilities of Default for Low Default Portfolios
P>Obligor i defaults | S1 , , S T @ P ª« min Vi ,t d ) 1 ( p ) | S1 , , S T º» ¬t 1,,T ¼
1 P [ i ,1 ! G ( p, U , S1 ), , [ i ,T ! G ( p, U , S T ) | S1 , , S T
1 G( p, U , S ) , t
t 1
where the function G is defined as in (18). By construction, in the model all the probabilities P[Obligor i defaults | S1 ,..., S T ] are equal, so that, for any of the is, we can define
S ( S1,..., ST )
P[Obligor i defaults | S1,..., ST ] T
(1 G( p, U , S )) t
t 1
Using this abbreviation, we can write the right-hand side of (27) as P[No more than k defaults observed] k
¦ E[P[Exactly l obligors default | S ,..., S 1
l 0
¦ ¨¨© l ¸¸¹E[S (S ,..., S 1
) l (1 S ( S1 ,..., ST )) N l ].
l 0
The expectations in (30) are expectations with respect to the random vector (S1, …,ST) and have to be calculated as T-dimensional integrals involving the density of the T-variate standard normal
distribution with correlation matrix given by (25) and (26). When solving (27) for pˆ , we calculated the values of these Tdimensional integrals by means of Monte-Carlo simulation, taking advantage
of the fact that the term k
¦ ¨¨© l ¸¸¹E[S (S ,..., S 1
) l (1 S ( S1 ,..., S T )) N l ]
l 0
can be efficiently evaluated by making use of (32) of Appendix A. In order to present some numerical results for an illustration of how the model works, we have to fix a time horizon T and values for
the cross-sectional correlation ȡ and the inter-temporal correlation parameter -. We choose T = 5 as BCBS (2004a) requires the credit institutions to base their PD estimates on a time series with
minimum length five years. For ȡ, we chose ȡ = 0.12 as in Section 4, i.e.
Katja Pluto and Dirk Tasche
again a value suggested by BCBS (2004a). Our feeling is that default events with a five years time distance can be regarded as being nearly independent. Statistically, this statement might be
interpreted as something like “the correlation of S1 and S5 is less than 1%”. Setting - = 0.3, we obtain corr[S1,},ST] = - 4 = 0.81%. Thus, the choice - = 0.3 seems reasonable. Note that our choices
of the parameters are purely exemplary, as to some extent choosing the values of the parameters is rather a matter of taste or judgement or of decisions depending on the available data or the purpose
of the estimations. Table 13 shows the results of the calculations for the case where no defaults were observed during five years in the whole portfolio. The results for all the three grades are
summarized in one table. To arrive at these results, (27) was first evaluated with N = nA + nB + nC, then with N = nB + nC, and finally with N = nC. In all three cases we set k = 0 in (30) in order
to express that no defaults were observed. Not surprisingly, the calculated confidence bounds are much lower than those presented as in Table 7, thus demonstrating the potentially dramatic effect of
exploiting longer observation periods. Table 13. Upper confidence bounds pˆ A of pA, pˆ B of pB and pˆ C of pC as a function of the confidence level Ȗ. No defaults during 5 years observed,
frequencies of obligors in grades given in (4). Cross-sectionally and inter-temporally correlated default events.
Ȗ pˆ A pˆ B
50% 0.03%
75% 0.06%
90% 0.11%
95% 0.16%
99% 0.30%
99.9% 0.55%
pˆ C
For Table 14 we conducted essentially the same computations as for Table 13, the difference being that we assumed that over five years kA = 0, defaults were observed in grade A, kB =2 defaults were
observed in grade B, and kC = 1 defaults were observed in grade C (as in Sections 3 and 4 during one year). Consequently, we set k = 3 in (30) for calculating the upper confidence bounds for pA and
pB, as well as k = 1 for the upper confidence bounds of pC. Compared with the results presented in Table 8, we observe again the very strong effect of taking into account a longer time series. Table
14. Upper confidence bounds pˆ A of pA, pˆ B of pB and pˆ C of pC as a function of the confidence level Ȗ. During 5 years, no default observed in grade A, two defaults observed in grade B, one
default observed in grade C, frequencies of obligors in grades given in (4). Cross-sectionally and inter-temporally correlated default events.
Ȗ pˆ A pˆ B
50% 0.12%
75% 0.21%
90% 0.33%
95% 0.43%
99% 0.70%
99.9% 1.17%
pˆ C
V. Estimating Probabilities of Default for Low Default Portfolios
7. Potential Applications The most prudent estimation methodology described in the previous sections can be used for a range of applications, both within a bank and in a Basel II context. In the
latter case, it might be specifically useful for portfolios where neither internal nor external default data are sufficient to meet the Basel requirements. A good example might be Specialized
Lending. In these high-volume, low-number and lowdefault portfolios, internal data is often insufficient for PD estimations per rating category, and might indeed even be insufficient for central
tendency estimations for the entire portfolio (across all rating grades). Moreover, mapping to external ratings – although explicitly allowed in the Basel context and widely used in bank internal
applications – might be impossible due to the low number of externally rated exposures. The (conservative) principle of the most prudent estimation could serve as an alternative to the Basel slotting
approach, subject to supervisory approval. In this context, the proposed methodology might be interpreted as a specific form of the Basel requirement of conservative estimations if data is scarce. In
a wider context, within the bank, the methodology might be used for all sorts of low default portfolios. In particular, it could complement other estimation methods, whether this be mapping to
external ratings, the proposals by Schuermann and Hanson (2004) or others. As such, we see our proposed methodology as an additional source for PD calibrations. This should neither invalidate nor
prejudge a bank's internal choice of calibration methodologies. However, we tend to believe that our proposed methodology should only be applied to whole rating systems and portfolios. One might
think of calibrating PDs of individual low default rating grades within an otherwise rich data structure. Doing so almost unavoidably leads to a structural break between average PDs (data rich rating
grades) and upper PD bounds (low default rating grades) which makes the procedure appear infeasible. Similarly, we believe that the application of the methodology for backtesting or similar
validation tools would not add much additional information. For instance, purely expert-based average PDs per rating grade would normally be well below our proposed quantitative upper bounds.
8. Open Issues For potential applications, a number of important issues need to be addressed: x Which confidence levels are appropriate? The proposed most prudent estimate could serve as a
conservative proxy for average PDs. In determining the confidence level, the impact of a potential underestimation of these average PDs should be taken into account. One might think that the
transformation of average PDs into some kind of “stress” PDs, as done in the Basel II and many other
Katja Pluto and Dirk Tasche
credit risk models, could justify rather low confidence levels for the PD estimation in the first place (i.e. using the models as providers of additional buffers against uncertainty). However, this
conclusion would be misleading, as it mixes two different types of “stresses”: the Basel II model “stress” of the single systematic factor over time, and the estimation uncertainty “stress” of the PD
estimations. Indeed, we would argue for moderate confidence levels when applying the most prudent estimation principle, but for other reasons. The most common alternative to our methodology, namely
deriving PDs from averages of historical default rates per rating grade, yields a comparable probability that the true PD will be underestimated. Therefore, high confidence levels in our methodology
would be hard to justify. x At which number of defaults should users deviate from our methodology and use “normal” average PD estimation methods, at least for the overall portfolio central tendency?
Can this critical number be analytically determined? x If the relative number of defaults in one of the better ratings grades is significantly higher than those in lower rating grades (and within low
default portfolios, this might happen with only one or two additional defaults), then our PD estimates may turn out to be non-monotone. In which cases should this be taken as an indication of an
incorrect ordinal ranking? Certainly, monotony or nonmonotony of our upper PD bounds does not immediately imply that the average PDs are monotone or non-monotone. Under which conditions would there
be statistical evidence of a violation of the monotony requirement for the PDs? Currently, we do not have definite solutions to above issues. We believe, though, that some of them will involve a
certain amount of expert judgment rather than analytical solutions. In particular, that might be the case with the first item. If our proposed approach would be used in a supervisory – say Basel II –
context, supervisors might want to think about suitable confidence levels that should be consistently applied.
9. Conclusions In this article, we have introduced a methodology for estimating probabilities of default in low or no default portfolios. The methodology is based on upper confidence intervals by use
of the most prudent estimation. Our methodology uses all available quantitative information. In the extreme case of no defaults in the entire portfolio, this information consists solely of the
absolute numbers of counterparties per rating grade. The lack of defaults in the entire portfolio prevents reliable quantitative statements on both the absolute level of average PDs per rating grade
as well as on the relative risk increase from rating grade to rating grade. Within the most prudent estimation methodology, we do not use such information. The only additional as-
V. Estimating Probabilities of Default for Low Default Portfolios
sumption used is the ordinal ranking of the borrowers, which is assumed to be correct. Our PD estimates might seem rather high at first sight. However, given the amount of information that is
actually available, the results do not appear out of range. We believe that the choice of moderate confidence levels is appropriate within most applications. The results can be scaled to any
appropriate central tendency. Additionally, the multi-year context as described in Section 6 might provide further insight.
References Balthazar L (2004), PD Estimates for Basel II, Risk, April, 84-85. Basel Committee on Banking Supervision (BCBS) (2004a), Basel II: International Convergence of Capital Measurement and
Capital Standards: a Revised Framework. http://www.bis.org/publ/bcbs107.htm Basel Committee on Banking Supervision (BCBS) (2004b), An Explanatory Note on the Basel II IRB Risk Weight Functions. http:
//www.bis.org Basel Committee on Banking Supervision (BCBS) (2004c), Studies on the Validation of Internal Rating Systems, Working Paper No. 14. British Bankers' Association (BBA), London Investment
Banking Association (LIBA) and International Swaps and Derivatives Association (ISDA) (2004), The IRB Approach for Low Default Portfolios (LDPs) – Recommendations of the Joint BBA, LIBA, ISDA
Industry Working Group, Discussion Paper. http://www.isda.org/speeches/ pdf/ISDA-LIBA-BBA-LowDefaulPortfolioPaper080904-paper.pdf Blochwitz S, Hohl S, Tasche D, Wehn C (2004), Validating Default
Probabilities on Short Time Series. Capital & Market Risk Insights, Federal Reserve Bank of Chicago, December 2004. http://www.chicagofed.org/banking_information/capital_and_ market_risk_insights.cfm
Bluhm C, Overbeck L, Wagner C (2003), An Introduction to Credit Risk Modeling, Chapman & Hall / CRC, Boca Raton. Brown L, Cai T, Dasgupta A (2001), Interval Estimation for a Binomial Proportion.
Statistical Science, 16, (2), 101-133. Cathcart A, Benjamin N (2005), Low Default Portfolios: A Proposal for conservative PD estimation. Discussion Paper, Financial Services Authority. Durrett R
(1996), Probability: Theory and Examples, Second Edition, Wadsworth, Belmont. Gordy M (2003), A Risk-Factor Model Foundation for Ratings-Based Bank Capital Rules. Journal of Financial Intermediation
12 (3), 199-232. Hinderer K (1980), Grundbegriffe der Wahrscheinlichkeitstheorie. Zweiter korrigierter Nachdruck der ersten Auflage, Springer, Berlin. Jafry Y and Schuermann T (2004), Measurement,
estimation, and comparison of credit migration matrices, Journal of Banking & Finance 28, 2603-2639. R Development Core Team (2003), R: A Language and Environment for Statistical Computing, R
Foundation for Statistical Computing, Vienna. http://www.R-project.org
Katja Pluto and Dirk Tasche
Schuermann T and Hanson S (2004), Estimating Probabilities of Default, Staff Report no.190, Federal Reserve Bank of New York. Vasicek O (1997), The Loan Loss Distribution. Working Paper, KMV
Appendix A This appendix provides additional information on the analytical and numerical solutions of Equations (10) and (14). Analytical solution of Equation (10). If X is a binomially distributed
random variable with size parameter n and success probability p, then for any integer 0 k n, we have 1
§n· i ¨¨ ¸¸ p (1 p) ni i 0© ¹
¦ i
P[ X d k ] 1 P[Y d p]
³t ³t p 1
(1 t ) n k 1 dt
(1 t ) nk 1 dt
with Y denoting a beta distributed random variable with parameters Į = k+1 and ȕ = n-k (see, e.g., Hinderer (1980), Lemma 11.2). The beta distribution function and its inverse function are available
in standard numerical tools, e.g. in Excel. Direct numerical solution of Equation (10). The following proposition shows the existence and uniqueness of the solution of (10), and, at the same time,
provides initial values for the numerical root-finding (see (35)). Proposition A.1. Let 0 k < n be integers, and define the function f n,k : (0,1) o R by k
f n,k ( p)
¦ ¨¨© i ¸¸¹ p (1 p)
n i
, p (0,1)
i 0
Fix some 0 < v y P f t @2 ½° § d t · d n d exp® t ¾ ¨¨ ¸¸ S f t t >1 S f t @ t t V ft 2 S °¯ 2 >V f t @2 °¿ © nt ¹ 1
Note, g(.) depends on the unknown parameters of the default and the recovery process. Since the common factor is not observable we establish the unconditional density
VI. A Multi-Factor Approach for Systematic Default and Recovery Risk
g d t , y t f
° > y P f t @2 ½° § d t · ¨ ¸ S f t d t >1 S f t @nt d t d ) f t exp® t 2 ¾ ¨n ¸ V f S 2 ° ° > @ 2 V f t t ¿ © t¹ ¯ f
Observing a time series with T periods leads to the final unconditional loglikelihood function T
¦ lng d , y
l P , b, c, w, U
t 1
This function is optimized with respect to the unknown parameters. In the appendix we demonstrate the performance of the approach by Monte-Carlo simulations. For the second type of models which
include macroeconomic risk factors, we replace S f t from (4) by S * z tD1 , f t from (11) and P f t from (15) by
E 0 ȕ' z tR1 b U f t and obtain the log-likelihood l E 0 , ȕ , b, J 0 , Ȗ , w, U .
3. Data and Results
3.1. The Data
The empirical analysis is based on the global corporate issuer default rates and issue recovery rates (cf. Moody’s, 2005). In this data set, default rates are calculated as the ratio of defaulted and
total number of rated issuers for a given period. According to Moody’s (2005), a default is recorded if x Interest and/or principal payments are missed or delayed, x Chapter 11 or Chapter 7
bankruptcy is filed, or x Distressed exchange such as a reduction of the financial obligation occurs. Most defaults are related to publicly traded debt issues. Therefore, Moody’s defines a recovery
rate as the ratio of the price of defaulted debt obligations after 30 days of the occurrence of a default event and the par value. The recovery rates are published for different levels of seniority
such as total (Total), senior secured (S_Sec), senior unsecured (S_Un), senior subordinated (S_Sub), subordinated (Sub) and junior subordinated debt. We excluded the debt category junior subordinated
from the analysis due to a high number of missing values. In addition, the composite indices published by The Conference Board (www.tcbindicators.org) were chosen as macroeconomic systematic risk
drivers, i.e., the
Daniel Rösch and Harald Scheule
x Index of 4 coincident indicators (COINC) which measures the health of the U.S. economy. The index includes the number of employees on nonagricultural payrolls, personal income less transfer
payments, index of industrial production and manufacturing as well as trade sales. x Index of 10 leading indicators (LEAD) which measures the future health of the U.S. economy. The index includes
average weekly hours in manufacturing, average weekly initial claims for unemployment insurance, manufacturers’ new orders of consumer goods and materials, vendor performance, manufacturers’ new
orders of non-defence capital goods, building permits for new private housing units, stock price index, money supply, interest rate spread of 10-year treasury bonds less federal funds and consumer
expectations. The indices are recognized as indicators for the U.S. business cycle. Note that for the analysis, growth rates of the indices were calculated and lagged by three months. Due to a
limited number of defaults in previous years, the compiled data set was restricted to the period 1985 to 2004 and split into an estimation sample (1985 to 2003) and a forecast sample (2004). Table 1
and Table 2 include descriptive statistics and Bravais-Pearson correlations for default rates, recovery rates and time lagged macroeconomic indicators of the data set. Note that default rates are
negatively correlated with the recovery rates of different seniority classes and macroeconomic variables. Table 1. Descriptive statistics of the variables Variable Default Rate Recovery Rate (Total)
Recovery Rate (S_Sec) Recovery Rate (S_Un) Recovery Rate (S_Sub) Recovery Rate (Sub) COINC LEAD
Std. Dev. 0.0103
0.0215 0.0130
0.0245 0.0154
0.0409 0.0336
-0.0165 -0.0126
0.0160 0.0151
-0.9365 -0.4568
3.0335 1.9154
VI. A Multi-Factor Approach for Systematic Default and Recovery Risk
Table 2. Bravais-Pearson correlations of variables Variable Default Rate Recovery Rate (Total) Recovery Rate (S_Sec) Recovery Rate (S_Un) Recovery Rate (S_Sub) Recovery Rate (Sub) COINC LEAD
Default Rate 1.00
0.28 1.00
Figure 1 shows that both, the default and recovery rate fluctuate over time in opposite directions. This signals that default and recovery rates show a considerable share of systematic risk which can
be explained by time varying variables. 0.05
Frequency default rate
0.03 0.40
0.03 0.02
0.02 0.20
Frequency recovery rate
0.01 0.10
0.01 0.00
Year Default Rate
Recovery Rate (Total)
Figure 1. Moody's default rate vs. recovery rate
Figure 2 contains similar graphs for the recovery rates of the different seniority classes. Note that the recovery rates increase with the seniority of a debt issue and show similar patterns over
time. This indicates that they may be driven by the same or similar systematic risk factors.
Daniel Rösch and Harald Scheule
0.90 0.80 0.70
0.60 0.50 0.40 0.30 0.20 0.10 0.00 1985
Year Recovery Rate (S_Sec)
Recovery Rate (S_Un)
Recovery Rate (S_Sub)
Recovery Rate (Sub)
Figure 2. Moody's recovery rates by seniority class
Next to the business cycle and the seniority, it is plausible to presume that recovery rates depend on the industry, the collateral type, the legal environment, default criteria as well as the credit
quality associated with an obligor. Tables 3 and 4 show the recovery rates for different industries and issuer credit ratings (cf. Moody’s, 2004, 2005). Refer to these documents for a more detailed
analysis of the properties of recovery rates. 3.2. Estimation Results
Based on the described data set, two models were estimated: x Model without macroeconomic risk factors (equations (4) and (9)): we refer to this model as a through-the-cycle model because the
forecast default and recovery rate equal the historic average from 1985 to 2003; x Model with macroeconomic risk factors (equations (11) and (13)): we refer to this model as a point-in-time model
because the forecast default and recovery rates fluctuate over time. Within the credit risk community, a discussion on the correct definition of a through-the-cycle and point-in-time model exists, in
which the present article does not intend to participate. We use these expressions as stylized denominations, being aware that other interpretations of these rating philosophies may exist (cf.
Heitfield, 2005).
VI. A Multi-Factor Approach for Systematic Default and Recovery Risk
Table 3. Recovery rates for selected industries (Moody’s, 2004)
Industry Utility-Gas Oil Hospitality Utility-Electric Transport-Ocean Media, Broadcasting and Cable Transport-Surface Finance and Banking Industrial Retail Transport-Air Automotive Healthcare
Consumer Goods Construction Technology Real Estate Steel Telecommunications Miscellaneous
Recovery Rate (1982-2003) 0.515 0.445 0.425 0.414 0.388 0.382 0.366 0.363 0.354 0.344 0.343 0.334 0.327 0.325 0.319 0.295 0.288 0.274 0.232 0.395
Table 4. Recovery rates for selected issuer credit rating categories (Moody’s, 2005)
Issuer Credit Rating Aa A Baa Ba B Caa-Ca
Recovery Rate (1982-2004) 0.954 0.498 0.433 0.407 0.384 0.364
Due to the limitations of publicly available data, we use Moody's global default rates, total recoveries, and recoveries by seniority class. Table 5 shows the estimation results for the
through-the-cycle model (4) and (9) and Table 6 for the pointin-time model (11) and (13) using the variables COINC and LEAD as explanatory variables. In the latter model we choose both variables due
to their statistical significance.
Daniel Rösch and Harald Scheule
Table 5. Parameter estimation results for the through-the-cycle model; Annual default and recovery data from 1985 to 2003 is used for estimation; standard errors are in parentheses; *** significant
at 1% level, ** significant at 5% level, * significant at 10% level Parameter
c w
P b
Total -2.0942*** (0.0545) 0.2194*** (0.0366) -0.3650*** (0.0794) 0.3462*** (0.0562) 0.6539*** (0.1413)
S_Sec -2.0951*** (0.0550) 0.2212*** (0.0369) 0.2976** (0.1284) 0.5598*** (0.0908) 0.7049*** (0.1286)
S_Un -2.0966*** (0.0546) 0.2197*** (0.0367) -0.2347* (0.1123) 0.4898*** (0.0795) 0.7520*** (0.1091)
S_Sub -2.0942*** (0.0544) 0.2191*** (0.0366) -0.5739*** (0.0998 0.4351*** (0.0706) 0.5081** (0.1799)
Sub -2.0940*** (0.0549) 0.2210*** (0.0369) -0.8679*** (0.1235) 0.5384*** (0.0873) 0.3979* (0.2013)
Table 6. Parameter estimation results for the point-in-time model; Annual default and recovery data from 1985 to 2003 is used for estimation; standard errors are in parentheses; *** significant at 1%
level, ** significant at 5% level, * significant at 10% level. Parameter
J0 J1 w
E0 E1 b
Total -1.9403*** (0.0524) -8.5211*** (1.8571) COINC 0.1473*** (0.0278) 0.4557*** (0.0867) 7.4191* (4.1423) LEAD 0.3063*** (0.0513) 0.6642*** (0.1715)
S_Sec -1.9484*** (0.05210) -8.1786*** (1.7964) COINC 0.1522*** (0.0286) 0.1607 (0.1382) 11.1867* (6.4208) LEAD 0.4960*** (0.0838) 0.7346*** (0.1520)
S_Un -1.9089*** (0.0603) -10.078*** (2.2618) COINC 0.1485*** (0.0276) -0.5576*** (0.1635) 15.0807** (6.1142) COINC 0.4260*** (0.0691) 0.6675*** (0.1481)
S_Sub -1.9232*** (0.05660) -9.2828*** (2.0736) COINC 0.1483*** (0.0277) -0.6621*** (0.1194) 7.2136 (6.0595) LEAD 0.4071*** (0.0673) 0.4903** (0.2088)
Sub -1.9040*** (0.0609) -10.134*** (2.2884) COINC 0.1508*** (0.0279) -1.1883*** (0.1845) 14.9625** (6.8940) COINC 0.4820*** (0.0279) 0.1033 (0.2454)
First, consider the through-the-cycle model. Since we use the same default rates in each model, the estimates for the default process are similar across models, and consistent to the ones found in
other studies (compare Gordy (2000) or Rösch, 2005). The parameter estimates for the (transformed) recovery process reflect estimates for the mean (transformed) recoveries and their fluctuations over
time. Most important are the estimates for the correlation of the two processes which are positive and similar in size to the correlations between default rates and recovery
VI. A Multi-Factor Approach for Systematic Default and Recovery Risk
rates found in previous studies. Note that this is the correlation between the systematic factor driving the latent default triggering variable ‘asset return’ Sit and the systematic factor driving
the recovery process. Therefore, higher ‘asset returns’ (lower conditional default probabilities) tend to come along with higher recovery. A positive value of the correlation indicates negative
association between defaults and recoveries. The default rate decreases while the recovery rate increases in boom years and vice versa in depression years. Next, consider the point-in-time model. The
default and the recovery process are driven by one macroeconomic variable in each model. The parameters of all macroeconomic variables show a plausible sign. The negative sign of the COINC index in
the default process signals that a positive change of the index comes along with subsequent lower number of defaults. The positive signs of the variables in the recovery process indicate that higher
recoveries follow a positive change in the variable. In addition, most variables are significant at the 10% level. The only exception is the parameter of the macroeconomic index LEAD for the senior
subordinated recovery rate, which indicates only a limited exposure to systematic risk drivers. Note that the influence of the systematic random factor is reduced in each process by the inclusion of
the macroeconomic variable. While we do not mean to interpret these indices as risk drivers themselves, but rather as proxies for the future state of the economy, these variables are able to explain
part of the previously unobservable systematic risk. The remaining systematic risk is reflected by the size of w and b and is still correlated but cannot be explained by our proxies. Once the point
estimates for the parameters are given, we forecast separately the defaults and recoveries for year 2004. Table 7 shows that the point-in-time model leads to forecasts for the default and recovery
rates that are closer to the realized values than the ones derived from the through-the-cycle model. Table 7. Forecasts and realizations for year 2004 (through-the-cycle versus point-in-time)
Default Rate Forecast TTC Forecast PIT Realization
0.0181 0.0162 0.0072
0.0181 0.0162 0.0072
0.0180 0.0160 0.0072
0.0181 0.0162 0.0072
0.0181 0.0162 0.0072
Recovery Rate Forecast TTC Forecast PIT Realization
0.4097 0.4381 0.5850
0.5739 0.6159 0.8080
0.4416 0.4484 0.5010
0.3603 0.3867 0.4440
0.2957 0.3014 0.1230
Daniel Rösch and Harald Scheule
4. Implications for Economic and Regulatory Capital Since the main contribution of our approach lies in the joint modelling of defaults and recoveries, we now apply the forecast default rates,
recovery rates for the year 2004 as well as their estimated correlation to a portfolio of 1,000 obligors. To simplify the process, we take the senior secured class as an example and assume a credit
exposure of one monetary unit for each obligor. Figure 3 and Table 8 compare two forecast loss distributions of the through-thecycle model. To demonstrate the influence of correlation between the
processes we compare the distribution which assumes independence to the distribution which is based on the estimated correlation between the default and recovery rate transformations of 0.7049.
Economic capital or the credit portfolio risk is usually measured by higher percentiles of the simulated loss variable such as the 95-, 99-, 99.5- or 99.9- percentile (95%-, 99%-, 99.5%- or
99.9%-Value-at-Risk). It can be seen that these percentiles are considerably higher if correlations between default and recovery rates are taken into account. If we take the 99.9%-Value-at-Risk as an
example, the percentile under dependence exceeds the percentile under independence by approximately 50 percent. In other words, if dependencies are not taken into account, which is a common feature
in many of today’s credit risk models, the credit portfolio risk is likely to be seriously underestimated.
Loss Distribution
Loss Figure 3. Loss distributions for the through-the-cycle model (S_Sec)
VI. A Multi-Factor Approach for Systematic Default and Recovery Risk
Table 8. Descriptive statistics of loss distributions for the through-the-cycle model; Portfolios contain 1,000 obligors with an exposure of one monetary unit each, 10,000 random samples were drawn
for each distribution with and without correlation between systematic factors
Ind. Factors Corr. Factors
Basel II Basel II Basel II Capital Capital Capital (Stan- (Foun(Addarddation vanced ized) IRB) IRB)
Std. Med dev.
5.59 6.53
7.59 6.62
Forecast default and recovery rates can be used to calculate the regulatory capital for the hypothetical portfolio. For corporate credit exposures, the Basel Committee on Banking Supervision (2004)
allows banks to choose one of the following options: x Standardized approach: regulatory capital is calculated based on the corporate issuer credit rating and results in a regulatory capital between
1.6% and 12% of the credit exposure. The regulatory capital equals 8% of the credit exposure if firms are unrated; x Foundation Internal Ratings Based (IRB) approach: regulatory capital is calculated
based on the forecast default probabilities and a proposed loss given default for senior secured claims of 45% (i.e. a recovery rate of 55%) and for subordinated claims of 75% (i.e. a recovery rate
of 25%). x Advanced IRB approach: regulatory capital is calculated based on the forecast default probabilities and forecast recovery rates. For the through-the-cycle model, the Standardized approach
and the Foundation IRB approach result in a relatively close regulatory capital requirement (80.00 vs. 74.01). The reason for this is that the forecast default rate (0.0181) is close to the historic
average which was used by the Basel Committee when calibrating regulatory capital to the current level of 8%. The Advanced IRB approach leads to a lower regulatory capital (70.08 vs. 74.01) due to a
forecast recovery rate which is higher than the assumption in the Foundation IRB approach (57.39% vs. 55%). Note that Foundation IRB’s recovery rate of 55% is comparable to the average recovery rate
of the senior secured seniority class but is proposed to be applied to both the senior secured (unless admitted collateral is available) as well as the senior unsecured claims. This could indicate an
incentive for banks to favour the Foundation approach over the Advanced IRB approach especially for senior unsecured credit exposures. Similar conclusions can be drawn for the Foundation IRB’s
recovery rate of 25% which will be applied for both senior subordinated as well as subordinated claims.
Daniel Rösch and Harald Scheule
Figure 4 and Table 9 compare the respective loss distributions with and without correlations using the point-in-time model.
Loss Distribution
Loss Figure 4. Loss distributions for the point-in-time model (S_Sec)
Table 9. Descriptive statistics of loss distributions for the point-in-time model. Portfolios contain 1,000 obligors with an exposure of one monetary unit each, 10,000 random samples were drawn for
each distribution with and without correlation between systematic factors
Ind. Factors Corr. Factors
Basel II Basel II Basel II Capital Capital Capital (Stan- (Foun(Addarddation vanced ized) IRB) IRB)
Std. Med dev.
3.61 5.64
4.71 5.64
It can be observed that the economic capital, expressed as Value-at-Risk, is considerably lower for the point-in-time model than for the through-the-cycle model. The reasons are twofold. First, the
inclusion of macroeconomic variables leads to a lower forecast of the default rate (1.62%), a higher forecast of the recovery rate (61.59%) for 2004 and therefore to lower expected losses. Second,
the exposure to unknown random systematic risk sources is reduced by the inclusion of the observable factors. This leads to less uncertainty in the loss forecasts and therefore to lower variability
(measured, e.g., by the standard deviation) of the forecast distri-
VI. A Multi-Factor Approach for Systematic Default and Recovery Risk
bution. Moreover, the regulatory capital is the lowest for the Advanced IRB approach which takes both the forecast default and recovery rate into account. 70 60
asset correlation rho=0
Figure 5. Economic capital gains from decrease in implied asset correlation for correlated risk factors; Figure shows 99.9 percentiles of loss distributions for the senior secured seniority class
depending on asset correlation and correlation of systematic risk factors. Portfolio contains 1,000 obligors each with default probability of 1%, exposure of one monetary unit, and expected recovery
of 50%.
We also notice another important effect. The economic capital, measured by the higher percentiles of the credit portfolio loss, increases if the estimated correlation between the default and recovery
rates is taken into account. This increase is not as dramatic as in the through-the-cycle model, although the correlation between risk factors of defaults and recoveries has slightly increased. The
inclusion of macroeconomic factors renders the systematic unobservable factors less important and diminishes the impact of correlations between both factors. To the extent that recoveries and
defaults are not exposed at all to unobservable random factors, the correlations between these factors are negligible for loss distribution modelling. Figure 5 shows this effect. We assumed constant
exposure of b = 0.5 to the recovery factor and varied the exposure to the systematic factor for the defaults (asset correlation) for given correlation between the systematic factors. The benchmark
case is a correlation of zero between the factors. Here, we notice a reduction of economic capital from 44 (i.e., 4.4% of total exposure) for an asset correlation of 0.1 to 13 (1.3%) when the asset
correlation is zero. In the case of a correlation between the factors of 0.8, the Value-at-Risk is reduced from 61 (6.1%) to 13 (1.3%). Thus, the higher the correlation of the risk factors, the
higher the economic capital gains are from lowering the implied asset correlation by the explanation with observable factors.
Daniel Rösch and Harald Scheule
5. Discussion The empirical analysis resulted in the following insights: 1. Default events and recovery rates are correlated. Based on an empirical data set, we found a positive correlation between
the default events and a negative correlation between the default events and recovery rates. 2. The incorporation of the correlation between the default events and recovery rates increases the
economic capital. As a result, most banks underestimate their economic capital when they fail to account for this correlation. 3. Correlations between defaults decrease when systematic risk drivers,
such as macroeconomic indices are taken into account. In addition, the impact of correlation between defaults and recoveries decreases. 4. As a result, the uncertainty of forecast losses and the
economic capital measured by the percentiles decreases when systematic risk drivers are taken into account. Most empirical studies on recovery rates (including this article) are based on publicly
available data provided by the rating agencies Moody’s or Standard and Poor’s and naturally lead to similar results. The data sets of the rating agencies are biased in the sense that only certain
exposures are taken into account. Typically, large U.S. corporate obligors in capital intensive industries with one or more public debt issues and high credit quality are included. Thus, the findings
can not automatically be transferred to other exposure classes (e.g., residential mortgage or credit card exposures), countries, industries or products. Moreover, the data is limited with regard to
the number of exposures and periods observed. Note that our assumption in (8) of a large number of firms is crucial since it leads to the focus on the mean recovery. If idiosyncratic risk can not be
fully diversified the impact of systematic risk in our estimation may be overstated. Due to the data limitations, we cannot draw any conclusions about the crosssectional distribution of recoveries
which is often stated to be U-shaped (see, e.g., Schuermann, 2003). In this sense, our results call for more detailed analyses, particularly with borrower-specific data which possibly includes
financial ratios or other obligor characteristics and to extend our methodology to a panel of individual data. As a result, we would like to call upon the industry, i.e., companies, banks and
regulators for feedback and a sharing of their experience. In spite of these limitations, this paper provides a robust framework, which allows creditors to model default probabilities and recovery
rates based on certain risk drivers and simultaneously estimates interdependences between defaults and recoveries. It can be applied to different exposure types and associated information levels.
Contrary to competing models, the presence of market prices such as bond or stock prices is not required.
VI. A Multi-Factor Approach for Systematic Default and Recovery Risk
References Altman E, Brady B, Resti A, Sironi A (2003), The Link between Default and Recovery Rates: Theory, Empirical Evidence and Implications, forthcoming Journal of Business. Basel Committee on
Banking Supervision (2004), International Convergence of Capital Measurement and Capital Standards - A Revised Framework, Consultative Document, Bank for International Settlements, June. Cantor R,
Varma P (2005), Determinants of Recovery Rates on Defaulted Bonds and Loans for North American Corporate Issuers: 1983-2003, Journal of Fixed Income 14(4), pp. 29-44. Carey M (1998), Credit Risk in
Private Debt Portfolios, Journal of Finance 53, pp. 13631387. Düllmann K, Trapp M (2004), Systematic Risk in Recovery Rates- An Empirical Analysis of U.S. Corporate Credit Exposures, Discussion
Paper, Series 2: Banking and Financial Supervision, No 02/2004. Frye J (2000), Depressing Recoveries, Risk 3(11), pp. 106-111. Frye J (2003), A False Sense of security, Risk 16(8), pp. 63-67. Gordy M
(2003), A Risk-Factor Model Foundation for Ratings-Based Bank Capital Rules, Journal of Financial Intermediation 12, pp. 199-232. Gordy M (2000), A Comparative Anatomy of Credit Risk Models, Journal
of Banking and Finance 24, pp. 119-149. Hamerle A, Liebig T, Rösch D (2003), Benchmarking Asset Correlations, Risk 16, pp. 77-81. Heitfield A (2005), Dynamics of Rating Systems, in: Basel Committee
on Banking Supervision: Studies on the Validation of Internal Rating Systems, Working Paper No. 14, February, pp. 10-27. Hu T, Perraudin W (2002), The Dependence of Recovery Rates and Defaults,
Working Paper, Birkbeck College. Moody’s (2004), Default and Recovery Rates of Corporate Bond Issuers 1920-2003. Moody’s (2005), Default and Recovery Rates of Corporate Bond Issuers 1920-2004.
Pykhtin M (2003), Unexpected Recovery Risk, Risk, 16(8), pp. 74-78. Rösch D (2005), An Empirical Comparison of Default Risk Forecasts from Alternative Credit Rating Philosophies, International
Journal of Forecasting 21, pp. 37-51. Rösch D (2003), Correlations and Business Cycles of Credit Risk: Evidence from Bankruptcies in Germany, Financial Markets and Portfolio Management 17, pp.
309-331. Rösch D, Scheule H (2004), Forecasting Retail Portfolio Credit Risk, Journal of Risk Finance 5 (2), pp. 16-32. Schönbucher J (2003), Credit Derivatives Pricing Models: Models, Pricing and
Implementation, Jon Wiley and Sons, New York. Schuermann T (2003), What Do We Know About Loss-Given-Default?, Working Paper, Federal Reserve Bank of New York.
Daniel Rösch and Harald Scheule
Appendix: Results of Monte-Carlo Simulations In order to prove the reliability of our estimation method, a Monte-Carlo simulation was set up which comprises four steps: x Step 1: Specify model (1)
and model (9) with a given set of population parameters w, c, b, P, and U. x Step 2: Draw a random time series of length T for the defaults and the recoveries of a portfolio with size N from the true
model. x Step 3: Estimate the model parameters given the drawn data by the MaximumLikelihood method. x Step 4: Repeat Steps 2 and 3 for several iterations. We used 1,000 iterations for different
parameter constellations and obtained 1,000 parameter estimates which are compared to the true parameters. The portfolio consists of 10,000 obligors. The length of the time series T is set to T = 20
years. We fix the parameters at w = 0.2, µ = 0.5, and b = 0.5 and set the correlations between the systematic factors to 0.8, 0.1, and -0.5. In addition, we analyze three rating grades A, B, and C
where the default probabilities and thresholds c in the grades are: x A: S x B: S x C: S
0.005 , i.e., c 2.5758 0.01 , i.e., c 2.3263 0.02 , i.e., c 2.0537
Table 10 contains the results from the simulations. The numbers without brackets contain the average of the parameter estimates from 1,000 simulations. The numbers in round (.)-brackets represent the
sample standard deviation of the estimates (which serve as an approximation for the unknown standard deviation). The numbers in square [.]-brackets give the average of the estimated standard
deviations for each estimate derived by Maximum-Likelihood theory. It can be seen in each constellation that our ML–approach for the joint estimation of the default and recovery process works
considerably well: the averages of the estimates are close to the originally specified parameters. Moreover, the estimated standard deviations reflect the limited deviation for individual iterations.
The small downward bias results from the asymptotic nature of the ML-estimates and should be tolerable for practical applications.
VI. A Multi-Factor Approach for Systematic Default and Recovery Risk Table 10. Results from Monte-Carlo simulations
Grade A
-2.5778 (0.0495) [0.0468] -2.5789 (0.0484) [0.0475] -2.5764 (0.0492) [0.0472] -2.3287 (0.0480) [0.0460] -2.3291 (0.0472) [0.0456] -2.3305 (0.0479) [0.0453] -2.0536 (0.0489) [0.0448] -2.0542 (0.0580)
[0.0448] -2.0554 (0.0510) [0.0443]
0.1909 (0.0338) [0.0317] 0.1936 (0.0336) [0.0322] 0.1927 (0.0318) [0.0320] 0.1927 (0.0327) [0.0306] 0.1906 (0.0306) [0.0305] 0.1900 (0.0324) [0.0303] 0.1935 (0.0315) [0.0297] 0.1943 (0.0382) [0.0298]
0.1923 (0.0359) [0.0295]
0.4991 (0.1112) [0.1070] 0.4970 (0.1154) [0.1079] 0.5048 (0.1116) [0.1078] 0.4999 (0.1104) [0.1084] 0.4927 (0.1105) [0.1080] 0.4988 (0.1115) [0.1074] 0.4972 (0.1104) [0.1080] 0.5030 (0.1168) [0.1085]
0.4998 (0.1085) [0.1076]
0.4784 (0.0776) [0.0756] 0.4824 (0.0788) [0.0763] 0.4826 (0.0798) [0.0763] 0.4852 (0.0774) [0.0765] 0.4831 (0.0778) [0.0764] 0.4805 (0.0806) [0.0759] 0.4855 (0.0804) [0.0763] 0.4851 (0.0782) [0.0770]
0.4833 (0.0852) [0.0766]
0.7896 (0.1085) [0.0912] 0.1139 (0.2269) [0.2185] -0.4956 (0.1923) [0.1697] 0.7951 (0.0920) [0.0856] 0.0861 (0.2330) [0.2152)] -0.4764 (0.1891) [0.1703] 0.7915 (0.0956) [0.0843] 0.1067 (0.2374)
[0.2128] -0.4898 (0.1815) [0.1656]
U 0.8
-0.5 B 0.8
-0.5 C 0.8
VII. Modelling Loss Given Default: A “Point in Time”-Approach Alfred Hamerle, Michael Knapp, Nicole Wildenauer University of Regensburg
1. Introduction In recent years the quantification of credit risk has become an important topic in research and in finance and banking. This has been accelerated by the reorganisation of the Capital
Adequacy Framework (Basel II).1 Previously, researchers and practitioners mainly focused on the individual creditworthiness and thus the determination of the probability of default (PD) and default
correlations. The risk parameter LGD (loss rate given default) received less attention. Historical averages of LGD are often used for practical implementation in portfolio models. This approach
neglects the empirical observation that in times of a recession, not only the creditworthiness of borrowers deteriorates and probabilities of default increase, but LGD also increases. Similar results
are confirmed in the empirical studies by Altman et al. (2003), Frye (2000a), and Hu and Perraudin (2002). If LGD is only integrated in portfolio models with its historical average, the risk tends to
be underestimated. Hence, adequate modelling and quantification of LGD will become an important research area. This has also been advocated by Altman and Kishore (1996), Hamilton and Carty (1999),
Gupton et al. (2000), Frye (2000b), and Schuermann (2004). The definitions of the recovery rate and the LGD have to be considered when comparing different studies of the LGD, since different
definitions also cause different results and conclusions. Several studies distinguish between market LGD, implied market LGD and workout LGD.2 This paper uses recovery rates from Moody’s defined as
market recovery rates. In addition to studies which focus only on data of the bond market or data of bonds and loans,3 there are studies which focus on loans only.4 Loans generally 1 2
Basel Committee on Banking Supervision (2004) For a definition of these values of LGD see Schuermann (2004) and Basel Committee on Banking Supervision (2005) Schuermann (2004) Asarnow and Edwards
(1995), Carty and Lieberman (1996), and Carty et al. (1998)
Alfred Hamerle, Michael Knapp, Nicole Wildenauer
have higher recovery rates and therefore lower values of LGD than bonds.5 This result relies especially on the fact that loans are more senior and in many cases also have more collectible collaterals
than bonds. Studies show different results concerning the factors potentially determining the LGD which are presented briefly below. The literature gives inconsistent answers to the question if the
borrower’s sector has an impact on LGD. Surveys such as Altman and Kishore (1996) confirm the impact of the sector. Gupton et al. (2000) conclude that the sector does not have an influence on LGD.
They trace this finding back to the fact that their study only examines loans and not bonds. The impact of the business cycle is approved by many authors, e.g., Altman et al. (2003). In contrast,
Asarnow and Edwards (1995) conclude that there is no cyclical variation in LGD. Comparing these studies one has to consider that different data sources have been used, and the latter only focused on
loans. Several studies support the influence of the borrower’s creditworthiness or the seniority on LGD.6 Nearly all studies analysing LGD using empirical data calculate the mean of the LGD per
seniority, per sector, per rating class or per year. Sometimes the means of the LGD per rating class and per seniority are calculated. We refer to the latter prices as “matrix prices” sometimes
enabling a more accurate determination of LGD than the use of simple historical averages.7 The authors agree that the variance within the classes is high and there is a need for more sophisticated
models. Altman et al. (2003) suggest a first extension of the model by using a regression model with several variables as the average default rate per year or the GDP growth to estimate the average
recovery rate. The present paper makes several contributions. A dynamic approach for LGD is developed which allows for individual and time dependent LGDs. The model provides “point in time”
predictions for the next period. The unobservable part of systematic risk is modelled by a time specific random effect which is responsible for dependencies between the LGDs within a risk segment in
a fixed time period. Furthermore, the relationship between issuer specific rating developments and LGD can be modelled adequately over time. The rest of this chapter is organised as follows: Section
two states the statistical modelling of the LGD. Section three describes the dataset and the model estimations. Section four concludes and discusses possible fields for further research.
Gupton et al. (2000) Carty and Lieberman (1996), Carty et al. (1998), and Gupton et al. (2000) Araten et al. (2004), Gupton et al. (2000), and Schuermann (2004)
VII. Modelling Loss Given Default: A “Point in Time”-Approach
2. Statistical Modelling The dataset used in this chapter mainly uses bond data. Recovery rates will be calculated as market value of the bonds one month after default. The connection between LGD and
recovery rate can be shown as:
1 Rt i .
LGDt i
Here, LGDt(i) and Rt(i) denote the LGD and recovery rate of bond i that defaults in year t, i=1,},nt. The number of defaulted bonds in year t, t=1,},T is denoted with nt. The resulting recovery rates
and loss rates normally range between 0 and 1, although there are exceptions.8 Firstly, the LGDs will be transformed. The transformation used in this chapter is y t (i )
LGDt (i ) 1 LGDt (i )
Written in terms of the recovery rate, the following relation is obtained: y t (i )
1 Rt ( i ) Rt ( i )
Rt ( i ) 1 Rt ( i )
This logit transformation of the recovery rate is also proposed by Schönbucher (2003) and Düllmann and Trapp (2004).9 The LGD can be written as: LGDt (i )
exp( y t (i ) ) 1 exp( y t (i ) )
Analogous, to the model used in Basel II, the following approach for the transformed values yt(i) is specified (cf. Düllmann and Trapp, 2004): y t (i )
P V Z f t V 1 Z H t (i )
The random variables ft and Ht(i) are standard normally distributed. All random variables are assumed to be independent. The parameter V is non-negative and values of Z are restricted to the interval
Recovery rates greater than one are unusual. In these cases the bond is traded above par after the issuer defaults. These values are excluded from the dataset in the empirical research, see section
3.1. This transformation ensures a range between 0 and 1 of the estimated and predicted LGD.
Alfred Hamerle, Michael Knapp, Nicole Wildenauer
Other specifications are also discussed. Frye (2000a) suggests an approach according to (1) for the recovery rate itself. Pykthin (2003) assumes log-normally distributed recovery rates and chooses a
specification like (1) for log(Rt(i)). In the next step, model (1) is extended including firm and time specific observable risk factors. The dependence upon the observable risk factors is specified
by the following linear approach:
P t (i )
E 0 E ' x t 1(i ) J ' z t 1 ,
where i=1,},nt, t=1,},T. Here xt-1(i) characterises a vector of issuer and bond specific factors observed in previous periods. Examples for these issuer and bond specific variables are the issuer
rating of the previous year or the seniority. By zt-1 we denote a vector of macroeconomic variables representing potential systematic sources of risk. The macroeconomic variables are included in the
model with a time lag. Generally it can be assumed that regression equation (2) holds for a predefined risk segment, e.g. a sector. Regarding (1) and (2) it can be seen that the logit transformed
values of LGD are normally distributed with mean Pt(i) and variance V2. The random time effects ft cause a correlation of the transformed values of LGD yt(i) of different bonds defaulting in year t.
This correlation shows the influence of systematic sources of risk which are not explicitly included in the model or which affect LGD contemporarily. If fundamental factors are having an impact on
the LGD of all defaulted bonds – at least in one sector – a correlation of LGD is obtained as a result (as long as these systematic risk factors are not included in the model). It can be seen that
the factors have different effects in different segments, e.g. different time lags or sensitivities in different sectors. If in contrast, the relevant systematic risk factors are included in the
vector zt-1 and if no other risk factors influence LGD contemporarily, the impact of time effects should be reduced significantly. The unknown parameters in (1) and (2) are estimated by maximum
likelihood considering (1) - extended by (2) - as a panel regression model with random effects, (cf. Baltagi (1995), Chapter 3). Note that a bond specific random effect does not enter the model,
since defaulted bonds in different periods t and s (tzs) are different. Parameter estimates are obtained using PROC MIXED in SAS.10 For the covariance and correlation of the transformed values of LGD
in year t, the following relationships hold: Cov( y t (i ) , y t ( j ) ) V 2Z Corr ( y t (i ) , y t ( j ) ) Z , i z j.
Wolfinger et al. (1994)
VII. Modelling Loss Given Default: A “Point in Time”-Approach
3. Empirical Analysis
3.1. The Data
A dataset from Moody’s Default Risk Service is used for empirical analyses. It contains data from about 2,000 defaulted debt obligations, i.e. bonds, loans and preferred stock from 1983 to 2003. More
than 1,700 debt obligations are from American companies. The dataset includes information about the recovery rates of defaulted bonds. The LGD and the transformed LGD used in this analysis can be
calculated from the recovery rate as described in section two. When a borrower defaulted for the first time, this event was recorded and all default events after the first one are not considered in
this study.11 About 90% of these debt obligations are bonds. To ensure a homogenous dataset, only bonds are used in this study. For the same reason, only data from companies in the sector
“industry”12 are used in the final analysis. In this sector there are 84% of the bonds. In the sectors “financial service providers” and “sovereign/public utility” there are fewer defaulted borrowers
and therefore fewer defaulted bonds. After restricting the data to American bonds in the (aggregated) sector “industry”, there are 1,286 bonds in the dataset. Additionally, the dataset is limited to
bonds with a debt rating of “Ba3” or worse. The reason for this constraint was that the rating categories “A3” to “Ba2” have sparse observations in several years of the period 1983 to 2003. In
addition, several defaulted issuers hold five or more bonds. Some of these bonds have the same LGD at the time of default although they have distinct debt ratings or distinct seniorities. Other bonds
have a different LGD although they dispose of the same issuer and debt rating and the same seniority. These differences cannot be explained with the data at hand. Probably they 11
This constraint naturally only affects borrowers who defaulted several times. Furthermore, observations with LGD equal to zero and negative LGD are excluded from the analysis, because the transformed
LGD yt(i) cannot be calculated. If the recovery rate is greater than 1, i.e. if the market value of a bond one month after default is greater than the nominal value of the bond, the LGD becomes
negative. In the dataset this was the case in 0.5% of all observations. 12 The (aggregated) sector “industry” contains the sectors “industrial”, “transportation” and “other non-bank” of Moody’s
sectoral classification (with twelve sectors) in Moody’s Default Risk Service (DRS) database. For reason of completeness one has to know that there are two other aggregated sectors. On the one hand
there is the (aggregated) sector “financial service providers” containing the sectors “banking”, “finance”, “insurance”, “real estate finance”, “securities”, “structured finance” and “thrifts” and on
the other hand the (aggregated) sector “sovereign/public utility” containing the sectors “public utility” und “sovereign”. This aggregation was made as several sectors did not have enough
Alfred Hamerle, Michael Knapp, Nicole Wildenauer
can be traced back to issuer’s attributes not available in the dataset. For this reason, only issuers with four or fewer bonds remain in the dataset.13 Additionally, bonds of companies with obvious
cases of fraud like Enron or Worldcom were eliminated from the dataset to ensure a homogenous pool. Subsequently, the dataset is adjusted marginally. On the one hand, there is only one bond with a
rating “B2” defaulting in 1996. This bond has a very small LGD and is removed from the dataset because it could cause a biased estimation of random effects. On the other hand, four bonds having a
bond rating of “Ca” and “C” in the years 1991, 1992 and 1995 are eliminated from the dataset because they also have only one or two observations per year. Consequently, there are 952 bonds from 660
issuers remaining in the dataset. The random effect ft and the error term Ht(i) are assumed to be independent, with a standard normal distribution as described in section two. The transformed LGD yt
(i) is tested for an approximately normally distribution. As a result, a normal distribution of the data can be assumed. This distribution can also be confirmed when the distribution of yt(i) by year
is tested. In the analysis, the influence of issuer- and bond-specific variables xt-1(i) is examined as mentioned in section two. In the analyses the following variables are tested: x Issuer rating:
Moody’s estimated senior rating has 21 grades between “Aaa” (highest creditworthiness) and “C” (low creditworthiness).14 An aggregation of the rating categories is tested as well. A possible
classification would be the distinction between investment grade ratings (rating “Aaa” to “Baa3”) and speculative grade ratings (rating “Ba1” to “C”). Besides this relatively rough classification the
ratings are classified into the categories “Aaa” to “A3“, “Baa1” to “Baa3”, “Ba1” to “Ba3”, “B1” to “B3”, “Caa”15, “Ca” and “C”. The issuer rating has a time lag of one year in the analyses. x Debt
rating: Its classification is analogous to the issuer rating and has a time lag of one year. In addition to the classifications mentioned above, the ratings are classified into the categories “Ba3”
to “B3” and “Caa” to “C”. x Difference between issuer and debt rating: the fact that the issuer rating is one, two, three or more than three notches better than the debt rating is tested on its 13
In principle, only issuers with one bond could be left in the dataset if the effect of several bonds per issuer should be eliminated. As this restriction would lead to relatively few observations,
only issuers with five or more bonds are excluded. Hence the dataset is only diminished by 4%. 14 For withdrawn ratings, Moody’s uses a class “WR”. Because of the lagged consideration of rating there
are no bonds in the dataset with rating “WR” one year before default. 15 Moody’s used to name this rating class with “Caa” until 1997. Since 1998, this class has been separated into the three rating
classes “Caa1”, “Caa2” and “Caa3”. To use the data after 1998, the latter three ratings have been aggregated in one rating class which is named “Caa” in the following.
VII. Modelling Loss Given Default: A “Point in Time”-Approach
x x x
impact on the transformed LGD. Additionally, the impact of the fact that the issuer rating is better or worse than the debt rating is tested. The rating classification of an issuer and a bond can
differ if the bond finances a certain project which has a different risk and solvency appraisal compared to the issuer. Seniority: Starting with Moody’s classification, the classes “senior secured”,
“senior unsecured”, “senior subordinated”, “subordinated” and “junior subordinated” are distinguished.16 To distinct these seniority classes from the relative seniority, they are sometimes referred
to as absolute seniority. Relative seniority: According to Gupton and Stein (2005) the relative importance of the seniority is surveyed. This variable can be best explained by an example: If issuer 1
has two bonds –one is secured “subordinated” and the other “junior subordinated”– and issuer 2 has three bonds –one with seniority “senior secured”, another with “senior subordinated” and the third
bond with seniority “subordinated”– then the “subordinated” bond from issuer 1 is going to be served first and possibly has a lower LGD than the bond with seniority “subordinated” from issuer 2 which
is served after the two other bonds from issuer 2. Additional backing by a third party: If the bond is secured additionally by a third party beside the protection by the issuer emitting the bond,
then this information is also used in the analyses. Maturity (in years): The maturity of the bond is calculated as the difference of the maturity date and the default date. It indicates the remaining
time to maturity if the bond would not have defaulted. Volume of defaulted bond (in million dollars): The number of outstanding defaulted bonds times the nominal of this defaulted bond denotes the
volume of the defaulted bond. It quantifies the influence of the volume of one defaulted bond, not the influence of the volume of defaulted bonds in the market altogether. Certain companies like
insurances are not allowed to hold defaulted bonds. On the other hand, there are speculative investors who are interested in buying defaulted bonds. The higher the volume of the defaulted bond, the
higher the supply of defaulted bonds on the market. Therefore it can be more difficult for the defaulted issuers to find enough buyers or to claim high prices for the defaulted bond. Issuer domicile:
The country of the issuer is implicitly considered by the limitation on American data. This limitation can be important because different countries may be in different stages of the economic cycle in
the same year. If the data is not limited to a certain country, the macroeconomic condition of all countries included in the dataset should be considered. Additionally, different legal insolvency
procedures exist in different countries, so that a country’s legal procedure can influence the level of recovery rates and LGD.
In Figure 1 the average (realised) LGD for bonds in the (aggregated) sector “industry” per year in the period 1983 to 2003 are depicted:
For a consideration of the hierarchy of seniority classes see Schuermann (2004, p. 10).
Alfred Hamerle, Michael Knapp, Nicole Wildenauer
80% 70%
LGD (in percent)
60% 50% 40% 30% 20% 10% 0% 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003
Figure 1. Average LGD per year for bonds in the (aggregated) sector “industry”
As can be seen from Figure 1, the LGD is obviously underlying cyclical variability. This is why the cyclical variations of LGD are explained with the help of macroeconomic variables in the vector
zt-1. Therefore, a database with more than 60 potential macroeconomic variables is established. It contains interest rates, labour market data, business indicators like gross domestic product,
consumer price index or consumer sentiment index, inflation data, stock indices, the Leading Index etc.17 In addition, the average default rate per year of the bond market is taken into account. All
variables are included contemporarily and with a time lag of at least one year. The consideration of these variables should enable a “point in time” model. 3.2. Results
Two different model specifications for the (aggregated) sector “industry” are examined.18 In contrast to model (1), another (but equivalent) parameterisation is used. The models can be
instantaneously estimated with the procedure MIXED in the statistical program SAS. In the next step, the parameter estimates for V and Z can be determined from the estimates for b1 and b2. Table 1
summarises the results.
A list of potential macroeconomic factors can be found in the appendix. Additionally, models for all sectors are estimated containing dummy variables for the different sectors in addition to the
variables mentioned below. The use of a single sector leads to more homogenous data.
VII. Modelling Loss Given Default: A “Point in Time”-Approach
Model I:
y t (i )
E 0 E ' x t 1(i ) b1 f t b2 H t (i ) .
Model II:
y t (i )
E 0 E ' x t 1(i ) J ' z t 1 b1 f t b2 H t (i ) .
Table 1. Parameter estimates and p-values (in parentheses) for models I and II (only bonds of the (aggregated) sector “industry”) Model I
Model II
1.7336 ( E(f) then EAD(f) t E(f) if and only if LEQ(f) t 0 x If L(f) < E(f) then EAD(f) t E(f) if and only if LEQ(f) d 0. Therefore, without any additional hypothesis, for facilities that verify L(f)
z E(f), it has been shown that to estimate EAD(f), it is sufficient to focus on methods that estimate suitable conversion factors LEQ(f) based on the observations included in
Gregorio Moral
the reference data set, RDS(f) and afterwards to employ Equation (13) to assign individual EAD estimates. Finally, the simplest procedure to estimate a class EAD is to add the individual EAD
estimates for all the facilities included in the class. For example, for certain facility types, some banks assign EADs by using a previously estimated CCF(f), and then applying the formula: EAD( f )
CCF ( f ) L( f )
This method is sometimes called Usage at Default Method22. If e(f) 1, this case can be reduced to the general method, given in equation (13), by assigning a LEQ(f) factor given by: LEQ ( f )
CCF ( f ) L( f ) E ( f ) L( f ) E ( f )
CCF ( f ) e( f ) 1 e( f )
Conversely, if a LEQ(f) is available, from Equation (16), an expression for an equivalent CCF(f) can be found, given by: CCF ( f )
LEQ ( f ) 1 e( f ) e( f )
Therefore, the EAD estimation method based on LEQ(f) and the one based on CCF(f) are equivalent, with the exception of those facilities with e(f) = 1. In the following sections, several methods that
are normally used in practice by banks to estimate LEQ factors are presented from a unified perspective. This is used later to analyse the optimality of the different approaches. Additionally, the
formulae most used in practice are derived as special cases of the previous methods when a specific functional form has been assumed for LEQ(f). 5.3. Modelling Conversion Factors from the Reference
Data Set This section presents several methods for estimating conversion factors based on regression problems starting with the following basic equation: EAD( f ) E ( f )
LEQ ( f ) ( L( f ) E ( f ))
These methods try to explain the observed increases in the exposure between the reference date and the default date and they can be grouped into three approaches depending on how these increases are
measured: as a percentage of the available amount (focus on realised LEQ factors); as a percentage of the observed limit (fo22
This method is called Momentum Method in CEBS Guidelines (2006, §§ 253 and 254).
X. EAD Estimates for Facilities with Explicit Limits
cus on percent increase in usage); or finally in absolute value (focus on increase in exposure). Model I. Focus on realised LEQ factors Dividing (18) by L(f) E(f), it is obtained: ead ( f ) e( f ) 1
e( f )
EAD( f ) E ( f ) ( L( f ) E ( f ))
LEQ ( f )
In this approach, the rationale is to determine a function of the risk drivers LEQ(RD) which “explains” the LEQi factors associated with RDS(f), LEQi = (EADi Ei)/(Li Ei), in terms of LEQ(RDi). This
can be made starting with an expression for the error associated with LEQi LEQ(RDi) and solving a minimisation problem. In practice, a quadratic and symmetric error function is almost universally
used. As a consequence of this choice, the minimisation problem to solve is given by (Problem P.I): ° Min® LEQ ° ¯
° Min® °¿ LEQ ° ¯ ½
° ¦ LEQi LEQRDi 2 ¾ i
¦ i
§ EADi Ei · ¨ ¸ ¨ L E LEQRDi ¸ i i © ¹
° ¾ °¿
Or: LEQ ( f )
° Min ® LEQ ° ¯
¦ (L E ) i
½° EADi Ei LEQ ( RDi ) ( Li Ei ) 2 ¾ °¿
Model II. Focus on the increase of the exposure as a percentage of the observed limit (focus on percent increase in usage). Dividing the basic Equation (18) by L(f), it is obtained: EAD( f ) E ( f )
L( f )
LEQ ( f )
L( f ) E ( f ) L( f )
Therefore, using this approach, the observable amounts to be explained are (EADi – Ei) / Li and the explanatory values are LEQ(RDi) (Li – Ei) / Li. Following the same reasoning as in the previous
approach, the minimisation problem to solve is given by (Problem P.II): ° Min® LEQ °¯
¦ i
§ EADi Ei L Ei · ¨ ) ¸¸ LEQ ( RDi ) ( i ¨ Li Li ¹ ©
° ¾ °¿
Or: LEQ( f )
° 1 ½° Min®¦ 2 EADi Ei LEQ ( RDi ) ( Li Ei ) 2 ¾ LEQ ° i L °¿ ¯ i
Gregorio Moral
Model III. Focus on increases in the exposure Directly from the basic equation, it is obtained:
EAD ( f ) E ( f )
LEQ ( f ) L ( f ) E ( f )
In this case, the amounts to explain are EADi Ei and the explanatory variable is LEQ(RDi) (Li – Ei). As in the other cases, the associated minimization problem is given by (Problem P.III): ° Min ®
LEQ ° ¯
¦ EAD
LEQ ( f )
½° E i LEQ ( RDi ) Li E i 2 ¾ °¿
From equations (21), (24) and (26), these problems can be reduced to a more general (Problem P.IV): ° Min ® LEQ °¯
¦ i
§ EADi Ei L Ei ·¸ ¨ LEQ( RDi ) i ¨ Zi Zi ¸¹ ©
° ¾ °¿
where Zi stands for Li – Ei in Model I, Li in Model II, and 1 in Model III. If F* denotes the empirical distribution of (EAD E) / Z associated with the observations included in RDS(f), the Problem
P.IV can be expressed as: EAD E L E · 2 ½° LEQ( f ) Min°® E §¨ LEQ( RD) ¸ ¾ LEQ F * °¯
In the most general case, assuming that (L E) / Z is constant for observations in RDS(f), the solution to (28) is given by23: LEQ ( f )
Z f § EAD E · E ¨ ¸ RD f F* © Z L f E f ¹
As a consequence, the practical problem is to find out methods to approximate these conditional expectations. If a parametric form for LEQ is assumed, the problem becomes: LEQ ( aˆ , bˆ,...),
LEQ ( f ) {aˆ , bˆ,...}
° § EAD E L E · 2 Min ® E ¨ LEQ (a, b,...) ¸ {a ,b ,...} F * © Z Z ¹ °¯
See Appendix B.
½° ¾ °¿
X. EAD Estimates for Facilities with Explicit Limits
If the parametric functional form is linear in the parameters, the problem becomes a linear regression problem. In summary, traditional methods can be classified as regression models that focus on
the minimization of quadratic errors in the forecasts of: LEQi factors; EADi in percentage of the limit; or EADi. These methods produce different EAD(f) estimates based on LEQ(f) estimates
proportional to conditional expectations. At first glance, the approach that focuses directly on LEQ factors (Model I) seems the most natural, the method that focuses on percent increases in usage
(Model II) seems more stable than the previous one and, as is shown in detail in Section 6, the approach based on EAD increases (Model III), could present advantages when the estimates are used in
regulatory capital computations because of the link between capital requirements and EAD. 5.4. LEQ = Constant
Problem P.I: The Sample Mean In practice24, banks frequently use, as an estimator for LEQ(f) at t, the sample mean of realised LEQi, restricted to those observations i={g,t} similar to {f,t,RD}.
Assuming that the conversion factor is a constant for observations similar to {f, t}, LEQ(f) = LEQ, and solving the Problem P.I the following is obtained: LEQ
° Min ® LEQ R ° ¯
¦ i
§ EADi Ei · LEQ ¸ ¨ © ( Li Ei ) ¹
° ¾ °¿
1 n
1 n
EADi Ei
¦ (L E ) i
¦ LEQ
In other cases, banks use a sample weighted mean that tries to account for a possible relationship between size of the exposures (or limits) and LEQ. If in Problem P.I a weight wi is introduced, and
it is assumed that LEQ is constant for observations similar to {f, t}, then: LEQ
° Min ® LEQ R ° ¯
§ EADi Ei · LEQ ¸ ( Li Ei ) ¹
¦ w ¨© i
° ¾ °¿
¦ w LEQ ¦w i
When the reason for incorporating the weighting is to take into account a LEQ risk driver, this approach is inconsistent. The reason for this is that the weighted average is the optimum solution only
after assuming that LEQ = constant, i.e. no risk drivers are considered.
At least this is the case in models applied by some Spanish banks at present (2006).
Gregorio Moral
Problem P.II: The Regression without Constant Another method widely used by banks is to use the regression estimator for the slope of the regression line based on Model II, assuming that LEQ is a
constant. Under these conditions the expression for the regression estimator is given by: LEQ
° Min ® LEQ R °¯
§ EADi Ei § L Ei ¨ LEQ ¨¨ i ¨ Li © Li ©
EADi Ei Li Ei Li 2
° ¾ °¿
¦ ead e 1 e ¦ 1 e i
§ Li Ei · ¸¸ ¨¨ © Li ¹
·· ¸¸ ¸¸ ¹¹
Problem P.III: Sample weighted mean If in P.III it is assumed that LEQ = constant it can be expressed as: LEQ ( f )
° Min ® LEQ R °¯
2 · ½° ¾ °¿
EAD E ¦i Li Ei 2 ¨¨© Li i Ei i LEQ ¸¸¹ §
And the optimum is given by: LEQ
¦ w LEQ , with w ¦w i
Li Ei 2
Therefore, using this approach, a weighted mean naturally arises. However, it is worth noting that these weights (Li Ei)2 are different from those currently proposed by some banks (based on Li or
Ei). 5.5. Usage at Default Method with CCF = Constant (Simplified Momentum Method): This method is sometimes used by banks that try to avoid the explicit use of realised negative LEQ factors, or for
facilities for which the current usage has no predictive power on EADs. It estimates the EAD for a non-defaulted facility, EAD(f), by using Equation (15) directly and a rough CCF estimate, for
example, the sample mean of the realised CCFs computed from a set of defaulted facilities C. EAD ( f )
CCF (C ) L ( f )
From Equation (16) and assuming that CCF = constant, a specific functional form for LEQ(e(f)) is founded, given by:
X. EAD Estimates for Facilities with Explicit Limits
LEQ ( f )
CCF L( f ) E ( f ) L( f ) E ( f )
CCF e( f ) 1 e( f )
In general, two facilities with the same estimated CCF and with different values for current percent usage, e(t), will have different LEQ estimates following the former Equation (37). The main
drawback with the procedure based on Equation (36) is that experience shows that, in general, drawn and undrawn limits have strong explanatory power for the EAD. For this reason, this method (with
CCF=constant) does not seem to meet the requirement of using all the relevant information25 (because it does not take into account the drawn and undrawn amounts as explanatory variables in the EAD
estimating procedure) for most of the types of facilities that arise in practice.
6. How to Assess the Optimality of the Estimates To assess the optimality of the different CF estimates associated with a reference data set and a portfolio, it is necessary to be more precise about
some elements in the basic problem. The first element requiring clarification is the type of estimates according to the role of macroeconomic risk drivers in the estimation method. The second element
is how to measure the errors associated with the estimates and to motivate that particular choice. This can be done by introducing a loss function that specifies how the differences between the
estimated values for the EAD and the actual values are penalised. 6.1. Type of Estimates Focusing on the use of the macroeconomic risk drivers, the following types of estimates can be distinguished:
x Point in Time estimates (PIT): these estimates are conditional on certain values of the macroeconomic risk drivers, for example, values close to the current ones. This allows the estimates to be
affected by current economic conditions and to vary over the economic cycle. In theory, this is a good property for the internal estimates banks need for pricing and other management purposes. The
main problem with PIT estimates is that they are based on less data than longrun estimates (LR estimates, defined below) and therefore, in practice, they are less stable than LR estimates and harder
to estimate. 25
§ 476. “The criteria by which estimates of EAD are derived must be plausible and intuitive, and represent what the bank believes to be the material drivers of EAD. The choices must be supported by
credible internal analysis by the bank. […] A bank must use all relevant and material information in its derivation of EAD estimates. […]”, BCBS (2004).
Gregorio Moral
x Long-run estimates (LR): These are unconditional macroeconomic estimates, i.e. the macroeconomic risk drivers are ignored. The main advantage is that they are more robust and stable than PIT
estimates. These LR estimates are required in AIRB approaches26, except for those portfolios in which there is evidence of negative dependence between default rates and LEQ factors. Currently, these
LR estimates are also used by banks for internal purposes. x Downturn estimates (DT): these are specific PIT estimates based on macroeconomic scenarios (downturn conditions) in which the default
rates for the portfolio are deemed to be especially high. When there is evidence of the existence of adverse dependencies between default rates and conversion factors, this could be the type of
estimates that, in theory, should be used in IRB approaches27. In practice, the use of DT estimates is difficult because, in addition to the difficulties associated with PIT estimates, it is
necessary to identify downturn conditions and to have sufficient observations in the RDS restricted to these scenarios. In the following, it is assumed that the focus is on long run estimates. 6.2. A
Suitable Class of Loss Functions The objective of this section is to determine a type of loss function that meets the basic requirements for the EAD estimation problem when it is necessary to obtain
EAD estimates adequate for IRB approaches. Therefore, it makes sense to specify the loss associated with the difference between the estimated value and the real one in terms of the error in the
minimum regulatory capital (computed as the difference between the capital requirements under both values). By using the regulatory formula, at the level of the facility, the loss associated with the
difference between the capital requirement under the estimated value of the exposure K(EAD(f)) and the real one K(EAD), could be expressed as follows28: L'K f
LK EAD K ( EAD f )
LI PD LGD EAD EAD f 26
LI PD LGD 'EAD f
§ 475. “Advanced approach banks must assign an estimate of EAD for each facility. It must be an estimate of the long-run default-weighted average EAD for similar facilities and borrowers over a
sufficiently long period of time, […] If a positive correlation can reasonably be expected between the default frequency and the magnitude of EAD, the EAD estimate must incorporate a larger margin of
conservatism. Moreover, for exposures for which EAD estimates are volatile over the economic cycle, the bank must use EAD estimates that are appropriate for an economic downturn, if these are more
conservative than the long-run average.”, BCBS (2004). 27 This can be interpreted in the light of the clarification of the requirements on LGD estimates in Paragraph 468 of the Revised Framework,
BCBS (2005). 28 In the following it is assumed that a PD = PD(f) and an LGD = LGD(f) have been estimated previously.
X. EAD Estimates for Facilities with Explicit Limits
Additionally, at least from a regulatory point of view, underestimating the capital requirement creates more problems than overestimating such a figure. For this reason, it is appropriate to use
asymmetric loss functions that penalises more an underestimation of the capital requirement than an overestimation of the same amount. The simplest family of such functions is given by (39), where b
> a: a 'K L'K ® ¯b 'K
iff 'K t 0 iff 'K 0
These loss functions quantify the level of conservatism. The larger b/a (relative loss associated with an underestimation of K), the larger is the level of conservatism imposed. For example, if a = 1
and b = 2, the loss associated with an underestimation of the capital requirement ('K < 0) is twice the loss for an overestimation of the same amount.29 The graphic of the loss function is presented
in Figure 6. €
L=2 'K L='K
Figure 6. Linear asymmetric loss function
By using this specific type of loss function (39), and assuming that LGD t 0, a simpler expression for the error in K in terms of the error in EAD is obtained: L'K f I PD LGD L'EAD f
The loss associated with an error in the capital requirement is proportional to the loss associated with the error in terms of exposure and the units of the loss are the same as those of the exposure
(€). 6.3. The Objective Function Once the loss function has been determined, it is necessary to find the most natural objective function for the estimation problem.
To the best of my knowledge, the first application of such a loss function in the credit risk context was proposed in Moral (1996). In that paper the loss function is used to determine the optimal
level of provisioning as a quantile of the portfolio loss distribution.
Gregorio Moral
6.3.1. Minimization at Facility Level of the Expectation in the Capital Requirement Error If the expected error in the minimum capital requirement at the level of exposure is used as an objective
function, by using Equation (40) the following is obtained: Min^E >L'K f @` I PD LGD Min^E >L'EAD f @` LEQ
This means that Problem P.III in Section 5.3 arises with a different loss function:
LEQ ¯ F *
Min ® E L EAD E LEQ ( RD ) L E ¾
or in terms of the sample ° Min ® LEQ ° ¯
¦ LEAD E LEQ ( RD ) L E ¾° i
and a solution is given30 by: LEQ ( f )
Q EAD E , F*
b 1 , a b RD f L f E f
where Q(x, b/(a+b)) stands for a quantile of the distribution F(x) such that31 F(Q) = b/(a+b). When a = b, the loss function (39) is symmetric and the former quantile is the median and for values of
b/a > 1 the associated quantile is placed to the right of the median and, therefore, a more conservative estimate of LEQ(f) is obtained. It is interesting to note that (44), with b > a, penalises
uncertainty32. An important consequence of using the former loss function L is that the problems M.I and M.II described in equations (45) and (46) are equivalent33. Problem M.I: ° ½° Min® LEADi Ei
LEQ( RDi ) Li Ei ¾ LEQ ° °¿ ¯ i Subject to : 0 d LEQ ( RD ) d 1
See Appendix B. In practice, it is necessary to be more precise when defining a q-quantile because the distribution F(x) is discrete. A common definition is: a “q-quantile” of F(x) is a real number,
Q(x,q), that satisfies P[X Q(x,q)] q and P[X Q(x,q)] 1q. In general, with this definition there is more than a q-quantile. 32 § 475. “Advanced approach banks must assign an estimate of EAD
for each facility. It must be an estimate […] with a margin of conservatism appropriate to the likely range of errors in the estimate.”, BCBS (2004). 33 The proof follows from the proposition in
Appendix A. 31
X. EAD Estimates for Facilities with Explicit Limits
Problem M.II: °
¦ LMin>Max>EAD , E @, L @ E LEQ( RD ) L E ¾°¿ ¯
Min ® LEQ °
Subject to : 0 d LEQ ( RD ) d 1
This means that an estimator meeting the constraint 0 d LEQ(f) d 1 that is optimal when using the original data is also optimal when using data censored to show realised LEQ factors between zero and
one. 6.3.2. Minimization of the Error in the Capital Requirement at Facility Level for Regulatory Classes Sometimes, in spite of the existence of internal estimates for LEQ factors at facility level,
it could be necessary to associate a common LEQ with all the facilities included in a class comprising facilities with different values for the internal risk drivers. This could occur due to
difficulties in demonstrating with the available data, that discrimination at an internal level of granularity is justified. In this case, for regulatory use, it is necessary to adopt a less granular
structure for the risk drivers than the existing internal one. Therefore, the problem of finding an optimal estimator for regulatory use can be solved by using the regulatory structure for the risk
drivers. In other words, the procedure is to compute new estimates using the same method and a less granular risk driver structure. In general, the new estimator is not a simple or weighted average
of the former more granular estimates.
7. Example 1 This example34 illustrates the pros and cons of using the methods explained in the former sections for estimating LEQ factors and EADs. The focus is on long run estimates for the EAD of
a facility f in normal status by using as basic risk drivers the current limit L(f) and exposure E(f). 7.1. RDS
7.1.1. Characteristics The main characteristics of the reference data set, used in this example, are described below:
Although this example could be representative for certain SME portfolios comprising credit lines, it is not a portfolio taken from a bank.
Gregorio Moral
x Source of the RDS: the observations were obtained from a set of defaulted facilities from a portfolio of SMEs x Observation period: 5 years x Product types: credit lines with a committed limit of
credit, that is known for the borrower, given by L(t) x Exclusions: It does not include all the internal defaults which took place during the observation period because several filters had been
applied previously. As a minimum, the following facilities were excluded from the data set: defaulted facilities with L(td12) < E(td12) and those with less than twelve monthly observations before the
default date x Number of observations, Oi: #RDS = 417·12 = 5004 observations, which are associated with 417 defaulted facilities and dates 1, 2,…,12 months before the default date x Structure of the
reference data set: the structure proposed in (8) but, for simplicity, only a basic set of risk drivers is considered: Oi
^ i, f , tr , RDi ^L(tr ), E (tr ), S (tr )`, EAD, td , tr`
x Status of a facility at the reference date, S(tr): there is no information about the status of the facilities. The bank has implemented a warning system that classifies the exposures on four broad
classes: N = normal monitoring and controls; V = under close monitoring for drawdowns; I = current exposure greater than the limit and implies tight controls making additional drawdowns impossible
without a previous approval; D = defaulted, no additional drawdowns are possible, but sometimes there are increases in the exposures due to the payment of interest and fees. However, in this example,
in order to take into account the status, S(tr), as a risk driver, observations with S(tr) = N are approximated using the following procedure: First, all the observations with L(tr) < E(tr) are
marked as in a non-normal status Second, after analysing the empirical distributions of realised LEQ factors (and other information) it was decided to consider all the observations with td tr less
than five months as if they were in a non-normal status and to eliminate all the observations with td tr = 7 months (see next section). In practice, the use of the values of the variable status is
necessary, because the early identification of problematic borrowers and the subsequent changes in the availability of access to the nominal limit have important consequences in the observed EADs.
For this reason, observations up to five months before default for which E(tr) d L(tr) are considered in normal status. In this case, the number of observations with S(tr) = N is: #RDS(N) = 2,919.
X. EAD Estimates for Facilities with Explicit Limits
7.1.2. Empirical distributions of certain statistics Distribution of realised LEQ factors Figure 7 summarises the empirical variability of the realised LEQ factors associated with 2,442 observations
for which it is possible to compute this statistic35. #Oi
0 -4
LEQi Figure 7. Histogram of realised LEQ factors
It shows that the distribution is asymmetric with a high number of observations outside of [0,1] which is the natural range for LEQ factors. The sample mean is about -525 due to the existence of many
observations with large negative values and it highlights one of the main issues when a sample mean is used as the estimator. The median is 0.97 and this value, in contrast with the former sample
mean value, highlights the advantages of using statistics less dependent on the extreme values of the distribution for estimation purposes. Joint distribution of realised LEQ factors and percent
usage at the reference date To reduce the variability in the observed realised LEQ factors, it is necessary to consider a variable that exhibits explanatory power, at least, for the range of values
of realised LEQ factors. For example, the joint empirical distribution presented in Figure 8 shows that the variable percent usage at the reference date is important for limiting the variability of
realised LEQ factors. Black points at the top of Figure 8 represent the observations in the space {1e(tr), LEQi}.
Observations associated with, the horizon value, td tr = 7 were removed from the RDS as it is explained later on.
Gregorio Moral
Figure 8. Joint distribution of LEQi and percent usage at the reference date tr
Influence of td tr in the basic statistics Figure 9 presents the empirical distributions of realised LEQs associated with a fixed distance in months between the default and reference dates for td tr
= 1,…,12. 1
0 -10
2 -4
Figure 9. Empirical distributions of LEQi conditional on different td tr values
The distributions associated with td tr = 1, 2, 3, 4 are very different from the others. The distribution conditional on td tr = 7 months is totally anomalous and the reason for that is an error in
the processes that generated these data.
X. EAD Estimates for Facilities with Explicit Limits
Figure 10 presents the empirical distributions of the percent increase in usage between the reference and the default dates, eadi e(tr), associated with a fixed distance in months between the default
and reference dates for td tr = 1,…,12. Again, the differences among the distributions conditional on reference dates near to default and far from default are obvious and the existence of anomalous
values for the case td tr = 7 is evident. 1
7 0.6
4 0 -1
Figure 10. Empirical distributions of percent increase in usage since the reference date
Finally, Figure 11 shows the empirical distributions of the increase in exposure, EADi E(tr), between the reference and the default dates.
0 -40000
Figure 11. Empirical distributions of the increase in exposure from tr to td
Gregorio Moral
7.2. Estimation Procedures
7.2.1. Model II Original Data and Fixed Time Horizon Some banks use Model II assuming a constant LEQ, and a fixed time horizon approach, T = 12 months. This means that they adjust a linear regression
model without an independent term, given by: EADi E td 12 Ltd 12 Ltd 12
§ E td 12 · k ¨¨1 ¸¸ © Ltd 12 ¹
Therefore, in these cases, the bank’s approach focuses on the minimisation of the quadratic error in the increase of the exposure expressed in percentage terms of the limit. The results with this
method are summarised below: By using the original data, the estimated LEQ factor is LEQ= 0.637 and the adjusted R2 is 0.13. Therefore, the final estimate for the EAD of a facility, f, in normal
status is given by the formula: EAD f E t 0.637 Lt E t
Figure 12 presents, for each observation in the RDS(td12), the values of the pairs {1 e(td12), eadi e(td12)}. The upper shadow zone in Figures 12, 13, 14 are associated with points with LEQi > 1. 4
-1 0
Figure 12. Percent increase in usage from tr, to td and percent usage at the reference date
From analysis of the distribution of these points and the results of the regression it is clear that, at least:
X. EAD Estimates for Facilities with Explicit Limits
1. It is necessary to carry out an explicit RDS cleaning process before the estimation phase. For example, it is necessary to analyse the observations associated with the points above the line y = x
and afterwards to make decisions about which observations have to be removed from the RDS. 2. The degree of adjustment is very low. Most of the points (those with 1 e(tr) closer to zero) have little
influence on the result of the regression model because of the constraint that there is no independent term. 3. In order to assess the reliability of the estimated LEQ it is necessary to identify
outliers and influential observations and to perform stability tests. In this case, given the functional form of the model, y = k x, and the low number of points associated with large values of 1 e
(tr), these observations are influential points.36 It is easy to understand that changes in these points affect the result of the regression and therefore the LEQ estimate. 4. In order to get more
stable results, it is necessary to get more observations (for example by using a variable time horizon approach). Censored Data and Fixed Time Horizon Sometimes banks use censored data to force
realised LEQ factors to satisfy the constraint 0 d LEQi d 1. Using censored data, the estimated LEQ factor is 0.7 and the R2 increase to 0.75. In this case, all the points are in the white triangular
region of Figure 13 and it is clear that the existence of very influential points (those with large values of 1 e(r)) introduces instability. Figure 13 presents the censored observations and the
regression line. 1
Fig. 13. Linear regression in Model II and censored data
The EAD estimator is in this case: EAD f E t 0.7 Lt E t 36
Influential points have a significant impact on the slope of the regression line which, in Model II, is precisely the LEQ estimate.
Gregorio Moral
Original Data and Variable Time Approach By using a variable time approach, based on observations with tr = td {12, 11, 10, 9, 8}, the estimated LEQ factor is LEQ = 0.49 and the R2 is 0.06. Figure 14
presents, for each observation in the RDS, the pairs {1 e(tr), eadi e(tr)} and the regression line associated with this extended data set and Model II. 4
-1 0
Fig. 14. Linear regression in Model II and variable time approach
In Model II, the use of a time variable approach does not increase the degree of adjustment (which is very low due to the functional form assumed in the model), but increases the stability of the
results. The EAD estimator in this case is: EAD f E t 0.49 Lt E t
7.2.2. The Sample Mean and the Conditional Sample Mean If Model I is used and a constant LEQ for facilities “similar” to f is assumed, an estimate for EAD(f) is obtained by computing the sample mean
of the realised LEQ conditional on observations in RDS(f) as the LEQ(f) estimate and then applying Equation (13). With regard to RDS(f), in this example, two possibilities are analysed: x RDS(f) =
RDS or equivalently to use a global sample mean as estimator. x RDS(f) = {Oi such as percent usage ei is similar to e(f)} or equivalently to use as estimator a function based on different local means
depending on e(f). Case RDS(f) = RDS, Use of a Global Sample Mean If the sample mean of all the realised LEQ factors associated with the observations in the RDS is computed, the result is a
nonsensical figure:
X. EAD Estimates for Facilities with Explicit Limits
LEQ ( f )
1 n
¦ LEQ
The problems that arise when using this global average are due to: 1. Instability of certain realised LEQ factors: when 1 E(f)/L(f) is small the realised LEQs are not informative. 2. Very high values
for certain observations, in some cases several times L(tr) E(tr). The treatment of these observations needs a case by case analysis. 3. Asymmetries in the behaviour of positive and negative realised
LEQ factors. 4. Evidence of a non-constant LEQi sample mean depending on the values of 1E(f)/L(f). Figure 15 represents the distribution of the realised LEQ factors and undrawn amounts as a
percentage of the limit, 1 E(f)/L(f) and it can help to increase understanding of the main problems associated with this method:
-20 -40 -60 0
Figure 15. Realised LEQ factors and percent usage at the reference date
Figure 16 focuses on observations associated with values of realised LEQ factors less than 2. It is clear that there are observation realised LEQ factors greater than one, (upper shadow zones in
Figures 16 and 17) across the range of percent usage values, although such observations are much more common when the percent usage values are large (small values of 1 e(tr)).
Gregorio Moral 2 1
0 -1 -2 -3 -4
Figure 16. Realised LEQ factors smaller than two
For these reasons, before using this procedure, it is necessary to make some decisions after analysing the observations in the RDS, for example: x To eliminate from the RDS those anomalous
observations with large LEQi factors x To censor other observations associated with LEQi factors greater than one x To remove observations with very low values of E(f) L(f) from the RDS, because
their LEQi values are not informative. In this example, observations with 1 E(tr)/L(tr) d 0.1 and those with LEQi t 2 were removed from the reference data set. After these modifications of the RDS,
the new LEQi sample mean is: LEQ ( f )
1 m
¦ LEQ
It is clear that this global estimate of 8% is very low for most of the facilities in the portfolio because of the weight in the global average of the negative realised LEQ factors associated with
observations with low values of 1 e(f). An improvement to the former estimate is to eliminate outliers, i.e observations associated with very large (in absolute terms) realised LEQ factors. If
observations with LEQ factors below the 10th percentile and above the 90th are considered outliers, the average restricted to the RDS without outliers is about 33% and this value is stable when the
former percentiles are changed. LEQ ( f )
1 r
¦ LEQ
However, it is clear that local averages are very different and therefore this global estimate of 33% for the LEQ is not adequate. For this reason, it is necessary to consider different estimates for
the LEQ factor for different values of 1 E(f)/L(f).
X. EAD Estimates for Facilities with Explicit Limits
Case RDS(f) = {Oi such as percent usage ei is similar to e(f)} In this case, the RDS(f) comprises all the observations Oi with 1 e(tr) [1 e(f) 0.2, 1 e(f) + 0.2] and the average of the realised LEQ
factors restricted to observations in the RDS(f) is used as the estimate of LEQ(f). To select a functional form for LEQ(f), first the estimated values for different 1 e(tr) values are computed and
second, a regression model is adjusted using 1 e(tr) as the explanatory variable, and the local sample mean as the dependent variable. After rejecting different models and using intervals of width
0.4 an expression for the “local”37 sample mean of LEQ factors based on a b 1 etr is obtained as: LEQ f 0.82 1.49 1 E f / L f
with an adjusted R2 equal to 0.94. Figure 17 represents the realised LEQ factors, the local averages and the adjusted function (with the constraint LEQ(f) 0). 1.5
Figure 17. Approximation for E[LEQ|1 e(tr)] and the adjusted regression curve
Therefore an estimator for EAD(f) of a facility f in normal status is given by:
EAD f E f Max 0, 0.82 1.49 1 E f / L f L f E f
7.2.3. The Median and the Conditional Quantiles The rationale under Model III is to explain directly the increase in exposure from the reference date to the default date. Therefore, it is necessary
to explain EADi E(tr) in terms of LEQ(RDi) (L(tr) E(tr)). For simplicity, it is assumed that RDi={S(tr), L(tr) E(tr)} and the focus is on observations with status S(tr) = 37
The “local” condition is to consider only those observations in an interval centred on 1E(f)/L(f) and with length 0.4.
Gregorio Moral
”normal” and the only quantitative variable that is taken into account is the current undrawn amount L(f) E(f). Moreover, the loss function proposed in (39) is used to determine the optimal estimates
and therefore as shown in Section 6.3.1, the solution is to approximate the quantile Q[b/(a+b)] of the distribution of EADi E(tr) conditional on those observations which satisfy L(tr) E(tr) = L(f) E
(f). To approximate that quantile for each value of EAD(f) E(f), the process is similar to the one explained in the previous section. First, RDS(f) is defined as all the observations such as (L(tr) E
(tr)) [(L(f) E(f)) 0.8, (L(f) E(f)) 1.2]. Second, for each value of L(tr) E(tr) the optimal quantile is computed. Third, a linear regression model that uses L(tr) E(tr) as the explanatory
variable and the optimal quantile as the dependent variable is adjusted and, finally, the estimator for LEQ(f) is obtained by using formula (44). Figure 18 represents, for each observation in the RDS
with tr = td {12, 11, 10, 9, 8}, the pairs {L(tr) – E(tr), EADi – E(tr)} in the range of values of L(tr) E(tr) given by [0, 17000]€, for which it is considered there exists sufficient number of
observations. The shadow zones in Figures 18 and 19 are defined as EADi L(tr). € 20000
-30000 0
Figure 18. Observations in Model III
The results of the regression model for the local medians (case a = b) and for the 66.6th percentile (case 2 a = b) produces the following results: Median>EAD f E f @ 86.8 0.76 L f E f Quantile>
EAD f E f ,0.666@ 337.8 0.92 L f E f
With adjusted R2 equal to 0.95 and 0.99 respectively. Therefore, the associated LEQ estimates, obtained dividing (57) by L(f) E(f), are almost constant (close to 0.76 and 0.92 respectively) and have
values larger than the previous estimates.
X. EAD Estimates for Facilities with Explicit Limits
Figure 19 represents the local medians (Q50% line) and local 66.6 percentiles (Q66% line) obtained from the original points, the regression lines associated with (57) (dotted line for the adjusted
66.6 percentiles, thick line for the adjusted local medians). €
Figure 19. Quantiles conditional on the undrawn amount and adjusted EAD E(tr) values
8. Summary and Conclusions The following points summarise the current practice on CF and EAD estimates and highlight some problematic aspects: x The CF and EAD estimators applied by banks can be
derived from special cases of regression problems, and therefore these estimators are based on conditional expectations x Implicitly, the use of these estimators assumes the minimisation of
prediction errors by using a quadratic and symmetric loss function that is neither directly correlated with the errors in terms of minimum capital requirements nor penalises uncertainty. The way in
which these errors are measured is crucial because they are very large x In most of the cases, the EAD estimates are based on the unrealistic assumption of a constant LEQ factor mean x Frequently,
the basic statistics for the estimation process are censored to obtain realised LEQ factors between zero and one x Banks frequently use “Cohort Approaches” or “Fixed Time Horizon Approaches” to
select the observations included in the estimation process. These approaches do not take into account all the relevant information because they only focus on a conventional reference date for each
defaulted facility x With regard to risk drivers, the focus is on the rating at the reference date. Other approaches and some comments on different aspects:
Gregorio Moral
x For regulatory use, it seems appropriate for the estimators to be solutions to optimisation problems that use a loss function directly related with errors in terms of capital requirements x For
example, a logical choice is to use a simple linear asymmetric loss function applied at the level of facility. This loss function enables banks or supervisors to quantify the level of conservatism
implicit in the estimates x Using this loss function, the derived estimators are based on conditional quantiles (for example, the median for internal purposes and a more conservative quantile for
regulatory use) x If the estimates are based on sample means LEQ factors, as a minimum, should depend on the level of the existing availability of additional drawdowns: LEQ(1e(tr)) x The common
practice of censoring the realised LEQ factors to [0, 1], is not justified and, in general, it is not possible to conclude ex ante if the associated LEQ estimates are biased in a conservative manner
x However, under certain hypotheses, the use of censored data does not change the optimal estimator for LEQ x The estimates should be based on observations at all the relevant reference dates for
defaulted facilities, “Variable Time Approach” x With regard to risk drivers, if there is a warning system for the portfolio, it is important to focus on the status of the facility at the reference
date rather than on the rating x The example presented here suggests that: Estimates based on sample means are less conservative than those based on conditional quantiles above the median The CF
estimates obtained by using these conditional quantiles, are so large that the use of downturn estimates in this case might not be a priority.
References Araten M, Jacobs M (2001), Loan Equivalents for Revolving Credits and Advised Lines, The RMA Journal, pp. 34-39. Basel Committee on Banking Supervision (2005), Guidance on Paragraph 468 of
the Framework Document. Basel Committee on Banking Supervision (2004), International Convergence of Capital Measurement and Capital Standards, a Revised Framework. Basel Committee on Banking
Supervision (2005), Studies on the Validation of Internal Rating Systems, Working Paper No. 14. CEBS (2006), Guidelines on the implementation, validation and assessment of Advanced Measurement (AMA)
and Internal Ratings Based (IRB) Approaches, CP 10 revised. Lev B, Rayan S (2004) Accounting for commercial loan commitments. Moral G (1996), Pérdida latente, incertidumbre y provisión óptima, Banco
de España, Boletín de la Inspección de ECA.
X. EAD Estimates for Facilities with Explicit Limits
Office of the Comptroller of the Currency (OCC), Treasury; Board of Governors of the Federal Reserve System (Board); Federal Deposit Insurance Corporation (FDIC); and Office of Thrift Supervision
(OTS), Treasury. Federal (2003), Draft Supervisory Guidance on Internal Ratings-Based Systems for Corporate Credit (corporate IRB guidance). Register / Vol. 68, No. 149. Office of the Comptroller of
the Currency (OCC), Treasury; Board of Governors of the Federal Reserve System (Board); Federal Deposit Insurance Corporation (FDIC); and Office of Thrift Supervision (OTS), (2004), Proposed
supervisory guidance for banks, savings associations, and bank holding companies (banking organizations) that would use the internal-ratings-based (IRB) approach to determine their regulatory capital
requirements for retail credit exposures. Treasury. Federal Register / Vol. 69, No. 207. Pratt J, Raiffa H, Schlaifer R (1995) Introduction to Statistical Decision Theory. The MIT Press. Sufi A.
(2005) Bank Lines of Credit in Corporate Finance: An Empirical Analysis University of Chicago Graduate School of Business.
Appendix A. Equivalence Between two Minimisation Problems Proposition: Consider a set of observations O
^xi , y i `i 1,,n
and the problem
G.I given by: Minimise g G ª «¬
n i 1
L yi g ( xi ) º »¼
Subject to f ( x) t g ( x) t h( x)
where the error is measured in terms of the function L that satisfies: L( x y )
L( x ) L( y ) if x y t 0
then, g is a solution of Problem G.I if and only if it is a solution of Problem G.II given by: Minimise g G ª «¬
n i 1
LMin>Max> yi , h( xi )@, f ( xi )@ g ( xi ) º »¼
Subjec t to f ( x ) t g ( x) t h( x )
Proof: The set O can be partitioned into three classes O O
{( x , y ) y ! f ( x )}, O i i i i
For observations in O+:
O O O , where:
{( x , y ) y h ( x )} i i i i
Gregorio Moral
yi f xi f xi g ( xi ) t 0
Therefore, from (59) and (62), the error in Problem G.I associated with an observation in O+ can be expressed in terms of the error in Problem G.II plus an amount independent of g:
err GI , x , y i i
L yi g xi
L yi f xi f xi g xi
L yi f xi L f xi g xi L yi f xi LMin>Max>yi , hxi @, f xi @ g xi
L yi f xi err GII , x , y i i
But the O+ set does not depend on the function g , therefore for these observations, and for all g, the error in Problem G.I can be decomposed in a fixed amount, independent of the g function, given
by L yi f xi , where the index i applies at
the observations in O+ and the error in Problem G.II. Similarly, for observations in O , the error in Problem G.I is equal to the error in Problem G.II plus the fixed amount
¦ Lhxi yi .
Finally, for the observations in O= the errors in Problem G.I and in Problem G.II are the same.Ƒ
Appendix B. Optimal Solutions of Certain Regression and Optimization Problems Let X and Y be random variables with joint distribution given by F(x,y), then we get in the case of a quadratic loss
d * x E Y X
½ Min® E Y d X 2 ¾ . d ( x) ¯ F * ¿
In the case of the linear asymmetric loss function: a x iff x t 0 L x ® ¯b x iff x 0
The following is found
X. EAD Estimates for Facilities with Explicit Limits
d * x Q Y X ,
b ab
½ Min® E LY d X ¾ d ( x) ¯ F* ¿
See, for example, Pratt (1995, pp. 261-263). Therefore, a solution for (28) can be obtained from (64), and taking into account: Y
; d X
LEQ ( RD ) h( RD ); where h( RD )
Then, d* is given by (64) and assuming that h(RD) = h(f) for observations in RDS(f): d
RD ( f )
LEQ ( RD ( f ))
L( f ) E ( f )
Z( f )
The result showed in (29) is obtained from the former equation.Ƒ
Appendix C. Diagnostics of Regressions Models Model II (Section 7.2.1)
x By using original data: EADi Ltd 12
Parameter Table
Ltd 12
DF 1 416 417
§ ©
0.64 ¨1
E td 12 · ¸ L td 12 ¹
Degree of adjustment
Model Error Total
E td 12
Rsquared ADjustedRSquared EstimatedVariance 0,13 0,13 0,21 SumOfSq 13,5 87,41 100,95
ANOVATable MeanSq 13,54 0,21
FRatio 64,45
Pvalue 0
x By using censored data: EADi Ltd 12
E td 12 Ltd 12
§ ©
0. 7 ¨ 1
E td 12 · ¸ L td 12 ¹
Gregorio Moral
Parameter Table
Degree of adjustment
Model Error Total
Rsquared ADjustedRSquared EstimatedVariance 0,75 0,75 0,013
DF 1 416 417
ANOVATable MeanSq 16,49 0,013
SumOfSq 16,49 5,58 22,05
FRatio 1229,67
Pvalue 0
x By using a variable time approach: EADi Ltr
Parameter Table
E tr Ltr
§ ©
0.49 ¨1
E tr · ¸ L tr ¹
Degree of adjustment
Model Error Total
Rsquared ADjustedRSquared EstimatedVariance 0,06 0,06 0,19
DF 1 2918 2919
SumOfSq 35,92 545,3 581,2
ANOVATable MeanSq 16,49 0,19
FRatio 1229,67
Pvalue 0
Model I (Section 7.2.2)
x By using Model I, variable time approach:
LEQ f
0.82 1.49 1 E f / L f
The diagnostics for this regression model are:
X. EAD Estimates for Facilities with Explicit Limits
Parameter Table
1 x^0,5
Degree of adjustment
Model Error Total
Estimate -0,82 1,49
SE 0,009 0,014
TStat -93,09 104,57
Pvalue 0 0
Rsquared ADjustedRSquared EstimatedVariance 0,94 0,94 0,006
DF 1 663 664
SumOfSq 66,13 4,01 70,14
ANOVATable MeanSq 66,13 0,006
FRatio 10934,1
Pvalue 0
Model III (Section 7.2.3)
x By using a variable time approach: Median>EAD f E f @ 86.8 0.76 L f E f
Quantile>EAD f E f ,0.666@ 337.8 0.92 L f E f
With the diagnostics given by: Parameter Table
1 x
Degree of adjustment
Model Error Total
Estimate 86,8 0,76
SE 11,23 0,003
TStat 7,73 222,74
Pvalue 0 0
Rsquared ADjustedRSquared EstimatedVariance 0,95 0,95 227741
DF 1 2370 2371
SumOfSq 1,13*10^10 5,40*10^8 1,18*10^10
ANOVATable MeanSq 1,13*10^10 227741
FRatio 49611
Pvalue 0
TStat 65,7 594,6
Pvalue 0 0
and for the quantile: Parameter Table
1 x
Degree of adjustment
Model Error Total
DF 1 2370 2371
Estimate 337,8 0,92
SE 5,14 0,002
Rsquared ADjustedRSquared EstimatedVariance 0,99 0,99 47774,6
SumOfSq 1,69*10^10 1,13*10^8 1,7*10^10
ANOVATable MeanSq 1,7*10^10 47774,6
FRatio 353621
Pvalue 0
Gregorio Moral
Appendix D. Abbreviations AIRB CCF CF EAD EADi = E(td) EAD(f) eadi E(t) e(t) ei = e(tr) f g i = {g, tr} IRB LEQ LEQ(f) LEQi LGD L(t) Oi PD Qa = Q(x, a) RDS RDS(f) RD S(tr) t td tr td tr
Advanced internal ratings-based approach Credit conversion factor Conversion factor Exposure at default Realised exposure at default associated with Oi EAD estimate for f Realised percent exposure at
default, associated with Oi Usage or exposure of a facility at the date t Percent usage of a facility at the date t Percent usage associated with the observation Oi={g, tr} Non-defaulted facility
Defaulted facility Index associated with the observation of g at tr Internal ratings-based approach Loan equivalent exposure LEQ estimate for f Realised LEQ factor associated with the observation Oi
Loss given default Limit of the credit facility at the date t Observation associated with the pair i={g, tr} Probability of default Quantile associated with the a% of the distribution F(x) Reference
data set RDS associated with f Risk drivers Status of a facility at the reference date tr Current date Default date Reference date Horizon
XI. Validation of Banks’ Internal Rating Systems A Supervisory Perspective Stefan Blochwitz and Stefan Hohl Deutsche Bundesbank and Bank for International Settlements (BIS)1
1. Basel II and Validating IRB Systems
1.1. Basel’s New Framework (Basel II) ‘Basel II’ is associated with the work recently undertaken by the Basel Committee on Banking Supervision (BCBS)2. This aimed to secure international convergence
on revisions to supervisory regulations on capital adequacy standards of internationally active banks. The main objective of the 1988 Accord3 and its revision is to develop a risk-based capital
framework that strengthens and stabilises the banking system. At the same time, it should provide for sufficient consistency on capital adequacy regulation across countries in order to minimize
competitive inequality among international banks. In June 2004, the BCBS issued ‘Basel II’, titled ‘International Convergence of Capital Measurement and Capital Standards: A Revised Framework’,
carefully crafting the balance between convergence and differences in capital requirements. This paper presents pragmatic views on validating IRB systems. It discusses issues related to the
challenges facing supervisors and banks of validating the systems that generate inputs into the internal ratings-based approach (IRBA) used to calculate the minimum regulatory capital for credit
risk, based on internal bank information. 1
The views expressed are those of the authors and do not necessarily reflect those of the Bank for International Settlements (BIS), the Basel Committee of Banking Supervision, or the Deutsche
Bundesbank. The Basel Committee on Banking Supervision is a committee of banking supervisory authorities that was established by the central bank governors of the Group of Ten countries in 1975. It
consists of senior representatives of bank supervisory authorities and central banks from Belgium, Canada, France, Germany, Italy, Japan, Luxembourg, the Netherlands, Spain, Sweden, Switzerland, the
United Kingdom, and the United States. see BIS (1988)
Stefan Blochwitz and Stefan Hohl
The key role of Banks as financial intermediaries highlights their core competences as lending, investing and risk management. In particular, analysing and quantifying risks is a vital part of
efficient bank management. An appropriate corporate structure is vital to successful risk management. Active credit risk management is indispensable for efficiently steering a bank through the
economic and financial cycles, despite the difficulties stemming from a lack of credit risk data. A well-functioning credit risk measurement system is the key element in every bank’s internal risk
management process. It is interesting to note that the debate about the new version of the Basel Capital Accord (Basel II), which establishes the international minimum requirements for capital to be
held by banks, has moved this topic back to the centre of the discussion about sound banking. The proper implementation of the IRBA is at the heart of a lively debate among bankers, academics and
regulators. At the same time a paradigm shift has taken place. Previously, credit risk assessment used only the experience, intuition and powers of discernment of a few select specialists. The new
process is more formalised, standardised and much more objective by bank’s internal rating systems. The human element has not been entirely discounted, however; now both human judgement and rating
systems each play an equally important role in deciding the credit risk of a loan. Since the IRBA approach will be implemented in most of the G10-countries in the very near future, the debate on the
IRBA has shifted its accent. More emphasis is now given to the problem of validating a rating system, rather than how to design a rating system. Both the private sector and banking supervisors need
wellfunctioning rating systems. This overlap of interests and objectives is reflected in the approach towards validation of rating systems; even if different objectives imply different priorities in
qualifying and monitoring the plausibility of such systems. We will discuss some of the challenges faced by banks and supervisors, aware that we have only scratched the surface. This is followed by a
discussion of some of the responses given by the BCBS. We then will discuss a pragmatic approach towards validating IRB systems while touching on some issues previously raised. However, we would like
to stress that implementation of Basel II, and especially the validation of IRB systems (and similarly AMA models for operational risk) requires ongoing dialogue between supervisors and banks. This
article, including its limitations, offers a conceptual starting point to deal with the issues related to the validation of IRB systems. 1.2. Some Challenges The discussion on validation has to start
with a discussion of the structure and usage of internal rating systems within banks. The two-dimensional risk assessment for credit risk as required in Basel II, aims to quantify borrower risk, via
the prob-
XI. Validation of Banks’ Internal Rating Systems - A Supervisory Perspective
ability of default (PD) for a rating grade, and the facility risk by means of the Loss Given default (LGD). The third dimension is the facility’s exposure at default (EAD). The broad structure of a
bank-internal rating system is shown in Figure 1. First, the information, i.e. the raw data on the borrower to be rated have to be collected in accordance with established banking standards.
Accordingly, the data is used to determine the potential borrower’s credit risk. In most cases, a quantitative rating method which draws on the bank’s previous experience with credit defaults is
initially used to determine a credit score. Borrowers with broadly similar credit scores, reflecting similar risk characteristics, are typically allocated to a preliminary risk category, i.e. rating
grade. Usually, a loan officer then decides the borrower’s final rating and risk category, i.e. this stage involves the application of qualitative information. Bank Controlling and Management
Rating System Rating Proposal Extraordinary Effects Final Rating
Portfolio Models
Quantitative Method
Overview Internal Reporting
Figure 1. Schematic evolution of a rating process and its integration in the bank as a whole
A well-working rating system should demonstrate that the risk categories differ in terms of their risk content. The quantification of risk parameters is based on the bank’s own historical experience,
backed by other public information and to certain extent, private information. For example, the share of borrowers in a given rating category who have experienced an occurrence defined as a credit
default4 4
What constitutes credit default is a matter of definition. For banks, this tends to be the occurrence of an individual value adjustment, whereas at rating agencies, it is insolvency or evident
payment difficulties. The IRBA included in the new Basel Capital Accord is based on an established definition of default. Compared with individual value adjustments, the Basel definition of default
provides for a forward-looking and therefore relatively early warning of default together with a retrospective flagging of payments that are 90 days overdue.
Stefan Blochwitz and Stefan Hohl
within a given time-frame, usually one year, will be used for the estimation process. The described standardisation of ratings allows the use of quantitative models where sufficient borrower data is
available and highlights the need for highquality, informative data. For consumer loans, the BCBS also allows risk assessment on the level of homogenous retail portfolios that are managed accordingly
by the bank on a pool basis. The challenge for banks is to identify such homogenous portfolios exhibiting similar risk characteristics. This leads to the importance of using bank-internal data, which
plays a crucial role in both the segmentation process used to find homogenous portfolios, and the quantification process used for the risk parameters. One of the techniques used for segmentation and
quantification is the utilisation of so-called “roll rates”5, where different delinquency stages are defined (for example 30 days, 60 days etc.). Counting the roll rate from one delinquency stage to
another and filling the migration matrix would serve as a basis for estimating the PDs for those exposures. There are a couple of issues related to this procedure. Firstly, there is the issue of
segmentation, i.e. do roll rates take into account all relevant risk drivers as required in the Basel II framework? Secondly, for quantification purposes, how will roll rates be translated into PDs,
more specifically, which delinquency class should be used (to comply with the Basel II framework), and to what extent can these PDs be validated? Lastly, in many instances a quicker reaction of
current conditions, sometimes coupled with a longer time horizon, might be needed for purposes of risk management and pricing, especially for retail exposures. How would such quantification processes
for PDs (and LGDs) be rectified with the application of the use-test as required in Basel II? Another issue relates to the modification performed by a credit officer of the automated rating proposal,
i.e. a qualitative adjustment. This may question the rigidity needed for validation, especially in cases where documentation may be insufficient, and the information used is more qualitatively based,
the latter being a general problem in credit assessments. A simple, but important, question is who has the responsibility for validating a rating system in the context of Basel II, given that the
calculation of minimum regulatory capital is legally binding and set by the supervisors. In addition, a valid point in this regard is that some requirements, for example, the quantification process
focussing on long-term averages to reduce volatility of minimum regulatory capital requirements, are not fully in line with bank practice. This may lead to a different quantification process, i.e. a
second process for the sole purpose of meeting supervisory standards, or even to a different risk management process as suggested above in the retail portfolios. In sum, the use-test requirement, the
extent to which an internal rating system is used in daily banking business, will play
Fritz et al. (2002)
XI. Validation of Banks’ Internal Rating Systems - A Supervisory Perspective
a crucial role in assessing compliance with Basel II implementation including the validation of IRB systems. Since a bank’s internal rating systems are individual, and in the best case, fully
tailored to the bank’s necessities; validation techniques must be as individual as the rating system they are used for. As an example, we highlight the so-called LowDefault-Portfolios. As the IRB
framework in Basel II is intended to apply to all asset classes, there are naturally portfolios which exhibit relatively low or even no default at all.6 This makes the quantification, required to be
grounded in historical experience, of PDs and LGDs, extremely challenging. Thus, a straightforward assessment based on historic losses would not be sufficiently reliable for the quantification
process of the risk parameters, but conservative estimates serving as an upper benchmark may be derived (cf. Chapter V). Some of the issues raised in this section have been discussed by the BCBS.
1.3. Provisions by the BCBS The Subgroup on Validation (AIGV)7 of the BCBS’ Accord Implementation Group (AIG) was established in 2004. The objective of the AIGV is to share and exchange views related
to the validation of IRB systems. To the extent possible, the groups should also narrow gaps between different assessments of the New Framework by the different supervising agencies represented in
the AIGV. The objective of the validation of a rating system is to assess whether a rating system can – and ultimately does – fulfil its task of accurately distinguishing and measuring credit risk.
The common view describes the term ‘validation’ as a means to combine quantitative and qualitative methods. If applied together, it should indicate whether a rating system measures credit risks
appropriately and is properly implemented in the bank in question. The BCBS newsletter No. 4, January 2004, informs about the work of the AIGV in the area of validation in Basel II. The most
important information provided was the relatively simple answer to the question, “what aspects of validation will be looked at?” Despite the importance of validation as a requirement for the IRB
approach, the New Framework does not explicitly specify what constitutes validation. Consequently, the Subgroup reached agreement on that question. In the context of rating systems, the term
“validation” encompasses a range of processes and 6
See BCBS newsletter No 6, “for example, some portfolios historically have experienced low numbers of defaults and are generally—but not always—considered to be low-risk (e.g. portfolios of exposures
to sovereigns, banks, insurance companies or highly rated corporate borrowers)”. The Validation Subgroup is focusing primarily on the IRB approach, although the principles should also apply to
validation of advanced measurement approaches for operational risk. A separate Subgroup has been established to explore issues related to operational risk (see BCBS newsletter No 4.).
Stefan Blochwitz and Stefan Hohl
activities that contribute to assessing whether ratings adequately differentiate risk, and importantly, whether estimates of risk components (such as PD, LGD, or EAD) appropriately characterise and
quantify the relevant risk dimension. Starting with this definition, the AIGV developed six important principles (see Figure 2), on validation that result in a broad framework for validation. The
validation framework covers all aspects of validation, including the goal of validation (principle 1), the responsibility for validation (principle 2), expectations on validation techniques
(principles 3, 4, and 5), and the control environment for validation (principle 6). Publishing these principles was a major step in clarifying the ongoing discussions between banks and their
supervisors about validation for at least three reasons: 1. The principles establish a broad view on validation. Quite often, validation was seen as being restricted to only dealing with aspects
related to backtesting. The established broad view on validation reinforces the importance of the minimum requirements of the IRBA, as well as highlighting the importance of riskmanagement. The
debate around the IRBA was too often restricted to solely risk quantification or measurement aspects. We think that this balanced perspective, including the more qualitative aspects of the IRBA,
reflects the shortcomings in establishing and validating rating systems, especially given the data limitations. This clarification also formed the basis for the development of validation principles
for the so-called “Low Default Portfolios (LDPs)” as proposed in the BCBS newsletter No. 6 from August 2005. 2. The responsibility for validation and the delegation of duties has also been clarified.
The main responsibility lies rightfully with the bank, given the importance of rating systems in the bank’s overall risk management and capital allocation procedures. Since validation is seen as the
ultimate sanity-check for a rating system and all its components, this task clearly must be performed by the bank itself, including the final sign-off by senior management. It should be noted that
only banks can provide the resources necessary to validate rating systems. 3. Principles 3 - 5 establish a comprehensive approach for validating rating systems. This approach proposes the key
elements of a broad validation process, on which we will elaborate more in the next section.
XI. Validation of Banks’ Internal Rating Systems - A Supervisory Perspective
Principle 1: Validation is fundamentally about assessing the predictive ability of a bank’s risk estimates and the use of ratings in credit processes The two step process for ratings systems requires
banks to firstly discriminate adequately between risky borrowers (i.e. being able o discriminate between risks and its associated risk of loss) and secondly calibrate risk (i.e. being able to
accurately quantify the level of risk). The IRB parameters must, as always with statistical estimates, be based on historical experience which should form the basis for the forward-looking quality of
the IRB parameters. IRB validation should encompass the processes for assigning those estimates including the governance and control procedures in a bank. Principle 2: The bank has primary
responsibility for validation The primary responsibility for validating IRB systems lies with the banks itself and does not remain with the supervisor. This certainly should reflect the self-interest
and the need for banks having a rating system in place reflecting its business. Supervisors obviously must review the bank’s validation processes and should also rely upon additional processes in
order to get the adequate level of supervisory comfort. Principle 3: Validation is an iterative process Setting up and running an IRB system in real life is by design an iterative process.
Validation, as an important part of this circle, should therefore be an ongoing, iterative process following an iterative dialogue between banks and their supervisors. This may result in a refinement
of the validation tools used. Principle 4: There is no single validation method Many well-known validation tools like backtesting, benchmarking, replication, etc are a useful supplement to the
overall goal of achieving a sound IRB system. However, there is unanimous agreement that there is no universal tool available, which could be used across portfolios and across markets. Principle 5:
Validation should encompass both quantitative and qualitative elements Validation is not a technical or solely mathematical exercise. Validation must be considered and applied a broad sense, its
individual components like data, documentation, internal use and the underlying rating models and all processes which the rating system uses are equally important. Principle 6: Validation processes
and outcomes should be subject to independent review For IRB systems, there must be an independent review within the bank. This specifies neither the organigram in the bank nor its relationship
across departments, but the review team must be independent of designers of the IRB system and those who implement the validation process. Figure 2. The six principles of validation
Stefan Blochwitz and Stefan Hohl
2. Validation of Internal Rating Systems in Detail According to the BCBS elaboration on the term “validation”, we consider three mutually supporting ways to validate bank internal rating systems.
This encompasses a range of processes and activities that contribute to the overall assessment and final judgement. More specifically, this can be directly related to the application of principle 4
and principle 5 of the BCBS newsletter as discussed above. 1. Component-based validation: - analyses each of the three elements – data collection and compilation, quantitative procedure and human
influence – for appropriateness and workability. 2. Result-based validation (also known as backtesting): - analyses the rating system’s quantification of credit risk ex post. 3. Process-based
validation: - analyses the rating system’s interfaces with other processes in the bank and how the rating system is integrated into the bank’s overall management structure. 2.1. Component-based
2.1.1. Availability of High-quality Data Ensuring adequate data quality is the key task which, for at least two reasons, must be addressed with the greatest urgency. First, as the rating is based
primarily on individual borrowers’ current data, it can only be as good as the underlying data. Second, the quantitative component of the rating process requires a sufficiently reliable set of data,
including a cross-sectional basis, which is crucial for calibration of the risk parameters. Accordingly, both historical data and high-quality recent data are essential to ensure that a rating system
can be set up adequately, and will also be successful in the future. Clearly, the availability of data, i.e. financial versus account specific information, and its use for different borrower
characteristics, - wholesale versus consumer – is dissimilar. Activities in consumer finance may produce more bank-specific behavioural data whereas financial information for large wholesale
borrowers should be publicly available. However, the availability of reliable and informative data, especially for the mid-size privately owned borrowers, may frequently not be met for at least
several reasons: x Data compilation and quality assurance incur high costs because they require both qualified staff and a high-performance IT infrastructure. In addition, these tasks seem to have
little to do with original banking business in its strict sense, and their usefulness may only become apparent years later. Clearly, proper investment is needed, adding pressure to costs and
competition. x Similarly, it is a costly exercise in staffing and resource allocation in credit departments. However, the Basel II efforts may have helped to allocate more resources to capturing
adequate and reliable credit data.
XI. Validation of Banks’ Internal Rating Systems - A Supervisory Perspective
x In reality, borrowers also are often reluctant to supply the requested data. This may be because, especially at smaller enterprises, this data is not readily available. Admittedly, because of the
predominant classic “house bank” system in Germany, this information historically had not been requested. Also, potential misuse of data and reluctance on the part of firms to provide information on
their own economic situation seems to be a widespread concern. Sometimes, data is passed on to a very limited number of third parties only.8 x Further concentration in the banking industry is also
contributing to the problem. Owing to the lack of uniform standards for banks, in the event of a merger, different sets of data have to be synchronized – this adds a new dimension to the problem and
is, again, no quick and easy task to do. A thorough knowledge of the IT systems underlying the rating approach is necessary for the proper assessment of data quality; in addition the following may
help to provide a realistic evaluation: x Ensuring data quality: The sheer existence and quality of bank internal guidelines, including tests around them, is an indication of the importance banks
place on good data quality. Whether a bank takes its own guidelines seriously can be gauged from day-to-day applications. For instance, data quality assurance can reasonably be expected to be as
automated as possible to ensure that a uniform standard is applied throughout the bank. Also, comparison with external sources of data seems to be necessary to ensure data plausibility. x Bank-wide
use of the data: The extent to which data are used allows assessing the confidence that the bank has in its data. This leads to two consequences. On the one hand, frequent and intensive use of
specific data within a bank exposes inconsistencies which might exist. On the other hand, where larger numbers of people are able to manually adjust data, the more likely is its potential
contamination, unless suitable countermeasures are taken. 2.1.2. The Quantitative Rating Models The second facet of the rating process, in the broadest sense, is the mathematical approach which can
be used to standardise the use of data. The aim is to compress data collected in the first stage to prepare and facilitate the loan officer’s decision on the credit risk assessment of a borrower. In
recent years, the analysis and development of possible methods has been a focus of research at banks and in microeconomics. The second stage methods attribute to each borrower, via a rating function
fRat, a continuous or discrete risk measure Z, a rating score, which is dependent on both
An indication of this attitude, which is widespread in Germany, is, for example, the approach that is adopted to the obligation laid down in Section 325 of the German Commercial Code for corporations
to publish their annual accounts. No more than 10% of the enterprises concerned fulfil this statutory obligation.
Stefan Blochwitz and Stefan Hohl
the individual features of each borrower X1,X2, ..., XN – also denoted as risk factors – and free, initially unknown model parameters D1,D2, ..., DM: Z
f Rat (D1 , D M ; X 1 , , X N )
The value of Z permits the suggested rating to be derived from the quantitative analysis of the borrower concerned, in that each value of Z is allocated precisely to one of Y various rating
categories. The methods suitable for this kind of quantitative component can be classified as: x Statistical methods: This is probably the best known and the most widespread group of methods. They
are used by almost all banks in both corporate and private sector business. The best known of such methods are discriminatory analyses (primarily in corporate business) and logit regressions (used
mainly as scorecards in private sector business). Generalised regression and classification methods (such as neural networks) also belong in this category, even if they are rarely used in practice. x
Rule-based systems: Such systems model the way in which human credit experts reach a decision and are used in corporate business. They comprise a set of predetermined “if ... then” rules (i.e. expert
knowledge). Each enterprise is first graded according to these rules. The next stage is for the rules matched by the firm to be aggregated in order to give a risk rating. x Benchmarking methods: In
these methods, observable criteria, such as bond spreads, are used to compare borrowers with unknown risk content with rated borrowers with known risk content – the so-called benchmarks. x Applied
economic models: Option price theory models are the most widely known. They enable, for example, an enterprise’s equity capital to be modelled as a call option on its asset value and thus the
concepts used in option price theory to be applied to credit risk measurement. The starting point for the development of these models was the Merton model; KMV has been successful in its further
development offering its Public Firm Model for listed enterprises and a Private Firm Model for unlisted enterprises (Crosbie and Bohn, 2001). Another classification distinguishes between empirical
models, where the parameters are determined from data of known borrowers by using mathematical or numerical optimisation methods, and expert methods, where the parameters are specified by credit
experts based on their experience. Basically, the difference lies in the specification of the model parameters D1,D2, ..., DM. 2.1.3. The Model Itself Transparency, intelligibility and plausibility
are crucial for validating the appropriateness of the rating process. Clearly, either with the set of rules for expert systems or with the underlying model in the case of benchmarking methods and
applied economic models, these requirements seem to be easily fulfilled. The situation regarding statistical models is somewhat more complex – as there is no
XI. Validation of Banks’ Internal Rating Systems - A Supervisory Perspective
economic theory underlying these models. However, certain basic economic requirements should also be incorporated in using statistical models. For example, experience has shown that many risk factors
are invariably more marked among “good” borrowers than “bad” borrowers. Likewise, if a requirement of risk measure Z is invariably larger among better borrowers than among worse borrowers, the direct
consequence is that the monotony of the risk factor must also be evident in the monotony of the risk measure. Therefore, for the i-th risk factor Xi, the following applies: wZ wX i
wf Rat (Į1 , Į M ;X 1 , ,X N ) !0 wX i
Economic plausibility leads to the exclusion of “non-monotonous” risk factors in linear models. Non-monotonous risk factors are, for example, growth variables, such as changes in the balance sheet
total, changes in turnover etc. Experience shows that both a decline and excessively high growth of these variables imply a high risk. Such variables cannot be processed in linear models, i.e. in
models like Z = D0+D1 X1+}+DN XN, because, owing to wZ wX i
const. ,
the plausibility criterion in these models cannot be fulfilled for non-monotonous features.9 Further economic plausibility requirements and sensitivity analysis should be considered in a causal
relationship with economic risk, for example the creditworthiness of an enterprise cannot be derived from the managing director’s shoe size! The commonly applied statistical standards must be
observed for all empirical models (statistical models, specific expert systems and applied economic models). Non-compliance with these standards is always an indication of design defects, which
generally exhibit an adverse effect when applied. Without claiming completeness, we consider the following aspects to be vital when developing a model: x Appropriateness of the random sample for the
empirical model: The appropriateness of the random sample is the decisive factor for all empirical and statistical models. This is also relevant to applied economic models, as is illustrated by the
KMV models. These models have been based on data on US firms, meaning that they draw on developments in the US markets only and solely reflect US accounting standards. Not all data which is important
in this system is available when other accounting standards are used, with the result that when the models are transferred to other countries, one has to work with possibly questionable
approximations. This has a bearing on certain characteristics of the models such as lack of ambiguity and the stability of the results. 9
Moody’s RiskCalc (Falkenstein et al., 2000) provides one way of processing nonmonotonous risk factors by appropriate transformation in linear models. Another one can be found in Chapter II.
Stefan Blochwitz and Stefan Hohl
x Over-parameterising the model: A mistake, frequently observed, is to include too many risk factors in the design of a rating system. The reasons for this include an overly cautious approach when
developing the system, i.e. each conceivable risk factor, or those which credit experts seem to consider obvious, are to be fed into the system. On the other hand, rating systems are often developed
by committees and these would naturally like to see their particular “babies” (mostly a “favourite variable” or a special risk factor) reflected in the rating design. Neither approach is optimal from
the statistical perspective as there is an upper limit to the number of parameters to be calculated, depending on the size of the sample and the model used. If this rule is breached, errors are made
which are called “overfitting”. x Statistical performance of the estimated model: The performance of the model in a statistical sense is generally provided as a type-1 or a type-2 error, applying
measures of inequality such as Gini coefficients or entropy measures (Falkenstein et al., 2000), or other statistical measures which can be determined either for the sample or the entire population.
These variables quantify the rating system’s ability to distinguish between good and bad borrowers and thus provide important information about the capability of the rating model with regard to
discriminating between risks. These variables are especially important during the development of a rating system as they allow comparison of the performance of various models within the same data
set. However, we think that these tools are only of minor importance for ongoing prudential monitoring. First, owing to the concave form of the risk weighting function in the new Basel Accord, which
provides logical incentives so that systems which discriminate more finely, are less burdened by regulatory capital than coarser systems. Second, the absolute size of the probability of default is
the variable relevant for banking supervision as it is linked to the size of the regulatory capital. x Modelling errors, precision and stability: Certain modelling errors are inevitably part of every
model because each model can depict only a part of economic reality in a simplified form. In order to be able to use a model correctly, one has to be aware of these limitations. However, in addition
to these limitations, which are to a certain extent a “natural” feature of each model, the modelling errors caused by using an optimisation or estimation procedure also need to be considered. These
estimation errors can be quantified for the model parameters from the confidence levels of the model parameters. Given certain distribution assumptions, or with the aid of cyclical or rotation
methods, these confidence levels can be determined analytically from the same data which is used to estimate the parameters (Fahrmeir et al., 1996). If error calculation methods frequently used in
the natural sciences are applied, it is possible to estimate the extent to which measurement bias of the individual model parameters affects the credit score Z. The stability of a model can be
derived from the confidence levels of model parameters. Determining the stability of a model seems to be particularly important, i.e. the responsiveness to portfolio changes. A more critical issue is
model precision. In some methods, model parameters are determined – though illogically – with a precision that is several orders of magnitude higher than for the risk parameters.
XI. Validation of Banks’ Internal Rating Systems - A Supervisory Perspective
2.1.4. Role of the Loan Officer – or Qualitative Assessment Loan officers play an important role in both setting up a rating system as well as using it in practice. We think that qualitative
assessments should be included in the final rating assignment, by allowing the loan officer to modify the suggested credit rating provided by the quantitative model.10 This is certainly necessary for
exposures above a certain size; retail loans may be dependent on the business activities and risk management structures in the bank. The sheer size of mass financing of consumer loans certainly
results in less influence for the loan officer, rather, they rely on correct procedures to check the automated rating proposal and the input provided by the sales officer. We discuss three important
aspects accordingly: x The loan officer’s powers: Any manual modification of the automated rating proposal should be contained within a controlled and well-documented framework. The loan officer’s
discretion should be set within clearly defined limits which specify at least the conditions permitting a deviation from the automated rating proposal and the information that the loan officer used
additionally. One way to look at discretion is the use of a deviation matrix of final and suggested ratings, showing for each rating category, how many suggested ratings (generated by the
quantitative rating tool) are changed by manual override: more specifically, the share Mij of borrowers assigned by the quantitative system to the ith category which loan officers finally place in
category j. In a well-defined, practicable rating system, a high match between suggested ratings and final ratings should be expected in most cases, so in each line the values of Mii should be the
largest and Mij should decrease the more the final ratings diverge from the suggestions. Clearly, greater deviations should lead to careful analysis of the shortcomings of the rating model, either
indicating data issues or problems with the model itself. x Monitoring the ratings over time: Any rating system must ideally be monitored continuously and be able to process incoming information
swiftly; however, ratings must be updated at least annually. This does also apply for consumer loans. However, the focus is on ensuring that loans and borrowers are still assigned to the correct
pool, i.e. still exhibiting the loss characteristics and the delinquency status of the previously assigned pool. As such, different methodologies may be used, for example by using an account-specific
behavioural score. For wholesale loans, it may be helpful to analyse the frequency distribution of the time-span between two successive ratings of all borrowers in a specific portfolio. The expected
pattern is shown in Figure 3: most borrowers are reevaluated at regular intervals, roughly once every 12 months, but in between, “ad hoc ratings” are based on information deemed to be important and
their frequency increases with the amount of time that has elapsed since the first rating. Between the two regular re-ratings, a whole group of the same type of borrowers (e.g. enterprises in one
sector) may occasionally be re-rated because in10
The normal transformation of qualitative information like family status, gender, etc into numerical variables for the assessment of consumer loans would not replace such a qualitative oversight.
Stefan Blochwitz and Stefan Hohl
formation relevant to the rating of this group has been received. It should be possible to explain any divergent behaviour which, in any case, provides insights into the quality of the rating
process. 40%
Ratio of rerated clients
35% 30%
Regular rating review
25% 20% 15% 10%
Ad-hoc reratings caused by material new information
Review of ratings of a group of similar borrowers, eg. of a certain sector due to a specific event
5% 0% 1
more than 12
Time between two subsequent ratings in months
Figure 3. Frequency distribution of the time-span between two successive ratings for all borrowers in one portfolio
x Feedback mechanisms of the rating process: A rating system must take account of both the justified interests of the user – i.e. the loan officer – whose interest is driven by having a rating
process which is lean, easy to use, comprehensible and efficient. On the other hand, the model developer is interested in a rating model which is theoretically demanding and as comprehensive as
possible. Where interests conflict, these will need to be reconciled. It is all the more important that a rating system is checked whilst in operational mode, to ascertain whether the model which the
process is based on is appropriate and sufficiently understood by the users. In any case, procedures must be implemented according to which a new version – or at least a new parameterisation — of the
rating model is carried out. 2.2. Result-based Validation In 1996, the publication of capital requirements for market risk for a bank’s trading book positions as an amendment to the 1988 Basel
Accord, was the first time that a bank’s internal methodology could be used for purposes of regulatory capital. The output of bank internal models, the so-called Value-at-Risk (VaR) which is the most
popular risk measure in market risk, is translated into a minimum capi-
XI. Validation of Banks’ Internal Rating Systems - A Supervisory Perspective
tal requirement, i.e. three times VaR. The supervisory challenge for most countries, certainly Germany, was to establish an appropriate supervisory strategy to finally permit these bank internal
models for calculating regulatory capital. In addition to the supervisory assessment of the qualitative market risk environment in a bank, another crucial element of the strategy was the
implementation of an efficient “top-down” monitoring approach for banks and banking supervisors. The relatively simple comparison between ex-ante estimation of VaR and ex-post realisation of the
“clean” P&L11 of a trading book position, excluding extraneous factors such as interest payments, was the foundation for the quantitative appraisal. The concept for backtesting in the IRBA as
introduced in paragraph 501 of the New Framework is relatively similar. In the IRB approach, according to market risk, the probability of default (PD) per rating category or, in special cases, the
expected loss (EL) in the case of consumer loans, must be compared with the realised default rate or losses that have occurred. Despite the basic features common to market risk and credit risk, there
are also important differences, most importantly the following two. First, the conceptual nature is different; in market risk the forecasted VaR is a percentile of the “clean” P&L distribution. This
distribution can be generated from the directly observable profit and losses, and thus the VaR can be directly observed. By contrast, in credit risk only realised defaults (and losses) according to a
specific definition can be observed directly instead of the forecasted PD (and EL). A common and widespread approach for credit risk is the application of the law of large numbers and to infer from
the observed default rate, the probability of default.12 To our knowledge, almost all backtesting techniques for PD (or EL) rely on this statistical concept. However, a proper application requires
that borrowers are grouped into grades exhibiting similar default risk characteristics.13 This is necessary even in the case of direct estimates of PD, when each borrower is assigned an individual
There are different interpretations among different supervisors on this issue. Beside the fact, that an application of the law of large numbers would require that defaults are uncorrelated, there is
another subtle violation in the prerequisites for applying the law of large numbers. It is required that the defaults stem from the same distribution. This requirement cannot be seen to be fulfilled
for different borrowers. To give a picture: The difference for the task of determining the probability of throwing a six is like approximating this probability either by throwing the same dice 1,000
times and calculating the ratio of sixes to the total number of throws or throwing 1,000 dices once and calculating the ratio of sixes to the number of dices thrown. 13 We believe that validation of
rating systems, i.e. the calibration of PDs is almost impossible without the grouping of borrowers to grades with the same risk profile; which is also one of the key requirements of Basel II. 12
Stefan Blochwitz and Stefan Hohl
The second main difference relates to the available data history on which the comparison is based. In market risk, the frequency is at least 250 times a year in the case of daily data. By contrast,
in credit risk there is only one data point per annum to be assumed. To make it more complex, there is an additional problem arising from measuring credit default, which is the key variable for the
quantification and therefore the validation. The definition of credit default is largely subjective. The New Framework suggests retaining this subjective element as the basis of the IRB approach,
albeit with a forward-looking focus and a back-stop ratio of 90 days past due. This may be justified, not least by the fact that a significant number of defaulted borrowers seem to have a
considerable influence on the timing of the credit default. Correspondingly, the criteria and – more importantly – the applied methodology are also different. In market risk, the challenge is to
provide a clean P&L and to store the corresponding data. This differs significantly from the necessary compilation of the rating history and credit defaults over time. Depending on the required
reproducibility of the results, considerable time and effort may be needed and it is difficult to estimate what requirement is most important for what area, thus entailing higher costs for the bank.
Owing to the volume of available data points in market risk, the simplicity and multiplicity of the applicable methods are impressive. This naturally poses an apparently insuperable challenge for
credit risk. A further problem is the impact of a rating philosophy on backtesting. The rating philosophy is what is commonly referred to as either Point-in-Time (PIT) or Through-the-Cycle (TTC)
ratings. PIT-ratings measure credit risk given the current state of a borrower in its current economic environment, whereas TTC-ratings measure credit risk taking into account the (assumed) state of
the borrower over a “full” economic cycle. PIT and TTC mark the ends of the spectrum of possible rating systems. In practice, neither pure TTC nor pure PIT systems will be found, but hybrid systems,
which are rather PIT or rather TTC. Agency ratings are assumed to be TTC, whereas bank internal systems – at least in most cases in Germany – are looked at as PIT. The rating philosophy has an
important impact on backtesting. In theory, for TTC systems borrower ratings, i.e. its rating grade, are stable over time, reflecting the long-term full cycle assessment. However, the observed
default rates for the individual grades are expected to vary over time in accordance with the change in the economic environment. The contrary is the case for PIT systems. By more quickly reacting to
changing economic conditions, borrower ratings tend to migrate through the rating grades over the cycle, whereas the PD for each grade is expected to be more stable over time, i.e. the PD is more
independent from the current economic environment. The Basel Committee did not favour a special rating philosophy. Both PIT systems as well as TTC systems are fit for the IRBA. However, it seems to
be reasonable to look at risk parameters as a forecast for their realisations which can be observed within a one year time horizon. This reasoning is
XI. Validation of Banks’ Internal Rating Systems - A Supervisory Perspective
reflected in the first validation principle of the AIGV, where a forward looking element is required to be included in the estimation of Basel’s risk parameters. In the special case of consumer
loans, the estimation and validation of key parameters is extremely dependent on the approach taken by a bank. A similar rating system as used for wholesale borrowers, leads to an analogous
assessment for purposes of validation. In contrast, instead of rating each borrower separately, the BCBS clusters loans in homogenous portfolios during the segmentation process (see above). This
segmentation process should include assessing borrower and transaction risk characteristics like product type etc., as well as identifying the different delinquency stages (30 days, 60 days, 90 days
etc.). Subsequently, the risk assessment on a (sub-) portfolio level could be based on its roll rates, transaction moving from one delinquency stage to another. The implications of these rather
general considerations and possible solutions for the problems raised here are discussed in detail in Chapter VIII. 2.3. Process-based Validation Validating rating processes includes analysing the
extent to which an internal rating system is used in daily banking business. The use test and associated risk estimates is one of the key requirements in the BCBS’ final framework. There are two
different levels of validation. Firstly, the plausibility of the actual rating in itself, and secondly, the integration of ratings output in the operational procedure and interaction with other
processes: x Understanding the rating system: It is fundamental to both types of analysis that employees understand whichever rating methodology is used. The learning process should not be restricted
to loan officers. As mentioned above, it should also include those employees who are involved in the rating process. In-house training courses and other training measures are required to ensure that
the process operates properly. x Importance for management: Adequate corporate governance is crucial for banks. In the case of a rating system, this requires the responsibility of executive
management and to a certain extent the supervisory board, for authorising the rating methods and their implementation in the bank’s day-to-day business. We would expect different rating methods to be
used depending on the size of the borrower14, and taking account of the borrowers’ different risk content and the relevance of the incoming information following the decision by senior management.
In the Basel Committee’s new proposals in respect of the IRB approach, small enterprises may, for regulatory purposes, be treated as retail customers and, unlike large corporate customers, small and
medium-sized enterprises are given a reduced risk weighting in line with their turnover.
Stefan Blochwitz and Stefan Hohl
x Internal monitoring processes: The monitoring process must cover at least the extent and the type of rating system used. In particular, it should be possible to rate all borrowers in the system,
with the final rating allocated before credit is granted. If the rating is given after credit has been granted, this raises doubts about the usefulness of internal rating. The same applies to a
rating which is not subject to a regular check. There should be a check at least annually and whenever new information about the debtor is received which casts doubt on their ability to clear their
debts. The stability of the rating method over time, balanced with the need to update the method as appropriate, is a key part of the validation. To do this, it is necessary to show that objective
criteria are incorporated so as to lay down the conditions for a re-estimation of the quantitative rating model or to determine whether a new rating model should be established. x Integration in the
bank’s financial management structure: Unless rational credit risk is recorded for each borrower, it is impossible to perform the proper margin calculation taking into account standard risk costs. If
this is to be part of bank management by its decision-making and supervisory bodies, a relationship must be determined between the individual rating categories and the standard risk costs. However,
it must be borne in mind that the probability of default is simply a component of the calculation of the standard risk costs and, similarly to the credit risk models, other risk parameters, such as
the rate of debt collection and the size of the exposure in the event of a credit default, the maturity of the loan, transfer risk and concentration risks should also be recorded. Ultimately the
gross margin, which approximates to the difference between lending rates and refinancing costs, can act as a yardstick for including the standard risk costs. In order to determine the concentration
risks at portfolio level more appropriately, it seems essential to use credit risk models and thus to be in a position to allocate venture capital costs appropriately. Therefore, if net commission
income is added to the gross margin, the operational costs netted out, and also the venture capital costs taken into account, it is possible to calculate the result of lending business. It is
naturally advisable to include as part of the management of the bank, all other conventional instruments of credit risk measurement, such as employee bonus systems, portfolio optimisation. In
principle, the Basel Committee requires these mainly portfolio-based methods in the second pillar of the new Accord as part of the self-assessment of capital adequacy required of the banks in the
Capital Adequacy Assessment Process (CAAP). This frequently leads to problems when integrating banks’ own rating systems into credit risk models purchased from specialist providers. In our view, this
may ultimately increase the complexity for banks and banking supervisors and at the same time entail considerable competitive distortions if the rating is less objective.
XI. Validation of Banks’ Internal Rating Systems - A Supervisory Perspective
3. Concluding Remarks To set up and to validate bank-internal rating systems is a challenging task and requires a considerable degree of sensitivity (Neale, 2001). Our analysis started with the
comparatively more difficult data situation and the availability of public and private information in order to quantify credit risk of banks’ borrowers in a structured way including its subsequent
validation. The advantage of the structured credit risk assessment, when applying an automated rating process, is its objectivity. This is true for the rating method and for the selection of the risk
factors in the rating model, including their effectiveness in generating a rating proposal. The final integration of the qualitative credit assessment, based on a subjective decision by the loan
officer, is more difficult in the structured assessment. The final rating outcome comprises an array of individual observations, which may provide very different results. Ultimately, our suggested
approach to validation takes this complexity into account by highlighting the importance of the rating process. This interdependence is reflected in the ongoing cycle of setting up and monitoring the
rating system. Individual observations during the monitoring process are frequently integrated quickly into a revision of the methodological process. The validation method is analogous to a jigsaw
puzzle. Only if the many individual pieces are being assembled properly, will the desired result be achieved. The individual pieces of the puzzle seem unimpressive and often unattractive at first,
but they eventually contribute to the ultimate picture. This may, for example, be an appropriate description when setting up the system and conducting ongoing checks on the quality of the data
management or the ongoing adjustment of banks’ internal credit standards. Each piece of the puzzle is crucial, to both componentbased and process-based validation. One crucially important piece is
the processbased component. All conventional methods of quantitative validation should encompass the assessment of the rating tool’s economic meaningfulness as well as its compliance with statistical
standards. Transparency and comprehensibility of the chosen methods at each stage of development, as well as its plausibility, are fundamental requirements of a sound rating system. The advantage of
using empirical statistical approaches is that these models are comprehensible and that defects or statistical shortcomings can be detected by simple statistical tests. By contrast, rule-based
systems and applied economic models are more heavily model-dependent and therefore point to model risk. In the case of benchmarking methods; however, the choice of the peer group with known risk
content is decisive, although the instability of such models, in particular, can be a problem. Despite the differences, most applied methods can fulfil all requirements initially, albeit to a
differing degree. The broad use and the interplay of different quantitative plausibility and validation methods is the basis of a quantitative analysis of the methods used. Backtesting
Stefan Blochwitz and Stefan Hohl
using a simple retrospective comparison of estimated default probabilities with actual default rates is crucial, and therefore a decisive element in the validation of the results.15 Complementary
methods are also needed, particularly in the development stage of rating models, in order to ensure the plausibility of the selected methods. These include techniques which underscore the stability
and accuracy of the methods, although caution is required with regard to quantification and especially with regard to methods used to measure accuracy. The validation of internal rating systems
underscores the importance of using a formalised process when devising them and in their daily application. This covers both the formalised keying in of data and the criteria for subjectively
“overruling” the rating proposal. Unless internal ratings are used on a regular basis and in a structured manner over time, banks and banking supervisors by referring to the “use-test” will find
difficulties in accepting such a rating system.
References Basel Committee on Banking Supervision (2005), International Convergence of Capital Measurement and Capital Standards: A Revised Framework, BIS, Updated November 2005. http://www.bis.org/
publ/bcbs107.htm Basel Committee on Banking Supervision (2005a), Validation, Newsletter No. 4. www.bis.org/publ/bcbs_nl4.htm Basel Committee on Banking Supervision (2005b), Validation of Low Default
Portfolios, Newsletter No. 6. www.bis.org/publ/bcbs_nl4.htm Basel Committee on Banking Supervision (2005c), The Application of Basel II to Trading Activities and the Treatment of Double Default
Effects. http://www.bis.org/publ/ bcbs116.htm Basel Committee on Banking Supervision (1988): International Convergence of Capital Measurement and Capital Standards. http://www.bis.org/publ/
bcbs04a.htm Crosbie PJ, Bohn JR: Modelling Default Risk, KMV LLC 2001. http://www.kmv.com/ insight/index.html Deutsche Bundesbank (2003): The new “Minimum requirements for the credit business of
credit institutions” and Basel II, Monthly Report, January 2003, 45–58. Falkenstein E, Boral A, Carty LV (2000), RiskCalcTM for Private Companies: Moody’s Default Model, Moody’s Investor Service May
2000. http://www.moodys.com/ cust/research/venus/Publication/Rating%20Methodology/noncategorized_number/564 02.pdf Fritz S, Popken L, Wagner C (2002), Scoring and validating Techniques for Credit
Risk Rating Systems, in “Credit Ratings”, Risk books, London. Koyluoglu, U, Hickman A (1998), Reconcilable Differences, Risk Magazine, October, pp. 56-62. Neale C (2001), The Truth and the Proof,
Risk Magazine, March, pp 18-19. Wilde T (2001), IRB Approach Explained, Risk Magazine, May, pp. 87-90.
We thus concur with the Basel Committee on Banking Supervision.
XII. Measures of a Rating’s Discriminative Power – Applications and Limitations Bernd Engelmann Quanteam AG
1. Introduction A key attribute of a rating system is its discriminative power, i.e. its ability to separate good credit quality from bad credit quality. Similar problems arise in other scientific
disciplines. In medicine, the quality of a diagnostic test is mainly determined by its ability to distinguish between ill and healthy persons. Analogous applications exist in biology, information
technology, and engineering sciences. The development of measures of discriminative power dates back to the early 1950’s. An interesting overview is given in Swets (1988). Many of the concepts
developed in other scientific disciplines in different contexts can be transferred to the problem of measuring the discriminative power of a rating system. Most of the concepts presented in this
article were developed in medical statistics. We will show how to apply them in a ratings context. Throughout the article, we will demonstrate the application of all concepts on two prototype rating
systems which are developed from the same data base. We consider only rating systems which distribute debtors in separate rating categories, i.e. the rating system assigns one out of a finite number
of rating scores to a debtor. For both rating systems, we assume that the total portfolio consists of 1000 debtors, where 50 debtors defaulted and 950 debtors survived. Both rating systems assign
five rating scores 1, 2, 3, 4, and 5 to debtors where 1 stands for the worst credit quality and 5 for the best. Table 1 summarizes the rating scores that were assigned to the non-defaulting debtors
by both rating systems. Table 1 tells us precisely the distribution of the non-defaulting debtors on the two rating systems. For example, we can read from Table 1 that there are 40 nondefaulting
debtors who were assigned into rating category 4 by Rating 1 while they were assigned into rating category 5 by Rating 2. The other numbers are interpreted analogously. The distribution of the
defaulting debtors on the two rating systems is given in Table 2. Both tables provide all information needed to apply the concepts that will be introduced in the subsequent sections of this article.
Bernd Engelmann
Table 1. Distribution of the non-defaulting debtors in Rating 1 and Rating 2
Rating 2
1 2 3 4 5 Total
Rating 1 3 4 15 10 30 20 100 45 30 100 10 40 185 215
Total 180 200 210 215 145
Total 28 11 5 4 2
Table 2. Distribution of the defaulting debtors in Rating 1 and Rating 2
Rating 2
1 2 3 4 5 Total
Rating 1 4 3 0 0 2 0 5
We introduce the notation we will use throughout this article. We assume a rating system which consists of discrete rating categories. The rating categories1 are denoted with R1,…, Rk where we assume
that the rating categories are sorted in increasing credit quality, i.e. the debtors with worst credit quality are assigned to R1 while the debtors with the best credit quality are assigned to Rk. In
our example in Table 1 and Table 2 we have k=5 and R1=1,…, R5=5. We denote the set of defaulting debtors with D, the set of non-defaulting debtors with ND, and the set of all debtors with T. The
number of debtors in the rating category Ri is denoted with N(i) where the subscript refers to the group of debtors. If we discuss a specific rating we make this clear by an additional argument. e.g.
for Rating 1 the number of defaulters in rating category 4 is ND(4;1) = 5, or the total number of debtors in rating category 2 is NT(2;1) = 214. Since the event ‘Default’ or ‘Non-default’ of a debtor
is random, we have to introduce some random variables. With S we denote random distribution of rating scores while the subscript will indicate the group of debtors the distribution function
corresponds to, e.g. SD denotes the distribution of the rating scores of the defaulting debtors. The empirical distribution of the rating scores, i.e. the distribution of the rating scores that is
realised by the observed defaults and non-defaults is denoted by Sˆ , where the subscript again refers to the group of debtors. For example, for Rating 1
The terminology rating category or rating score is used interchangeably throughout this chapter.
XII. Measures of a Rating’s Discriminative Power – Applications and Limitations
Sˆ D 3;1 2 / 50
Sˆ ND 2;1 200 / 950 SˆT 5;1 202 / 1000
0.21, 0.20.
The cumulative distribution of S is denoted with C, i.e. C(Ri) is the probability that a debtor has a rating score lower than or equal to Ri. The specific group of debtors the distribution function
is referring to is given by the corresponding subscript. The empirical cumulative distribution function is denoted by Cˆ , e.g. the empirical
probability that a non-defaulting debtor’s rating score under Rating 2 is lower than or equal to ‘4’ is given by Cˆ ND 4;2
180 200 210 215 / 950
0.847 .
Finally, we define the common score distribution of two rating systems Rating 1 and Rating 2 by S12. The expression S12(Ri,Rj) gives the probability that a debtor has rating score Ri under Rating 1
and a rating score Rj under Rating 2. Again the index D, ND, T refers to the set of debtors to which the score distribution corresponds. The cumulative distribution is denoted with C12, i.e. C12
(Ri,Rj) gives the probability that a debtor has a rating score less than or equal to Ri under Rating 1 and less than or equal to Rj under Rating 2. Again, examples are given for the corresponding
empirical distributions using the data of Table 1: Sˆ 12 D 2,2
7 / 50
Sˆ 12 ND 2,4 10 / 950 Cˆ 12 D 2,3
0.14, 0.0105,
20 5 4 7 3 0 / 50
Having defined the notation we will use throughout this chapter, we give a short outline. In Section 2 we will define the measures, Cumulative Accuracy Profile (CAP) and Receiver Operating
Characteristic (ROC), which are the most popular in practice and show how they are interrelated. In Section 3 we will focus on the statistical properties of the summary measures of the CAP and the
ROC. The final section discusses the applicability and the correct interpretation of these measures.
2. Measures of a Rating System’s Discriminative Power We will define the measures of discriminative power that are of interest to us in this section. We will focus on the Cumulative Accuracy Profile
(CAP) and the Receiver Operating Characteristic (ROC). These are not the only measures described in the literature but the most important and the most widely applied in practice. Examples of measures
that are not treated in this article are entropy measures. We
Bernd Engelmann
refer the reader to Sobehart et al. (2000) for an introduction to these measures. Besides the basic definitions of the CAP and the ROC and their summary measures, we will show how both concepts are
connected and explore some extensions in this section. 2.1. Cumulative Accuracy Profile
The definition of the Cumulative Accuracy Profile (CAP) can be found in Sobehart et al (2000). It plots the empirical cumulative distribution of the defaulting debtors Cˆ against the empirical
cumulative distribution of all debtors Cˆ . This D
is illustrated in Figure 1. For a given rating category Ri, the percentage of all debtors with a rating of Ri or worse is determined, i.e. Cˆ R . Next, the percentage of T
defaulted debtors with a rating score worse than or equal to Ri, i.e. Cˆ D Ri , is computed. This determines the point A in Figure 1. Completing this exercise for all rating categories of a rating
system determines the CAP curve. Therefore, every CAP curve must start in the point (0,0) and end in the point (1,1).
Figure 1. Illustration of Cumulative Accuracy Profiles
There are two special situations which serve as limiting cases. The first is a rating system which does not contain any discriminative power. In this case, the CAP curve is a straight line which
halves the quadrant because if the rating system contains no information about a debtor’s credit quality it will assign x% of the defaulters among the x% of the debtors with the worst rating scores
(‘Random Model’ in Figure 1). The other extreme is a rating system which contains perfect
XII. Measures of a Rating’s Discriminative Power – Applications and Limitations
information concerning the credit quality of the debtors. In this case, all defaulting debtors will get a worse rating than the non-defaulting debtors and the resulting CAP curve raises straight to
one and stays there (‘Perfect Forecaster’ in Figure 1). The information contained in a CAP curve can be summarised into a single number, the Accuracy Ratio (AR) (this number is also known as Gini
Coefficient or Power Statistics). It is given by AR
aR , aP
where aR is the area between the CAP curve of the rating model and CAP curve of the random model (grey/black area in Figure 1) and aP is the area between the CAP curve of the perfect forecaster and
the CAP curve of the random model (grey area in Figure 1). The ratio AR can take values between zero and one.2 The closer AR is to one, i.e. the more the CAP curve is to the upper left, the higher is
the discriminative power of a rating model. We finish this subsection by calculating the CAP curves of Rating 1 and Rating 2. Since both rating systems have five rating categories, we can compute
four points of the CAP curve in addition to the points (0,0) and (1,1). To get a real curve, the six points of each CAP curve able to be computed have to be connected by straight lines. We illustrate
the procedure with Rating 1. Starting at the left, we have to compute Cˆ 1;1 and Cˆ 1;1 , which we get from Table 1 and Table 2 as T
Cˆ T 1;1 177 / 1000 Cˆ D 1;1 27 / 50
In the next step, we compute Cˆ T 2;1 and Cˆ D 2;1 which results in Cˆ T 2;1 (177 214) / 1000 Cˆ D 2;1 (27 14) / 50
The remaining points are computed analogously. The procedure for Rating 2 is similar. The resulting CAP curves are illustrated in Figure 2. We see that the CAP curve of Rating 1 is always higher than
the CAP curve of Rating 2, i.e. the discriminative power of Rating 1 is higher. This is also reflected in the AR values of both rating models. For Rating 1, we find an AR of 0.523 while for Rating 2,
the AR is calculated as 0.471.
In principle, AR could be negative. This would be the case when the ranking of the debtors by the rating system is wrong, i.e. the good debtors are assigned to the rating categories of the poor
Bernd Engelmann
CAP Curves
Percentage of defaulting debtors
1.00 0.80 0.60 0.40 0.20 0.00 0.00
Percentage of total debtors Rating 1
Rating 2
Random Rating
Perfect Forecaster
Figure 2. CAP curves for Rating 1 and Rating 2
2.2. Receiver Operating Characteristic
The concept of the Receiver Operating Characteristic (ROC) was developed in signal detection theory, therefore the name. It was introduced to rating systems in Sobehart and Keenan (2001). The concept
is illustrated in Figure 3. This figure shows the distributions of the rating scores for defaulting and non-defaulting debtors. It can be seen that the rating system has discriminative power since
the rating scores are higher for non-defaulting debtors. A cut-off value V, provides a simple decision rule to classify debtors into potential defaulters and non-defaulters. All debtors with a rating
score lower than V are considered as defaulters while all debtors with a rating score higher than V are treated as non-defaulters. Under this decision, rule four scenarios can occur which are
summarised in Table 3. Table 3. Outcomes of the simple classification rule using the cut-off value V
below cut-off value Rating Score
above cut-off value
Default correct prediction (hit) wrong prediction (error)
Non-default wrong prediction (false alarm) correct prediction (correct rejection)
XII. Measures of a Rating’s Discriminative Power – Applications and Limitations
If a debtor with a rating score below V defaults, the rating system’s prediction was correct. We call the fraction of correctly forecasted defaulters the “hit rate”. The same is true for
non-defaulters with a rating score above V. In this case, a nondefaulter was predicted correctly. If a non-defaulter has a rating score below V, the decision was wrong. The rating system raised a
false alarm. The fourth and final case is a defaulter with a rating score above V. In this case the rating system missed a defaulter and made a wrong prediction.
Figure 3. Rating score distributions for defaulting and non-defaulting debtors
For a given cut-off value V, a rating system should have a high hit rate and a low false alarm rate. The Receiver Operating Characteristic curve is given by all pairs (false alarm rate, hit rate),
which are computed for every reasonable cut-off value. It is clear that the ROC curve starts in the point (0,0) and ends in the point (1,1). If the cut-off value lies below all feasible rating scores
both the hit rate and the false alarm rate is zero. Similarly, if the cut-off value is above all feasible rating scores, the hit rate and the false alarm rate are equal to one. The concept of the ROC
curve is illustrated in Figure 4 above. In our setting, the cut-off points V are defined by the rating categories. Therefore, we get in total k-1 cut-off points. Consider the point B in Figure 4. To
compute this point we define the decision rule: A debtor is classified as a defaulter if he has a rating of Ri or worse, otherwise he is classified as a non-defaulter. Under this decision rule, the
hit rate is given by Cˆ R , which is the fraction of all deD
faulters with a rating of Ri or worse. Similarly, the false alarm rate is given by Cˆ ND Ri , which is the fraction of all non-defaulters with a rating of Ri or worse. The ROC curve is obtained by
computing these numbers for all rating categories.
Bernd Engelmann
Cˆ D
Rating Model
Perfect Forecaster
1 Random Model
Cˆ D Ri
Cˆ ND Ri
Cˆ ND
Figure 4. Illustration of Receiver Operating Characteristic curves
Again, we have the two limiting cases of a random model and the perfect forecaster. In the case of a random model where the rating system contains no discriminative power, the hit rate and the false
alarm rate are equal regardless of the cut-off point. In the case of the perfect forecaster, the rating scores distributions of the defaulters and the non-defaulters of Figure 3 are separated
perfectly. Therefore, for every value of the hit rate less than one the false alarm rate is zero and for every value of the false alarm rate greater than zero, the hit rate is one. The corresponding
ROC curve connects the three points (0,0), (0,1), and (1,1) by straight lines. Similar to the CAP curve, where the information of the curve was summarized in the Accuracy Ratio, there is also a
summary statistic for the ROC curve. It is the area below the ROC curve (AUROC). This statistic can take values between zero and one, where the AUROC of the random model is 0.5 and the AUROC of the
perfect forecaster is 1.0. The closer the value of AUROC is to one, i.e. the more the ROC curve is to the upper left, the higher is the discriminative power of a rating system.3 We apply the concept
of the ROC curve to the example in Table 1 and Table 2. We proceed in the same way as in the previous subsection, when we computed the CAP curve. Since we have five rating categories, we can define
four decision rules 3
A rating system with an AUROC close to zero also has a high discriminative power. In this case, the order of good and bad debtors is reversed. The good debtors have low rating scores while the poor
debtors have high ratings.
XII. Measures of a Rating’s Discriminative Power – Applications and Limitations
in total which gives us four points in addition to the points (0,0) and (1,1) on the ROC curve. To get a curve, the points have to be connected by straight lines. We compute the second point of the
ROC curve for Rating 2 to illustrate the procedure. The remaining points are computed in an analogous way. Consider the decision rule that a debtor is classified as a defaulter if he has a rating of
‘2’ or worse and is classified as a non-defaulter if he has a rating higher than ‘2’. The corresponding hit rate is computed as Cˆ D 2;2 (28 11) / 50
0.78 ,
while the corresponding false alarm rate is given by Cˆ ND 2;2 (180 200) / 950
0.40 .
The remaining points on the ROC curve of Rating 2 and Rating 1 are computed in a similar fashion. The ROC curves of Rating 1 and Rating 2 are illustrated in Figure 5. Computing the area below the ROC
curve, we get a value of 0.762 for Rating 1 and 0.735 for Rating 2. ROC Curves 1.00
Hit Rate
0.00 0.00
False Alarm Rate Rating 1
Rating 2
Random Rating
Perfect Forecaster
Figure 5. ROC curves for Rating 1 and Rating 2
We finish this subsection by exploring the connection between AR and AUROC. We have seen that the CAP curve and the ROC curve are computed in a similar way. In fact, it can be shown that both
concepts are just different ways to represent the same information. In Appendix A, we proof the simple relation between AR and AUROC AR
2 AUROC 1 .
Bernd Engelmann
From a practical perspective, both concepts are equivalent and it is a question of preference as to which one is used to evaluate the discriminative power of a rating system. In Section 3, we will
see that AUROC allows for an intuitive probabilistic interpretation which can be used to derive various statistical properties of AUROC. By (2) this interpretation carries over to AR. However, it is
less intuitive in this case. 2.3. Extensions
CAP curves and ROC curves only allow a meaningful evaluation of some rating function’s ability to discriminate between ‘good’ and ‘bad’ if there is a linear relationship between the function’s value
and the attributes ‘good’ and ‘bad’. This is illustrated in Figure 6. The figure shows a situation where the rating is able to discriminate perfectly between defaulters and non-defaulters. However,
the score distribution of the defaulters is bimodal. Defaulters have either very high or very low score values. In practice, when designing corporate ratings, some balance sheet variables like growth
in sales have this feature.
Defaulters Non-defaulters
Rating Score
Figure 6. Score distribution of a non-linear rating function
A straight forward application of the ROC concept to this situation results in a misleading value for AUROC. The ROC curve which corresponds to the rating distribution of Figure 6 is shown in Figure
7. It can be seen that the AUROC corresponding to the score distribution in Figure 6 is equal to 0.5. In spite of the rating system’s ability to discriminate perfectly between defaulters and
nondefaulters, its AUROC is the same as the AUROC of a rating system without any discriminative power. This is due to the non-linearity in the relationship between the rating score and credit quality
of the debtors.
XII. Measures of a Rating’s Discriminative Power – Applications and Limitations
ROC Curves
Hit Rate
False Alarm Rate Rating System
Perfect Forecaster
Random Rating
Figure 7. ROC curve corresponding to the score distribution of Figure 6
To obtain meaningful measures of discriminatory power also in this situation, Lee and Hsiao (1996) and Lee (1999) provide several extensions to the AUROC measure we have introduced in Section 2.2. We
discuss only one of these extensions, the one which could be most useful in a rating context. Lee (1999) proposes a simple modification to the ROC concept which delivers meaningful results for score
distributions as illustrated in Figure 6. For each rating category the likelihood ratio L is computed as L Ri
S D Ri . S ND Ri
The likelihood ratio is the ratio of the probability that a defaulter is assigned to rating category Ri to the corresponding probability for a non-defaulter. To illustrate this concept, we compute
the empirical likelihood ratio Lˆ which is defined as Lˆ Ri
Sˆ D Ri Sˆ R ND
for the rating systems Rating 1 and Rating 2. The results are given in Table 4. In the next step, the likelihood ratios are sorted from the highest to the least. Finally, the likelihood ratios are
inverted to define a new rating score.4 In doing so, 4
The inversion of the likelihood ratios is not necessary. We are doing this here just for didactical reasons to ensure that low credit quality corresponds to low rating scores throughout this article.
Bernd Engelmann
we have defined a new rating score that assigns low score values to low credit quality. The crucial point in this transformation is that we can be sure that after the transformation, low credit
quality corresponds to low score values even if the original data looks like the data in Figure 6. Table 4. Empirical likelihood ratios for Rating 1 and Rating 2
Rating 1
Sˆ D Ri ;1
Sˆ ND Ri ;1 Lˆ R ;1
Sˆ D Ri ;2
Sˆ ND Ri ;2 Lˆ R ;2
Rating 2
Rating Category 3 4
We compute the ROC curves for the new rating score. They are given in Figure 8. ROC Curves (after Transformation of Scores) 1.00
Hit Rate
0.00 0.00
False Alarm Rate Rating 1 (LR)
Rating 2 (LR)
Random Rating
Perfect Forecaster
Figure 8. ROC curves for the transformed rating scores of Rating 1 and Rating 2
Note that there is no difference to the previous ROC curve for Rating 2 because the sorting of the likelihood ratios did not change the order of the rating scores. However, there is a difference for
Rating 1. The AUROC of Rating 1 has increased slightly from 0.7616 to 0.7721. Furthermore, the ROC curve of Rating 1 is concave everywhere after the transformation. As pointed out by Tasche (2002),
XII. Measures of a Rating’s Discriminative Power – Applications and Limitations
the non-concavity of a ROC curve is a clear sign that the rating model does not reflect the information contained in the data in an optimal way. With this simple transformation, the quality of the
rating model can be improved. A practical problem in the construction of rating models is the inclusion of variables that are non-linear in the credit quality of debtors (e.g. Figure 6). As pointed
out in Hayden (2006) in this book, these variables can offer a valuable contribution to a rating model provided that they are transformed prior to the estimation of the rating model. There are
several ways to conduct this transformation. Computing likelihood ratios and sorting them as was done here is a feasible way of producing linear variables from non-linear ones. For further details
and an example with real data, refer to Engelmann et al. (2003b).
3. Statistical Properties of AUROC In this section we will discuss the statistical properties of AUROC. We focus on AUROC because it can be interpreted intuitively in terms of a probability. Starting
from this interpretation we can derive several useful expressions which allow the computation of confidence intervals for AUOC, a rigorous test if a rating model has any discriminative power at all,
and a test for the difference of two rating systems’ AUROC. All results that are derived in this section carry over to AR by applying the simple relation (2) between AR and AUROC. 3.1. Probabilistic
Interpretation of AUROC
The cumulative distribution function of a random variable evaluated at some value x, gives the probability that this random variable takes a value which is less than or equal to x. In our notation,
this reads as C D Ri
PS D d Ri ,
C ND Ri
PS ND d Ri ,
or in terms of the empirical distribution function Cˆ D Ri Cˆ ND Ri
P Sˆ
P Sˆ D d Ri , ND
d Ri ,
where P(.) denotes the probability of the event in brackets (.). In Appendix B, we show that AUROC can be expressed in terms of empirical probabilities as
Bernd Engelmann
P Sˆ D Sˆ ND
1 ˆ P SD 2
Sˆ ND .
To get further insight, we introduce the Mann-Whitney statistic U as U
u D , ND
1 N D N ND
D , ND ,
( D , ND )
1, falls sˆ D sˆ ND °1 ® , falls sˆ D sˆ ND ° 2 falls sˆ ! sˆ D ND ¯ 0,
where sˆ D is a realisation of the empirical score distribution Sˆ D and sˆ ND is a realisation of Sˆ . The sum in (8) is over all possible pairs of a defaulter and a nonND
defaulter. It is easy to see that U
1 P Sˆ D Sˆ ND P Sˆ D 2
Sˆ ND ,
what means the area below the ROC curve and the Mann-Whitney statistic are measuring the same quantity. This gives us a very intuitive interpretation of AUROC. Suppose we draw randomly one defaulter
out of the sample of defaulters and one non-defaulter out of the sample of non-defaulters. Suppose further we should decide from the rating scores of both debtors which one is the defaulter. If the
rating scores are different, we would guess that the debtor with the lower rating score is the defaulter. If both scores are equal we would toss a coin. The probability that we make a correct
decision is given by the right-hand-side of (9), i.e. by the area below the ROC curve. Throughout this article, we have introduced all concepts and quantities with the data set given in Tables 1 and
2. However, the data set of Tables 1 and 2 is only one particular realisation of defaults and non-defaults from the underlying score distributions which are unknown. It is not the only possible
realisation. In principle, other realisations of defaults could occur which lead to different values for AUROC and U. These different possible values are dispersed about the expected values of AUROC
and U that are given by E >AUROC @
E >U @
PS D S ND
1 PS D 2
S ND .
To get a feeling of how far the realised value of AUROC deviates from its expected value, confidence intervals have to be computed. This is done in the next subsection.
XII. Measures of a Rating’s Discriminative Power – Applications and Limitations
Finally, we remark that the AR measure can also be expressed in terms of probabilities. Applying (2) we find E >AR @
PS D S ND PS D ! S ND .
The expected value of AR is the difference between the probability that a defaulter has a lower rating score than a non-defaulter and the probability that a defaulter has a higher rating score than a
non-defaulter. It is not so clear how to give an intuitive interpretation of this expression. 3.2. Computing Confidence Intervals for AUROC
To get a feeling for the accuracy of a measure obtained from a data sample, it is customary to state confidence intervals to a confidence level D, e.g. D In the first papers regarding measuring the
discriminative power of rating systems, confidence intervals were always computed by bootstrapping.5 These papers mainly used the measure AR. Bootstrapping requires the drawing of lots of portfolios
with replacement from the original portfolio. For each portfolio, the AR has to be computed. From the resulting distribution of the AR values, confidence intervals can be computed. The main drawback
of this method is its computational inefficiency. A more efficient method is based on the application of well-known properties of the Mann-Whitney statistic introduced in (8). The connection between
AR and a slightly modified Mann-Whitney statistic is less obvious6 than for AUROC which might be the reason for the inefficient techniques that were used in those early papers. From mathematical
statistics it is known that an unbiased estimator of the variance V U2 of the Mann-Whitney statistic U in (8) is given by
Vˆ U2
1 PˆD z ND N D 1 PˆD, D, ND 4 N D 1 N ND 1
N ND
1 PˆND, ND , D 4 N D N ND 1 U 0.5 2
where PˆD z ND , PˆD , D , ND , and PˆND , ND , D are estimators for the probabilities P(SDzSND), PD,D,ND, and PND,ND,D where the latter two are defined as
Efron and Tibshirani (1998) is a standard reference for this technique. In (8) the ½ has to be replaced by 0, and the 0 has to be replaced by -1 to get the corresponding Mann-Whitney statistic for
the AR.
Bernd Engelmann PS D,1 , S D, 2 S ND P S D,1 S ND S D, 2
PD, D, ND
P S D, 2 S ND S D,1 PS ND S D,1 , S D, 2 , PS ND,1 , S ND, 2 S D PS ND,1 S D S ND, 2
PND, ND, D
P S ND, 2 S D S ND,1 PS D S ND,1 , S ND, 2 ,
where SD,1 and SD,2 are two independent draws of the defaulter’s score distribution and SND,1 and SND,2 are two independent draws of the non-defaulter’s score distribution. Using (12) confidence
intervals can be easily computed using the asymptotic relationship AUROC E >AUROC @ N D , N ND of o N (0,1) . Vˆ U
The corresponding confidence intervals to the level D are given by ª 1 § 1 D · 1 § 1 D · º ¸, AUROC Vˆ U ) ¨ ¸» , « AUROC Vˆ U ) ¨ © 2 ¹ © 2 ¹¼ ¬
where ) denotes the cumulative distribution function of the standard normal distribution. The asymptotic relation (14) is valid for large number ND and NND. The question arises how many defaults a
portfolio must contain to make the asymptotic valid. In Engelmann et al. (2003a, 2003b) a comparison between (14) and bootstrapping is carried out. It is shown that for 50 defaults a very good
agreement between (14) and bootstrapping is achieved. But even for small numbers like 10 or 20 reasonable approximations for the bootstrapping results are obtained. We finish this subsection by
computing the 95% confidence interval for the AUROC of our examples Rating 1 and Rating 2. We start with Rating 1. First we compute PˆD z ND . It is given by the fraction of all pairs of a defaulter
and a nondefaulter with different rating scores. It is computed explicitly as PˆD z ND
1 PˆD
§ 5 · Sˆ D i;1 Sˆ ND i;1 ¸ / N D N ND 1 ¨ ¨ ¸ ©i 1 ¹
1 2 200 5 215 2 185 14 200 27 150 / 50 950 0.817
The estimators for PD,D,ND and PND,ND,D are more difficult to compute than for PDzND. To estimate PD,D,ND it is necessary to estimate three probabilities, P(SD,1,SD,2 0.9999 0.6839 0.1111
> 0.9999 > 0.9999 0.9606
> 0.9999 > 0.9999 > 0.9999
Mode 2: all borrowers graded up by s = 1 0.1927 0.0203 0.0015 0.1863 0.0200 0.0016 0.0036 0.0204 0.0016
0.00 0.01 0.10
0.00 0.01 0.10
0.7605 0.4697 0.1369
0.0291 0.0228 0.0130
0.0139 0.0138 0.0138
0.00 0.01 0.10
> 0.9999 0.7510 0.1543
> 0.9999 0.6141 0.0078
0.9996 0.9996 0.9996
Mode 3: all borrowers graded down by s = 1 0.3428 0.1699 0.1568 0.2836 0.1719 0.1563 0.1217 0.1385 0.1560
0.00 0.01 0.10
0.00 0.01 0.10
0.9119 0.5854 0.1362
0.4875 0.4275 0.1905
0.4277 0.4282 0.4295
0.00 0.01 0.10
> 0.9999 0.7771 0.1388
> 0.9999 0.8669 0.2212
> 0.9999 > 0.9999 > 0.9999
Table 5 shows the result of our simulation study. As expected, an increase in portfolio size leads, ceteris paribus, generally to an increase in power. This is true for the three tests and for the
three modes regarded. Further on, an increase in asset correlation - leaving the portfolio size constant - decreases the power. 40
Within the master scale we use (see Section 2.4) the PD from one rating grade to the next worse grade increases by a factor between 1.75 and 2 depending on the specific grade.
XIV. PD-Validation – Experience from Banking Practice
It is remarkable that when looking at the SPGH already at N = 1,000 and by U = 0.01 or lower for all three modes, a power near to or over 0.5 is achieved. But the picture is quite mixed when
regarding the HSLS or Sim Bin. These two tests perform worse in comparison to SPGH especially for Mode 2 and a small portfolio size. Analysing the relative competitiveness of the SPGH, HSLS and Sim
Bin the picture is not unambiguous. Regarding Mode 1, which stands for an interchange of obligors’ assessed rating, HSLS seems to be the best choice. SPGH outperforms when the systematic up-grade by
one grade is analysed as an alternative hypothesis. Even the Sim Bin in some situations has the highest power. What can we learn from this simulation study about power and what are the consequences
for practical backtesting? We conclude that unfortunately none of the statistical test we analysed clearly outperforms the others in all circumstances. For practical issues, all tests should be
performed when an assessment of the probability of non-detecting a low quality rating system is required. What is most important at all is that especially the higher management should be aware that
there is41 a (perhaps significant) probability that in fact H0 is wrong, but the statistical tools did not reveal this. Our simulation approach can be interpreted as an instrument to fulfil this
6. Creating Backtesting Data Sets – The Concept of the Rolling 12-Month-Windows Up to now we have shown some concepts for statistical backtesting, but when dealing with real data, the first step is
always to create a specific sample on which a meaningful analysis can be carried out. In banking practice ratings are performed continually over the year, for instance, when a new customer must be
evaluated, a credit line requires extension, new information (e.g. financial figures) concerning a borrower already in the portfolio comes up, or questions of any fields regarding the
creditworthiness are recognised. We propose an approach for creating backtesting samples clearly in line with x the definition of what a rating is, namely a forecast for the 1-Year-PD. x what could
be found in the IT-database at any point of time we may look into it. x the general concept a bank manages its credit risks including the calculation of Basel II risk capital.
This is true even if the hypothesis H0 “The rating system forecasts the PD well.” could not be rejected at a certain level D.
Robert Rauhmeier
From these guidelines, it follows that whenever we look into the rating database we find the bank’s best assessment of the borrower’s probability of default for the next year. This is irrespective of
how old the rating is at the time we look into the database. This is because when the bank has an inducement that when there is a noteworthy change in the creditworthiness of the borrower (its PD),
the bank has to alter the rating immediately42. This means that a re-rating just once a year, for example whenever new annual accounts are available, might be not adequate in the case when other,
relevant information regarding the PD in any form is made available. When there is no change in the rating, it remains valid and predicates each day the same, namely the forecast of the 1-year-PD
from the day we found it in the database. In the same way, the second essential variable, the defaults and non-defaults, have to be collected. The termination of the backtesting sample is done
according to the principle of reporting date. We call this approach ‘cutting slices’ or ‘rolling 12-months-window’ (compare to Figure 7).
01/01/2004 Q1 Q4
Q1 Q2 Q3 Q4 G8 G8G4G4-
G10 1
G14 1
G14 1
G14 1
01/01/2005 Q1 Q4
G6 -
G12 G12 G12 1 G12 1 proportional . n.a. n.a. n.a.
G14 1
Figure 7. Concept of the rolling 12-Months-Windows – the backtesting slices
We start with the first slice called ‘Q1/2004’, which begins at January 2004. We look in the database and find borrower A with rating grade 8. He was rated with grade 8 a few months before (and gets
other ratings after 1st January 2004), but 42
See BCBS (2005a), § 411 and § 449.
XIV. PD-Validation – Experience from Banking Practice
has grade 8 at the beginning of January 2004. Within the next 12 months (up to the end of December 2004) he did not get into default, this was indicated with a -. He enters the slice ‘Q1/2004’, as
non-default and rating grade 8 ( y A 0 ; Sˆ g 8 0.0105 ). The second borrower B enters with grade 10 but as default, because he defaulted somewhere in the third quarter of 2004 indicated with 1 ( y A
1 ; Sˆ g 10 ). Borrower C was not found in the rating database at 1st January 2004 as he was rated for the first time just before the beginning of the second quarter 2004. Therefore he is not
contained in slice ‘Q1/2004’. Borrower D enters with grade 12 as non-default, because the default we observe is past the end of the 12 month period which ends by 31st December 2004. Borrower E is
found in the database with a rating grade 5 but he ended the business connection with the bank (indicated by /). Therefore it is impossible to observe if he has defaulted or survived within the 12
month period. This observation for borrower E should be included in the slice ‘Q1/2004’ as a weighted non-default, where the proportion is calculated as the quota (number of months it has been
observed)/12. A nonconsideration or full consideration may cause biases. In the same way, the following slices have to be constructed. We show the compositions of the slices as a summary in the left
side of Figure 7. For practical issues, ultimo data files can be used best. So for the slice ‘Q1/2004’, we use the ultimo data files from December 2003. In Figure 7 we present the slice on a
quarterly basis but sample creation can also be done on a monthly basis. This has the advantage that some elements of monitoring are fulfilled and nearly no rating and default is lost. The only
exception is when a rating changes within a month. Therefore, the initial rating was not seen in the ultimo data file. The same is true when a rating is completed and the rated borrower gets into
default before he has passed his first end of month. We recommend analysing these special cases separately, for example regarding detection of fraud. When using the introduced method of rolling
12-month-windows, it is of concern that the slices greatly overlap. For a tuned (entries and exits are balanced, dates of rating compilations are evenly distributed all over the year) portfolio of
borrowers with long term business relationship, two subsequent slices may overlap by about 11/12. As a consequence, we expect that we get often the same test results for two or more subsequent
slices. We will see this in the next section, where we demonstrate our theoretical considerations by applying them to real world rating data.
Robert Rauhmeier
7. Empirical Results
7.1. Data Description
In this section, we demonstrate the application of our concepts to real rating data. The data used is part of a rating system introduced in the beginning of 2004 for small business clients in
Germany.43 We analysed slices beginning in February 2004 up to January 2005.44 So for backtesting slice ‘Jan2005’, we considered the defaults and non-defaults up to the end of December 2005. Here we
can see that for backtesting a complete vintage of ratings, in fact a period of two years, is needed. The rating system follows mainly the architecture sketched in Section 2.2, and is composed of
various parallel sub-models for the machine rating module. These sub-models differ according to whether there is a tradesman, freelancer / professional45 or a micro corporate to be rated. Micro
corporates dominate with about 45% of all ratings, followed by tradesman (about 30%) and remaining freelancer and professionals with about 25%. The basic structure of all sub-models contains
approximately a dozen quantitative and qualitative risk drivers as it is usual for this kind of portfolio in banking practice. Within the second module, ‘expert guided adjustment’, up or down grading
of the machine rating can be done. For micro corporates a ‘supporter logic’ module is available. In our empirical analysis, we want to examine the slices ‘Feb2004’ to ‘Jan2005’ and in detail the
comprehensive slice ‘Jan2005’. Altogether, more than 26,000 different ratings can be analysed in the slices ‘Feb2004’ to ‘Jan2005’. Whereas slice ‘Feb2004’, consists of little more than a hundred
ratings because of the recent launch of the rating system, the numbers in the slices increase steadily up to more than 24,000 in ‘Jan2005’. Note that with our concept of rolling 12-months-windows,
the slices overlap by a high degree. For example ‘Jan2005’ and ‘Dec2004’ have 88% observations in common, slices ‘Jun2004’ and ‘Jul2004’ about 75%.
In order to avoid disclosure of sensitive business information, the data base was restricted to a (representative) sub-sample. 44 For the construction of e.g. the slice ‘Feb2004’ we used the ultimo
data store of 31st January 2004. 45 like architects, doctors, or lawyers
XIV. PD-Validation – Experience from Banking Practice
7.2. The First Glance: Forecast vs. Realised Default Rates
When talking about the quality of a rating system, we get a first impression by looking at forecast default rates and realised default rates. Figure 8 shows that realised default rates vary between
2% and 2.5%, whereas the forecast PD underestimates the realised default rate slightly for almost all slices. 0.040 0.030 0.020 0.010
b2 00 4 M ar 20 0 Ap 4 r2 00 4 M ay 20 04 Ju n2 00 4 Ju l2 00 4 Au g2 00 4 Se p2 00 4 O ct 20 04 N ov 20 04 D ec 20 04 Ja n2 00 5
realised Default Rate forecast Default Rate (final rating) forecast Default Rate (machine rating)
Figure 8. Realised default rate vs. forecast default rate by slice
Furthermore, it can be seen that on average, the final rating is more conservative than the machine rating. This means that the ‘expert guided adjustments’ and ‘supporter logic’ on average lead to a
downgrade of borrowers. This might be an interesting result, because in banking practice the opposite is often assumed. The line of thought is, rating analysts or loan managers are primarily
interested in selling loans which is easier - because of bank internal competence guidelines or simply by questions regarding the credit terms - if the machine rating is upgraded by the expert. The
“accurate rating” is often assumed to be of subordinate importance for the loan manager. Here we have an example, which disproves this hypothesis. We will see whether this difference of machine
rating and final rating regarding the quality of forecasts is significant or not in Section 7.4. 7.3. Results of the Hypothesis Tests for all Slices
As we are interested in whether the deviation of the final rating from the default rates is significant, we focus on the SPGH and the HSLS test. Table 6 shows the results.
Robert Rauhmeier
For U = 0.01, the SPGH rejects in no slice the null hypothesis of “being calibrated”, the HSLS rejects in two slices tightly. For the very conservative approach with U = 0.00, in some slices the null
hypothesis has to be rejected for SPGH and HSLS. The simultaneous binomial tests (results are not shown here explicitly), shows that even for U = 0.00, the null hypothesis in no slice could be
rejected, indicating a good quality of the rating system, too. Note the different test decisions of the consulted tests SPGH and HSLS for some slices. Table 6. Test decisions by slice, final rating,
1 Mio runs, D = 0.05 slice
Feb2004 Mar2004 Apr2004 May2004 Jun2004 Jul2004 Aug2004 Sep2004 Oct2004 Nov2004 Dec2004 Jan2005 Feb2004 Mar2004 Apr2004 May2004 Jun2004 Jul2004 Aug2004 Sep2004 Oct2004 Nov2004 Dec2004 Jan2005
lower bound -1.5063 -1.8380 -1.8948 -1.9512 -1.9549 -1.9544 -1.9549 -1.9626 -1.9570 -1.9575 -1.9592 -1.9569
SPGH upper Test bound statistic 2.2255 0.2075 2.0736 -0.3214 2.0137 0.2490 1.9780 0.9859 1.9697 2.0617 1.9697 1.3236 1.9673 2.0724 1.9675 2.4033 1.9691 2.1408 1.9604 1.6973 1.9629 1.0893 1.9620
-1.5063 -2.2839 -3.1715 -3.9862 -4.7208 -5.5315 -6.2755 -6.9194 -7.5017 -8.0797 -8.6682 -9.1811
2.3911 2.9406 4.0670 5.1376 6.1255 7.2272 8.2214 9.0275 9.7802 10.5260 11.2619 11.9508
0.2075 -0.3214 0.2490 0.9859 2.0617 1.3236 2.0724 2.4033 2.1408 1.6973 1.0893 0.9927
No rej. No rej. No rej. No rej. Rej. No rej. Rej Rej Rej No rej. No rej. No rej.
upper bound 25.3473 27.7315 21.5598 21.3653 20.8402 20.6058 20.3097 20.3765 20.5659 20.6235 20.6672 20.9511
HSLS Test statistic 5.7982 6.1607 6.8883 10.8339 17.1008 33.3231 67.6734 78.3339 68.2907 70.2873 78.3400 96.3306
No rej. No rej. No rej. No rej. No rej. No rej. No rej. No rej. No rej. No rej. No rej. No rej.
26.5294 30.5962 29.4874 35.4975 43.0297 53.6896 65.1878 76.8287 90.2356 103.8628 119.0537 130.8062
5.7982 6.1607 6.8883 10.8339 17.1008 33.3231 67.6734 78.3339 68.2907 70.2873 78.3400 96.3306
Decision No rej. No rej. No rej. No rej. No rej. Rej Rej Rej Rej Rej Rej Rej No rej. No rej. No rej. No rej. No rej. No rej. Rej Rej No rej. No rej. No rej. No rej.
From Table 6, we can also see how well the approximation of the SPGH to the standard normal under H0, works as the number of ratings in the slices increases for U = 0.00. The same is true for the
HSLS, when we take into account that only 10 of 14 grades have a large number of observations46 F2(0.95,10) = 20.48. Sec46
Ratings Grades 1 to 3 of the master scale are intended mainly for sovereigns, international large corporates and financial institutions with excellent creditworthiness and could only in exceptional
cases be achieved by small business clients. The worst rating grade is assigned to a very low number of borrowers in the data base, what is comprehensible because the rated portfolio mainly consists
of initial ratings, so potential borrowers with a low creditworthiness are not accepted by the bank at all and therefore do not get into the rating database.
XIV. PD-Validation – Experience from Banking Practice
ondly, we might find it impressive how broad the non-rejection area is when taking correlation into account, even when used for a very low asset correlation of U = 0.01. Notice that the non-rejection
areas for U = 0.01 of SPGH, HSLS and Sim Bin, get even broader when the number of ratings increases, although the relative distribution of the borrowers over the grades only changes negligibly. The
same phenomenon was observed in the simulation study A, Table 2. 7.4. Detailed Analysis of Slice ‘Jan2005’
Now we turn to a more detailed analysis of slice ‘Jan2005’, as we can observe up to now that the rating system passes our quality checks well. The distribution, not shown here explicitly, of the
observations over the rating grades, is roughly bellshaped, for example about 900 observations in grade 4, up to 4,500 in grade 8 and 1,000 in grade 12.
Rating Grade
We can see in Figure 9 that for three rating grades, the realised default rate is in the rejection area for the binomial test. Hereby we assumed U = 0.01. The realised default rate increases in the
rating grades as is assumed and therefore confirms our previous impression of the rating system we obtained from the SPGH and HSLS.
Rating Grade
Figure 9. Realised default rates and exact binomial test by grades, slice ‘Jan2005’, 1 million runs, U = 0.01, D = 0.05
Robert Rauhmeier
Next we analysed the power of our tests. As could be seen from Table 7, the high number of ratings leads to a high power of all tests in all analysed circumstances. When assuming no correlation, the
power is > 0.9999 for each of the three tests. When assuming U = 0.01 we get, e.g. for the SPGH in Mode 3, a power of 0.7548. This means that the SPGH - when in fact all borrowers should have got a
rating one grade worse - would have detected this with a probability of about 76%. Table 7. Analysis of Power, final rating, slice ‘Jan2005’, 1 million runs, D = 0.05
0.00 0.01
> 0.9999 0.7894
Sim Bin
Mode 1: q = 0.5 > 0.9999 > 0.9999
> 0.9999 > 0.9999
Mode 2: all borrowers graded up by s = 1 0.00 0.01
> 0.9999 0.6798
> 0.9999 0.4888
> 0.9999 0.5549
Mode 3: all borrowers graded down by s = 1 0.00 0.01
> 0.9999 0.7548
> 0.9999 0.8201
> 0.9999 0.8227
To get a complete picture of the quality of the rating system and the regarded portfolio, we look at its discriminatory power.47 Figure 10 displays the ROC-Curves for the machine rating and the final
rating. For both rating modules, no discrepancies could be observed from the ROCs. We see that the ROC-Curve of final rating is always atop of the ROC-Curve of the machine rating, indicating an
increase in discriminatory power when human expert assessment is brought into account. The AUROC of the final rating is therefore a bit higher (0.7450), than those of the machine rating (0.7258). As
could be seen from Table 8, the AUROC and MSE of the machine rating and final rating differ significantly. For comparing the MSE, we used the Redelmeier test described in detail in Section 4.4.48
For a definition of the measures ROC-Curve and AUROC and their statistical properties, we refer to Chapter XII. 48 As it was a prerequisite that the machine rating should pass a test on calibration
we conducted the SPGH and the HSLS. We find that we could not reject the null hypothesis of being calibrated with U = 0.01, but we have to reject the null hypothesis with U = 0.00.
Sensitivity 0.50
XIV. PD-Validation – Experience from Banking Practice
0.50 1-Specificity
Final_Rating ROC area: 0.745 Machine_Rating ROC area: 0.7258 Reference
Figure 10. ROC-Curve for final rating and machine rating, slice ‘Jan2005’
Table 8. Machine Rating vs. Final Rating MSE
Machine Rating
MSEmach.rating = MSEfin.rating
< 0.0001
Final Rating
AUROCmach.rating = AUROCfin.rating
< 0.0001
To draw an overall result, the rating system passes our quality checks very well. With the high number of ratings in the analysed portfolio, we would have been able to detect potential shortcomings,
but we did not find any. As the system was introduced two years ago, this was the first backtest that was performed, and the more highly this good result is to be regarded.
8. Conclusion In this chapter we dealt with validation of rating systems, constructed to forecast a 1-year probability of default. Hereby, we focused on statistical tests and their application for
bank internal purposes, especially in the Basel II periphery. We built
Robert Rauhmeier
up a simulation based framework to take account of dependencies in defaults (asset correlation), which additionally has the potential to appraise the type II error, i.e. the non-detection of a bad
rating system, for optional scenarios. Hereby, the well known exact and approximated binomial test and the Hosmer-Lemeshow-F2 test are used, but we also introduced the less popular Spiegelhalter test
and an approach called simultaneous binomial test, which allow the testing of a complete rating system and not just each grade separately. As it is important for banks to compare the quality of
modules of their rating system, we also refer to the Redelmeier test. As for any applied statistical method, building test samples is an important issue. We designed the concept of “the rolling
12-months-window” to fulfil the Basel II and bank’s internal risk management requirements as well as using the bank’s IT-environment (rating database) effectively and is in harmony with our
definition of what a rating should reflect, namely the bank’s most accurate assessment of the 1-year-PD of a borrower. All concepts are demonstrated with a very up-to-date, real-life bank internal
rating data set in detail. We focus mainly on statistical concepts for rating validation (backtesting) but it has to be emphasised that for a comprehensive and adequate validation in the spirit of
Basel II, much more is required. To name a few, these include adherence of defined bank internal rating processes, accurate and meaningful use of ratings in the bank’s management systems and correct
implementation in the ITenvironment.
References Balthazar L (2004), PD estimates for Basel II, Risk, April, pp. 84-85. Basel Committee on Banking Supervision (BCBS) (2005a), Basel II: International Convergence of Capital Measurement and
Capital Standards: a Revised Framework – Updated November 2005. http://www.bis.org/publ/bcbs118.pdf Basel Committee on Banking Supervision (BCBS) (2005b), Studies on the Validation of Internal Rating
Systems – revised version May 2005, Working Paper No. 14. http://www.bis.org/publ/bcbs_wp14.pdf Brier G (1950), Verification of Forecasts expressed in Terms of Probability, Monthly Weather Review,
Vol. 78, No. 1, pp. 1-3. DeGroot M, Fienberg S (1983), The Comparison and Evaluation of Forecasters, The Statistician, Vol. 32, pp. 12-22. Finger C (2001), The One-Factor CreditMetrics Model in The
New Basle Capital Accord, RiskMetrics Journal, Vol. 2(1), pp. 9-18. Hamerle A, Liebig T, Rösch D (2003), Benchmarking Asset Correlation, Risk, November, pp. 77-81. Hosmer D, Lemeshow S, Klar J
(1988), Goodness-of-Fit Testing for Multiple Logistic Regression Analysis when the estimated Probabilities are small, Biometrical Journal, Vol. 30, pp. 911-924.
XIV. PD-Validation – Experience from Banking Practice
Murphy A, Daan H (1985), Forecast Evaluation, in: Murphy A, Katz R (eds.), Prob-ability, Statistics, and Decision Making in the Atmospheric Sciences, Westview Press, Boulder, pp. 379-438. Redelmeier
DA, Bloch DA, Hickman DH (1991), Assessing Predictive Accuracy: How to compare Brier Score, Journal of Clinical Epidemiology, Vol. 44, No. 11, pp. 11411146. Scheule H, Rauhmeier R (2005), Rating
Properties and their Implication on Basel IICapital, Risk, March, pp.78-81. Sobehart JR, Keenan SC (2001), Measuring Default Accurately, Risk, pp. S31-S33. Sobehart JR, Keenan SC, Stein RM (2000),
Benchmarking Quantitative Default Risk Models: A Validation Methodology, Moody’s Investors Service. Somers R (1962), A New Asymmetric Measure of Association for Ordinal Variables, American
Sociological Review, 27, pp. 799 -811. Spiegelhalter D (1986), Probabilistic Prediction in Patient Management and Clinical Trails, Statistics in Medicine, Vol. 5, pp. 421-433.
Robert Rauhmeier
Appendix A We show that the SPGH test statistic ZS is equal to the Zbin test statistic of the approximated binomial test in case where there is only one single PD. This is when all obligors are rated
in the same rating grade g. We start with (19) and substitute Sˆi Sˆ g respectively S i S g because we argue under H0:
1 Ng
¦ y
i 1
1 N g2 Ng
1 Ng
¦ 1 2S g
¦ S 1 S g
i 1
S g 1 S g
i 1
2 S g
i 1
S g
i 1
N g 1 2S g Ng
y 2 S g
i 1
2 S g 1 S g
y Ng S g2
¦ S 1 S g
i 1
N g 1 2S g
2 S g 1 S g
¦ y 2 S ¦ y 2 N g
i 1
i 1
i 1
¦ S 1 S
S g 2 N g S g
i 1
N g 1 2S g
2 S g 1 S g
1 2S g ¦ y 1 2S g N g S g i 1
N g 1 2S g
2 S g 1 S g
N g y N S g
Ng S g 1 S g
and get (14).
¦ y N S i 1
N g S g 1 S g
XIV. PD-Validation – Experience from Banking Practice
Appendix B We want to derive the test statistic ZR of the Redelmeier test as it is shown in (22) according to Redelmeier et al (1991). We start with the MSE from module 1 as
1 N
1 N
¦ yi Sˆi,m1 2 i 1
¦ y 2 y Sˆ i
2 i , m1 Sˆi , m1
i 1
Because of the randomness of the defaults the MSE will differ from its expected value E MSEm1
1 N
¦ S
1 N
2 Sˆi , m1
i 1
¦ S
2 S i Sˆi , m1 Sˆi2, m1
i 1
The difference of the realized and the expected MSE for module 1 is
d m1
1 N
E MSEm1 MSEm1
¦ y 2 y Sˆ i
2 S i Sˆi , m1
Sˆi 2 S i Sˆi , m 2
i , m1 Sˆ i
i 1
The same consideration has to be done for module 2: dm2
1 N
E MSEm 2 MSEm 2
¦ y 2 y Sˆ i
i 1
To determine whether two sets of judgments are equally realistic we compare the difference between dm1 and dm2: d m1 d m 2
2 N
¦ Sˆ
i , m1 Sˆ i , m 2
S i yi
i 1
As it can be seen from (28) the true but unknown PD Si is still required and has therefore be assessed. A choice might be to set all Si equal to the average of the corresponding judgments ( Sˆi , m1
, Sˆi , m 2 ) (consensus forecast).49 This seems to be a
reasonable choice since we presumed that each module itself has satisfied the null hypothesis of being compatible with the data. Using the consensus forecast
0.5 Sˆ i ,m1 Sˆ i ,m 2
we get
Other approaches are possible, e.g. one may get the “true” Si’s from an external source.
Robert Rauhmeier
d m1 d m 2 2 N 1 N
¦ Sˆ
i , m1 Sˆi , m 2
0.5 Sˆi,m1 0.5 Sˆi,m2 yi
i 1 N
2 2 i , m1 Sˆi , m 2
2 yi Sˆi, m1 Sˆi , m2
i 1
1 N
¦ >y 2 y Sˆ
1 N
1 ¦ y Sˆ N ¦ y Sˆ
2 i i , m1 Sˆi , m1
y 2 y Sˆ i
Sˆi2, m 2
i 1 N
i , m1
i 1
i, m2
i 1
MSEm1 MSEm 2
It is interesting that in the case we use the consensus forecast for substituting Si the term dm1 dm2 is simply the difference of the two realized MSEs. In the next step we calculate the variance
using the fact that the expected value of dm1 dm2 is zero under the null hypothesis, see (23). §2 Var d m1 d m 2 Var ¨ ¨N © 4 N2
¦ Sˆ
i , m1 Sˆ i , m 2
i 1
¦ Sˆ
S i yi ¸¸
i , m1 Sˆi , m 2
2 S i 1 S i
i 1
Finally we get the test statistic d m1 d m 2
Var d m1 d m 2 N
¦ >Sˆ
2 i , m1
Sˆ i2, m 2 2 Sˆ i , m1 Sˆ i , m 2 y i
i 1
ªN « «¬ i 1
ª Sˆ Sˆ i , m 2 «¬ i , m1
Sˆ 2
i , m1
Sˆ i ,m 2 2 Sˆ i , m1 Sˆ i , m 2
ºº »¼ » »¼
0, 5
XV. Development of Stress Tests for Credit Portfolios Volker Matthias Gundlach KfW Bankengruppe1
1. Introduction Advanced portfolio models combined with naive reliance on statistics in credit risk estimations run the danger of underestimating latent risks and neglecting the peril arising from
very rare, but not unrealistic risk constellations. The latter might be caused by abnormal economic conditions or dramatic events for the portfolio of a single credit institute or a complete market.
This includes events of a political or economic nature. To limit the impact of such sudden incidents, the study of fictional perturbations and shock testing the robustness/vulnerability of risk
characteristics is required. This procedure is known as stress testing. It allows the review and actualisation of risk strategies, risk capacities and capital allocation. Thus it can play an
important role in risk controlling and management in a credit institute. This view is shared by the banking supervision, in particular by the Basel Committee on Banking Supervision of the Bank for
International Settlements (BIS). Consequently, stress testing for credit risk plays a role in the regulatory requirements of the Revised Framework on the International Convergence of Capital
Measurements and Capital Standards (Basel II). Nevertheless, it has not reached the standards of stress testing for market risk estimations, which has been common practice for several years (see
Breuer and Krenn, 1999). In the following, we describe the purpose and signification of stress testing for credit risk evaluations. Then we recall the regulatory requirements, in particular of the
Basel II framework. We describe how stress tests work and present some wellestablished forms of stress tests, a classification for them and suggestions how to deal with them. We also include examples
for illustration. To conclude, we offer a concept for an evolutionary way towards a stress testing procedure. This is done in view of the applicability of the procedure in banks.
The material and opinions presented and expressed in this article are those of the author and do not necessarily reflect views of KfW Bankengruppe or methods applied by the bank.
Volker Matthias Gundlach
2. The Purpose of Stress Testing Stress testing means (regular) expeditions into an unknown, but important territory: the land of unexpected loss. As the latter is of immense relevance for financial
institutions, there is growing interest in this topic. Moreover, there are various reasons for conducting stress testing due to the explicit or implicit relation between unexpected loss and economic
capital or regulatory capital, respectively. Crucial for the understanding of and the approach towards stress testing, is the definition of unexpected loss. Though it is clear that this quantity
should be covered by economic capital, there is no general agreement as to how to define unexpected loss. It is quite common to regard the difference between expected loss and the valueat-risk (VaR)
of a given confidence level, or the expected shortfall exceeding the VaR, as unexpected loss. One of the problems with this approach is that such an unexpected loss might not only be unexpected, but
also quite unrealistic, as its definition is purely of a statistical nature. Therefore, it is sensible to use stress tests to underscore which losses amongst the unexpected are plausible or to use
the outcome of stress tests, instead of unexpected loss to determine economic capital. Though the idea of using stress tests for estimating economic capital seems quite straight forward, it is only
rarely realized, as it requires reliable occurrence probabilities for the stress events. With these, one could use the expected loss under stress as an economic capital requirement. Nevertheless,
stress tests are mainly used to challenge the regulatory and economic capital requirements determined by unexpected loss calculations. This can be done as a simple test for the adequacy, but also to
derive a capital buffer for extreme losses exceeding the unexpected losses, and to define the risk appetite of a bank. For new credit products like credit derivatives used for hedging against extreme
losses it might be of particular importance to conduct stress tests on the evaluation and capital requirements. Using stress tests to evaluate capital requirements has the additional advantage of
allowing the combination of different kind of risks; in particular market risk, credit and liquidity risk, but also operational risk and other risks such as reputational risk. Because time horizons
for market and credit risk transactions are different, and it is common for banks to use different confidence levels for the calculation of VaRs for credit and market risk (mainly due to the
different time horizons), joint considerations of market and credit risk are difficult and seldom used. Realistic stress scenarios influencing various kinds of risk therefore could lead to extreme
losses, which could be of enormous importance for controlling risk and should be reflected in the capital requirements. In any case, there can be strong correlations between the developments of
market, liquidity and credit risk which could result in extreme losses and should not be neglected. Consequently, investigations into events causing simultaneous increases
XV. Development of Stress Tests for Credit Portfolios
in market and credit risk are more than reasonable. An overview over several types of risk relevant for stress testing can be found in Blaschke et al. (2001). The quantitative outcome of stress
testing can be used in several places for portfolio and risk management: x risk buffers can be determined and/or tested against extreme losses x the risk capacity of a financial institution can be
determined and/or tested against extreme losses x limits for sub-portfolios can be fixed to avoid given amounts of extreme losses x risk policy, risk tolerance and risk appetite can be tested by
visualising the risk/return under abnormal market conditions. Such approaches focusing on quantitative results might be of particular interest for sub-portfolios (like some country-portfolios), where
the historic volatility of the respective loans is low, but drastic changes in risk relevant parameters cannot be excluded. Stress tests should not only be reduced to their purely quantitative
features. They can and should also play a major role in the portfolio management of a bank, as they offer the possibility of testing the structure and robustness of a portfolio against perturbations
and shocks. In particular they can represent a worthwhile tool to: x identify potential risks and locate the weak spots of a portfolio x study effects of new intricate credit products x guide
discussion on unfavourable developments like crises and abnormal market conditions, which cannot be excluded x help monitor important sub-portfolios exhibiting large exposures or extreme
vulnerability to changes in the market x derive some need for action to reduce the risk of extreme losses and hence economic capital, and mitigate the vulnerability to important risk relevant effects
x test the portfolio diversification by introducing additional (implicit) correlations x question the bank’s attitude towards risk
3. Regulatory Requirements As we have seen in the previous section, the benefits of using stress tests are manifold for the controlling and portfolio management. Tribute to this fact is also paid by
the Basel II Revised Framework (see BIS, 2004). Here stress testing appears in Pillar 1 (about the minimum capital requirements) and Pillar 2 (about the supervisory review process) for banks using
the IRB approach. The target of the requirements is improved risk management.
Volker Matthias Gundlach
The requirements in the Basel II Revised Framework are not precise. They can be summarized as:2 x Task: Every IRB bank has to conduct sound, significant and meaningful stress testing to assess the
capital adequacy in a reasonably conservative way. In particular, major credit risk concentrations have to undergo periodic stress tests. Furthermore, stress tests should be integrated in the
internal capital adequacy process, in particular, risk management strategies to respond to the outcome of stress testing. x Intention: Banks shall ensure that they dispose of enough capital to meet
the regulatory capital requirements even in the case of stress. x Requirements: Banks should identify possible events and future changes in economic conditions, which could have disadvantageous
effects on their credit exposure. Moreover, the ability of the bank to withstand these unfavourable impairments has to be assessed. x Design: A quantification of the impact on the parameters
probability of default (PD), loss given default (LGD) and exposure at default (EAD) is required. Rating migrations should also be taken into account. Special notes on how to implement these
requirements include: x The use of scenarios like: economic or industry downturn market-risk events liquidity shortage is recommended. x Recession scenarios should be considered, worst-case scenarios
are not required x Banks should use their own data for estimating rating migrations and integrate the insight of rating migrations in external ratings x Banks should build their stress testing also
on the study of the impact of smaller deterioration in the credit environment. Though the requirements for stress testing are mainly contained in Pillar 1 of Basel II, the method is a fundamental
part of Pillar 2, since it is an important way of assessing capital adequacy. This explains the lack of extensive regulations for stress testing in that document as Pillar 2 acknowledges the ability
to judge risk and use the right means for this procedure. As another consequence, not only regulatory capital should be the focus of stress tests, but also economic capital as the counterpart of the
portfolio risk as seen by the bank. Not only the BIS (see CGFS, 2000, 2001, and 2005) promotes stress testing, but also some central banks and regulators3 have taken care of this topic (e.g., 2 3
The exact statements can be found in §§ 434-437, § 765, § 775 and § 777 of BIS (2004). Regulators are also interested in contagion, i.e. the transmission of shocks in the financial system. This topic
is not part of this contribution.
XV. Development of Stress Tests for Credit Portfolios
Deutsche Bundesbank (2003, 2004), Fender et al., 2001), in particular regarding the stability of financial systems. They have published statements which can be regarded as supplements to the Basel II
Revised Framework. These publications give a better impression of the regulatory goals and basic conditions for stress testing, which can be summarized as: x Stress tests should consider extreme
deviations from normal developments and hence should invoke unrealistic, but still plausible situations, i.e. situations with low probability of occurrence x Stress tests should also consider
constellations which might occur in future and which have not yet been observed x Financial institutions should also use stress testing to become aware of their risk profile and to challenge their
business plans, target portfolios, risk politics, etc. x Stress testing should not only be addressed to check the capital adequacy, but also used to determine and question limits for awarding credit
x Stress testing should not be treated only as an amendment to the VaRevaluations for credit portfolios, but as a complimentary method, which contrasts the purely statistical approach of VaR-methods
by including causally determined considerations for unexpected losses. In particular, it can be used to specify extreme losses in a qualitative and quantitative way.
4. Risk Parameters for Stress Testing The central point of the procedure of stress testing – also seen in Basel II – is the change in risk parameters. For regulatory capital, these parameters are
given by the probability of default (PD), loss given default (LGD) and exposure at default (EAD). In this connection, a superior role is played by the variations of PD, as LGD and EAD are lasting
quantities which – due to their definition – already are conditioned to disadvantageous situations, namely the default of the obligor. The possibilities of stress effects are hence restricted,
especially for EAD. The latter might be worsened by a few exogenous factors such as the exchange rate, but they should also be partly considered in the usual EAD. The exogenous factors affecting the
EAD might only be of interest if they also have an impact on the other risk parameters and hence could lead to an accumulation of risky influence. The possible variances for the LGD depend heavily on
the procedure used to determine this quantity. Thus, deviations which might arise from the estimation methods, should be determined, as well as parts of the process that might depend on economic
conditions. As the determination of the LGD is conditioned – by definition – to the unfavourable situation of a default, it should take into account lasting values for collaterals, and lead to values
that can be seen as conservative. Thus, there should not too many factors be left, that could lead to extreme changes for the LGD. Mainly the evaluation of collateral could have some influence which
cannot be neglected when stressing the LGD. In particular, it might be possible
Volker Matthias Gundlach
that factors affecting the value of the collaterals also have an impact on other risk parameters and hence should be taken into account. The PD is by far the most popular risk parameter which is
varied in stress tests. There are two main reasons why variations in the PD of an obligor can occur. On the one hand, the assignment of an obligor to a rating grade might change due to altered inputs
for the rating process. On the other hand, the realised default rates of the rating grades itself might change, e.g., because of modified economic conditions and their impact on the performance of
the loans. This allows two options for the design of the integration of PDs into stress testing: modifications either of the assignment to rating grades or of the PDs of the rating grades for stress
tests. Altered assignments of rating grades for obligors in stress tests have the advantage that they also allow the inclusion of transitions to non-performing loans. The change of PDs corresponds to
a change of rating grade. The possible deviation in the assignment of rating grades can be promoted by the rating procedure. Thus, the possibilities of variances and the sensitivity of the input for
the rating process should be investigated in order to get a first estimate for possible deviations. Consequently, as well as the analysis of historic data for rating transitions, expert opinions on
the rating methodology should be a part of the design process for the stress test. The modification of PDs for the rating grades, could have its origin in systematic risk, i.e. in the dependence on
risk drivers, one of the main topics in designing stress tests, as will be discussed below. While it is sensible to estimate the volatility of PDs in a first step and use the outcome of this
procedure for tests on regulatory capital, the differentiation of the effects of systematic and idiosyncratic risk on PD deviations should be considered in a second step. This will lead to more
advanced and realistic stress tests, in particular on economic capital. An analysis of the transition structure for rating grades might also be used to determine PDs under stress conditions. The
advantage of modifying PDs against modifying the assignment of rating grades is a greater variety for the choices of changes; the disadvantage is the absence of a modified assignment to the
performing and non-performing portfolio. This has to take place on top of the modification of PDs. Estimating economic capital PD, LGD and EAD might not be sufficient to design stress tests. In
addition, parameters used for displaying portfolio effects, including correlations between the loans or the common dependence on risk drivers are needed.4 Investigations on historic crises for credit
risk show that correlations and risk concentration exhibit huge deviations in these circumstances. In any case, 4
The basis for widely used portfolio models like CreditRisk+ or CreditMetrics, which are used by banks for estimating the VaR, are provided by factor models. The (abstract) factors are used to present
systematic risk affecting the loans. In these models it makes sense to stress the strength of the dependence on the factors and the factors themselves.
XV. Development of Stress Tests for Credit Portfolios
their variations should be considered in stress tests with portfolio models if possible. Some advanced models for estimating economic capital might even require more information, in particular
economic conditions. Portfolio models such as CreditMetrics not only consider the default of loans, but also the change of value by using migration probabilities. In this case, the migration
probabilities should be stressed in the same way as PDs. Stressing of risk parameters in tests need not take place for the whole portfolio, but only for parts of it. Also, the strength of the
parameter modification might depend on sub-portfolios. Such approaches are used to pay tribute to different sensitivities of parts of the portfolio to risk relevant influences or to study the
vulnerability of certain (important) sub-portfolios. They can be particularly interesting for investigations on economic capital with the help of portfolio models. In these cases, parameter changes
for parts of the portfolio need not have a smaller impact than analogous variations for the whole portfolio due to effects of concentration risk or diversification, respectively.
5. Evaluating Stress Tests As stress testing should be a part of the internal capital adequacy process, there should be an understanding of how to use the outcome of stress tests for controlling and
managing portfolio risk. The starting point for this should be the regulatory and economic capital as output of the underlying stress tests. The first task consists of checking whether the financial
institution holds sufficient capital to also cover the requirements in the stress situation. As there should be limits, buffers and policies to guarantee this, the evaluation of stress testing should
be also used to review these tools. Since the latter might be applicable to different portfolio levels (e.g. limits for sub-portfolios, countries, obligors), they should be checked in detail. The
concept of stress testing would be incomplete without knowing when action has to be considered as a result of the outcome of tests. It makes sense to introduce indicators and thresholds for
suggesting when: x x x x x
to inform management about potential critical developments to develop guidelines for new business in order to avoid the extension of existing risky constellations to reduce risk for the portfolio or
sub-portfolios with the help of securitisation and syndication to readjust an existing limit management system and the capital buffer for credit risk to re-think the risk policy and risk tolerance
Indicators for the call on action could be:
x x x x x x
Volker Matthias Gundlach
the increase of risk indicators as expected loss, unexpected loss, expected shortfall over a threshold or by a specified factor the increase of capital requirements (regulatory or economic) over a
threshold or by a specified factor the solvency ratio of capital and capital requirements under a threshold a low solvency level for meeting the economic capital requirements under stress a specified
quantile of the loss distribution for the portfolio under stress conditions does not lie within a specified quantile of the loss distribution for the original portfolio expected loss for the
portfolio under stress conditions overlaps the standard risk costs (calculated on the basis of expected loss for the duration of the loans) by a specified factor or gets too close to the unexpected
loss for the unstressed portfolio the risk/return lies above a specified threshold, where risk is measured in terms of unexpected loss
The interpretation of the outcome of stress tests on economic capital can easily lead to misapprehensions, in particular if the capital requirement is estimated on the basis of a VaR for a rather
large confidence level. The motivation for the latter approach is the avoidance of insolvency by holding enough capital, except for some very rare events. Stress tests might simulate situations
coming quite close to these rare events. Adhering to the large confidence levels for estimating economic capital, offers the possibility of comparing the capital requirements under different
conditions, but the resulting VaR or economic capital should not be used to question the solvency. In fact, it should be considered whether to use adapted confidence levels for stress testing or to
rethink the appropriateness of high confidence levels. One can see the probability of occurrence or the plausibility of a stress test as a related problem. We refer to a detailed discussion on this
topic and an approach to resolution to Breuer and Krenn (2001).
6. Classifying Stress Tests According to regulatory requirements, a bank should perform stress tests on its regulatory as well as its economic capital. This differentiation of stress tests is not
essential and mainly technical, as the input for determining these two forms of capital might be quite different as described in the previous section. Another technical reason for differentiating
stress tests is the division into performing and non-performing loans, as their respective capital requirements follow different rules. For non-performing loans, loss provisions have to be made. Thus
one has to consider the following cases for stress tests: x A performing loan gets downgraded but remains a performing loan – the estimation of economic capital involves updated risk parameters
XV. Development of Stress Tests for Credit Portfolios
x A performing loan gets downgraded and becomes a non-performing loan – provisions have to be estimated involving the net exposures calculated with the LGD x A non-performing loan deteriorates – the
provisions have to be increased on the basis of a declined LGD As already discussed in the previous section, defaults can be included in stress tests via a worsened assignment to rating grades. If
stress tests focus on PDs rather than rating grades, then stress rates for the transition of performing to nonperforming loans are required for the same purpose. Ideally, they depend on ratings,
branches, economic states, etc. and are applied to the portfolio after stressing the PDs. Moreover, the methodology of a bank to determine the volume of the provision for a defaulted credit should be
considered. A typical approach is to equate the loss amount given the default (i.e. the product of LGD with the exposure) with the provision. Typical ways to categorize stress tests can be taken over
from market risk. They are well documented in the literature (CGFS (2005), Deutsche Bundesbank, 2003 and 2004). The most important way to classify stress tests is via the methodology. One can
distinguish stress tests with respect to techniques in statistically and model based methods, and with respect to conceptual design in sensitivity analysis and scenario analysis. While the latter is
based on modelling economic variances, sensitivity analysis is statistically founded. The common basis for all these specifications is the elementary requirement for stress tests to perturb the risk
parameters. These can be the basic risk parameters (EAD, LGD, PD), of the loans as already mentioned for the tests on the regulatory capital. However, these can also be parameters used in a portfolio
model like asset correlations or dependencies on systematic risk drivers. The easiest way to perform stress tests is a direct modification of the risk parameters and belongs to the class of
sensitivity analysis. The goal is to study the impact of major changes in the parameters on the portfolio values. For this method, one or more risk parameters are increased (simultaneously) and the
evaluations are made for this new constellation. The increase of parameters should depend on statistical analysis or/and expert opinion. As these stress tests are not linked to any event or context
and are executed for all loans of a (sub-) portfolio, without respect to individual properties, we refer to them as flat or uniform stress tests. Most popular are the flat stress tests for PDs, where
the increase of the default rates can be derived from transition rates between the rating grades. An advantage of these tests is the possibility of performing them simultaneously at different
financial institutions and aggregating these results to check the financial stability of a system. This is done by several central banks. Such tests are suited to checking the space and buffer for
capital requirements, but it does not mean any help for portfolio and risk management. Model based methods for stress testing incorporate observable risk drivers, in particular, macroeconomic
variables for representing the changes of risk parameters.
Volker Matthias Gundlach
In the following, we will refer to these risk drivers as risk factors. The respective methods rely on the existence of a model – mainly based on econometrical methods – that explains the variations
of the risk parameters by changes of such risk factors. One can distinguish univariate stress tests, which are defined by the use of a single, isolated risk factor, and multivariate stress tests,
where several factors are changed simultaneously. These tests can be seen as a refinement of those previously described: stressing the risk factors leads to modified risk parameters which are finally
used for the evaluation of the capital requirements. Note that risk factors can have quite different effects on risk parameters throughout a portfolio. Changes in the risk factors can lead to
upgrades as well as downgrades of risk parameters. For example, an increase in price of resources such as oil or energy can have a negative impact on PDs in the automobile or any other industry
consuming lots of energy, but it could have a positive impact on the PDs in the country trading these resources. By using univariate stress tests, banks can study specific and especially relevant
impacts on their portfolios. This has the benefit of isolating the influence of an important observable quantity. Consequently, it can be used to identify weak spots in the portfolio structure. Thus,
univariate stress tests represent another kind of sensitivity analysis, now in terms of risk factors instead of risk parameters. They have the disadvantage of possibly leading to an underestimation
of risk by neglecting potential effects resulting from possible correlations of risk factors. This shortcoming is abolished by using multivariate stress tests. The price is the reliance on additional
statistical analysis, assumptions or the establishment of another model describing the correlation of the risk factors involved. This is done in a framework known as scenario analysis, where
hypothetical, historical and statistically determined scenarios are distinguished. It results in the determination of stress values for the risk factors which are used to evaluate stress values for
the risk parameters. With respect to the design of scenarios, we can discriminate approaches driven by the portfolio (bottom-up approaches) and driven by events (top-down approaches). Bottom-up
approaches tend to use the results of sensitivity analysis to identify sensitive dependence on risk factors as starting points. As a consequence, those scenarios are chosen which involve risk factors
having the largest impact. For example, for a bank focusing on real estate, GDP, employment rate, inflation rate, spending capacity in the countries, it is acting in, will be of more relevance than
the oil price, exchange rates, etc. Thus, it will look for scenarios involving the relevant risk factors. Top-down approaches start with a chosen scenario, e.g., the terror attack in New York on
September 11, 2001, and require the analysis of the impact of this scenario on the portfolio. The task in this situation is to identify those tests which cause the most dramatic and relevant changes.
Historical scenarios are typical examples of top-down approaches. They refer to extreme constellations of the risk factors which were observed in the past and in the majority of the cases can be
related to historical events and crises. They are
XV. Development of Stress Tests for Credit Portfolios
transferred to the current situation and portfolio. This can be seen as a disadvantage of this approach, as the transferred values may no longer be realistic. Another drawback is that generally, it
is not possible to specify the probability of the scenario occurring. Also, statistically determined scenarios might depend on historical data. They are based on the (joint) statistical distribution
of risk factors. In this approach, scenarios might be specified by quantiles of such distributions. Whilst it might be very difficult to produce suitable distributions in particular, joint
distributions, the advantage is that it is possible to evaluate the probability of the scenario occurring as this is given by the complement of the confidence level used for the quantile. The
existence of such probabilities of occurrence allows the calculation of expected extreme losses which can be used for the estimation of economic capital. The crucial point of this approach is the
generation of a suitable risk factor distribution. Only if the latter is chosen compatible with the state of economy, (hence does not rely too heavily on historic data), can useful conclusions for
the management of the portfolio be derived. Finally, there are hypothetical scenarios which focus on possible rare events that might have an important impact on the portfolio, but have not been
observed yet in the form they are considered. The crucial point is the presentation of the consequences of the event on the risk factors. For the estimation of this expert opinion, it is necessary to
accompany the macro-economic modelling of the dependence of the risk parameters on risk factors. If macroeconomic parameters are not part of the input for determining the risk parameters which are
stressed, there are three steps required for macro stress tests. Firstly, it is necessary to model the dependence of the risk parameters on the risk factors. Secondly, it is necessary to choose
values for the risk factors which are representative for stress events. Since it is intended to reproduce correlations and causal interrelations between risk factors and stress events, intricate
(macro-economic), methods of estimation and validation are needed. A disadvantage of hypothetical scenarios might be the need to specify the probability of occurrence of such hypothetical scenarios.
On the other hand, there is the major advantage of having forward-looking scenarios which do not necessarily reflect historical events. Thus, hypothetical scenarios present interesting adjuncts to
VaR-based analysis of portfolio risk and are a worthwhile tool for portfolio management. The use of risk factors as in the multivariate scenario analysis has the additional advantage of allowing
common stress tests for credit, market and liquidity risk. Here, it is necessary to consider factors that influence several forms of risk or scenarios that involve risk factors for them.
Volker Matthias Gundlach
7. Conducting Stress Tests In the following section we will discuss how the stress tests we have just introduced in the previous section, can be and are, applied in financial institutions. We try to
provide details how to determine and conduct stress tests, focussing mainly on the performing part of credit portfolios. 7.1. Uniform Stress Tests The most popular stress tests in banks are uniform
stress tests, in particular for the PDs. The intention is to use increased PDs for the calculation of economic or regulatory capital. In the easiest case, there is a flat increase rate for all PDs5
of obligors or/and countries, but in general, the change might depend on rating grades, branches, countries, regions, etc. We suggest several ways to derive the stress PDs: 1. Analyse the default
data with respect to the dependence on rating grades, branches, countries, regions, etc. This data could originate from the bank’s own portfolio or from rating agencies. Determine the deviations of
the default rates from the PD. Another way to derive such variations might arise from the analysis of spreads for respective credit derivatives. The stress PD then can be determined from the PD by
adding the standard deviation, a quantile or other relevant characteristic of the deviation distribution. It might seem to be a good idea to use the quantile to determine also a probability of the
stress occurring, but one should question the quality and the relevance of the distribution before using this approach. 2. Use migration rates (referring to the bank’s own portfolio or coming from
rating agencies), to determine transitions between rating grades. These transitions might depend on branches, countries, etc. In an intermediate step, stressed migration matrices can be generated by
omitting rating upgrades, by conditioning on economic downturns (Bangia et al. 2002), by uniformly increasing the downgrade rates at the expense of uniformly decreasing the upgrade rates or on the
basis of a time series analysis. Next, one can derive for every original rating grade, a stressed rating grade by evaluating quantiles or any other characteristics for the transition probabilities.
Consequently, it is possible to build the stress test on the rating grades. Now, the stress test consists of replacing the original rating grade by the stressed rating grade. Alternatively, one can
replace the original PD by the PD of the stressed rating grade. A different approach uses the stressed migration rates. Depending on their derivation, they possibly 5
Such stress tests are often used by central banks to test the stability of the respective financial systems. In the studies in Deutsche Bundesbank (2003) PDs are increased by 30% and 60%,
respectively. These changes approximately correspond to downgrades of Standard and Poor’s’ ratings by one or two grades, respectively. The latter is seen as conservative in that paper. Banks should
analyse their default data to come up with their own rates of increase, which we expect to be in the worst case larger than 60%.
XV. Development of Stress Tests for Credit Portfolios
have to be calibrated to become transition probabilities. Then they can be used to calculate an expected PD for every rating grade, which can play the role of a stressed PD. The decision as to which
option should be chosen for determining the stress PD should depend on the data, which is available for statistical analysis. Also, expert opinions could be a part of the process to generate the
stress PDs. In particular, it makes sense to study the deviations that can be caused by the rating process due to sensitive dependence on input parameters. This could lead to an additional add-on
when generating the stress PDs. The preference for stressed PDs or stressed rating grades should depend on the possibilities of realising the stress tests. Regarding the portfolio model, the
dependence of a PD on a branch or country in a rating grade could – for example – represent a problem. A criterion in favour of stressed rating grades is the inclusion of defaults. Such a stressing
might lead to assignments of loans to grades belonging to the non-performing portfolio. These can be treated respectively, i.e. instead of the capital requirements, provisions can be calculated. In
the case that PDs are stressed, instead of rating grades, one should first consider the stressing of the PDs in the portfolio and then the stressing of transition rates to the non-performing part of
the portfolio. In this context, Monte Carlo simulations can be used to estimate capital requirements for the performing, and provisions for the non-performing part of the portfolio. Transition rates
to the non-performing portfolio, usually corresponding to default rates, can be stressed in the same form and with the same methods as the PDs. The same holds for migration rates between rating
grades which are used in some portfolio models. Flat stress tests for LGDs could also be based on statistical analysis, in this case for loss data. The approach to determine and study deviations in
loss rates is analogous to the one for default rates. Expert opinion could play a bigger role. An example of an interesting stress test could be provided by a significant fall in real estate prices
in some markets. Uniform stressing of EAD is often not relevant. Deviations of this quantity mainly depend on individual properties of the loans. Variations of exchange rates can be seen as the most
important influence on the deviations of EAD from the expected values. It is commendable to investigate this effect separately. For uniform stressing of parameters used in portfolio models, it seems
to be the best to rely on expert opinions, as it is very difficult to detect and statistically verify, the effect of these parameters on the deviations from expected or predicted values of defaults
and losses. While it is already rather intrinsic to determine suitable parameter values for the uniform tests involving single parameters, it even becomes more difficult to do
Volker Matthias Gundlach
this for several parameters at the same time. Experience derived from historic observations and expert opinion seems to be indispensable in this situation. 7.2. Sensitivity Analysis for Risk Factors
This kind of stress testing is very popular for market risk, where risk factors can easily be identified, but it can also be seen as basic for scenario analysis. This is due to the crucial task of
recognising suitable risk factors and introducing a valid macroeconomic model for the dependence of risk parameters on the risk factors representing the state of the business cycle. Of course, there
are obvious candidates for risk factors like interest rates, inflation rates, stock market indices, credit spreads, exchange rates, annual growth in GDP, oil price, etc. (Kalirai and Scheicher,
2002). Others might depend on the portfolio of the financial institute and should be evident for good risk managers. Using time series for the risk factors on relevant markets, as well as for the
deviations of risk parameters and standard methods of statistical analysis like discriminant analysis, one should try to develop a macroeconomic model and determine those factors suitable to describe
the evolution of risk parameters. Typically, the impact of stress on the risk parameters or directly on credit loss characteristics is modelled using linear regression. One of the problems involves
determining the extent to which the risk factors must be restricted, whilst allowing a feasible model. Discovering which risk factors have the biggest impact on the portfolio risk in terms of the VaR
or whatever is used for the evaluation of unexpected losses, is the target and the benefit of sensitivity analysis. Stressing is analogous to the uniform stress test on risk parameters. Stress values
for a single risk factor are fixed on the basis of statistical analysis or expert opinion. The consequences for the risk parameters are calculated with the help of the macroeconomic model and the
modified values for the risk parameters are finally used for evaluating capital requirements. Risk factors, which have an impact on several risk parameters and which also play a role for stress
testing market risk, might be of particular interest. Sensitivity analysis could also be used to verify the uniform stress testing by checking whether the range of parameter changes due to
sensitivity analysis is also covered by the flat stress tests. Moreover, it can be seen as a way to pre-select scenarios: only those historical or hypothetical scenarios which involve risk factors
showing some essential effects in the sensitivity analysis are worth considering. 7.3. Scenario Analysis Having specified the relevant risk factors, one can launch historic scenarios, statistical
selection of scenarios and hypothetical scenarios. These different methods
XV. Development of Stress Tests for Credit Portfolios
should partly be seen as complementing each other. They can also be used for specifying, supporting and accentuating the other. 7.3.1. Historical Scenarios Historical scenarios are easy to implement,
as one only has to transfer the values of risk factors corresponding to a historic event to the current situation. In most cases, it does not make sense to copy the value of the risk factors, but to
determine the change of value (either in absolute or in relative form) which is accompanied by the insertion of the event and assume it also applies to the actual evaluation. The following events are
most popular for historical scenarios: x Oil crisis 1973/74 x Stock market crash (Black Monday 1987, global bond price crash 1994, Asia 1998) x Terrorist attacks (New York 9/11 2001, Madrid 2004) or
wars (Gulf war 1990/91, Iraq war 2003) x Currency crisis (Asian 1997, European Exchange Rate Mechanism crisis 1992, Mexican Peso crisis 1994) x Emerging market crisis x Failure of LTCM6 and/or
Russian default (1998) Though the implications of historical scenario analysis for risk management might be restricted due its backward looking approach, there are good reasons to use it. First of
all, there are interesting historic scenarios which certainly would not have been considered, as they happened by accident, i.e. the probability of occurrence would have been seen too low to look at
them. Examples of this case are provided by the coincidence of the failure of LTCM and the Russian default or the 1994 global bond price crash. It can be assumed that both events would rarely have
contributed to the VaR at the time of their occurrence, due to the extremely low probability of joint occurrence for the single incidents.7 There is also much to learn about stress testing and
scenario analysis from working with historic scenarios. On the one hand, the latter can be used to check the validity of the uniform stress tests and sensitivity analysis; on the other hand, they can
be very helpful in designing hypothetical scenarios. Thus, the analysis of his6
The hedge fund Long-Term Capital Management (LTCM) with huge, but well diversified risk positions was affected in 1998 by a market-wide uprising of risk boosted by the Russia crisis. This led to
large losses of equity value. Only a joint cooperation of several US-investment banks under the guidance of the Federal Reserve could avoid the complete default of the fund and a systemic crisis in
the world’s financial system. The movements of government bond yields in the US, Europe and Japan are usually seen as uncorrelated. Hence their joint upward movement in 1994 can be seen as an
extremely unlikely event.
Volker Matthias Gundlach
torical scenarios offers the unique possibility of learning about the joint occurrence of major changes to different risk factors and the interaction of several types of risks, e.g., the impact of
credit risk events on liquidity risk. For these reasons, we regard historical scenario analysis as a worthwhile part of establishing a stress testing framework, but not necessarily as an essential
part of managing and controlling risk. 7.3.2. Statistically Determined Scenarios A special role is played by the analysis of scenarios which are chosen on the basis of risk factor distributions.
These are not directly related to the other types of scenario analysis. Central to this approach is the use of (joint) risk factor distributions. While it should not be too difficult for isolated
common risk to generate such distributions on the basis of historic data, a situation involving several factors can be far more intricate. Nevertheless, distributions generated from historic data
might not be sufficient. It would be much better to use distributions conditioned to the situation applying at the time of stress testing. This could represent a real problem. We would like to point
out that only in the case of a reliable factor distribution, should this approach be used. If expected losses conditioned to a quantile are evaluated in order to interpret them as unexpected losses
and treat them as economical capital requirement, then the risk factor distribution should also be conditioned to the given (economic) situation. 7.3.3. Hypothetical Scenarios Hypothetical scenario
analysis is the most advanced means of stress testing in risk management. It should combine experience in analysing risk relevant events with expert opinion on the portfolio, as well as the economic
conditions and statistical competency. The implementation of hypothetical scenario analysis is analogous to the one for historic scenarios. The only difference is provided by the choice of values for
the risk factors. This can be based on or derived from historical data, but expert opinion might also be used to fix relevant values. The choice of scenarios should reflect the focus of the portfolio
for which the stress test is conducted and should have the most vulnerable parts of it as the target. Common scenarios (together with risk factors involved) are provided by the following: x
Significant rise in oil price (increased oil price, reduced annual growth in GDP to describe weakened economic growth, indices describing increased consumer prices, etc.)
XV. Development of Stress Tests for Credit Portfolios
x Major increase of interest rates (indices describing the volatility of financial markets, increased spreads, reduced annual growth in GDP to describe weakened economic growth, volatility of
exchange rates, consumer indices, etc.) x Drop in global demand (reduced annual growth in GDP, stock market indices, consumer indices, etc.) x Emerging market crisis (reduced annual growth in GDP to
describe weakened economic growth, widened sovereign credit spreads, decline in stock prices, etc.) Hypothetical scenarios have the additional advantage that they can take into account recent
developments, events, news and prospects. Note that scenarios involving market parameters like interest rates are well suited for combinations with stress tests on market and liquidity risk.
8. Examples In the following we will present the outcome of some stress tests on a virtual portfolio to illustrate the possible phenomena, the range of applications and advantages corresponding to
the tests. The portfolio consists of 10,000 loans and exhibits a volume of 159 billion EUR. The loans are normally distributed over 18 rating grades (PDs between 0.03% and 20% and a mean of 0.6%) and
LGDs (ranging from 5% to 50% with a mean of 24%). Moreover, they are gamma-distributed with respect to exposure size (ranging from 2,000 EUR to 100 million EUR with mean 1 million EUR). To determine
economic capital, we employ the well known portfolio model CreditRisk+ (Gundlach and Lehrbass, 2004). We use it here as a six-factor-model, this means that we incorporate six (abstract) factors
corresponding to so-called sectors (real estate, transport, energy, resources, airplanes, manufacturing) which represent systematic risk drivers. For our version of CreditRisk+, each obligor j is
assigned exactly to one sector k = k(j). This is done according to a weight wj, 0 wj 1. For each sector k there is a corresponding random risk factor Sk, which is used to modify the PD pj to ȡj
p j w j Sk j .
The random factors Sk have mean 1 and are gamma-distributed with one parameter ık corresponding to the variance of the distribution. Correlations in CreditRisk+ are thus introduced via the Sk, i.e.
in our CreditRisk+-version, only correlations between obligors from the same sector are sustained. The strength of the correlations depends on the weights wj and the variation ık. These parameters
can both be modified in stress tests, though it seems more natural to increase the ık. The loans in the portfolio are randomly distributed over the six sectors, representing systematic risk, and
thirteen countries, which play a role in some of the scenar-
Volker Matthias Gundlach
ios. The dependence of the loans on respective systematic risk factors varies between 25% and 75% and is randomly distributed in each sector. The sectorial variation parameters ık’s are calculated
from the volatilities of the PDs according to some suggestion from the original version of CreditRisk+ and range between 1.8 and 2.6. In the stress tests we only take account of the dependence of the
risk parameter PD, on risk factors ȕi. When modelling this interrelation, we used a simple linear regression to predict the changes of rating agencies’ default rates for the sector and country
division of the portfolio and transferred this dependence to the PDs pj used in our model pi
¦x i
ji E i
uj .
Here the uj’s represent residual variables and the indices refer to a classification of PDs according to sectors and countries. Due to the small amount of data and the crude portfolio division, we
ended with a rather simple model for the PDs with respect to their assignment to sectors and countries involving only an oil price index, the S&P 500-Index, the EURIBOR interest rate, the EUR/USD
exchange rate and the GDP of the USA and EU. We performed several stress tests on the virtual portfolio. The evaluation of these tests takes place in terms of expected loss, regulatory and economic
capital. For the latter, we calculate the unexpected loss as the difference between VaR for a confidence level of 99.99% and expected loss. We focus on the outcome for the whole portfolio, but also
report on interesting phenomena for sub-portfolios. The calculations of regulatory capital are based on the Basel II IRBA approach for corporates, while the estimations of VaR are done with
CreditRisk+. Loss provisions are also considered in some tests. In the case that the assignment of obligors to rating grades is stressed, non-performing loans and hence candidates for loan provisions
are implicitly given. In other cases, they are determined for each rating grade according to a stressed PD. The volume of the respective portfolio is reduced respectively. We have considered the
following stress tests, including uniform stress tests, sensitivity analysis, and historical and hypothetical scenario analysis: 1. flat increase of all PDs by a rate of 50%, a) with and b) without
loan loss provisions 2. flat increase of all PDs by a rate of 100% a) with and b) without loss provisions 3. uniform upgrade of all rating grades by one 4. flat increase of all LGDs by 5% 5. flat
increase of all PDs by a rate of 50% and all LGDs by 5% 6. flat increase of all sectorial variances ık by a rate of 50% 7. flat increase of all LGDs by 10% for real estates in UK and USA (burst of
real estate bubble)
XV. Development of Stress Tests for Credit Portfolios
8. drop of stock market index (S&P500-Index) by 25% 9. rise of oil price by 40% 10. September 11 (drop of oil price by 25%, of S&P-Index by 5.5%, EURIBOR by 25%) 11. recession USA (drop of S&P-Index
by 10%, GDP of USA by 5%, GDP of EU by 2%, increase of EUR/USD-exchange rate by 20%) The outcome is summarised in the following table where all listed values are in Mill. EUR: Table 1. Outcome of
stress testing on a virtual portfolio No. 0 1 a) 1 b) 2 a) 2 b) 3 4 5 6
Stress Test None (Basis portfolio) PD *150% PD *150% with provisions PD *200% PD *200% with provisions Rating class + 1 LGD + 5% LGD + 5%, PD *150% Systematic factor *150%
Real estate bubble
Stock price decline
Rise of oil price
Terror Attack New York September 11
11 Recession USA
Regulatory Capital 3,041 3,715 3,631 4,238 4,151 3,451 3,676 4,490 3,041
Economic Expected Capital Loss 1,650 235 2,458 353 2,255 320 3,267 470 2,996 427 1,911 273 1,985 283 3,935 567 3,041 235
Loss Sectorial Increase of Provision Economic Capital 0 0 332 0 332 376 0 0 0 32% for real estates, 45% for UK and USA 0 58% for USA, Western Europe, Japan 0 65% for transport and airplanes 0 77% for
USA, Western Europe, Japan 0 68% for USA and South America, 57% for airplanes 0
The inclusion of loss provisions does not seem to play a major role in the overall outcome of stress testing, as the sum of the provisions and the economic capital is rather small. Nevertheless, the
discrimination of economic capital and provisions (in particular with the comparison of the latter with expected loss), is quite interesting. Also, the distinction between stressing PDs and stressing
the assignment to rating grades has a rather limited impact on the result of the stress testing. Furthermore, it is not a surprise that stress testing has a larger impact on economic capital than on
regulatory capital. The significant diversity of impact on the sectors and countries by the scenario analysis underscores the importance of this kind of stress testing for detecting weak spots in the
portfolio and for portfolio management. As the portfolio used here is rather well diversified, the effects would be larger in a real portfolio. Also, the simultaneous stressing of several risk
parameters has major effects. This is underlined by the joint increase of PDs and LGDs. Also, the role of parameters describing systematic risk cannot be overestimated, as is indicated by the test
given by the increase of systematic risk factors. Some of the scenarios lack the exhibition effects one would expect (like a major deterioration of airplane industry in the historic scenario
concerning the terrorist attacks of September 11), which could
Volker Matthias Gundlach
not be indicated by the linear regression, but which could be produced in the design of the stress test using expert opinion.
9. Conclusion Stress testing is a management tool for estimating the impact on a portfolio of a specific event, an economic constellation or a drastic change in risk relevant input parameters, which
are exceptional, even abnormal, but plausible and can cause large losses. It can be seen as an amendment as well as a complement to VaRbased evaluations of risk. It allows the combinations of
statistical analysis and expert opinions for generating relevant and useful predictions on the limits for unexpected losses. Stress testing should not only be seen as a risk management method –
though it can be used in various ways, but also as an means towards analysing risk and risk relevant constellations. In particular, it should lead to a higher awareness and sensitivity towards risk.
This requires a better knowledge of risk drivers, portfolio structure and the development of risk concentrations. It cannot be achieved in a standard way. Instead experience, research and sustained
investigations are required. In particular it makes sense to use an evolutionary approach to overcome the complexity of requirements for stress testing. We would like to make the following suggestion
as an evolutionary way towards a reasonable and feasible framework of stress testing. The basis of stress tests is provided by rich data for defaults, rating transitions and losses. The starting
point for the development of stress tests should be an analysis of the volatilities of these rates and estimations for bounds on deviations for them. The statistical analysis should be accompanied by
investigations of the reasons for the deviations. It should be studied which fraction of the deviations arise from the methodology of the rating processes and which from changes in the economic,
political, etc. environment. Expert opinion should be used to estimate bounds for the deviations arising from the methodology. Statistical analysis should lead to an identification and quantification
of the exogenous risk factors having the main impact on the risk parameters needed to determine capital requirements. The combination of these two procedures should enable the establishment of
uniform stress testing. The analysis of default and loss data with respect to estimating deviations from the risk parameters should be followed by statistical analysis of the dependence of these
deviations on risk factors and an identification of the most relevant factors. For the latter, first considerations of historic events which are known to have a large impact on portfolio risk should
also be taken into account. These investigations should culminate in a macroeconomic model for the dependence of risk parameters on risk factors. With this model sensitivity, analysis for risk
factors can be performed. The outcome of these stress tests can be used to check whether the uniform stress tests involve sufficient variations of risk parameters to cover the re-
XV. Development of Stress Tests for Credit Portfolios
sults of univariate stress tests. As a consequence, the uniform stress tests might have to be readjusted. Moreover, the sensitivity analysis should also be used to check whether the chosen risk
factors are contributing to drastic changes in the portfolio. If this not the case, they should be neglected for further stress tests. The involvement of relevant risk factors should also be a good
criterion for picking historical and hypothetical scenarios. It makes sense to consider historical scenarios first in order to benefit from the experience with historical data. This experience should
also include the consideration of the interplay of different kinds of risks like market, credit, operational, liquidity risk, etc. The design of hypothetical scenario analysis should be seen as the
highlight and culmination point of the stress testing framework. Scenario analysis based on statistical analysis is a method which is not connected too closely with the others. Nevertheless, a lot of
preliminary work has to be done to generate reliable tests of this kind. The main problem is the generation of probability distributions for the risk factors, in particular joint distributions and
distributions conditioned on actual (economic) situations. The evolutionary approach towards a feasible framework for stress testing can be summarized by the chart in Figure 2. Statistical analysis
and research
Stress tests
Analyse methods for risk Parameter determination w.r.t. possibilities for deviations Design and realisation of uniform stress tests
Analyse dependence of risk parameter deviations on risk factors
Analyse historic events with high impact on portfolio risk
Determine risk factors and their possible deviations relevant for the portfolio and stress testing
Realisation of sensitivity analysis for risk factors selection
Analyse default and loss data w.r.t. possibilities for deviations of risk parameters
Develop a macroeconomical model for the dependence of risk parameters on risk factors validation
Realisation of historic scenario analysis
Determine a distribution for the (historic) deviations of the relevant risk factors
Realisation of statistically based scenario analysis Design and realisation of hypothetical scenario analysis
Figure 1. Development of a stress testing framework
Investigate the interplay of credit risk, market risk and other sorts of risk
Volker Matthias Gundlach
Having established a stress testing framework, we recommend x regular uniform stress tests for regulatory and economic capital in order to provide a possibility for evaluating the changes made to the
portfolio in terms of possible extreme losses x hypothetical scenario analysis suitable to the actual portfolio structure and the conditions provided by the economy, politics, nature, etc. The latter
should partly be combined with stress tests on market and liquidity risk. Also, effects on reputational and other risks should not be neglected. Furthermore, one should have in mind that a crisis
might have a longer horizon than one year, the typical period for evaluations of risk, even in the common stress scenarios.
References Bangia A, Diebold F, Schuermann T (2002), Ratings Migration and the Business Cycle, With Application to Credit Portfolio Stress Testing, Journal of Banking and Finance, Vol. 26 (2-3),
(March 2002), pp. 445-474. Basel Committee on Banking Supervision (2004), International Convergence of Capital Measurement and Capital Standards, Bank for International Settlements. Blaschke W, Jones
M, Majnoni G (2001), Stress Testing of Financial Systems: An Overview of Issues, Methodologies, and FSAP Experiences, IMF Working Paper. Breuer T, Krenn G (1999), Stress Testing Guidelines on Market
Risk, Oesterreichische Nationalbank, Vol. 5. Breuer T, Krenn G (2001), What is a Plausible Stress Scenario, in Kuncheva L. et al. (eds.): Computational Intelligence: Methods and Applications, ICSC
Academic Press, 215221. CGFS – Committee on the Global financial System (2000), Stress Testing by Large Financial Institutions: Current Practice and Aggregation Issues, Bank for International
Settlements. CGFS – Committee on the Global Financial System (2001), A Survey of Stress Tests and Current Practice at Major Financial Institutions, Bank for International Settlements. CGFS –
Committee on the Global Financial System (2005), Stress Testing at Major Financial Institutions: Survey Results and Practice, Bank for International Settlements. Deutsche Bundesbank (2003), Das
deutsche Bankensystem im Stresstest. Monatsbericht Dezember. Deutsche Bundesbank (2004), Stresstests bei deutschen Banken – Methoden und Ergebnisse. Monatsbericht Oktober. Fender I, Gibson G, Mosser
P (2001), An International Survey of Stress Tests, Federal Reserve Bank of New York, Current Issues in Economics and Finance, Vol. 7(10), 1-6. Gundlach M, Lehrbass F (eds.) (2004), CreditRisk+ in the
Banking Industry, Springer Berlin Heidelberg New York. Kalirai H, Scheicher M (2002), Macroeconomic Stress Testing: Preliminary Evidence for Austria. Oesterreichische Nationalbank, Financial
Stability Report 3, 58-74.
Contributors Stefan Blochwitz is Head of Section in the Department of Banking Supervision of Deutsche Bundesbank‘s Central Office in Frankfurt am Main. He is in charge of implementing Basel’s
internal ratings based approach (IRBA) in Germany from a supervisory perspective and is a member of the AIG-subgroup on validation issues. His responsibilities include setting up the supervision of
credit risk models as well as research activities in credit risk measurement and management. Bernd Engelmann is a founder and a managing director of Quanteam AG, a derivatives technology and
consulting boutique in Frankfurt that focuses on the development of tailor-made front-office and risk management solutions for the financial industry. Prior to that, Bernd worked in the research
department of the Deutsche Bundesbank where his focus was on the construction and validation of statistical rating models. He has published several articles in this field. Bernd holds a diploma in
mathematics and a Ph.D. in finance from the University of Vienna. Ulrich Erlenmaier works for KfW Bankengruppe’s risk control and management unit in Frankfurt am Main. He is responsible for the
development and validation of rating systems. Previously he worked for Aareal Bank in Wiesbaden (Germany), where he supervised a project for the development and implementation of new Basel II
compatible rating systems. Ulrich holds a diploma in mathematics and a Ph.D. in economics from the University of Heidelberg. Walter Gruber holds a Ph.D. in business mathematics and is managing
partner of 1 PLUS i GmbH. He started his professional career at an investment bank in the Treasury division and ALCO Management. Following that, Walter worked as team leader, banking supervision, on
the board of management of Deutsche Bundesbank in the areas of research and the principal issues of internal risk models and standard procedures. He also represented the Bundesbank on various
international boards (Basel, IOSCO). Following this he was managing director of a consulting firm where he worked as consultant and trainer in banking supervision, risk management and product
assessment procedures. Walter has published several papers, in banking supervision, market and credit risk models and derivative finance products. He has also had several standard works published in
these fields. Volker Matthias Gundlach is senior project manager for KfW Bankengruppe’s risk and portfolio management unit in Frankfurt am Main. He is responsible for the development and realisation
of stress tests. Previously he worked for Aareal Bank in Wiesbaden (Germany), where he developed a credit portfolio model, set up a credit risk reporting and evaluation for MBS. Matthias holds a
diploma in mathematics from the University of Würzburg (Germany), an MSc and a PhD in mathematics from the University of Warwick (UK) and a ‘Habilitation’ in mathe-
matics from the University Bremen (Germany). He edited two books for Springer Verlag on stochastic dynamics and the portfolio model CreditRisk+. His other research activities include work on ergodic
theory, stochastic dynamical systems and mathematical biology. Alfred Hamerle is Professor of Statistics at the Faculty of Business, Economics and Information Systems, University of Regensburg,
Germany. Prior to his present position, he was Professor of Statistics at the University of Konstanz and Professor of Statistics and Econometrics at the University of Tübingen. His primary areas of
research include statistical and econometric methods in finance, credit risk modelling and Basel II, and multivariate statistics. Alfred has published eight books and more than eighty articles in
scientific journals. Evelyn Hayden holds a Ph.D. in finance from the University of Vienna, where she worked as an assistant professor for several years. At the same time she participated in
banking-industry projects as a freelance consultant. Currently Evelyn works at the Austrian National Bank in the Banking Analysis and Inspections Division. Her research interests are in the area of
risk measurement and management with focus on the development and validation of statistical rating models and the estimation of default probabilities. Stefan Hohl is a Senior Financial Sector
Specialist at the Financial Stability Institute (FSI) of the Bank for International Settlements (BIS). He is primarily responsible for the dissemination of information on sound practices for
effective banking supervision, covering a broad range of topics. Before joining the FSI, Stefan was a Senior Economist (Supervision) in the BIS Representative Office for Asia and the Pacific in Hong
Kong. This followed his work for the Deutsche Bundesbank, Frankfurt, in the department for Banking Supervision, being responsible for the Deutsche Bundesbank’s market risk models examination and
validation team. He is an Adjunct Professor at City University of Hong Kong and a qualified mathematician. Michael Knapp is academic adviser at the Chair of Statistics, Faculty of Business, at the
University of Regensburg. After his business studies and various internships, Michael wrote his doctorate on ‘Point-in-Time Credit Portfolio Models’. The main emphasis of his research activities is
the development of credit portfolio models, the PD / LGD modelling and concepts for credit portfolio controlling. Marcus R. W. Martin heads the professional group for risk models and rating systems
at the banking examination department I of Deutsche Bundesbank’s Regional Office in Frankfurt. He is examiner in charge and senior examiner for conducting audits for internal market risk models as
well as for internal ratings based approaches (IRBA) and advanced measurement approaches (AMA). Gregorio Moral is lead expert for Validation of Credit Risk Models in the Treasury and Risk Management
Models Division (TRM) of the Banco de España (BE).
He has contributed to the development of the validation scheme adopted by the BE for Basel II. Currently, he represents the BE on several international validation groups. His present work involves
on-site visits to some of the largest banking institutions in Spain, the examination of the models used by these institutions to manage credit risk, and the review of the implementation of Basel II.
He develops review procedures, publishes on validation issues and contributes to seminars on validation approaches. He has been working in the TRM Division, focusing on credit risk models since 2001.
In his previous supervisory experience, he worked as an examiner reviewing a range of banking institutions. His research interests include applied topics in credit risk, especially the estimation and
validation of risk parameters. He holds a degree in mathematics from the Universidad Complutense de Madrid. Ronny Parchert holds a degree in business administration (BA) and is a managing partner of
1 PLUS i GmbH. He is specialised in Basel II and credit risk management. He commenced his career working for a saving bank as a credit risk analyst in the loan department and as risk manager for the
ALM-positions. Following this, Ronny worked as consultant and trainer in banking supervision, risk management and risk measurement. Ronny has had several papers published, particularly concerning
banking supervision, as well as market and credit risk management. Christian Peter works as a senior analyst for the risk controlling and management department of KfW Bankengruppe in Frankfurt am
Main, Germany. Among other things he has been involved in the development of a rating and pricing tool for specialized lending transactions and has supervised projects on collateral valuation as well
as EAD and LGD modelling. Christian holds a diploma in business engineering and a Ph.D. from the Faculty of Economics and Business Engineering of the University of Karlsruhe (TH), Germany. Katja
Pluto works at the banking and financial supervision department of Deutsche Bundesbank. She is responsible for international regulation of bank internal risk measurement methods, and represents
Deutsche Bundesbank in various Basel Committee and European Working Groups on the issue. Before joining the Deutsche Bundesbank, Katja had worked at the credit risk management department of Dresdner
Bank. Daniel Porath has developed and validated scoring models for various banks when he was a senior analyst at the INFORMA consulting company. Afterwards, he entered the bank supervision department
of the Deutsche Bundesbank where he developed a hazard model for the risk-monitoring of German credit cooperatives and savings banks. He was also involved in the supervisory assessment of the banks’
risk management methods and in the on-site inspections of banks. Since 2005 he is Professor for Quantitative Methods at the University of Applied Sciences at Mainz. His research is focused on
empirical studies about rating methods and the German banking market.
Robert Rauhmeier has been a member of the Group Risk Architecture, Risk Instruments at Dresdner Bank in Frankfurt am Main since 2005. He works on the development, enhancement and validation of the
groupwide rating models of Dresdner Bank. Prior to that, he worked for KfW-Bankengruppe in the risk controlling and management department for two years. In that role he supervised the project
'Conception and Implementation of a Backtesting Environment' and covered various rating and validation projects. Robert studied economics and holds a Ph.D. from the University of Regensburg. His
thesis involved an analysis of the 'Validation and Performance measuring of Bank Internal Rating Systems'. Daniel Rösch is currently Assistant Professor ("Privatdozent") at the Department of
Statistics at the University of Regensburg, Germany. He received his Ph.D. in business administration for a study on asset pricing models. His research areas include modelling and estimation of
credit risk, internal models for credit scoring and portfolio credit risk, and development and implementation of banking supervisory guidelines. He also works as a consultant in these fields for
leading financial institutions. Daniel has published numerous articles on these topics in international journals. Harald Scheule is a senior lecturer in finance at The University of Melbourne. Prior
to taking up this position in 2005, he worked as a consultant for banks, insurance and other financial service companies in various countries. Harald completed his Ph.D. at the University of
Regensburg, Germany. He wrote his thesis on ‘Forecasting Credit Portfolio Risk’ in collaboration with the Deutsche Bundesbank. Dirk Tasche is a risk analyst in the banking and financial supervision
department of Deutsche Bundesbank, Frankfurt am Main. Prior to joining Bundesbank, he worked in the credit risk management of Bayerische HypoVereinsbank and as a researcher at universities in Germany
and Switzerland. Dirk has published several papers on measurement of financial risk and capital allocation. Carsten S. Wehn is a market risk specialist at DekaBank where he is responsible for the
methodology and development of the internal market risk model. Before joining the DekaBank, he was senior examiner and examiner in charge at the Deutsche Bundesbank, where he conducted audits for
internal market risk models at commercial banks. Carsten studied applied mathematics in Siegen, Germany, and Nantes, France, and he holds a Ph.D. in mathematics from the University of Siegen. Nicole
Wildenauer is a research assistant and Ph.D. candidate at the Chair of Statistics at the University of Regensburg, Germany. She is a graduate of the University of Regensburg with a degree in Business
Administration, specialising in Finance and Statistics. Her research interests include the estimation and prediction of central risk parameters including probability of default and loss given default
in the realm of Basel II. Her Ph.D. thesis is titled “Loss Given Default Modelling in the Banking Sector”.
Index Accord Implementation Group (AIG) 247 Accuracy Ratio (AR) 267 add-on factor 182 alpha error see statistical tests amortization effect 180, 194 application scoring 29 asset correlation 86, 293,
315 AUROC see ROC curve Austrian data sample 14 autocorrelation 69 average migration 291 backtesting see validation Basel Committee on Banking Supervision 105 Basel II 144, 347 behavioral scoring 29
beta distribution 100 beta error see statistical tests binomial test 292, 317 including correlations 294 modified binomial test 294 normal approximation 294 simultaneous binomial test 324 Brier score
296 decomposition 297 quadratic deviation 297 bucket plots 58 business cycle 109 calibration 41, 292, 316 CAP curve 266 relation to ROC curve 271 capital adequacy 350 capital allocation 347 capital
requirements 198 central tendency 89 Chi-square test 45, 296 classing 27, 35 collateral 180 confidence intervals 45 confidence level 85, 354
conversion factors see exposure at default (EAD) correlation 105 credit conversion factor (CCF) see exposure at default (EAD) credit loss database (CLDB) 154 credit portfolio model 139 credit risk
127 CreditRisk+ 352 crosstabulation 33 Cumulative Accuracy Profile 266 current exposure (CE) 182 data cleaning 15, 207 correlated observations 51 external data 51 internal data 51 limitations for
backtesting 304 missing values 58 outliers 17 representative 58 sample construction 41 weighted observations 51 De Finetti's theorem 86 decision tree 10 default 105 after-default scenario 151 default
correlation 293 default criteria 44 default event 13 default generating process 310, 313 dependent default events 316 double default effects 165 default probability see probability of default (PD)
diffusion effect 194 dilution 325 discount rates 168 discriminant analysis 3, 360 discriminative power 34, 263, 292 EAD see exposure at default (EAD) economic capital 118, 348 economic loss 147
expected loss (EL) 42, 180, 354
expected shortfall 354 exposure at default (EAD) 177, 197, 350 average exposure 192 credit conversion factor (CCF) 177, 198 estimation for derivatives 181 estimation for facilities with explicit
limits 213 internal models 184 loan equivalent factor (LEQ) 198 maximum exposure 192 on-balance sheet netting 178 optimal estimators 198 quantile exposure 192 reference data set 205 regulatory
requirements 178 risk drivers 211 sensitivity approach 187 variance-covariance approach 187 extreme loss 349 fair value 146 financial ratios 16 finite population correction 294 forward/backward
selection 21 frequency plots 62 Gini coefficient see Accuracy Ratio hazard model 8 Hosmer-Lemeshow-test 296, 320 IAS/ IFRS 146 idiosyncratic risk 315 impact analysis 41 impairment 146 implied
historical LGD 150 implied market LGD 150 information value 35 internal capital adequacy process 350 intertemporal correlation 94 IRB approach 349 LGD see loss given default (LGD) likelihood ratio
273 linear regression 360 linearity assumption 17 loan equivalent factor (LEQ) see exposure at default (EAD)
logit model 4 loss distribution 354 loss function 198 loss given default (LGD) 45, 127, 129, 148, 350 best estimate 171 conservative estimate 169 costs 148, 167 explanatory variables 158 implied
historical LGD 150 implied market LGD 150 lookup table 157 market LGD 149 NPL 170 point-in-time estimation 128 provisioning and NPL 172 recovery 160 workout LGD 149 low default portfolio 79 machine
rating 308 macroeconomic risk factors 109 Mann-Whitney statistic 276 manual adjustments 72 market LGD 149 master scales 310 Mean Square Error (MSE) 319 minimum requirements 11 model estimation 21
sample splitting 19 selection of input ratios 19 model validation 22 most prudent estimation principle 81 multi factor analysis 41 multi factor model 105 netting agreement 183 neural network 9
non-performing loans (NPL) 170, 352 normal test 298 one-factor model 315 one-factor probit model 86 panel data 69 panel model 7 panel regression 130 PD see probability of default (PD) peak exposure
194 PFE-floor 183
Index point-in-time 42, 109, 114, 128, 134, 312 potential future exposure (PFE) 182 Power Statistics see Accuracy Ratio pre-settlement risk 181 probability of default (PD) 42, 105, 109, 350
conditional 107 correlation with recovery rate 105 default definition 290 default recognition 290 duration-based estimation 47 estimation 42, 79 estimation by cohort method 46 forecast 313 forecast
horizon 14 one-year 308 validation 307 probit model 4 random effect 128, 132, 138 random factor 315 rating dimensions 30 grade 310, 352 machine rating 308 methodology 352 migrations 350 point-in-time
291 rating agency 291 rating scales 310 through-the-cycle 291 transition 352 rating system 308 calibration 292 design 30 discriminative power 263 expert-guided adjustments 308 impact of stress 304
manual override 309 master scales 310 modular design 308 precision of measurement 303 stability 292 supporter logic 309 Receiver Operating Characteristic 268 recoding 27 recovery 148 recovery rate
105, 129 correlation with PD 105 risk drivers 106
Redelmeier test 321 regression analysis 2 regulatory capital 119, 348 representativeness 62 risk appetite 349 risk buffer 349 risk capacity 349 risk factors correlation 58 standardisation 59
transformation 51, 59 truncation 59 weights 70 risk policy 349 risk tolerance 349 ROC curve 268 area below the (AUROC) 270, 340 correct interpretation 283 relation to CAP curve 271 statistical
properties 275 ROC receiver operating curve 340 rolling 12-months-window 334 scaling factor 89 score 2 scoring model 26 classing 27 recoding 27 risk drivers 31 sensitivity approach 187 settlement
risk 181 shadow rating 39 corporate group 40 external ratings 39 model assumptions 67 model selection 66 point-in-time 42 probability of default (PD) 42 risk factors 39 systems 39 through-the-cycle
42 shock 347 software tools 41 solvency ratio 354 Specialized Lending 97 Spiegelhalter test 319 stability of financial systems 351 statistical tests 316 alpha-error 329 beta-error 329 Chi-square test
Hosmer-Lemeshow-test 296, 320 Monte-Carlo simulation technique 323 most-powerful 293 multi period 298 normal test 298 power 329 Redelmeier test 321 single period 292 Spiegelhalter test 319 type I
error 329 type II error 329 stress test 347 bottom-up approach 356 historical scenario analysis 356 hypothetical scenario analysis 356 market risk 360 migration rate 358 multivariate stress test 356
recession scenarios 350 risk factor 356 scenario analysis 355 sensitivity analysis 355 top-down approach 356 uniform stress test 358 univariate stress test 356 worst-case scenario 350 stress testing
see stress test substitution approach 165 supporter logic 309 Sydney curve 194 systematic risk 105, 128, 315 through-the-cycle 42, 114 traffic light approach 300
extended traffic light approach 300 practical application 300 type I error see statistical tests type II error see statistical tests unexpected loss 348 univariate analysis 32, 57 univariate
discriminatory power 57 upper confidence bound 83 validation 307 alternative approaches 292 assessment of the loan officer's role 255 component-based 250 data quality 250 evidence from the model 251
model validation 22 practical limitations 303 principles of the AIGV 248 probability of default (PD) 289, 291, 307 process-based 259 qualitative approaches 292 rating systems 247, 291 regulatory
requirement 289 result-based 256 statistical tests 292 value-at-risk (VaR) 348 variance-covariance approach 187 Vasicek model 300 Weight of Evidence 34 workout LGD 149 | {"url":"https://vdoc.pub/documents/the-basel-ii-risk-parameters-estimation-validation-and-stress-testing-1613hgv5n198","timestamp":"2024-11-13T12:43:39Z","content_type":"text/html","content_length":"662268","record_id":"<urn:uuid:0ffca4e8-bd2a-42f3-8940-73b79ecfeba0>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00186.warc.gz"} |
Web Paint-by-Number Forum
Puzzle Description Suppressed:Click below to view spoilers
#1: Valerie Mates (valerie) on May 13, 2024
I am stuck at 75% and could use a hint.
N - That sounds like a horrible vacation! Much sympathy!!
#2: Web Paint-By-Number Robot (webpbn) on May 13, 2024
Found to be solvable with moderate lookahead by dbouldin.
#3: David Bouldin (dbouldin) on May 13, 2024
I didn't track my solve :( Lemme see what I can find...
#4: David Bouldin (dbouldin) on May 13, 2024 [HINT]
El dots C25-30R27 (looking at R25) helps some, but also look at R14. Using color logic (and starting on the right side), ask where the black 2 can start, then where the red 1, then the red 2,
etc...as you go, you'll see that there's only one spot for the leftmost 2 to go. There really is just a lot of color logic in here, but with some EL, 2-way and a smile mixed in. Let me know your
percentage if you get stuck again :)
#5: Scott (McEncheese) on May 13, 2024
I think you win. Much sympathy to your cousins.
#6: Wombat (wombatilim) on May 14, 2024 [HINT]
I got to 95% just with some careful color logic (EL was not necessary for R27, for example). The rows with only black or only reds between the sides are helpful guides for getting that far.
R16-17: If either of these puts black into C16, the other will put black into C14. Therefore R18-19 and R22-23 in C14 can be marked white. L&CL.
Summing: We need 4 total blacks in R22-23 C15-16 and C22-23, and the 7 in R22 only accounts for 2 of these. Therefore we will need both 1s from R23 in these columns, and the black in R23C26 must
be part of the last 2. L&CL.
Smile logic finishes the black.
C10: Whether the red 2 goes into R14-15 or R18-19, it will place red into C11, so R22C11 must be white. LL.
R15: If the red 2 is placed in C18-19, R14 is invalid.
LL to finish.
#7: valerie o..travis (bigblue) on May 14, 2024
omg, that's not fun at all :(
#8: Jota (jota) on May 15, 2024
Poor family!
#9: Koreen (mom24plus) on May 15, 2024
Oh my.
#10: Kathy Roth (clydie) on May 25, 2024 [SPOILER]
Comment Suppressed:Click below to view spoilers
#11: Kristen Vognild (kristen) on Jun 18, 2024 [SPOILER]
Comment Suppressed:Click below to view spoilers
Show: Spoilers
You must register and log in to be able to participate in this discussion. | {"url":"https://webpbn.com/read.cgi?type=P&id=38627&sid=&what=all&start=1&showhint=Y","timestamp":"2024-11-10T14:10:55Z","content_type":"text/html","content_length":"7396","record_id":"<urn:uuid:9d1ae40b-cb2d-4e9a-9c1b-3e6a20cad85a>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00846.warc.gz"} |
Giancoli 6th Edition, Chapter 4, Problem 5
Giancoli Answers, including solutions and videos, is copyright © 2009-2024 Shaun Dychko, Vancouver, BC, Canada. Giancoli Answers is not affiliated with the textbook publisher. Book covers, titles,
and author names appear for reference purposes only and are the property of their respective owners. Giancoli Answers is your best source for the 7th and 6th edition Giancoli physics solutions. | {"url":"https://www.giancolianswers.com/giancoli-physics-6th-edition-solutions/chapter-4/problem-5","timestamp":"2024-11-05T23:35:54Z","content_type":"text/html","content_length":"176786","record_id":"<urn:uuid:2e87c17d-ad31-4a26-9bd2-e36daae275bc>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00176.warc.gz"} |
Calculating Normal Forces on Inclined Car - Torque Homework Solution
• Thread starter inner08
• Start date
In summary, a car with a mass of 1200kg and a distance of 3.0m from front to back has a CM located at 0.8m off the road. When the car is at rest on an incline of 20 degrees, with the car oriented in
the direction of the hill, the normal force on each wheel can be calculated by first determining the CM's location and then calculating torques about one point of contact between the wheel and the
Homework Statement
A car of mass 1200kg has a distance (front to back) of 3.0m. On a horizontal surface, the front tires hold 60% of the weight. The CM is located at 0.8m off the road. Find the normal force on each
wheel when the car is at rest on an incline of 20 degrees. The car is oriented in the sens of the hill.
Homework Equations
The Attempt at a Solution
I'm having some trouble putting all my forces together. I drew out an FBD but I'm not sure how to incorporate the CM and the front/back distance into the equations.
So far, this is what I have:
I figured one normal force for the front tires, and another normal force for the back tires.
Fy: N1 + N2 - Fg = 0
N1 + N2 - mgcos= 0
Fx: Ff1 + Ff2 = ma
-mgsintheta - mgsintheta = ma
sum torque: (Ff1 + Ff2)H + (N1-N2)D/2 = 0
H = CM height
D = distance between the front and back
I'm pretty sure the sum torque is wrong...but hopefully I atleast got the Fx and Fy correctly. Any help is appreciated!
Science Advisor
Homework Helper
I think you are missing something important about the CM. Try making a stick figure of the car on level ground with a horizontal line through the CM and two vertical lines to represent the wheels.
(You have to assume the wheels are at the end of the car, since no other distance was given.) Where on the horizontal line is the CM? It is not in the middle.
After you locate the CM, rotate your figure to the incline of the hill and then calculate the torques about one point of contact between wheel and road.
I would like to point out that the equations and calculations provided in the attempt at a solution are correct. However, it is important to clarify that the CM height and the distance between the
front and back tires are not interchangeable. The CM height, denoted as H in the sum torque equation, is the vertical distance from the center of mass to the ground, while the distance between the
front and back tires, denoted as D, is the horizontal distance between the two points of contact with the ground.
In order to accurately calculate the normal forces on the inclined car, both the CM height and the distance between the front and back tires need to be taken into account. The sum torque equation is
correct, but it can be rewritten as:
(Ff1 + Ff2)H + (N1-N2)(D/2)cos(theta) = 0
Where theta is the angle of inclination. This equation takes into account the torque caused by the car's weight and the normal forces on the front and back tires.
In conclusion, the attempt at a solution is correct, but it is important to clarify the difference between CM height and the distance between the front and back tires. Additionally, the sum torque
equation should include the angle of inclination in order to accurately calculate the normal forces on the inclined car.
FAQ: Calculating Normal Forces on Inclined Car - Torque Homework Solution
1. What is torque and why is it important in components?
Torque is a measure of the rotational force of an object around an axis. It is important in components because it determines how effectively a machine can perform its intended function. In other
words, torque helps to determine the power and efficiency of a component.
2. How does torque affect the performance of a component?
The torque applied to a component can affect its performance in several ways. Firstly, a higher torque can lead to a higher acceleration and speed of the component. Additionally, torque can also
affect the stability and durability of a component, as too much or too little torque can cause damage or malfunction.
3. What factors can affect torque in components?
The amount of torque produced by a component can be influenced by various factors, such as the size and shape of the component, the type of material used, and the angle at which the force is applied.
Other factors include friction, air resistance, and the presence of other external forces.
4. How is torque measured in components?
Torque is typically measured in units of force multiplied by distance, such as Newton-meters (Nm) or pound-feet (lb-ft). To measure torque, a torque wrench is commonly used, which applies a specific
amount of force to a component and measures the resulting torque produced.
5. What are some common methods for increasing torque in components?
There are several ways to increase the torque in components, including increasing the size or number of components, changing the gear ratio, and using a more powerful motor or engine. Other
techniques include reducing friction and using materials with higher strength and stiffness properties. | {"url":"https://www.physicsforums.com/threads/calculating-normal-forces-on-inclined-car-torque-homework-solution.146162/","timestamp":"2024-11-08T04:48:42Z","content_type":"text/html","content_length":"82317","record_id":"<urn:uuid:79b10009-f3fe-4831-8e11-49167c027a25>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00137.warc.gz"} |
Observation of Ds1(2536)+→D+π-K+ and angular decomposition of Ds1(2536)+→D*+KS0
Using 462fb-1 of e+e- annihilation data recorded by the Belle detector, we report the first observation of the decay Ds1(2536)+→D+π-K+. The ratio of branching fractions B(Ds1(2536)+→D+π-K+)B(Ds1
(2536) +→D*+K0) is measured to be (3.27±0.18±0.37)%. We also study the angular distributions in the Ds1(2536)+→D*+KS0 decay and measure the ratio of D- and S-wave amplitudes. The S-wave dominates,
with a partial width of ΓS/Γtotal=0.72±0.05±0.01.
ASJC Scopus subject areas
• Nuclear and High Energy Physics
• Physics and Astronomy (miscellaneous)
Dive into the research topics of 'Observation of Ds1(2536)+→D+π-K+ and angular decomposition of Ds1(2536)+→D*+KS0'. Together they form a unique fingerprint. | {"url":"https://pure.korea.ac.kr/en/publications/observation-of-ds12536d%CF%80-k-and-angular-decomposition-of-ds12536dk","timestamp":"2024-11-12T06:31:51Z","content_type":"text/html","content_length":"82950","record_id":"<urn:uuid:36377e26-35e2-4ee1-8935-f033d9348466>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00091.warc.gz"} |
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.ISAAC.2020.60
URN: urn:nbn:de:0030-drops-134042
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2020/13404/
de Berg, Mark ; Markovic, Aleksandar ; Umboh, Seeun William
The Online Broadcast Range-Assignment Problem
Let P = {p₀,…,p_{n-1}} be a set of points in ℝ^d, modeling devices in a wireless network. A range assignment assigns a range r(p_i) to each point p_i ∈ P, thus inducing a directed communication graph
?_r in which there is a directed edge (p_i,p_j) iff dist(p_i, p_j) ⩽ r(p_i), where dist(p_i,p_j) denotes the distance between p_i and p_j. The range-assignment problem is to assign the transmission
ranges such that ?_r has a certain desirable property, while minimizing the cost of the assignment; here the cost is given by ∑_{p_i ∈ P} r(p_i)^α, for some constant α > 1 called the distance-power
We introduce the online version of the range-assignment problem, where the points p_j arrive one by one, and the range assignment has to be updated at each arrival. Following the standard in online
algorithms, resources given out cannot be taken away - in our case this means that the transmission ranges will never decrease. The property we want to maintain is that ?_r has a broadcast tree
rooted at the first point p₀. Our results include the following.
- We prove that already in ℝ¹, a 1-competitive algorithm does not exist. In particular, for distance-power gradient α = 2 any online algorithm has competitive ratio at least 1.57.
- For points in ℝ¹ and ℝ², we analyze two natural strategies for updating the range assignment upon the arrival of a new point p_j. The strategies do not change the assignment if p_j is already
within range of an existing point, otherwise they increase the range of a single point, as follows: Nearest-Neighbor (NN) increases the range of NN(p_j), the nearest neighbor of p_j, to dist(p_j, NN
(p_j)), and Cheapest Increase (CI) increases the range of the point p_i for which the resulting cost increase to be able to reach the new point p_j is minimal. We give lower and upper bounds on the
competitive ratio of these strategies as a function of the distance-power gradient α. We also analyze the following variant of NN in ℝ² for α = 2: 2-Nearest-Neighbor (2-NN) increases the range of NN
(p_j) to 2⋅ dist(p_j,NN(p_j)),
- We generalize the problem to points in arbitrary metric spaces, where we present an O(log n)-competitive algorithm.
BibTeX - Entry
author = {Mark de Berg and Aleksandar Markovic and Seeun William Umboh},
title = {{The Online Broadcast Range-Assignment Problem}},
booktitle = {31st International Symposium on Algorithms and Computation (ISAAC 2020)},
pages = {60:1--60:15},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-173-3},
ISSN = {1868-8969},
year = {2020},
volume = {181},
editor = {Yixin Cao and Siu-Wing Cheng and Minming Li},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2020/13404},
URN = {urn:nbn:de:0030-drops-134042},
doi = {10.4230/LIPIcs.ISAAC.2020.60},
annote = {Keywords: Computational geometry, online algorithms, range assignment, broadcast}
Keywords: Computational geometry, online algorithms, range assignment, broadcast
Collection: 31st International Symposium on Algorithms and Computation (ISAAC 2020)
Issue Date: 2020
Date of publication: 04.12.2020
DROPS-Home | Fulltext Search | Imprint | Privacy | {"url":"http://dagstuhl.sunsite.rwth-aachen.de/opus/frontdoor.php?source_opus=13404","timestamp":"2024-11-06T21:54:30Z","content_type":"text/html","content_length":"7859","record_id":"<urn:uuid:598227bd-e5b2-4e4d-928a-c11bbc6e5b4e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00186.warc.gz"} |
American Mathematical Society
Countably additive homomorphisms between von Neumann algebras
HTML articles powered by AMS MathViewer
by L. J. Bunce and J. Hamhalter
Proc. Amer. Math. Soc. 123 (1995), 3437-3441
DOI: https://doi.org/10.1090/S0002-9939-1995-1285978-7
PDF | Request permission
Let M and N be von Neumann algebras where M has no abelian direct summand. A $\ast$-homomorphism $\pi :M \to N$ is said to be countably additive if $\pi (\sum \nolimits _1^\infty {{e_n}) = \sum \
nolimits _1^\infty {\pi ({e_n})} }$, for every sequence $({e_n})$ of orthogonal projections in M. We prove that a $\ast$-homomorphism $\pi :M \to N$ is countably additive if and only if $\pi (e \vee
f) = \pi (e) \vee \pi (f)$ for every pair of projections e and f of M. A corollary is that if, in addition, M has no Type ${{\text {I}}_2}$ direct summands, then every lattice morphism from the
projections of M into the projections of N is a $\sigma$-lattice morphism. Similar Articles
• Retrieve articles in Proceedings of the American Mathematical Society with MSC: 46L50
• Retrieve articles in all journals with MSC: 46L50
Bibliographic Information
• © Copyright 1995 American Mathematical Society
• Journal: Proc. Amer. Math. Soc. 123 (1995), 3437-3441
• MSC: Primary 46L50
• DOI: https://doi.org/10.1090/S0002-9939-1995-1285978-7
• MathSciNet review: 1285978 | {"url":"https://www.ams.org/journals/proc/1995-123-11/S0002-9939-1995-1285978-7/?active=current","timestamp":"2024-11-02T02:07:58Z","content_type":"text/html","content_length":"61177","record_id":"<urn:uuid:1cc25c52-2503-4f76-893d-1c9e7ded6233>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00688.warc.gz"} |
Practice Parsing and Algebra with the exercise "Quaternion Multiplication"
Learning Opportunities
This puzzle can be solved using the following concepts. Practice using these concepts and improve your skills.
The quaternions belong to a number system that extends the complex numbers. A quaternion is defined by the sum of scalar multiples of the constants i,j,k and 1.
More information is available at http://mathworld.wolfram.com/Quaternion.html
Consider the following properties:
jk = i
ki = j
ij = k
i² = j² = k² = -1
These properties also imply that:
kj = -i
ik = -j
ji = -k
The order of multiplication is important.
Your program must output the result of the product of a number of bracketed simplified quaternions.
Pay attention to the formatting
The coefficient is appended to the left of the constant.
If a coefficient is 1 or -1, don't include the 1 symbol.
If a coefficient or scalar term is 0, don't include it.
The terms must be displayed in order: ai + bj + ck + d.
Example Multiplication
(2i+2j)(j+1) = (2ij+2i+2j² +2j) = (2k+2i-2+2j) = (2i+2j+2k-2)
Line 1:The expression expr to evaluate. This will always be the product of simplified bracketed expressions.
A single line containing the simplified result of the product expression. No brackets are required.
All coefficients in any part of evaluation will be less than 10^9
The input contains no more than 10 simplified bracketed expressions
A higher resolution is required to access the IDE | {"url":"https://www.codingame.com/training/medium/quaternion-multiplication","timestamp":"2024-11-02T01:32:00Z","content_type":"text/html","content_length":"146855","record_id":"<urn:uuid:5b87210b-9ffb-4c55-8bf6-e017301bcf26>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00796.warc.gz"} |
DAT GEN CHEM Flashcards | Knowt
n, l, ml, ms
n= principal; the energy level/distance from nucleus; range is 1-infinity
l= azimuthal; the type of orbital it is (l=0=s, l=1=p, l=2=d, l=3=f); range is 0, 1, 2, or 3
ml= magnetic; shares which p, d, or f orbital you have (oriental space); range is [-l -→ +l]
ms= spin; shares the electrons spin; range is either +1/2 or -1/2 | {"url":"https://knowt.com/flashcards/ff455e05-ec44-40d1-a231-7e6a7a9ddc5d","timestamp":"2024-11-05T10:36:47Z","content_type":"text/html","content_length":"828321","record_id":"<urn:uuid:303dcc3b-7b28-4e0b-963e-64fda1067621>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00019.warc.gz"} |
Constitutive framework optimized for myocardium and other high-strain, laminar materials with one fiber family
Central to this analysis is the identification of six rotation invariant scalars α[1-6] that succinctly define the strain in materials that have one family of parallel fibers arranged in laminae.
These scalars were chosen so as to minimize covariance amongst the response terms in the hyperelastic limit, and they are termed strain attributes because it is necessary to distinguish them from
strain invariants. The Cauchy stress t is expressed as the sum of six response terms, almost all of which are mutually orthogonal for finite strain (i.e. 14 of the 15 inner products vanish). For
small deformations, the response terms are entirely orthogonal (i.e. all 15 inner products vanish). A response term is the product of a response function with its associated kinematic tensor. Each
response function is a scalar partial derivative of the strain energy W with respect to a strain attribute. Applications for this theory presently include myocardium (heart muscle) which is often
modeled as having muscle fibers arranged in sheets. Utility for experimental identification of strain energy functions is demonstrated by showing that common tests on incompressible materials can
directly determine terms in W. Since the described set of strain attributes reduces the covariance amongst response terms, this approach may enhance the speed and precision of inverse finite element
All Science Journal Classification (ASJC) codes
• Condensed Matter Physics
• Mechanics of Materials
• Mechanical Engineering
• A. Finite strain
• B. Anisotropic material
• B. Constitutive behavior
• B. Elastic material
Dive into the research topics of 'Constitutive framework optimized for myocardium and other high-strain, laminar materials with one fiber family'. Together they form a unique fingerprint. | {"url":"https://researchwith.njit.edu/en/publications/constitutive-framework-optimized-for-myocardium-and-other-high-st","timestamp":"2024-11-02T02:45:16Z","content_type":"text/html","content_length":"51757","record_id":"<urn:uuid:65f0fde6-e6f3-4f3f-9935-acc7a17a8b5d>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00424.warc.gz"} |
Quantile Regression in Python for Multiple Quantiles Simultaneously
Asked by Reina Decker on
3 Answers
Answer by Siena Butler
I'd like to perform quantile regression of multiple quantiles simultaneously. Statsmodels API offers quantile regression for a single quantile. Is there some way I can use it for multiple quantiles
simultaneously?,prediction_q0.1 yields the prediction for the 10th quantile,The resulting pretty quantile plot:, Why are other countries reacting negatively to Australia's decision to deploy nuclear
I don't think I have an optimum solution, but I may be close. Based on that cost function, it seems like you are trying to fit one coefficient matrix (beta) and several intercepts (b_k). I would do
this by first fitting a quantile regression line to the median (q = 0.5), then fitting the other quantile regression lines to the residuals. I understand this isn't "simultaneously", but perhaps its
close enough. see below:
X = np.random.randn(100)
Y = X + 0.1*np.random.randn(100)
data = pd.DataFrame(dict(X1=X, Y=Y))
mod = smf.quantreg('Y ~ X1', data)
original_model = mod.fit(q=0.5)
resids = pd.DataFrame(dict(resid=original_model.resid))
residual_model = smf.quantreg('resid ~ 1', resids)
quantiles = {q:residual_model.fit(q=q).params for q in np.arange(0.1,1.,0.1)}
prediction_q0.1 = original_model.predict(data) + quantiles[0.1]
Source: https://stackoverflow.com/questions/62616793/quantile-regression-in-python-for-multiple-quantiles-simultaneously
Answer by Genesis Thornton
There are at least two motivations for quantile regression: Suppose our dependent variable is bimodal or multimodal that is, it has multiple humps. If we knew what caused the multimodality, we could
separate on that variable and do stratified analysis, but if we don’t know that, quantile regression might be good. OLS regression will, here, be as misleading as relying on the mean as a measure of
centrality for a bimodal distribution.,Quantile regression is a valuable tool for cases where the assumptions of OLS regression are not met and for cases where interest is in the quantiles.,Ordinary
least square regression is one of the most widely used statistical methods. However, it is a parametric model and relies on assumptions that are often not met. Quantile regression makes no
assumptions about the distribution of the residuals. It also lets you explore different aspects of the relationship between the dependent variable and the independent variables.,We can now look at
the effects of each of these variables at each quantile. The best way to do this is probably graphically. The output from the program is 9 times as long as for a regular regression (since we have 9
quantiles) and is laborious to read. However, it can be printed out. The SAS code shown below generates four panels of graphics. The first panel:
Program 1: Basic quantile regression
proc quantreg ci=sparsity/iid algorithm=interior(tolerance=1.e-4)data=new;class visit MomEdLevel;model weight = black married boy visit MomEdLevel MomSmokecigsperday MomAge MomAge*MomAgeMomWtGain MomWtGain*MomWtGain/quantile= 0.05 to 0.95 by 0.05plot=quantplot;run;
Get predicted values:
proc quantreg ci=sparsity/iid algorithm=interior(tolerance=1.e-4)data=new;class visit MomEdLevel;model weight = black married boy visit MomEdLevel MomSmokecigsperday MomAge MomAge*MomAgeMomWtGain MomWtGain*MomWtGain/quantile= 0.05 to 0.95 by 0.05;output out = predictquant p = predquant;run;
then we subset this to get only the cases where the other values are their means or modes. First, for maternal age:
data mwtgaingraph;set predictquant;where black = 0 and married = 1 and boy = 1 and MomAge = 0 and MomSmoke = 0 and visit = 3 and MomEdLevel = 3;run;
Then sort it:
proc sort data = mwtgaingraph;by MomWtGain;run;
Then graph it.
proc sgplot data = mwtgaingraph;title ’Quantile fit plot for maternal weight gain’;yaxis label = “Predicted birth weight”;series x = MomWtGain y = predquant1 /curvelabel = “5 %tile”;series x = MomWtGain y = predquant2/curvelabel = “10 %tile”;series x = MomWtGain y = predquant5/curvelabel = “25 %tile”;series x = MomWtGain y = predquant10/curvelabel = “50 %tile”;series x = MomWtGain y = predquant15/curvelabel = “75 %tile”;series x = MomWtGain y = predquant18/curvelabel = “90 %tile”;series x = MomWtGain y = predquant19/curvelabel = “95 %tile”;run;
Source: https://towardsdatascience.com/an-introduction-to-quantile-regression-eca5e3e2036a
Answer by Chris Murphy
The loss in Quantile Regression for an individual data point is defined as:,Thanks to Jacob Zweig for implementing the simultaneous multiple Quantiles in TensorFlow: https://github.com/strongio/
quantile-regression-tensorflow/blob/master/Quantile%20Loss.ipynb,where f(x) is the predicted (quantile) model and y is the observed value for the corresponding input x. The average loss over the
entire dataset is shown below:,Note that for each quantile I had to rerun the training. This is due to the fact that for each quantile the loss function is different, as the quantile in itself is a
parameter for the loss function.
import keras.backend as K
def tilted_loss(q,y,f):
e = (y-f)
return K.mean(K.maximum(q*e, (q-1)*e), axis=-1)
Source: https://www.kdnuggets.com/2018/07/deep-quantile-regression.html | {"url":"https://www.devasking.com/issue/quantile-regression-in-python-for-multiple-quantiles-simultaneously","timestamp":"2024-11-02T18:46:40Z","content_type":"text/html","content_length":"127862","record_id":"<urn:uuid:a173a4c5-a1a7-4d35-b403-5ec2a4e3f1e5>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00444.warc.gz"} |
Synthetic Homotopy Theory
In this package we collect a few results about synthetic homotopy theory. The main one is the construction of the circle as the type of torsors over the group of integers and the proof of its
induction principle.
The files were written by Daniel Grayson.
Overview of contents
Construction of the “half line” by squashing nat. A Map from it to another type is determined by a sequence of points connected by paths. The techniques are a warm-up for the construction of the
We show that the propositional truncation of a ℤ-torsor, where ℤ is the additive group of the integers, behaves like an affine line. It’s a contractible type, but maps from it to another type Y are
determined by specifying where the points of T should go, and where the paths joining consecutive points of T should go. This forms the basis for the construction of the circle as a quotient of the
affine line.
Construction of the circle as Bℤ. A map from it to another type is determined by a point and a loop. Forthcoming: the dependent version. | {"url":"https://unimath.github.io/UniMath/UniMath/SyntheticHomotopyTheory/","timestamp":"2024-11-09T14:12:47Z","content_type":"text/html","content_length":"4939","record_id":"<urn:uuid:d70d7a8e-cc72-4dfa-a8fb-bbbf124c4b2d>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00506.warc.gz"} |
Moving Average Method for Time-series forecasting - Analytics Yogi
Moving Average Method for Time-series forecasting
In this post, you will learn about the concepts of the moving average method in relation to time-series forecasting. You will get to learn Python examples in relation to training a moving average
machine learning model. The following are some of the topics which will get covered in this post:
• What is the moving average method?
• Why use the moving average method?
• Python code example for the moving average methods
What is Moving Average method?
The moving average is a statistical method used for forecasting long-term trends. The technique represents taking an average of a set of numbers in a given range while moving the range. For example,
let’s say the sales figure of 6 years from 2000 to 2005 is given and it is required to calculate the moving average taking three years at a time. In order to calculate the moving average, one would
take an average of 2000-2002, 2001-2003, 2002-2004, 2003-2005, and 2004-2006. Let’s understand it with example. In the table given below, the average value is computed by taking average of previous
three years including the current one.
Year Sales (M$) Moving Average (MA)
2000 4 NaN
2001 7 NaN
2003 9 6.67
2004 7 6.67
2005 10 Needs to be predict assuming
2005 is the current year
Plotting the moving average from the above table would look like the following. The moving average are usually plotted for visualisation purpose.
Fig 1. Moving average of Sales figure from 2000-2005
There are different variations of moving average technique (also termed as rolling mean) such as some of the following:
• Simple moving average (SMA): Simple moving average (SMA) is a form of moving average (MA) that is used in time series forecasting. It is calculated by taking the arithmetic mean of a given set of
data over a certain period of time. It takes the sliding window over a given time period as shown in the above example (3 years interval). It can be termed as an equally weighted mean of n
records. The advantage of using SMA is that it is simple to calculate and understand. However, one disadvantage is that it is based on past data and does not take into account future events. For
this reason, SMA should not be used as the sole forecasting method, but rather as one tool in a broader forecasting arsenal.
• Exponential moving average (EMA): Exponential moving average (EMA) is a type of moving average that places more weight on recent data points and helps smooth out the data points in a time series.
Unlike simple moving averages, which give equal weight to all data points, EMAs give more weight to recent data points. This makes it more responsive to new information than a simple moving
average. EMAs are often considered to be more responsive to changes in the underlying data. There are a number of different ways to calculate an EMA, but the most common approach is to use a
weighting factor that decreases exponentially over time. This weighting factor can be used to give more or less emphasis to recent data points, depending on the needs of the forecaster.
Exponential moving average forecasting can be used with any time series data, including stock prices, economic indicators, or weather data.
Interpreting a moving average graph that plots output of the moving average method in time series forecasting (as shown in the above plot) can be a useful tool for analysts, economists and investors
to assess the current state of an asset or market. The concept behind this analysis is to identify trends in the data and make predictions about future outcomes based on these trends.
In simple terms, a moving average graph takes the average of several different points in the data set and then plots it over time. A longer-term moving average will give more emphasis to older data
points, while a shorter-term one will look more closely at recent values. In case of stock price prediction, by examining how the line moves from period to period, investors can get a sense of where
prices may be headed in the near future. For example, if prices were generally increasing with each new period up until now, then investors may expect prices to continue rising at least until there
is clear evidence suggesting otherwise. On the other hand, if prices began dropping off sharply after some time period and continued to do so until present day, then this could indicate that
downwards trend could continue.
It is important to note that while interpreting moving averages can provide helpful insights into future market fluctuations, it should not be treated as an infallible indicator. A single moving
average line may not accurately depict all of the nuances and complexities of a given market environment; rather it should be used as one tool among many when trying to draw conclusions about
potential price action going forward. As such, it may also be beneficial to take into account other types of technical analysis like support/resistance levels or momentum indicators when building out
an entire trading strategy around a particular asset or security. Additionally, using multiple different moving averages with different lengths (i.e., short-, medium-, and long-term) can help
investors better analyze how markets are behaving across various time horizons which could lead them to make wiser investment decisions about their portfolios going forward.
Why use Moving Average method?
The moving average method is widely used in time-series forecasting because of its flexibility and simplicity. Unlike other methods, such as ARIMA or neural networks, it does not require an advanced
knowledge of mathematics. This means that even those with basic statistical knowledge can use it to get reliable results.
The main advantage of the moving average method is that it takes into account all previous values when predicting future values. This helps to reduce the effect of outliers when making predictions
and also makes it easier to identify seasonal patterns in a time-series data set. Furthermore, the weighting methodology used by the moving average method gives more importance to recent values over
older ones, which is beneficial when predicting short-term trends.
In addition, the simple moving average (SMA) method is usually computationally faster than more complex methods such as exponential moving average (EMA). It also requires less parameters and can be
used on shorter data sets. And finally, the SMA method has been proven to be effective in many applications such as stock market analysis as well as seasonal forecasting.
Overall, the moving average method is an effective tool for short-term forecasting due to its flexibility and ease of use. Its ability to take into account all past values when making predictions
ensures accuracy while its ability to identify seasonal patterns means that it can be used effectively for long-term forecasting too. Furthermore, its computational speed and minimal parameters make
it a popular choice for many applications.
Python Example for Moving Average Method
Here is the Python code for calculating moving average for sales figure. The code that calculates the moving average or rolling mean is df[‘Sales’].rolling(window=3).mean(). The example below
represents the calculation of simple moving average (SMA).
import pandas as pd
import numpy as np
# Create a numpy array of years and sales
arr = np.array([['2000', 4], ['2001',7],
['2002',4], ['2003',9],
['2004',7], ['2005',10]])
# Transpose array
arr_tp = arr.transpose()
# Create a dataframe
df = pd.DataFrame({'Years': arr_tp[0], 'Sales': arr_tp[1]})
# Calculate rolling mean
df['MA'] = df['Sales'].rolling(window=3).mean()
This is how the output would look like:
Fig 2. Moving average of sales figure
Latest posts by Ajitesh Kumar
(see all)
One Response
Umair January 13, 2022 at 9:29 pm
it is very helpfull article for calculation of your moving averages,you must keep up to help others.
I found it very helpful. However the differences are not too understandable for me
Very Nice Explaination. Thankyiu very much,
in your case E respresent Member or Oraganization which include on e or more peers?
Such a informative post. Keep it up
Thank you....for your support. you given a good solution for me.
Ajitesh Kumar
I have been recently working in the area of Data analytics including Data Science and Machine Learning / Deep Learning. I am also passionate about different technologies including programming
languages such as Java/JEE, Javascript, Python, R, Julia, etc, and technologies such as Blockchain, mobile computing, cloud-native technologies, application security, cloud computing platforms, big
data, etc. I would love to connect with you on Linkedin. Check out my latest book titled as First Principles Thinking: Building winning products using first principles thinking.
Posted in Data Science, Machine Learning. Tagged with Data Science, machine learnin. | {"url":"https://vitalflux.com/moving-average-method-for-time-series-forecasting/","timestamp":"2024-11-04T04:15:19Z","content_type":"text/html","content_length":"112615","record_id":"<urn:uuid:538bb541-6e09-45bf-b1de-5c9aee43feac>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00698.warc.gz"} |
One Step Equations With Rational Numbers Worksheet
One Step Equations With Rational Numbers Worksheet serve as foundational devices in the realm of maths, offering a structured yet versatile platform for learners to explore and master numerical
concepts. These worksheets use a structured approach to comprehending numbers, nurturing a strong foundation whereupon mathematical efficiency thrives. From the most basic checking workouts to the
ins and outs of advanced estimations, One Step Equations With Rational Numbers Worksheet satisfy students of diverse ages and skill degrees.
Revealing the Essence of One Step Equations With Rational Numbers Worksheet
One Step Equations With Rational Numbers Worksheet
One Step Equations With Rational Numbers Worksheet -
One step equations Two step equations Multi step equations Absolute value equations Radical equations Easy Hard Rational equations Easy Hard Solving proportions Percent problems Distance rate time
word problems Mixture word problems Work word problems Literal Equations
One Step Equations With Rational Numbers Displaying top 8 worksheets found for this concept Some of the worksheets for this concept are One step equations with fractions One step equations date
period Solving rational equations 1 Solving one step equations additionsubtraction One step equations integers mixed operations level 1 s1
At their core, One Step Equations With Rational Numbers Worksheet are lorries for conceptual understanding. They encapsulate a myriad of mathematical principles, directing students through the maze
of numbers with a collection of appealing and purposeful exercises. These worksheets transcend the limits of typical rote learning, encouraging active interaction and promoting an user-friendly grasp
of mathematical partnerships.
Nurturing Number Sense and Reasoning
One Step Equations Worksheet Pdf
One Step Equations Worksheet Pdf
Solving Rational Equations Date Period Solve each equation Remember to check for extraneous solutions 1 1 6 k2 1 3k2 1 k 2 1 n2 1 n 1 2n2 3 1 6b2 1 6b 1 b2 4 b 6 4b2 3 2b2 b 4 2b2 5 1 x 6 5x 1 6 1
6x2 1 2x 7 6x2 7 1 v 3v 12 v2 5v 7v 56 v2 5v 8 1 m2 m 1 m 5 m2 m 9 1
NOTES One Step Equations with Date Rational Coefficients To solve an equation when the coefficient is a fraction each side of the equation by the of the fraction Examples Solve the following
equations Remember to check your solutions 1 3 18 4 c 2 1 12 5 x 3 2 4 9 d 4 5 15 3 n 5 11 1 16 22 s 6 11
The heart of One Step Equations With Rational Numbers Worksheet lies in growing number sense-- a deep understanding of numbers' definitions and affiliations. They encourage expedition, welcoming
learners to study math procedures, decode patterns, and unlock the enigmas of series. Through provocative difficulties and sensible challenges, these worksheets come to be gateways to developing
reasoning skills, supporting the analytical minds of budding mathematicians.
From Theory to Real-World Application
8 Best Images Of Solving Rational Equations Worksheet Simple Solving Equations With Rational
8 Best Images Of Solving Rational Equations Worksheet Simple Solving Equations With Rational
Six half sheets with positive and negative integers for students who are learning or reviewing solving one step equations Half sheets cover adding subtracting multiplying multiplying with fractions
division equations and mixed review with rational numbers
Access your students understanding of solving one step equations with this interactive digital and printable activity Students will model the solution of a given equation with algebra tiles AND write
steps by applying the appropriate property of equality
One Step Equations With Rational Numbers Worksheet act as conduits linking academic abstractions with the apparent facts of daily life. By infusing functional circumstances right into mathematical
workouts, learners witness the relevance of numbers in their surroundings. From budgeting and dimension conversions to comprehending statistical data, these worksheets equip pupils to wield their
mathematical expertise beyond the boundaries of the classroom.
Diverse Tools and Techniques
Flexibility is inherent in One Step Equations With Rational Numbers Worksheet, employing an arsenal of instructional tools to accommodate varied knowing styles. Aesthetic help such as number lines,
manipulatives, and electronic resources function as buddies in envisioning abstract principles. This diverse approach makes certain inclusivity, fitting students with various preferences, toughness,
and cognitive styles.
Inclusivity and Cultural Relevance
In a progressively diverse globe, One Step Equations With Rational Numbers Worksheet embrace inclusivity. They go beyond social borders, incorporating examples and issues that reverberate with
learners from varied backgrounds. By integrating culturally appropriate contexts, these worksheets promote a setting where every student really feels represented and valued, enhancing their
connection with mathematical principles.
Crafting a Path to Mathematical Mastery
One Step Equations With Rational Numbers Worksheet chart a course towards mathematical fluency. They instill determination, important reasoning, and analytic abilities, necessary qualities not only
in mathematics but in numerous elements of life. These worksheets equip students to browse the complex surface of numbers, nurturing a profound recognition for the elegance and logic inherent in
Embracing the Future of Education
In an age noted by technological improvement, One Step Equations With Rational Numbers Worksheet effortlessly adapt to electronic platforms. Interactive user interfaces and electronic resources boost
standard knowing, using immersive experiences that go beyond spatial and temporal limits. This combinations of conventional approaches with technical technologies advertises an encouraging age in
education and learning, fostering a more dynamic and interesting learning setting.
Verdict: Embracing the Magic of Numbers
One Step Equations With Rational Numbers Worksheet characterize the magic inherent in maths-- a captivating trip of exploration, discovery, and proficiency. They go beyond conventional pedagogy,
functioning as stimulants for sparking the fires of curiosity and query. Through One Step Equations With Rational Numbers Worksheet, students embark on an odyssey, unlocking the enigmatic globe of
numbers-- one trouble, one service, at once.
8 Best Images Of Rational Numbers 7th Grade Math Worksheets Algebra 1 Worksheets Rational
Operations With Rational Numbers Worksheet
Check more of One Step Equations With Rational Numbers Worksheet below
Solving Rational Equations 1 Worksheet Answers Precalculus Worksheet 3 Solving Rational
Algebra Equations Worksheets
Solving Equations With Rational Numbers Worksheets
One Step Equations Worksheet Equations Worksheets Worksheet Monks WorkSheet For Nobb
10 Rational Equations Worksheet Worksheets Decoomo
Rational Numbers Worksheet For 8th 9th Grade Lesson Planet
One Step Equations With Rational Numbers Worksheets
One Step Equations With Rational Numbers Displaying top 8 worksheets found for this concept Some of the worksheets for this concept are One step equations with fractions One step equations date
period Solving rational equations 1 Solving one step equations additionsubtraction One step equations integers mixed operations level 1 s1
One Step Equations With Rational Coefficients
One Step Equations with Rational Coefficients Practice and Problem Solving A B Solve 1 1 4 3 n 2 Write each sentence as an equation 9 Eight less than 1 3 a number n is 13 10 A number f multiplied by
12 3 is 73 8 Write an equation Then solve 11 During unusually cold weather the temperature in Miami Beach
One Step Equations With Rational Numbers Displaying top 8 worksheets found for this concept Some of the worksheets for this concept are One step equations with fractions One step equations date
period Solving rational equations 1 Solving one step equations additionsubtraction One step equations integers mixed operations level 1 s1
One Step Equations with Rational Coefficients Practice and Problem Solving A B Solve 1 1 4 3 n 2 Write each sentence as an equation 9 Eight less than 1 3 a number n is 13 10 A number f multiplied by
12 3 is 73 8 Write an equation Then solve 11 During unusually cold weather the temperature in Miami Beach
One Step Equations Worksheet Equations Worksheets Worksheet Monks WorkSheet For Nobb
Algebra Equations Worksheets
10 Rational Equations Worksheet Worksheets Decoomo
Rational Numbers Worksheet For 8th 9th Grade Lesson Planet
Operations With Rational Numbers Worksheet
One Step Equations With Rational Coefficients Worksheet
One Step Equations With Rational Coefficients Worksheet
One Step Equations With Rational Coefficients Worksheet | {"url":"https://alien-devices.com/en/one-step-equations-with-rational-numbers-worksheet.html","timestamp":"2024-11-08T12:05:33Z","content_type":"text/html","content_length":"27110","record_id":"<urn:uuid:38f981ff-de68-46e9-9c33-75db5c2b4717>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00185.warc.gz"} |
Binary Tree Level Order Traversal
Problem statement:
Given the root of a binary tree, return the level order traversal of its nodes' values. (i.e., from left to right, level by level).
Example 1:
Input: root = [3,9,20,null,null,15,7]
Output: [[3],[9,20],[15,7]]
Example 2:
Input: root = [1]
Output: [[1]]
Example 3:
Input: root = []
Output: []
* The number of nodes in the tree is in the range [0, 2000].
* -1000 <= Node.val <= 1000
Solution in C++
Solution in Python
Solution in Java
Solution in Javascript
Solution explanation
The algorithm for level order traversal uses a Breadth-First Search approach. We use a queue to traverse the tree level by level.
1. Initialize an empty result list.
2. If the root is empty, return the empty result list.
3. Create a queue, and enqueue the root element.
4. Enter a loop while the queue is not empty.
- Initialize an empty level list.
- Get the number of elements at the current level by using the queue's size.
- For each element at the current level, do the following:
-- Dequeue the element, and add its value to the level list.
-- Enqueue the left child of the dequeued element (if it exists).
-- Enqueue the right child of the dequeued element (if it exists).
- Add the level list to the result list.
5. Return the resulting list of lists. | {"url":"https://www.freecodecompiler.com/tutorials/dsa/binary-tree-level-order-traversal","timestamp":"2024-11-07T13:31:57Z","content_type":"text/html","content_length":"35277","record_id":"<urn:uuid:c96e4034-0bb5-4e88-8f17-4f92ab2f60d9>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00719.warc.gz"} |
Save Percentage Calculator
Save Percentage Calculator - Web this simple percentage discount calculator is here to help you in everyday situations. Web using the percentage savings calculator is a straightforward process. Web
the 4% rule. Web save percentage is the percentage of saves a goalkeeper makes to the number of shots made on the goal. A “shot” is an action that directs the puck towards the net and is either
stopped by the goaltender or goes in the net.
Enter your own data or use the default values and get. If a relief pitcher has had 27 save opportunities. Web this simple percentage discount calculator is here to help you in everyday situations.
The monthly cost of property taxes, hoa dues and. Web the average yield in the u.s. Calculate your save percentage for any sport with this. Emily saves 1 percent of her salary because that’s all she
Save Percentage Calculator How Is It Calculated?
A “save” is a shot on the goal that the goaltender stops. Web save percentage is the percentage of saves a goalkeeper makes to the number of shots made on the goal. Web calculate the.
How To Calculate Save Percentage Step By Step
Web the goalie save percentage calculator is a valuable tool for both hockey enthusiasts and statisticians. Web save percentage is a statistic that measures the performance of a hockey goalie. Save
percentage = saves ÷.
How To Calculate Save Percentage Step By Step
Calculate interest growth on all your savings accounts. Learn the formula and examples of percentage calculation in mathematics and common phrases. The monthly cost of property taxes, hoa dues and.
Enter the number of saves.
Percentage of an Amount GCSE Maths Steps, Examples & Worksheet
The monthly cost of property taxes, hoa dues and. Web the 4% rule. Web use this online tool to calculate any percentage value, difference or change. Web this simple percentage discount calculator is
here to.
How Much to Save Monthly Your Savings Percentage Money Bliss
Enter the number of saves and shots by the goalie and get the save. Web save percentage is the percentage of saves a goalkeeper makes to the number of shots made on the goal. If.
Ratio to Percentage Calculator Inch Calculator
Web calculate the percentage of shots on goal that goalie stops in hockey games using this online tool. Web using the percentage savings calculator is a straightforward process. Learn the formula and
examples of percentage.
Savings Rate 101 What It Is and How to Calculate It Savology
The calculator gives you the. Calculate interest growth on all your savings accounts. Enter the number of saves and shots by the goalie and get the save. Provide the number against which this
percentage must.
Percentage Of A Percentage Calculator Sales Cheapest, Save 65 jlcatj
This online tool takes the number of saves and shots on. Web save percentage is the percentage of saves a goalkeeper makes to the number of shots made on the goal. Estimate payments on biden’s.
How To Calculate Percentage Solve Through Percentage Formula
It's the perfect tool if you want to determine how much you need to pay after a. Web the goalie save percentage calculator is a valuable tool for both hockey enthusiasts and statisticians. Web the.
Savings Rate 101 What It Is and How to Calculate It Savology
Web use this online tool to calculate any percentage value, difference or change. A “save” is a shot on the goal that the goaltender stops. Enter your own data or use the default values and.
Save Percentage Calculator Enter the total number of shots faced by the goalie and the number of goals allowed in the respective input fields. The shooting percentage calculator (soccer) emerges as
a. Web let’s say emily, age 30, earns $40,000 a year and her boss, ebenezer, gives 1 percent annual raises. A “save” is a shot on the goal that the goaltender stops. Web the average yield in the u.s.
Save Percentage Calculator Related Post : | {"url":"https://wm.edu.pl/view/save-percentage-calculator.html","timestamp":"2024-11-08T15:29:11Z","content_type":"application/xhtml+xml","content_length":"24705","record_id":"<urn:uuid:6255d99f-08e9-4282-8750-567675757330>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00233.warc.gz"} |
Surface Areas and Volumes MCQ Test CBSE Class 9 MathsSurface Areas and Volumes MCQ Test CBSE Class 9 Maths - The Brainbox Tutorials
Surface Areas and Volumes MCQ Test CBSE Class 9 Maths
Take this Maths Practice Test On Surface areas and volumes here. This online aptitude test on Surface Areas and Volumes of Cube, Cuboid, Cylinder, Cone and Sphere will increase the confidence in
students and they will never hesitate to attempt mensuration sums. Surface Areas and Volumes MCQ Test CBSE Class 9 Maths is provided by The Brainbox Tutorials. The quiz has multiple choice questions
on Volume and Surface Area of 3D solids. This quiz has answers too.
Surface Areas and Volumes MCQ Test CBSE Class 9 Maths
Below are the quiz and the rules to be followed while performing this test on Surface Areas and Volumes MCQ CBSE Class 9 Maths.
You can go through the tutorial of this chapter in video form to understand the concept clearly. Here is the link to the video tutorial.
Video tutorial on:
Rules for Surface Area and Volume MCQ Test CBSE Class 9 Maths
• This quiz has 10 multiple-choice questions.
• Each sum has 2 marks.
• So the maximum marks of this test are 20.
• There is no time limit.
• You should be ready with a pen and copy in your hand to solve the sums.
• keep your Maths book away from you. This is the test of your memory. So do not take the help of the Maths book.
• The correct answer and explanation are provided at the end of this quiz.
ICSE Related links
Chapter-wise Quiz/MCQ/Test:
Sample Papers:
Board Papers:
Chapter-wise Quiz/MCQ/Test:
Sample Papers:
Subscribe our YouTube channel for latest educational updates.
Leave a Comment | {"url":"https://thebrainboxtutorials.com/2021/11/surface-areas-and-volumes-mcq-test-cbse-class-9-maths.html","timestamp":"2024-11-02T17:05:28Z","content_type":"text/html","content_length":"115157","record_id":"<urn:uuid:9da25ac5-1e95-4e21-903e-031251c8c6b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00785.warc.gz"} |
Scramble String LeetCode
In this problem, we must determine whether one string can be transformed into another by performing a series of character splits and swaps. This concept has practical applications in cryptography,
where strings are often manipulated through various operations to achieve encryption or decryption. Additionally, it is relevant in data compression algorithms that rearrange or transform strings to
optimize space usage. Imagine we’re playing a word game where you have a set of letters, and our goal is to create as many valid words as possible with those letters. This is similar to the Scramble
String problem.
In the illustration above, we can see that great can be scrambled to rgaet. Here, we’ll explore the problem statement and provide a Python solution and explanation.
Problem statement
Given two strings, s1 and s2 of the same length, return TRUE if s2 is a scrambled string of s1; otherwise, return FALSE.
To get one string from another, we need to follow the steps below:
1. If the string’s length is 1, no further action is taken.
2. If the string’s length is greater than 1, perform the following:
1. Divide the string into two substrings (nonempty) at a random index. In other words, if the string is s, we can split it into two parts: x and y, such that s = x + y.
2. Either swap the positions of the substrings or keep them in their original order (the decision is taken randomly). This means s may become s = x + y or s = y + x.
3. Apply the scrambling process recursively to the two substrings, x and y. Given two strings, s1, and s2, which have the same length; the goal is to determine whether s2 can be derived from
applying the scramble operation on s1. If this is possible, return True; otherwise, return False.
Knowledge test
Attempt the quiz below to test your concepts on this problem:
What is the Scramble String problem in Python?
Question 1 of 30 attempted
We use a dynamic programming approach with a 3D table to store intermediate results efficiently, avoiding redundant computations and facilitating memoization, thus improving the algorithm’s
The algorithm can be described as follows:
1. Base case:
1. If the lengths of s1 and s2 are not equal, return FALSE as the transformation is not possible.
2. Table initialization:
1. Create a 3D table table with dimensions [n][n][n + 1] where n is the length of the strings s1 and s2. Initialize all values to FALSE.
3. Base case for single characters:
1. Iterate over characters in both strings. If characters in the same positions are equal, set table[i][j][1] to TRUE, indicating that a single-character substring of s1 can be scrambled to
match that of s2.
4. Dynamic programming:
1. Use nested loops to fill values in the table. Iterate over the lengths of the substrings, starting positions, and splitting positions.
2. For each substring length from 2 to n, for each starting position i in s1, and for each starting position j in s2, check if it’s possible to split, scramble, and match the current substrings
of s1 and s2.
3. Set table[i][j][length] to True if the scramble is possible by checking the possible splits and their combinations..
5. Result:
1. The final result is stored in table[0][0][len(s1)], indicating whether the entire strings s1 and s2 can be scrambled to match each other.
This can be seen in the slides below:
1 of 4
Educative 99 helps you solve complex coding problems, such as the Scramble String problem, by recognizing patterns and applying the right algorithms.
The code for this problem is as follows:
A problem that involves reversing a string
Finding the longest common subsequence between two strings
Splitting a string into two substrings and swapping them to match another string
Rearranging characters in a string to form a palindrome
def isScramble(s1, s2):
if len(s1) != len(s2):
return False
n = len(s1)
table = [[[False] * (n + 1) for _ in range(n)] for _ in range(n)]
# Base case for single characters
for i in range(n):
for j in range(n):
if s1[i] == s2[j]:
table[i][j][1] = True
# Dynamic programming to fill the table
for length in range(2, n + 1):
for i in range(n - length + 1):
for j in range(n - length + 1):
for k in range(1, length):
if (table[i][j][k] and table[i + k][j + k][length - k]) or (table[i][j + length - k][k] and table[i + k][j][length - k]):
table[i][j][length] = True
return table[0][0][len(s1)]
# Driver Code
def main():
# Test cases
test_cases = [
("great", "rgeat", True),
("abcde", "caebd", False),
("a", "a", True),
("abc", "bca", True),
("abcd", "bdac", False)
for s1, s2, expected in test_cases:
result = isScramble(s1, s2)
print(f"isScramble({s1}, {s2}) = {result} (expected: {expected})")
if __name__ == "__main__":
Time complexity
The time complexity of the isScramble function is $O(n^4)$. This is because the algorithm uses three nested loops: the outermost loop iterates over substring lengths up to $n$, the middle loop
iterates over starting positions in s1 up to$n$, and the innermost loop iterates over starting positions in s2 up to$n$. Additionally, there is an inner loop that iterates over possible splitting
positions up to $n$, resulting in a total of $O(n^4)$ operations.
Space complexity
The space complexity of the isScramble function is $O(n^{3})$, where n is the length of the input strings s1 and s2. This is due to the size of the 3D table.
Copyright ©2024 Educative, Inc. All rights reserved | {"url":"https://www.educative.io/answers/scramble-string-leetcode","timestamp":"2024-11-11T11:14:38Z","content_type":"text/html","content_length":"217072","record_id":"<urn:uuid:8e23a374-7834-454a-913e-63892496627f>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00227.warc.gz"} |
If the total number of H2 molecules is double of the class 11 physics JEE_Main
Hint: Here, we need to find the formula for kinetic energy of gases. Apply the given condition in two separate cases for two hydrogen molecules and one oxygen molecule. Find the ratio between the
Complete Step By Step Solution
We are given that the total number of \[{H_2}\] is double that of the number of \[{O_2}\]molecules. Which means that\[{n_{{H_2}}} = 2 \times {n_{{O_2}}}\]. We need to obtain its energy relation by
using the kinetic theory of gases. According to the equi-partition theorem of gases, in thermal equilibrium, the kinetic energy of the particle is a product of average energy created by movement of a
gas particle in each of its degrees of freedom. The average energy per particle is given as \[{k_b}T/2\], where \[{k_b}\]is called as Boltzmann’s constant and T is the temperature of the gas
Now according to kinetic theory of gases and equipartition theorem
\[E = \dfrac{3}{2}{k_b}T\] (Since a gas particle have 3 degrees of freedom)
Now, the energy [produced by 2 molecules of hydrogen is given as
\[{E_{{H_2}}} = 2n \times E\]
Now the energy produced by one molecule of oxygen is given as :
\[{E_{{O_2}}} = E\]
Finding the ratio by dividing the above terms, we get,
\[\dfrac{{{E_{{H_2}}}}}{{{E_{{O_2}}}}} = \dfrac{2}{1}\](Common term E is cancelled out)
Therefore the ratios of the energy produced at 300K is \[2:1\]
Hence, Option(c) is the right answer for the given question.
Note Kinetic theory of gases states that gases molecules do not have any force of attraction between them and volume of the gas molecules are significantly smaller than the space occupied by gases.
In order to explain this theory, five postulates were constructed which are
1. The molecules of gas are very far apart.
2. Gas molecules move in constant random motion
3. Molecules can collide with each other when they’re within a specified boundary.
4. Due to collision, no molecules lose kinetic energy and are said to be perfectly elastic.
5. Molecules have no attractive or repulsive force but have intermolecular force. | {"url":"https://www.vedantu.com/jee-main/if-the-total-number-of-h2-molecules-is-double-of-physics-question-answer","timestamp":"2024-11-04T10:23:21Z","content_type":"text/html","content_length":"145518","record_id":"<urn:uuid:da36ff19-e6cf-4caa-91cf-ff60e71c288d>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00437.warc.gz"} |
Trigonometry, Radicals, and a Very Common Error
(A new question of the week)
While I was looking through recent questions to choose one to post, I ran across one that deals with an error we see very commonly – in fact, a student I had worked with that very afternoon in
face-to-face tutoring had done the same sort of thing. The context here deals with trigonometric identities, but it could just as well occur in working with the Pythagorean Theorem in geometry, or
solving an equation in algebra, or even in calculus. We’ll also see a number of other pitfalls for beginning algebra students.
Why is the sine squared?
This is the question, sent to us by Simran in mid-August:
Hello math doctors,
When we write cos A as sin A, that is,
cos A = 1/√1 – sin²A
why is it sin² A and not sin A? Shouldn’t it be sin A in the root as we have taken the square root?
Sorry, if the doubt is too dumb.
Thanking you,
Our first task, besides reassuring Simran, is to determine which of several issues is the central doubt.
Doctor Rick answered with a variation on our common statement that “the only stupid question is the one you don’t ask”:
A doubt unexpressed is dumb (literally – “dumb” means silent). It is smart to express your doubts so that they can be cleared up.
Simran is doing just what needs to be done. I often discuss with tutees the fact that asking questions is the only way to truly learn. If you feel you can’t ask a question in class, ask a tutor, or
write to a site like ours.
The word “dumb” originally meant “mute” (unable to speak), but came increasingly to mean “stupid” (having nothing to say), and its use to refer to mute people became offensive. But either way, if you
ask questions, you are not dumb!
Clarifying the expression
What you wrote has several problems in terms of order of operations, and some other problems as well, so I can’t be exactly sure what you are thinking. You wrote:
When we write cos A as sin A, that is,
cos A = 1/√1 – sin²A
why is it sin² A and not sin A? Shouldn’t it be sin A in the root as we have taken the square root?
I assume that you meant “when we write cos A in terms of sin A”.
This is a phrase many students need to be introduced to! In fact, again, just today I helped a student understand what she was being asked to do, when the question said to express one function in
terms of another. It means to use the latter to state the former.
Now, what does “cos A = 1/√1 – sin²A” mean?
What you wrote on the second line is not correct. Here is how the correct equation is derived from the Pythagorean identity:
sin^2 A + cos^2 A = 1
cos^2 A = 1 – sin^2 A
cos A = ± √(1 – sin^2 A)
The square root is not in the denominator as you put it. Also, notice the plus-or-minus sign. That is needed to make the equation true for all angles — for instance, if A = 120°, then sin A = √3/
2 and cos A = –1/2.
The parentheses are also important! This is a common problem in typing radicals, because we can’t draw the bar (called a vinculum) over the radicand, to show what we are taking a root of. In this
setting, we need to use parentheses in place of the untypable vinculum. As Simran had typed it, “1/√1 – sin^2 A” would just mean \(\frac{1}{\sqrt 1}-\sin^2(A)\) rather than what is presumably
intended, \(\frac{1}{\sqrt{1-\sin^2(A)}}\), or what it should be, \(\sqrt{1-\sin^2(A)}\). But perhaps he did mean what he wrote, as we’ll be seeing.
Focusing on the algebra
But these things aren’t what you’re asking about; you are focused only on whether sin A should be squared. Could you say more about why you think sin A should not be squared? What if we put trig
aside, and just solve the equation
x^2 + y^2 = 1
for y in terms of x?
y^2 = 1 – x^2
y = ± √(1 – x^2)
Perhaps you’re thinking that, to take the square root of 1 – x^2, we can just take the square root of 1 and the square root of x^2, getting
√(1 – x^2) = √1 – √(x^2) = 1 – x
If that’s what you’re thinking, we can talk it through thoroughly — but if it’s something else, I want to know, so I don’t waste your time. So let me know what you’re thinking, and we’ll talk
about it.
The suggestion here is that Simran might be thinking that the square root “distributes over the addition”, so that \(\sqrt{a+b}=\sqrt{a}+\sqrt{b}\). As we’ll see, this is a common mistake.
Simran replied,
Hi Doctor Rick,
Actually I thought that when we will square it the square will go away. I don’t understand the square still remaining in the expression. Since cos A is written without square so shouldn’t 1 – sin
A be written without square too? I don’t understand how after writing square root of a number we can write the square along with it. If possible can you explain me in simple terms.
I hope it’s understandable because I am myself confused how to express the doubt.
And thankyou I often think my doubts are too dumb to be answered.
It appears that he did mean \(\cos(A) = 1 – \sin(A)\), and the distribution idea is probably part of the problem. We’re getting closer.
And he is saying “square it” probably because of the awkwardness of expressing a square root in English; I find many students say “square root it”, which is not a standard verb, rather than the
proper but wordy “extract the square root” or “take the square root”. It’s also easy to omit the word “root”, as he has done here.
Doctor Rick wrote back, pointing out the wrong word usage (which, again, I find to be quite common even in native speakers of English):
I cannot understand what you are saying. “When we square it …” What did we square? Did you mean “when we take the square root“? If we’re going to straighten out your confusion, we have to write
carefully. I know that’s hard when you’re confused!
Perhaps you are thinking that, for example, √(x^2) = x. You may know that this isn’t quite correct – if x can be negative, then we should write √(x^2) = |x|. But I don’t want to cause more
confusion, so I’ll keep it simple: if x is positive, then squaring it and then taking the square root of the result gets you back to x.
Note, though, that when we do this, both the square-root sign and the exponent 2 go away. If we still need the radical (square-root sign) then we also still need the square.
That is, we can’t say that \(\sqrt{x^2}=\sqrt{x}\). But more important, in the expression we’re discussing, we aren’t taking the square root of the square itself at all:
And if we do something else to the square before we take the square root, then we can’t just “cancel” the two symbols. We have to first write the expression just as it is, and look to see if
there is any valid property or fact that we can use to simplify it. If I have
y = √(1 – x^2)
there is no property of exponents or radicals that I can use to simplify this!
I very often find that students need to consciously ask themselves whether there is more that can be done, forcing themselves to stop and think, rather than let their “simplifying momentum” carry
them beyond what is legal.
A numerical example
When you don’t know whether a step is valid, one way to check is to try it with specific numbers.
Let’s plug in some numbers. If x is greater than 1, then 1 – x^2 will be less than zero, and we can’t take the square root of a negative number. (Well, we can, but we’d need to talk about
imaginary numbers then, and that’s irrelevant to the trig context.) So I’ll choose a number between 0 and 1 – let’s say x = 0.75. Then
y = √(1 – (0.75)^2)
= √(1 – 0.5625)
= √0.4375
= 0.6614
I think (though you did not confirm this) that you are thinking √(1 – x^2) is the same as √1 – √(x^2), which is the same as 1 – x. But if x = 0.75, then 1 – x = 0.25, not 0.6614.
So, $$\sqrt{1-0.75^2} = 0.6614$$ but $$1-0.75 = 0.25$$
They are not equal. So when you come to an expression like the former (but with a variable so you can’t just evaluate it), you must put on your brakes and stop. Don’t keep simplifying when the next
step, though it feels natural, is illegal!
If the values had been equal, it would not prove the general statement to be true; but if it fails for one number, then it can’t be generally true!
Again, I have to ask: Is this what you’re thinking? If the problem lies here, then we can solve your confusion without bringing trigonometry into it. If not, then keep trying to explain your
thinking as clearly as you can. For instance, don’t use words like “it” without saying exactly what “it” is.
Success, and a deeper look
Simran replied, confirming the interpretation of his issue:
Hi Doctor Rick,
Thank you!! Yes I was assuming we took the root of x² that’s why I was confused and by square I meant that 1 – x² is present in square form so when we take the root we should be writing as ✓1 –
x but now I get it.
Thanks a lot!!
Doctor Rick could say more now:
Good! Since now I know what your misconception was, let me say a bit more about it.
It is not unusual for students to think that √(a^2 + b^2) = a + b. If they think about it, though, this would turn the Pythagorean theorem, c^2 = a^2 + b^2 where a, b, and c are sides of a right
triangle, into c = a + b. That’s not right! So it’s mostly when a student isn’t really thinking about it that they fall into this trap.
For example, taking the familiar 3-4-5 right triangle, the hypotenuse can be found as $$\sqrt{3^2+4^2}=\sqrt{9+16}=\sqrt{25}=5$$ but if we could distribute the root, we’d have $$\sqrt{3^2+4^2}=\sqrt
{9+16}=\sqrt{9}+\sqrt{16}=3+4=7$$ which is wrong.
If we compare problem solving to a race, it is easy to coast at the end, when the goal is just ahead; but we must keep thinking carefully to the end. Never stop thinking!
In the same way, if a student is going too quickly though a problem, he might write (a + b)^2 as a^2 + b^2. It’s really the same error. If we expand the square properly, we get (a + b)^2 = a^2 +
2ab + b^2. And if we take the square root of both sides of this equation, we find that a + b = √(a^2 + 2ab + b^2), not √(a^2 + b^2).
If it were true that \(\sqrt{a^2+b^2}=a+b\), then it would be true that \(a^2+b^2=(a+b)^2\); but that is missing the middle term.
Simran closed:
Hello doctor Rick,
Thank you, the example is really helpful and as I always solve questions in hurry I would be cautious of this.
There’s another lesson learned! And this is what we’re here for.
Leave a Comment
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"https://www.themathdoctors.org/trigonometry-radicals-and-a-very-common-error/","timestamp":"2024-11-06T12:31:06Z","content_type":"text/html","content_length":"121252","record_id":"<urn:uuid:2b42aa0e-668f-4105-b3ff-5db0df0916c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00425.warc.gz"} |
Related Queries: PyTorch MCQ | PyTorch Quiz | PyTorch Questions and Answers
How can you freeze layers in a PyTorch model during fine-tuning?
By setting requires_grad = False for the parameters of those layers
By deleting the layers
By setting the learning rate to zero
Layers cannot be frozen in PyTorch
What is the primary function of torch.flip() in PyTorch?
To flip tensor signs
To reverse tensor elements
To flip tensor upside down
To transpose tensor
Which PyTorch function can be used to apply softmax?
All of the above
How does PyTorch handle memory management?
It automates memory management for hassle-free usage
It requires manual memory allocation and deallocation
It doesn't support memory management
It only manages CPU memory, not GPU memory
What is the purpose of torch.nn.utils.clip_grad_value_ in PyTorch?
To clip gradient values
To normalize gradient values
To optimize gradient computations
To visualize gradient flow
Which of the following is NOT a valid method to move a PyTorch model to GPU?
What are gradient issues in PyTorch training?
Problems such as vanishing or exploding gradients that complicate training
Issues related to data preprocessing
Problems with model architecture design
Difficulties in deploying models
What is the purpose of torch.save() in PyTorch?
To save model parameters
To save entire datasets
To create model checkpoints
All of the above
How can you use torch.nn.utils.prune.global_unstructured in PyTorch?
To structure global pruning
For applying global unstructured pruning
To optimize model sparsity
To implement custom pruning techniques
Which PyTorch function computes the mean squared error loss?
Score: 0/10
What is the purpose of computational graphs in PyTorch?
To visually represent the sequence of operations during model training
To store data in a structured format
To manage memory allocation
To create user interfaces
What is learning rate annealing?
Increasing learning rate
Decreasing learning rate
Oscillating learning rate
Randomizing learning rate
What is the function of torch.utils.data.IterableDataset in PyTorch?
To create iterable neural networks
For defining datasets that are loaded iteratively
To implement iterator patterns
To optimize data loading
How can overfitting be addressed in PyTorch?
By using techniques like regularization and dropout
By increasing the model size
By using a larger learning rate
By training for more epochs
Which PyTorch library can be used for federated learning?
Which method is used to compute the dot product of two tensors in PyTorch?
All of the above
What is PyTorch Profiler?
A tool for profiling and optimizing PyTorch models
A social network for PyTorch developers
A code generator for PyTorch models
A debugging tool for Python code
What is the purpose of torch.nn.utils.rnn in PyTorch?
To create RNN layers
For utilities to work with RNN models
To optimize RNN computations
To implement RNN-specific loss functions
Which PyTorch module is responsible for automatic differentiation?
What is the function of torch.nn.utils.parametrizations.orthogonal in PyTorch?
To create orthogonal matrices
For applying orthogonal parametrization
To optimize matrix operations
To implement custom orthogonalization
Score: 0/10
Which of the following is NOT a valid pooling layer in PyTorch?
How can you use torch.onnx in PyTorch?
To create neural network visualizations
For exporting models to the ONNX format
To optimize network architectures
To implement custom operators
Which PyTorch function is used to apply max pooling?
All of the above
Which PyTorch function is used to move a model to GPU?
All of the above
How can you use torch.nn.utils.prune.random_unstructured in PyTorch?
To randomize pruning
For applying random unstructured pruning
To implement stochastic thinning
To optimize model sparsity
How does torch.nn.functional.gelu work in PyTorch?
It creates GELU functions
For applying the GELU activation function
To optimize non-linear operations
To implement custom activation functions
How can you implement custom initialization methods in PyTorch?
Only by using pre-defined initializations
By subclassing init._Initializer
By defining a new function
Both B and C
Which PyTorch module is used to create convolutional layers?
What is the purpose of model.eval() in PyTorch?
To set the model in evaluation mode
To evaluate the model
To calculate model metrics
To print model summary
Which loss function is commonly used for multi-class classification problems in PyTorch?
Score: 0/10
What is the purpose of torch.nn.functional.binary_cross_entropy in PyTorch?
To compute binary cross-entropy loss
To apply binary activation
To implement binary classification
To optimize binary inputs
Which PyTorch function is used to compute gradients?
What method is used to compute the natural logarithm of (1 + x) for a tensor?
How can you use torch.backends.cudnn in PyTorch?
To create CUDA networks
For configuring cuDNN backend
To optimize CUDA operations
To implement custom CUDA kernels
What is the function of torch.utils.bottleneck in PyTorch?
To create bottleneck layers
For performance debugging and profiling
To optimize network architecture
To compress model parameters
Which of the following is NOT a valid optimizer in PyTorch?
What is the function of autograd in PyTorch?
To automate the calculation of gradients of the loss function relative to model parameters
To automatically generate code for neural networks
To create data augmentation pipelines
To optimize memory usage in GPU computations
What is the purpose of torch.cuda in PyTorch?
To perform CPU computations
To set up and run CUDA operations
To visualize GPU usage
To optimize memory usage
Which PyTorch function applies layer normalization?
Both a and c
How does torch.utils.data.WeightedRandomSampler work in PyTorch?
It weighs random numbers
For sampling from a weighted distribution
To implement importance sampling
To optimize sample selection
Score: 0/10
Related Reads: | {"url":"https://coolgenerativeai.com/interview-questions-pytorch/","timestamp":"2024-11-11T20:17:29Z","content_type":"text/html","content_length":"191152","record_id":"<urn:uuid:1d035669-d8f5-41d9-9757-a15360a4d0b2>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00463.warc.gz"} |
Fill void regions in 2-D and split cells in 3-D geometry
Since R2020a
h = addFace(g,edges) adds a new face to the geometry g. The specified edges must form a closed contour. For a 2-D geometry, adding a new face lets you fill voids in the geometry. For a 3-D geometry,
adding a new face lets you split one cell into multiple cells.
You can add several new faces simultaneously by specifying their contours in a cell array. Each contour in the cell array must be unique.
After modifying a geometry, always call generateMesh to ensure a proper mesh association with the new geometry.
[h,FaceID] = addFace(g,edges) also returns a row vector containing IDs of the added faces.
Fill Void Region in 2-D Geometry
Add a face to a 2-D geometry to fill an internal void.
Create an fegeometry object representing a plate with a hole in its center. This geometry has one face.
gm = fegeometry("PlateSquareHolePlanar.stl")
gm =
fegeometry with properties:
NumCells: 0
NumFaces: 1
NumEdges: 8
NumVertices: 8
Vertices: [8x3 double]
Mesh: []
Plot the geometry and display the face labels.
Zoom in and display the edge labels of the small hole at the center.
axis([49 51 99 101])
Fill the hole by adding a face. The number of faces in the geometry changes to 2.
gm = addFace(gm,[1 8 4 5])
gm =
fegeometry with properties:
NumCells: 0
NumFaces: 2
NumEdges: 8
NumVertices: 8
Vertices: [8x3 double]
Mesh: []
Plot the modified geometry and display the face labels.
Split Cells in 3-D Geometry
Add a face in a 3-D geometry to split a cell into two cells.
Create a geometry that consists of one cell.
gm = fegeometry("MotherboardFragment1.stl")
gm =
fegeometry with properties:
NumCells: 1
NumFaces: 26
NumEdges: 46
NumVertices: 34
Vertices: [34x3 double]
Mesh: []
Plot the geometry and display the edge labels. Zoom in on the corresponding part of the geometry to see the edge labels there more clearly.
xlim([-0.05 0.05])
ylim([-0.05 0.05])
zlim([0 0.05])
Split the cuboid on the right side into a separate cell. For this, add a face bounded by edges 1, 3, 6, and 12.
[gm,ID] = addFace(gm,[1 3 6 12])
gm =
fegeometry with properties:
NumCells: 2
NumFaces: 27
NumEdges: 46
NumVertices: 34
Vertices: [34x3 double]
Mesh: []
Plot the modified geometry and display the cell labels.
Now split the cuboid on the left side of the board and all cylinders into separate cells by adding a face at the bottom of each shape. To see edge labels more clearly, zoom and rotate the plot. Use a
cell array to add several new faces simultaneously.
[gm,IDs] = addFace(gm,{[5 7 8 10], ...
30, ...
31, ...
32, ...
33, ...
gm =
fegeometry with properties:
NumCells: 8
NumFaces: 33
NumEdges: 46
NumVertices: 34
Vertices: [34x3 double]
Mesh: []
IDs = 6×1
Plot the modified geometry and display the cell labels.
Input Arguments
edges — Edges forming unique closed flat contour
vector of positive integers | cell array of vectors of positive integers
Edges forming a unique closed flat contour, specified as a vector of positive integers or a cell array of such vectors. You can specify edges within a vector in any order.
When you use a cell array to add several new faces, each contour in the cell array must be unique.
Example: addFace(g,[1 3 4 7])
Output Arguments
h — Resulting geometry
fegeometry object | handle
• If the original geometry g is an fegeometry object, then h is a new fegeometry object representing the modified geometry. The original geometry g remains unchanged.
• If the original geometry g is a DiscreteGeometry AnalyticGeometry object, then h is a handle to the modified object g.
FaceID — Face ID
positive number | row vector of positive numbers
Face ID, returned as a positive number or a row vector of positive numbers. Each number represents a face ID. When you add a new face to a geometry with N faces, the ID of the added face is N + 1.
• addFace errors when the specified contour defines an already existing face.
• If the original geometry g is a DiscreteGeometry or AnalyticGeometry object, addFace modifies the original geometry g.
• If the original geometry g is an fegeometry object, and you want to replace it with the modified geometry, assign the output to the original geometry, for example, g = addFace(g,[1 3 4 7]).
Version History
Introduced in R2020a
R2023a: Finite element model
addFace now accepts geometries specified by fegeometry objects.
R2021a: Face addition for analytic geometries
addFace now lets you fill void regions in 2-D AnalyticGeometry objects. | {"url":"https://nl.mathworks.com/help/pde/ug/pde.discretegeometry.addface.html","timestamp":"2024-11-06T09:01:40Z","content_type":"text/html","content_length":"96503","record_id":"<urn:uuid:84c8f0f7-280d-4ea3-b784-05ec9481fd7a>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00482.warc.gz"} |
The Information Interpretation of Quantum Mechanics
There are presently several "
" of quantum mechanics. Many, perhaps most, are attempts to eliminate the element of
that is involved in the so-called
collapse of the wave function
. The Information Interpretation is simply "standard quantum physics"
plus information
being recorded
. Unlike the
Copenhagen Interpretation
, we offer several
of what is going on in quantum reality, The Information Interpretation is based on three simple premises:
When you hear or read that electrons are both waves and particles, think "either-or" -
first a wave of possibilities, then an actual particle.
• Quantum systems evolve in two ways:
• No knowledge can be gained by a "conscious observer" unless new information has already been irreversibly recorded in the universe. That information can be created and recorded in either the
target quantum system or the measuring apparatus. Only then can it become knowledge in the observer's mind.
• In our
two-stage model of free will
, an agent first freely generates
alternative possibilities
, then evaluates them and chooses one,
adequately determined
by its motives, reasons, desires, etc. First come "free alternatives," then "willed actions." Just as with quantum processes - first possibilities, then actuality.
The measuring apparatus is quantal, not deterministic or "classical." It need only be statistically determined and capable of recording the irreversible information about an interaction. The
human mind is similarly only statistically determined.
There is only one world.
It is a quantum world.
Ontologically it is
. Epistemically, common sense and experience incline us to see it as
Information physics claims there is only one world, the quantum world, and the "
quantum to classical transition
" occurs for any large macroscopic object that contains a large number of atoms. For large enough systems, independent quantum events are "averaged over." The uncertainty in position and momentum of
the object becomes less than the observational uncertainty
Δv Δx ≥ h / m
goes to zero as
h / m
goes to zero. The classical laws of motion, with their implicit
and strict
when objects are large enough so that microscopic events can be ignored, but this determinism is fundamentally
and causes are only
, however near to certainty. Information philosophy interprets the wave function
as a "possibilities" function. With this simple change in terminology, the mysterious process of a wave function "collapsing" becomes more understandable. The wave function
evolves to explore all the
(with mathematically calculable
). When a single
is realized, the probability for all the non-actualized possibilities goes to zero ("collapses") instantaneously. But they could never reconcile the macroscopic irreversibility needed for the second
law Information physics is standard quantum physics. It accepts the Schrödinger
equation of motion
, the
principle of superposition
, the
axiom of measurement
(now including the actual information "bits" measured), and - most important - the
projection postulate
of standard quantum mechanics (the "collapse" that so many
deny). The "conscious observer" of the Copenhagen Interpretation is not required for a projection, for the
wave-function to "collapse"
, for one of the
to become an
. What the collapse does require is an
between systems that creates
that is
and observable, though not necessarily observed. Among the founders of quantum mechanics, almost everyone agreed that
is a key requirement for a measurement. Irreversibility introduces thermodynamics into a proper formulation of quantum mechanics, and this is a key element of our information interpretation. But this
requirement was never reconciled with classical statistical mechanics, which says that collisions between material particles are reversible. Even quantum statistical mechanics claims collisions are
reversible because the Schrödinger equation is time reversible. Note that Maxwell's equations of electromagnetic radiation are also time reversible. We have shown that it is the interaction of light
and matter, both on their own time reversible, that is the
origin of irreversibility
Information is
a conserved quantity like energy and mass, despite the view of many mathematical physicists, who generally accept
. The universe began in a state of equilibrium with minimal information, and information is being
created every day
, despite the second law of thermodynamics
Classical interactions between large macroscopic bodies do not generate new information. Newton's laws of motion imply that the information in any configuration of bodies, motions, and force is
enough to know all past and future configurations.
Classical mechanics conserves information
. In the absence of interactions, an isolated quantum system evolves according to the unitary
Schrödinger equation of motion
. Just like classical systems, the
Schrödinger equation conserves information. And just like classical systems, Schrödinger's unitary evolution is time reversible. Unlike classical systems however, when there is an interaction between
material quantum systems, the two systems become
and there may be a change of state in either or both systems. This change of state may create new information. Or if there is an interaction between light and matter the evolution is no longer
unitary, there is an
collapse of the wave function. If that information is instantly destroyed, as in most interactions, it may never be observed macroscopically. If, on the other hand, the information is stabilized for
some length of time, it may be seen by an observer and considered to be a "
." But it need not be seen by anyone to become new information in the universe. The universe is its own observer!
Schrödinger's Cat
as its own observer. For the information (negative entropy) to be stabilized, the second law of thermodynamics requires that an amount of positive entropy greater than the negative entropy must be
transferred away from the new information structure. Exactly how the universe allows pockets of negative entropy to form as "information structures" we describe as the "
cosmic creation process
." This core two-step process has been going on since the origin of the universe. It continues today as we add information to the
sum of human knowledge
Note that despite the Heisenberg principle, quantum mechanical measurements are not always uncertain. When a system is measured (prepared) in an eigenstate, a subsequent measurement (Pauli's
measurement of the first kind) will find it in the same state with perfect certainty.
What then are the possibilities for new quantum states? The transformation theory of
lets us represent
in a set of basis functions for which the combination of quantum systems (one may be a measurement apparatus) has eigenvalues (
the axiom of measurement
). We represent
as in a linear combination (
the principle of superposition
) of those "possible" eigenfunctions. Quantum mechanics lets us calculate the probabilities of each of those "possibilities." Interaction with the measurement apparatus (or indeed interaction with
any other system) may select out (
the projection postulate
) one of those possibilities as an actuality. But for this event to be an "observable" (a
John Bell
must be created
positive entropy must be transferred away from the new information structure, in accordance with our two-stage
information creation process
. All
interpretations of quantum mechanics
predict the same experimental results.
Information physics
is no exception, because the experimental data from quantum experiments is the most accurate in the history of science. Where interpretations differ is in the picture (the visualization) they provide
of what is "really" going on in the microscopic world - the so-called "quantum reality." The "orthodox" Copenhagen interpretation of
Neils Bohr
Werner Heisenberg
discourages such attempts to understand the nature of the "quantum world," because they say that all our experience is derived from the "classical world" and should be described in ordinary language.
This is why Bohr and Heisenberg insisted on the path and the "
" between the quantum event and the mind of an observer. The information interpretation encourages visualization. Schrödinger called it
. He and
were right that we should be able to picture quantum reality. But that demands that we accept the
reality of quantum possibilities
and discontinuous random "quantum jumps," something many modern interpretations do not do. (See our visualization of the
two-slit experiment
, our
EPR experiment
visualizations, and
Dirac's three polarizers
to visualize the superposition of states and the projection or "collapse" of a wave function.) Bohr was of course right that classical physics plays an essential role. His Correspondence Principle
allowed him to recover some important physical constants by assuming that the discontinuous quantum jumps for low quantum numbers (low "orbits" in his old quantum theory model)
in the limit of large quantum numbers to the continuous radiation emission and absorption of classical electromagnetic theory. In addition, we know that in macroscopic bodies with enormous numbers of
quantum particles, quantum effects are averaged over, so that the uncertainty in position and momentum of a large body still obeys Heisenberg's
indeterminacy principle
, but the uncertainty is for all practical purposes unmeasurable and the body can be treated classically. We can say that the quantum description of matter also
to a classical description in the limit of large numbers of quantum particles. We call this
"adequate" or statistical determinism
. It is the apparent determinism we find behind Newton's laws of motion for macroscopic objects. The statistics of averaging over many independent quantum events then produces the "
quantum to classical transition
" for the same reason as the "law of large numbers" in probability theory. Both Bohr and Heisenberg suggested that just as relativistic effects can be ignored when the velocity is small compared to
the velocity of light (
v / c
→ 0), so quantum effects might be ignorable when Planck's quantum of action
→ 0. But this is quite wrong, because
is a constant that never goes to zero. In the information interpretation, it is always a quantum world. The conditions needed for ignoring quantum indeterminacy are when the
of the macroscopic "classical" object is large. Noting that the momentum
is the product of mass and velocity
, Heisenberg's
Δp Δx > h
, can be rewritten as
Δv Δx > h / m
. It is thus not when
is small, but when
h / m
is small enough, that errors in the position and momentum of macroscopic objects become smaller that can be measured.
Note that the macromolecules of biology are large enough to stabilize their information structures. DNA has been replicating its essential information for billions of years, resisting equilibrium
despite the second law of thermodynamics
new information also marks the transition between the quantum world and the "
adequately deterministic
" classical world, because the information structure itself must be large enough (and stable enough) to be seen. The typical measurement apparatus is macroscopic, so the quantum of action
becomes small compared to the mass
h / m
approaches zero.
Decoherence theorists say that our failure to see quantum superpositions in the macroscopic world
is the measurement problem
The information interpretation thus explains why quantum superpositions like
Schrödinger's Cat
are not seen in the macroscopic world. Stable new information structures in the dying cat reduce the quantum
(and their potential interference effects) to a classical
. Upon opening the box and finding a dead cat, an autopsy will reveal that the time of death was observed/recorded. The cat is its own observer.
The "Possibilities Function"
The central element in quantum physics is the "wave function"
, with its mysterious wave-particle dual nature (sometimes a wave, sometimes a particle, etc.). We believe that teaching and understanding quantum mechanics would be much simpler if we called
the "
function." It only looks like a wave in simple cases of low-dimensional coordinate space. But it always tells us the possibilities - the possible values of any observable, for example. Given the
"possibilities function"
, quantum mechanics allows us to calculate the "probabilities" for each of the "possibilities." The calculation depends on the free choice of the experimenter as to which "observables" to look for.
If the measurement apparatus can register
discrete values,
can be expanded in terms of a set of basis functions (eigenfunctions) appropriate for the chosen observable, say
. The expansion is
ψ = ∑ c[n] φ[n]
When the absolute squares of the coefficients
are appropriately normalized to add up to 1, the probability P
of observing an eigenvalue
P[n] = | c[n] |^2 = | < ψ | φ[n] > | ^2
These probabilities are confirmed statistically by repeated identical experiments that collect large numbers of results. Quantum mechanics is the most accurate physical theory in science, with
measurements accurate to thirteen decimal places. In each individual experiment, generally just one of the possibilities becomes an
(some experiments leave the quantum system in a new
of multiple possibilities). In our information interpretation, a possibility is realized or actualized at the moment when information is created about the new state of the system. This new
information requires that positive entropy be carried away from the local increase in negative entropy. Note that an "observer" would not be able to make a "measurement" unless there is new
information to be "observed." Information must be (and is in all modern experimental systems) created and recorded
before any observer looks at the results
. An information approach can help philosophers to think more clearly about quantum physics. Instead of getting trapped in talk about mysterious "collapses of the wave function," "reductions of the
wave packet," or the "projection postulate" (all important issues), the information interpretation proposes we simply say that one of the "possibilities" has become "actual." It is intuitively
obvious that when one possibility becomes actual, all the others are annihilated, consigned to "nothingness," as
Jean-Paul Sartre
put it. And because the other possibilities may have been extremely "distant" from the actuality, their instantanteous disappearances looked to Einstein to violate his principle of relativity, but
they do not. Quantum theory lets us put quantitative values on the "probabilities" for each of the "possibilities." But this means that quantum theory is fundamentally statistical, meaning
indeterministic and "random." It is not a question of our being ignorant about what is going on (an epistemological problem). What's happening is
ontological chance
, as Einstein first showed, but as he forever disliked. We can describe the "possibilities function"
as moving through space (at the speed of light, or even faster, as Einstein feared?),
exploring all the possibilities
for wherever the particle might be found. This too may be seen as a special kind of information. In the famous "
two-slit experiment
," the "possibilities function" travels everywhere, meaning that it passes through both slits, interfering with itself and thus changing the possibilities where the particle might be found.
Metaphorically, it "knows" when both slits are open, even if our intuitive classical view imagines that the particle must go through only one. This changes the probabilities associated with each of
the possibilities.
Possibilities and Information Theory
It is of the deepest philosophical significance that information theory is based on the mathematics of
. If all outcomes were
, there would be no "surprises" in the universe. Information would be conserved and a universal constant, as some mathematicians mistakenly believe. Information philosophy requires the ontological
uncertainty and probabilistic outcomes of modern quantum physics to produce new information. In
Claude Shannon
's theory of the communication of information, there must be multiple possible messages in order for information to be communicated. If there is but one possible message, there is no uncertainty, and
no information can be communicated. In a universe describable by the classical Newtonian laws of motion, all the information needed to produce the next moment is contained in the positions, motions,
and forces on the material particles. In a quantum world describable by the unitary evolution of the deterministic Schrödinger equation, nothing new ever happens, there is no new "outcome." Outcomes
are added to standard quantum mechanics by the addition of the "projection postulate" or "collapse of the wave function" when the quantum system interacts with another system. Information is constant
in a
universe. There is "nothing new under the sun." The
creation of new information
is not possible without the
random chance
of quantum mechanics, plus the extraordinary temporal stability of quantum mechanical structures needed to store information once it is created. Without the extraordinary
of quantized information structures over cosmological time scales, life and the universe we know would not be possible. That stability is the consequence of an underlying digital nature. Quantum
mechanics reveals the architecture of the universe to be
rather than
, to be
rather than
. Digital information transfers are essentially perfect. All analog transfers are "lossy." It is Bohr's "correspondence principle" of quantum mechanics for large quantum numbers and the "law of large
numbers" of statistics which ensure that macroscopic objects can normally average out microscopic uncertainties and probabilities to provide the
"adequate" determinism
that shows up in all our "Laws of Nature."
There is no separate classical world and no need for a quantum-to-classical transition. The quantum world becomes statistically deterministic when the mass of an object is such that h / m approaches
We conclude, contrary to the views of Bohr and Heisenberg, that there is no need for a separate classical world. The classical laws of nature
statistically from quantum laws. Quantum laws, which are therefore universally applicable, converge in these two limits of large numbers to classical laws. There is no "transition" from the quantum
world to a separate classical world. There is just one world, where quantum physics applies universally, but its mysterious properties, like interference, entanglement, and nonlocality, are normally
invisible, averaged over, in the macroscopic world. The problem for an informational interpretation of quantum mechanics is to explain exactly how these two convergences (large numbers of particles
and large quantum numbers) allow continuous and apparently
macroscopic information structures to emerge from the
and discontinuous microscopic quantum world. We must show how the determinism in the macroscopic world is only a
determinism or
adequate determinism
that results from "averaging over" the large number of independent quantum events happening in a macroscopic object. And even more important, we must show how the occasional magnification or
amplification of microscopic quantum events leads to new macroscopic information that makes human beings the "authors of their lives", that makes them "co-creators of our universe," and that
guarantees a genuinely open future with alternative possibilities, not in inaccessible "parallel universes" but in the one universe that we have.
Feynman's Path Integral, Diagrams, and Sum of Histories
Richard Feynman
's "path integral"
of quantum mechanics, we may have a way to help visualize our "possibilities" function. Feynman proposed to reformulate quantum mechanics based on just three postulates:
A. The probability for an event is given by the squared modulus of a complex number called the "probability amplitude," just as with the Heisenberg and Schrödinger pictures.
B. The probability amplitude is given by adding together the contributions of all paths in configuration space, where paths include not only the most direct from the initial state, but also
paths with arbitrary curves that can go arbitrarily far away and then come back to the final state, paths so long that they imply traversal at supraluminal speeds!
C. The contribution of a path is proportional to e^i S / ℏ, where S is the action given by the time integral of the Lagrangian along the path.
The overall probability amplitude for a given process is the sum of the contributions over the space of
all possible paths
of the system in between the initial and final states. All probability amplitudes have equal weight but have varying phase of the complex action. Rapidly varying phase may significantly reduce the
contribution along a path and paths may interfere with neighboring paths. The path integrals are described as the "sum of histories" of all paths through an infinite space–time. Local quantum field
theory might restrict paths to lie within a finite causally closed region, with time-like separations inside a light-cone. A Feynman diagram is a graphical representation of one path's contribution
to the probability amplitude. Feynman's path integral method gives the same results as quantum mechanics for Hamiltonians quadratic in the momentum. Feynman's amplitudes also obey the Schrödinger
equation for the Hamiltonian corresponding to the action
. How do we interpret this as visualizing the "possibilities" in our information interpretation? We can compare the individual paths to the "virtual photons" that mediate the electromagnetic field in
quantum field theory. The picture is then that an infinite number of virtual photons explore all the possibilities in the given situation. Large numbers of them go through both slits for example, and
interfere with one another, preventing even a single real photon from landing on a null point in the interference pattern.
Information Creation without Observers
Information physics explores the quantum mechanical and thermodynamic properties of cosmic information structures, especially those that were
created before the existence of human observers
. A key parameter is the amount of
information per particle
. When particles combine, the information per particle increases. Starting with quarks forming nucleons, nuclei combining with electrons to form atoms, atoms combining into molecules, macromolecules,
crystal lattices, and other solid state structures, at every stage of growth two things are happening. Binding energy is being transferred away from the new composite structure, carrying away
positive entropy. This positive entropy more than balances the negative entropy (or information) in the new structure, thus satisfying the second law of thermodynamics. But the important thing is the
increasing information per particle, which allows the new information structure to approach classical behavior. Individual particles are no longer acting alone. Acting in concert allows them to
average over quantum noise. They acquire new properties and capabilities that
from the component particles and are not
to the parts. Quantum noise destroys coherent actions of the lower-level parts of the new structure, preventing the lower level from exerting "botton-up" control. Information in the higher-level
structure allows the composite system to generate new behaviors that can exercise
downward causal control
. Some of these behaviors depend for their successful operation on the disruptive noise in the lower level. For example, a ribosome, assembling a polypeptide chain into a protein, depends on the
chaotic and random motions of the transfer RNA molecules with amino acids attached to rapidly provide the next codon match. To understand information creation, information physics provides new
insights into the puzzling "
problem of measurement
" and the mysterious "
collapse of the wave function
" in quantum mechanics. Information physics also probes deeply into the second law of thermodynamics to establish the
increase of entropy on a quantum mechanical basis. "Information physics" provides a new "interpretation" of quantum mechanics. But it is is not an attempt to alter the basic assumptions of standard
quantum mechanics, extending them to include "hidden variables," for example. It does reject the unfortunate idea that nothing happens to quantum systems without the intervention of an "observer."
Possibilities, Probabilities, and Actuality
1) The Wave Function. The central element in quantum physics is the "wave function"
, with its mysterious
wave-particle dual nature
(sometimes a wave, sometimes a particle, etc.). We believe that teaching and understanding quantum mechanics would be much simpler if we called
the "
function." It only looks like a wave in simple cases of configuration space. But it always tells us the possibilities - the possible values of any observable, for example. 2. The Superposition
Principle (and transformation theory). Given the "possibilities function"
, quantum mechanics allows us to calculate the "probabilities" for each of the "possibilities." The calculation will depend on the
free choice
of the experimenter as to which "observables" to look for, by designing an appropriate measurement apparatus. For example, if the measurement apparatus can register
discrete values,
can be expanded in terms of a set of
basis functions appropriate for the chosen observable, say
. The expansion is then
ψ = ∑ c[n] φ[n],
and we say the system is in a "superposition" of these
states. When the absolute squares of the coefficients
are appropriately normalized to add up to 1, the probability P
of observing value
P[n] = c[n]^2 = | < ψ | φ[n] > | ^2
These probabilities are confirmed statistically by repeated identical experiments that collect large numbers of results. Quantum mechanics is the most accurate physical theory in science, with
measurements accurate to thirteen decimal places. 2. The Equation of Motion. The "possibilities function"
evolves in time according to the unitary Schrödinger equation of motion.
ih δψ[a]/δt = H ψ[a],
is the Hamiltonian. 3. The Projection Postulate ("Collapse" of the Wave Function). 4. The Axiom of Measurement (impossible without information creation). In each individual experiment, generally just
one of the possibilities becomes an
(some experiments leave the quantum system in a new
of multiple possibilities). In our information interpretation, a possibility is realized or actualized when information is created about the new state of the system. This new information requires
that more positive entropy be carried away than the local increase in negative entropy. Note that an "observer" would not be able to make a "measurement" unless there is new information to be
"observed." Information can be (and usually is) created and recorded before any observer looks at the results. An information approach can help philosophers to think more clearly about quantum
physics. Instead of getting trapped in talk about mysterious "collapses of the wave function," "reductions of the wave packet," or the "projection postulate" (all important issues), the information
interpretation proposes we simply say that one of the "possibilities" has become "actual." It is intuitively obvious that when one possibility becomes actual, all the others are annihilated,
consigned to "nothingness," as Jean Paul-Sartre put it. We can also say that quantum theory lets us put quantitative values on the "probabilities" for each of the "possibilities." But this means that
quantum theory is statistical, meaning indeterministic and "random. It is not a question of our being ignorant about what is going on (an epistemological problem). What's happening is
ontological chance
. We can also say that the "possibilities function"
moves through space (at the speed of light , or even faster?), exploring all the possibilities for where the particle might be found. This too may be seen as a special kind of (potential?)
information. In the famous "two-slit experiment," the "possibilities function" travels everywhere, meaning that it passes through both slits, interfering with itself and thus changing the
possibilities where the particle might be found. Metaphorically, it "knows" when both slits are open, even if our intuitive classical view imagines the particle to go through only one slit. This
changes the probabilities associated with each of the possibilities.
The Axioms of an Information Quantum Mechanics
Information physics accepts the
principle of superposition
(that arbitrary quantum states can be represented as linear combinations of system eigenstates) and the
projection postulate
(that interactions between quantum systems can project the two system states - or "collapse" them - into eigenstates of the separate systems or the combined entangled system). But we replace the
Copenhagen axiom of "measurement" (as involving a "measurer" or "observer") by considering instead the increased information that can sometimes occur when systems interact. If this increased
information is recorded for long enough to be seen by an observer, then we can call it a measurement. The appearance of a particle at a particular spot on a photographic plate, the firing of a Geiger
counter recording a particle event, or a ribosome adding an amino acid to a polypeptide chain may all be considered to be "measurements." Just as the traditional axiom of measurement says that
measurements of observables (quantities that commute with the Hamiltonian) can only yield eigenvalues, information physics accepts that the quantities of information added correspond to these
eigenvalues. What these measurements (or simply interactions without "measurements") have in common is the appearance of a microscopic particle in a particular "observable" state. But whether or not
"observed," quantum particles are always interacting with other particles, and some of these interactions increase the per-particle information as the particles build up information structures. In
classical mechanics, the material universe is thought to be made up of tiny particles whose motions are completely determined by forces that act between the particles, forces such as gravitation,
electromagnetic attractions and repulsions, nuclear and weak forces, etc. The equations that describe those motions, Newton's laws of motion, were for many centuries thought to be perfect and
sufficient to predict the future of any mechanical system. They provided support for
many philosophical ideas about determinism
The Schrödinger Equation of Motion
In quantum mechanics, Newton's laws of motion, especially in the Hamiltonian formulation of those laws, are replaced by
Erwin Schrodinger
's wave equation, which describes the time evolution of a probability amplitude wave ψ
ih δψ[a]/δt = H ψ[a],
is the Hamiltonian. The probability amplitude looks like a wave and the Schrödinger equation is a wave equation. But the wave is an abstract quantity whose absolute square is interpreted as the
probability of finding a quantum particle somewhere. It is distinctly not the particle itself, whose exact position is unknowable while the quantum system is evolving deterministically. It is the
probability amplitude wave that interferes with itself. Particles, as such, never interfere (although they may collide). And while probabilities can be distributed throughout space, some here, some
there, parts of a particle are never found. It is the whole quantum particle that is always found. Information is
(a constant of the motion) during the unitary and deterministic time evolution of the wave function according to the Schrödinger equation. Information can only be created (or destroyed) during an
interaction between quantum systems. It is because the system goes from a superposition of states to one of the possible states that the information changes. In what
Wolfgang Pauli
called a "measurement of the second kind," a system prepared in an eigenstate will, when measured again, be found with certainty, in the same state. There is no information gained in this case.
In the information interpretation of quantum mechanics, the time evolution of the wave function is not interpreted as a motion of the particle. What is evolving, what is moving, is the probability,
more fundamentally, the
, of finding the particle in different places, or the expectation value of various physical quantities as a function of the time.
An electron is not both a wave and a particle, think "either-or"
first a wave of possibilities, then an actual particle.
There is no logical contradiction in the use of the deterministic Schrödinger wave equation to predict values of the (indeterministic) probability amplitude wave functions at future times. The
Schrödinger equation is a universal equation (and quantum mechanics is universally true), but there is no universal wave function. Each wave function is particular to specific quantum systems,
sometimes to just a single particle. And when a particle is
found, the
(probability) of finding it elsewhere simply vanishes instantaneously. As
Max Born
put it, the equations of motion for the probability waves are deterministic and continuous, but the motion of a particle itself is indeterministic, discontinuous, and probabilistic.
John von Neumann
was bothered by the fact that the projection postulate or wave-function collapse (his Process
) was not a formal part of quantum mechanics. Dozens, if not hundreds, of physicists have attempted to produce a formal theory to describe the collapse process and make it predictable. But this is to
misunderstand the
irreducible chance
nature of the wave-function collapse. Long before
Werner Heisenberg
formulated his
principle, it was
Albert Einstein
who discovered the fundamental
and discontinuous nature of quantum mechanics. He postulated "light quanta" in 1905, more than twenty years before quantum mechanics was developed. And in 1916, his explanation of light emission and
absorption by atoms showed that the process is essentially a
process. All attempts to
a random process are denials of the
nature of the quantum world. Such efforts often assume that probability itself can be understood as epistemic, the consequence of human ignorance about underlying, as yet undiscovered,
laws. Historically, both scientists and philosophers have had what
William James
called an "antipathy to chance." The information interpretation says that if deterministic underlying laws existed, there would be no new information creation. There would be only one possible
future. But we know that the universe has been creating new information from the time of the Big Bang. Information physics adds this cosmological information creation to Einstein's discovery of
microscopic quantum chance to give us a consistent picture of a quantum world that appears
adequately determined
at the macroscopic level. But why then did Einstein object all his life to the indeterminism ("God does not play dice," etc.) that was his own fundamental discovery?
Probability Amplitude Waves Are Not "Fields" like Electromagnetic or Gravitational Fields
In classical electrodynamics, electromagnetic radiation (light, radio) is known to have wave properties such as interference. When the crest of one wave in the electromagnetic field meets the trough
of another, the two waves cancel one another. This field is a ponderable object. A disturbance of the field at one place is propagated to other parts of the field at the velocity of light. Einstein's
gravitational field in general relativity is similar. Matter moving at one point produces changes in the field which propagate to other parts of space, again at the velocity of light, in order to be
relativistically invariant. The probability amplitude wave function ψ of quantum mechanics is not such a field, however. Although Einstein sometimes referred to it as a "ghost field," and
Louis de Broglie
later developed a "pilot wave" theory in which waves guide the motion of material particles along paths perpendicular to the wave fronts, the wave function ψ has no ponderable substance. When a
particle is detected at a given point, the probability of its being found elsewhere goes immediately to zero. Probability does not propagate at some speed to other places, reducing the probability of
the particle
being found there. Once located, the particle cannot be elsewhere. And that is instantaneously true. Einstein, who spent his life searching for a "unified field theory," found this very hard to
accept. He thought that instantaneous changes in the "ghost field" must violate his special theory of relativity by traveling faster than the speed of light. He described this as "nonlocal reality,"
the idea that something at one point (the probability of finding a particle there) could move to be somewhere else faster than light speed. In our information interpretation, nothing that is material
or energy is transferred when the probability "collapses," it is only
information that changes instantly. Animation of a probability amplitude wave function ψ collapsing -
click to restart
Whereas the abstract probability amplitude moves continuously and deterministically throughout space, the concrete particle, for all we know, may move discontinuously and indeterministically to a
particular point in space.
Who Really Discovered Quantum Indeterminism and Discontinuity?
In 1900,
Max Planck
made the assumption that radiation is produced by resonant oscillators whose energy is quantized and proportional to the frequency of radiation
E = hν
For Planck, the proportionality constant
is a "quantum of action," a heuristic mathematical device that allowed him to apply
Ludwig Boltzmann
's work on the statistical mechanics and kinetic theory of gases to the radiation field. (Boltzmann had shown in the 1870's that the increase in entropy (the second law) could be explained if gases
were made up of enormous numbers of particles his
-Theorem). Planck applied Boltzmann's statistics of many particles to radiation and derived the distribution of radiation at different frequencies (or wavelengths) just as
James Clerk Maxwell
and Boltzmann had derived the distribution of velocities (or energies) of the gas particles. Note the mathematical similarity of Planck's radiation distribution law (photons) and the
Maxwell-Boltzmann velocity distribution (molecules). Both curves have a power law increase on one side to a maximum and an exponential decrease on the other side of the maximum. The molecular
velocity curves cross one another because the total number of molecules is the same. With increasing temperature
, the number of photons increases at all wavelengths.
But Planck did not actually believe that radiation came in discrete particles, at least until a dozen years later. In the meantime,
Albert Einstein
's 1905 paper on the photoelectric effect showed that light came in discrete particles he called "light quanta," subsequently called "photons," by analogy to electrons. Planck was not happy about the
idea of light particles, because his use of Boltzmann's statistics implied that
was real. Boltzmann himself had qualms about the reality of chance. Although Einstein also did not like the idea of chancy statistics, he did believe that energy came in packages of discrete
"quanta." It was Einstein, not Planck, who quantized mechanics and electrodynamics. Nevertheless, it was for the introduction of the quantum of action
that Planck was awarded the Nobel prize in 1918. Meanwhile, in 1916, after completing his work on the general theory of relativity, Einstein returned to thinking about a quantum theory for the
interaction of radiation and matter (N.B., ten years before quantum mechanics). In 1924,
Louis de Broglie
argued that if photons, with their known wavelike properties, could be described as particles, electrons as particles might show wavelike properties with a wavelength
inversely proportional to their momentum
p = m[e]v
p = h/2πλ
Experiments confirmed de Broglie's assumption and led
Erwin Schrödinger
to derive a "wave equation" to describe the motion of de Broglie's waves. Schrödinger's equation replaces the classical Newton equations of motion. Note that Schrödinger's equation describes the
motion of only the wave aspect, not the particle aspect, and as such it implies interference. Note also that the Schrödinger equation is as fully deterministic an equation of motion as Newton's
equations. Schrödinger attempted to interpret his "wave function" for the electron as a probability density for electrical charge, but charge density would be positive everywhere and unable to
interfere with itself.
Max Born
shocked the world of physics by suggesting that the absolute values of the wave function
squared (
) could be interpreted as the
of finding the electron in various position and momentum states - if a
is made. This allows the probability amplitude
to interfere with itself, producing highly non-intuitive phenomena such as the
two-slit experiment
. Despite the probability amplitude going through two slits and interfering with itself, experimenters never find parts of electrons. They always are found whole. In 1932
John von Neumann
explained that
two fundamentally different processes are going on
in quantum mechanics.
1. A non-causal process, in which the measured electron winds up randomly in one of the possible physical states (eigenstates) of the measuring apparatus plus electron. The probability for each
eigenstate is given by the square of the coefficients c[n] of the expansion of the original system state (wave function ψ) in an infinite set of wave functions φ that represent the eigenfunctions
of the measuring apparatus plus electron.
c[n] = < φ[n] | ψ >
This is as close as we get to a description of the motion of the particle aspect of a quantum system. According to von Neumann, the particle simply shows up somewhere as a result of a
measurement. Information physics says it shows up whenever a new stable information structure is created.
2. A causal process, in which the electron wave function ψ evolves deterministically according to Schrödinger's equation of motion for the wavelike aspect. This evolution describes the motion of the
probability amplitude wave ψ between measurements.
(ih/2π) ∂ψ/∂t = Hψ
Von Neumann claimed there is another major difference between these two processes. Process
is thermodynamically
. Process
. This confirms the fundamental connection between quantum mechanics and thermodynamics that is explainable by
information physics
Information physics
establishes that process
may create information. Process
information preserving
The second law of thermodynamics says that the entropy (or disorder) of a closed physical system increases until it reaches a maximum, the state of thermodynamic equilibrium. It requires that the
entropy of the universe is now and has always been increasing. (The first law is that energy is conserved.)
Creation of information structures means that in parts of the universe the local entropy is actually going down. Reduction of entropy locally is always accompanied by radiation of entropy away from
the local structures to distant parts of the universe, into the night sky for example. Since the total entropy in the universe always increases, the amount of entropy radiated away always exceeds
(often by many times) the local reduction in entropy, which mathematically equals the increase in information.
We will describe processes that create information structures, reducing the entropy locally, as "
." This is a new use for a term from statistical mechanics that describes a hypothetical property of classical mechanical gases. See the
. Ergodic processes (in our new sense of the word) are those that appear to resist the second law of thermodynamics because of a local increase in information or "negative entropy" (
's term). But any local decrease in entropy is more than compensated for by increases elsewhere, satisfying the second law. Normal entropy-increasing processes we will call "
". Encoding new information requires the equivalent of a quantum measurement - each new bit of information produces a local decrease in entropy but requires that at least one bit (generally much much
more) of entropy be radiated or conducted away. Without violating the inviolable second law of thermodynamics overall, ergodic processes reduce the entropy locally, producing those pockets of cosmos
and negative entropy (order and information-rich structures) that are the principal objects in the universe and in life on earth.
Rudolf Clausius' second law of thermodynamics, namely that the entropy of a closed system always increases to a maximum and then remains in thermal equilibrium. Clausius predicted that the universe
would end with a "heat death" because of the second law. Boltzmann formulated a mathematical quantity H for a system of
ideal gas particles, showing that it had the property δΗ/δτ ≤ 0, that H always decreased with time. He identified his H as the opposite of Rudolf Clausius' entropy
. In 1850 Clausius had formulated the second law of thermodynamics. In 1857 he showed that for a typical gas like air at standard temperatures and pressures, the gas particles spend most of their
time traveling in straight lines between collisions with the wall of a containing vessel or with other gas particles. He defined the "mean free path" of a particle between collisions. Clausius and
essentially all physicists since have assumed that gas particles can be treated as structureless "billiard balls" undergoing "elastic" collisions. Elastic means no motion energy is lost to internal
friction. Shortly after Clausius first defined the entropy mathematically and named it in 1865,
determined the distribution of velocities of gas particles (Clausius for simplicity had assumed that all particles moved at the average speed
). Maxwell's derivation was very simple. He assumed the velocities in the x, y, and z directions were independent. [more...] Boltzmann improved on Maxwell's statistical derivation by equating the
number of particles entering a given range of velocities and positions to the number leaving the same volume in 6n-dimensional phase space. This is a necessary state for the gas to be in equilibrium.
Boltzmann then used Newtonian physics to get the same result as Maxwell, which is thus called the Maxwell-Boltzmann distribution.
Boltzmann's first derivation of his H-theorem (1872) was based on the same classical mechanical analysis he had used to derive Maxwell's distribution function. It was an analytical mathematical
consequence of Newton's laws of motion applied to the particles of a gas. But it ran into immediate objections. The objection is the hypothetical and counterfactual idea of
. If time were reversed, the entropy would simply decrease. Since the fundamental Newtonian equations of motion are time reversible, this appears to be a paradox. How could the irreversibile increase
of the macroscopic entropy result from microscopic physical laws that are time reversible? Lord Kelvin (William Thomson) was the first to point out the time asymmetry in macroscopic processes, but
the criticism of Boltzmann's H-theorem is associated with his lifelong friend Joseph Loschmidt. Boltzmann immediately agreed with Loschmidt that the possibility of decreasing entropy could not be
ruled out if the classical motion paths were reversed. Boltzmann then reformulated his H-theorem (1877). He analyzed a gas into "microstates" of the individual gas particle positions and velocities.
For any "macrostate" consistent with certain macroscopic variables like volume, pressure, and temperature, there could be many microstates corresponding to different locations and speeds for the
individual particles. Any individual microstate of the system was intrinsically as probable as any other specific microstate, he said. But the number of microstates consistent with the disorderly or
uniform distribution in the equilibrium case of maximum entropy simply overwhelms the number of microstates consistent with an orderly initial distribution.
About twenty years later, Boltzmann's revised argument that entropy statistically increased ran into another criticism, this time not so counterfactual. This is the
objection. Given enough time, any system could return to its starting state, which implies that the entropy must at some point
. These reversibility and recurrence objections are still prominent in the physics literature. The recurrence idea has a long intellectual history. Ancient Babylonian astronomers thought the known
planets would, given enough time, return to any given position and thus begin again what they called a "great cycle," estimated by some at 36,000 years. Their belief in an astrological determinism
suggested that all events in the world would also recur.
made this idea famous in the nineteenth century, at the same time as Boltzmann's hypothesis was being debated, as the "eternal return" in his
. The recurrence objection was first noted in the early 1890's by French mathematician and physicist
. He had found an analytic solution to the three-body problem and noted that the configuration of three bodies returns arbitrarily close to the initial conditions after calculable times. Even for a
handful of planets, the recurrence time is longer than the age of the universe, if the positions are specified precisely enough. Poincaré then proposed that the presumed "heat death" of the universe
predicted by the second law of thermodynamics could be avoided by "a little patience." Another mathematician, Ernst Zermelo, a young colleague of Max Planck in Berlin, is more famous for this
recurrence paradox. Boltzmann accepted the recurrence criticism. He calculated the extremely small probability that entropy would decrease noticeably, even for gas with a very small number of
particles (1000). He showed the time associated with such an event was 10
years. But the objections in principle to his work continued, especially from those who thought the atomic hypothesis was wrong. It is very important to understand that both Maxwell's original
derivation of the velocities distribution and Boltzmann's H-theorem showing an entropy increase are only statistical or probabilistic arguments. Boltzmann's work was done twenty years before atoms
were established as real and fifty years before the theory of quantum mechanics established that at the microscopic level all interactions of matter and energy are fundamentally and irreducibly
statistical and probabilistic.
A quantum mechanical analysis of the microscopic collisions of gas particles (these are usually molecules - or atoms in a noble gas) can provide revised analyses for the two problems of reversibility
and recurrence. Note this requires more than quantum statistical mechanics. It needs the quantum kinetic theory of collisions in gases. There are great differences between
. Boltzmann assumed that collisions would result in random distributions of velocities and positions so that all the possible configurations would be realized in proportion to their number. He called
this "molecular chaos." But if the path of a system of n particles in 6n-dimensional phase space should be closed and repeat itself after a short and finite time during which the system occupies only
a small fraction of the possible states, Boltzmann's assumptions would be wrong. What is needed is for collisions to completely randomize the directions of particles after collisions, and this is
just what the quantum theory of collisions can provide. Randomization of directions is the norm in some quantum phenomena, for example the absorption and re-emission of photons by atoms as well as
Raman scattering of photons. In the deterministic evolution of the Schrödinger equation, just as in the classical path evolution of the Hamiltonian equations of motion, the time can be reversed and
all the coherent information in the wave function will describe a particle that goes back exactly the way it came before the collision. But if when two particles collide the internal structure of one
or both of the particles is changed, and particularly if the two particles form a temporary larger molecule (even a quasi-molecule in an unbound state), then the separating atoms or molecules lose
the coherent wave functions that would be needed to allow time reversal back along the original path. During the collision, one particle can transfer energy from one of its internal quantum states to
the other particle. At room temperature, this will typically be a transition between rotational states that are populated. Another possibility is an exchange of energy with the background thermal
radiation, which at room temperatures peaks at the frequencies of molecular rotational energy level differences. Such a quantum event can be analyzed by assuming a short-lived quasi-molecule is
formed (the energy levels for such an unbound system are a continuum of, so that almost any photon can cause a change of rotational state of the quasi-molecule. A short time later, the quasi-molecule
dissociates into the two original particles but in different energy states. We can describe the overall process as a quasi-measurement, because there is temporary information present about the new
structure. This information is lost as the particles separate in random directions (consistent with conservation of energy, momentum, and angular momentum). The decoherence associated with this
quasi-measurement means that if the post-collision wave functions were to be time reversed, the reverse collision would be very unlikely to send the particles back along their incoming trajectories.
Boltzmann's assumption of random occupancy of possible configurations is no longer necessary. Randomness in the form of "molecular chaos" is assured by quantum mechanics. The result is a statistical
picture that shows that entropy would normally increase even if time could be reversed. This does not rule out the kind of departures from equilibrium that occur in small groups of particles as in
Brownian motion, which Boltzmann anticipated long before Brown's experiments and
's explanation. These fluctuations can be described as forming short-lived information structures, brief and localized regions of negative entropy, that get destroyed in subsequent interactions. Nor
does it change the remote possibility of a recurrence of any particular initial microstate of the system. But it does prove that Poincaré was wrong about such a recurrence being periodic. Periodicity
depends on the dynamical paths of particles being classical, deterministic, and thus time reversible. Since quantum mechanical paths are fundamentally indeterministic, recurrences are simply
statistically improbable departures from equilibrium, like the fluctuations that cause Brownian motion.
Entropy increase can be easily understood as the loss of information as a system moves from an initially ordered state to a final disordered state. Although the physical dimensions of thermodynamic
entropy (joules/ºK) are not the same as (dimensionless) mathematical information, apart from units they share the same famous formula.
To see this very simply, let's consider the well-known example of a bottle of perfume in the corner of a room. We can represent the room as a grid of 64 squares. Suppose the air is filled with
molecules moving randomly at room temperature (blue circles). In the lower left corner the perfume molecules will be released when we open the bottle (when we start the demonstration). What is the
quantity of information we have about the perfume molecules? We know their location in the lower left square, a bit less than 1/64th of the container. The quantity of information is determined by the
minimum number of yes/no questions it takes to locate them. The best questions are those that split the locations evenly (a binary tree). For example:
Answers to these six optimized questions give us six bits of information for each molecule, locating it to 1/64th of the container. This is the amount of information that will be lost for each
molecule if it is allowed to escape and diffuse fully into the room. The thermodynamic entropy increase is Boltzmann's constant
multiplied by the number of bits. If the room had no air, the perfume would rapidly reach an equilibrium state, since the molecular velocity at room temperature is about 400 meters/second. Collisions
with air molecules prevent the perfume from dissipating quickly. This lets us see the approach to equilibrium. When the perfume has diffused to one-sixteenth of the room, the entropy will have risen
2 bits for each molecule, to one-quarter of the room, four bits, etc. Let's look at a | {"url":"https://www.informationphilosopher.com/introduction/physics/interpretation/","timestamp":"2024-11-04T01:27:33Z","content_type":"text/html","content_length":"174925","record_id":"<urn:uuid:d033e5f8-9cf3-4d20-914d-da2accf098ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00542.warc.gz"} |
Mean curvature flow evolves isometrically immersed base Riemannian manifolds M in the direction of their mean curvature in an ambient manifold M. We consider the classical solutions to the mean
curvature flow. If the base manifold M is compact, the short time existence and uniqueness of the mean curvature flow are well-known. For complete noncompact isometrically immersed hypersurfaces M
(uniformly local lipschitz) in Euclidean space, the short time existence was established by Ecker and Huisken in [10]. The short time existence and the uniqueness of the solutions to the mean
curvature flow of complete isometrically immersed manifolds of arbitrary codimensions in the Euclidean space are still open questions. In this thesis, we solve the uniqueness problem affirmatively
for the mean curvature flow of general codimensions and general ambient manifolds. More precisely, let (M, g) be a complete Riemannian manifold of dimension n such that the curvature and its
covariant derivatives up to order 2 are bounded and the injectivity radius is bounded from below by a positive constant, we prove that the solution of the mean curvature flow with bounded second
fundamental form on an isometrically immersed manifold M (may be of high codimension) is unique. In the second part of the thesis, inspired by the Ricci flow, we prove the pseudolocality theorem of
mean curvature flow. As a consequence, we obtain the strong uniqueness theorem, which removes the boundedness assumption of the second fundamental form of the solution in the uniqueness theorem (only
assume the second fundamental form of the initial submanifold is bounded). / Yin, Le. / "July 2007." / Adviser: Leung Nai-Chung. / Source: Dissertation Abstracts International, Volume: 69-01,
Section: B, . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2007. / Includes bibliographical references (p. 65-68). / Electronic reproduction. Hong Kong : Chinese University of Hong
Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe
Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307. | {"url":"http://search.ndltd.org/show.php?id=oai%3Aunion.ndltd.org%3Acuhk.edu.hk%2Foai%3Acuhk-dr%3Acuhk_343982&back=http%3A%2F%2Fsearch.ndltd.org%2Fsearch.php%3Fq%3Dsubject%253A%2522Differentiable%2Bdynamical%2Bsystems%2522","timestamp":"2024-11-10T21:23:28Z","content_type":"text/html","content_length":"7982","record_id":"<urn:uuid:f52bb7a7-24ce-4dbc-bb60-3cb20387ad3d>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00022.warc.gz"} |
How To Write 33% As A Fraction
Writing 33 percent as a fraction requires a basic knowledge of fraction and percentage conversion. A fraction represents an amount relative to a whole. With percentages, the same concept applies,
with 100 designated as the whole. Checking your work requires an additional understanding of fraction-to-decimal conversion.
Step 1
Recognize that a percent represents a fraction, with the percent as the numerator and 100 as the denominator. A numerator is the top number in a fraction, and a denominator is a bottom number.
Step 2
Write down the number 33, with a line beneath it and 100 below the line. You should wind up with 33/100. You now have your fraction.
Step 3
Check your work by dividing 33 by 100, to convert your fraction into a decimal. You should have 0.33. The number is equivalent to a percentage, without the decimal point. You will arrive at 33
percent, the same number with which you started.
Step 4
Round the number to simplify the fraction, if desired. The goal is to reduce the number as much as possible. You can do this by dividing 33 by 33, to arrive at a 1. Whenever a top number is divided,
the bottom number must be divided by the same figure. Therefore, 100 must also be divided by 33. You will be left with 3.03 for the bottom number.
Step 5
Round the bottom number to the first decimal place. Now you have 3. You are left with the simplified fraction of 1/3.
TL;DR (Too Long; Didn't Read)
The simplified 1/3 is actually equivalent to 33 and 1/3 percent. Some instructors will not allow students to round to this number. In this case, 33/100 is the accurate equivalent.
Cite This Article
Doucette, Chrystal. "How To Write 33% As A Fraction" sciencing.com, https://www.sciencing.com/write-33-fraction-8555456/. 24 April 2017.
Doucette, Chrystal. (2017, April 24). How To Write 33% As A Fraction. sciencing.com. Retrieved from https://www.sciencing.com/write-33-fraction-8555456/
Doucette, Chrystal. How To Write 33% As A Fraction last modified March 24, 2022. https://www.sciencing.com/write-33-fraction-8555456/ | {"url":"https://www.sciencing.com:443/write-33-fraction-8555456/","timestamp":"2024-11-07T11:12:33Z","content_type":"application/xhtml+xml","content_length":"69888","record_id":"<urn:uuid:0565cf4a-c1cd-478d-8861-cd290998814b>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00452.warc.gz"} |
dBm Calculator – Accurate Signal Strength Measurement
This tool allows you to convert between dBm and milliwatts instantly.
How to Use the dBm Calculator
The dBm calculator is designed to help you convert power values in watts to decibel-milliwatts (dBm). To use the calculator, follow these simple steps:
1. Enter the power in watts into the “Power (W)” field. This is the value you want to convert to dBm.
2. Enter the reference power in watts into the “Reference Power (W)” field. The default value is 1 watt, which is commonly used.
3. Click the “Calculate” button. The result, in dBm, will be displayed in the “Result” field.
How It Calculates the Results
The formula used by this calculator to convert power (P) in watts to decibel-milliwatts (dBm) is:
dBm = 10 * log10(P / P0)
• P is the power level in watts that you input.
• P0 is the reference power level in watts that you input (default is 1 watt).
The logarithmic conversion allows us to express the power level in a more manageable scale, especially useful in fields like telecommunications and signal processing.
Please note that this calculator has certain limitations:
• Both power (P) and reference power (P0) must be greater than 0.
• This tool is for educational purposes and should be validated against precise instruments in professional settings.
Use Cases for This Calculator
Calculate Decibels (dB) from Milliwatts (mW)
Enter the milliwatts value into the calculator to effortlessly convert it to decibels (dB). This use case is perfect for determining the sound power level based on the milliwatts input, providing you
with accurate decibel measurements for your audio equipment or power output evaluations.
Convert Decibels (dB) to Milliwatts (mW)
Input the decibels value to swiftly convert it to milliwatts, allowing you to understand power levels or signal strength in milliwatts based on the provided decibel value. This feature is beneficial
for professionals in the fields of telecommunications, audio engineering, and electronics.
Calculate Decibels (dB) from Watts (W)
Input the watts value to effortlessly calculate the decibels (dB) using this calculator. This functionality allows you to determine the decibel level based on the power in watts, aiding in various
applications such as electrical engineering and acoustic measurements.
Convert Decibels (dB) to Watts (W)
Enter the decibels value to accurately convert it to watts with ease. This feature is invaluable for professionals needing to understand power levels in watts corresponding to a given decibel value
in fields like sound engineering, telecommunications, and physics.
Calculate Gain/Loss in Decibels (dB)
By entering the initial and final watt or milliwatt values, this calculator determines the gain or loss in decibels (dB). It’s a convenient tool for quickly assessing changes in power levels in
various scenarios, including amplifier setups, signal transmission, and circuit design.
Calculate Percentage Power Change
With this feature, input the initial and final watt or milliwatt values to determine the percentage change in power. Whether you’re analyzing power amplification or attenuation, this functionality
provides you with the exact percentage difference, aiding in your decision-making process.
Convert Decibels (dB) to Voltage (V)
Input the decibels value to convert it to voltage in volts (V) effortlessly. This use case is ideal for professionals working with audio equipment, RF engineering, or signal processing, as it helps
in understanding the voltage levels corresponding to a specific decibel measure.
Convert Voltage (V) to Decibels (dB)
Enter the voltage value to convert it to decibels (dB) swiftly. This functionality is crucial for individuals dealing with electronics, telecommunications, or electrical systems, providing them with
insights into the decibel levels associated with specific voltage values.
Calculate Signal-to-Noise Ratio (SNR) in Decibels (dB)
By inputting the signal power and noise power values into the calculator, determine the signal-to-noise ratio in decibels (dB). This feature is essential for professionals in communication systems,
audio engineering, and wireless networks, allowing precise measurement and analysis of signal quality.
Calculate Voltage Gain in Decibels (dB)
Input the output and input voltage values to calculate the voltage gain in decibels (dB) accurately. Whether you’re designing audio amplifiers, RF circuits, or control systems, this use case helps
you understand the voltage amplification or attenuation in your electronic components. | {"url":"https://madecalculators.com/dbm-calculator/","timestamp":"2024-11-09T17:40:54Z","content_type":"text/html","content_length":"145758","record_id":"<urn:uuid:17a68f99-e426-4b04-bb51-42eae7f0ea9d>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00607.warc.gz"} |
Do you know where are quartiles? - Yihui Xie | 谢益辉
This is a very simple question but it’s interesting too. In some very influential domestic textbooks of statistics (e.g. Jia Junping, 2004), the definition for quartiles are not appropriate. I didn’t
notice this point until a member of COS raised a question about how to compute quartiles in M$ Excel; he just doubted the results computed by the formula QUANTILE() of Excel according to those famous
textbooks: the lower and upper quartiles are respectively defined at (n+1)/4 and 3*(n+1)/4. While I believe [(n+1)/2 + 1]/2 and [(n+1)/2 + 1]/2 are better.
Actually when I made some tests in R, Excel, SPSS, Stata and Statistica, I found that all these statistical packages computed quartiles in the latter way except SPSS, which adopted the former way.
Just use five numbers from 1 to 5 to test the results: if the lower quartile is 2, then the latter way is used, otherwise 1.5 means the former.
My another interesting finding in SPSS is that it computes quartiles in the former way but draws boxplots in the latter way. Again we may use 1, 2, 3, 4, 5 to confirm it. The quartiles presented in
the boxplot are 2 and 4, whereas computed by “Frequencies” are 1.5 and 4.5. | {"url":"https://yihui.org/cn/2007/04/do-you-know-where-are-quartiles/","timestamp":"2024-11-14T08:56:29Z","content_type":"text/html","content_length":"7362","record_id":"<urn:uuid:77b112a1-0031-4646-b694-c8234d6c01d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00392.warc.gz"} |