Sistemas Dinámicos y Teoría Ergódica
Organizadores: Jairo Bochi (jairo.bochi@mat.uc.cl), Katrin Gelfert (gelfert@im.ufrj.br), Rafael Potrie (rpotrie@cmat.edu.uy)
• Thursday 16
15:00 - 15:45
Polynomial decay of correlations of geodesic flows on some nonpositively curved surfaces
Yuri Lima (Universidade Federal do Ceará, Brasil)
We consider a class of nonpositively curved surfaces and show that their geodesic flows have polynomial decay of correlations. This is a joint work with Carlos Matheus and Ian Melbourne.
15:45 - 16:30
Actions of abelian-by-cyclic groups on surfaces
Sebastián Hurtado-Salazar (University of Chicago, Estados Unidos)
I'll discuss some results and open questions about global rigidity of actions of certain solvable groups (abelian by cyclic) on two dimensional manifolds. Joint work with Jinxin Xue.
16:45 - 17:30
On tilings, amenable equivalence relations and foliated spaces
Matilde Martínez (Universidad de la República, Uruguay)
I will describe a family of foliated spaces constructed from tilings on Lie groups. They provide a negative answer to the following question by G. Hector: are leaves of a compact foliated space
always quasi-isometric to Cayley graphs? Their construction was motivated by a profound conjecture of Giordano, Putnam and Skau on the classification, up to orbit equivalence, of actions of
countable amenable groups on the Cantor set. I will briefly explain how these examples relate to the GPS conjecture. This is joint work with Fernando Alcalde Cuesta and Álvaro Lozano Rojo.
17:30 - 18:15
Conjugacy classes of big mapping class groups
Ferrán Valdez (Universidad Nacional Autónoma de México, México)
A surface \(S\) is big if its fundamental group is not finitely generated. To each big surface one can associate its mapping class group, \(\mathrm{Map}(S)\), which is \(\mathrm{Homeo}(S)\) mod
isotopy. This is a Polish group for the compact-open topology. In this talk we study the action of \(\mathrm{Map}(S)\) on itself by conjugacy and characterize when this action has a dense or
co-meager orbit. This is a joint work with Jesus Hernández Hernández, Michael Hrusak, Israel Morales, Anja Randecker and Manuel Sedano (arxiv.org/abs/2105.11282v2).
• Friday 17
15:00 - 15:45
Multiplicative actions and applications
Sebastián Donoso (Universidad de Chile, Chile)
In this talk, I will discuss recurrence problems for actions of the multiplicative semigroup of integers. Answers to these problems have consequences in number theory and combinatorics, such as
understanding whether Pythagorean triples are partition regular. I will present the questions in general terms, the strategies from dynamics to address them, and mention some recent results we obtained.
This is joint work with Anh Le, Joel Moreira, and Wenbo Sun.
15:45 - 16:30
Continuity of center Lyapunov exponents.
Karina Marín (Universidade Federal de Minas Gerais, Brasil)
The continuity of Lyapunov exponents has been extensively studied in the context of linear cocycles. However, there are few theorems that provide information for the case of diffeomorphisms. In
this talk, we will review some of the known results and explain the main difficulties that appear when trying to adapt the usual techniques to the study of center Lyapunov exponents of partially
hyperbolic diffeomorphisms.
16:45 - 17:30
Lyapunov exponents of hyperbolic and partially hyperbolic diffeomorphisms
Radu Saghin (Pontificia Universidad Católica de Valparaiso, Chile)
If \(f\) is a diffeomorphism on a compact \(d\)-dimensional manifold \(M\) preserving the Lebesgue measure \(\mu\), then Oseledets Theorem tells us that almost every point has \(d\) Lyapunov
exponents (possibly repeated): $$\lambda_1(f,x)\leq\lambda_2(f,x)\leq\dots\leq\lambda_d(f,x).$$ If furthermore \(\mu\) is ergodic, then the Lyapunov exponents are independent of the point \(x\)
(a.e.). We are interested in understanding the map $$f\in Diff_{\mu}^r(M)\ \mapsto\ (\lambda_1(f),\lambda_2(f),\dots,\lambda_d(f)),\ r\geq 1.$$ In general this map may be very complicated.
However, if we restrict our attention to the set of Anosov or partially hyperbolic diffeomorphisms, then we can understand this map better. I will present various results related to the
regularity, rigidity and flexibility of the Lyapunov exponents in this setting.
Some of the results presented are joint with C. Vasquez, F. Valenzuela, J. Yang and P. Carrasco.
17:30 - 18:15
Zero Entropy area preserving homeomorphisms on surfaces
Fabio Tal (Universidade de São Paulo, Brasil)
We review some recent results describing the behaviour of homeomorphisms of surfaces with zero topological entropy. Using mostly techniques from Brouwer theory, we show that the dynamics of such
maps in the sphere is very restricted and in many ways similar to that of an integrable flow. We also show that many of these restrictions are still valid for \(2\)-torus homeomorphisms.
How to Convert Joules to Grams
Joules are an expression of energy with the base units (kilograms × meters^2)/seconds^2. In modern physics, the mass of an object is also a measure of the energy contained in the object. Albert
Einstein proposed that mass and energy are related by the equation E = mc^2, where "E" is the object's energy in joules, "m" is the object's mass and "c" is the speed of light. This equation,
called the mass-energy equivalence formula, is used to convert between energy and mass.
Set up the mass-energy equivalence equation. Set your amount of joules equal to the mass multiplied by the square of the speed of light, which is 3×10^8 meters per second. As an example, if you have 5 joules of
energy, the equation E = mc^2 becomes 5 = m × (3×10^8)^2.
Solve for "m" in the energy equation by dividing both sides of the equation by (3×10^8)^2. Using the same example, "m" is equal to 5.556×10^-17 kilograms.
Convert "m" to grams. There are 1,000 grams in every kilogram, so you can convert 5.556×10^-17 kilograms to grams by multiplying by 1,000. The resulting answer is 5.556×10^-14 grams.
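For readers who prefer to script the conversion, here is a minimal Python version of the same calculation (the function name is ours):

```python
SPEED_OF_LIGHT = 3.0e8  # meters per second

def joules_to_grams(energy_joules):
    """Convert an energy in joules to the equivalent mass in grams using E = mc^2."""
    mass_kg = energy_joules / SPEED_OF_LIGHT**2   # solve E = m*c^2 for m
    return mass_kg * 1000.0                       # kilograms to grams

print(joules_to_grams(5))   # about 5.556e-14 grams, matching the example above
```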
About the Author
Kay Santos is a freelance writer specializing in math and science. She holds Bachelor of Science degrees in physics and health science, both from Clemson University.
Evaluation of lidar-assisted wind turbine control under various turbulence characteristics
© Author(s) 2023. This work is distributed under the Creative Commons Attribution 4.0 License.
Lidar systems installed on the nacelle of wind turbines can provide a preview of incoming turbulent wind. Lidar-assisted control (LAC) allows the turbine controller to react to changes in the wind
before they affect the wind turbine. Currently, the most proven LAC technique is collective pitch feedforward control, which has been found to be beneficial for load reduction. In the literature, the
benefits were mainly investigated using standard turbulence parameters suggested by the IEC 61400-1 standard and assuming Taylor's frozen hypothesis (i.e., the turbulence measured by the lidar propagates
unchanged to the rotor). In reality, the turbulence spectrum and the spatial coherence change with the atmospheric stability conditions. Also, Taylor's frozen hypothesis does not take into account the
coherence decay of turbulence in the longitudinal direction. In this work, we consider three atmospheric stability classes, unstable, neutral, and stable, and generate four-dimensional stochastic
turbulence fields based on two models: the Mann model and the Kaimal model. The generated four-dimensional stochastic turbulence fields include realistic longitudinal coherence, thus avoiding
assuming Taylor's frozen hypothesis. The Reference Open-Source Controller (ROSCO) by NREL is used as the baseline feedback-only controller. A reference lidar-assisted controller is developed and used
to evaluate the benefit of LAC. Considering the NREL 5.0 MW reference wind turbine and a typical four-beam pulsed lidar system, it is found that the filter design of the LAC is not sensitive to the
turbulence characteristics representative of the investigated atmospheric stability classes. The benefits of LAC are analyzed using the aeroelastic tool OpenFAST. According to the simulations, LAC's
benefits are mainly the reductions in rotor speed variation (up to 40%), tower fore–aft bending moment (up to 16.7%), and power variation (up to 20%). This work reveals that the benefits of LAC
can depend on the turbulence models, the turbulence parameters, and the mean wind speed.
Traditionally, wind turbine control only relies on the feedback (FB) control strategy. For the above-rated wind operations, the generator speed change caused by the turbulence wind is measured, and
the blade pitch is adjusted to maintain the rated rotor/generator speed. This means that the turbine reacts to the wind disturbance only after it has been affected. A nacelle lidar scanning in front
of the turbine can provide a preview of the incoming turbulence. Based on the preview, a rotor-effective wind speed (REWS) can be derived and used to provide a feedforward pitch signal. The
feedforward pitch signal can be simply added to the conventional feedback controller (Schlipf, 2015), which is often referred to as lidar-assisted collective pitch feedforward control (CPFF). Apart
from CPFF, there are other lidar-assisted control (LAC) concepts that have been presented in the literature, e.g., the works by Schlipf et al. (2013b), Schlipf (2015), Schlipf et al. (2020). However,
CPFF is so far the most promising technology, and it has been deployed in commercial projects (Schlipf et al., 2018b). Thus, we focus on assessing the benefits of CPFF in this work.
To utilize the lidar measurement for LAC, a correlation study is necessary to determine how much the lidar-estimated REWS is correlated with the actual REWS that acts on the turbine rotor. Some facts
that could have an impact on the measurement correlation are listed below:
• a.
Lidar measurement positions. A typical lidar system has fewer measurement points within the rotor-swept area compared to the rotational sampling rotor. Thus, the lidar-estimated REWS is less
spatially filtered.
• b.
Line-of-sight (LOS) wind speed v[los] measurement. This is the cumulative projection of longitudinal (u), lateral (v), and vertical (w) components in the lidar beam direction. The turbine's
aerodynamic performance is mainly driven by the u component, and lidar is expected to measure the u component for control purposes. In reality, the lidar measurements can be contaminated by
lateral and vertical wind speed components (Held and Mann, 2019), because of the beam opening angles, the nacelle movement, or the turbine yaw misalignment.
• c.
Lidar probe volume. The lidar measurement is the weighted average of the LOS along the lidar beam (Peña et al., 2013, 2017).
• d.
Turbulence spectrum and coherence. The lidar measurement coherence is mathematically derived based on the spectrum and coherence (Schlipf, 2015; Held and Mann, 2019; Guo et al., 2022a), which
will be further discussed in Sect. 3.
• e.
Atmospheric stability. The turbulence spectrum and coherence have been shown to vary by atmospheric stability conditions (Peña, 2019; Guo et al., 2022a).
According to the IEC standard, two turbulence models are commonly used for wind turbine design as provided by the IEC 61400-1:2019 (2019) standard; one is the Mann (1994) uniform shear model, and
another one is the Kaimal et al. (1972) spectra combined with exponential coherence model (hereafter referred to as the Mann model and the Kaimal model, respectively). The derivation of lidar
measurement coherence based on a specific turbulence model has been studied in the literature. For example, Schlipf et al. (2013a) and Schlipf (2015) show the derivation by the Kaimal model. Mirzaei
and Mann (2016), Held and Mann (2019), and Guo et al. (2022a) demonstrate the solution for the Mann model. Based on the two turbulence models, several authors investigated the lidar measurement
coherence considering different lidar measurement trajectories and turbine sizes, e.g., the works by Simley et al. (2018), Held and Mann (2019), and Dong et al. (2021). Specifically, in work by Dong
et al. (2021), the lidar measurement coherence by the two turbulence models is compared, assuming Taylor’s frozen hypothesis. In this paper, we also consider two turbulence models and include
turbulence evolution in our analysis.
Once the lidar measurement coherence is analyzed, a filter needs to be designed to filter out uncorrelated information in the lidar-estimated REWS. Because the filter introduces a certain time delay
(Schlipf, 2015), a timing algorithm is necessary to ensure the turbine feedforward pitch acts at the correct time. Usually, the time that turbulence requires to propagate from upstream to downstream,
the time delay in the pitch actuator, the time delay by averaging sequential lidar measurements of a full scan, and the time delay caused by filtering should all be considered. In this work, we will
contribute by providing a reference lidar-assisted controller. It includes (1) a lidar data processing module that provides the lidar-estimated REWS, (2) a feedforward blade pitch rate provider, and
(3) a modified Reference Open-Source Controller (ROSCO) with the capability to accept feedforward pitch rate signal. ROSCO (Abbas et al., 2022) is an open, modular, and fully adaptable baseline wind
turbine controller with industry-standard functionality.
When evaluating the benefits of LAC, Schlipf (2015) uses the Kaimal model with the turbulence spectral parameters provided by the IEC standard through FAST (Jonkman and Buhl, 2005) (the previous
version of OpenFAST NREL, 2022) aeroelastic simulation. With a circular scanning lidar, LAC is found to bring a noticeable reduction in the lifetime damage equivalent load (DEL) in the tower base
fore–aft bending moment, the low-speed shaft torque, and the blade root out-of-plane moment. However, the variations of turbulence parameters have not been considered.
The recent developments in turbulence simulation tools, evoTurb by Chen et al. (2022) and the 4D Mann Turbulence Generator by Guo et al. (2022a), have made it possible to integrate turbulence
evolution into aeroelastic simulation. With the updated OpenFAST lidar simulator (Guo et al., 2022b), the 4D turbulence field can be imported into OpenFAST, and the upstream lidar measurement can be
simulated using the upstream turbulence fields.
The variation of turbulence parameters from the standard values given by IEC 61400-1:2019 (2019) is of interest for wind energy. Turbulence parameters under different atmospheric stability
classes are investigated and summarized by, e.g., Cheynet et al. (2017), Peña (2019), and Nybø et al. (2020). For example, Fig. 1 shows how the turbulence structure changes with the turbulence length
scale L. A larger coherent eddy structure is observed under unstable conditions, and the eddy structure is much smaller under stable conditions; in the neutral case, the eddy structure is
somewhere between the two. The length scale can have an impact on the power spectrum and turbulence spatial coherence (as later discussed in Sect. 2.4). Further, the spectrum and coherence can
have potential impacts not only on the lidar measurement coherence but also on the turbine loads because the turbulence spectrum peaks can distribute at different frequency ranges, and different
frequencies can produce different excitations for the turbine structure motions.
In this work, we summarize from the literature how the turbulence spectrum and spatial coherence can vary with atmospheric stability. Three atmospheric stability classes, unstable, neutral, and stable, are
considered. For each atmospheric stability class, the Mann model parameters are collected, and then the Kaimal model parameters are fitted to have similar spectra and coherence compared to the Mann
model. Then the four-dimensional stochastic turbulence fields are generated using the 4D Mann Turbulence Generator (Guo et al., 2022a) and evoTurb (Chen et al., 2022). The benefits of LAC are then
assessed using a typical four-beam commercial lidar configuration and the 5 MW reference wind turbine by NREL (Jonkman et al., 2009), through the lidar-simulator-integrated aeroelastic simulation
tool: OpenFAST. To compare CPFF with the traditional feedback-only controller, ROSCO is considered to be the baseline feedback controller.
This paper is organized as follows: Sect. 2 gives the background about turbulence modeling, Sect. 3 discusses the correlation between the REWS and the lidar-estimated REWS, Sect. 4 introduces the
design of lidar-assisted controller, Sect. 5 presents and discusses the simulation results, and Sect. 6 draws conclusions for this research.
In this section, we first introduce the Mann (1994) model and the Kaimal et al. (1972) spectrum and exponential coherence model (Davenport, 1961) used in this work. Then, the methods to include
turbulence evolution in the two turbulence models are discussed. Lastly, we show the turbulence spectra and coherence under different atmospheric stability classes.
2.1 The Mann turbulence model
The Mann (1994) model is a spectral tensor model recommended by the IEC 61400-1:2019 (2019) standard for wind turbine load calculations. It applies the rapid distortion theory (Hunt and Carruthers,
1990) to an isotropic spectral tensor based on the von Kármán (1948) energy spectrum, to model the shear stretched eddy structures.
At a certain moment, the velocity field can be described by $\tilde{\mathbf{u}}(\mathbf{x})$, with $\mathbf{x}=(x,y,z)$ the position vector in space (Cartesian coordinates). After applying Taylor's frozen hypothesis (Taylor, 1938) and Reynolds' decomposition, the fluctuating part of the turbulence $\mathbf{u}(\mathbf{x})=\tilde{\mathbf{u}}-\mathbf{U}$ about the mean flow $\mathbf{U}=(U,0,0)$ is assumed homogeneous in space, and it can be computed from the Fourier transform
$$\mathbf{u}(\mathbf{x},t_{0})=\int \hat{\mathbf{u}}(\mathbf{k},t_{0})\exp(\mathrm{i}\,\mathbf{k}\cdot\mathbf{x})\,\mathrm{d}\mathbf{k},\quad(1)$$
where $\hat{\mathbf{u}}(\mathbf{k},t_{0})$ is the Fourier coefficient of the velocity field, $\mathrm{i}$ is the imaginary unit, and $\int\mathrm{d}\mathbf{k}\equiv\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\mathrm{d}k_{1}\,\mathrm{d}k_{2}\,\mathrm{d}k_{3}$ denotes integration over all wavenumber vectors $\mathbf{k}=(k_{1},k_{2},k_{3})$. Conversely,
$$\hat{\mathbf{u}}(\mathbf{k},t_{0})=\frac{1}{(2\pi)^{3}}\int \mathbf{u}(\mathbf{x},t_{0})\exp(-\mathrm{i}\,\mathbf{k}\cdot\mathbf{x})\,\mathrm{d}\mathbf{x},\quad(2)$$
with $\int\mathrm{d}\mathbf{x}\equiv\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z$. The Fourier coefficients are connected to the elements of the spectral tensor (denoted as $\mathbf{\Phi}$) by
$$\Phi_{ij}(\mathbf{k})\,\delta(\mathbf{k}-\mathbf{k}^{\prime})=\langle \hat{u}_{i}^{*}(\mathbf{k},t_{0})\,\hat{u}_{j}(\mathbf{k}^{\prime},t_{0})\rangle,\quad(3)$$
where $\langle\,\rangle$ denotes the ensemble average, $*$ denotes the complex conjugate, and $\delta()$ is the Dirac delta function. $\mathbf{k}^{\prime}$ is also a wavenumber vector, used to differentiate it from $\mathbf{k}$. Equation (3) implies that the ensemble averages of the Fourier coefficients of non-identical wavenumber vectors are all zero. $i,j=1,2,3$ are indices that stand for the $u$, $v$, and $w$ components, i.e., $\mathbf{u}=(u_{1},u_{2},u_{3})=(u,v,w)$. The detailed expression of $\Phi_{ij}(\mathbf{k})$ can be found in the work by Mann (1994). Note that the spectral tensor $\mathbf{\Phi}$ is a 3 by 3 matrix for any wavenumber vector $\mathbf{k}$, and $\Phi_{ij}(\mathbf{k})$ denotes an element in the matrix. Apart from the wavenumber vector, there are three other parameters in the
model. They are as follows:
• $\alpha\epsilon^{2/3}$ [m$^{4/3}$ s$^{-2}$] is an energy level constant valid in the inertial subrange, composed of the spectral Kolmogorov constant $\alpha$ and the rate of viscous dissipation of specific turbulent kinetic energy $\epsilon$ (Mann, 1998). This constant acts as a proportional gain on the spectral tensor, and it is often adjusted to obtain a specific turbulence intensity (TI).
• $L$ [m] is a length scale related to the size of the eddies containing the most energy (Held and Mann, 2019).
• $\Gamma$ [–] is a non-dimensional anisotropy parameter due to the shear effect in the near-surface boundary layer. When $\Gamma=0$, the turbulence is isotropic (Mann, 1994, 1998).
Mann (1994) uses $\Gamma$ to calculate the eddy lifetime by
$$\tau(\mathbf{k})=\Gamma\left(\frac{\mathrm{d}U}{\mathrm{d}z}\right)^{-1}(|\mathbf{k}|L)^{-\frac{2}{3}}\left[{}_{2}F_{1}\!\left(\tfrac{1}{3},\tfrac{17}{6};\tfrac{4}{3};-(|\mathbf{k}|L)^{-2}\right)\right]^{-\frac{1}{2}},\quad(4)$$
where $_{2}F_{1}()$ is a hypergeometric function and $\frac{\mathrm{d}U}{\mathrm{d}z}$ is the mean vertical shear profile. The eddy lifetime $\tau$ distorts the wavenumber $k_{3}$ (corresponding to the $z$ direction) from the initial shearless state $k_{30}$ by $k_{3}=k_{30}-\beta k_{1}$. Here, $\beta=\frac{\mathrm{d}U}{\mathrm{d}z}\tau$ is a non-dimensional distortion factor (Mann, 1994). The effect of the hypergeometric function $_{2}F_{1}()$ is to have
$$\tau(\mathbf{k})\propto\begin{cases}(|\mathbf{k}|L)^{b_{1}}, & |\mathbf{k}|L\ll 1,\\ (|\mathbf{k}|L)^{b_{2}}, & |\mathbf{k}|L\gg 1,\end{cases}\quad(5)$$
where $b_{1}$ and $b_{2}$ are two constants standing for the slopes of $\tau$ in logarithmic scale. Instead of using the hypergeometric function, Guo et al. (2022a) proposed another equation for the eddy lifetime,
$$\tau(\mathbf{k})=\Gamma\left(\frac{\mathrm{d}U}{\mathrm{d}z}\right)^{-1}\left[a\,(|\mathbf{k}|L)^{b_{1}}\left((|\mathbf{k}|L)^{10}+1\right)^{\frac{b_{2}-b_{1}}{10}}\right],\quad(6)$$
with the constant $a$ expressed in terms of the hypergeometric function $_{2}F_{1}$, which makes it straightforward to adjust the slopes of the eddy lifetime. They found that adjusting the slope constant $b_{1}$ for stable atmospheric stability tends to give better agreement of the spectra and coherence between the model and measurements from a lidar and a meteorological mast. We will use Eq. (6) for the rest of this paper.
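As a small numerical illustration of Eq. (6) (the function name and argument list are ours; the constants a, b1, and b2 must be supplied by the user), a Python sketch could look like this:

```python
def eddy_lifetime(k_mag, L, Gamma, dUdz, a, b1, b2):
    """Blended eddy lifetime of Eq. (6): slope b1 for |k|L << 1 and b2 for |k|L >> 1."""
    kL = k_mag * L
    return Gamma / dUdz * a * kL**b1 * (kL**10 + 1.0)**((b2 - b1) / 10.0)
```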
The one-dimensional (along the longitudinal wavenumber) cross-spectra of all velocity components with separations $\Delta y$ and $\Delta z$ can be obtained by
$$F_{ij}(k_{1},\Delta y,\Delta z)=\int \Phi_{ij}(\mathbf{k})\exp\!\left(\mathrm{i}(k_{2}\Delta y+k_{3}\Delta z)\right)\mathrm{d}\mathbf{k}_{\perp},\quad(8)$$
where $\int\mathrm{d}\mathbf{k}_{\perp}\equiv\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\mathrm{d}k_{2}\,\mathrm{d}k_{3}$. Specifically, when $i=j$ and $\Delta y=\Delta z=0$, it becomes the auto-spectrum of one velocity component at one point, usually written as $F_{ii}(k_{1})$. The magnitude-squared coherence between two points in the same $yz$ plane is often of interest, and it can be calculated by (Mann, 1994)
$$\mathrm{coh}_{ij}^{2}(k_{1},\Delta y,\Delta z)=\frac{|F_{ij}(k_{1},\Delta y,\Delta z)|^{2}}{F_{ii}(k_{1})\,F_{jj}(k_{1})}.\quad(9)$$
The $yz$ plane co-coherence and quad-coherence are defined by
$$\mathrm{cocoh}_{ij}(k_{1},\Delta y,\Delta z)=\frac{\Re\!\left(F_{ij}(k_{1},\Delta y,\Delta z)\right)}{\sqrt{F_{ii}(k_{1})\,F_{jj}(k_{1})}},\quad(10)$$
$$\mathrm{quadcoh}_{ij}(k_{1},\Delta y,\Delta z)=\frac{\Im\!\left(F_{ij}(k_{1},\Delta y,\Delta z)\right)}{\sqrt{F_{ii}(k_{1})\,F_{jj}(k_{1})}},\quad(11)$$
where $\Re$ and $\Im$ are the real and imaginary part operators, respectively.
2.2 Kaimal spectra and exponential coherence model
The Kaimal model given by IEC 61400-1:2019 (2019) uses the following formula to determine the auto-spectra of the velocity components:
$$S_{i}(f)=\frac{4\,\sigma_{i}^{2}\,\frac{L_{i}}{U_{\mathrm{ref}}}}{\left(1+6f\frac{L_{i}}{U_{\mathrm{ref}}}\right)^{5/3}},\quad(12)$$
where $f$ is the frequency, $L_{i}$ is the integral length scale, $\sigma_{i}$ is the standard deviation, and $U_{\mathrm{ref}}$ is the reference wind speed, equivalent to the hub-height mean wind speed. The coherence (with square) of the $u$ components of two points in the $yz$ plane is described as
$$\gamma_{yz}^{2}(\Delta yz,f)=\exp\!\left(-2a_{yz}\,\Delta yz\sqrt{\left(\frac{f}{V_{\mathrm{hub}}}\right)^{2}+\left(\frac{0.12}{L_{\mathrm{c}}}\right)^{2}}\right),\quad(13)$$
with $\Delta yz=\sqrt{\Delta y^{2}+\Delta z^{2}}$ the separation distance, $a_{yz}$ the coherence decay constant, and $L_{\mathrm{c}}$ the coherence scale parameter.
Note that the coherence without square is used in IEC 61400-1:2019 (2019). The $yz$ plane coherence for the $v$ and $w$ components is not given by IEC 61400-1:2019 (2019), and they are ignored in this work.
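A minimal Python sketch of Eqs. (12) and (13) (function names are ours; U_ref is used as the hub-height mean wind speed) may help make the parameter roles concrete:

```python
import numpy as np

def kaimal_spectrum(f, sigma_i, L_i, U_ref):
    """Single-sided Kaimal auto-spectrum S_i(f), Eq. (12)."""
    return 4.0 * sigma_i**2 * (L_i / U_ref) / (1.0 + 6.0 * f * L_i / U_ref)**(5.0 / 3.0)

def coherence_yz_squared(f, dyz, a_yz, L_c, U_ref):
    """Magnitude-squared exponential coherence in the yz plane, Eq. (13)."""
    return np.exp(-2.0 * a_yz * dyz * np.sqrt((f / U_ref)**2 + (0.12 / L_c)**2))
```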
2.3 Modeling of turbulence evolution
The turbulence evolution refers to the phenomenon that the eddy structure changes when the turbulence propagates from upstream to downstream. It is often represented using longitudinal coherence.
2.3.1 Extending the Mann model to include evolution
A space–time tensor that extends the three-dimensional Mann spectral tensor $\mathbf{\Phi}$ to account for the temporal evolution of the turbulence field has been proposed by Guo et al. (2022a). The space–time tensor has been shown to provide good agreement for the turbulence spectra and coherence, including the spectra of all velocity components and the coherence for longitudinal, vertical–lateral, and combined spatial separations. The validation was made using measurements from a pulsed lidar and a meteorological mast. Details of the model validation can be found in the work by Guo et al. (2022a). The space–time tensor is written as
$$\Theta_{ij}(\mathbf{k},\Delta t)=\exp\!\left(-\frac{\Delta t}{\tau_{e}(\mathbf{k})}\right)\Phi_{ij}(\mathbf{k}),\quad(14)$$
which defines the ensemble average
$$\Theta_{ij}(\mathbf{k},\Delta t)\,\delta(\mathbf{k}-\mathbf{k}^{\prime})=\langle \hat{u}_{i}^{*}(\mathbf{k},t_{0})\,\hat{u}_{j}(\mathbf{k}^{\prime},t_{0}+\Delta t)\rangle,\quad(15)$$
where $\hat{u}_{j}(\mathbf{k}^{\prime},t_{0}+\Delta t)$ denotes the Fourier coefficients of the turbulence field at time $t_{0}+\Delta t$. $\tau_{e}$ is another eddy lifetime (different from $\tau$) that defines the temporal evolution of the turbulence field. The expression
$$\tau_{e}(\mathbf{k})=\gamma\left[a\,(|\mathbf{k}|L)^{-1}\left((|\mathbf{k}|L)^{10}+1\right)^{\frac{b_{2}+1}{10}}\right]\quad(16)$$
was found to predict the longitudinal coherence well, as investigated by Guo et al. (2022a). Here, $\gamma$ is a parameter that determines the strength of turbulence evolution.
In the space–time tensor, the turbulence field is assumed to travel with a mean reference wind speed $U_{\mathrm{ref}}$. After a time $\Delta t$, the field moves downstream in the positive $x$ direction by $U_{\mathrm{ref}}\Delta t$. Thus, for two points with a longitudinal separation of $\Delta x$, the longitudinal coherence (magnitude-squared) of the $u$ component can be calculated from
$$\mathrm{coh}_{11}^{2}(k_{1},\Delta x)=\frac{\left|\int \Theta_{11}(\mathbf{k},\Delta x/U_{\mathrm{ref}})\,\mathrm{d}\mathbf{k}_{\perp}\right|^{2}}{F_{11}^{2}(k_{1})},\quad(17)$$
where
$$F_{11}(k_{1})=\int \Phi_{11}(\mathbf{k})\,\mathrm{d}\mathbf{k}_{\perp}\quad(18)$$
is the auto-spectrum of the $u$ component. In practice, the wavenumber-based spectra or coherence are converted to frequency-based ones using the conversion $k_{1}=2\pi f/U_{\mathrm{ref}}$, assuming Taylor (1938)'s frozen hypothesis.
2.3.2 Exponential longitudinal coherence model
On the other hand, Simley and Pao (2015) adjusted the exponential coherence model listed in IEC 61400-1:2019 (2019) by replacing the transverse and vertical separations with longitudinal separations, which gives the following expression for the longitudinal coherence:
$$\gamma_{x}^{2}(\Delta x,f)=\exp\!\left(-a_{x}\,\Delta x\sqrt{\left(\frac{f}{U_{\mathrm{ref}}}\right)^{2}+b_{x}^{2}}\right),\quad(19)$$
where $a_{x}$ and $b_{x}$ are two parameters, and $f$ is the frequency. Specifically, $a_{x}$ determines the decay of the coherence, and $b_{x}$ determines the intercept (value at zero frequency) (Chen et al., 2021). Simley and Pao (2015) validated Eq. (19) using large eddy simulations (LESs) of different atmospheric stability classes. Besides, Davoust and von Terzi (2016) and Chen et al. (2021) verified the exponential evolution model using lidar measurements, showing that the expression by Simley and Pao (2015) agrees well with the measurements. In their study, they found possible $a_{x}$ and $b_{x}$ values by fitting the coherence calculated from measurement data to the model. As a result, $0<a_{x}<6$ was observed, and $b_{x}$ was found to be of a very small order of magnitude.
To include the exponential longitudinal coherence model in the analysis of lidar measurement correlation, a general "direct product" approach is used to combine the lateral–vertical coherence and the longitudinal coherence (Laks et al., 2013; Simley, 2015; Bossanyi et al., 2014; Schlipf et al., 2013a), which means the overall coherence is
$$\gamma_{xyz}(f)=\gamma_{yz}(f)\cdot\gamma_{x}(f).\quad(20)$$
As shown by Chen et al. (2022), the direct product approach allows an efficient algorithm to generate the Kaimal-model-based 4D stochastic turbulence field from statistically independent 3D turbulence
fields using evoTurb.
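A hedged Python sketch of the direct-product combination in Eq. (20), built from the square roots of Eqs. (13) and (19) and treating U_ref as the convection speed in both factors (function names are ours):

```python
import numpy as np

def coherence_yz(f, dyz, a_yz, L_c, U_ref):
    """yz-plane coherence, the square root of Eq. (13)."""
    return np.exp(-a_yz * dyz * np.sqrt((f / U_ref)**2 + (0.12 / L_c)**2))

def coherence_x(f, dx, a_x, b_x, U_ref):
    """Longitudinal coherence, the square root of Eq. (19)."""
    return np.exp(-0.5 * a_x * dx * np.sqrt((f / U_ref)**2 + b_x**2))

def coherence_combined(f, dx, dyz, a_x, b_x, a_yz, L_c, U_ref):
    """Direct-product combination of Eq. (20)."""
    return coherence_yz(f, dyz, a_yz, L_c, U_ref) * coherence_x(f, dx, a_x, b_x, U_ref)
```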
2.4 Turbulence under different atmospheric stability classes
Atmospheric stability indicates the buoyancy effect on the turbulence generation, and it is usually related to the temperature gradient by height. It is interesting to investigate its impact on the
filter design of LAC since the turbine will experience different atmospheric stability conditions during operation. The filter is necessary to filter out the uncorrelated frequencies in the REWS
estimated by lidar, as will be discussed later in Sect. 3. In the rest of this paper, we use the Mann turbulence parameter sets representative of unstable, neutral, and stable conditions based on the
study by Peña (2019) and Guo et al. (2022a), as listed in Table 1. It is worth mentioning that the $\alpha\epsilon^{2/3}$ parameter is scaled such that the TI corresponds to the IEC 61400-1:2019 (2019) class 1A definition. In reality, the turbulence intensity is related to the atmospheric conditions: TI is generally high in unstable conditions, moderate in neutral conditions, and low in stable conditions (Peña et al., 2017). In this work, we emphasize analyzing the impact of turbulence length scale and anisotropy on turbine loads and LAC benefits. Therefore, the same TI level is assumed for the three stability classes. This assumption is not realistic, but it helps to isolate the impact of the length scale on turbine loads, as
later analyzed in Sect. 5.2.
As for the Kaimal model, we chose the parameters listed by the IEC 61400-1:2019 (2019) for the neutral stability because these parameters were already found to give similar spectra and coherence
compared to the Mann model with neutral stability parameters. Also, keeping these parameters allows readers to compare the results with those from existing literature, e.g., Schlipf (2015), Simley
et al. (2018), and Dong et al. (2021). For the unstable and stable stability classes, we fit the Kaimal spectra to the Mann-model-based spectra using the following optimization process:
$$\min_{L_{i},\sigma_{i}}\ \sum_{n=1}^{N}\left[\frac{1}{k_{1,n}}\left(S_{i}(f_{n})\,f_{n}-2F_{ii}(k_{1,n})\,k_{1,n}\right)^{2}\right],\quad\text{s.t. } k_{1,n}=\frac{2\pi f_{n}}{U_{\mathrm{ref}}}\ \text{and}\ i=1,2,3.\quad(21)$$
Here, $n$ is the index of the discrete frequency vector $f_{n}$ and wavenumber vector $k_{1,n}$, $N$ is the size of the discrete vector, and s.t. denotes "subject to". Note that the Mann model spectra $F_{ii}(k_{1,n})$ are multiplied by 2 since they are two-sided spectra, while the Kaimal spectra are single-sided. Similarly, we fit the $yz$ plane exponential coherence of the Kaimal model to the Mann model by
$$\min_{a_{yz},L_{\mathrm{c}}}\ \sum_{n=1}^{N}\left[\frac{1}{k_{1,n}}\left(\gamma_{yz}(\Delta yz,f_{n})-\mathrm{cocoh}_{11}(k_{1,n},\Delta y,\Delta z)\right)^{2}\right],\quad\text{s.t. } k_{1,n}=\frac{2\pi f_{n}}{U_{\mathrm{ref}}}\ \text{and}\ \Delta y=\Delta z=20\,\mathrm{m},\quad(22)$$
where the fitting uses the co-coherence and ignores the quad-coherence. We fit the co-coherence instead of the magnitude-squared coherence because the exponential coherence model (Eqs. 13 and 19) only includes the real co-coherence, whereas the coherence of the Mann model includes both co-coherence and quad-coherence. The medium separation $\Delta y=\Delta z=20$ m has been chosen for the optimization problem. For both optimization equations, the squared error at each discrete frequency or wavenumber is divided by $k_{1,n}$ to ensure equivalent weighting of the optimization function over different frequency or wavenumber ranges. The fitted spectra and $yz$ plane coherence are shown in Fig. 2a and b, and the turbulence parameters are summarized in Table 1.
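The fitting in Eq. (21) can be prototyped with a standard nonlinear least-squares routine. The sketch below is ours (not the authors' code) and assumes the pre-multiplied target spectrum $2F_{ii}(k_{1,n})\,k_{1,n}$ has already been evaluated from the Mann model:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_kaimal_to_premultiplied_target(f, k1, target_premult, U_ref, L0=100.0, sigma0=1.5):
    """Fit the Kaimal length scale and standard deviation so that the
    pre-multiplied Kaimal spectrum S_i(f)*f matches a pre-multiplied target
    spectrum (e.g., 2*F_ii(k1)*k1 from the Mann model), cf. Eq. (21)."""
    def residuals(p):
        L_i, sigma_i = p
        S = 4.0 * sigma_i**2 * (L_i / U_ref) / (1.0 + 6.0 * f * L_i / U_ref)**(5.0 / 3.0)
        # the 1/k1 weighting of Eq. (21), applied as 1/sqrt(k1) on each residual
        return (S * f - target_premult) / np.sqrt(k1)
    sol = least_squares(residuals, x0=[L0, sigma0],
                        bounds=([1.0, 0.01], [1000.0, 20.0]))
    return sol.x  # fitted (L_i, sigma_i)
```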
Except for the spectra and yz plane coherence, Guo et al. (2022a) showed that the longitudinal coherence is related to the atmospheric stability based on measurement. In their study, a smaller
intercept was found for a more stable class. Also, Simley and Pao (2015) studied the turbulence evolution under different stability classes using LES, and the smaller intercept was also observed in
stable atmospheric conditions (as shown in Fig. 2). In order to compare the longitudinal coherence under different atmospheric stability classes, we use three values, $\gamma=200$, $400$, and $600$ s, to calculate the longitudinal coherence based on the space–time tensor $\mathbf{\Theta}$. The reason for choosing these values for $\gamma$ is that they result in coherence close to observations in the existing literature, as discussed at the end of this section. Afterward, we fit the exponential coherence (Eq. 19) using the following optimization process:
$$\min_{a_{x},b_{x}}\ \sum_{n=1}^{N}\left[\frac{1}{f_{n}}\left(\gamma_{x}(\Delta x,f_{n})-\mathrm{coh}_{11}(k_{1,n},\Delta x)\right)^{2}\right],\quad\text{s.t. } \Delta x=100\,\mathrm{m}\ \text{and}\ k_{1,n}=\frac{2\pi f_{n}}{U_{\mathrm{ref}}}.\quad(23)$$
Here we chose to fit the separation at $\Delta x=100$ m, which is the medium separation for a commercial lidar measuring in front of the turbine (Simley et al., 2018; Guo et al., 2022b). The fitted coherence
is shown in Fig. 2c. The fitted exponential coherence parameters a[x] and b[x] are summarized in Table 2, and they show similar trend as the observation by Simley and Pao (2015) using LES. For an
unstable atmosphere, a[x] is generally larger, and b[x] is in a very small order close to zero. In the neutral condition, a[x] lies in a medium value, and b[x] is also a small order close to zero. As
for the stable case, a[x] is the smallest, meaning a weaker coherence decay, while b[x] is larger, resulting in a smaller intercept.
Based on the study by Guo et al. (2022a), γ was found to be 430 and 207 s for the neutral and stable stability classes, respectively, while the value of γ in the unstable scenario has not been derived due to a lack of measurement samples. Chen et al. (2021) performed a probability study of the coherence parameter a[x] based on lidar measurements, and it was found to lie between one and two with the highest probability. According to the analysis by Simley and Pao (2015), a[x] tends to be largest in an unstable condition compared to a neutral or stable condition. Based on the previous observations by these authors, and since γ=200 or 400 s gives unrealistically large values of a[x] in the unstable atmosphere that are unlikely to occur, we decided to choose γ=600 s for the unstable condition, which results in a[x]=4.1. γ=400 s and γ=200 s are used for the neutral and stable stability classes, respectively. In addition, it is worth mentioning that we do not
consider the dependence of the turbulence evolution parameters on TI level. The selection of turbulence evolution parameters is based on relevant studies, and typical values are chosen. As studied by
Simley and Pao (2015), the TI values can be different for the same atmospheric stability, and the evolution parameters show some dependence on the TI values. In the future, a joint probabilistic
study on the turbulence spectral parameters, TI levels, and evolution parameters is necessary for defining more realistic simulation scenarios for LAC.
3 Correlation between lidars and turbines
In this section, the definitions of REWS and the REWS estimated by lidar will first be discussed. Then the auto-spectra of these two signals and the cross-spectrum between them will be presented. In
the end, we summarize the wind preview quality of the investigated four-beam lidar for the NREL 5.0 MW reference turbine under different atmospheric stability classes.
3.1 Rotor-effective wind speed
As discussed by Schlipf (2015), one way of defining the rotor-effective wind speed for control purposes is the mean longitudinal component $u$ over the turbine rotor-swept area:
$$u_{\mathrm{RR}}(x)=\frac{1}{\pi R^{2}}\int_{D} u(\mathbf{x})\,\mathrm{d}y\,\mathrm{d}z,\quad(24)$$
where $D$ denotes the integration over the rotor area defined by the rotor radius $R$.
For the Mann model, as derived by Held and Mann (2019), the auto-spectrum of the REWS $u_{\mathrm{RR}}$ can be calculated using the spectral tensor by
$$S_{\mathrm{RR}}(k_{1})=\int_{-\infty}^{\infty} \Phi_{11}(\mathbf{k})\,\frac{4J_{1}^{2}(\kappa R)}{\kappa^{2}R^{2}}\,\mathrm{d}\mathbf{k}_{\perp},\quad(25)$$
with $\kappa=\sqrt{k_{2}^{2}+k_{3}^{2}}$ and $J_{1}$ the Bessel function of the first kind. The detailed derivation of this auto-spectrum can be found in the works by Held and Mann (2019) and Mirzaei and Mann (2016).
As for the Kaimal model, the spectrum is derived by Schlipf et al. (2013a) and Schlipf (2015), i.e.,
$$S_{\mathrm{RR}}(f)=\frac{S_{1}(f)}{n_{\mathrm{R}}^{2}}\sum_{i=1}^{n_{\mathrm{R}}}\sum_{j=1}^{n_{\mathrm{R}}}\gamma_{yz}(\Delta yz_{ij},f),\quad(26)$$
where $\Delta yz_{ij}$ is the separation distance between points $i$ and $j$ in the same $yz$ plane, and $n_{\mathrm{R}}$ is the total number of points in the rotor area. The detailed derivation of this auto-spectrum can be found in Schlipf (2015).
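As an illustration of Eq. (26) only (not the authors' implementation), the double sum over rotor points can be evaluated on a coarse grid as follows; the grid size and function name are our choices:

```python
import numpy as np

def rotor_effective_spectrum(f, R, sigma_u, L_u, a_yz, L_c, U_ref, n_grid=9):
    """Kaimal-based auto-spectrum of the rotor-effective wind speed, Eq. (26):
    the point spectrum S1(f) is reduced by averaging the yz-plane coherence
    over all pairs of points on a grid covering the rotor disc."""
    # point auto-spectrum (Eq. 12)
    S1 = 4.0 * sigma_u**2 * (L_u / U_ref) / (1.0 + 6.0 * f * L_u / U_ref)**(5.0 / 3.0)
    # regular grid of points inside the rotor disc
    y, z = np.meshgrid(np.linspace(-R, R, n_grid), np.linspace(-R, R, n_grid))
    inside = y**2 + z**2 <= R**2
    pts = np.column_stack([y[inside], z[inside]])
    n_R = len(pts)
    # pairwise separations and coherence sum (O(n_R^2); keep n_grid small)
    coh_sum = np.zeros_like(f, dtype=float)
    for i in range(n_R):
        dyz = np.hypot(pts[:, 0] - pts[i, 0], pts[:, 1] - pts[i, 1])
        for d in dyz:
            coh_sum += np.exp(-a_yz * d * np.sqrt((f / U_ref)**2 + (0.12 / L_c)**2))
    return S1 * coh_sum / n_R**2
```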
3.2 Lidar-estimated rotor-effective wind speed
Lidar utilizes the Doppler spectrum contributed by the aerosol backscatter within the probe volume to determine the wind measurement, so it is necessary to include the probe volume averaging effect. Mann et al. (2009) show that the lidar LOS measurement at a focus position $\mathbf{x}=(x,y,z)$ can be approximated by
$$v_{\mathrm{los}}(\mathbf{x})=\int_{-\infty}^{\infty}\phi(r)\,\mathbf{n}\cdot\mathbf{u}(\mathbf{x}+r\mathbf{n})\,\mathrm{d}r,\quad(27)$$
where $\mathbf{n}=(n_{1},n_{2},n_{3})=(\cos\beta\cos\varphi,\cos\beta\sin\varphi,\sin\beta)$ is a unit vector aligned with the direction of a lidar beam, which can be calculated from the azimuth angle $\varphi$ and elevation angle $\beta$ (see Fig. 3 for the definition). $r$ is the displacement along the lidar beam direction from the focus position $\mathbf{x}$, and $\phi(r)$ is the weighting function due to the lidar volume averaging. In this work, a typical pulsed lidar is considered, whose weighting function is modeled by a Gaussian-shaped function (Schlipf, 2015):
$$\phi(r)=\frac{1}{\sigma_{\mathrm{L}}\sqrt{2\pi}}\exp\!\left(-\frac{r^{2}}{2\sigma_{\mathrm{L}}^{2}}\right)\quad\mathrm{with}\quad\sigma_{\mathrm{L}}=\frac{W_{\mathrm{L}}}{2\sqrt{2\ln 2}},\quad(28)$$
where the full width at half maximum $W_{\mathrm{L}}$ is about 30 m.
Since the lidar only provides the wind speed in the LOS direction, the $u$ component needs to be reconstructed from the LOS speed. A simple algorithm is to assume zero $v$ and $w$ components, because they usually contribute much less than the $u$ component to the LOS speed. In fact, this is true if the misalignment of the lidar beam with the longitudinal direction is small. Based on this assumption, the lidar-estimated rotor-effective wind speed is often obtained by (see Schlipf, 2015)
$$u_{\mathrm{LL}}(t)=\sum_{i=1}^{n_{\mathrm{L}}}\frac{v_{\mathrm{los},i}(t)}{n_{\mathrm{L}}\cos\beta_{i}\cos\varphi_{i}},\quad(29)$$
where $n_{\mathrm{L}}$ is the total number of lidar measurement positions, $v_{\mathrm{los},i}$ denotes the LOS speed at the $i$th lidar measurement position, $\varphi_{i}$ is the azimuth angle of the $i$th measured position, and $\beta_{i}$ is the elevation angle of the $i$th measured position.
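Equation (29) reduces to a few lines of code. The following sketch (ours) assumes the LOS speeds of one full scan and the beam angles in degrees are available:

```python
import numpy as np

def lidar_rews(v_los, azimuth_deg, elevation_deg):
    """Lidar-estimated rotor-effective wind speed, Eq. (29): each LOS speed is
    projected back onto the longitudinal direction (v and w assumed zero) and
    the per-beam estimates are averaged."""
    v_los = np.asarray(v_los, dtype=float)
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    u_per_beam = v_los / (np.cos(el) * np.cos(az))
    return u_per_beam.mean()
```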
Guo et al. (2022a) suggested calculating the auto-spectrum of the lidar-estimated REWS ($u_{\mathrm{LL}}$) from the Mann-model-based space–time tensor by
$$S_{\mathrm{LL}}(k_{1})=\sum_{i,j=1}^{n_{\mathrm{L}}}\sum_{l,m=1}^{3}\frac{1}{n_{\mathrm{L}}^{2}\cos\beta_{i}\cos\varphi_{i}\cos\beta_{j}\cos\varphi_{j}}\int n_{il}\,n_{jm}\,\Theta_{lm}(\mathbf{k},\Delta t_{ij})\exp\!\left(\mathrm{i}\,\mathbf{k}\cdot(\mathbf{x}_{i}-\mathbf{x}_{j})\right)\hat{\phi}(\mathbf{k}\cdot\mathbf{n}_{i})\,\hat{\phi}(\mathbf{k}\cdot\mathbf{n}_{j})\,\mathrm{d}\mathbf{k}_{\perp},\quad(30)$$
where $\mathbf{x}_{i}$ and $\mathbf{n}_{i}$ denote the focus position vector and the unit vector of the $i$th lidar measurement, respectively, $n_{il}$ is the $l$th element of the unit vector $\mathbf{n}_{i}$,
$$\hat{\phi}(q)=\int_{-\infty}^{\infty}\phi(r)\exp(-\mathrm{i}qr)\,\mathrm{d}r=\exp\!\left(-q^{2}\frac{\sigma_{\mathrm{L}}^{2}}{2}\right)\quad(31)$$
is the Fourier transform (non-unitary convention) of the lidar weighting function, and $\Delta t_{ij}=(x_{i}-x_{j})/U_{\mathrm{ref}}$ is the time required for the turbulence to propagate from position $x_{i}$ to $x_{j}$. A more detailed derivation of Eq. (30) can be found in the works by Mirzaei and Mann (2016), Held and Mann (2019), and Guo et al. (2022a). In practical lidar data processing for wind turbine control, as discussed in Sect. 4.2, the lidar measurement data from the different measurement range gates are phase shifted to the nearest used measurement range gate using Taylor (1938)'s frozen hypothesis. This means that $v_{\mathrm{los},i}(t)$ in Eq. (29) should be shifted in time according to the mean wind speed and the longitudinal separation, i.e.,
$$u_{\mathrm{LL}}(t)=\sum_{i=1}^{n_{\mathrm{L}}}\frac{1}{n_{\mathrm{L}}\cos\beta_{i}\cos\varphi_{i}}\,v_{\mathrm{los},i}\!\left(t-\frac{|x_{i}-x_{\mathrm{nrg}}|}{U_{\mathrm{ref}}}\right),\quad(32)$$
where $x_{\mathrm{nrg}}$ is the longitudinal position of the used measurement range gate nearest to the rotor plane. As a consequence, the phase shifts contributed by the longitudinal separations ($x_{i}-x_{j}$) in Eq. (30) are always zero.
For the Kaimal model, the auto-spectrum can be derived based on the Fourier transform:
$$S_{\mathrm{LL}}(f)=\mathcal{F}\{u_{\mathrm{LL}}\}\,\mathcal{F}^{*}\{u_{\mathrm{LL}}\}=\sum_{i,j=1}^{n_{\mathrm{L}}}\frac{1}{n_{\mathrm{L}}^{2}\cos\beta_{i}\cos\varphi_{i}\cos\beta_{j}\cos\varphi_{j}}\,\mathcal{F}\{v_{\mathrm{los},i}\}\,\mathcal{F}^{*}\{v_{\mathrm{los},j}\},\quad(33)$$
where $\mathcal{F}\{\}$ denotes the Fourier transform. The Fourier transform of the $i$th LOS speed $v_{\mathrm{los},i}$ is quite lengthy and thus is not expanded here; the detailed expression can be found in the work by Chen et al. (2022).
3.3 Cross-spectrum between rotor and lidar
When turbulence evolution is considered with the Mann model, Guo et al. (2022a) show that the cross-spectrum between the REWS $u_{\mathrm{RR}}$ and the lidar-estimated one $u_{\mathrm{LL}}$ can be calculated using the space–time tensor by
$$S_{\mathrm{RL}}(k_{1})=\sum_{i=1}^{n_{\mathrm{L}}}\sum_{j=1}^{3}\frac{1}{n_{\mathrm{L}}\cos\beta_{i}\cos\varphi_{i}}\int n_{ij}\,\Theta_{j1}(\mathbf{k},\Delta t_{i})\,\hat{\phi}(\mathbf{k}\cdot\mathbf{n}_{i})\exp\!\left(\mathrm{i}\,\mathbf{k}\cdot\mathbf{x}_{i}-\mathrm{i}k_{1}x_{i}\right)\frac{2J_{1}(\kappa R)}{\kappa R}\,\mathrm{d}\mathbf{k}_{\perp},\quad(34)$$
where $\Delta t_{i}$ is the time required for the turbulence field to move from the $i$th lidar measurement position to the rotor plane, which can be approximated by $\Delta t_{i}=|\Delta x_{i}|/U_{\mathrm{ref}}$. Here, $\Delta x_{i}$ is the longitudinal separation between the rotor plane and the $i$th lidar measurement position, $\Delta x_{i}=x_{i}-x_{\mathrm{R}}$, with $x_{\mathrm{R}}$ the rotor plane position on the $x$ axis. For LAC, the lidar measurement data from the different range gates are phase shifted to the rotor plane using Taylor (1938)'s frozen hypothesis; therefore, this assumption is also made when deriving Eq. (34).
Similarly, following Schlipf (2015), the cross-spectrum for the Kaimal model is
$$S_{\mathrm{RL}}(f)=\mathcal{F}\{u_{\mathrm{RR}}\}\,\mathcal{F}^{*}\{u_{\mathrm{LL}}\}=\sum_{i=1}^{n_{\mathrm{R}}}\sum_{j=1}^{n_{\mathrm{L}}}\frac{1}{n_{\mathrm{L}}\,n_{\mathrm{R}}\cos\beta_{j}\cos\varphi_{j}}\,\mathcal{F}\{u_{i}\}\,\mathcal{F}^{*}\{v_{\mathrm{los},j}\},\quad(35)$$
with $u_{i}$ the $i$th longitudinal wind component in the rotor-swept area. See Chen et al. (2022) for a detailed derivation of the Fourier transform of $v_{\mathrm{los},j}$, where the main algorithm loops over the Fourier transforms of all velocity components included in $u_{i}$ and $v_{\mathrm{los},j}$.
3.4 Lidar wind preview and filter design: case analysis
To evaluate the preview quality of the lidar measurement, one can calculate the lidar–rotor coherence by
$$\gamma_{\mathrm{RL}}(f)=\frac{|S_{\mathrm{RL}}(f)|^{2}}{S_{\mathrm{RR}}(f)\,S_{\mathrm{LL}}(f)}.\quad(36)$$
Then, a measurement coherence bandwidth (the wavenumber at which the coherence drops to 0.5, denoted $k_{0.5}$) can be found. Note that $k_{0.5}=2\pi f_{0.5}/U_{\mathrm{ref}}$, where $f_{0.5}$ is the frequency at which the coherence drops to 0.5. $k_{0.5}$ is usually used as the optimization criterion for the LAC-oriented lidar measurement trajectory (Schlipf et al., 2018a).
In this work, we chose the medium-size NREL 5.0 MW reference wind turbine with a rotor diameter of 126 m (Jonkman et al., 2009) and a typical four-beam pulsed lidar trajectory (e.g., WindCube Nacelle
and Molas NL). The lidar trajectory is firstly optimized following the method proposed by Schlipf et al. (2018a) using the space–time tensor-based lidar–rotor coherence γ[RL]. The turbulence
parameters corresponding to the neutral stability in Table 1 are considered in the optimization process. The optimized trajectory parameters of the used lidar are given in Table 3. A front view of
the lidar and turbine geometry is shown in Fig. 3.
With the optimized lidar trajectory, we show the coherence γ[RL] under different stability classes in Fig. 4a. It can be seen that the coherence using the Mann-model-based space–time tensor is
generally better than that using the Kaimal model. For both models, the coherence in neutral and stable stability classes is higher than that in the unstable stability, which can be caused by
stronger turbulence evolution in the unstable situation. The coherence in the unstable case is especially lower using the Kaimal model, which can be caused by the direct product method. Based on the
investigation by Simley (2015) using LES, combining coherence using the direct product can underestimate the overall coherence.
Apart from the coherence, another indicator of how well the lidar predicts the REWS is the following transfer function (Schlipf, 2015; Simley and Pao, 2013):
$$|G_{\mathrm{RL}}(f)|=\frac{|S_{\mathrm{RL}}(f)|}{S_{\mathrm{LL}}(f)}.\quad(37)$$
If a filter is designed to have a gain of $|G_{\mathrm{RL}}(f)|$, it turns out to be an optimal Wiener filter (Simley and Pao, 2013; Wiener, 1964), which produces an estimate of a desired or target signal (here the REWS $u_{\mathrm{RR}}$). The Wiener filter minimizes the mean square error between the target signal and the estimate of that signal. When used for LAC, if the system is modeled as one with two inputs, the REWS and the lidar-estimated REWS, and one output, the rotor speed, the Wiener filter leads to minimal rotor speed variance, as formulated by Simley and Pao (2013). At a certain frequency, a larger gain means that less information needs to be filtered out before the signal is used; it thus indicates how much of the information measured by the lidar is usable for feedforward control.
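As a rough sketch (ours) of how Eq. (37) can be turned into a scalar filter-design quantity, one can evaluate the gain and locate the frequency at which it has dropped 3 dB below its low-frequency value; the sketch assumes the gain decreases roughly monotonically over the frequency grid:

```python
import numpy as np

def wiener_gain(S_RL, S_LL):
    """Optimal filter gain |G_RL(f)| = |S_RL(f)| / S_LL(f), Eq. (37)."""
    return np.abs(S_RL) / np.asarray(S_LL, dtype=float)

def minus_3db_frequency(f, gain):
    """First frequency at which the gain drops 3 dB (a factor 1/sqrt(2))
    below its low-frequency value; falls back to the last frequency if
    no crossing is found."""
    target = gain[0] / np.sqrt(2.0)
    below = np.nonzero(gain <= target)[0]
    return f[below[0]] if below.size else f[-1]
```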
The transfer functions under the three investigated stability classes are shown in Fig. 4b. The transfer function gains are similar in the three stability classes for the space–time tensor-derived
results. As for the results by the Kaimal model, the transfer function gain is lower in unstable stability but similar in neutral and stable stability classes.
From the turbulence spectral model, which represents the mean spectral properties, we can obtain the expected Wiener transfer function gain. However, in real operation, the Wiener filter design is more complicated and requires a higher-order filter. In contrast, a linear filter that has a similar damping characteristic to the Wiener filter can provide a similar filtering effect. The linear filter is usually designed to have its cutoff frequency at the −3 dB point of the Wiener filter (see Schlipf, 2015, and Simley et al., 2018). The cutoff frequencies as a function of mean wind speed are calculated by fitting $|G_{\mathrm{RL}}|$ and are shown in Fig. 5. Note that the TI value is also adjusted with the mean wind speed according to the IEC 61400-1:2019 (2019) standard. Firstly, both turbulence models indicate that the cutoff frequency depends linearly on the mean wind speed. Therefore, the cutoff frequency of the filter can be scheduled based on this linearity. The cutoff frequencies by the Mann-model-based space–time tensor are generally larger than those by the Kaimal model. For the same turbulence model, the resulting cutoff frequency does not change significantly with the analyzed turbulence stability conditions. The largest difference appears at the highest mean wind speed, 24 m s^−1, where the difference in cutoff frequency between the unstable and stable conditions is about 0.02 Hz. For lower mean wind speeds (≤ 18 m s^−1), the turbulence parameters of the different atmospheric stability classes do not influence the cutoff frequency very much, and the difference is smaller than 0.01 Hz. This also indicates that, for mean wind speeds ≤ 18 m s^−1, the filter design is not very sensitive to changes in the turbulence parameters related to atmospheric stability, and a constant filter design is robust. In the rest of this work, we will use the constant cutoff frequency derived from neutral stability for both the Mann-model-based and the Kaimal-model-based simulations. For example, 0.0490 and 0.0449 Hz will be used, respectively, for the Mann-model-based and the Kaimal-model-based simulations with a mean wind speed of 16 m s^−1. However, for mean wind speeds above 20 m s^−1, using the cutoff frequency derived from neutral stability is noticeably biased with respect to the cutoff frequency derived for unstable conditions.
The impact of this non-ideal filtering should be analyzed further in future works.
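A minimal sketch of the two steps described above is given below: locating the −3dB cutoff frequency of a gain curve and fitting the linear cutoff-frequency scheduling over mean wind speed. The numerical cutoff values are placeholders, not the values in Fig. 5, and the function and variable names are assumptions.
import numpy as np

def cutoff_minus_3db(f, gain):
    # frequency at which the gain first falls 3 dB below its low-frequency value
    g_db = 20.0 * np.log10(gain / gain[1])          # reference: lowest non-zero frequency bin
    i = np.where(g_db <= -3.0)[0][0]
    f_lo, f_hi, g_lo, g_hi = f[i - 1], f[i], g_db[i - 1], g_db[i]
    return f_lo + (-3.0 - g_lo) * (f_hi - f_lo) / (g_hi - g_lo)   # linear interpolation

U = np.array([12.0, 14.0, 16.0, 18.0, 20.0, 22.0, 24.0])            # mean wind speeds (m/s)
fc = np.array([0.035, 0.040, 0.049, 0.055, 0.061, 0.068, 0.074])    # placeholder cutoffs (Hz)
slope, intercept = np.polyfit(U, fc, 1)   # linear scheduling fc(U) = slope*U + intercept
print(f"fc(U) = {slope:.4f}*U + {intercept:.4f} Hz")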
Apart from the case that all measurement gates (see the caption of Fig. 5) are considered, another case, where nine lidar measurement gates are considered, is also shown in Fig. 5. It can be clearly
seen that the cutoff frequencies are only slightly reduced when the first measurement gate is ignored. The reason for considering nine measurement gates is that the leading time of the
lidar-estimated REWS needs to be larger than the time delays caused by filtering, by time-averaging over the full lidar scan, and by the pitch actuator. The leading time of the first measurement gate
can be insufficient for very high wind speeds, and it must then be ignored. A more detailed discussion of the leading time and time delay is given in Sect. 4.4.
4Lidar-assisted controller design
In this section, we introduce the lidar-assisted turbine controller theory and its integration into OpenFAST aeroelastic simulation.
4.1Data exchange framework
To configure LAC in the OpenFAST aeroelastic simulation, we chose to use the Bladed-style interface (DNV-GL, 2016). The interface is responsible for exchanging variables between the OpenFAST
executable and the external controllers compiled as a dynamic link library (DLL). To make each controller as modular as possible, we programmed an open-source main DLL (written in FORTRAN), namely
the “wrapper DLL”. The main function of the wrapper DLL is to call the sub-DLLs in a specified sequence. Note that all the sub-DLLs work based on the same variable exchange pattern specified by the Bladed-style interface. This means each sub-DLL can also be called by OpenFAST independently and directly, or several sub-DLLs can be called together by the wrapper DLL. An overview of the LAC and OpenFAST interface is shown in Fig. 6. Three sub-DLLs are called by the wrapper DLL following the sequence from top to bottom in the figure. The source code of a baseline version of these DLLs has
been made openly available (see “Code availability”).
4.2Lidar data processing
As mentioned before, the lidar measurement data need to be processed before they can be used for control. The first sub-DLL is the lidar data processing (LDP) which calculates the lidar-estimated
REWS from the lidar LOS speed.
In reality, the lidar usually does not measure all beam directions simultaneously. Instead, it sequentially measures from one direction to the next direction. This sequential measurement property is
later simulated using the lidar module in the aeroelastic simulation (see Sect. 5.1.1). Therefore, a time-averaging window needs to be applied to estimate the REWS from a full LOS scan. For the
four-beam lidar used in this work, the averaging window is chosen to be 1s, which is the time required to finish a full scan by four beams. To apply the averaging window, the LDP module also needs
to record the leading time of the successful measurement. The leading time can be approximated by $\Delta x_i / U_{\mathrm{ref}}$. When estimating the REWS, only the LOS measurements
whose leading times are within the time-averaging window will be chosen, and then Eq. (32) is applied to estimate the REWS. Besides, the blade blockage effect is considered in the simulation, and
this phenomenon is included in the updated OpenFAST lidar module (Guo et al., 2022b). Due to the blade blockage, the LOS measurements for a certain lidar beam are not always available. Therefore, the
LDP module estimates the REWS using only the available LOS measurements.
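The following sketch illustrates the LDP logic described in this section. It is not the LDP DLL itself: the variable names, the projection of the LOS speed by the cosine of the beam angle, and the handling of the averaging window are simplifying assumptions, while the actual estimator follows Eq. (32).
import numpy as np

def estimate_rews(v_los, available, cos_angle, t_meas, t_now, window=1.0):
    # v_los     : buffered LOS speeds from the most recent beam measurements
    # available : False where a blade blocked the beam
    # cos_angle : cosine of the angle between each beam and the rotor axis
    # t_meas    : time stamp of each buffered measurement
    mask = available & (t_now - t_meas <= window)    # keep only the last full 1 s scan
    if not np.any(mask):
        return np.nan                                 # no usable measurement at this step
    # project each usable LOS speed onto the longitudinal direction and average
    return np.mean(v_los[mask] / cos_angle[mask])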
4.3Feedback-only controller
A typical variable-speed wind turbine is controlled by a blade pitch and generator torque controller. A baseline collective feedback blade pitch control is achieved by a proportional-integral (PI)
controller (Jonkman et al., 2009):
$$\theta_{\mathrm{FB}} = k_{\mathrm{p}}\left(\Omega_{\mathrm{gf}} - \Omega_{\mathrm{g,ref}}\right) + \frac{k_{\mathrm{p}}}{T_{\mathrm{I}}\,s}\left(\Omega_{\mathrm{gf}} - \Omega_{\mathrm{g,ref}}\right), \tag{38}$$
where θ[FB] is the feedback pitch reference value, Ω[g,ref] is the generator speed control reference, Ω[gf] is the measured and low-pass-filtered generator speed, k[p] is the proportional gain, T[I]
is the integrator time constant, and s is the complex frequency. The pitch controller is only active in the above-rated wind speed, and k[p] and T[I] are scheduled to have a constant closed-loop
behavior through gain scheduling (Abbas et al., 2022). For the NREL 5.0MW wind turbine, the desired damping and angular frequency are tuned to be 0.7 and 0.5rads^−1, respectively.
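A hedged, discrete-time sketch of the PI pitch controller in Eq. (38) is given below. The class and variable names, the anti-windup strategy, and the pitch limits are assumptions; the gains k[p] and T[I] would be updated each step by the gain-scheduling law of Abbas et al. (2022).
class PIPitchController:
    def __init__(self, kp, Ti, dt, theta_min=0.0, theta_max=1.57):
        self.kp, self.Ti, self.dt = kp, Ti, dt
        self.theta_min, self.theta_max = theta_min, theta_max
        self.integrator = 0.0                       # integral term of the pitch angle (rad)

    def step(self, omega_gf, omega_ref, kp=None, Ti=None):
        if kp is not None: self.kp = kp             # gain scheduling: gains may change each call
        if Ti is not None: self.Ti = Ti
        err = omega_gf - omega_ref                  # filtered generator speed error
        self.integrator += self.kp / self.Ti * err * self.dt
        # simple anti-windup: clamp the integral state to the pitch limits
        self.integrator = min(max(self.integrator, self.theta_min), self.theta_max)
        theta_fb = self.kp * err + self.integrator  # Eq. (38) in discrete time
        return min(max(theta_fb, self.theta_min), self.theta_max)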
For better code accessibility, the recently developed open-source reference controller, ROSCO (v2.6.0) by Abbas et al. (2022), is used as the reference FB-only controller. ROSCO uses a PI controller
for the pitch control in the above-rated wind speed operation. In terms of generator torque control in the above-rated operation, we have chosen the option of constant power mode in our simulations,
with which the generator torque is set according to the filtered generator speed to keep the electrical power close to its rated value. The generator torque (M[g]) is set according to the
low-pass-filtered generator speed, the rated electrical power (P[rated]), and the generator efficiency (η) by $M_{\mathrm{g}} = P_{\mathrm{rated}}/(\eta\,\Omega_{\mathrm{gf}})$. See the work by Abbas et al. (2022) for a more detailed description of the reference controller. We have modified the ROSCO source code to allow it to accept the feedforward pitch rate
signal. The feedforward pitch rate (see next section) is added before the integrator of the PI controller.
4.4Combined feedforward and feedback controller
The collective feedforward pitch control proposed by Schlipf (2015) is used in this work where the feedforward pitch reference value is obtained by
$$\theta_{\mathrm{FF}} = \theta_{\mathrm{ss}}\left(u_{\mathrm{LLf}}\right), \tag{39}$$
with u[LLf] the filtered REWS estimated by lidar and θ[ss] the steady-state pitch angle as a function of the steady-state wind speed u[ss]. The steady-state pitch curve can usually be obtained by
running aeroelastic simulations using uniform and constant wind speed. Figure 7 shows the general control diagram with the lidar-assisted pitch feedforward signal θ[FF]. In practice, the time derivative of the pitch feedforward signal is fed into the integral block of the feedback PI controller. This gives the overall collective pitch control reference as
$$\theta_{\mathrm{ref}} = \theta_{\mathrm{FB}} + \frac{1}{s}\,\dot{\theta}_{\mathrm{FF}}. \tag{40}$$
A feedforward pitch (FFP) sub-DLL is responsible for filtering the lidar-estimated REWS and providing the feedforward pitch rate at the correct time. A first-order low-pass filter with the transfer function
$$G_{\mathrm{LPF}}(s) = \frac{2\pi f_{\mathrm{c}}}{s + 2\pi f_{\mathrm{c}}}, \tag{41}$$
where f[c] is the cutoff frequency discussed in Sect. 3.4, is applied to filter the u[LL] signal. Based on the filter cutoff frequency, the time delay introduced by the low-pass filtering of
lidar-estimated REWS (T[filter]) can be estimated (see Schlipf, 2015, for detailed calculation). The pitch feedforward signal is then sent to ROSCO after accounting for the pitch actuator delay (T
[pitch]), the filter delay, and the half of the time-averaging window (T[window]). That is, the signal recorded in the timing buffer that has a time close to the buffer time is activated. The buffer
time is defined as
$$T_{\mathrm{buffer}} = T_{\mathrm{lead}} - T_{\mathrm{filter}} - T_{\mathrm{pitch}} - \frac{1}{2}T_{\mathrm{window}}. \tag{42}$$
Here, T[window]=1s is the time-averaging window equivalent to one full scan time of the lidar. It is multiplied by 1/2 in Eq. (42) because of the phase delay property of the
time-averaging filter (Lee et al., 2018). The actuator delay is chosen to be T[pitch]=0.22s based on the phase delay of the pitch actuator. The actuator is modeled as a second-order system with a
natural frequency of 1Hz and a damping ratio of 0.7 (Dunne et al., 2012). Figure 8 shows the leading time (T[lead]) of the first two measurement gates and the required leading time ($T_{\mathrm{filter}} + T_{\mathrm{pitch}} + \frac{1}{2}T_{\mathrm{window}}$). For the mean wind speed range where the leading time of gate 1 is lower than the required leading time, we only
use the lidar measurement gates from 2 to 10 for estimating the REWS. The leading time of gate 2 is sufficient to provide enough leading time for all the considered mean wind speeds.
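The timing bookkeeping of Eq. (42) can be sketched as follows; the function names are assumptions, with the default values of T_pitch and T_window taken from the text above.
def buffer_time(T_lead, T_filter, T_pitch=0.22, T_window=1.0):
    # time by which the buffered lidar REWS signal should be delayed before use (Eq. 42)
    return T_lead - T_filter - T_pitch - 0.5 * T_window

def usable_gates(leading_times, T_filter, T_pitch=0.22, T_window=1.0):
    # keep only the gates whose preview exceeds the required leading time
    required = T_filter + T_pitch + 0.5 * T_window
    return [i for i, T in enumerate(leading_times) if T >= required]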
Another point for the feedforward pitch command is that it is only activated when the REWS is above 14ms^−1. The reason for setting this threshold value is that the pitch curve has much higher
gradients with respect to wind speed in the range between 12 and 14ms^−1 (Schlipf, 2015), where the turbine thrust is the highest. If the feedforward pitch is activated only depending on the
lidar-estimated REWS, a short interval of wind rise or drop in this range can cause a relatively large pitch rate and change in thrust force. Then the benefits of LAC are offset by the additional
load caused by these pitch actions.
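A hedged sketch of the FFP module logic described in this section is given below: a discrete first-order low-pass filter realizing Eq. (41) (backward-Euler discretization), the 14ms^−1 activation threshold, and the conversion of the static pitch curve of Eq. (39) into a feedforward pitch rate for Eq. (40). All names, the discretization choice, and the rate computation are assumptions, not the FFP DLL itself.
import numpy as np

class FeedforwardPitch:
    def __init__(self, fc, dt, u_activate=14.0):
        a = 2.0 * np.pi * fc * dt                 # backward-Euler form of Eq. (41)
        self.alpha = a / (1.0 + a)
        self.dt = dt
        self.u_activate = u_activate
        self.u_filt = None
        self.theta_prev = None

    def step(self, u_lidar, pitch_curve):
        # pitch_curve: callable mapping steady-state wind speed to steady-state pitch angle
        if self.u_filt is None:
            self.u_filt = u_lidar
        self.u_filt += self.alpha * (u_lidar - self.u_filt)   # low-pass-filtered REWS
        if self.u_filt < self.u_activate:                     # feedforward inactive below 14 m/s
            self.theta_prev = None
            return 0.0
        theta_ff = pitch_curve(self.u_filt)                   # Eq. (39)
        rate = 0.0 if self.theta_prev is None else (theta_ff - self.theta_prev) / self.dt
        self.theta_prev = theta_ff
        return rate               # pitch rate added before the FB integrator, cf. Eq. (40)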
5Simulation, results, and discussion
In this section, we use the open-source aeroelastic simulation tool OpenFAST to further evaluate the benefits of LAC. The simulation results will be presented and discussed.
5.1Simulation environment
5.1.1Lidar simulation
Previously, OpenFAST (v3.0) was modified to integrate a lidar simulation module (Guo et al., 2022b). The lidar simulation module includes several main characteristics of nacelle lidar measurement:
(a) lidar probe volume, (b) turbulence evolution (lidar measures at the upstream wind field), (c) the LOS wind speed affected by the nacelle motion, (d) lidar beam blockage by turbine blade, and (e)
adjustable measurement availability. Based on the study by Guo et al. (2022b), the blade blockage does not have an impact on the lidar measurement coherence for above-rated wind speed operation, but special treatment is needed to process the invalid measurements caused by the blade blockage effect. In this work, an algorithm similar to that discussed by Guo et al. (2022b) is used to process the invalid measurement data. Also, data unavailability caused by low backscatter is not considered. Therefore, unavailable data are caused only by the blade blockage.
5.1.2Stochastic turbulence generation
To include the turbulence evolution for the aeroelastic simulation, four-dimensional stochastic turbulence fields are required. We use the newly developed 4D Mann Turbulence Generator (Guo et al.,
2022a) and evoTurb (Chen et al., 2022) to generate the Mann-model- and Kaimal-model-based 4D turbulence fields, respectively. The turbulence parameters representative for three atmospheric stability
classes are used (see Table 1 in Sect. 2).
For the turbulence field generated by the 4D Mann turbulence generator, since it only contains the fluctuation part of the turbulence, we add the mean field (only for u component) considering a power
law shear profile with a shear exponent of 0.2. Each 4D turbulence field has a size of $4096 \times 11 \times 64 \times 64$ grid points, corresponding to the time and the x, y, and z
directions. The lengths in the y and z directions are both 310m, which is much larger than the rotor size. The reason for choosing this size is to avoid the periodicity of the turbulence field in y
and z directions (Mann, 1998).
For the Kaimal-model-based 4D wind fields, evoTurb is used, which calls TurbSim (Jonkman, 2009) to generate statistically independent 3D turbulence fields and then composites the 4D turbulence with the exponential longitudinal coherence discussed in Sect. 2. Only the coherence of the u component is considered, and the rest of the velocity components are not correlated. Similarly, the mean field (only for the u component) is considered to be a power law shear profile with a shear exponent of 0.2. Each turbulence field has a size of $4096 \times 11 \times 31 \times 31$ grid points, corresponding to the time and the x, y, and z directions. The lengths in the y and z directions are both 150m, which is enough to simulate the aerodynamics of the 126m rotor of the NREL 5.0MW turbine. Note that the Kaimal-model-based wind fields do not have the issue of periodicity, so the field size is not as large as that of the Mann-model-based fields.
For both types of 4D turbulence fields, the time step is chosen to be 0.5s, and the hub height mean wind speed from 12 to 24ms^−1 with a step of 2ms^−1 is considered. The turbulence parameters
are chosen based on Table 1. However, $\alpha\epsilon^{2/3}$, σ[1], σ[2], and σ[3] are adjusted according to the mean wind speed to reach the TI corresponding to class 1A, as specified in IEC 61400-1:2019 (2019). The positions in the x direction contain both the rotor plane position and the lidar range gate positions (see Table 3). Taylor (1938)'s frozen turbulence hypothesis is applied within the probe volume, which has been shown by Chen et al. (2022) not to influence the lidar measurement spectral properties. For example, the lidar measurement gate at x=50m is calculated using the yz plane wind field at x=50m, which is then shifted with Taylor's frozen turbulence hypothesis to account for the lidar probe volume averaging. The time length of each field is 2048s.
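The turbulence-intensity adjustment mentioned above can be sketched as follows, assuming the normal turbulence model of IEC 61400-1 (σ1 = I_ref(0.75 V_hub + 5.6m/s), with I_ref = 0.16 for class A). How the paper rescales σ2, σ3, and αε^(2/3) exactly is not reproduced here; the proportional scaling below is an assumption.
def iec_sigma1(V_hub, I_ref=0.16, b=5.6):
    # target standard deviation of the u component (m/s), IEC 61400-1 normal turbulence model
    return I_ref * (0.75 * V_hub + b)

def rescale(sigma1_model, sigma2_model, sigma3_model, alpha_eps23, V_hub):
    s1_target = iec_sigma1(V_hub)
    r = s1_target / sigma1_model
    # standard deviations scale with r; the Mann spectra scale linearly with alpha*eps^(2/3),
    # so that parameter is scaled by r**2 to match the target variance
    return s1_target, r * sigma2_model, r * sigma3_model, r**2 * alpha_eps23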
5.1.3Simulation setup
For each stability class, we generate 4D turbulence fields with 12 different random seed numbers. For each turbulent wind field, the OpenFAST simulation is executed with the following configurations:
(a) FB control using ROSCO only and (b) feedforward+feedback (FFFB) control using lidar measurements. All the degrees of freedom for a fixed-bottom turbine except for the yawing are activated. Each
simulation is executed for 31min. For each simulation, we remove the initial 60s time series, which contains the initialization.
5.2Results and discussion
5.2.1Time series
In Fig. 9, we take one simulation (with a mean wind speed of 16ms^−1) using the 4D Mann turbulence generator under the neutral stability condition as an example to show the time series.
Panel (a) compares the REWS estimated by the lidar data processing algorithm and that estimated by the extended Kalman filter (EKF) (Julier and Uhlmann, 2004) implemented in ROSCO. The
lidar-estimated REWS is shifted according to the time buffer by the FFP module so that it does not show any time lag in the plot. The lidar-estimated REWS shows good agreement with that estimated by
the Kalman filter. It can be seen that some additional higher-frequency fluctuations appear in the time series of the ROSCO-based REWS. This can be caused by the fact that ROSCO only uses a model with 1 degree of freedom containing the rotor rotational motion, so all the other structural motions affecting the rotor speed can be “mistakenly” estimated as wind speed.
Panel (b) shows that the rotor speed obviously fluctuates less using FFFB control compared to that using FB control only. Also, the peak values with FFFB control are smaller.
The tower fore–aft bending moment M[yT] is compared in panel (c), where it is generally less fluctuating with the help of LAC. Further, the blade root out-of-plane bending moment (M[y,root]) is shown
by panel (d), in which FFFB slightly reduces the fluctuation compared to FB-only control. The low-speed shaft torques (M[LSS]) are compared in panel (e). Again it is clear that the fluctuation with
FFFB control is a bit lower than that with FB-only control.
In panel (f), we show the pitch action between the two control strategies. The pitch angles in the FFFB control generally lead that by the FB-only control in time, as expected. The pitch angle
trajectories are overall similar between the FFFB and FB-only controls.
Lastly, the generator power is shown in panel (g). Here, we can see that the generator power fluctuates even though the constant power torque control mode is activated. The reason is that ROSCO uses
low-pass-filtered generator speed to calculate the generator torque command by $M_{\mathrm{g}} = P_{\mathrm{rated}}/(\eta\,\Omega_{\mathrm{gf}})$, as mentioned
previously in Sect. 4.3. If we do not consider the fact that the turbine might have a short interval to reach below-rated operation during a wind speed drop, the formula above ensures that the
electrical power is constant if the electrical power is calculated using the filtered generator speed. However, the actual electrical power is determined by the non-filtered generator speed, and the
difference between the filtered and non-filtered generator speeds determines the power fluctuation. Because the difference is mainly the generator speed fluctuations of high frequencies, we can see
that the electrical power contains fluctuations of high frequencies. By comparing FFFB and FB-only controls, it can be seen that reduced low-frequency rotor speed fluctuations are observed in FFFB
control. Because the low-frequency power fluctuation is highly coupled with the rotor speed fluctuation (see panel b), less fluctuating power can be expected from the less low-frequency rotor speed
fluctuation in FFFB control.
5.2.2Spectral analysis
We estimate the spectra from the collected time series using the Welch (1967) method. The spectra are averaged over different samples. Each sample is the aeroelastic simulation result produced by a
turbulence field generated by a specific random seed number.
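A minimal sketch of this seed-averaged Welch estimation is shown below; the channel data, sampling frequency, and segment length are placeholders.
import numpy as np
from scipy.signal import welch

def mean_spectrum(series_per_seed, fs=2.0, nperseg=1024):
    # series_per_seed: list of 1-D arrays, one OpenFAST output channel per random seed
    spectra = []
    for x in series_per_seed:
        f, Pxx = welch(x - np.mean(x), fs=fs, nperseg=nperseg)
        spectra.append(Pxx)
    return f, np.mean(spectra, axis=0)           # seed-averaged auto-spectrum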
Before comparing the OpenFAST outputs spectra, the spectra of the REWS by the input turbulent wind fields are first compared in Fig. 10. Here, the simulated REWS is calculated by averaging the u
components within the rotor-swept area from the discrete turbulent wind field. We show that the simulated spectra follow the theoretical ones well, which validates the turbulence simulation. In
Sect. 2, the single-point u component spectrum by the two models is fitted. Also, the yz plane coherence is fitted using a single separation. Here, it can be seen that the REWS spectra by the two
models show a similar trend in different atmospheric stability classes. In the unstable case, the REWS spectrum does not reduce a lot compared to a single-point u spectrum, and the spectrum peak
appears at a lower frequency. This is because the turbulence field has more large-scale coherent structures in the unstable atmosphere, as depicted in Fig. 1. In the stable case, everything is
opposite to the unstable case where the REWS spectrum is much lower compared to the single-point u spectrum because of the low-level coherence and the spatial filtering effect of the rotor. In
addition, the neutral stability shows a medium spatial filtering effect, and the spectrum peak is between that of unstable and stable conditions. For each stability class, it can be seen that the
Kaimal-derived REWS generally has a higher spectrum compared to that derived by the Mann model. This can be caused by the fact that the yz plane coherence by the Mann model is more complicated than
the exponential coherence model used in the Kaimal model. Fitting the coherence using one separation is insufficient to represent all possible separations. By comparing the spectra by mean wind
speeds of 16 and 18ms^−1, we observe that the spectral peaks are shifted to a higher-frequency side in all stability classes.
In Figs. 11 and 12, the auto-spectra of some of the most interesting output variables by FB-only control and FFFB control are compared. Figure 11 shows the results using the Mann model, and Fig. 12
shows the results using the Kaimal model.
Panels (a), (b), and (c) compare the rotor speed spectra between FFFB and FB controls under three stability classes. The FFFB control generally reduces the rotor speed spectrum in the frequency range
from 0.01 to 0.1Hz. It can also be seen that the spectra using the Mann model and the Kaimal model show some differences, which can be summarized as higher spectra of the rotor motion by the Kaimal
model than that by the Mann model. However, the spectra estimated from simulated time series using the two models generally have similar shapes.
The comparison of the tower fore–aft bending moment is shown in panels (d), (e), and (f). In neutral and stable cases, the main benefits brought by FFFB control are the reductions in the frequency
range from 0.01–0.2Hz, which is as expected since the lidar–rotor transfer function (Eq. 37) becomes zero close to 0.2Hz. Below 0.01Hz, there are not many differences between FB-only and FFFB
controls, because the tower fore–aft mode is naturally damped well in this frequency range.
Panels (g), (h), and (i) show the blade root out-of-plane moment of blade one. There are slight reductions in the blade root out-of-plane moment in the frequency range from 0.02 to 0.1Hz contributed
by LAC. It can also be seen that the spectrum is mainly composed of the excitation at the 1p (once per rotation) frequency.
The comparison of low-speed shaft torque is shown by the panels (j), (k), and (l). Using FFFB control brings some benefits in the frequency range from 0.01 to 0.1Hz, which is similar to the
reduction range of the rotor speed.
Overall, the relative reductions in the spectra brought by adding FF control mainly lie in the frequency range where the lidar–rotor transfer function is above zero. For very low-frequency ranges, the turbine motions are naturally damped; thus, no obvious benefits are brought by adding the pitch feedforward signal. Based on the spectral analysis, we found significant reductions in rotor speed, some reduction in the tower fore–aft moment, and a slight reduction in the low-speed shaft torque. Also, the reductions are observed with both turbulence models in the three different atmospheric stability classes.
5.2.3Simulation statistics
To further evaluate the benefits of LAC, we calculate the DEL using the rainflow counting method (Matsuishi and Endo, 1968) with 2×10^6 as a reference number of cycles and a lifetime of 20 years.
The Wöhler exponent of 4 is used for the tower fore–aft bending moment and the low-speed shaft torque, and the Wöhler exponent of 10 is used for the blade root out-of-plane bending moment. The
averaged DEL is calculated from the results by different random seed numbers. The overall statistics are compared and shown in Figs. 13 and 14. For rotor speed, pitch rate, and electrical power (P
[el]) signals, the standard deviation of time series of each simulation sample is calculated, and then the mean value is calculated from all samples. We use the standard deviation of pitch rate
(speed) to assess the impact of different control methods on the pitch actuator (also used by Chen and Stol, 2014; Jones et al., 2018), because pitch speed causes damping torque in the pitch gear and
is related to the friction torque of the pitch bearing (Shan, 2017; Stammler et al., 2018).
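A hedged sketch of the DEL calculation described above is given below. The third-party rainflow package is used only for convenience (the paper does not state which implementation it uses), and the extrapolation of the simulated cycles to the 20-year lifetime is an assumption about the bookkeeping.
import rainflow

def damage_equivalent_load(load, t_sim, m, N_ref=2e6, lifetime_years=20.0):
    # load: time series of a bending moment or torque; m: Woehler exponent; t_sim: simulated time (s)
    lifetime_s = lifetime_years * 365.25 * 24 * 3600
    scale = lifetime_s / t_sim                        # extrapolate simulated cycles to the lifetime
    damage_sum = 0.0
    for cycle_range, count in rainflow.count_cycles(load):
        damage_sum += scale * count * cycle_range**m
    return (damage_sum / N_ref) ** (1.0 / m)          # equivalent load at N_ref cycles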
Mann-model-based results
Figure 13 compares the DEL, standard deviation (SD), and energy production (EP) results by the Mann model. The relative reductions (see the figure caption) between FB-only and FFFB controls are
plotted by the grey lines.
There are overall obvious reductions of the tower fore–aft bending moment DEL in all the investigated atmospheric stability classes. The largest reduction is found to be 16.7%, at a mean wind speed of 22ms^−1 under an unstable atmosphere. In the unstable case, it can be seen that the reduction becomes clearer at higher wind speeds. In contrast, for stable stability, the reduction is larger at 16 and 18ms^−1, and it decreases as wind speed increases. For the neutral case, the benefits are greatest close to 18ms^−1. However, at mean wind speeds below 14ms^−1 and in the unstable and neutral cases, the FFFB benefits become marginal. This can be caused by a higher probability of passing through the wind speed range where the feedforward pitch is deactivated, as
discussed in Sect. 4.4.
As for the low-speed shaft torque, the DEL is reduced by more than 4.0% under the unstable case for wind speed above 18ms^−1. In addition, the reduction is about 1.5%–3.3% and 1.4%–2.3% under
neutral and stable cases, respectively.
The DEL of the blade out-of-plane moment is reduced by introducing LAC. More benefits (about 2.7%–6.0%) are found under the unstable case. In neutral stability, the reduction is largest at 20ms^−1, where the value is close to 4.3%, and it drops to 2.5% at higher wind speeds and to 1.3% at lower wind speeds. For the stable atmosphere, the reduction is more obvious (around 3.0%) at
wind speeds between 16 and 20ms^−1.
The SD of rotor speed is found to be reduced significantly using FFFB control. The reductions are more than 20% and up to 40%. Also, it can be seen that the reductions are more significant under
higher mean wind speeds, which is similar in all the three atmosphere stability classes.
Introduction of the FF pitch also generally helps to reduce the standard deviation of the pitch rate (speed) $\dot{\theta}$. Among the three stability classes, the standard deviations of pitch rate are reduced clearly (varying from 2.0% to 6.1%) from 14 to 20ms^−1. However, the reduction vanishes at the mean wind speed of 24ms^−1 for unstable and neutral conditions. In the
stable atmosphere, the pitch rate SD only reduces with mean wind speeds smaller than 20ms^−1.
As for the electrical power SD, it is reduced obviously by about 16% in the unstable case for wind speed above 18ms^−1, by about 17% in the neutral case for wind speed above 16ms^−1, and by
13% in the stable case for wind speed above 14ms^−1.
With the same mean wind speed but under different stability cases, the electricity productions are similar whether LAC is used or not. For all the stability conditions, the electricity productions are
lower at wind speeds below 14ms^−1 because there is a higher probability that the REWS goes below the rated value, and the electrical power does not reach the rated power.
Kaimal-model-based results
The results using the Kaimal model are shown in Fig. 14. Generally, under different stability classes and mean wind speeds, the statistics show a similar trend to the results obtained by the Mann
model. However, the values show some differences.
In terms of tower fore–aft bending moment, the reductions of DEL are from 10.4% to 13.4% with a mean wind speed from 18 to 20ms^−1 under unstable and neutral conditions. In the stable case, the
reduction is close to 11.5%, with the mean wind speed of 16ms^−1, and it drops with the higher mean wind speeds.
The results of low-speed shaft DEL show a similar trend to that using the Mann model. On average, for wind speed above 16ms^−1, the shaft load is reduced by around 2.3%, 1.9%, and 1.7%,
respectively, under the three investigated stability classes.
Generally, the reduction of the blade root load simulated using the Kaimal model is similar to that based on the Mann model. On average, for wind speed above 16ms^−1, the blade root DEL is reduced
by around 4.1%, 3.0%, and 3.0%, respectively, under the three investigated stability classes.
The SD of rotor speed is found to be reduced obviously using FFFB control. The reductions are more than 15% and are up to 30%. The result shows a similar trend to that of the Mann-model-based
result. However, we can also see the reduction is less than that shown by the Mann model.
The pitch actions show high similarity with that simulated using the Mann model. At mean wind speeds from 16 to 20ms^−1, the reductions in pitch rate SD are about 3.0% to 3.5% under unstable and
neutral stability classes, and they become less in other mean wind speeds. For the stable case, the reduction is higher at 16ms^−1, reaching 6.2%, but decreases rapidly as the mean wind speed
increases. For very high mean wind speeds above 22ms^−1, the pitch rate SD is increased using LAC.
Since the variation in electrical power is highly linked with the rotor speed, the reductions in the SD of power lie around 10%, 13%, and 11%, respectively, under the three investigated stability
classes. These values are smaller than those observed using the Mann model.
The electricity production shows very similar results to those simulated by the Mann model. Using LAC has a marginal impact on electricity production.
In general, the benefits of LAC in load reduction by a four-beam lidar are clear. However, we also show that there are some uncertainties and differences when assessing LAC by different IEC
turbulence models. Among the compared turbine loads, LAC has the most significant load reduction effect in the tower base fore–aft bending moment. There are also considerable reductions in speed and
power variations. The electrical power generation is not significantly affected by introducing LAC. The load reductions also differ under the different turbulence parameters represented by the different atmospheric stability classes. For different stability conditions but the same mean wind speed, it can be seen that the LAC benefits for load reduction are overall highest in the
unstable, medium in neutral, and lowest in stable atmospheric classes. The reason could be the difference in turbulence length scales. The turbulence length scale is lower under a stable condition,
which means the peak of the turbulence spectrum appears at a higher wavenumber/frequency (based on the conversion $f = k_1 U_{\mathrm{ref}}/(2\pi)$). The turbine's structural loads are mainly excited by frequencies above 0.1Hz, e.g., the tower natural frequency, the shaft natural frequency (above 1Hz), the 1p frequency, and the 3p (three times per rotation) frequency. If the spectrum has a higher peak frequency, the load will be dominated more by the higher-frequency parts due to the stronger excitation of the natural modes. Then the LAC benefits become less significant because LAC mainly reduces the loads below 0.1Hz (for the lidar and turbine we used). When considering different mean wind speeds, the discussion above indicates that a higher mean wind speed shifts the spectral peak frequency to a higher value; therefore, the LAC benefits become smaller. For the stable condition, the spectral peak frequency is naturally high due to the smaller
turbulence length scale, so it is more sensitive to the changes in the mean wind speed. For unstable and neutral cases, the spectrum peak frequency is naturally lower than that in the stable
condition; thus the LAC benefits do not decrease as fast as that in the stable condition.
This paper evaluates lidar-assisted wind turbine control under various turbulence characteristics using a four-beam lidar and the NREL 5.0MW reference turbine. The main contributions of this work
include (a) summarizing the turbulence spectra and the coherence under various atmosphere stability conditions, (b) analyzing the requirement of filter design for lidar-assisted wind turbine control
under various turbulence characteristics, (c) developing a reference lidar-assisted control package, and (d) evaluating the benefits of lidar-assisted wind turbine control using two turbulence models
through aeroelastic simulations.
Currently, two turbulence models (the Mann model and the Kaimal model) are provided by the IEC standard for turbine aeroelastic simulation. Recent research has made it possible to generate 4D stochastic turbulence fields in aeroelastic simulations for both the Mann model and the Kaimal model, which allows for simulating lidar measurements more realistically and assessing the potential benefits of lidar-assisted control more reasonably. When evaluating the benefits of lidar-assisted control, previous research used the Kaimal model with the fixed turbulence spectral parameters provided by the IEC standard (Schlipf, 2015). Thus, the variations of turbulence characteristics with atmospheric stability have not been considered. In this study, we defined three turbulence cases whose
characteristics are summarized from unstable, neutral, and stable atmospheric stability conditions. The turbulence spectrum and spatial coherence with separations in all directions are derived.
Based on the defined three turbulence cases, we analyzed the coherence between the rotor-effective wind speed and the one estimated by lidar. The NREL 5.0MW reference wind turbine and a four-beam
pulsed lidar system are taken into consideration. It is found that some differences appear between the results of the Mann model and that of the Kaimal model. The coherence using the Mann model is
generally higher in all atmospheric stability classes than the coherence using the Kaimal model. We further analyzed the optimal transfer function, which is important to design a filter that removes
the uncorrelated content in the lidar-estimated rotor-effective wind speed signal for lidar-assisted control. For most of the above-rated wind speeds, the analysis revealed that the difference in the transfer function between different turbulence models or different stability classes is not very significant. This also means a simple linear filter design for lidar-assisted control is sufficient for various atmospheric stability conditions. However, for wind speeds above 20ms^−1, the cutoff frequency of the unstable condition is about 0.02Hz higher than that of neutral stability. The non-ideal filtering caused by using the cutoff frequency derived from neutral stability under unstable conditions should be analyzed further. Also, the conclusions in this paragraph may not apply to turbines of other sizes and lidars with other trajectories. The analysis of coherence and transfer functions can be extended to larger rotor turbines and other lidars with different trajectories.
To further analyze the impact of atmospheric stability for lidar-assisted control, a reference lidar-assisted control package is developed and used in this work. The lidar-assisted control package
includes several DLL modules written in FORTRAN: (1) a wrapper DLL that calls all sub-DLLs sequentially, (2) the lidar data processing DLL that estimates the REWS and records the leading time of the
REWS, (3) a feedforward pitch module that filters the REWS and activates the feedforward pitch rate at the correct time, and (4) a modified reference FB controller (ROSCO) which can receive a feedforward pitch rate signal.
The benefits of lidar-assisted control are evaluated using both the Mann model and the Kaimal-model-based 4D turbulence. The simulations are performed for the mean wind speed level from 12 to 24ms^
−1, using the NREL 5.0MW reference wind turbine and a four-beam lidar system. For the results with the Mann model, using lidar-assisted control reduces the variations in rotor speed, blade pitch
rate, and electrical power significantly. Among the three investigated stability classes and above the mean wind speed of 16ms^−1, the load reductions for the tower bending moment, blade root
bending moment, and low-speed shaft torque are observed to be approximately 3.0% to 16.7%, 1.5% to 6.0%, and 1.7% to 5.0%, respectively. The greatest potential of lidar-assisted control in load
reduction is found in the tower base loads, and the benefits are found to vary by turbulence spectral properties and mean wind speeds. For the results of the Kaimal model, using lidar-assisted
control also clearly reduces the variation in rotor speed, blade pitch rate, and electrical power. The load reduction of the tower bending moment is found in all stability classes for wind speed
above 16ms^−1, and it varies from 3.6% to 13.4%. The load reduction for the blade root bending moment is between 1.6% and 4.5%, and for the low-speed shaft torque it is between 1.6% and 2.5%. Besides, with the help of lidar-assisted control, for both turbulence models, the standard deviation of pitch rate (speed) can be reduced (by up to 6%) for most of the mean wind speed range (below 20ms^−1)
and for all stability classes. The pitch rate standard deviation reduction can bring potential load alleviation for the pitch bearings and gears. Overall, we found the benefits of lidar-assisted
control by the Kaimal model are slightly different from the results obtained using the Mann model. The benefits of lidar-assisted control simulated using the Mann model are slightly better than those
using the Kaimal model, which can be caused by differences in the turbulence spatial coherence between the two models. The lidar preview quality modeled using the Mann model is generally superior to
that modeled using the Kaimal model. For both turbulence models, there are clear trends that the benefits of lidar-assisted control in load reduction are the highest in unstable stability, medium in
neutral stability, and lowest in a stable atmosphere.
With this work, we show that the mean wind speed, the turbulence spectrum, coherence, and the used turbulence models all have certain impacts on the results of evaluating lidar-assisted control. In
this paper, the same turbulence intensity level is assumed for different atmospheric conditions. However, in reality, the turbulence intensity depends on the stability conditions of the atmosphere.
In the future, we recommend assessing the benefits of lidar-assisted control depending on site-specific turbulence characteristics and statistics. Also, it is necessary to consider the uncertainties
in turbulence models when performing load analysis using aeroelastic simulations.
Simulation data of this paper are available upon request from the corresponding author.
FG conceived the concept, performed the simulations, and prepared the paper. DS supported by verifying the simulations, provided general guidance, and reviewed the paper. PWC provided suggestions and
revised and reviewed the paper.
The contact author has declared that none of the authors has any competing interests.
Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This research received financial support from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement no. 858358 (LIKE – Lidar Knowledge Europe).
This research has been supported by the European Commission, Horizon 2020 Framework Programme (LIKE (grant no. 858358)).
This paper was edited by Jan-Willem van Wingerden and reviewed by Eric Simley and one anonymous referee.
Abbas, N. J., Zalkind, D. S., Pao, L., and Wright, A.: A reference open-source controller for fixed and floating offshore wind turbines, Wind Energ. Sci., 7, 53–73, https://doi.org/10.5194/
wes-7-53-2022, 2022.a, b, c, d
Bossanyi, E. A., Kumar, A., and Hugues-Salas, O.: Wind turbine control applications of turbine-mounted LIDAR, J. Phys.-Conf. Ser., 555, 012011, https://doi.org/10.1088/1742-6596/555/1/012011, 2014.a
Chen, Y., Schlipf, D., and Cheng, P. W.: Parameterization of wind evolution using lidar, Wind Energ. Sci., 6, 61–91, https://doi.org/10.5194/wes-6-61-2021, 2021.a, b, c
Chen, Y., Guo, F., Schlipf, D., and Cheng, P. W.: Four-dimensional wind field generation for the aeroelastic simulation of wind turbines with lidars, Wind Energ. Sci., 7, 539–558, https://doi.org/
10.5194/wes-7-539-2022, 2022.a, b, c, d, e, f, g
Chen, Z. and Stol, K.: An assessment of the effectiveness of individual pitch control on upscaled wind turbines, J. Phys.-Conf. Ser., 524, 012045, https://doi.org/10.1088/1742-6596/524/1/012045,
Cheynet, E., Jakobsen, J. B., and Obhrai, C.: Spectral characteristics of surface-layer turbulence in the North Sea, Energ. Proced., 137, 414–427, https://doi.org/10.1016/j.egypro.2017.10.366, 2017.
Davenport, A. G.: The spectrum of horizontal gustiness near the ground in high winds, Q. J. Roy. Meteor. Soc., 87, 194–211, https://doi.org/10.1002/qj.49708737208, 1961.a
Davoust, S. and von Terzi, D.: Analysis of wind coherence in the longitudinal direction using turbine mounted lidar, J. Phys.-Conf. Ser., 753, 072005, https://doi.org/10.1088/1742-6596/753/7/072005,
DNV-GL: Bladed theory manual: version 4.8, Tech. rep., Garrad Hassan & Partners Ltd., Bristol, UK, 2016.a
Dong, L., Lio, W. H., and Simley, E.: On turbulence models and lidar measurements for wind turbine control, Wind Energ. Sci., 6, 1491–1500, https://doi.org/10.5194/wes-6-1491-2021, 2021.a, b, c
Dunne, F., Schlipf, D., Pao, L., Wright, A., Jonkman, B., Kelley, N., and Simley, E.: Comparison of two independent lidar-based pitch control designs, in: 50th AIAA Aerospace Sciences Meeting
Including the New Horizons Forum and Aerospace Exposition, Nashville, Tennessee, January 2012, https://www.osti.gov/biblio/1047948 (last access: 1 February 2023), p. 1151, 2012.a
fengguoFUAS: MSCA-LIKE/OpenFAST3.0_Lidarsim: OpenFAST3.0_Lidarsim (OpenFAST3.0_Lidarsim_v1), Zenodo [code], https://doi.org/10.5281/zenodo.7594971, 2023a.a
fengguoFUAS: MSCA-LIKE/4D-Mann-Turbulence-Generator: 4D-Mann-Turbulence-Generator (4D_MannTurbulence_v1), Zenodo [code], https://doi.org/10.5281/zenodo.7594951, 2023b.a
fengguoFUAS: MSCA-LIKE/Baseline-Lidar-assisted-Controller: Baseline-Lidar-assisted-Controller (Baseline-Lidar-assisted-Controllerv_1), Zenodo [code], https://doi.org/10.5281/zenodo.7594961, 2023c.a
Guo, F., Mann, J., Peña, A., Schlipf, D., and Cheng, P. W.: The space-time structure of turbulence for lidar-assisted wind turbine control, Renew. Energ., 195, 293–310, https://doi.org/10.1016/
j.renene.2022.05.133, 2022a.a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q
Guo, F., Schlipf, D., Zhu, H., Platt, A., Cheng, P. W., and Thomas, F.: Updates on the OpenFAST Lidar Simulator, J. Phys.-Conf. Ser., 2265, 042030, https://doi.org/10.1088/1742-6596/2265/4/042030,
2022b.a, b, c, d, e, f
Held, D. P. and Mann, J.: Lidar estimation of rotor-effective wind speed – an experimental comparison, Wind Energ. Sci., 4, 421–438, https://doi.org/10.5194/wes-4-421-2019, 2019.a, b, c, d, e, f, g,
Hunt, J. C. and Carruthers, D. J.: Rapid distortion theory and the “problems” of turbulence, J. Fluid Mech., 212, 497–532, https://doi.org/10.1017/S0022112090002075, 1990.a
IEC 61400-1:2019: Wind energy generation systems – Part 1: Design requirements, Standard, International Electrotechnical Commission, Geneva, Switzerland, 2019.a, b, c, d, e, f, g, h, i, j, k, l
Jones, B. L., Lio, W., and Rossiter, J.: Overcoming fundamental limitations of wind turbine individual blade pitch control with inflow sensors, Wind Energy, 21, 922–936, https://doi.org/10.1002/
we.2205, 2018.a
Jonkman, B. J.: TurbSim user's guide: Version 1.50, Tech. rep., National Renewable Energy Lab. (NREL), Golden, CO (United States), https://doi.org/10.2172/965520, 2009.a
Jonkman, J. and Buhl, M. L.: FAST User's Guide, Tech. Rep. EL-500-38230, NREL, https://doi.org/10.2172/15020796, 2005.a
Jonkman, J., Butterfield, S., Musial, W., and Scott, G.: Definition of a 5-MW reference wind turbine for offshore system development, Tech. rep., National Renewable Energy Lab. (NREL), Golden, CO
(United States), https://doi.org/10.2172/947422, 2009.a, b, c
Julier, S. J. and Uhlmann, J. K.: Unscented filtering and nonlinear estimation, P. IEEE, 92, 401–422, https://doi.org/10.1109/JPROC.2003.823141, 2004.a
Kaimal, J. C., Wyngaard, J. C., Izumi, Y., and Coté, O. R.: Spectral characteristics of surface-layer turbulence, Q. J. Roy. Meteor. Soc., 98, 563–589, https://doi.org/10.1002/qj.49709841707, 1972.a
, b
Laks, J., Simley, E., and Pao, L.: A spectral model for evaluating the effect of wind evolution on wind turbine preview control, in: 2013 American Control Conference,Washington, DC, USA, 17–19 June
2013, IEEE, 3673–3679, https://doi.org/10.1109/ACC.2013.6580400, 2013.a
Lee, K., Shin, H., and Bak, Y.: Control of Power Electronic Converters and Systems, Academic Press, 392 pp., https://doi.org/10.1016/C2015-0-02427-3, 2018.a
Mann, J.: The spatial structure of neutral atmospheric surface-layer turbulence, J. Fluid Mech., 273, 141–168, https://doi.org/10.1017/S0022112094001886, 1994.a, b, c, d, e, f, g, h
Mann, J.: Wind field simulation, Probabilist. Eng. Mech., 13, 269–282, https://doi.org/10.1016/S0266-8920(97)00036-2, 1998.a, b, c
Mann, J., Cariou, J.-P. C., Parmentier, R. M., Wagner, R., Lindelöw, P., Sjöholm, M., and Enevoldsen, K.: Comparison of 3D turbulence measurements using three staring wind lidars and a sonic
anemometer, Meteorol. Z., 18, 135–140, https://doi.org/10.1127/0941-2948/2009/0370, 2009.a
Matsuishi, M. and Endo, T.: Fatigue of metals subjected to varying stress, Japan Society of Mechanical Engineers, Fukuoka, Japan, 68, 37–40, 1968.a
Mirzaei, M. and Mann, J.: Lidar configurations for wind turbine control, J. Phys.-Conf. Ser., 753, 032019, https://doi.org/10.1088/1742-6596/753/3/032019, 2016.a, b, c
NREL: OpenFAST Documentation, Tech. Rep. Release v3.3.0, National Renewable Energy Laboratory, https://openfast.readthedocs.io/en/main/ (last access: 1 January 2023), 2022.a
Nybø, A., Nielsen, F. G., Reuder, J., Churchfield, M. J., and Godvik, M.: Evaluation of different wind fields for the investigation of the dynamic response of offshore wind turbines, Wind Energy, 23,
1810–1830, https://doi.org/10.1002/we.2518, 2020.a
Peña, A., Mann, J., and Dimitrov, N.: Turbulence characterization from a forward-looking nacelle lidar, Wind Energ. Sci., 2, 133–152, https://doi.org/10.5194/wes-2-133-2017, 2017.a, b
Peña, A.: Østerild: A natural laboratory for atmospheric turbulence, J. Renew. Sustain. Ener., 11, 063302, https://doi.org/10.1063/1.5121486, 2019.a, b, c, d, e
Peña, A., Hasager, C. B., Lange, J., Anger, J., Badger, M., and Bingöl, F.: Remote Sensing for Wind Energy, Tech. Rep. DTU Wind Energy-E-Report-0029(EN), DTU Wind Energy, Roskilde, Denmark, https://
orbit.dtu.dk/files/55501125/Remote_Sensing_for_Wind_Energy.pdf (last access: 1 February 2023), 2013.a
Schlipf, D.: Lidar-Assisted Control Concepts for Wind Turbines, Dissertation, University of Stuttgart, https://doi.org/10.18419/opus-8796, 2015.a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r,
Schlipf, D., Cheng, P. W., and Mann, J.: Model of the Correlation between Lidar Systems and Wind Turbines for Lidar-Assisted Control, J. Atmos. Ocean. Tech., 30, 2233–2240, https://doi.org/10.1175/
JTECH-D-13-00077.1, 2013a.a, b, c
Schlipf, D., Schlipf, D. J., and Kühn, M.: Nonlinear model predictive control of wind turbines using LIDAR, Wind Energy, 16, 1107–1129, https://doi.org/10.1002/we.1533, 2013b.a
Schlipf, D., Fürst, H., Raach, S., and Haizmann, F.: Systems Engineering for Lidar-Assisted Control: A Sequential Approach, J. Phys.-Conf. Ser., 1102, 012014, https://doi.org/10.1088/1742-6596/1102/1
/012014, 2018a.a, b
Schlipf, D., Hille, N., Raach, S., Scholbrock, A., and Simley, E.: IEA Wind Task 32: Best Practices for the Certification of Lidar-Assisted Control Applications, J. Phys.-Conf. Ser., 1102, 012010,
https://doi.org/10.1088/1742-6596/1102/1/012010, 2018b.a
Schlipf, D., Lemmer, F., and Raach, S.: Multi-variable feedforward control for floating wind turbines using lidar, in: The 30th International Ocean and Polar Engineering Conference, OnePetro,
Virtual, 11–16 October 2020, https://doi.org/10.18419/opus-11067, 2020.a
Shan, M.: Load Reducing Control for Wind Turbines: Load Estimation and Higher Level Controller Tuning based on Disturbance Spectra and Linear Models, PhD thesis, Kassel, Universität Kassel,
Fachbereich Elektrotechnik/Informatik, https://kobra.uni-kassel.de/handle/123456789/2017050852519 (last access: 1 February 2023), 2017.a
Simley, E.: Wind Speed Preview Measurement and Estimation for Feedforward Control of Wind Turbines, ProQuest Dissertations & Theses, Ann Arbor, https://www.proquest.com/docview/1719284807 (last
access: 1 February 2023), 2015.a, b
Simley, E. and Pao, L.: Reducing LIDAR wind speed measurement error with optimal filtering, in: 2013 American Control Conference, Washington, DC, USA, 17–19 June 2013, 621–627, https://doi.org/
10.1109/ACC.2013.6579906, 2013. a, b, c
Simley, E. and Pao, L.: A longitudinal spatial coherence model for wind evolution based on large-eddy simulation, in: 2015 American Control Conference (ACC), IEEE, Chicago, IL, USA, 1–3 July 2015,
3708–3714, https://doi.org/10.1109/ACC.2015.7171906, 2015.a, b, c, d, e, f, g
Simley, E., Fürst, H., Haizmann, F., and Schlipf, D.: Optimizing Lidars for Wind Turbine Control Applications – Results from the IEA Wind Task 32 Workshop, Remote Sensing, 10, 863, https://doi.org/
10.3390/rs10060863, 2018.a, b, c, d
Stammler, M., Schwack, F., Bader, N., Reuter, A., and Poll, G.: Friction torque of wind-turbine pitch bearings – comparison of experimental results with available models, Wind Energ. Sci., 3, 97–105,
https://doi.org/10.5194/wes-3-97-2018, 2018.a
Taylor, G. I.: The spectrum of turbulence, P. R. Soc. A, 164, 476–490, https://doi.org/10.1098/rspa.1938.0032, 1938.a, b, c, d, e
von Kármán, T.: Progress in the statistical theory of turbulence, P. Natl. Acad. Sci. USA, 34, 530, https://doi.org/10.1073/pnas.34.11.530, 1948.a
Welch, P.: The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms, IEEE Trans. Audio, 15, 70–73, https://doi.org/
10.1109/TAU.1967.1161901, 1967.a, b
Wiener, N.: Extrapolation, interpolation, and smoothing of stationary time series: with engineering applications, vol. 8, MIT press Cambridge, MA, ISBN 9780262730051, 1964.a
Peter Bürgisser, M. Levent Doğan, Visu Makam, Michael Walter, Avi Wigderson
An action of a group on a vector space partitions the latter into a set of orbits. We consider three natural and useful algorithmic "isomorphism" or "classification" problems, namely, orbit equality,
orbit closure intersection, and orbit closure containment. These capture and relate to a variety of problems within mathematics, physics and computer science, optimization and statistics. These orbit
problems extend the more basic null cone problem, whose algorithmic complexity has seen significant progress in recent years. In this paper, we initiate a study of these problems by focusing on the
actions of commutative groups (namely, tori). We explain how this setting is motivated from questions in algebraic complexity, and is still rich enough to capture interesting combinatorial
algorithmic problems. While the structural theory of commutative actions is well understood, no general efficient algorithms were known for the aforementioned problems. Our main results are
polynomial time algorithms for all three problems. We also show how to efficiently find separating invariants for orbits, and how to compute systems of generating rational invariants for these
actions (in contrast, for polynomial invariants the latter is known to be hard). Our techniques are based on a combination of fundamental results in invariant theory, linear programming, and
algorithmic lattice theory.
Simple linear regression Archives - Dibyendu Deb
Simple linear regression is the most basic form of regression. It is the foundation of statistical and machine learning modelling techniques. All advanced techniques you may use in the future will be based on the ideas and concepts of linear regression. It is the most fundamental skill for exploring your data and having a first look into it.
Simple linear regression is a statistical model which studies the relationship between two variables. These two variables will be such that one of them is dependent on the other. A simple example of
such two variables can be the height and weight of the human body. From our experience, we know that the bodyweight of any person is correlated with his height.
The body weight changes as the height changes. So here body weight and height are the dependent and independent variables, respectively. The task of simple linear regression is to quantify the change that happens in the dependent variable for a unit change in the independent variable.
Mathematical expression
We can express this relationship using a mathematical equation. If we express a person’s height and weight with X and Y respectively, then a simple linear regression equation will be:
Y = aX + b
With this equation, we can estimate the dependent variable corresponding to any known independent variable. Simple linear regression helps us to estimate the coefficients of this equation. Once a is known, we can say that for one unit change in X, there will be a change of exactly a units in Y.
See the figure below: the a in the equation is the slope of the line and b is the intercept (the value of Y where the line crosses the Y-axis).
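As a small illustration (not from the original post), the slope a and intercept b can be obtained directly with NumPy's least-squares fit; the height and weight numbers below are made up.
import numpy as np

height = np.array([1.50, 1.60, 1.70, 1.80, 1.90])    # X, metres
weight = np.array([55.0, 62.0, 70.0, 78.0, 85.0])    # Y, kilograms

a, b = np.polyfit(height, weight, 1)                  # slope and intercept of Y = aX + b
print(f"weight = {a:.1f} * height + ({b:.1f})")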
As the primary focus of this post is to implement simple linear regression through Python, I will not go deeper into the theoretical part. Rather, we will jump straight into its application.
Before we start coding with Python, we should know about the essential libraries we will need. The three basic libraries are NumPy, pandas and Matplotlib. I will discuss these libraries briefly in a bit.
Application of Python for simple linear regression
I know you were waiting for this part. So, here is the main part of this post, i.e. how we can implement simple linear regression using Python. For demonstration purposes I have selected an imaginary database which contains data on total tree biomass above the ground and several other tree physical parameters, such as commercial bole height, diameter, height, first forking height, diameter at breast height and basal area. Tree biomass is the dependent variable here, and it depends on all the other independent variables.
Here is a glimpse of the database:
From this complete dataset, we will use only Tree_height_m and Tree_biomass (kg) for this present demonstration. So, here the dataset name is tree_height and has the look as below:
Python code for simple linear regression
Importing required libraries
Before you start the coding, the first task is to import the required libraries. Give them a short name to refer them easily in the later part of coding.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
These are the most important libraries for data science applications. They contain several classes and functions which make performing data analysis tasks in Python super easy.
For example, NumPy and pandas are the two libraries which encapsulate all the matrix and vector operation functions. They allow users to perform complex matrix operations required for machine learning and artificial intelligence research in a very intuitive manner. The name NumPy actually comes from “Numeric Python”.
Matplotlib, on the other hand, is a full-fledged plotting library that works as an extension of NumPy. The main function of this library is to provide an object-oriented API for useful graphs and plots embedded in the application itself.
These libraries get installed automatically if you install Python from Anaconda, which is a free and open-source distribution of R and Python for data science computation. So, as the libraries are already installed, you just have to import them.
Importing dataset
dataset = pd.read_csv('tree_height.csv')   # the file name is assumed; use your own .csv file
x = dataset.iloc[:, :-1].values            # independent variable: all columns except the last
y = dataset.iloc[:, 1].values              # dependent variable: the last column (tree biomass)
Before you use this piece of code, make sure the .csv file you are about to import is located in the same working directory where the Python file is located. Otherwise, the interpreter will not be able to find the file.
Then we have to create two variables to store the independent and dependent data. Please keep in mind that the dataset I have used has the dependent (Y) variable in the last column. So, while storing the independent variable in x, the last column is excluded, and for the dependent variable y, only the last column is used.
Splitting the dataset in training and testing data
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test=train_test_split(x,y,test_size=1/4, random_state=0)
This step is of utmost importance when we are performing statistical modelling. Any model developed should be tested with an independent dataset which has not been used for model building. As we have only one dataset in hand, I have split it into two independent sets in a 75:25 ratio (test_size=1/4 in the code above).
The training data consists of 75% of the observations and is used for training the model, whereas the remaining 25% of the data is kept aside for testing the model. Luckily, the famous sklearn library for Python already has a module called model_selection which contains a function called train_test_split. We can easily get this data split done using it.
Application of linear regression
from sklearn.linear_model import LinearRegression
This is the main part where the regression takes place using Linear Regression function of sklearn library.
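The fitting step itself is not shown above. A minimal sketch of it, reusing the variable names from the earlier snippets, would be:
regressor=LinearRegression()        # create the linear regression object
regressor.fit(x_train, y_train)     # fit the model on the training data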
Printing coefficients
#To retrieve the intercept:
print(regressor.intercept_)
#For retrieving the slope:
print(regressor.coef_)
Here we can get the expression of the linear regression equation with the slope and intercept constant.
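If you want the fitted equation printed as a single line, a small optional addition (assuming the regressor fitted above, with tree height as the only predictor) is:
print('Tree_biomass =', regressor.coef_[0], '* Tree_height +', regressor.intercept_)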
Validation plot to check homoscedasticity assumption
#***** Plotting residual errors in training data
plt.scatter(regressor.predict(x_train), regressor.predict(x_train)-y_train,
color='blue', s=10, label = 'Train data')
# ******Plotting residual errors in testing data
plt.scatter(regressor.predict(x_test), regressor.predict(x_test)-y_test,
color='red',s=10,label = 'Test data')
#******Plotting reference line for zero residual error
plt.hlines(y=0, xmin=min(regressor.predict(x_train)), xmax=max(regressor.predict(x_train)), linewidth=2)
plt.title('Residual Vs Predicted plot for train and test data set')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend()
plt.show()
For the data used here this part will create a plot like this:
This part checks an important assumption of linear regression, namely that the residuals are homoscedastic, i.e. that they have equal variance. If this assumption fails, then the whole regression analysis does not stand.
Predicting the test results
The independent test dataset is now in use to predict the result using the newly developed model.
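The prediction call itself is not shown; with the same variable names as above it would simply be:
y_predict=regressor.predict(x_test)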
Printing actual and predicted values
new_dataset=pd.DataFrame({'Actual':y_test.flatten(), 'Predicted':y_predict.flatten()})
print(new_dataset)
Creating scatterplot using the training set
plt.scatter(x_train, y_train, color='red')
plt.plot(x_train, regressor.predict(x_train), color='blue')
plt.title('Tree height vs tree weight')
plt.xlabel('Tree height (m)')
plt.ylabel('Tree weight (kg)')
plt.show()
Visualization of model’s performance using test set data
plt.scatter(x_test, y_test, color='red')
plt.plot(x_test, regressor.predict(x_test), color='blue')
plt.title('Tree height vs tree weight')
plt.xlabel('Tree height (m)')
plt.ylabel('Tree weight (kg)')
plt.show()
Calculating fit statistics for the model
r_square=regressor.score(x_train, y_train)
print('Coefficient of determination(R square):',r_square)
from sklearn import metrics
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_predict))
print('Mean Squared Error:', metrics.mean_squared_error(y_test,y_predict))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_predict)))
This is the final step: assessing the goodness of fit of the model. This piece of code generates some statistics which quantitatively describe the performance of your model. Here the four most important and popular fit statistics are calculated. Except for the coefficient of determination, the lower the value of each statistic, the better the model. | {"url":"https://dibyendudeb.com/tag/simple-linear-regression/","timestamp":"2024-11-14T22:10:07Z","content_type":"text/html","content_length":"107293","record_id":"<urn:uuid:5572ed4b-8835-4302-9bab-c7955312fb8f>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00507.warc.gz"}
Nikola Šerman,
prof. emeritus
Linear Theory
Transfer Function Concept Digest
Controlling various physical variables most often relies on the negative feedback concept.
The controller detects a deviation between the set-point and the current value and acts back on the controlled process by trying to change the current value in the direction opposite to the deviation. In this way a circular flow of influences within the loop controller – controlled process – controller is established. Such an arrangement is commonly referred to as a closed loop.
The dynamic behavior of a closed loop differs significantly from how the controlled process and the controller behave individually. In situations when the dynamic behavior of a closed loop is technically unacceptable, e.g. due to large oscillations, it is not easy to find a remedy. Quite often even mighty “common sense” can be misleading.
There are plenty of fine textbooks and tutorials on the subject. The following web pages are not intended to substitute for them; they contain a compressed digest of the matter. They may be used either as an introductory “first glance” or as a refresher for readers who have already dealt with the subject.
The following content offers a good foundation for the qualitative analysis approach in controls study and design. Such an approach is most often the best available tool in troubleshooting control loop problems.
A basic assumption here is that closed loop system components behave as linear, lumped-parameters, time-invariant dynamic systems^(1) within certain limits and to an acceptable level of accuracy.
The mathematical model describing the dynamics of such a system ends up in the form of an ordinary linear differential equation with constant coefficients, or as a set of such equations.
There are two main concepts for analyzing such models:
• State Space Concept
• Transfer Function Concept
The State Space Concept relies on a matrix formulation of the differential equations to represent the system dynamics. It requires full knowledge of the system structure and parameters. The analysis is performed by computer-aided matrix calculations. State Space is a powerful means for quantitative analysis of complex dynamic systems. However, it offers little ground, if any, for troubleshooting and resolving real-world control engineering problems.
The Transfer Function Concept, although based on the somewhat abstract notion of the Laplace transform, presents firm ground for the qualitative analysis approach to solving feedback-related control problems.
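To make the idea concrete (this worked example is my addition, not part of the original digest): a first-order lag element described by the ordinary differential equation T·dy/dt + y = K·u, with time constant T and gain K, has, after Laplace transformation with zero initial conditions, the transfer function G(s) = Y(s)/U(s) = K/(T·s + 1).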
The following pages offer a brief digest of Transfer Function Concept from a closed loop dynamics point of view.
1. Brief description of linear, lumped-parameters and time-invariant dynamic systems is given in chapter 13. | {"url":"http://turbine.arirang.hr/linear-theory/01-2/","timestamp":"2024-11-02T09:44:23Z","content_type":"text/html","content_length":"33637","record_id":"<urn:uuid:2e572a61-2a03-4a3f-a7a0-eefa69ad421c>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00052.warc.gz"} |
Electron-Atom Collisions. Quantum-Relativistic Theory and Exercises
Electron collisions with atoms, ions, and molecules have been investigated since the earliest years of the last century because of their pervasiveness and importance in fields ranging from
astrophysics and plasma physics to atmospheric and condensed matter physics. Written in an accessible yet rigorous style, this book introduces the theory of electron-atom scattering into both the
non-relativistic and relativistic quantum frameworks. Quantum-relativistic electron-atom scattering theory is fundamental for simulation of electron-solid interaction (using the transport Monte Carlo
method). Chapters are included explaining the computational physics and mathematics used in the book. The book also includes exercises with an increasing degree of difficulty to allow the reader to
become familiar with the subject.
Maurizio Dapor is teaching fellow at the Department of Physics of the University of Trento and Head of the Interdisciplinary Laboratory for Computational Science at ECT*-FBK
From the Preface (pag. VII)
This book deals with collisions of electrons with atoms. Both the nonrelativistic and the relativistic theories are presented here. Since we are interested in applications, the first part of the
book is devoted to the basic concepts of computational physics, describing the main numerical tools necessary for solving problems concerning the scattering of charged particles by central
fields. We also briefly describe the main special functions of mathematical physics and provide methods to numerically calculate them.
The second part of the book is dedicated to the nonrelativistic approach to the study of electron–atom scattering and to an introduction to Pauli matrices and spin. The Thomas Fermi and
Hartree–Fock methods for describing many-electron atoms and, in particular, for calculating the so-called screening function are described in the second part of the book. The screening function
is crucial for the calculation of phase shifts, and its analytical approximation is also presented to make the calculation of the electrostatic atomic potential easier.
In the third part of the volume, after an introduction to the quantum relativistic equations (Klein–Gordon equation and Dirac equation), the Mott theory is described. It represents the
quantum-relativistic theory of elastic scattering of electrons by central fields, the so-called relativistic partial wave expansion method.
The last part of the book presents several applications. It contains exercises devoted to the calculation of the special functions of mathematical physics (notably, Legendre polynomials and
spherical Bessel functions, both regular and irregular) and to their use for computing phase shifts, scattering amplitudes, differential elastic scattering cross-sections, and spin-polarization
parameters. The exercises are provided with an increasing degree of difficulty. With the aid of these exercises, the reader can use all the information described in the first three parts of the
book to write her/his own computer codes for the computation of all the quantities relevant to the scattering processes.
Courtesy by De Gruyter. | {"url":"https://mag.unitn.it/in-libreria/109737/electron-atom-collisions-quantum-relativistic-theory-and-exercises","timestamp":"2024-11-06T08:01:41Z","content_type":"text/html","content_length":"41551","record_id":"<urn:uuid:e8f630b4-bf13-4165-85be-d4c1b5392d5e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00234.warc.gz"} |
Drawing Light ... – Stephen Hawking: “A Brief History of Time” – Space and Time – TO EN
Fig. 2.9 Space-time diagram showing a light signal (diagonal line) going from the sun to Alpha Centauri. The paths of the sun and Alpha Centauri through space-time are straight lines.
Maxwell's equations predicted that the speed of light should be the same whatever the speed of the source, and this has been confirmed by accurate measurements. It follows from this that if a pulse
of light is emitted at a particular time at a particular point in space, then as time goes on it will spread out as a sphere of light whose size and position are independent of the speed of the
source. After one millionth of a second the light will have spread out to form a sphere with a radius of 300 meters; after two millionths of a second, the radius will be 600 meters; and so on. It
will be like the ripples that spread out on the surface of a pond when a stone is thrown in. The ripples spread out as a circle that gets bigger as time goes on. If one stacks snapshots of the
ripples at different times one above the other, the expanding circle of ripples will mark out a cone whose tip is at the place and time at which the stone hit the water (Figure 2:10)
Fig. 2.10 A space-time diagram showing ripples spreading on the surface of a pond. The expanding circle of ripples makes a cone in space-time of two space directions and one time direction.
Similarly, the light spreading out from an event forms a (three- dimensional) cone in (the four-dimensional) space-time. This cone is called the future light cone of the event. In the same way we can
draw another cone, called the past light cone, which is the set of events from which a pulse of light is able to reach the given event (Figure 2:11)
Fig. 2.11 The path of a pulse of light from an event P forms a cone in space-time called "the future light cone of P." Similarly, "the past light cone of P" is the path of rays of light that will
pass through the event P. The two light cones divide space-time into the future, the past and the elsewhere of P.
Given an event P, one can divide the other events in the universe into three classes. Those events that can be reached from the event P by a particle or wave traveling at or below the speed of light
are said to be in the future of P. They will lie within or on the expanding sphere of light emitted from the event P. Thus they will lie within or on the future light cone of P in the space-time
diagram. Only events in the future of P can be affected by what happens at P because nothing can travel faster than light.
Similarly, the past of P can be defined as the set of all events from which it is possible to reach the event P traveling at or below the speed of light. It is thus the set of events that can affect
what happens at P. The events that do not lie in the future or past of P are said to lie in the elsewhere of P. What happens at such events can neither affect nor be affected by what happens at P.
For example, if the sun were to cease to shine at this very moment, it would not affect things on earth at the present time because they would be in the elsewhere of the event when the sun went out
(Figure 2:12)
Fig. 2.12 Space-time diagram showing how long we would have to wait to know that the sun has died.
We would know about it only after eight minutes, the time it takes light to reach us from the sun. Only then would events on earth lie in the future light cone of the event at which the sun went out.
Similarly, we do not know what is happening at the moment farther away in the universe: the light that we see from distant galaxies left them millions of years ago, and in the case of the most
distant object that we have seen, the light left some eight thousand million years ago. Thus, when we look at the universe, we are seeing it as it was in the past.
Fig. 2.13 When the effects of gravity are neglected, the light cones of all events all point in the same direction. | {"url":"https://to-en.gr/afos/Hawking/p1_en.htm","timestamp":"2024-11-11T03:15:36Z","content_type":"text/html","content_length":"9658","record_id":"<urn:uuid:fd0d6d7e-3380-400c-8f00-29f865634424>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00863.warc.gz"} |
week 10.2 discussion response - College School Essays
week 10.2 discussion response
Write a response to each discussion post.
Sperry 10.2
The Pearson correlation summarizes the relationship between two variables as a single number, and the coefficient of determination indicates what proportion of the variance in one of the variables is associated with the variance in the other variable. For example, I state that coat sales increase with dropping temperatures. To prove this I would gather data on sales and temperatures for those days. Next, I would plot this information and would see that the linear relationship is strong. Then I would calculate the Pearson correlation and the coefficient of determination (Erford, 2015).
Linear regression tries to explain the relationship between the two variables by fitting a linear equation to the observed data. The linear regression tells the strength of the effect that the independent variable has on the dependent variable. It can also be used to make predictions. Take my example of coat sales: due to global warming, coat sales will slowly decrease (Erford, 2015).
Partial correlation measures the association between two variables while, at the same time, controlling for the effects of a third variable. For example, in April there is always a spike in sales when the average temperature is 42 degrees, because the third, controlled variable would be the end-of-season sales. The semipartial correlation removes the effects of additional variables from only one of the variables that were included.
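Neither post shows an actual calculation, so here is a small illustrative sketch of my own (with made-up numbers) of computing the Pearson correlation and the coefficient of determination in Python:
import numpy as np
from scipy import stats

# made-up daily data: average temperature (degrees F) and number of coats sold
temps = np.array([20, 25, 30, 35, 40, 45, 50, 55])
sales = np.array([95, 90, 78, 70, 66, 52, 45, 40])

r, p_value = stats.pearsonr(temps, sales)   # Pearson correlation and its p-value
r_squared = r**2                            # coefficient of determination

print('r =', round(r, 3), 'r^2 =', round(r_squared, 3), 'p =', round(p_value, 5))
The strongly negative r (close to -1) matches the coat-sales story above, and r^2 gives the share of variance in sales associated with temperature.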
Pomajzl 10.2
The Pearson product-moment correlation coefficient summarizes the relationship between two variables as a single number. It is able to take values with a positive “r” indicating a positive linear
relationship and values with a negative “r” indicating a negative linear relationship (Erford, p. 363). The decimal indicates the strength of the relationship, meaning the closer the absolute value of “r” is to 1, the stronger the correlation (Erford, p. 363). The easiest way to explain and determine this is using a scatterplot: the closer the points are to forming a straight line, the stronger the relationship.
The coefficient of determination is the square of the correlation coefficient; it indicates the proportion of the variation in one of the variables that is associated with the variation in the other variable (Erford, p. 371).
The linear relationship between two variables can be used to predict the value of one variable from the other (Erford, p. 375). For example, if the amount of x is used to predict the amount of y, then the amount of y can likewise be used to predict the amount of x; they rely on each other.
The elements of linear regression are the slope and the intercept. The slope tells the steepness of the regression line and the intercept is its predicted value when the predictor is zero.
Partial correlation is when the effect of a third variable is removed (Erford, p. 375). Partial correlation is used when two variables are correlated because of a third variable to which they are both linked.
A semipartial correlation occurs when three variables have an effect on one another in some form, though the effect may not be direct. For example, ‘x’, ‘y’ and ‘z’ are all in the same study; ‘x’ and ‘y’ correlate and ‘x’ and ‘z’ correlate, but ‘y’ and ‘z’ are partialled out because they do not correlate with one another (Erford, p. 381).
Partial correlation and semipartial correlation are similar because they both involve three variables and one of the variables is removed in some form, whether it is removed completely or only partially from the two variables that do not correlate.
| {"url":"https://collegeschoolessays.com/2023/11/03/week-10-2-discussion-response/","timestamp":"2024-11-07T03:51:58Z","content_type":"text/html","content_length":"47238","record_id":"<urn:uuid:3e60612b-9fa2-4dc0-8c03-f63c9a1c7b2a>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00871.warc.gz"}
Work Out the Perimeter of a Range of 2D Shapes
The perimeter of a shape is simply the distance around the outside of the shape.
In this activity, we'll be working out the perimeter of a range of different 2D shapes.
When working out the perimeter we must make sure that we check the unit of measure - is it measured in mm, cm, m, km, or something else?
Sometimes we need to use a ruler to work out the perimeter.
In this activity, we will be given some of the measurements, so we won't need a ruler.
We'll use the measurements we are given to work out those we don't know.
Let's get started.
Let's work out the perimeter of this rectangle.
The short side (height) is 12 cm long.
The long side (width) is 22 cm long.
We have been given two of the measurements - 12 cm and 22 cm.
We know that the opposite sides will be equal.
So we can see that we have two sides of 12 cm = 24 cm.
We also have two sides of 22 cm = 44 cm.
Now we add these together to get the measurements of all four sides:
24 cm + 44 cm = 68 cm
The perimeter of the rectangle is 68 cm.
Now let's have a look at another 2D shape.
This is a regular hexagon.
If a shape is regular, it has sides that are all the same length.
If the sides of this hexagon are 5 cm each, what is the total perimeter?
We know that the sides are all the same length because the shape is regular.
We know that a hexagon has 6 sides.
So, we simply multiply 5 cm by 6.
5 cm x 6 = 30 cm
Now it's your turn to have a go. | {"url":"https://www.edplace.com/worksheet_info/maths/keystage2/year3/topic/269/12100/work-out-the-perimeter-of-a-range-of-2d-shapes","timestamp":"2024-11-12T15:23:31Z","content_type":"text/html","content_length":"82412","record_id":"<urn:uuid:9fea03a0-f502-473d-b10d-09e730a67920>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00449.warc.gz"} |
Quantifying R Package Dependency Risk
[This article was first published on R – Win-Vector Blog, and kindly contributed to R-bloggers.]
We recently commented on excess package dependencies as representing risk in the R package ecosystem.
The question remains: how much risk? Is low dependency a mere talisman, or is there evidence it is a good practice (or at least correlates with other good practices)?
Well, it turns out we can quantify it: each additional non-core package declared as an “Imports” or “Depends” is associated with an extra 11% relative chance of a package having an indicated issue on
CRAN. At over 5 non-core “Imports” plus “Depends” a package has significantly elevated risk.
The number of dependent packages in use versus modeled issue probability can be summed up in the following graph.
In the above graph the dashed horizontal line is the overall rate that packages have issues on CRAN. Notice the curve crosses the line well before 5 non-trivial dependencies.
In fact packages importing more than 5 non-trivial dependencies have issues on CRAN at an empirical rate of 35%, (above the model prediction at 5 dependencies) and double the overall rate of 17%.
Doubling a risk is considered very significant. And almost half the packages using more than 10 non-trivial dependencies have known issues on CRAN.
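A crude back-of-the-envelope reading of the headline number (my own arithmetic, not a figure from the article): if the 11% relative increase per dependency compounds multiplicatively, five extra non-core dependencies correspond to a factor of about 1.11^5 ≈ 1.7, and ten to about 1.11^10 ≈ 2.8, relative to a dependency-free package.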
A very short analysis deriving the above can be found here.
Obviously we are using lack of problems on CRAN as a rough approximation for package quality, and number of non-trivial package Imports and Depends as rough proxy for package complexity. It would be
interesting to quantify and control for other measures (including package complexity and purpose).
Our theory is the imports are not so much causing problems, but are a “code smell” correlated with other package issues. We feel this is evidence that the principles that guide some package
developers to prefer packages with well defined purposes and low dependencies are part of a larger set of principles that lead to higher quality software.
A table of all scored packages and their problem/risk estimate can be found here. | {"url":"https://www.r-bloggers.com/2019/03/quantifying-r-package-dependency-risk/","timestamp":"2024-11-06T21:04:43Z","content_type":"text/html","content_length":"89533","record_id":"<urn:uuid:f25f531a-46c8-42a1-9e9e-d25323999e05>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00884.warc.gz"} |
Set Theory
05-03-2019, 04:12 PM
Post: #1
Leviset Posts: 152
Member Joined: Aug 2015
Set Theory
Lately I’ve been revisiting some work I did using Set Theory 10 years ago.
Having not come back to Programmable Calculators until 2016, a long gap from programming the HP-97 back in 1981.
I haven’t got the length and breadth of experience that you guys/gals have with Programmable Calculators - some of it amazing and highly technical.
The best working calculators I have are the HP-41CX and HP-15C, plus a DM41 and DM16.
I like that you’ve been very supportive of questions that are probably simple to you but not to this 70 year old semi-retired Mathematician!
Back to Set Theory - I’ve done a fair bit of looking for anything Set Theory related on The Museum USB Stick and online.
Can anyone point me towards any articles or software relating to Set Theory that I could use on both my vintage HP calculators and/or my later models, namely the HP Prime (1st edition) or my HP-50G?
Finally do nearly all HP-42S programs run on the HP-41CX?
Denny Tuckerman
05-03-2019, 07:00 PM
Post: #2
John Keith Posts: 1,067
Senior Member Joined: Dec 2013
RE: Set Theory
The Prime includes the basic set operations (union, intersection, difference). The HP 50 does not, but they are included in the GoferLists library. The GoferLists commands are a bit slow, so I posted some more-or-less equivalent programs in this thread. They require the ListExt Library.
IMHO, both of the above-mentioned libraries are essential for anyone doing list-based programming on the HP 50.
05-03-2019, 09:27 PM
Post: #3
Thomas Okken Posts: 1,896
Senior Member Joined: Feb 2014
RE: Set Theory
(05-03-2019 04:12 PM)Leviset Wrote: do nearly all HP-42S programs run on the HP-41CX?
Judging by my own collection, I'd have to say no, absolutely not. I do have a few programs that will run on the 41CX (or even on the 41C), but those are all programs that were originally written for
those calculators.
Around 80% of the programs in my collection make use of features like named variables, matrices, or the solver, and would not run on the 41CX without at least extensive modifications. Those features
are so powerful and so versatile that they tend to crop up in almost any nontrivial program you'll write on the 42S.
05-04-2019, 12:56 PM
Post: #4
Karl-Ludwig Butte Posts: 156
Member Joined: Dec 2013
RE: Set Theory
Hi Dennis,
maybe you'll like to have a look at my article
"The Secret of the Aleph"
where you'll find some background about Georg Cantor and a program to calculate a power set for the HP-50G.
Have fun.
Best regards
05-04-2019, 11:49 PM
Post: #5
Leviset Posts: 152
Member Joined: Aug 2015
RE: Set Theory
Karl - Thanks
Denny Tuckerman
05-05-2019, 03:12 PM
(This post was last modified: 05-06-2019 03:25 PM by Jonathan Busby.)
Post: #6
Jonathan Busby Posts: 284
Member Joined: Nov 2014
RE: Set Theory
The method I have used when manipulating sets on my HP48GX is based on the fact that the power set of a set of size N has 2^N elements. One might be tempted to construct the power set by iterating through all 2^N values bitwise, but that is very inefficient. A better method is to use a Gray code counter, so that to build the power set only one element has to be altered at a time.
EDIT : If you really want to be pedantic, the number of non-empty subsets is 2^N - 1, but since the power set also includes the empty set { }, the full count is 2^N
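As an illustration of the Gray-code idea (a sketch of my own in Python rather than RPL, not code from the original post), each step toggles exactly one element, so consecutive subsets differ by a single item:
def gray_code_power_set(items):
    # Enumerate all 2^N subsets so that consecutive subsets differ by one element.
    n = len(items)
    current = set()
    subsets = [set(current)]
    for i in range(1, 2 ** n):
        # Index of the single bit that changes between the Gray codes of i-1 and i.
        changed_bit = (((i - 1) ^ ((i - 1) >> 1)) ^ (i ^ (i >> 1))).bit_length() - 1
        elem = items[changed_bit]
        # Toggle exactly one element per step.
        if elem in current:
            current.remove(elem)
        else:
            current.add(elem)
        subsets.append(set(current))
    return subsets

print(gray_code_power_set(['a', 'b', 'c']))   # 8 subsets, each differing from the previous by one element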
Aeternitas modo est. Longa non est, paene nil.
| {"url":"https://hpmuseum.org/forum/thread-12916-post-115979.html","timestamp":"2024-11-07T22:37:47Z","content_type":"application/xhtml+xml","content_length":"32807","record_id":"<urn:uuid:16931f6c-6ce0-4c6f-94b4-57b960874b91>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00364.warc.gz"}
How do you solve and write the following in interval notation: -5 < 3x + 4?
Answer 1
-5 < 3x + 4  ⟹  -3 < x
x ∈ (-3, +∞)
Given: -5 < 3x + 4
Since we can subtract the same amount from both sides of an inequality without affecting the validity or orientation of the inequality: -9 < 3x
We can also divide both sides of an inequality by any amount greater than zero without affecting the validity or orientation of the inequality: -3 < x
If you prefer, you could write this as x > -3, but it’s not really necessary.
Answer 2
To solve and write the inequality -5 < 3x + 4:
1. Subtract 4 from both sides: -5 - 4 < 3x, so -9 < 3x
2. Divide both sides by 3 (since 3 is positive, the inequality sign remains the same): -9/3 < x, so -3 < x
3. Write the solution in interval notation: (-3, ∞)
| {"url":"https://tutor.hix.ai/question/how-do-you-solve-and-write-the-following-in-interval-notation-5-3x-4-8f9af93913","timestamp":"2024-11-06T01:47:42Z","content_type":"text/html","content_length":"569538","record_id":"<urn:uuid:253ec281-26ce-46b2-86ce-ce7edaa1cf4e>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00319.warc.gz"}
Approximate the posterior not the model
Many people like to use Gaussian process (GP) models. Even the most expert deep learning researchers will use a GP for regression problems with small numbers of data points (Bengio 2016) or in the
context of optimising hyper-parameters of deep learning models and algorithms (Snoek et al. 2012). For larger datasets, researchers have spent years trying to make reasonable approximations so that
they can use GPs in those settings too: sacrificing some accuracy for computability. A question I often get asked is: "there are so many GP approximations, which should we use?!"
I normally use the variational approximation. This post explains my reasons and proposes a shift in how sparse GPs might be perceived: from model approximations to posterior approximations.
Sparseness, pseudo-inputs and inducing points
The research of scaling GPs to larger datasets is often covered by the catch-all term ‘sparse GPs’. The term stems from some of the original work in this area, where methods used a subset of the
dataset to approximate the kernel matrix (e.g. Williams and Seeger 2000). Sparseness occurred through selection of the datapoints. Given a dataset of inputs \(\mathbf X = [\mathbf x_i]_{i=1}^N\), \(\
mathbf x_i \in \mathcal X\), and responses (targets) \(\mathbf y = [y_i]_{i=1}^N\), the task became to select a smaller set \(\mathbf Z \subset \mathbf X\) and \(\tilde{\mathbf y} \subset \mathbf y\)
of the training data, and to compute the Gaussian process on that subset instead.
Then, Ed Snelson and Zoubin Ghahramani (2006) had a great idea, which was to generalise the set of selected points so that they did not have to be a subset of the training dataset. They called these
new points ‘pseudo inputs’, since \(\mathbf z_i \in \mathcal X\), but not necessarily \(\mathbf z_i \in \mathbf X\). A neat thing about this approach was that the locations of these pseudo-inputs
could be found by (unconstrained) gradient-based optimisation of the marginal likelihood. This idea stuck: the method was renamed ‘FITC’ (fully independent training conditional) by (Quiñonero-Candela
and Rasmussen 2005), and has been the workhorse of sparse Gaussian processes ever since.
Model Approximations
Quiñonero-Candela and Rasmussen's paper nicely provided an overview of all the sparse (or pseudo-input) GP methods in use at the time, and it showed that they could be interpreted as alternative
models. If we consider the Gaussian process model as a prior on functions, then many sparse GPs can be considered as alternative priors on functions.
But this view conflates the model and the approximation and makes it hard to know whether to attribute performance (or problems) to one or the other. For example, we sometimes see that FITC
outperforms standard GP regression. This might sound like a bonus, but it's actually rather problematic. It causes problems for users because we are no longer able to criticise the model.
Let me elaborate: suppose we fit a model using the FITC approximation, and it performs worse than desired. Should we modify the model (switch to a different kernel) or should we modify the
approximation (increase the number of inducing points)? Equally, suppose we fit a model using FITC and it performs really well. Is that because we have made an excellent kernel choice, or because
pathologies in the FITC approximation are hiding mis-specifications in the model? Let's look at some concrete examples:
• In replicating the ‘Generalised FITC’ paper (Naish-Guzman and Holden 2007), Ricardo Andrade (now at UCSF) found issues replicating the results of the paper. The model seemed to overfit. It turned
out that stopping the optimisation of the inducing inputs after only 100 iterations (as specified in Naish-Guzman’s article) made the method work. The FITC approximation conflated with early
stopping, which made it hard to know what was wrong.
• The FITC approximation gives rise to a heteroscedastic effect (see Ed Snelson’s thesis (2006) for an in-depth discussion). This seems to help on some datasets. But how are we to know if a dataset
needs a heteroscedastic model? If I want such a model, I want to specify it, understand it and subject it to model comparison and criticism.
There is an accumulation of evidence that the FITC method overfits. Alex Matthews has a tidy case study in his thesis for the regression case. The FITC approximation will give us the real posterior
if the inducing points are placed at the data points, but optimising the locations of the inducing points will not necessarily help. In fact, Alex demonstrated that even when initialised at the
perfect solution \(\mathbf Z = \mathbf X\), the FITC objective encourages \(\mathbf Z\) to move away from this solution. Quiñonero-Candela and Rasmussen's explanation of FITC as an alternative model
(prior) explains why: the inducing points are parameters of a very large model, and optimising those parameters can lead to overfitting.
Posterior Approximations
In 2009, Michalis Titsias published a paper that proposed a different approach: “Variational Learning of Inducing Variables in Sparse Gaussian Processes” (Titsias 2009). This method does not quite
fit into the unifying view proposed by Quiñonero-Candela. The key idea is to construct a variational approximation to the posterior process, and learn the pseudo-points \(\mathbf Z\) alongside the
kernel parameters by maximising the evidence lower bound (ELBO), i.e. a lower bound on the log-marginal likelihood. There was quite a bit of prior art on variational inference in Gaussian processes
(e.g. Csató and Opper 2002; Seeger 2003): Titsias’ important contribution was to treat \(\mathbf Z\) as parameters of the variational approximation, rather than model parameters.
Now we understand that the variational method proposed by Titsias (and other related methods, e.g. Opper and Archambeau 2009) minimises the Kullback-Leibler (KL) divergence between the posterior GP
approximation and the true posterior GP, where the KL is defined across the whole process (Matthews et al. 2016, Matthews 2017). This gives us a solid foundation on which to build approximations to
the posterior of many Gaussian process models, including GP classification, GP state-space models, and more.
The construction is as follows. If the GP prior is
$$p( f(\cdot) ) = \mathcal{GP}\big(0,\, k(\cdot, \cdot)\big)$$
$$p(\mathbf y \,|\, f(\cdot), \mathbf X ) = \prod_i p(y_i \,|\, f(\mathbf x_i) )\,$$
then we can give an approximation to the posterior process as
$$q(f(\cdot)) = \mathcal{GP}\big( \mu(\cdot),\, \sigma(\cdot, \cdot)\big)\,,$$
where \(\mu(\cdot)\) and \(\sigma(\cdot,\cdot)\) are functions that depend on the pseudo-points and maybe other parameters, too. Then, we ‘just’ do variational Bayes. The beauty of this is that it is
totally clear that improving the ELBO will improve the approximation: we are (almost) completely free to do whatever we want with \(\mu(\cdot)\) and \(\sigma(\cdot,\cdot)\). However, to be able to
compute the ELBO, we do need to pick particular forms for \(\mu(\cdot)\) and \(\sigma(\cdot,\cdot)\). The most straightforward way to do this is $$ \mu(\cdot) = k(., \mathbf Z) k(\mathbf Z, \mathbf
Z)^{-1} \mathbf m\\ \sigma(\cdot, \cdot) = k(\cdot, \cdot) - k(\cdot, \mathbf Z)[k(\mathbf Z, \mathbf Z)^{-1} - k(\mathbf Z, \mathbf Z)^{-1}\Sigma k(\mathbf Z, \mathbf Z)^{-1}]k(\mathbf Z, \cdot) $$
where \(\mathbf m\), \(\mathbf \Sigma\) and \(\mathbf Z\) are variational parameters to be adjusted in order to maximise the ELBO, and thus reduce the Kullback-Leibler divergence \(\mathcal{KL}\big[q
(f(\cdot))\,||\,p(f(\cdot)\,|\,\mathbf y, \mathbf X)\big]\).
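To make the predictive equations above concrete, here is a small illustrative NumPy sketch of my own (not from the original post) that evaluates \(\mu(\cdot)\) and \(\sigma(\cdot, \cdot)\) for a squared-exponential kernel, with the variational parameters \(\mathbf m\), \(\mathbf \Sigma\) and \(\mathbf Z\) simply fixed by hand:
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # k(a, b) = variance * exp(-0.5 * ||a - b||^2 / lengthscale^2)
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def sparse_variational_predict(Xstar, Z, m, S):
    # mu(x)        = k(x, Z) Kzz^{-1} m
    # sigma(x, x') = k(x, x') - k(x, Z) [Kzz^{-1} - Kzz^{-1} S Kzz^{-1}] k(Z, x')
    Kzz = rbf_kernel(Z, Z) + 1e-8 * np.eye(len(Z))   # jitter for numerical stability
    Kxz = rbf_kernel(Xstar, Z)
    Kxx = rbf_kernel(Xstar, Xstar)
    Kzz_inv = np.linalg.inv(Kzz)
    mean = Kxz @ Kzz_inv @ m
    cov = Kxx - Kxz @ (Kzz_inv - Kzz_inv @ S @ Kzz_inv) @ Kxz.T
    return mean, cov

# toy usage: three inducing points in 1D and five test locations
Z = np.array([[-1.0], [0.0], [1.0]])
m = np.array([0.5, -0.2, 0.3])
S = 0.1 * np.eye(3)                      # any symmetric positive semi-definite matrix
Xstar = np.linspace(-2, 2, 5)[:, None]
mean, cov = sparse_variational_predict(Xstar, Z, m, S)
print(mean)
print(np.diag(cov))
A practical implementation would use Cholesky factorisations rather than explicit inverses, and would choose \(\mathbf m\), \(\mathbf \Sigma\) and \(\mathbf Z\) by maximising the ELBO; here they are fixed only to show how the formulas are evaluated.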
These expressions appear superficially similar to the approximate-prior approach, but the underpinning idea is where most of the value lies. What we are doing is variational Bayes with the whole
Gaussian process. Some immediate advantages include:
• The approximation is nonparametric. This means that behaviour of the predictions away from the data takes the same form as the true posterior (usually increasing predictive variance away from the data).
• We can always add more inducing points: since the solution with \(M\) pseudo points could always be captured by the one with \(M+1\) pseudo points, the quality of the approximation increases
monotonically with the number of pseudo points.
• The pseudo-inputs \(\mathbf Z\) (and their number!) are parameters of the approximation, not parameters of the model. This means that we can apply whatever strategy we want to the optimisation of
\(\mathbf Z\): if the ELBO increases, then the approximation must be better (in the KL sense). So \(\mathbf Z\) are protected from overfitting.
• It is clear what to do at predict time. The whole process is included in the approximation, so we simply have to evaluate the approximate posterior GP to make a prediction. This has been unclear
in the past, when some authors have made the distinction of ‘extra approximations’ being required for prediction.
Variational Bayes gives a solid foundation for research in approximate inference for Gaussian processes. VB is making a serious comeback at the moment: stochastic variational inference lets us apply
VB to large datasets; recognition models let us make efficient representations of approximate posteriors in latent variable models, and normalising flows (Rezende and Mohamed 2015) let us make the
approximating posteriors very flexible. In my view, these technologies are all possible because VB genuinely turns inference into optimisation: a better ELBO (evidence lower bound) is always better.
I am looking forward to seeing more GP models being solved by VB as new ideas filter into the GP community. Equally, I am looking forward to GPs becoming ‘the standard’ building block for modelling
functions within models as the variational approximation fits within wider inference schemes.
| {"url":"https://www.predapp.com/post/approximate-the-posterior-not-the-model","timestamp":"2024-11-12T05:25:27Z","content_type":"text/html","content_length":"1040550","record_id":"<urn:uuid:b0ca1504-38a9-49ae-a322-5801fd5f5348>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00364.warc.gz"}
256 Bits of Security
This is an incomplete discussion of SSL/TLS authentication and encryption. This post only goes into RSA and does not discuss DHE, PFS, elliptic curves, or other mechanisms.
In a previous post I created a 15,360-bit RSA key and timed how long it took to create the key. Some may have thought that was some sort of stunt to check processor speed. I mean, who needs an RSA
key of such strength? Well, it turns out that if you actually need 256 bits of security then you'll actually need an RSA key of this size.
According to NIST (SP 800-57, Part 1, Rev 3), to achieve 256 bits of security you need an RSA key of at least 15,360 bits to protect the symmetric 256-bit cipher that's being used to secure the
communications (SSL/TLS). So what does the new industry-standard RSA key size of 2048 bits buy you? According to the same document that 2048-bit key buys you 112 bits of security. Increasing the bit
strength to 3072 will bring you up to the 128 bits that most people expect to be the minimum protection. And this is assuming that the certificate and the certificate chain are all signed using a
SHA-2 algorithm (SHA-1 only gets you 60 bits of security, down from its nominal 80, when used for digital signatures and hashes).
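For reference, the NIST SP 800-57 equivalences the post draws on can be summarised roughly as follows (this table is my addition, not part of the original post):
Security strength   Symmetric cipher   RSA modulus (bits)   Hash for signatures
80 bits             2TDEA              1024                 SHA-1
112 bits            3TDEA              2048                 SHA-224
128 bits            AES-128            3072                 SHA-256
192 bits            AES-192            7680                 SHA-384
256 bits            AES-256            15360                SHA-512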
So what does this mean for those websites running AES-256 or CAMELLIA-256 ciphers? They are likely wasting processor cycles and not adding to the overall security of the circuit. I'll make two
examples of TLS implementations in the wild.
First, we'll look at wordpress.com. This website is protected using a 2048-bit RSA certificate, signed using SHA256, and using AES-128 cipher. This represents 112 bits of security because of the
limitation of the 2048-bit key. The certificate is properly chained back to the GoDaddy CA which has a root and intermediate certificates that are all 2048 bits and signed using SHA-256. Even though
there is a reduced security when using the 2048-bit key, it's likely more efficient to use the AES-128 cipher than any other due to chip accelerations that are typically found in computers now days.
Next we'll look at one of my domains: christensenplace.us. This website is protected using a 2048-bit RSA certificate, signed using SHA-1, and using the CAMELLIA-256 cipher. This represents 60 bits of security (down from the nominal 80) due to the limitation of the SHA-1 signature used on the certificate and the CA and intermediate certificates from AddTrust and COMODO CA. My hosting company uses both the RC4
cipher and the CAMELLIA-256 cipher. In this case the CAMELLIA-256 cipher is a waste of processor since the certificates used aren't nearly strong enough to support such encryption. I block RC4 in my
browser as RC4 is no longer recommended to protect anything. I'm not really sure exactly how much security you'll get from using RC4 but I suspect it's less than SHA-1.
So what to do? Well, if system administrators are concerned with performance then using a 128-bit cipher (like AES-128) is a good idea. For those that are concerned with security, using a 3072-bit
RSA key (at a minimum) will give you 128 bits of security. If you feel you need more bits of security than 128 then generating a solid, large RSA key is the first step. Deciding how many bits of
security you need all depends on how long you want the information to be secure. But that's a post for another day. | {"url":"https://eric.aehe.us/256-bits-of-security.html","timestamp":"2024-11-10T02:31:33Z","content_type":"text/html","content_length":"9016","record_id":"<urn:uuid:bab42d52-f265-42c3-a2ae-0ac4d8d6756b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00293.warc.gz"} |
How do you perform a Newton-Raphson method in Matlab?
Steps to find root using Newton’s Method:
1. Check if the given function is differentiable or not.
2. Find the first derivative f'(x) of the given function f(x).
3. Take an initial guess root of the function, say x1.
4. Use Newton’s iteration formula to get new better approximate of the root, say x2
5. Repeat the process for x3, x4…
For which equations can we apply the Newton-Raphson method?
Use the Newton-Raphson method to determine an improvement on the initial estimate of the root in the following cases. a. f(x) = e^x − 4 from initial estimate x0 = 1.5…. Algorithm for the Newton-Raphson method:
1. n = 0.
2. x_{n+1} = x_n − f(x_n) / f′(x_n)
3. If |x_{n+1} − x_n| ≤ ϵ, accept x_r ≃ x_{n+1} and end; otherwise repeat from step 2.
How can the Newton-Raphson method be used to solve nonlinear equations?
EXAMPLE: Let us solve cos(x) = x using the Newton-Raphson method starting with x0 = 1. Here f(x) = cos(x) − x, and hence f′(x) = −sin(x) − 1. So the Newton-Raphson iteration is x_{k+1} = x_k + (cos(x_k) − x_k) / (sin(x_k) + 1).
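Carrying this iteration out numerically (my own arithmetic, not part of the quoted answer): starting from x0 = 1 we get x1 ≈ 0.7504, x2 ≈ 0.7391 and x3 ≈ 0.7391, so the iteration settles on the fixed point x ≈ 0.739085 of cos(x) = x within just a few steps.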
How do you put E into MATLAB?
MATLAB writes the exponential function as exp(x), so the number e in MATLAB is exp(1).
How does Newton’s method work?
The Newton-Raphson method (also known as Newton’s method) is a way to quickly find a good approximation for the root of a real-valued function f ( x ) = 0 f(x) = 0 f(x)=0. It uses the idea that a
continuous and differentiable function can be approximated by a straight line tangent to it.
Is Newton-Raphson iterative?
Newton’s method (or Newton-Raphson method) is an iterative procedure used to find the roots of a function. until the root is found to the desired accuracy. …
What are the limitations of Newton-Raphson method?
Disadvantages of Newton Raphson Method Division by zero problem can occur. Root jumping might take place thereby not getting intended solution. Inflection point issue might occur. Symbolic derivative
is required.
Is Newton-Raphson a bracketing method?
In the Newton-Raphson method, the root is not bracketed. In fact, only one initial guess of the root is needed to get the iterative process started to find the root of an equation. The method hence
falls in the category of open methods.
Is the Newton-Raphson method more accurate than the bisection method for solving nonlinear equations?
The study aims at comparing the rate of performance (convergence) of Bisection, Newton-Raphson and Secant as methods of root-finding. They concluded that the Newton method is 7.678622465 times better than the Bisection method while the Secant method is 1.389482397 times better than the Newton method.
Is e recognized in MATLAB?
In MATLAB the function exp(x) gives the value of the exponential function e^x. e = e^1 = exp(1). MATLAB does not use the symbol e for the mathematical constant e = 2.718281828459046. | {"url":"https://ru-facts.com/how-do-you-perform-a-newton-raphson-method-in-matlab/","timestamp":"2024-11-07T16:29:44Z","content_type":"text/html","content_length":"53033","record_id":"<urn:uuid:eb0e3570-af13-4cc3-805c-35202c025249>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00086.warc.gz"}
2022 Ph.D Thesis Defenses
2023 Ph.D Thesis Defenses
Tam Cheetham-West
Title: Finite Quotients of Hyperbolic 3-Manifold Groups
Thesis Advisor: Alan Reid
This thesis provides further evidence of the seemingly very close relationship between the geometry of a finite-volume hyperbolic 3-manifold and the profinite completion of its fundamental group.
Ethen Gwaltney
Title: Stahl-Totik regularity and exotic spectra of Dirac operators
Thesis Advisor: Milivoje Lukic
This thesis motivates and presents three novel results in the spectral theory of one-dimensional Dirac operators, each of which concerns various forms of exotic or distinguished spectral
characteristics. First, we consider the possibility of embedded eigenvalues in the absolutely continuous spectrum of a Dirac operator with operator data of Wigner-von Neumann type. Second, we
demonstrate the genericity of Cantor spectrum when the operator data is chosen to be limit-periodic. Third, we provide for the Dirac operator setting an analogue of Stahl-Totik regularity, which,
among other things, provides a lower bound on the thickness of the spectrum in terms of the operator data when the data is taken to be uniformly locally square integrable.
Connor Sell
Title: Cusps and commensurability classes of hyperbolic 4-manifolds
Thesis Advisor: Alan Reid
It is well-known that the cusp cross-sections of finite-volume, cusped hyperbolic n-manifolds are flat, compact (n − 1)-manifolds. In 2002, Long and Reid proved that each of the finitely many
homeomorphism classes of flat, compact (n − 1)-manifolds occur as the cusp cross-section of some arithmetic hyperbolic n-orbifold; the orbifold was upgraded to a manifold by McReynolds
in 2004. There are six orientable, compact, flat 3-manifolds that can occur as cusp cross-sections of hyperbolic 4-manifolds. This thesis provides criteria for exactly when a given commensurability
class of arithmetic hyperbolic 4-manifolds contains a representative with a given cusp type. In particular, for three of the six cusp types, we provide infinitely many examples of commensurability
classes that contain no manifolds with cusps of the given type; no such examples were previously known for any cusp type in any dimension. Further, we extend this result to find commensurability
classes of hyperbolic 5-manifolds that avoid some compact, flat 4-manifolds as cusp cross-sections, and classes of non-arithmetic manifolds in both dimensions that avoid some cusp types
Asgeir Valfells
Title: Local Criteria in Polyhedral Minimizing Problems
Thesis Advisor: Bob Hardt
This thesis will discuss two polyhedral minimizing problems and the necessary local criteria we find any such minimizers must have. We will also briefly discuss an extension of a third minimizing
problem to higher dimension. The first result we present classifies the three-dimensional piecewise linear cones in R4 that are mass minimizing w.r.t. Lipschitz maps in the sense of Almgren’s M (0,
δ) sets as in Taylor’s classification of two-dimensional soap film singularities. There are three that arise naturally by taking products of R with lower dimensional cases and earlier literature has
demonstrated the existence of two with 0-dimensional singularities. We classify all possible candidates and demonstrate that there are no p.l. minimizers outside these five. The second result we
present is an assortment of criteria for edge-length minimizing polyhedrons. The aim is to get closer to answering a 1957 conjecture by Zdzislaw Melzak, that the unit volume polyhedron with least
edge length was a triangular right prism, with edge length 22/3311/6 ≈ 11.896. We present a variety of variational arguments to restrict the class of minimizing candidates.
Chunyi Wang
Title: Direct and Inverse Spectral Theory for the Hamiltonian System with Measure Coefficients
Thesis Advisor: David Damanik
This thesis discusses the direct and inverse spectral theory of Hamiltonian systems with measure coefficients, which can cover more singular cases. In the first part, we define self-adjoint relations
associated with the systems and develop Weyl-Titchmarsh theory for these relations. Then, we develop subordinacy theory for the relations and discuss several cases when the absolutely continuous
spectrum appears. Finally, we develop inverse uniqueness results for Hamiltonian systems with measure coefficients by applying de Branges’ subspace ordering theorem. Overall, this thesis contributes
to the study of Hamiltonian systems with measure coefficients, expands the self-adjoint operator theory to a more general class of physical models, and investigates common spectral properties among
different model
Harshit Yadav
Title: Functorial constructions of Frobenius algebras in the Drinfeld center
Thesis Advisor: Chelsea Walton
Frobenius algebras in vector spaces are classical algebraic structures. However, because of their discovered connections to various fields, including computer science and
topological quantum field theories, there is a growing interest in exploring their generalizations within the framework of monoidal categories. Inspired by these connections, this thesis delves into
the problem of functorially constructing ‘nice’ Frobenius algebra objects in such categories. We introduce unimodular module categories and employ them to provide a functorial construction of
Frobenius algebras in the Drinfeld center of a finite tensor category. We also classify unimodular module categories over the category of representations of a finite dimensional Hopf algebra
Kenneth Zheng
Title: Brauer groups of a family of nonnegative Kodaira dimension elliptic surfaces
Thesis Advisor: Anthony Varilly-Alvarado
We explore the Brauer groups of the elliptic surfaces given by y^2 = x^3 + t^{6m} + 1 over Q for m = 2, 3. When m = 2, the resulting surface is K3, and when m = 3, the surface is honestly elliptic with
Kodaira dimension 1. We compute the algebraic Brauer groups of these surfaces by studying the action of Gal(Q/Q) on their Neron-Severi groups. Following the work of Gvirtz, Loughran, and Nakahara
[GLN22], we find bounds for the exponents of transcendental Brauer groups of these surfaces. The transcendental Brauer group is closely related to the transcendental lattice. The argument begins with
an explicit description of the basis of the respective transcendental lattices and reinterpreting elements of these lattices as elements in rings of integers. From this, we bound the transcendental
Brauer group. These bounds apply more generally to the surfaces given by y^2 = x^3 + A_1 t^{6m} + A_2 for A_i ∈ Z and m = 2, 3
2022 Ph.D Thesis Defenses
Austen James
Title: A Bayesian Approach to Computing Brauer Groups of Cubic Surfaces
Thesis Advisor: Tony Varilly-Alvarado
We present an algorithm for computing Brauer groups of cubic surfaces. The algorithm takes as input an equation f (x, y, z, w) = 0 for a cubic surface X over Q and a confidence threshold 0.5 < ρ < 1,
and outputs the Brauer group of X, Br X/ Br Q and a confidence level ψ > ρ for the result. The algorithm runs by sampling lifts of Frobenius at many primes of good reduction and relies on
Chebotarev’s density theorem and Bayesian inference to produce, with confidence ψ > ρ, a subgroup of W (E6). This subgroup represents the action of Galois on the geometric Picard group of X, from
which we compute Br X/ Br Q. We give a description of this algorithm and a proof that it terminates, as well as an implementation in Magma. We also examine the speed of such an approach relative to
existing methods and explore how the Bayesian technique of this algorithm can be applied to answer questions concerning the Galois and Brauer groups of other classes of surfaces.
Yikai Chen
Title: Mathematical Results for Michell Trusses
Thesis Advisor: Robert Hardt
Given an equilibrated vector force system $\mathbf{F}$ of finite mass and bounded support, we investigate the possibility and properties of a cost minimizing structure of given materials that
balances $\mathbf{F}$. Our work generalizes and reinterprets results of Michell’s paper in 1904 and Gangbo’s recent work where the given equilibrated force system occurs on a finite set of points and
the balancing structure consists of finitely many stressed bars joining these points. Such a bar corresponds to an interval $[a,b] \subset \mathbb{R}^n$ having a multiplicity $\lambda \in \mathbb{R}$
where $|\lambda|$ indicates the stress density on the bar and $\sgn(\lambda)$ indicates whether it is being compressed or extended. While there exists a finite bar system to balance any given
equilibrated finite force system, Michell already observed that a finite cost-minimizing one may not exist. In this thesis, we introduce two new mathematical representations of Michell trusses based
on one-dimensional finite mass varifolds and flat $\R^n$-chains. Here one may use a one-dimensional signed varifold to model the balancing structure so that the internal force of the positive (or
compressed) part coincides with its first variation while the internal force of the negative (or extended) part coincides with its negative first variation. For the chain model, we use the subspace
of structural flat $\mathbb{R}^n$ chains in which the coefficient vectors are a.e. co-linear with the orientation vectors. The net force then becomes simply the $\mathbb{R}^n$ chain boundary and so
cost-minimization becomes precisely the mass-minimizing Plateau problem for structural chains. For either model, a known compactness theorem leads to existence of optimal cost-minimizers as well as
time-continuous cost-decreasing flows.
Giorgio Young
Title: Some results on the spectral theory of one-dimensional operators and associated problems
Thesis Advisor: Milivoje Lukić
This thesis discusses results in the area of spectral theory of Schrödinger operators, and their discrete analogs, Jacobi matrices, as well as some closely associated problems. The first result we
present relates to the quantum dynamics generated by a particular class of almost periodic Schrödinger operators. We show that the dynamics generated by Schrödinger operators whose potentials are
approximated exponentially quickly by a periodic sequence exhibit a strong form of ballistic transport. The second result exploits the connection between the KdV hierarchy and one-dimensional
Schrödinger operators to prove a uniqueness result for the KdV hierarchy with reflectionless initial data via inverse spectral theoretic techniques. The third and fourth results concern orthogonal
and Chebyshev rational functions with poles on the extended real line. In the process of extending some of the existing theory for polynomials and exploring some of the new phenomena that arise, we
present a proof of a conjecture of Barry Simon’s. This thesis contains joint work with Benjamin Eichinger and Milivoje Lukić.
Nicholas Rouse
Title: On a conjecture of Chinburg-Reid-Stover
Thesis Advisor: Alan Reid
We study a conjecture of Chinburg-Reid-Stover about ramification sets of quaternion algebras associated to hyperbolic 3-orbifolds obtained by (d,0) Dehn surgery on hyperbolic knot complements in S^3.
For a sporadic example and an infinite family, we prove that the set of rational primes p such that there is some d such that the quaternion algebra associated to the (d,0) surgery is ramified at
some prime ideal above p is infinite. This behavior is governed by the Alexander polynomial of the knot, and we investigate its connection to reducible representations on the canonical component of
the character variety and the failure of a certain function field quaternion algebra to extend to an Azumaya algebra over the canonical component. We further provide a more general framework for
finding such examples that one may use to recover the infinite family.
Stephen Wolff
Title: The inverse Galois problem for del Pezzo surfaces of degree 1 and algebraic K3 surfaces
Thesis Advisor: Anthony Várilly-Alvarado
In this thesis we study the inverse Galois problem for del Pezzo surfaces of degree one and for algebraic K3 surfaces. We begin with an overview of how the question of the existence of k-points on a
nice k-variety leads, via Brauer groups, to the inverse Galois problem. We then discuss an algorithm to compute all finite subgroups of the general linear group GL(n,Z) up to conjugacy. The first
cohomologies of these subgroups are a superset of the target groups of the inverse Galois problem for any family of nice k-varieties whose geometric Picard group is free and of finite rank. We apply
these results to algebraic K3 surfaces defined over the rational numbers, providing explicit equations for a surface solving the only nontrivial instance of the inverse Galois problem in geometric
Picard rank two. Next we study representatives from three families of del Pezzo surfaces of degree one, searching for 5-torsion in the Brauer group. For two of the three surfaces, we show that the
Brauer group is trivial when the surface is defined over the rational numbers, but becomes isomorphic to Z/5Z or (Z/5Z)^2 when the base field is extended to a suitable number field. For the third surface,
we show that its splitting field has degree 2400 as an extension of the rational numbers, a degree consistent with 5-torsion in the Brauer group.
William Stagner
Title: Filling links and minimal surfaces in 3-manifolds
Thesis Advisor: Alan Reid
This thesis studies the existence of filling links in 3-manifolds. A link L in a 3-manifold M is filling in M if, for any spine G of M disjoint from L, \pi_1(G) injects into \pi_1(M - L).
Conceptually, a filling link cuts through all of the topology of the 3-manifold. These links were first studied by Freedman-Krushkal in the concrete case of the 3-torus M = T^3, but they leave open the
question of whether a filling link actually exists in T^3. We answer this question affirmatively by proving in fact that every closed, orientable 3-manifold M with fundamental group of rank 3
contains a filling link.
Leonardo S. Digiosia
Title: Cylindrical contact homology of links of simple singularities
Thesis Advisor: Joanna Nelson
In this talk we consider the links of simple singularities, which are contactomorphic to S^3/G for finite subgroups G of SU(2,C). We explain how to compute the cylindrical contact homology of S^3/G
by means of perturbing the canonical contact form by a Morse function that is invariant under the corresponding rotation subgroup. We prove that the ranks are given in terms of the number of
conjugacy classes of G, demonstrating a form of the McKay correspondence. We also explain how our computation realizes the Seifert fiber structure of these links.
Shawn Williams
Title: Extensions of the Fox-Milnor Condition
Thesis Advisor: Shelly Harvey
The search for slice knots is an important task in low dimensional topology. In the 1960s, Fox and Milnor proved a theorem stating that the Alexander polynomial of a slice knot satisfies a special
factorization. A decade later, Kawauchi extended this theorem for the multivariable Alexander polynomial of slice links. This factorization, known as the Fox-Milnor condition, has been used and
generalized many times as an obstruction to a link being slice. In this defense, we will see two more extensions of this condition, first to the multivariable Alexander polynomial of 1-solvable
links, and then for the first order Alexander polynomial of ribbon knots. | {"url":"https://math.rice.edu/research/graduate-research/2022theses","timestamp":"2024-11-04T12:13:11Z","content_type":"text/html","content_length":"92676","record_id":"<urn:uuid:fb6a23fa-cc6a-4d12-87b8-2312d3d57c88>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00088.warc.gz"} |
Thomas Pender
Research Interests:
My research is focused in combinatorial design theory. In particular, I am interested in explicit constructions of combinatorial matrices such as weighing and incidence matrices. I also pursue
constructions of sets of sequences with good correlation properties, and seek to understand their relationships to objects like various difference configurations in finite groups. Computational
techniques are often employed in order to facilitate generating stores of examples as well as gaining structural understanding.
Brief Bio:
I received my Bachelor's degree at the University of Lethbridge in 2020. There I studied under Dr. Hadi Kharaghani as a research assistant. Continuing under Dr. Kharaghani's tutelage, I
subsequently completed the Master of Science (Mathematics) program at the University of Lethbridge in 2022. I am now a doctoral student at Simon Fraser University where I study under Dr.
Jonathan Jedwab.
Current Program: Graduate student (doctoral, mathematics), Simon Fraser University, Present. Supervisor: Dr. Jonathan Jedwab.
Master of Science in Mathematics, University of Lethbridge, 2022. Supervisor: Dr. Hadi Kharaghani.
"Weighing Matrices: generalizations and related configurations" (U of L Library).
Bachelor of Science, University of Lethbridge, 2020.
"Balanced Group Matrices: theory and applications" (U of L Library).
Submitted Papers
Jedwab, J. and T. Pender. "Two Constructions of Quaternary Legendre Pairs of Even Length." (submitted, 2024).
Abstract: We give the first general constructions of even length Legendre pairs: there is a quaternary Legendre pair of length (q-1)/2 for every prime power q congruent to 1 modulo 4, and there
is a quaternary Legendre pair of length 2p for every odd prime p for which 2p-1 is a prime power.
Refereed Publications
For every article listed below, there is a doi link pointing to the published material. A link to the publicly available arXiv versions is also provided whenever possible. NB: The arXiv versions
often differ markedly from the versions accepted for publication.
Kharaghani, H., T. Pender and V. Tonchev. "Optimal Constant Weight Codes Derived from Balanced Generalized Weighing Matrices." Des. Codes Cryptog. 92, no. 10 (2024): 2791-2799.
Abstract: Balanced generalized weighing matrices are used to construct optimal constant weight codes that are monomially inequivalent to codes derived from the classical simplex codes. What's
more, these codes can be assumed to be generated entirely by omega-shifts of a single codeword where omega is a primitive element of a Galois field. Additional constant weight codes are derived
by projecting onto subgroups of the alphabet sets. These too are shown to be optimal.
Pender, T.. "On Extremal and Near-Extremal Self-Dual Ternay Codes." Discrete Math. 347, no. 6 (2024): 113968.
Abstract: A computational approach to using plug-in arrays, circulant matrices, and negacirculant matrices in the construction and enumeration of extremal and near-extremal self-dual ternary
codes. Isomorphism classes of such codes obtainable from orthogonal designs of dimensions 2, 4, and 8 are completely enumerated for several lengths. Additionally, partial searches are conducted
for larger lengths, and weight enumerators are derived for near-extremal codes.
Kharaghani, H., T. Pender, C. Van't Land and V. Zaisev. "Bush-Type Butson Hadamard Matrices." Glas. Mat. 58, no. 2 (2023): 247-257.
Abstract: Bush-type Butson Hadamard matrices are introduced. It is shown that a nonextendable set of mutually unbiased Butson Hadamard matrices is obtained by adding a specific Butson Hadamard
matrix to a set of mutually unbiased Bush-type Butson Hadamard matrices. A class of symmetric Bush-type Butson Hadamard matrices over the group G of n-th roots of unity is introduced that is also
valid over any subgroup of G. The case of Bush-type Butson Hadamard matrices of even order will be discussed.
Kharaghani, H., T. Pender and Sho Suda. "Quasi-Balanced Weighing Matrices, Signed Strongly Regular Graphs, and Association Schemes." Finite Fields Appl. 83, no. 25 (2022): 102065.
Abstract: A weighing matrix W is quasi-balanced if |W| |W|^T = |W|^T |W| has at most two off-diagonal entries, where |W|_{i,j} = |W|_{j,i}. A quasi-balanced weighing matrix W signs a strongly
regular graph if |W| coincides with its adjacency matrix. Among other things, signed strongly regular graphs and their association schemes are presented.
Kharaghani, H., T. Pender and Sho Suda. "Balanced Weighing Matrices." J. Combin. Theory Ser. A 186, no. 18 (2022): 105552.
Abstract: A unified approach to the construction of weighing matrices and certain symmetric designs is presented. Assuming the weight p in a weighing matrix W(n, p) is a prime power, it is shown
that there is a balanced weighing matrix with Ionin-type parameters. Equivalence with certain classes of association schemes is discussed in detail.
Kharaghani, H., T. Pender and Sho Suda. "A Family of Balanced Weighing Matrices." Combinatorica 42, no. 6 (2022): 881-894.
Abstract: Balanced weighing matrices with parameters [1+18(9^{m+1}-1)/8, 9^{m+1}, 4·9^m] for each nonzero integer m are constructed. This is the first infinite class not belonging to those with
classical parameters. It is shown that any balanced weighing matrix is equivalent to a five-class association scheme.
Kharaghani, H., T. Pender and Sho Suda. "Balancedly Splittable Orthogonal Designs and Equiangular Tight Frames." Des. Codes Cryptog. 89, no. 9, (2021): 2033-2050.
Abstract: The concept of balancedly splittable orthogonal designs is introduced along with a recursive construction. As an application, equiangular tight frames over the real, complex, and
quaternions meeting the Delsarte-Goethals-Seidel upper bound are obtained.
Source Code
Search for even length quaternary Legendre pairs:
[ tarball ] [ repository ]
Search for (near-)extremal self-dual ternary codes:
[ tarball ] [ repository ]
Auxiliary scripts for generating various combinatorial matrices:
[ tarball ] [ repository ]
Search for mutually orthogoval affine translation planes:
[ tarball ] [ repository ]
Command line indexing utility:
[ tarball ] [ repository ]
Pender T.. "Balanced Weighing Matrices." Presented at the Coast Combinatorics Conference at The University of Victoria, Victoria, BC, November 2023:
[ beamer ] [ repository ] [ website ]
Pender T.. "Balancedly Splittable Orthogonal Designs." Presented at the Alberta-Montana Combinatorics and Algorithm Days at The Banff International Research Station, Banff, AB, June 2022:
[ beamer ] [ repository ] [ website ]
Pender T.. "Balanced Generalized Weighing Matrices and Optimal Codes." Presented at the Canadian Mathematical Society's Winter Meeting. December 2021:
[ beamer ] [ repository ] [ website ]
Department of Mathematics Email: tsp7@sfu.ca
Simon Fraser University
8888 University Drive
Burnaby BC V5A 1S6 | {"url":"https://www.sfu.ca/~tsp7/","timestamp":"2024-11-14T05:41:28Z","content_type":"text/html","content_length":"16117","record_id":"<urn:uuid:577da7cd-a9a3-4351-920d-d0c346856425>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00189.warc.gz"} |
Mathematical Center for Science and Technology
The Mathematical Center for Sciences and Technology of the Institute of Mathematics of the Polish Academy of Sciences was established on 28 November 2003. The Center’s objective is to consolidate the
actions of the mathematical community in favour of the applications of Mathematics. The Scientific Council of the Center for the Applications of Mathematics consists of 22 specialists in different
applied branches of Mathematics as well as related disciplines.
The President of the Mathematical Center for Science and Technology is Professor Ł. Stettner, and the Vice-Presidents
are Professor T. Cieślak and Professor R. Rudnicki.
The Center for Applications includes scientific groups working in the following disciplines:
– Financial Mathematics,
– Biomathematics,
– Mathematical Physics,
– Cryptology,
– Numerical analysis,
– Statistics.
The Center organizes regular seminars dedicated to different branches of the applications of Mathematics as well as seminars focusing on particular disciplines of applications. The Center is in
charge of organizing the National Conference on the Applications of Mathematics in Zakopane. One of the Center’s tasks is also to supervise the editorial process of the quarterly Applicationes
Mathematicae, which celebrated in 2003 its 50th anniversary (the magazine’s founder was Hugo Steinhaus). | {"url":"https://www.impan.pl/en/activities/zespoly-i-centra-naukowe/mathematical-center-for-science-and-technology","timestamp":"2024-11-09T08:01:54Z","content_type":"text/html","content_length":"42015","record_id":"<urn:uuid:a179fa63-0703-4fbe-b94e-ac31b8a4323c>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00147.warc.gz"} |
How to Calculate the AB in Baseball - The Baseball Lifestyle
How to Calculate the AB in Baseball
Baseball is a game that is often described as a “numbers game” and one of the most important metrics used to measure a player’s performance is the AB or “at-bat”. The AB is an important statistic
that is used to measure a player’s success rate in the game and it is also used to determine how much a player is being paid. In this article, we will discuss what AB is and how to calculate it.
What is AB?
AB stands for at-bats and it is a statistic used in baseball to measure the number of times a player has stepped up to the plate to take a swing. In other words, it is the number of times a player
has attempted to hit the ball. The AB is an important statistic because it indicates how often a player is able to put the ball in play and it is also used to measure the success rate of a player.
How to Calculate the AB
Calculating the AB is relatively simple and can be done in a few easy steps. The first step is to record the number of times a player has stepped up to the plate. This can be done by counting the
number of plate appearances in a game or in a season. A plate appearance is defined as any time a player steps up to the plate and attempts to hit the ball.
The second step is to subtract every plate appearance that does not count as an official at-bat: walks, hit by pitch, sacrifice bunts, sacrifice flies, and times the batter is awarded first base on
catcher's interference or obstruction. Note that strikeouts and balls put in play are not subtracted, because they count as official at-bats.
Once all of these steps have been completed, the final number is the player’s AB. This number is then used to calculate the player’s batting average and other statistics.
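As a rough illustration, here is a minimal sketch in Python of the calculation described above. The stat names and the sample numbers are placeholders invented for this example, not an official data feed.

    def at_bats(plate_appearances, walks, hit_by_pitch, sac_bunts, sac_flies, catcher_interference=0):
        # Official at-bats: plate appearances minus the appearances that do not count as at-bats.
        return plate_appearances - (walks + hit_by_pitch + sac_bunts + sac_flies + catcher_interference)

    def batting_average(hits, ab):
        # Batting average is hits divided by at-bats.
        return hits / ab if ab > 0 else 0.0

    # Example season line: 650 PA, 70 walks, 5 HBP, 2 sacrifice bunts, 6 sacrifice flies, 160 hits.
    ab = at_bats(650, 70, 5, 2, 6)      # 567 at-bats
    avg = batting_average(160, ab)      # roughly .282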
Calculating the AB is a relatively simple process and it is an important statistic used to measure a player’s success rate in baseball. It is used to determine how often a player is able to put the
ball in play and it is also used to measure the success rate of a player. By following the steps outlined above, it is easy to calculate the AB for a player. | {"url":"https://thebaseballlifestyle.com/how-to-calculate-the-ab-in-baseball/","timestamp":"2024-11-07T09:49:29Z","content_type":"text/html","content_length":"142702","record_id":"<urn:uuid:1febca6d-d9c4-45e7-b831-29938abfb0e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00731.warc.gz"} |
Calculus: The Logical Extension of Arithmetic
Markov Chain Process: Theory and Cases is designed for students of natural and formal sciences. It explains the fundamentals related to a stochastic process that satisfies the Markov property. It
presents 10 structured chapters that provide a comprehensive insight into the complexity of this subject by presenting many examples and case studies that will help readers to deepen their acquired
knowledge and relate learned theory to practice. This book is divided into four parts. The first part thoroughly examines the definitions of probability, independent events, mutually (and not
mutually) exclusive events, conditional probability, and Bayes' theorem, which are essential elements in Markov's theory. The second part examines the elements of probability vectors, stochastic
matrices, regular stochastic matrices, and fixed points. The third part presents multiple cases in various disciplines: Predictive computational science, Urban complex systems, Computational finance,
Computational biology, Complex systems theory, and Computational Science in Engineering. The last part introduces learners to Fortran 90 programs and Linux scripts. To make the comprehension of
Markov Chain concepts easier, all the examples, exercises, and case studies presented in this book are completely solved and given in a separate section. This book serves as a textbook (either
primary or auxiliary) for students required to understand Markov Chains in their courses, and as a reference book for researchers who want to learn about methods that involve Markov Processes. | {"url":"https://www.benthamscience.com/ebook_volume/2186/related-ebooks?page=2","timestamp":"2024-11-03T10:34:30Z","content_type":"text/html","content_length":"100505","record_id":"<urn:uuid:152ce1bf-bc9e-4472-adc5-1119aec7023f>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00586.warc.gz"} |
Ginzburg-Landau minimizers with prescribed degrees. Capacity of the domain and emergence of vortices
Let Ω ⊂ R^2 be a simply connected domain, let ω be a simply connected subdomain of Ω, and set A = Ω ∖ ω. Suppose that J is the class of complex-valued maps on the annular domain A with
degree 1 both on ∂Ω and on ∂ω. We consider the variational problem for the Ginzburg-Landau energy E[λ] among all maps in J. Because only the degree of the map is prescribed on the boundary, the set J
is not necessarily closed under a weak H^1-convergence. We show that the attainability of the minimum of E[λ] over J is determined by the value of cap (A)-the H^1-capacity of the domain A. In
contrast, it is known, that the existence of minimizers of E[λ] among the maps with a prescribed Dirichlet boundary data does not depend on this geometric characteristic. When cap (A) ≥ π (A is
either subcritical or critical), we show that the global minimizers of E[λ] exist for each λ > 0 and they are vortexless when λ is large. Assuming that λ → ∞, we demonstrate that the minimizers of E
[λ] converge in H^1 (A) to an S^1-valued harmonic map which we explicitly identify. When cap (A) < π (A is supercritical), we prove that either (i) there is a critical value λ[0] such that the global
minimizers exist when λ < λ[0] and they do not exist when λ > λ[0], or (ii) the global minimizers exist for each λ > 0. We conjecture that the second case never occurs. Further, for large λ, we
establish that the minimizing sequences/minimizers in supercritical domains develop exactly two vortices-a vortex of degree 1 near ∂Ω and a vortex of degree -1 near ∂ω. | {"url":"https://pure.psu.edu/en/publications/ginzburg-landau-minimizers-with-prescribed-degrees-capacity-of-th","timestamp":"2024-11-09T03:59:34Z","content_type":"text/html","content_length":"54003","record_id":"<urn:uuid:3e94c68a-bd48-4612-9e49-9310dfaa7c07>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00182.warc.gz"}
Stolen base attempts: an algorithm for allocating run value
It is customary to credit a runner if he runs on the pitch and reaches the next base safely without the benefit of any major misplays by the defense. We call this a stolen base.
But how much credit does the runner really deserve? After all, there are always other players involved in a stolen base attempt, and frequently these other players are more responsible than the
runner for the advancement, or the out that results. The pitcher may ignore the runner and allow a long lead, or a walking lead, or he may execute a very slow delivery to home plate. The catcher may
bobble the pitch, or execute a slow exchange and release; his throw may be off target, or weak. The fielder at the play base may drop or miss the throw, or may fail to apply the tag.
This article presents an algorithm for logically dividing credit on stolen base attempts among the participating players, sharing the run value of the play result based on the quality of their
To keep things simpler, this algorithm will cover only situations with a sole runner on first base who attempts to steal second base, where the play does not result in a passed ball or wild pitch,
and where there is no defensive error or any additional advancement by the runner beyond the play base. Other plays involving these situations have their own algorithms, which will be discussed elsewhere.
The algorithm will also consider contributions from only the runner, pitcher and catcher, leaving consideration of the fielder’s contribution for future discussion.
Play value
How much is a successful steal of second base worth to the offense? How much does a failed attempt at second base cost the offense? These questions we will answer using run expectancy values. Run
expectancy refers to the expected number of runs scored from each of the 24 run-out states. Here are the RE numbers for 2012:
The run value of any play can be determined by calculating the change in run expectancy from the initial to final state. So, with a runner on first, a successful steal of second base with none out
changes the RE from 0.858 to 1.073 (the run value for a man on second, none out), a change of +0.215 runs. If the attempt is unsuccessful and the runner is thrown out, the change in RE is from the
initial 0.858 to a final value of 0.263 (the run value for bases empty, one out), a net change of -0.595 runs. With one out, the play values are +0.144 runs for success, and -0.411 runs for failure.
With two outs, the values are +0.097 and -0.221 runs.
A brief aside here: it is important to keep in mind that the RE values listed above are an aggregation of all major league data, so the precise run expectancies for a situation may (will) be
different, depending on the players involved. Figuring out how the odds shift in particular situations is one of the things a good manager does. Figuring out the “centerline” odds and making them
available to the manager is one of the things a good analyst does.
So, how does one go about deciding if the potential reward of a stolen base is worth the risk? Let’s do the math. The “break-even point” (BEP) is the success rate for attempts for which run value
gained on successes and run value lost on failures balance each other. It is given by the equation:
BEP = CS Value / (CS Value – SB Value)
For zero outs: BEP = (-0.595)/(-0.595 – 0.215) = 0.735 = 73.5%
So, if one can exceed a 73.5 percent success rate, attempting to steal second with none out will be beneficial in the long term. If not, one would be advised to not try for the steal, although making
an attempt from time to time when the odds say not to will help to keep opposing teams from becoming too accurate in anticipating one’s tactical moves.
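For quick reference, here is a minimal Python sketch of the break-even calculation, using the zero-, one- and two-out run values quoted above; any small differences from the percentages in the text are rounding.

    def break_even(sb_value, cs_value):
        # Success rate at which run value gained on steals offsets run value lost on caught stealing.
        return cs_value / (cs_value - sb_value)

    run_values = {0: (0.215, -0.595), 1: (0.144, -0.411), 2: (0.097, -0.221)}
    for outs, (sb, cs) in run_values.items():
        print(outs, round(break_even(sb, cs), 3))
    # Prints about 0.735, 0.741 and 0.695 for zero, one and two outs.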
With the groundwork now laid, we can move on to discussing the attribution of credit/blame for the outcome of stolen base attempts. When trying to allocate performance value on a play, we first must
identify the participant players. For a stolen base attempt at second base, there are four participants: the runner, pitcher, catcher and a fielder (we will treat any advances or putouts that take
place after the main play as separate plays). Let’s consider each of the participants for a moment.
The participants
The runner is the most important player in any stolen base attempt, in the sense that he (among the involved players) decides when there will be an attempt, and of course he can unilaterally decide
not to attempt a steal as well. Another important aspect of the runner’s involvement is that the runner’s performance forms one complete side of the confrontation: the runner’s reaction to the
pitcher’s first move begins the play, and his initial touch of second base ends the play (if a tag hasn’t ended it sooner). The elapsed time between the pitcher’s first move and the runner’s touching
second base is the key metric for the runner.
The pitcher’s delivery time to home plate governs the first portion of the defensive side of the stolen base attempt. The pitcher’s performance is relatively independent, in that the pitcher
generally cannot alter his delivery or pitch selection based on the actions of the runner. The pitcher’s impact on the runner’s lead and/or jump, including the influence of the handedness of the
pitcher, is significant, but will be discussed elsewhere.
The catcher exerts a huge influence on stolen base attempts, naturally. Unlike the runner and pitcher, the catcher does not begin his performance from a “clean slate”; he inherits a different
situation on every stolen base attempt, based on the pitcher’s delivery time and the first portion of the runner’s sprint to second base. The catcher’s performance is encapsulated in the amount of
time between his first touching the pitch and the arrival of his throw in the glove of the fielder covering second base. The accuracy of his throw is, of course, important, but will be discussed elsewhere.
The fielder’s task is simple, if not always easy: He catches the throw and applies the tag. In this discussion, we will assume the fielder catches the throw and applies the tag, and we will not
consider the value he provides by doing so; analysis of the fielder’s contribution will be discussed elsewhere.
Calculating individual player values
A sampling of stolen base attempts from the 2011-13 seasons was analyzed, with times measured for segments of the play corresponding to the performances of the runner and pitcher. The data for
successful and unsuccessful attempts were separated, and probability density functions (PDFs) were fit for each category. The PDFs were then weighted and combined to yield plots which show the
likelihood of success vs. the runner’s and pitcher’s times.
Runner time chart:
Pitcher time chart:
Note: due to the limited size of the sample, these plots should be considered approximate, and those who wish to make use of this algorithm should avail themselves of a larger sample of data, ideally
a full season or more. However, the effectiveness of the algorithm is not dependent on the precision of the charts, and the focus of this discussion will remain on the algorithm.
Upon measuring the runner or pitcher’s time, and using the appropriate chart to convert the time to a “Safe %,” the weighted value of the performance is calculated by multiplying the Safe% by the SB
Value, multiplying (1-Safe%) by the CS Value, and adding the two numbers.
Runner’s Value: The first value contribution to be calculated is that of the runner. It is determined as follows:
• Measure the runner’s time, which is the time elapsed between the pitcher’s first move and the runner touching second base. Even if the runner is tagged out, the runner’s time is counted to the
instant he touches second base.
• Consult the runner’s time chart and find the corresponding Safe% for the runner’s time.
• Multiply the Safe% by the SB Value, multiply (1-Safe%) by the CS Value, and add the two numbers. This is the Runner’s Value.
• Example (using values for zero outs): for runner’s time = 3.26 seconds, the corresponding Safe % is 90.0%. Multiply 90.0% by +0.215, and add (1-90.0%) times -0.595, which equals +0.134 runs. This
is the Runner’s Value. A positive number indicates a favorable contribution for the runner (adding runs), while a negative number indicates an unfavorable contribution (reducing runs) .
Pitcher’s Value: Next, the pitcher’s value is determined, as follows:
• Measure the pitcher’s time, which is the time elapsed between the pitcher’s first move and the pitch touching the catcher’s glove.
• Consult the pitcher’s time chart and find the corresponding Safe % for the Pitcher’s Time.
• Multiply the Safe% by the SB Value, multiply (1-Safe%) by the CS Value, and add the two numbers. This is the Pitcher’s Value.
• Example: for pitcher’s time = 1.33 seconds, the corresponding Safe% is 68.5 percent. Multiply 68.5% by +0.215, and add (1-68.5%) * -0.595, which equals -0.040 runs. This is the Pitcher’s Value.
The negative number here indicates a favorable result for the pitcher (reducing runs).
Catcher’s Value: Finally, the catcher’s value is determined, as follows:
• The Catcher’s Value is calculated as the overall run value of the play result (i.e. SB Value or CS Value) minus the sum of the Runner’s Value and Pitcher’s Value.
• Example: given the inputs above (Runner’s Value = +0.134 runs, Pitcher’s Value = -0.040 runs), the Catcher’s Value will depend on whether the runner is safe or out at second base. If the runner
is safe, the Catcher’s Value = +0.215 runs – (+0.134 runs) – (-0.040 runs) = +0.121 runs. If the runner is out at second, the Catcher’s Value = -0.595 runs – (+0.134 runs) – (-0.040 runs) =
-0.689 runs.
• If the runner successfully steals second base with a very fast time, and the pitcher’s delivery time to home is extremely slow, the sum of the Runner’s Value and Pitcher’s Value could in an
extremely rare instance exceed the SB Value. In this case, the Catcher’s Value would be negative (i.e. reducing runs, i.e. a good defensive contribution), which would not make sense on a play
where the catcher had essentially no impact on the play and the runner was safe. In this case, the Catcher’s Value is set equal to zero, and the Pitcher’s Value is adjusted so that the total play
value equals the SB Value.
If the runner is safe in our example, the credit/blame is allotted as follows:
• Runner’s Value: +0.134 runs
• Pitcher’s Value: -0.040 runs
• Catcher’s Value: +0.121 runs
• Total Run Value: +0.215 runs
If the runner is out in our example, the credit/blame is allotted as follows:
• Runner’s Value: +0.134 runs
• Pitcher’s Value: -0.040 runs
• Catcher’s Value: -0.689 runs
• Total Run Value: -0.595 runs
Note that the runner and pitcher get the same credit in both instances, because they delivered the same performances. The catcher’s credit depends on whether he was able to receive a pitch at time =
+1.33 seconds, and get it to second base in time for the tag to be applied before time = +3.26 seconds. This is a tough play for a catcher to make, and if we do the math, we find that the catcher’s
break-even point on this play is 15 percent—if he can throw out runners on a play like this more than 15% of the time, his performance is adding value to his team.
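Pulling the pieces together, here is a minimal Python sketch of the credit-splitting computation. The Safe% arguments stand in for lookups from the runner and pitcher time charts, which are not reproduced here, and the run values are the zero-out numbers used in the example.

    SB_VALUE, CS_VALUE = 0.215, -0.595   # zero-out run values used in the example

    def weighted_value(safe_pct):
        # Expected run value of a performance with the given probability of a safe outcome.
        return safe_pct * SB_VALUE + (1 - safe_pct) * CS_VALUE

    def split_credit(runner_safe_pct, pitcher_safe_pct, runner_was_safe):
        runner_val = weighted_value(runner_safe_pct)
        pitcher_val = weighted_value(pitcher_safe_pct)
        play_val = SB_VALUE if runner_was_safe else CS_VALUE
        catcher_val = play_val - runner_val - pitcher_val
        if runner_was_safe and catcher_val < 0:
            # Rare cap described above: catcher is held at zero and the pitcher absorbs the difference.
            pitcher_val += catcher_val
            catcher_val = 0.0
        return runner_val, pitcher_val, catcher_val

    # The worked example: runner 90% safe, pitcher 68.5% safe, runner is safe at second.
    print(split_credit(0.90, 0.685, True))   # roughly (0.134, -0.040, 0.121)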
Boundary cases:
To satisfy ourselves that this algorithm delivers sensible values, let’s consider some boundary plays (using values for zero outs).
Fast runner, slow pitcher: Runner’s time = 3.30 seconds -> 87 percent safe -> +0.110 runs. Pitcher’s time = 1.65 seconds -> 82 percent safe -> +0.067 runs. Catcher’s Value = +0.047 runs if SAFE,
-0.763 runs if OUT. This fits: With a fast runner and slow pitcher delivery, the catcher gets a huge amount of credit if he throws the runner out, but only a small penalty for failing to do so.
Fast runner, fast pitcher: Runner’s time = 3.30 seconds -> 87 percent safe -> +0.110 runs. Pitcher’s time = 1.26 seconds -> 60 percent safe -> -0.109 runs. Catcher’s Value = +0.223 runs if SAFE,
-0.588 runs if OUT. The runner’s excellent performance and the pitcher’s excellent performance cancel each other out, leaving the outcome of the play in the hands of the catcher.
Slow runner, slow pitcher: Runner’s time = 3.88 seconds -> 63 percent safe -> -0.082 runs. Pitcher’s time = 1.76 seconds -> 84 percent safe -> +0.082 runs. Catcher’s Value = +0.224 runs if SAFE,
-0.587 runs if OUT. Again, the runner’s performance and the pitcher’s performance balance each other, rendering the catcher’s performance decisive.
Slow runner, fast pitcher: Runner’s time = 3.75 seconds -> 67 percent safe -> -0.052 runs. Pitcher’s time = 1.22 seconds -> 51 per cent safe -> -0.185 runs. Catcher’s Value = +0.461 runs if SAFE,
-0.349 runs if OUT. With a slow runner and fast pitcher delivery, the catcher has an easier-than-usual task, and thus merits a big penalty if he allows the stolen base; if he guns the runner down, he
gets less credit than in most situations, since the runner and pitcher have essentially done some of his work for him.
Average runner, average pitcher: Runner’s time = 3.56 seconds -> 74 percent safe -> +0.001 runs. Pitcher’s time = 1.40 seconds -> 73 percent safe -> -0.001 runs. Catcher’s Value = +0.223 runs if
SAFE, -0.587 runs if OUT. Both the runner and the pitcher have delivered performances that are essentially at the break-even point, which of course means that the catcher’s performance will decide
the outcome.
What about “deterrence”?
Some pitchers (typically left-handed ones) are known for their deceptive delivery, which makes it difficult for a runner to detect whether the pitcher is going home or coming over to first; this, of
course, makes runners less willing to attempt a stolen base, since they don’t want to be picked off if they read the pitcher’s motion incorrectly. This apparent ability to deter stolen base attempts
is usually regarded as a positive feature for a pitcher.
However, it is important to keep in mind that pitchers like this do not deter stolen bases; they deter stolen base attempts, and stolen base attempts end in both positive and negative results for
both sides. In 2012, there were 3,229 successful stolen bases, and 1,136 caught stealing, for a success rate of 74.0 percent. The break-even success rate in 2012, based on the frequency of RE24
states during stolen base attempts, and the value of stolen bases and caught-stealings, was about 74.7 percent. In 2012, major league teams in aggregate stole bases at a success rate equal to
break-even, meaning the overall run value from stolen base attempts is near zero.
If the average run value of a stolen base attempt is zero, then there is no value, positive or negative, in deterring attempts, on average. A pitcher who generally discourages attempts will allow
fewer stolen bases, but he will also benefit from fewer caught stealings, and the net value will be essentially zero. Therefore, no value is attributed to a pitcher for stolen base attempts that do
not occur.
Future considerations
There are lots of areas where this stolen base attempt algorithm can be expanded. First of all, the performance values of the participating players can be subdivided, to provide additional insights
on specific aspects of their play.
• The runner’s performance value can be divided into lead, jump, run, and slide.
• The pitcher’s performance value can be divided into release time, pitch time/speed and handedness (as it pertains to delaying the runner’s jump)
• The catcher’s performance value can be divided into exchange/release time, throw accuracy and throw power
We discussed earlier that deterrence of steal attempts, such as might come from a pitcher having a very deceptive pitching motion, would not be assigned value, based on the similarity of the
break-even rate and the actual success rate. However, a deceptive motion may not always completely deter attempts; it may instead hamper them, as measured by a shorter lead allowed, and/or a slower
jump allowed. Future elaboration of the stolen base algorithm may include allotting a portion of the responsibility (run credit) for the runner’s lead and jump to the pitcher, which should allow
better modeling of pitchers with deceptive deliveries.
Some other situations that were excluded from this discussion of the basic algorithm can be covered in the future. For example, stolen base attempts at third base, double steals and steals of second
with a runner on third who stays put each have their own algorithms. Stolen base attempts where the pitch is off-target and not caught cleanly by the catcher can be considered. Wild throws, and the
value added (or lost) by the fielder at the play base can be considered.
There is a lot to consider when diving deep on valuation of player performances; we are only at the very beginning.
I’m a stat freak, but I admit you have overwhelmed me with info!
Bill James purported that in order for base stealing to be beneficial, the baserunner has to succeed 2 out of 3 times.
Obviously the timing and circumstances of these attempted thefts are important.
Not to sidetrack the conversation, I would love to see a study of baserunning overall. Many speedsters are terrible baserunners!
Being a Sawx fan, I remember Tony Armas, not a whirlwind by any stretch, was one of the smartest baserunners you would ever find. He seemed to know when to go from first to third, and when to tag and
go after a fly ball, etc. I suppose it’s instinct, and teaching can only go so far.
Greg – I think this is a good first attempt to divide responsibility for stolen bases. I am concerned about your arguments about deterrence however. There is a huge amount of deterrence in just being
a left handed pitcher. Using 2009 to 2011 Retrosheet data the rate of stolen base attempts including pickoffs with first base only occupied is 35% greater for right handed pitchers than for left
handers. And the SB success rate (again including pickoffs) is 67.6% against right handers and 59.4% against lefties.
I also think you have to consider the importance of pitchouts in the caught stealing rate. Also, break even points change dramatically by ball/strike count and batting order position.
Finally, I think you need to construct a PDF table for catcher times as well. You mention the rare case where the runner's value and the pitcher's value may exceed the SB value, resulting in a negative value
for the catcher. However the problem is greater than that. Even if the pitcher's time to home plate does not exceed the runner's time from first to second, the difference between the two might be
small enough that a catcher might not be able to physically make the throw to second no matter how good he is. The catcher should not be considered to have any responsibility in these cases.
Good comments, Peter, thanks.
You will notice that I talked about working more on deterrence in the “Future Considerations” section. I’m not satisfied with entirely discounting it, but for this basic algorithm, I think it’s
reasonable to do so.
You no doubt recognize that deterrence is not always a good thing for the pitcher. If a pitcher is quick to home, or deceptive enough to induce bad jumps, such that SB attempts against him succeed at
less than the break-even point, it would be a bad thing to deter all attempts, since such attempts would favor the pitcher in terms of average run value. Deterring excellent base stealers, however,
is a good thing. So, a tricky pitcher deters mediocre base stealers (bad for the pitcher), but also deters excellent base stealers (good for the pitcher). Lots more work to do here to capture all of
this, which I hope to share more of soon.
I don’t favor lumping pickoffs and stolen base attempts together. Obviously they’re related, but they can and should be handled separately. The practice of labeling some pickoffs as caught stealing
muddies the water, and is something I hope to be able to illuminate in a better way.
Pitchouts? They are already picked up by this algorithm in the (presumably) quicker delivery time to home plate.
BEP by count and/or batting order? I could see adding that in the next level version of this…
Re the catcher time, I think if you run the numbers (I did), you will see that the exception I described (capping pitcher +run value to keep the catcher at zero) covers this. Your point is valid, but
it’s covered.
Thanks for taking the time to provide your thoughts, they are much appreciated…
Also cross-posted at Tango’s blog:
Greg, I really like the framework, though my personal experience recording times makes me a bit skeptical of these particular success/fail curves. Four seconds for a runner is a lot of time to still
have a greater than 50% chance of stealing the base. Many of the times I personally record are in the 3.4-3.5 range, with the game’s truly fast guys in the 3.3-3.35 range.
Let’s even consider this a different way: If a pitcher from the stretch is somewhere in the 1.3 (slide step) to 1.5 range, and a big league catcher should have a pop time around 1.8-2.0+, our success
/fail inflection point should be somewhere closer to 3.5, give or take a tenth of a second or two.
My guess is that the difference is partially accounted for by errant throws, which do not get errors unless the runner takes an extra base, or pitches that hit the dirt. The latter was somewhat
controlled for by Greg by removing PB/WP, but again, that typically doesn’t get scored if the runner was going on the pitch and only stole the one base. Also, as Tango mentions, the issue about pitch
types and location.
So overall, the success/fail curves may be skewed due to including errant throws and pitches in the dirt, giving (slow) runners too much credit because of the lack of granularity. Or maybe it’s just
differences in stopwatch accuracy, though this is a pretty big difference. Greg, maybe you can clarify how these times were recorded?
Greg – The value of pitchouts will not be in the quicker time to home plate, presumably a pitcher cannot throw a pitchout faster than his normal fastball, but in the catcher’s faster catch and
release time to second.
I think you will find that the numbers for pitchers with good moves to first are going to parallel the numbers that I quoted for left handers versus right handers. That is that deterrence (fewer
stolen base attempts) and a lower success rate on attempts will go hand in hand.
You are already lumping CS and PO together because as you note above some of what is labeled CS are POs where the runner tried to reach 2nd instead of going back to first. What you don’t mention is
“some” is over 50% of the CS of second base for left handed pitchers.
Greg – I also think that you may have some selection bias problems. If a pitcher is able to get the ball to the catcher in 1.20 seconds and still have 40% of the attempted steals successful then he
either has a really bad catcher or only very fast base runners are attempting to steal on him.
I went in with some preconceived notions about the shape of those curves as well, and what I found wasn’t what I expected.
I’d definitely encourage you to pull up MLB’s video archive, search on key word stolen base, and time a bunch of attempts that fit my criteria (i.e. 2nd base, no other runners, no wild pitch, etc.)
You’ll understand how fast runners frequently get nailed and slow runners frequently reach safely a lot better if you do this yourself…
Will do, Greg. Thanks for the research!
Greg- I would have liked you to have shown the BEP for all of the potential starting states. If I did the math right, below is what results. The surprise to me was that the requisite success rate for
stealing home with two outs is less than 50%. On the other hand, perhaps the 2/3rds rule seem to be more like 3/4s.
Outs Bases Attempt BEP
0 0 NA
0 1 1-2 73.5%
0 2 2-3 77.5%
0 3 3-h 85.2%
0 12 2-3 79.8%
0 13 1-2 78.3%
0 13 3-h 86.6%
0 23 3-h 87.3%
0 123 3-h 88.3%
1 0 NA
1 1 1-2 74.2%
1 2 2-3 69.5%
1 3 3-h 68.6%
1 12 2-3 73.8%
1 13 1-2 84.5%
1 13 3-h 71.6%
1 23 3-h 72.7%
1 123 3-h 75.0%
2 0 NA
2 1 1-2 69.3%
2 2 2-3 87.9%
2 3 3-h 33.0%
2 12 2-3 90.7%
2 13 1-2 83.3%
2 13 3-h 39.6%
2 23 3-h 44.0%
2 123 3-h 48.8% | {"url":"https://tht.fangraphs.com/stolen-base-attempts-an-algorithm-for-allocating-run-value/","timestamp":"2024-11-09T16:48:32Z","content_type":"application/xhtml+xml","content_length":"177953","record_id":"<urn:uuid:20374872-79e7-45cb-bebb-e6a8648843cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00452.warc.gz"} |
What is Computer and Explain the Features of a Computer?
Question: What is a Computer? The word computer is derived from the Latin word ‘computare’, which means ‘to calculate’. So, according to this definition, a computer is a machine which can perform calculations at
very high speed.
A computer is an electronic device that works on commands and performs mathematical and logical operations in a short time and with high accuracy.
A computer is a programmable machine designed to sequentially and automatically carry out a sequence of arithmetic or logical operations. The particular sequence of operations can be changed readily,
allowing the computer to solve more than one kind of problem.
The following are the part of the computer.
CPU (Central Processing Unit) – The processing unit, which gives the desired output after processing the input.
VDU (Visual Display Unit) – The display unit, known as the monitor, is an output device. It gives a soft-copy output which we cannot touch.
MOUSE – An input device used to give input to the computer; it controls a pointer on the screen.
KEYBOARD – The keyboard is also an input device; we press its keys to give instructions to the computer.
PRINTER AND SCANNER – A printer is an output device which gives a hard-copy output that we can touch. A scanner is an input device through which we can give input to the computer.
Features / Characteristics of a Computer
A computer is a powerful machine that can perform a large number of tasks. Its main characteristics are word length, speed, accuracy, diligence, versatility, memory, storage, automation and
more.
Speed The time taken by a computer to perform a task is called its speed. As you know, a computer works very fast: it takes only a few seconds to do calculations that would take us hours
to complete. The speed of a computer is measured in MIPS (Million Instructions Per Second).
Accuracy A computer is an accurate machine. It can perform a large number of tasks without errors, but if we feed wrong data to the computer it returns the correspondingly wrong output. If
the computer hardware is working properly and the given input is correct, the computer can give a 100% accurate result. This principle, that wrong input produces wrong output, is called GIGO (Garbage
In, Garbage Out).
Diligence The capacity to perform repetitive tasks without getting tired is called the diligence of the computer. A computer is free from tiredness, lack of concentration, fatigue, etc. It
can work for hours without making any error.
Versatility The capacity to perform more than one type of task is called the versatility of a computer. It means the ability to carry out completely different types of work. You may use your computer to
prepare payroll slips, do office work, perform mathematical calculations, process text, etc.
Storage The computer has a mass storage section where we can store a large volume of data for future work. Such data are easily accessible when needed. Magnetic disks, magnetic tape and optical disks
are used as mass storage devices. Storage capacity is measured in terms of KB, MB, GB, TB, PB, EB, etc.
Automatic Once we give the appropriate instructions, a computer can perform the operations automatically. For example, a computer can do addition, subtraction, division and multiplication, and it can
compare values automatically. It can also copy a value from one memory location to another.
Power of Remembering A computer has the power of storing any amount of information or data. Any information can be stored and recalled for as long as you require it, for any number of years. It
depends entirely upon you how much data you want to store on a computer and when to erase or retrieve it.
Word Length A digital computer operates on binary digits, 0 and 1. It manipulates data only in terms of 0s and 1s. A binary digit is called a bit. The number of bits that a computer can process at a
time in parallel is called its word length. Word lengths vary among computers, such as 8, 16, 32 or 64 bits. Word length is a measure of the computing power of the computer.
Processing A computer can process a large volume of data at great speed. Different types of operations take place during processing, such as input/output operations, logic operations, comparison
operations and text manipulation operations. | {"url":"https://tutsmaster.org/what-is-computer-and-explain-the-features-of-a-computer/","timestamp":"2024-11-10T18:19:43Z","content_type":"text/html","content_length":"89235","record_id":"<urn:uuid:e70c1ea9-2743-4227-9a35-c280dfce3b84>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00505.warc.gz"}
Continuous Thickening Theory - 911Metallurgist
In a continuous thickener the downward solids velocity with respect to the thickener wall is the sum of the settling velocity of the solids by gravity alone, Vi, plus the downward velocity caused by
removal of the thickener underflow, U. The rate at which solids move downward by gravity alone is expressed as the product of the concentration of the solids in a layer, Ci, and the settling velocity
of the layer. The rate at which solids move downward due to withdrawal of the underflow is expressed as the product of the concentration of the layer and the downward velocity caused by removal of
the thickener underflow. Therefore, the solids flux in a continuous thickener, Gi, in any concentration layer can be expressed as:
Gi = CiVi + CiU……………………………………….(1)
If the relationship between settling velocity and concentration is known, the solids flux can be calculated as a function of concentration at various underflow withdrawal rates by using Equation.
Figure 2 shows a typical flux curve at one underflow pumping rate. The procedure used to obtain the relationship between settling velocity and concentration is discussed later. Unless the underflow
pumping rate is extremely high, a minimum exists in the flux curve, as shown, which limits the thickener loading rate at a given underflow concentration. If the settling velocity is a function only
of the local concentration, it is theoretically true that the thickener must be operated at the limiting solids flux, GL, for this particular underflow pumping rate. Otherwise, if the thickener is
loaded at a higher solids flux without a change of the underflow pumping rate, the thickener bed level will increase and eventually overflow the thickener.
In this case, the feed rate of solids exceeds the ability of the thickener to allow all of the solids to reach the bottom of the thickener. Conversely, if the thickener is loaded at a lower solids
rate, the thickener bed level will decrease until it disappears.
The limiting solids flux, GL, can be expressed in terms of the limiting concentration, CL, and the limiting solids velocity, VL, as:
GL = CLVL + CLU……………………………………(2)
The limiting solids flux is also the product of the underflow pumping rate and the underflow concentration, Cu, which is easily obtained from material balance around the thickener. Thus:
GL = CuU……………………………………(3)
The predicted underflow concentration can be given graphically by intercepting the tangent at CL with the underflow pumping rate line at Cu as shown in Figure 2.
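As a numerical illustration of Equations (1) through (3), the short Python sketch below builds a flux curve of the kind shown in Figure 2 and locates its minimum. The settling constants and the underflow rate are invented values in arbitrary consistent units, and the power-law settling velocity anticipates the form given later in Equation (6).

    import numpy as np

    a, b, U = 2.0, 2.5, 2.0e-5                  # assumed settling constants and underflow rate
    C = np.linspace(50.0, 600.0, 2000)          # concentration range, arbitrary consistent units
    G = a * C**(1 - b) + C * U                  # Equation (1) with Vi = a*Ci**(-b)

    i = int(np.argmin(G))                       # minimum of the flux curve (Figure 2)
    C_L, G_L = C[i], G[i]                       # limiting concentration and limiting flux
    Cu = G_L / U                                # Equation (3): predicted underflow concentration
    print(C_L, G_L, Cu)                         # roughly 118, 0.0039 and 196 for these inputs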
The underflow withdrawal rate can be expressed in terms of concentrations and settling velocities at the limiting conditions by taking the derivative of Equation (1) at the minimum point shown in
Figure 2, or:

dGi/dCi = d(CiVi)/dCi + U = 0 at Ci = CL……………………………………(4)

U = -d(CiVi)/dCi evaluated at Ci = CL……………………………………(5)
In order to differentiate the right-side term in Equation (5), the relationship between settling velocity and concentration must be known. A single generalized mathematical expression has not been
found to define the settling velocity as a function of concentration for the entire range of concentrations attainable. A typical curve is shown in Figure 3 where the settling velocity is plotted as
a function of the concentration on a log-log graph. The initial portion of this curve describes particulate settling phenomenon. In this region, the settling velocity is not a function of
concentration. Each particle acts independently as described by Stokes’s Law.
Zone 2 is known as hindered or zone settling. As the concentration increases in this region, the settling velocity decreases for two basic reasons. First, the particles displace liquid which must
flow upwards with respect to the particles. This relative motion between the liquid and particles resists the settling of solids. Secondly, as the concentration increases, the specific gravity of the
slurry increases, which increases the buoyancy of the slurry and reduces the driving force for settling of the solids.
In this second settling zone, the relationship between settling velocity and concentration can be approximated by one or more straight lines on this log-log plot. The equation(s) of this, straight
line(s) has the form:
Vi = aCi^(-b)……………………………………(6)
The constants “a” and “b” must be determined from batch settling data, as described later. Both are influenced by particle size and shape, liquid and solids specific gravities, liquid, viscosity,
attractive or repulsive forces between particles, and other factors that may influence settling phenomenon. The exponent “b” is calculated from the slope of the line(s) in Figure 3.
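For example, the constants of Equation (6) can be estimated by a straight-line fit in log-log coordinates. The concentration and settling-velocity pairs below are invented for illustration only.

    import numpy as np

    conc = np.array([60.0, 90.0, 140.0, 200.0])       # hypothetical layer concentrations
    vel = np.array([0.031, 0.017, 0.0088, 0.0051])    # hypothetical settling velocities

    slope, intercept = np.polyfit(np.log(conc), np.log(vel), 1)
    b = -slope               # Equation (6): log V = log a - b log C
    a = np.exp(intercept)
    print(a, b)              # roughly a = 14 and b = 1.5 for this made-up data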
The third settling zone is commonly known as the “compression zone,” and is characterized by particle to particle contacts that allow slurry layers to bear or to be compressed by the weight of the
solids above. A portion of the weight of the particles is transmitted to the solids of the lower layer, which results in packing of the solids. The remaining-fraction of the weight is transmitted to
the liquid of the lower layer to increase the liquid pressure, which results in liquid escaping through the solids layer, that is increasing solids settling rate. The settling velocity is influenced
by the compression force, caused by the weight of solids transmitted to both the solids and the liquid of the lower layer. Therefore, several lines can be obtained as shown in Figure 3, depending
upon the compression force, which is related to the slurry depth and concentration of solids in the continuous thickener (and to the initial concentration of the solids and the initial height of the
slurry in batch tests). In this compression zone settling velocity is a function of not only concentration, but also compression force.
The relationship between settling velocity and concentration in the compression zone can also be approximated by one or more equations of the form given in Equation (6), but it must be remembered
that the values of “a” and “b” will be influenced by compression force. A more detailed discussion of the affects of this compression force is given later on in this paper.
If Equation (6) is substituted into Equation (5), the result is:

U = a(b - 1)CL^(-b)……………………………………(7)

Substituting Equation (7) into Equation (2) gives the limiting solids flux,

GL = abCL^(1-b)……………………………………(8)

and combining Equations (7) and (8) with Equation (3) gives the underflow concentration,

Cu = GL/U = bCL/(b - 1)……………………………………(9)

which can be rearranged as

CL = (b - 1)Cu/b……………………………………(10)
Equation (10) relates the limiting concentration to the underflow concentration. Figure 4 shows a concentration profile at a certain operating condition in a continuous thickener. The uniform
concentration beginning at the interface in this graph is the limiting concentration. Between the interface and the outlet of the feed-well, there appears another layer of very low concentration,
which is shown in Figure 2 as the concentration at the intersection of the total flux line with the tangent through the minimum.
A drastic increase in concentration occurs at the interface. Toward the bottom of the thickener there is usually a fairly rapid increase in concentration to the underflow concentration. It is
supposed that this increase occurs because the concentration in this range can handle more solids flux than the limiting concentration.
When the solids inventory in the thickener increases due to the change of the feed solids rate, the increased solids build a layer on the previous interface without changing the limiting
concentration, if the change of the inventory results in negligible effect on the underflow concentration. Therefore, Equation (10) is useful in determining the limiting concentration and estimating
the change in the bed depth when the feed solids rate changes.
Substituting Equation (10) into Equation (8) gives:
This expression relates the thickener loading rate to the desired underflow concentration. Since several straight lines of the form found in Equation (6) are usually required to approximate the
complete relationship between settling velocity and concentration, several equations of the form of Equation (11) must be calculated for the full range of thickener loading rates.
Since most thickener manufacturers use unit area, U.A., instead of solids loading in their calculations, Equation (11) can be modified to give:
Equation (12) presents the relationship between desired underflow concentration and unit area needed for sizing thickeners, and indicates that a log-log plot of unit area versus underflow
concentration should give a straight line for each straight line segment approximated in Figure 3. These lines form the operating line for the thickener. A typical operating line is shown in Figure 5
where, four line segments were used to approximate the relationship between settling velocity and concentration. The number of line segments chosen depends upon the number required to describe the
relationship between concentration and settling velocity. These segments do not usually apply to the specific settling zones described earlier. They are merely a tool to quantitatively describe the
relationship between settling velocity and concentration.
One method for obtaining the relationship between settling velocity and concentration is to run a series of batch tests at different initial concentrations. Initial settling rates and concentrations
are recorded and plotted directly to give the desired relationship between settling velocity and concentration. This method is difficult if flocculation of the slurry is required. As the initial
concentration is increased, the flocculation generally becomes less efficient, so that measurements of the initial settling rates can only be made for a narrow range of dilute concentrations. This
type of testing also requires considerable time in order to obtain enough data points with the desired accuracy.
An alternative method for measuring the relationship between settling velocity and concentration is to use a single batch settling curve as developed by Kynch. He analyzed the batch settling
phenomenon under the assumption that settling velocity is a function of only the local concentration, and described mathematically the propagation of uniform concentration layers upward at a constant
rate from the bottom of the test cylinder until they reach the slurry-liquid interface.
If the interface height is plotted as a function of time, a curve such as that shown in Figure 6 is obtained. Kynch showed that tangents to the batch curve can be used to relate settling velocity and
concentration. The slope of the tangent expresses the settling velocity of the slurry layer existing just below the interface, whose concentration can be determined at the intersection of the tangent
with the vertical axis, as shown in Figure 6. The settling velocities are calculated from the slopes of the tangents and the concentrations are given by:
Ci = C0·H0 / Hi ……………………………………(13)
where C0 is the initial concentration of solids; Ci is the concentration just below the interface; Hi is the intercept of the tangent with the vertical (height) axis; and H0 is the initial height of the slurry.
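As a rough numerical sketch of the Kynch construction (the interface-height readings below are invented, and the tangents are approximated with finite differences rather than drawn graphically), the relation Ci = C0·H0/Hi can be applied point by point along the settling curve:

import numpy as np

# Hypothetical batch settling test (placeholder values):
t = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 45.0, 60.0])    # time, min
H = np.array([40.0, 35.5, 31.5, 25.0, 20.5, 16.0, 13.5])  # interface height, cm
C0 = 50.0   # initial solids concentration, g/L
H0 = H[0]   # initial slurry height, cm

# Slope of the settling curve dH/dt, estimated by finite differences.
dHdt = np.gradient(H, t)

for ti, Hi_pt, s in zip(t, H, dHdt):
    Vi = -s                  # settling velocity = negative slope of the tangent
    Hi = Hi_pt - s * ti      # intercept of the tangent with the height axis
    Ci = C0 * H0 / Hi        # concentration just below the interface, Eq. (13)
    print(f"t = {ti:5.1f} min   Vi = {Vi:5.3f} cm/min   Ci = {Ci:6.1f} g/L")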
Equation (13) and the slopes of the tangent provide the relationship between the settling velocity and the concentration or the constants “a” and “b”. This relationship is based upon the assumption
that the settling rate is a function of solids concentration only. At higher concentrations the settling rate is also influenced by the compressive force created by the depth and concentration of
solids, as described earlier. Since this enhanced settling rate is part of the characteristic of the full-scale thickener due to the compressive force, it is desirable to have this same influence in
the batch settling tests. It is, therefore, desirable to run the batch settling tests at similar slurry depths and concentrations to those expected in a full-scale thickener. | {"url":"https://www.911metallurgist.com/blog/continuous-thickening-theory/","timestamp":"2024-11-08T21:32:05Z","content_type":"text/html","content_length":"179096","record_id":"<urn:uuid:531e4f44-0dbd-46a6-af43-7fe37889ca27>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00215.warc.gz"} |
Math Words That Start with D: Definitions & Examples
In this article, you will find a comprehensive list of math words that start with the letter “D”.
1. Data- Information collected for analysis
2. Decimal- A number with a point dividing the whole and fractional parts
3. Degree- Unit of measurement for angles
4. Denominator- The bottom number in a fraction
5. Diameter- A straight line passing through the center of a circle
6. Difference- The result of subtraction
7. Digit- A single numeral (0-9)
8. Dimension- A measurable extent of some kind, such as length, breadth, depth, or height
9. Direct Proportion- A relationship where one quantity increases as another increases
10. Distributive Property- a(b + c) = ab + ac, distribution over addition or subtraction
11. Divisor- The number by which another number is divided
12. Dividend- The number to be divided
13. Division- The process of determining how many times one number is contained within another
14. Dodecahedron- A three-dimensional shape with 12 flat faces
15. Domain- The set of possible input values (x-values) for a function
16. Diagonal- A line segment joining two non-adjacent vertices of a polygon
17. Differential- Relating to the difference of quantities
18. Degree of Polynomial- The highest power of the variable in a polynomial
19. Dependent Variable- A variable whose value depends on another
20. Decomposition- Breaking down a number into smaller parts
21. Deduction- Deriving conclusions through logical reasoning
22. Degree of a Graph- The number of edges incident to a vertex
23. Duality- Concept where structures have a complementary counterpart
24. Disjoint Sets- Sets that have no elements in common
25. Differential Equation- An equation involving derivatives
26. Discrete Mathematics- Branch of math dealing with countable, distinct elements
27. Disjoint- Mutually exclusive or having no intersection
28. Deviation- Difference from a standard or average
29. Depreciation- The reduction in value of an asset over time
30. Directrix- A fixed line used in describing a curve or surface
31. Dilation- Transformation altering the size but not the shape
32. Discriminant- Part of the quadratic formula, b² – 4ac
33. Dot Product- Scalar product of two vectors
34. Determinant- A value calculated from a square matrix
35. Derivative- The rate of change of a function with respect to a variable
36. Distribution- A function showing the range of possible values for a variable
37. Diophantine Equation- Polynomial equations with integer solutions
38. Dual Space- Set of linear functionals on a vector space
39. Directional Derivative- Rate of change of a function in a given direction
40. Double Integral- Integration over a two-dimensional area
41. Dynamic- Relating to a process of change
42. Difference of Squares- a² – b² = (a + b)(a – b)
43. Dichotomy- Division into two mutually exclusive groups
44. Discrete Variable- A variable that can take on a finite or countable number of values
45. Diagonal Matrix- A matrix with non-zero elements only on the diagonal
46. Divisibility- The ability of one number to be divided by another without a remainder
47. Duality Principle- Interchange properties in mathematical proofs
48. Distribution Property- Property of distributing operations over functions
49. Differential Calculus- The study of how functions change when their inputs change
50. Discrete Probability- The probability of outcomes in a discrete sample space
51. Direct Method- Solving linear equations by directly manipulating them
52. Disjoint Union- Combination of disjoint sets
53. Discrete Distribution- Probability distribution for discrete random variables
54. Division Algorithm- The method of dividing polynomials
55. Dimensional Analysis- Analysis of units and their consistency
56. De Morgan’s Laws- Laws relating union and intersection of sets
57. Discrete Fourier Transform- Transform used in signal processing
58. Definite Integral- Integral evaluated over a specific interval
59. Domain of Definition- The set of values for which a function is defined
60. Deterministic Model- A model where outcomes are precisely determined
61. Difference Operator- Operator expressing finite differences
62. Disjunctive Normal Form- Standard form in boolean algebra
63. Decision Tree- A branching method to reach decisions based on conditions
64. Differential Geometry- Study of curves and surfaces through calculus
65. Dimensionality Reduction- Reducing the number of variables under consideration
66. Distance Formula- Formula to calculate distance between two points
67. Discrete Event Simulation- Simulation of processes in discrete steps
68. Discrete Optimization- Optimization problem over discrete spaces
69. Digest Function- Function that transforms data into fixed-size hash
70. Degrees of Freedom- The number of values in a calculation that are free to vary
71. Discrepancy- Difference between expected and actual values
72. Dynamic Programming- Method for solving complex problems by breaking them down
73. Doubling Time- Time required for a quantity to double in size
74. Difference Quotient- The ratio of change in function value to change in variable
75. Dual Graph- A graph associating polygon vertices and edges to another
76. Disjoint Partition- A partition where subsets have no elements in common
77. Divisible- Capable of being divided by another number without remainder
78. Discrete Logarithm- Logarithm in terms of integers over modular arithmetic
79. Discrete Set- A set of distinct and separate values
80. Distributed Processing- Processing spread over multiple systems
81. Discreteness- Quality of being separate or distinct
82. Decay Rate- The rate at which a substance decreases over time
83. Discovery Learning- Learning approach through discovery and exploration
84. Discrete Cosine Transform- Similar to Fourier transform, used in signal processing
85. Data Mining- Extracting useful information from large data sets
86. Design Matrix- A matrix of data for statistical models
87. Discontinuity- A point where a mathematical function is not continuous
88. Discrete Sampling- Sampling at distinct intervals
89. Decision Boundary- Separation in space used in classification problems
90. Deconvolution- Reversing the effect of convolution on data
91. Deterministic- Predictable with no randomness involved
92. Derivative Test- Test using derivatives to find local extrema
93. Double Counting- Counting an event more than once
94. Divide-and-Conquer Algorithm- Algorithm breaking problems into smaller subproblems
95. Damped Oscillation- Oscillation reduced over time due to damping
96. Data Frame- Structure for storing data in grids
97. Deadlock- A situation where processes cannot proceed due to dependency
98. Degree Sequence- Sequence of vertex degrees in a graph
99. Dichotomous Variable- A variable with only two possible values
100. Dynamic Range- The ratio between the largest and smallest possible values | {"url":"https://thismakeswords.com/math-words-that-start-with-d/","timestamp":"2024-11-08T17:20:22Z","content_type":"text/html","content_length":"145303","record_id":"<urn:uuid:c7da5d33-390f-459e-9358-0a8405b140f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00672.warc.gz"} |
Lesson 12
Practice With Proportional Relationships
Problem 1
Quadrilateral \(ABCD\) is similar to quadrilateral \(A’B’C’D’\). Select all statements that must be true.
Problem 2
Lines \(BC\) and \(DE\) are both vertical. What is the length of \(AD\)?
Problem 3
The quilt is made of squares with diagonals. Side length \(AB\) is 2.
1. What is the length of \(BD\)?
2. What is the area of triangle \(AEH\)?
Problem 4
Segment \(A’B’\) is parallel to segment \(AB\). What is the length of segment \(BB'\)?
Problem 5
Elena thinks length \(BC\) is 16.5 units. Lin thinks the length of \(BC\) is 17.1 units. Do you agree with either of them? Explain or show your reasoning.
Problem 6
Mai thinks knowing the measures of 2 sides is enough to show triangle similarity. Do you agree? Explain or show your reasoning.
Problem 7
Line \(g\) is dilated with a center of dilation at \(A\). The image is line \(f\). Approximate the scale factor. | {"url":"https://curriculum.illustrativemathematics.org/HS/teachers/2/3/12/practice.html","timestamp":"2024-11-11T08:17:41Z","content_type":"text/html","content_length":"92149","record_id":"<urn:uuid:024fb4e0-d01c-402b-9f57-951917ace5f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00241.warc.gz"} |
Category: Stats Tip Of The Week
In Regression, we attempt to fit a line or curve to the data. Let's say we're doing Simple Linear Regression in which we are trying to fit a straight line to a set of (x,y) data.
We test a number of subjects with dosages from 0 to 3 pills. And we find a straight line relationship, y = 3x, between the number of pills (x) and a measure of health of the subjects. So, we can say that the relationship y = 3x holds within the tested range of 0 to 3 pills. But we cannot make a statement like that about dosages outside the range we tested.
This is called extrapolating the conclusions of your Regression Model beyond the range of the data used to create it. There is no mathematical basis for doing that, and it can have negative
consequences, as this little cartoon from my book illustrates.
In the graphs below, the dots are data points. In the graph on the left, it is clear that there is a linear correlation between the drug dosage (x) and the health outcome (y) for the range we
tested, 0 to 3 pills. And we can interpolate between the measured points. For example, we might reasonably expect that 1.5 pills would yield a health outcome halfway between that of 1 pill and 2 pills.
For more on this and other aspects of Regression, you can see the YouTube videos in my playlist on Regression. (See my channel: Statistics from A to Z - Confusing Concepts Clarified.)
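A minimal sketch of the idea in Python, with made-up dosage data standing in for the example above: fit the line on the tested range, and only use it inside that range.

import numpy as np

# Hypothetical data: dosage (pills) vs. health score, tested only from 0 to 3.
x = np.array([0, 1, 2, 3], dtype=float)
y = np.array([0.2, 3.1, 5.9, 9.0])

slope, intercept = np.polyfit(x, y, 1)   # roughly y = 3x

# Interpolation (inside the tested range) is justified:
print("predicted score for 1.5 pills:", slope * 1.5 + intercept)

# Extrapolation (outside the tested range) is NOT justified by the model,
# even though the arithmetic is easy to do:
# slope * 10 + intercept  <- no mathematical basis; the relationship was
# only established for dosages of 0 to 3 pills.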
The Binomial Distribution is used with Count data. It displays the Probabilities of Count data from Binomial Experiments. In a Binomial Experiment,
• There are a fixed number of trials (e.g. coin flips)
• Each trial can have only 1 of 2 outcomes.
• The Probability of a given outcome is the same for each trial.
• Each trial is Independent of the others
There are many Binomial Distributions. Each one is defined by a pair of values for two Parameters, n and p. n is the number of trials, and p is the Probability of each trial.
The graphs below show the effect of varying n, while keeping the Probability the same at 50%. The Distribution retains its shape as n varies. But obviously, the Mean gets larger.
The effect of varying the Probability, p, is more dramatic. For small values of p, the bulk of the Distribution is heavier on the left. However, as described in my post of July 25, 2018, statistics describes this as being skewed to the right, that is, having a positive skew. (The skew is in the direction of the long tail.) For large values of p, the skew is to the left, because the bulk of the Distribution is on the right.
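A short sketch of these effects, using scipy's Binomial distribution (the specific n and p values are chosen only for illustration):

from scipy.stats import binom

# Varying n with p fixed at 0.5: same symmetric shape, larger Mean.
for n in (10, 20, 40):
    d = binom(n, 0.5)
    skew = float(d.stats(moments='s'))
    print(f"n={n:2d}, p=0.5: mean={d.mean():5.1f}, skewness={skew:+.3f}")

# Varying p with n fixed: small p -> skewed right (positive), large p -> skewed left (negative).
for p in (0.1, 0.5, 0.9):
    d = binom(20, p)
    skew = float(d.stats(moments='s'))
    print(f"n=20, p={p}: mean={d.mean():5.1f}, skewness={skew:+.3f}")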
One of the requirements for using the Binomial Distribution is that each trial must be independent. One consequence of this is that the Sampling must be With Replacement.
To illustrate this, let's say we are doing a study in a small lake to determine the Proportion of lake trout. Each trial consists of catching and identifying 1 fish. If it's a lake trout, we count it. The population of the fish is finite. We don't know this, but let's say it's 100 total fish: 70 lake trout and 30 other fish.
Each time we catch a fish, we throw it back before catching another fish. This is called Sampling With Replacement. Then, the Proportion of lake trout remains at 70%. And the Probability for any one trial is 70% for lake trout.
If, on the other hand, we keep each fish we catch, then we are Sampling Without Replacement. Let's say that the first 5 fish which we catch (and keep) are lake trout. Then, there are now 95 fish in the lake, of which 65 are lake trout. The percentage of lake trout is now 65/95 = 68.4%. This is a change from the original 70%.
So, we don't have the same Probability each time of catching a lake trout. Sampling Without Replacement has caused the trials to not be independent. So, we can't use the Binomial Distribution. We must use the Hypergeometric Distribution instead.
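A quick numerical sketch of the lake example with scipy (assuming the 100-fish population described above): with replacement, the number of lake trout in 5 catches follows a Binomial Distribution; without replacement, it follows a Hypergeometric Distribution, and the probabilities differ slightly.

from scipy.stats import binom, hypergeom

N_pop, n_trout, n_catch = 100, 70, 5

with_repl    = binom(n_catch, n_trout / N_pop)      # sampling WITH replacement
without_repl = hypergeom(N_pop, n_trout, n_catch)   # sampling WITHOUT replacement

for k in range(n_catch + 1):
    print(f"P({k} lake trout): binomial {with_repl.pmf(k):.4f}   "
          f"hypergeometric {without_repl.pmf(k):.4f}")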
For more on the Binomial Distribution,
see my YouTube video.
The concept of ANOVA can be confusing in several aspects. To start with, its name is an acronym for "ANalysis Of VAriance", but it is not used for analyzing Variances. (F and Chi-square tests are
used for that.) ANOVA is used for analyzing Means. The internal calculations that it uses to do so involve analyzing Variances -- hence the name.
• The 1st column in the following table describes what ANOVA does do.
• The 2nd column says what ANOVA does not do.
• The 3rd column tells what to use if we want do what's in the 2nd column.
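A small illustration of this distinction (the group data below are invented): scipy's one-way ANOVA reports whether the group Means are plausibly all equal, but it does not say which Means differ, and it says nothing about the Variances.

from scipy.stats import f_oneway

# Hypothetical scores for three groups.
group_a = [81, 79, 84, 86, 78]
group_b = [88, 91, 85, 90, 87]
group_c = [80, 83, 82, 79, 85]

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value says only that at least one group Mean differs from the others;
# a follow-up (post-hoc) comparison is needed to say which one.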
There are a number of see-saws (aka "teeter-totters" or "totterboards") like this in statistics. Here, we see that, as the Probability of an Alpha Error goes down, the Probability of a Beta Error
goes up. Likewise, as the Probability of an Alpha Error goes up, the Probability of a Beta Error goes down.
This being statistics, it would not be confusing enough if there were just one name for a concept. So, you may know Alpha and Beta Errors by different names:
• Alpha Error: false positive, type I error, error of the first kind
• Beta Error: false negative, type II error, error of the second kind
The following compare-and-contrast table should help explain the difference:
The see-saw effect is important when we are selecting a value for Alpha (α) as part of a Hypothesis test. Most commonly, α = 0.05 is selected. This gives us a 1 – 0.05 = 0.95 (95%) Probability of avoiding an Alpha Error.
Since the person performing the test is the one who gets to select the value for Alpha,
why don't we always select α = 0.000001 or something like that?
The answer is, selecting a low value for Alpha comes at a price.
Reducing the risk of an Alpha Error increases the risk of a Beta Error, and vice versa.
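The see-saw can be made concrete with a small one-sided z-test sketch; all the numbers below (the two Means, the Standard Deviation, and the Sample size) are assumptions chosen purely for illustration. As Alpha is made smaller, the Probability of a Beta Error grows.

from scipy.stats import norm

# Assumed setup: H0: mu = 100 vs. H1: mu = 105, known sigma = 15, sample size n = 25.
mu0, mu1, sigma, n = 100.0, 105.0, 15.0, 25
se = sigma / n ** 0.5

for alpha in (0.10, 0.05, 0.01, 0.001):
    crit = mu0 + norm.ppf(1 - alpha) * se      # cutoff for rejecting H0
    beta = norm.cdf(crit, loc=mu1, scale=se)   # P(fail to reject | H1 is true)
    print(f"alpha = {alpha:6.3f}  ->  beta = {beta:.3f}")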
There is an article in the book devoted to further comparing and contrasting these two types of errors. Some time in the future, I hope to get around to adding a video on the subject. (Currently
working on a playlist of videos about Regression.) See the
videos page
of this website for the latest status of videos completed and planned.
Most users of statistics are familiar with the F-test for Variances. But there is also a Chi-Square Test for the Variance. What's the difference?
The F-test compares the Variances from 2 different Populations or Processes. It basically divides one Variance by the other and uses the appropriate F Distribution to determine whether there is a
Statistically Significant difference.
If you're familiar with t-tests, the F-test is analogous to the 2-Sample t-test. The F-test is a Parametric test. It requires that the data from each of the 2 Samples be roughly Normal.
The following compare-and-contrast table may help clarify these concepts:
Chi-Square (like t and F) is a Test Statistic. That is, it has an associated family of Probability Distributions.
The Chi-Square Test for the Variance compares the Variance from a Single Population or Process to a Variance that we specify. That specified Variance could be a target value, a historical value, or anything else.
Since there is only 1 Sample of data from the single Population or Process, the Chi-Square test is analogous to the 1-Sample t-test. In contrast to the F-test, the Chi-Square test is Non-Parametric. It has no restrictions on the data.
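scipy has no ready-made function for this particular test, but the statistic is straightforward to compute by hand, as the sketch below shows; the sample values and the specified Variance are both invented for illustration.

import numpy as np
from scipy.stats import chi2

sample = np.array([10.2, 9.8, 10.5, 10.1, 9.6, 10.4, 10.0, 9.9])  # hypothetical data
sigma0_sq = 0.04   # specified (target or historical) Variance

n = len(sample)
s_sq = sample.var(ddof=1)               # Sample Variance
stat = (n - 1) * s_sq / sigma0_sq       # Chi-Square test statistic
p_upper = chi2.sf(stat, df=n - 1)       # right-tail p-value (testing for a larger Variance)

print(f"s^2 = {s_sq:.4f}, chi-square = {stat:.2f}, p (upper tail) = {p_upper:.4f}")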
Note: I have published relevant videos on these topics on my YouTube channel, "Statistics from A to Z -- Confusing Concepts Clarified".
There are 3 categories of numerical properties which describe a Probability Distribution (e.g. the Normal or Binomial Distributions).
• Center: e.g. Mean
• Variation: e.g. Standard Deviation
• Shape: e.g. Skewness
Skewness is a case in which common usage of a term is the opposite of statistical usage. If the average person saw the Distribution on the left, they would say that it's skewed to the right, because
that is where the bulk of the curve is. However, in statistics, it's the opposite. The Skew is in the direction of the long tail.
If you can remember these drawings, think of "the tail wagging the dog."
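A quick numerical check of this naming convention, using simulated samples: a long right tail gives a positive skew statistic, and a long left tail gives a negative one.

import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
right_tailed = rng.exponential(scale=1.0, size=10_000)   # bulk on the left, long tail to the right
left_tailed = -right_tailed                              # mirror image

print("skewness, long right tail:", round(float(skew(right_tailed)), 2))   # positive
print("skewness, long left tail: ", round(float(skew(left_tailed)), 2))    # negative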
Many folks are confused about this, especially since the names for these tests themselves can be misleading. What we're calling the "2-Sample t-test" is sometimes called the "Independent Samples
t-test". And what we're calling the "Paired t-test" is then called the "Dependent Samples t-test", implying that it involves more than one Sample. But that is not the case. It is more accurate --
and less confusing -- to call it the Paired t-test.
First of all, notice that the 2-Sample test, on the left, does have 2 Samples. We see that there are two different groups of test subjects involved (note the names are different) -- the Trained and the Not Trained. The 2-Sample t-test will compare the Mean score of the people who were trained with the Mean score of the different people who were not.
The story with the Paired Samples t-test is very different. We only have one set of test subjects, but 2 different conditions under which their scores were collected. For each person (test subject), a pair of scores -- Before and After -- was collected. (Before-and-After comparisons appear to be the most common use for the Paired test.)
Then, for each individual, the difference between the two scores is calculated. The values of the differences are the Sample data (in this case: 4, 7, 8, 3, 8). And the Mean of those differences is compared by the test to a Mean of zero.
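As a sketch, the Paired test can be run on the five differences quoted above by treating them as a 1-Sample t-test against a Mean of zero (the individual Before and After scores are not given here, so only the differences are used):

from scipy.stats import ttest_1samp

differences = [4, 7, 8, 3, 8]        # After minus Before, per person (from the example above)
t_stat, p_value = ttest_1samp(differences, popmean=0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Equivalently, scipy.stats.ttest_rel(after, before) would give the same result
# if the individual Before and After scores were available.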
For more on the subject, you can view my video,
t, the Test Statistic and its Distributions
Alpha, p, Critical Value, and Test Statistic are 4 concepts which work together in many statistical tests. In this tip, we'll touch on part of the story. The pictures below show two graphs which are
close-ups of the right tail of a Normal Distribution. The graphs show the result of calculations in 2 different tests.
The horizontal axis shows values of the Test Statistic, z. So, z is a point value on this horizontal axis. z = 0 is to the left of these close-ups of the right tail. The value of z is calculated from the Sample data.
• Note that the calculated z defines the boundary of a hatched area. The hatched areas under the curve represent the value of the Cumulative Probability, p.
• And z-critical (the Critical Value of z) defines the boundary of the shaded area representing α.
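For a right-tailed z-test, the four quantities can be computed directly; in the short sketch below, the calculated z is just an assumed example value.

from scipy.stats import norm

alpha = 0.05
z = 1.83                           # test statistic calculated from the Sample data (assumed here)

z_critical = norm.ppf(1 - alpha)   # boundary of the shaded alpha area
p = norm.sf(z)                     # area under the curve to the right of z

print(f"z-critical = {z_critical:.3f}, p = {p:.4f}")
# The two decision rules agree: z > z-critical exactly when p < alpha.
print("reject H0:", z > z_critical, p < alpha)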
For more on how these four concepts work together, there is an article in the book, "Alpha, p, Critical Value and Test Statistic -- How They Work Together". I think this is the best article in the book. You can also see that article's content on my YouTube video. There are also individual articles and videos on each of the 4 concepts. My YouTube Channel is "Statistics from A to Z -- Confusing Concepts Clarified".
In ANOVA, Sum of Squares Total (SST) equals Sum of Squares Within (SSW) plus Sum of Squares Between (SSB). That is, SST = SSW + SSB. In this Tip, we'll talk about Sum of Squares Within, SSW. In ANOVA, Sum of Squares Within (SSW) is the sum of the Variations within each of several datasets or Groups.
The following illustrations are not numerically precise. But, conceptually, they portray the concept of Sum of Squares Within as the width of the “meaty” part of a Distribution curve – the part without the skinny tails on either side.
Here, SSW = SS1 + SS2 + SS3
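The identity SST = SSW + SSB is easy to verify numerically; the three small groups below are made up for illustration.

import numpy as np

groups = [np.array([4.0, 5.0, 6.0]),
          np.array([7.0, 9.0, 8.0]),
          np.array([10.0, 12.0, 11.0])]

all_data = np.concatenate(groups)
grand_mean = all_data.mean()

ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)             # within groups
ssb = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)   # between groups
sst = ((all_data - grand_mean) ** 2).sum()                         # total

print(f"SSW = {ssw:.2f}, SSB = {ssb:.2f}, SST = {sst:.2f}")
print("SST equals SSW + SSB:", np.isclose(sst, ssw + ssb))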
For more on Sums of Squares, see my video of that name:
https://bit.ly/2JWMpoo .
For more on Sums of Squares within ANOVA, see my video, "ANOVA Part 2 (of 4): How It Does It:
http://bit.ly/2nI7ScR .
How to Calculate Surface Area of a Sphere.
A sphere is a regular three-dimensional object in which every cross-section is a circle.
Formula to calculate the surface area of a sphere:
Surface area = 4 × π × r²
r – is the radius of the sphere.
π – is a constant estimated to be 3.142.
A globe has a radius of 20 cm. Calculate the surface area of the globe.
In this case, a globe is a good example of a sphere, so we are going to use the formula for the surface area of a sphere to calculate the surface area of the globe.
Surface area = 4 × π × r² = 4 × 3.142 × 20² = 4 × 3.142 × 400 = 5027.2 cm²
Therefore, the surface area of the globe is 5027.2 cm² . | {"url":"https://www.learntocalculate.com/how-to-calculate-surface-area-of-a-sphere/","timestamp":"2024-11-08T18:52:01Z","content_type":"text/html","content_length":"57883","record_id":"<urn:uuid:3a98a95c-14de-4504-85d6-a5848a01acb2>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00558.warc.gz"} |
Thales – The Aperiodical
This is a guest post by Elliott Baxby, a maths undergraduate student who wants to share an appreciation of geometrical proofs.
I remember the days well when I first learnt about loci and constructions – what a wonderful thing. Granted, I love doing them now; to be able to appreciate how Euclid developed his incredible proofs
on geometry. | {"url":"https://aperiodical.com/tag/thales/","timestamp":"2024-11-05T21:47:28Z","content_type":"text/html","content_length":"28989","record_id":"<urn:uuid:f1c12c81-fa9a-43eb-88fe-84f08a4ff919>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00360.warc.gz"} |
Approximation, sampling and compression in data science
Created: 2019-02-12 11:00
Institution: Isaac Newton Institute for Mathematical Sciences
Description: Programme Theme
Approximation theory is the study of simulating potentially extremely complicated functions, called target functions, with simpler, more easily computable functions called approximants.
The purpose of the simulation could be to approximate values of the target function with respect to a given norm, to estimate the integral of the target function, or to compute its
minimum value. Approximation theory's relationship with computer science and engineering encourages solutions that are efficient with regards to computation time and space. In addition,
approximation theory problems may also deal with real-life restrictions on data, which can be incomplete, expensive, or noisy. As a result, approximation theory often overlaps with
sampling and compression problems.
The main aim of this programme is to understand and solve challenging problems in the high-dimensional context, but this aim is dual. On one hand, we would like to use the
high-dimensional context to understand classical approximation problems. For example, recent developments have revealed promising new directions towards a break-through in a set of
classical unsolved problems related to sampling in hyperbolic cross approximations. On the other hand, we want to understand why classical multivariate approximation methods fail in the
modern high-dimensional context and to find methods that will be better and more efficient for modern approximation in very high dimensions. This direction will focus on two conceptual
steps: First, replacement of classical smoothness assumptions by structural assumptions, such as those of sparsity used by compressed sensing. Second, the use of a nonlinear method, for
instance a greedy algorithm, to find an appropriate sparse approximant.
In order to achieve the goal the programme will bring together researchers from different fields to work in groups on modern problems of high-dimensional approximation and related
topics. It will foster exchange between different groups of researchers and practitioners.
This collection contains 31 media items.
1 2 Next > | {"url":"https://sms.cam.ac.uk/collection/2919113","timestamp":"2024-11-03T01:18:14Z","content_type":"application/xhtml+xml","content_length":"58226","record_id":"<urn:uuid:34afcaba-0632-4307-851c-c1cf7f8dfcb5>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00420.warc.gz"} |
Seems easy but is not
At first glance, this math problem appears straightforward, even simple. But don’t be fooled – this brain-teaser is designed to challenge your mathematical reasoning and problem-solving skills.
These math problems are more fun when you find yourself trying to remember the math you learned as a child. Can you figure out the correct solution? At the top of the picture, we see the task and
then four possible answers.
Which solution do you think is the correct one? How did you come up with it? Take your time and think about it to find the correct solution. Done? You can check if you picked the right number!
The equation presented, “3 + 3×3 – 3 + 3 = ?”, is a classic example of a math problem that is not as straightforward as it seems. The order of operations and the mix of addition, subtraction, and multiplication can easily trip up even the most math-savvy individuals.
The correct answer is B: 12.
Why is 12 the correct answer?
Well, if you remember from your school days, according to the order of operations, you do multiplication before addition and subtraction, so you start by solving 3 x 3, which results in 9.
Then we are left with a simpler math problem: 3 + 9 – 3 + 3
The answer is therefore 12.
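If you want to double-check, most programming languages follow the same order of operations; for example, in Python:

print(3 + 3 * 3 - 3 + 3)   # multiplication first, then left-to-right: prints 12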
While the solution may seem obvious once you’ve walked through the step-by-step process, the initial perception of simplicity can be deceiving. This type of math problem is designed to challenge your
attention to detail and your ability to properly apply the order of operations. | {"url":"https://sub.celebsnewslive.com/seems-easy-but-is-not/","timestamp":"2024-11-03T01:33:24Z","content_type":"text/html","content_length":"29112","record_id":"<urn:uuid:38d2e360-2738-4566-9bb2-a3a44dead4d2>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00043.warc.gz"} |
EULER CONSTANT infinite limit definition
01-03-2018, 03:06 AM
Post: #1
Ciro Bruno Posts: 5
Junior Member Joined: Jul 2017
EULER CONSTANT infinite limit definition
I've been hoping to see this bug solved in new 13217 FW Version. It regards the Euler Constant as defined by infinite limit.
Maybe nobody has noticed that before. So here it is.
Thanks to HP Prime development. The new Fw is a huge step!
Regards, Ciro Bruno.
01-03-2018, 08:37 PM
Post: #2
parisse Posts: 1,337
Senior Member Joined: Dec 2013
RE: EULER CONSTANT infinite limit definition
This is a user-interface problem, I mean that the x power is outside of the limit and this is unfortunately not clearly displayed. If you are using the template to enter a limit, make sure to keep
all the limit argument inside the parenthesis. Or switch to algebraic entry mode.
01-03-2018, 09:33 PM
Post: #3
Carlos295pz Posts: 365
Senior Member Joined: Sep 2015
RE: EULER CONSTANT infinite limit definition Viga C | TD | FB
01-05-2018, 07:36 PM
Post: #4
Tim Wessman Posts: 2,293
Senior Member Joined: Dec 2013
RE: EULER CONSTANT infinite limit definition
I've implemented this using the same mechanism used for a fraction being raised to a power (so check that and you'll see how it will behave). Impacts these nodes:
1. Fraction
2. Derivative
3. Summation
4. Product
5. Integration
6. Limit
Although I work for HP, the views and opinions I post here are my own.
User(s) browsing this thread: 1 Guest(s) | {"url":"https://hpmuseum.org/forum/showthread.php?mode=linear&tid=9843&pid=87767","timestamp":"2024-11-14T15:10:46Z","content_type":"application/xhtml+xml","content_length":"24808","record_id":"<urn:uuid:899a737d-1780-46fd-bc54-25f0d65c8583>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00725.warc.gz"} |
Multiplicative Identity Property of One ⭐ Definition, Examples
Multiplicative Identity Property of One – Definition with Examples
Updated on January 11, 2024
Welcome to another enlightening session from Brighterly, your trusted partner in making mathematics fun and engaging for children. At Brighterly, we strive to illuminate young minds by breaking down
complex math concepts into comprehensible nuggets of knowledge. Today, we’re diving into an indispensable cornerstone of math: the Multiplicative Identity Property of One. This might seem like a
daunting phrase, but trust us, by the end of this piece, you’ll see that it’s as straightforward and fascinating as it gets! This property might sound modest, but its impact in mathematics is
anything but small, extending from the simplest of calculations to the most complex of equations.
What Is the Multiplicative Identity Property of One?
In the vast universe of mathematics, the Multiplicative Identity Property of One is a simple yet significant concept that guides our understanding of numbers. This property states that when any
number is multiplied by one, the result is the original number itself. Mathematically, it is expressed as a * 1 = a and 1 * a = a, where ‘a’ is any real number. This property allows us to retain the
identity of the number during multiplication operations. The number one (1) is called the multiplicative identity because of this distinct capability. The power of this property is present everywhere
– from the basics of elementary mathematics to the complexities of algebraic equations.
Explanation of Multiplicative Identity Property
The beauty of the Multiplicative Identity Property lies in its simplicity. Let’s consider a basic scenario. Imagine you have 3 apples, and you multiply it by one. You still have 3 apples. This
reflects the fundamental principle behind the Multiplicative Identity Property. When one is the multiplying factor, the number retains its identity. In essence, the multiplicative identity property
of one is a cornerstone of mathematics that helps us maintain the value of a number despite multiplication. It’s like a mathematical magic trick: no matter the number, if you multiply it by one, it
stays the same!
Multiplicative Identity vs Additive Identity
While the Multiplicative Identity Property revolves around the number one, there’s another fundamental property in mathematics called the Additive Identity Property, which centers on the number zero.
This property states that adding zero to any number leaves the number unchanged (a + 0 = a and 0 + a = a). So, zero is the additive identity. Together, these two identities, the multiplicative and
additive identities, serve as two of the fundamental building blocks in the realm of numbers.
Properties of the Multiplicative Identity
The multiplicative identity property of one stands out for its unique characteristics. Apart from the primary property (a * 1 = a), it also follows the property of commutativity (1 * a = a). This
means that the order of multiplication doesn’t affect the outcome. This property is key in various mathematical applications, including solving equations and simplifying expressions.
Detailed Explanation of the Multiplicative Identity Property
In deeper mathematical contexts, the multiplicative identity property of one showcases its true value. Consider an equation like 2x * 1 = 2x. No matter what value ‘x’ holds, the product will always
be 2x, maintaining the identity. This property is incredibly useful when you need to isolate variables or simplify equations. Remember, regardless of whether you’re dealing with simple numbers or
complex equations, the number one is the ultimate game-keeper, ensuring the identity of your values.
Difference Between Multiplicative Identity and Other Mathematical Properties
The world of mathematics is filled with several properties, each with its own significance. The multiplicative identity property is unique because it involves a specific number (one) preserving the
identity of any number it multiplies. This contrasts with properties like the Associative Property, where the focus is on the grouping of numbers, or the Distributive Property, which involves both
addition and multiplication. Understanding these differences can help students appreciate the unique role of each property in mathematics.
Equations Involving the Multiplicative Identity
In the realm of equations, the multiplicative identity property serves a critical role. Take the equation 5y * 1 = 5y. No matter what value ‘y’ takes, the left-hand side of the equation will always
equal the right-hand side. This makes the number one a kind of secret agent in math – always there, always ready to ensure that the identity of the number or expression it multiplies remains
Writing Equations with the Multiplicative Identity Property
When it comes to writing equations with the multiplicative identity property, the process is straightforward. Start with any number or variable, then multiply it by one. The result is an equation
that adheres to the multiplicative identity property. For instance, if ‘x’ is your variable, you can write the equation as x * 1 = x. No matter the value of ‘x’, the equation holds true,
demonstrating the fundamental power of the multiplicative identity property.
Practice Problems on the Multiplicative Identity Property of One
Children learn best through practice. Engaging with exercises on the multiplicative identity property of one can help solidify their understanding of this property. Encourage your child to create
their own equations using the multiplicative identity property, or have them verify the property using different numbers and variables. Practice problems can range from simple tasks like verifying 3
* 1 = 3, to more advanced exercises involving variables, such as proving that for any value of ‘b’, b * 1 = b.
We’ve embarked on a fascinating journey exploring the Multiplicative Identity Property of One. While the road of mathematics is long and winding, with Brighterly as your co-pilot, there’s no concept
too challenging to grasp! The beauty of math lies in these basic building blocks. As we’ve seen, the Multiplicative Identity Property of One isn’t just about multiplying numbers. It’s a tool that
assists us in a myriad of mathematical operations and problem-solving situations. Whether your child is just beginning their mathematical journey or strengthening their skills, understanding this
property is crucial. It’s our mission at Brighterly to make sure your child does not just learn mathematics, but appreciates and enjoys it as well. Remember, every successful math journey starts with
Frequently Asked Questions on the Multiplicative Identity Property of One
What is the Multiplicative Identity Property of One?
The Multiplicative Identity Property of One states that any number, when multiplied by one, remains the same. It is a fundamental property of real numbers, providing the base for several mathematical
operations and problem-solving strategies.
Why is the number one called the multiplicative identity?
The number one is called the multiplicative identity because when any number is multiplied by one, it retains its identity, i.e., it stays the same. This property is unique to the number one, hence
the name.
How does the Multiplicative Identity Property differ from the Additive Identity Property?
The Multiplicative Identity Property of One revolves around the number one, indicating that any number multiplied by one stays the same. On the other hand, the Additive Identity Property centers on
zero, stating that when zero is added to any number, the number remains unchanged.
Where is the Multiplicative Identity Property used in real life?
The Multiplicative Identity Property is fundamental to many areas of mathematics and its applications in real life. For instance, when calculating quantities or scaling items, we implicitly use this
property. It’s also instrumental in more advanced math, such as algebra, where it’s used to solve and simplify equations.
Can the Multiplicative Identity Property be applied to fractions or decimals?
Yes, the Multiplicative Identity Property applies to all real numbers, including fractions and decimals. So, whether you’re multiplying one by a whole number, a fraction, or a decimal, the result
will always be the original number.
It's important to continue building math proficiency to make sure your child outperforms peers at school. | {"url":"https://brighterly.com/math/multiplicative-identity-property-of-one/","timestamp":"2024-11-02T11:52:44Z","content_type":"text/html","content_length":"92743","record_id":"<urn:uuid:ec0b2d1f-8aa2-43ac-b7a1-55fe8e76f3ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00390.warc.gz"} |
Geometry Archives | Page 3 of 3 | Wizako - GRE Prep Blog
What is the distance between two parallel chords of lengths 32 cm and 24 cm in a circle of radius 20 cm? Indicate ALL possible distances a) 1 b) 7 c) 4 d) 28 e) 2 f) 14 Solution: (C), (D) Explanation
The two parallel chords can either both be on one side of the centre or on either sides of the centre of the circle. Case i: The two chords … [Read more...] about More Than 1 Answer: Circles and | {"url":"https://gre-prep-blog.wizako.com/category/gre-quant-practice/geometry/page/3/","timestamp":"2024-11-12T14:57:58Z","content_type":"text/html","content_length":"49020","record_id":"<urn:uuid:b4fde90b-1e67-4b36-95fa-950ebf6819ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00033.warc.gz"} |
Finding the Transpose of a Matrix | CodingDrills
Finding the Transpose of a Matrix
Matrix Transpose: Finding the Transpose of a Matrix
Matrices are an essential part of linear algebra and are widely used in various fields, including computer science and mathematics. The transpose of a matrix is a fundamental operation that involves
interchanging its rows with columns. In this tutorial, we will delve into the concept of matrix transposition and learn how to find the transpose of a matrix.
Understanding Matrix Transposition
Before we dive into finding the transpose of a matrix, let's first understand what matrix transposition means. Given an m x n matrix, transposing it will result in an n x m matrix, where the rows of
the original matrix become the columns of the transposed matrix, and vice versa.
To illustrate this concept, let's consider the following matrix:
| 1 2 3 |
| 4 5 6 |
The transpose of this matrix will be:
| 1 4 |
| 2 5 |
| 3 6 |
As you can see, the rows of the original matrix have become the columns of the transposed matrix.
Finding the Transpose of a Matrix
Now that we understand the concept of matrix transposition, let's explore how to find the transpose of a matrix programmatically. We will use a programming language, such as Python, to demonstrate
the process.
Python Implementation
To find the transpose of a matrix in Python, we can utilize nested lists to represent the matrix. Here's an example implementation:
def transpose_matrix(matrix):
rows = len(matrix)
cols = len(matrix[0])
transposed_matrix = [[0 for _ in range(rows)] for _ in range(cols)]
for i in range(rows):
for j in range(cols):
transposed_matrix[j][i] = matrix[i][j]
return transposed_matrix
In the above code snippet, we define a function transpose_matrix that takes a matrix as input and returns its transpose. We initialize a new matrix transposed_matrix with dimensions swapped (n x m
instead of m x n). Then, we iterate over the elements of the original matrix and assign them to the corresponding positions in the transposed matrix.
Example Usage
Let's see the transpose_matrix function in action with an example:
matrix = [[1, 2, 3], [4, 5, 6]]
transposed_matrix = transpose_matrix(matrix)
print(transposed_matrix)
The output will be:
[[1, 4], [2, 5], [3, 6]]
As expected, the function correctly transposes the given matrix.
In this tutorial, we explored the concept of matrix transposition and learned how to find the transpose of a matrix. We saw that the transpose of a matrix involves interchanging its rows with
columns, resulting in a new matrix with dimensions swapped. Using a Python implementation, we demonstrated how to programmatically find the transpose of a matrix.
Matrix transposition is a fundamental operation that finds applications in various areas, such as linear transformations, solving systems of linear equations, and image processing. Understanding this
concept is crucial for any programmer or mathematician working with matrices.
I hope this tutorial has provided you with a clear understanding of matrix transposition and its implementation. Happy coding!
I have a question about this topic | {"url":"https://www.codingdrills.com/tutorial/matrix-data-structure/finding-transpose-of-a-matrix","timestamp":"2024-11-09T22:31:02Z","content_type":"text/html","content_length":"308620","record_id":"<urn:uuid:9f293daf-609c-4c80-97e2-384c1cb7a1f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00074.warc.gz"} |
Mathematical celebration (6/7/2009):
• group B will be engaged in a Math Battle;
• group A will have a Math Relays contest;
• Junior circle will enjoy playing several mathematical games.
These events will be followed by the awards ceremony for all the wonderful things the participants have done throughout the year and a party!
We will get to know each other and go over the written assignment. Then we will solve several fun problems on dividing some number of objects between two people, or sharing some number of
Handouts: Sharing and Dividing
9/27/2009: We will work on a variety of entertaining problems, brainteasers and puzzles.
Handouts: Mathematical Potpourri | Pictures
Fermat's Last Theorem has been baffling and intriguing mathematicians for over 350 years. We are going to trace the work of some of the amazing men and women who worked on this problem, and even
prove the Theorem in a few cases ourselves!
Handouts: Fermat's Last Theorem
We will solve a lot of interesting problems dealing with balance scale.
Handouts: Problems with balance scales
We will do more problems on coloring maps as well as a variety of other topics.
/ Handouts: Problem set
We will discuss the continued fraction expansion and talk a little bit about the golden ratio and its occurrence in arts and nature. Then we will calculate the continued fraction expansion of
roots of small integers and discover some interesting structure in these expansions. This will lead to the understanding of a beautiful theorem named after Fermat but proven by Euler, that
characterizes the primes that can be written as the sum of two squares.
Handouts: Continued Fractions and a theorem by Fermat
You have a balance scale and a lot of weights. All the weights are powers of two (that is, they represent numbers 1, 2, 4, 8, 16, 32, 64, ..., expressed in grams). You have just one copy of each
of these weights. Using just these weights, can you balance any object weighing the whole number of grams on the balance scale? We will find out!
Handouts: Weighing with Powers of 2
11/ We will continue with the handout from last time and will solve problems on a variety of topics.
Handouts: Mike\'s group handout | Carey\'s group handout | Carey\'s group handout | Carey\'s group handout
We will continue with the topic started last time.
We will continue exploring binary notation for numbers concentrating on the analogies with the decimal system this time.
Handouts: Binary (continuation)Handouts: Combinatorics homework (Carey) | Logic arrows (Carey) | Combinatorics (Carey)
18/ Going from real numbers (ordinary numbers) to complex numbers is like coming out of a tunnel. You can see much more of the mathematical landscape than you thought possible. Even some properties
2009 of real numbers that were mysterious before become clearer. This session will be an introduction to complex numbers, their basic properties, and some things you can do with them. In future
sessions we'll discuss more applications, ranging from number theory to cell phones.
Handouts: Unit circle | Complex plane | Geoboard
We will solve a variety of problems involving inverse operations and backwards reasoning.
Handouts: Backwards reasoningHandouts: NumberTheoryProblems
25/ This is the first part of the meeting (2-3 p.m.): Some thoughts on using math and science thinking and math and science knowledge far outside math and the sciences, from Eugene Volokh, who?s a
2009 professor at UCLA School of Law. Eugene started as a math buff, shifted to computer programming, and eventually turned to law as well as popular writing about the law (he?s the founder of The
Volokh Conspiracy weblog, http://volokh.com). Before going into teaching, he clerked for Justice Sandra Day O?Connor at the U.S. Supreme Court.
This is the second part of the meeting (from 3 p.m. to 5 p.m.). We will start reviewing the material for AMC 10 and AMC 12.
We will be working on a variety of problems, including backwards reasoning, binary/decimal, and elementary algebra, to solve the mystery of the missing candy!
11/1 Handouts: The Mystery of the Missing Candy
2009 This week students will get more practice with doing calculations mod n, and using modular arithmetic in solving divisibility problems.
We will continue going over examples of the addition and multiplication principles. Then we will learn about Venn diagrams and double counting in order to begin counting more complex sets.
11/8 We will be building squares and cubes and examining some of their properties.
2009 Handouts: Squares and Cubes
We will make several models of simple 3d solid bodies. We will use paper and glue for some models, and clay and toothpicks for the rest.
Handouts: Making models of 3d solid bodies
In Carey's group, we will go over the homework, and begin applying what we have learned so far to counting with repetitions and the idea of a "combination." Attached are last week's handouts and
11/ the homework. In Mike's group, we will review/introduce the notions of least common multiple and greatest common divisor. Euclid's method, and various realizations of gcd's will be covered.
2009 Handouts: Homework | In-Class Problems
We will begin with a review of basic number theory/abstract algebra, discussing the "integers modulo N." We will move on to discussions of how to determine if a number is a "quadratic residue"
modulo N, introducing the Legendre and Jacobi symbols. We'll conclude by using these tools to build an encryption scheme, which we'll play around with at the end.
Handouts: Problem Set | Handout
Polyhedra are three dimensional shapes that have vertices, edges and faces. We will use the models we have built last time as well as other examples to figure out if there is a relationship
between the numbers of vertices, edges and faces for polyhedra.
11/ Handouts: Euler's formula
2009 In Mike's group, we will continue our study of greatest common divisors and least common multiples, and their application to problems in modular arithmetic and remainders. In Carey's group, we
look more at combinations and several cases with choosing 2 or 3 objects from a larger set. We also do a colored picross where different colors do not necessarily need to be separated by
Handouts: Colored Picross | Combinations Problems
We will be doing holiday-themed problems dealing with Euler's formula, Gauss's formula for summing up integers 1 to n, and other topics we have covered this year.
12/6 Handouts: Holiday Math
2009 Many surprises arise as one tries to apply the familiar notions of size of sets to infinite sets, or compare the sizes of two different infinite sets. We will explore some of these surprises in
a series of classic examples.
1/9/ In Mike's group, we will begin a series of sessions loosely centered around geometry.
Pirates are searching for buried treasure on Treasure Island and encounter many math problems along the way, including logic problems, magic squares, and games.
Handouts: The Treasure Island
/ Mike (6221): Proof Techniques in Number Theory Clint (6201): Counting with Combinations
Handouts: Clint's group handout | Mike's group handout
We will see how to glue surfaces out of polygons and learn about distinguishing properties of various surfaces.
1/16 We will take 75 minutes to solve a Math Kangaroo contest from one of the previous years. Note: class will meet 2-3:15 in MS 6627 (both groups).
We will be working on basic logic, including the negation of statements and finding counterexamples.
Handouts: Meeting Mr. No and drawing conclusions
/ In Clint's group, we will continue with the handout from last time. In Mike's group, we will continue to study linear congruences, and discuss proofs.
Handouts: Clint's group handout | Mike's group handout
This is a continuation of the previous meeting.
We will solve a series of problems about math circle students who take various classes, travel to different places and play various sports, as well as some logic puzzles.
1/24 Handouts: Venn Diagrams
2010 Clint: We will begin looking at topics in number theory, starting this week with parity.
Mike: We will discuss how to prove some statements in number theory, building on our discussion of logical propositions from last time.
Handouts: Clint's group parity handoutHandouts: Trigonometry | Logarithms
We will work on the problems that can be solved using a simple invariant (a notion that we will introduce), as well as discuss several two-player games.
Handouts: Invariants and Games
1/31 Mike: We will continue to practice formal proofs in number theory.
/ Clint: We will continue our study of parity. (Note that the handout below is different from previous week's.)
Handouts: Mike's group handout | Clint's group handout
We will solve a variety of problems on Graphs and Colorings
We will be playing games with coins, a chessboard, and a binary card trick!
Handouts: More fun and games!
2010 Clint, 6201: Our group has been studying parity, or divisibility by 2. This week we'll enlarge our focus and start looking at divisibility in general.
Mike, 6221: We will go over the worksheet "Well-definition of addition modulo n", and continue our study of modular arithmetic from a rigorous logical perspective.
Handouts: Clint's group handout
We will be solving some fun Math Kangaroo problems.
2/14 Handouts: Math Kangaroo Practice
2010 In Clint's group we'll continue divisibility, with special attention to the role of prime numbers.
As Mike is out of town Olga Radko will lead Mike's group. See last week's attachment for the handout "Well-definition of multiplication modulo n".
Handouts: Clint's group handout
We will be examining two basic mathematical operations: rotations and translations. We will also be discussing symmetry.
2/21 Handouts: Rotations and Translations
2010 In Clint's group, we'll see that there are infinitely many primes and use prime factorizations to solve a variety of problems.
In Mike's group, we'll try to apply some ideas we've learned about modular arithmetic to solve a variety of problems.
Handouts: Mike's group warmup | Mike's group handout I | Mike's group handout II | Clint's group handout
We will be examining compositions of translations, reflections, and rotations. We will also study symmetry with respect to a point and symmetry with respect to a line.
2/28 Handouts: Rigid motions of the plane
2010 Clint, 6201: We tie up some loose ends from previous sessions on primes and divisibility, and also try our hand at some problems involving measurements.
Mike, 6221: This week in Mike's group we will investigate writing numbers in binary (base 2) notation, as well as other number bases.
Handouts: Mike's Group Handout | Clint's group handout
Handouts: Maps, areas, and kissing numbers
We will be learning about implications, converses, and contrapositives, as well as doing some fun problems with reflections and mirrors.
Handouts: Logic and Mirror Problems
Clint, 6201: In Clint's group, we'll look at problems involving the GCD and LCM of numbers.
3/7/ Mike, 6221: In Mike's group, we'll continue to practice working with binary numbers by solving problems and playing Nim!
Handouts: Clint's group handout
We will discuss various questions typically of the form: What is the shortest path with prescribed properties? This will lead us to some consequences in optics. We will end with a discussion of
the isoperimetric inequality.
We will be working on a Math Kangaroo test from a previous year.
Handouts: Math Kangaroo
/ This week Clint's and Mike's groups will combine for a team problem solving contest called Relays! We will meet in our usual rooms (Clint's group in MS 6201, Mike's in MS 6221) to organize
2010 before moving to the Graduate Lounge (MS 6620) for the competition. Students will work in small groups on a series of fun problems ranging from divisibility and modular arithmetic to estimation
and combinatorial games.
We will discuss the three common coordinate systems associated with a triangle (trilinear, tripolar and barycentric coordinates) and explore some of their applications.
We will look at maps of "Insect Countries" consisting of cities and tunnels and explore their properties. (This is a first glimpse into basic graph theory.)
Handouts: Life in an Insect World
Mike, 6221: In Mike's group we will finish our discussion of 3-pile Nim, and other games.
Clint, 6201: Clint's group will examine the Pigeonhole Principle and see how it applies to a range of problems.
2010 Handouts: Pigeonhole Principle | Pigeonhole Principle Solutions | Nim Handout
In combinatorics, we are not only concerned with the study of combinatorial objects (such as graphs, permutations, partitions, and the like); we are also interested in how we can apply methods
from other areas of mathematics to help us understand these objects. In this lecture, I will present one of the most common ways of applying algebra (and some calculus) to combinatorics: the
generating function. A generating function is a way of encoding a sequence into a polynomial. With generating functions, we can use the algebraic operations of polynomials to greatly simplify
calculations and (in some cases) prove marvelous identities.
We will continue looking at Insect countries (consisting of several cities, some of which are connected by tunnels). This time, we will decide on the most economical way to build railroads (in
addition to tunnels).
Handouts: Railroads and Trees
4/11 Clint's group, MS 6201: We will turn to graph theory and, in particular, look at a number of problems whose solution can be found using trees.
/ Mike's group, MS 6221: This week we will play more combinatorial games!
Handouts: Trees and Trees | Trees and Trees Solutions | Game Theory
In the classic book "Alice in Wonderland" many strange things happen that are left unexplained by the mathematician author Lewis Carroll. Similarly, in this math circle session at UCLA,
reflections will "mystically" become rotations, rotations will turn into translations, and translations will transform into reflections! Is this possible and mathematically sound? Come to this
talk to find out what happened just a month ago at the Bay Area Math Olympiad and how three different brilliant solutions to the same geometry problem were created by student participants.
We will be making paths and circuits around graphs, as well as understanding the ideas of an Euler Path and Euler Circuit.
Handouts: Circuits and Paths
4/18 Mike's Group, MS 6201: This week we will take a look at some other mathematical games.
/ Clint's Group, MS 6221: This week we look at problems that can be solved by thinking about graphs. (Last week we looked at trees, a special kind of graph with no cycles.)
Handouts: Graph Theory 1 | Graph Theory I Solutions | Games
Groups are algebraic structures that are used, for example, to study symmetries of geometric objects, the invariance of laws of nature, conservation laws, roots of polynomials, combinatorial
counting problems and many other questions. We are going to take a look at examples of such structures taken from those various applications.
We will be finding the chromatic number of graphs and also testing the 4 color theorem.
Handouts: Graph Coloring
4/25 Mike's Group, MS 6221:This week we will study proofs by mathematical induction, and look at some games from the perspective of induction.
/ Clint's Group, MS 6201: We'll continue our study of graphs, looking at properties such as spanning trees, connectedness, and planarity. (See next week, 05/02/10, for handout and solutions.)
Handouts: Induction
Groups are algebraic structures that are used, for example, to study symmetries of geometric objects, the invariance of laws of nature, conservation laws, roots of polynomials, combinatorial
counting problems and many other questions. We are going to take a look at examples of such structures taken from those various applications.
We will be discussing the expected number of heads or tails after tossing a coin and calculating probabilities of certain numbers on dice.
Handouts: Probability
Clint's group, MS 6201: We will continue our study of some of the properties of graphs begun last week.
5/2/ Mike's group, MS 6221: We will study some simple examples of proofs by induction, moving on to more advanced problems if we have time.
Handouts: Graph Theory 2 | Graph Theory 2 Solutions | Induction Problems
This is the first in a series of 2 meetings. Mathematical probability emerged from the study of gambling, statistics, and the observed outcomes of experiments that are subject to some "random"
external influences. Here we will introduce some of its basic concepts: 1. Sample space and events; 2. Probability functions; 3. Random variables and expectation.
Handouts: Prelude to Probability (read before the meeting)
We will continue examining probabilities with coins and dice, as well as understanding some elementary counting principles.
Handouts: Probability and Reducing Fractions
5/9/ Clint's group, MS 6201: We do a little more graph theory, then switch gears and look at some problems that can be solved using logical reasoning.
2010 Mike's group, MS 6221: This week we will try to finish as many of the induction problems as possible from the last two weeks.
Handouts: Logic Puzzles Handout | Graph Theory 3 | Graph Theory 3 Solutions | Induction Problems
We will use the background in probability from last time to explore Random Walks.
We will be doing some elementary counting problems, including combinations and permutations.
Handouts: Multiplication Principle
Clint's group, MS 6201: Clint is out of town, so assistants Alyssa and Liz will lead the session this week. We'll look at binary and other non-10 bases, and play some Nim.
5/16 Mike's group, MS 6221: This week we will take a look at some problems in Graph Theory.
2010 Handouts: Graph Theory Problems
The Gaussian integers are a pretty set of numbers in the complex plane. Their properties resemble properties of the ordinary integers, but even better, they help to explain some properties of
the ordinary integers. We'll discuss what the Gaussian integers are and why they work the way they do. Knowledge of the complex numbers is not assumed; we'll review what's needed. (For those who
are interested, some notes from Math Circle sessions on complex numbers from Fall 2009 are at http://www.math.ucla.edu/~baker/circle/.)
We will be re-examining reflections and rotations, but this time from the perspective of a permutation. We will also examine compositions of reflections and rotations and their commutativity.
Handouts: Handout
Clint's group, MS 6201: We'll continue our study of the property of alternative bases, esp. binary, and its application to the game of Nim.
5/23 Mike's group, MS 6221: This week we will continue to study properties of graphs, particularly planar graphs.
2010 Handouts: Nim and P-positions | Graph Formulas
Geometers have been interested in the symmetry and aesthetic beauty of regular polygons and regular polyhedra since antiquity. Ancient Greeks even associated their 5 classical elements to the 5
convex regular polyhedra in 3 dimensions. In these two talks, we will use complex numbers and its close cousin, the quaternions, to study the symmetries of these beautiful objects and see how
their symmetries can have an impact on our lives.
We will continue studying geometric transformations, this time of a square. We will also find what the inverses are of these transformations, as well as which transformations commute.
Handouts: Commutativity and Inverses
Clint's group, MS 6201:
We will conclude our study of Nim strategy and turn an eye to the strategy behind a number of other mathematical games.
5/30 Mike's group, MS 6221:
2010 We will continue with the worksheet from last week (see last week on the Math Circle calendar), and discuss map coloring theorems (the six color theorem and possibly the five color theorem).
Handouts: Graph Formulas
This week, we will continue from last week by looking at symmetries of regular polyhedra and describing them using quaternions | {"url":"https://circles.math.ucla.edu/circles/archive.shtml?year=2009","timestamp":"2024-11-07T02:24:13Z","content_type":"text/html","content_length":"69549","record_id":"<urn:uuid:d07e6342-f764-4793-9699-56df7b08ad32>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00457.warc.gz"} |
G. GÜLPINAR And Y. Karaaslan, "Investigation of the effect of the off-diagonal Onsager rate coefficient on the relaxation dynamics of anhydrous dihalides of iron-group elements," PHYSICS LETTERS A ,
vol.375, no.6, pp.978-983, 2011
GÜLPINAR, G. And Karaaslan, Y. 2011. Investigation of the effect of the off-diagonal Onsager rate coefficient on the relaxation dynamics of anhydrous dihalides of iron-group elements. PHYSICS LETTERS
A , vol.375, no.6 , 978-983.
GÜLPINAR, G., & Karaaslan, Y., (2011). Investigation of the effect of the off-diagonal Onsager rate coefficient on the relaxation dynamics of anhydrous dihalides of iron-group elements. PHYSICS
LETTERS A , vol.375, no.6, 978-983.
GÜLPINAR, GÜL, And Yenal Karaaslan. "Investigation of the effect of the off-diagonal Onsager rate coefficient on the relaxation dynamics of anhydrous dihalides of iron-group elements," PHYSICS
LETTERS A , vol.375, no.6, 978-983, 2011
GÜLPINAR, GÜL And Karaaslan, Yenal. "Investigation of the effect of the off-diagonal Onsager rate coefficient on the relaxation dynamics of anhydrous dihalides of iron-group elements." PHYSICS
LETTERS A , vol.375, no.6, pp.978-983, 2011
GÜLPINAR, G. And Karaaslan, Y. (2011) . "Investigation of the effect of the off-diagonal Onsager rate coefficient on the relaxation dynamics of anhydrous dihalides of iron-group elements." PHYSICS
LETTERS A , vol.375, no.6, pp.978-983.
@article{article, author={GÜL GÜLPINAR and Yenal Karaaslan}, title={Investigation of the effect of the off-diagonal Onsager rate coefficient on the relaxation dynamics of anhydrous dihalides
of iron-group elements}, journal={PHYSICS LETTERS A}, year={2011}, volume={375}, number={6}, pages={978-983} } | {"url":"https://avesis.deu.edu.tr/activitycitation/index/1/a74e8d11-e4a9-4881-b9c0-1cc5d9075d02","timestamp":"2024-11-06T04:45:23Z","content_type":"text/html","content_length":"12710","record_id":"<urn:uuid:b8a9488e-2bba-4c07-91c8-8880d672b009>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00884.warc.gz"}
What is the Difference Between NPV and IRR?
Difference Between NPV and IRR
Net Present Value (NPV) and Internal Rate of Return (IRR) are both financial metrics used in capital budgeting and investment appraisal. They help businesses decide whether or not to undertake a
particular investment. Here’s what each of them represents:
• Net Present Value (NPV): NPV calculates the present value of cash inflows and outflows of a project or investment. In essence, it tells you how much an investment is worth in today’s dollars. The
idea is that a dollar today is worth more than a dollar in the future due to inflation and the potential earning capacity of money. A positive NPV indicates that the projected earnings (in
present dollars) are expected to be greater than the costs, and thus the investment could be a good one.
• Internal Rate of Return (IRR): The IRR is the discount rate at which the NPV of all cash flows (both inflows and outflows) from a project or investment equals zero. In other words, it is the rate at
which the present value of future cash inflows equals the initial investment. The IRR can be used to compare the profitability of different investments: the higher the IRR, the more desirable the investment.
While both are useful, they sometimes lead to different conclusions. NPV gives a dollar amount that an investment is expected to produce, which can be easier to understand. However, it relies on a
discount rate, which might be difficult to choose accurately. IRR provides a break-even rate, which can give a good sense of risk, but it can be misleading when comparing projects of different sizes
or timings of cash flows.
Example of the Difference Between NPV and IRR
Let’s consider a simple investment scenario to illustrate the difference between NPV and IRR.
Assume you have an opportunity to invest in a project that requires an upfront investment of $100,000. This project is expected to generate cash inflows of $40,000 per year for the next three years.
To calculate NPV, you’ll need a discount rate. Let’s use a discount rate of 10% to reflect the return you could have received from an alternative investment with similar risk.
The NPV calculation is as follows:
\(\text{NPV} = \frac{\text{Year 1 cash inflow}}{(1+ \text{discount rate})^1} + \frac{\text{Year 2 cash inflow}}{(1+ \text{discount rate})^2} + \frac{\text{Year 3 cash inflow}}{(1+ \text{discount rate})^3} - \text{Initial Investment} \)
\(= \frac{\$40,000}{(1+0.10)^1} + \frac{\$40,000}{(1+0.10)^2} + \frac{\$40,000}{(1+0.10)^3} - \$100,000 \)
\(= \$36,364 + \$33,058 + \$30,052 - \$100,000 \)
\(= -\$526 \)
The negative NPV of $526 means the present value of cash inflows is $526 less than the initial investment, suggesting that the project may not be a good investment given the chosen discount rate.
To find the IRR, we would set the NPV equation to zero and solve for the discount rate. This is often done through trial and error, spreadsheet software, or financial calculators, as it involves
solving a polynomial equation. For the cash flows above, solving numerically gives an IRR of approximately 9.7%.
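As a concrete illustration, here is a minimal R sketch that reproduces the NPV figure above and solves for the IRR numerically; the `npv` helper and the cash-flow vector are defined here only for this example (base R's `uniroot` does the root-finding).

```r
# Cash flows from the example: initial outlay, then three annual inflows of $40,000
cash_flows <- c(-100000, 40000, 40000, 40000)

# NPV at a given discount rate: each cash flow is discounted by (1 + rate)^t, t = 0, 1, 2, 3
npv <- function(rate, cf) sum(cf / (1 + rate)^(seq_along(cf) - 1))

npv(0.10, cash_flows)   # roughly -526, the figure derived above

# The IRR is the rate at which the NPV equals zero; solve for it numerically
irr <- uniroot(function(r) npv(r, cash_flows), interval = c(0, 1))$root
irr                     # roughly 0.097, i.e. an IRR of about 9.7%
```

The search interval of 0 to 1 works because the NPV is positive at a 0% rate and negative at a 100% rate, so the sign change that `uniroot` requires is guaranteed.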
An IRR of roughly 9.7% means the project breaks even when the discount rate is 9.7%. If the required rate of return for this project is less than 9.7%, the project could be considered a good investment.
In this example, NPV tells us that at a discount rate of 10%, the project is not a good investment, while the IRR tells us the project breaks even at a discount rate of about 9.7%. If the company requires a
return of less than 9.7%, the project could be a good investment. If the company requires a return higher than 9.7%, it should not undertake the project. | {"url":"https://www.superfastcpa.com/what-is-the-difference-between-npv-and-irr/","timestamp":"2024-11-05T16:18:05Z","content_type":"text/html","content_length":"397511","record_id":"<urn:uuid:a0bde4ed-aa5c-462c-b08f-d59bd5ca89b8>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00084.warc.gz"}
6.3.1 General Definitions
In this section, the term complex refers to a collection of cells together with their boundaries. A partition into cells can be derived from a complex, but the complex contains additional information
that describes how the cells must fit together. The term cell decomposition still refers to the partition of the space into cells, which is derived from a complex.
It is tempting to define complexes and cell decompositions in a very general manner. Imagine that any partition of the space could be called a cell decomposition. A cell could be so complicated that the notion
would be useless. Even the entire space could be declared as one big cell. It is more useful to build decompositions out of simpler cells, such as ones that contain no holes. Formally, this requires that every
\(k\)-dimensional cell is homeomorphic to an open \(k\)-dimensional unit ball. From a motion planning perspective, this still yields cells that are quite complicated, and it will be up to the particular cell
decomposition method to enforce further constraints to yield a complete planning algorithm.
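Stated compactly (with generic symbols introduced here rather than the book's own notation): a cell \(C\) of dimension \(k\) must satisfy \(C \cong B^k = \{x \in \mathbb{R}^k : \lVert x \rVert < 1\}\), where \(\cong\) denotes homeomorphism and \(B^k\) is the open unit ball.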
Two different complexes will be introduced. The simplicial complex is explained because it is one of the easiest to understand. Although it is useful in many applications, it is not powerful enough
to represent all of the complexes that arise in motion planning. Therefore, the singular complex is also introduced. Although it is more complicated to define, it encompasses all of the cell
complexes that are of interest in this book. It also provides an elegant way to represent topological spaces. Another important cell complex, which is not covered here, is the CW-complex [439].
Steven M LaValle 2020-08-14 | {"url":"https://lavalle.pl/planning/node273.html","timestamp":"2024-11-09T19:24:50Z","content_type":"text/html","content_length":"6692","record_id":"<urn:uuid:3171e2ea-54d0-4f3f-83be-261f224941a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00669.warc.gz"}
Notation Sentence Examples
• These days, tablature, not sheet music, is the primary form of music notation that is available on the Internet.
• This method of notation has various disadvantages.
• Hence, excluding ao, we may, in partition notation, write down the fundamental solutions of the equation, viz.
• The system of notation (by figures) concerning which he read a paper before the Academie des Sciences, August 22, 1742, was ingenious, but practically worse than useless, and failed to attract
attention, though the paper was published in 1743 under the title of Dissertation sur la musique moderne.
• Denoting the value of T at any velocity v by T(v), then (8) T(v) = sum of all the preceding values of ΔT plus an arbitrary constant, expressed by the notation (9) T(v) = Σ(Δv)/gp + a constant, or
∫dv/gp + a constant, in which p is supposed known as a function of v.
• At present it has not seriously threatened the hold of Gregory's notation on the critical world, but it will probably have to be adopted, at least to a large extent, when von Soden's text is
• The dating implied by the latter notation is wrong, as I certainly belongs to the 12th, not to the 10th century, and 118 is probably later than 209.
• It is customary to quote these by small letters of the Latin alphabet, but there is a regrettable absence of unanimity in the details of the notation.
• His edition is historically very important as it introduced the system of notation which, in the amplified form given to it by Gregory, is still in general use.
• The notation log x is generally employed in English and American works, but on the continent of Europe writers usually denote the function by lx or lg x.
• He also reduced the solar parallax to 14" (less than a quarter of Kepler's estimate), corrected the sun's semi-diameter to 15' 45", recommended decimal notation, and was the first to make tidal
• His notation is rather unwieldy.
• Martius yellow, C10H5(N02)20Na H20, the sodium salt of 2.4 dinitro-a-naphthol (for notation see Naphthalene), is prepared by the action of nitric acid on a-naphthol -2.4-disulphonic acid.
• Mat hematics.T he Egyptian notation for whole numbers was decimal, each power of 10 up to 100,000 being represented by a different figure, on much the same principle as the Roman numerals.
• Owing to the very imperfect notation of sound in the writing, the highly important subject, of the verbal roots and verbal forms was perhaps the obscurest branch of Egyptian grammar when Sethe
first attacked it in 1895.
• As a whole, we gain the Impression that a really distinct and more primitive stage of hieroglyphic writing by a substantially vaguer notation of words lay not far behind the time of the 1st
• The infinite superiority of the Greek alphabet with its full notation of vowels was readily seen, but piety and custom as yet barred the way to its full adoption.
• While admitting, therefore, that there are several facts in favour of the theory of an African origin of the Bovidae, final judgment Notation to E to t from from or even f 8va balsa.
• No better testimony to the value of the quaternion method could be desired than the constant use made of its notation by mathematicians like Clifford (in his Kinematic) and by physicists like
ClerkMaxwell (in his Electricity and Magnetism).
• The letters of abraxes, in the Greek notation, make up the number 365, and the Basilidians gave the name to the 365 orders of spirits which, as they conceived, emanated in succession from the
Supreme Being.
• It is convenient to have a notation which shall put in evidence the reciprocal character.
• The two diagrams are portions of reciprocal figures, so that Bows notation is applicable.
• If all the masses lie in a plane (1=0) we have, in the notation of (25), c2 = o, and therefore A = Mb, B = Ma, C = M (a +b), so that the equation of the momental ellipsoid takes the form b2x2+a
y2+(a2+b2) z1=s4.
• With the same notation for moments and products of inertia as in II (38), we have and therefore by (1),
• Henrici illustrated the subject by a simple and ingenious notation.
• The Musica Enchiriadis, published with other writings of minor importance in Gerbert's Scriptores de Musica, and containing a complete system of musical science as well as instructions regarding
notation, has now been proved to have originated about half a century later than the death of the monk Hucbald, and to have been the work of an unknown writer belonging to the close of the 10th
century and possibly also bearing the name of Hucbald.
• The notation employed by English writers for the general continued fraction is al b2 b3 b4 a 2 "' Continental writers frequently use the notation a 1 ?
• The notation for this type of fraction is b4 + b5+ b3+ al b2 + a4 a3 It is obviously equal to the series b 2 b3 b4 b5 al +a 2 +aza3a4 + a2a3a4a 5 + .
• His chief advance on Bombelli was in his notation.
• Along with Sir John Herschel and George Peacock he laboured to raise the standard of mathematical instruction in England, and especially endeavoured to supersede the Newtonian by the Leibnitzian
notation in the infinitesimal calculus.
• In the first volume Of the Entwickelungen he applied the method of abridged notation to the straight line, circle and conic sections, and he subsequently used it with great effect in many of his
researches, notably in his theory of cubic curves.
• Besides his edition of the Rumanian Church service-books with musical notation, he published a series of tales, proverbs and songs either from older texts or from oral information; and he made
the first collection' of popular songs, Spitalul amorului, " The Hospital of Love " (1850-53), with tunes either composed by himself or obtained from the gipsy musicians who alone performed them.
• The Principia gives no information on the subject of the notation adopted in the new calculus, and it was not until 1693 that it was com municated to the scientific world in the second volume of
Dr Wallis's works.
• Least Common Multiple 3.4.4 (B) Properties depending on the Scale of Notation 3.4.5 48.
• The representation of numbers by spoken sounds is called numeration; their representation by written signs is called notation.
• The systems adopted for numeration and for notation do not always agree with one another; nor do they always correspond with the idea which the numbers subjectively present.
• The notation is then said to be in the scale of which ten is the base, or in the denary scale.
• The figures used in the Hindu notation might be used to express numbers in any other scale than the denary, provided new symbols were introduced if the base of the scale exceeded ten.
• The use of the denary scale in notation is due to its use in numeration (§ 18); this again being due (as exemplified by the use of the word digit) to the primitive use of the fingers for
• Over a large part of the civilized world the introduction of the metric system (§ 118) has caused the notation of all numerical quantities to be in the denary scale.
• In Great Britain and her colonies, however, and in the United States, other systems of notation still survive, though there is none which is consistently in one scale, other than the denary.
• Within each denomination, however, the denary notation is employed exclusively, e.g.
• In order to apply arithmetical processes to a quantity expressed in two or more denominations, we must first express it in terms of a single denomination by means of a varying scale of notation.
• The system of counting by twenties instead of by tens has existed in many countries; and, though there is no corresponding notation, it still exhibits itself in the names of numbers.
• The Roman notation has been explained above (§ 15).
• The numeration was in the denary scale, so that it did not agree absolutely with the notation.
• The principle of subtraction from a higher number, which appeared in notation, also appeared in numeration, but not for exactly the same numbers or in exactly the same way; thus XVIII was
two-from-twenty, and the next number was onefrom-twenty, but it was written XIX, not IXX.
• The Hebrews had a notation containing separate signs (the letters of the alphabet) for numbers from 1 to 10, then for multiples of 10 up to 100, and then for multiples of 100 up to 400, and
later up to 1000.
• The earliest Greek system of notation was similar to the Roman, except that the symbols for 50, 500, &c., were more complicated.
• On the island of Ceylon there still exists, or existed till recently, a system which combines some of the characteristics of the later Greek (or Semitic) and the modern European notation; and it
is conjectured that this was the original Hindu system.
• In other words, the denary scale, though adopted in notation and in numeration, does not arise in the corresponding mental concept until we get beyond too.
• Under certain conditions it is less; thus IIII, the old Roman notation for four, is difficult to distinguish from III, and this may have been the main reason for replacing it by IV (§ 15).
• Finger-counting is of course natural to children, and leads to grouping into fives, and ultimately to an understanding of the denary system of notation.
• Addition is the process of expressing (in numeration or notation) a whole, the parts of which have already been expressed; while, if a whole has been expressed and also a part or parts,
subtraction is the process of expressing the remainder.
• The application of the above principles, and of similar principles with regard to multiplication and division, to numerical quantities expressed in any of the diverse British denominations,
presents no theoretical difficulty if the successive denominations are regarded as constituting a varying scale of notation (§17).
• The difficulty may be minimized by using the notation explained in § 17.
• This relation is of exactly the same kind as the relation of the successive digits in numbers expressed in a scale of notation whose base is n.
• They only apply accurately to divisions by 2, 4, 5, 10, 20, 25 or 50; but they have the convenience of fitting in with the denary scale of notation, and they can be extended to other divisions by
using a mixed number as numerator.
• A fraction written in this way is called a decimal fraction; or we might define a decimal fraction as a fraction having a power of To for its denominator, there being a special notation for
writing such fractions.
• This notation survives in reference to the minute (') and second (") of angular measurement, and has been extended, by analogy, to the foot (') and inch (").
• Various systems were tried before the present notation came to be generally accepted.
• Under one system, for instance, the continued sum 5 + X 5 + 8 X 7 X 5 would be denoted 7 by 8 I 5; this is somewhat similar in principle to a decimal notation, but with digits taken in the
reverse order.
• There was, however, no development in the direction of decimals in the modern sense, and the Arabs, by whom the Hindu notation of integers was brought to Europe, mainly used the sexagesimal
division in the ' " "' notation.
• Even where the decimal notation would seem to arise naturally, as in the case of approximate extraction of a square root, the portion which might have been expressed as a decimal was converted
into sexagesimal fractions.
• It is worthy of notice that the invention of this notation appears to have been due to practical needs, being required for the purpose of computation of compound interest.
• In each case the grouping system involves rearrangement, which implies the commutative law, while the counting system requires the expression of a quantity in different denominations to be
regarded as a notation in a varying scale (§§ 17, 3 2).
• If we have to divide 935 by 240, taking 12 and 20 as factors, the result will depend on the fact that, in the notation (20) (12) of § 1 7, 935=3 " 1 7 " 1 1.
• For the latter, and for systems of notation, reference may also be made to Peacock's article " Arithmetic " in the Encyclopaedia Metropolitana, which contains a detailed account of the Greek
• The Greek geometers were perfectly familiar with the property of an ellipse which in the Cartesian notation is x 2 /a 2 +y 2 /b 2 =1, the equation of the curve; but it was as one of a number of
properties, and in no wise selected out of the others for the characteristic property of the curve.
• These papers taken together constitute a great treatise on logic, in which he substituted improved systems of notation, and developed a new logic of relations, and a new onymatic system of
logical expression.
• The formation of the larger islands is volcanic, their surface rugged, their vegetation luxuriant, and their appearance very 1 The notation n!
• In modern notation, if we denote the ordinate by y, the distance of the foot of the ordinate from the vertex (the abscissa) by x, and the latus rectum by p, these relations may be expressed as 31
2 for the hyperbola.
• Regular expression pattern strings may not contain null bytes, but can specify the null byte using the \ number notation.
• Students can choose between a range of topics, including analysis, notation, historical subjects, ethnomusicology, performance and music cognition.
• The notation used here conforms with that being proposed for specifying polynucleotide conformation.
• This notation may impose timing constraints on the process flow.
• The server came by, looked at the now desecrated Fifth Floor restaurant cloths, and made a little notation on her pad.
• A quick word about hornpipes The hornpipe rhythm is useful to illustrate one more way abc allows the notation of notes of differing length.
• Viète introduced the first systematic algebraic notation in his book In artem analyticam isagoge published at Tours in 1591.
• One would expect this random method of notation to be discordant, however the resultant music is surprisingly reminiscent of classical piano minuets.
• Children can also learn basic musical notation & learn to recognize 8 musical notation & learn to recognize 8 musical instruments.
• It contains 15 well known jazz tunes with piano accompaniment written in single stave notation with chord symbols.
• Form The dot notation is used to separate each group of Classes.
• To simplify notation we define,, ,, ,, ,, , and the total hemoglobin in all forms.
• I now want to introduce a notation that goes some way toward making this idea precise.
• Skills on how to read and interpret the notation on one part of the course will be useful when learning other components.
• If using the subscript notation, solvers often create a larger copy of the puzzle or employ a sharp or mechanical pencil.
• We shall use decimal notation for units in this module.
• Children are known to often invent idiosyncratic notation to describe their mathematical findings, or to use algebraic notation in unusual ways.
• The spacing of ABC notation will tend to mirror the grouping which would be used in standard notation.
• We can get this if we go to Reverse Polish or postfix notation.
• Each character, in particular those which cannot be typed directly from the keyboard, can also be typed in three digit octal notation.
• Where possible the health gain notation reflects both the type of evidence and the small size of some of the samples.
• To introduce arbitrary characters into a string using octal or hexadecimal notation.
• The filename prefix for each ion follows spectroscopic notation.
• Results explained with reference to scientific theory using correct scientific notation.
• The unquestionable popularity of Curwen's Tonic sol-fa induced many publishers to issue hymnals employing sol-fa notation.
• In the subscript notation the candidate numerals are written in subscript notation the candidate numerals are written in subscript in the cells.
• Finger picking patterns are written using tablature and standard music notation.
• There is no need to read conventional music notation as all the music is written in easy-to-read mandolin tablature.
• The non-specific pitch indication was also used with a specific rhythmic notation to achieve rhythmic unisons within ' improvised ' tonalities and harmonies.
• The superiority of this notation over that of Dalton is not so obvious when we consider such simple cases as the above, but chemists are now acquainted with very complex molecules containing
numerous atoms; cane sugar, for example, has the formula C 12 H 22 0, 1.
• This single instance of the use of the decimal point in the midst of an arithmetical process, if it stood alone, would not suffice to establish a claim for its introduction, as the real
introducer of the decimal point is the person who first saw that a point or line as separator was all that was required to distinguish between the integers and fractions, and used it as a
permanent notation and not merely in the course of performing an arithmetical operation.
• He began by reading, with the most profound admiration and attention, the whole of Faraday's extraordinary self-revelations, and proceeded to translate the ideas of that master into the succinct
and expressive notation of the mathematicians.
• Berthelot's notation defines both initial and final systems by giving the chemical equation for the reaction considered, the thermal effect being appended, and the state of the various substances
being affixed to their formulae after brackets.
• The validity of his fundamental position was impaired by the absence of a well-constituted theory of series; the notation employed was inconvenient, and was abandoned by its inventor in the
second edition of his Mecanique; while his scruples as to the admission into analytical investigations of the idea of limits or vanishing ratios have long since been laid aside as idle.
• Notation of Multiples.-The above is arithmetic. The only thing which it is necessary to import from algebra is the notation by which we write 2X instead of 2 X X or 2.
• With this notation the values of x and y may be expressed in the forms x q q /N q ', gg /Nq', which are free from ambiguity, since scalars are commutative with quaternions.
• Vieta, who does not avail himself of the discoveries of his predecessors - the negative roots of Cardan, the revised notation of Stifel and Stevin, &c. - introduced or popularized many new terms
and symbols, some of which are still in use.
• In the Eulerian notation u, v, w denote the components of the velocity q parallel to the coordinate axes at any point (x, y, z) at the time t; u, v, w are functions of x, y, z, t, the independent
variables; and d is used here to denote partial differentiation with respect to any one of these four independent variables, all capable of varying one at a time.
• Recorde's chief contributions to the progress of algebra were in the way of systematizing its notation (see ALGEBRA, History).
• In the first volume of this treatise Plucker introduced for the first time the method of abridged notation which has become one of the characteristic features of modern analytical geometry (see
Geometry, Analytical).
• To my dismay I found that it was in the American notation.
• I received another paper and a table of signs by return mail, and I set to work to learn the notation.
• But, when I took up Algebra, I had a harder time still--I was terribly handicapped by my imperfect knowledge of the notation.
• The unquestionable popularity of Curwen 's Tonic Sol-fa induced many publishers to issue hymnals employing sol-fa notation.
• It contains 15 well known tunes written in single stave notation with chord symbols.
• The standard method for defining subdivision algorithms uses a matrix notation.
• In the subscript notation the candidate numerals are written in subscript in the cells.
• Introduction of suffix notation and the summation convention including and.
• Verify that the account was successfully closed and that the proper notation was made.
• An extended alert is an option after the initial report, which allows the notation to remain on file for up to seven years.
• Tablature is a music notation system that allows guitar players to learn their favorite songs without having to know how to read music.
• You get tabs, chords with diagrams, standard musical notation, vocal melody lines and lyrics.
• Playing by guitar tabs, where a musician reads the special notation system called guitar tablature to learn the music.
• When looking to learn musical tunes from guitar sheet music, it helps to be familiar with some of the shorthand notation that comes with the territory.
• P-I-M-A - this is the notation for the fingering on the right hand for picking the strings.
• It is possible to sight-read and therefore sight-play a piece of music written in sheet notation.
• This kind of music notation is probably more helpful for the serious student of classical guitar rather than the hobbyist looking to play modern music.
• Luckily, a great system of notation called tablature, or tabs for short, has been developed that allows novice guitarists with no musical training to conquer this all important step.
• Traditional music notation has time signatures, rests, holds, ties, etc. that inform the player about how the music relates to time.
• In addition, tablature is a very specific type of musical notation where every note is recorded.
• The site also features a Tip of the Day video at the end of the notation to give musicians a chance to learn a little something else after they've mastered Sweet Home Alabama.
• If you're good at reading standard musical notation, you can play using the same chords from the piano version on your guitar.
• Tablature is a form of guitar notation that illustrates where guitar players need to put their fingers in order to play a chord or a note.
• Depending on your skill level as a player, you might need to find multiple forms of notation for a particular song to help you learn how to play it correctly.
• All the songs are transcribed in both standard notation and guitar tabs.
• This article will answer the 21st century question, How do u read guitar tabs? by explaining the ancient system of music notation called tablature to you.
• What this means is that, in a way, the notation has nothing to do with music at all.
• However, sheet music is a much more detailed and accurate form of music notation because it takes into account subtleties in the music like dynamics and rhythm.
• Tablature, on the other hand, being a very simplified version of music notation, is very easy to learn and even very novice players can quickly learn how to create tabs.
• Be forewarned; some tablature books highlight the full dynamics of a song in tab notation like hammer-on's and staccato notes.
• Consisting (for guitar) of five staff lines and four spaces between the lines, sheet music provides standard musical notation that directs which notes to play at any given moment.
• Once you've learned a few easy steps, reading musical notation will quickly become much easier!
• Traditionally, it is considered more beneficial for players to learn to read the standard musical notation provided in classical guitar sheet music before moving on to tablature in order to
develop good playing technique.
• Remember those spaces - With so many lines going on in musical notation, it can be difficult to remember sometimes that the spaces have a note value.
• Take Lessons - Don't hesitate to take at least a few lessons with a professional guitar player or music teacher who can help you learn the basics of musical notation.
• Since notation systems such as chord charts and tabs are so popular for the guitar, many guitarists might wonder why anyone would need sheet music.
• Classical Guitar Pieces - This great text has 50 famous classical guitar songs in both standard notation and tab.
• Fingerpicking Beatles - 30 Beatles songs are included in this excellent book, and they are arranged in standard notation for a solo guitar.
• If you can read standard musical notation, it is definitely in your best interest to invest in good acoustic guitar sheet music.
• Since most of the music notation on the Internet is provided free of charge, this is usually a case of getting what you pay for, but it is still very frustrating.
• One of the best ways to make a piece of music your own is to consult as many types of music notation you can find and start to blend the ideas into one cohesive version.
• Tablature is a form of musical notation that makes it possible for someone with no knowledge of how to read music to play music.
• This allows you to learn scales and skills without having to learn notation.
• Tablature is a form of musical notation that just about anyone can understand.
• You may come up with a number of different varieties of notation for the song, including tablature and chord progressions.
• Generally, the results will say which type of notation the file is in.
• The notation is designed for the person who wants to spend less time learning theory and more time mastering her guitar.
• Still, if you've never done it before, you may find the notation a little confusing.
• Many of the sites offer chord notation as well as tablature.
• Tablatureis a system of musical notation for those who do not know how to read music.
• This is done through a method called "tablature" which shows the reader where to physically place her fingers on the guitar's fret board rather than using traditional musical notation.
• This easy-to-learn method of notation will help you play a number of your favorite bossa nova songs, as well as other types of Brazilian music in no time.
• Some of her most notable campaigns include photos in 2007 where she was featured amid smaller models without notation regarding her size - a rarity in conventional fashion advertising.
• A "made in" notation with the wrong point of origin is a dead giveaway.
• Instead of reading music in standard notation, the position to play notes is displayed by numbers, which represent the location of where you must hold down your string(s).
• However, the video game guitar tabs notation can help with a few techniques to assist you with your music piece, like hammer ons and pull offs, slides, string bends, vibrato and harmonics.
• Video game tabs are a specific form of notation for reading music that does not require you to understand traditional sheet music.
• The tablature system of music notation is a lot easier for beginners to pick up and understand because it directly corresponds to the instrument being played.
• One of the biggest shortcomings of video game tabs, however, is that the notation is not complete, forcing the musician to "play by ear" in terms of how long to hold a particular note or chord.
• When the results come back, you'll see a notation specifying how many Good Sam parks there are that match your search criteria.
• There is a standard notation for dance step illustrations that makes it very easy to read and understand what each step entails.
• There is a notation indicating when the record was created, so you can determine if the information may be out of date.
• If you have a price book, make a notation of what the standard price is for each item.
• Mark the items you have coupons for with a "C" or other notation.
• Close the letter with you signature and name follow by the enclosures notation.
• If you come across something you don't understand, make a notation on a separate piece of paper.
• Not everything in your possession is worthy of notation in your inventory.
• Often, sponsors prefer that the jersey have some notation as to the business that has sponsored your team.
• The answer for graphics designers came in the form of hexadecimal notation - which starts with the digits 0-9 just like decimal number systems, but instead of going on to "10" it instead goes
into the letters A-F.
• So to write the decimal number "10" in hexadecimal notation, it would be "A".
• By using hexadecimal notation, this can be exact regardless of the screen calibration or even color blindness of the people working on it.
• Each problem was something unique; the elements of transition from one to another were wanting; and the next step which mathematics had to make was to find some method of reducing, for instance,
all curves to a common notation.
• These examples show that Napier was in possession of all the conventions and attributes that enable the decimal point to complete so symmetrically our system of notation, viz.
• A word is necessary on Diophantus' notation.
• We may here notice the important chemical symbolism or notation introduced by Berzelius, which greatly contributed to the definite and convenient representation of chemical composition and the
tracing of chemical reactions.
• At a later date Berzelius denoted an oxide by dots, equal in number to the number of oxygen atoms present, placed over the element; this notation survived longest in mineralogy.
• Although the system of Berzelius has been modified and extended, its principles survive in the modern notation.
• Such an expression as a l b 2 -a 2 b i, which is aa 2 ab 2 aa x 2 2 ax1' is usually written (ab) for brevity; in the same notation the determinant, whose rows are a l, a 2, a3; b2, b 2, b 3; c 1,
c 2, c 3 respectively, is written (abc) and so on.
• Observe the notation, which is that introduced by Cayley into the theory of matrices which he himself created.
• It is convenient to retain x, to denote x r /r!, so that we have the consistent notation xr =x r /r!, n (r) =n(r)/r!, n[r] =n[r]/r!.
• According to this notation, the three equations of motion are dt2 = b2v2E + (a2 - b2) d.s dt =b2v2rj+(a2 - b2) dy d2 CIF - b2p2+(a2_b2)dz It is to be observed that denotes the dilatation of
volume of the element situated at (x, y, z).
• In the notation of the calculus the relations become - dH/dp (0 const) = odv /do (p const) (4) dH/dv (0 const) =odp/do (v const) The negative sign is prefixed to dH/dp because absorption of heat
+dH corresponds to diminution of pressure - dp. The utility of these relations results from the circumstance that the pressure and expansion co efficients are familiar and easily measured,
whereas the latent heat of expansion is difficult to determine.
• Substituting for H its value from (3), and employing the notation of the calculus, we obtain the relation S - s =0 (dp /do) (dv/do),.
• The documents discovered by Dom Germain Morin, the Belgian Benedictine, about 1888, point to the conclusion that Guido was a Frenchman and lived from his youth upwards in the Benedictine
monastery of St Maur des Fosses where he invented his novel system of notation and taught the brothers to sing by it.
• There is no doubt that Guido's method shows considerable progress in the evolution of modern notation.
• This, in the notation of §§ 46 and 54, may be written?
• In works on sound it is usual to adopt Helmholtz's notation, in which the octave from bass to middle C is written c d e f g a b c'.
• But a new system of musical notation which he thought he had discovered was unfavourably received by the Academie des sciences, where it was read in August 1742, and he was unable to obtain
• Among the natives of Arezzo the most famous are the Benedictine monk Guido of Arezzo, the inventor of the modern system of musical notation (died c. 1050), the poet Petrarch, Pietro Aretino, the
satirist (1492-1556), and Vasari, famous for his lives of Italian painters.
• Gregory's notation is more generally used, and Scrivener's, though still followed by a few English scholars, is likely to become obsolete.
• The Above Expression Must Therefore Be Diminished By The Number Of Units In 4, Or By () W (This Notation Being Used To Denote The Quotient, In A Whole Number, That Arises From Dividing X By 4).
• Guitar tabs are helpful for all guitar players because they provide an easy to read notation system that can be understood by people who can't read music.
• Accordingly, the typical form for such a complex number is x+yi, and then with this notation the above-mentioned definition of multiplication is invariably adopted.
• In the preface to this work, which is dedicated to one Dionysius, Diophantus explains his notation, naming the square, cube and fourth powers, dynamis, cubus, dynamodinimus, and so on, according
to the sum in the indices.
• Its great merit consists in the complete notation and symbolism, which avoided the cumbersome expressions of the earlier algebraists, and reduced the art to a form closely resembling that of
• While still an undergraduate he formed a league with John Herschel and Charles Babbage, to conduct the famous struggle of "d-ism versus dot-age," which ended in the introduction into Cambridge of
the continental notation in the infinitesimal calculus to the exclusion of the fluxional notation of Sir Isaac Newton.
• One drawback of Thomsen's notation is that the nature of the final system is not indicated, although this defect in general causes no ambiguity.
• In this notation the fundamental relation is written (l + a i x +01Y) (I + a 2x+l32Y) (1 + a3x+133y)...
• Even in ordinary algebra the notation for powers and roots disturbs the symmetry of the rational theory; and when a schoolboy illegitimately extends the distributive law by writing -V (a+b)a+J b,
he is unconsciously emphasizing this want of complete harmony.
• His notation is based primarily on that of Harriot; but he differs from that writer in retaining the first letters of the alphabet for the known quantities and the final letters for the unknowns.
• The famous inscriptions with hymns to Apollo accompanied by musical notation were found on stones belonging to this treasury.
• It is not, however, necessary that the notation of the calculus should be employed throughout.
• In the notation of the integral calculus, this area is equal to f x o udx; but the notation is inconvenient, since it implies a division into infinitesimal elements, which is not essential to the
idea of an area.
• According to the notation adopted by Meyer the atomic susceptibility k=KX atomic-weight/ (density X 1000).
• His travels and mercantile experience had led E t u eopre him to conclude that the Hindu methods of computing were in advance of those then in general use, and in 1202 he published his Liber
Abaci, which treats of both algebra and arithmetic. In this work, which is of great historical interest, since it was published about two centuries before the art of printing was discovered, he
adopts the Arabic notation for numbers, and solves many problems, both arithmetical and algebraical.
• He introduced the terms multinomial, trinomial, quadrinomial, &c., and considerably simplified the notation for decimals.
• Various special algebras (for example, quaternions) may be expressed in the notation of the algebra of matrices.
• Employing the notation in which the molecule is represented vertically with the aldehyde group at the bottom, and calling a carbon atom+or - according as the hydrogen atom is to the left or
right, the possible configurations are shown in the diagram.
• The symbol e 0 behaves exactly like i in ordinary algebra; Hamilton writes I, i, j, k instead of eo, el, e2, es, and in this notation all the special rules of operation may he summed up by the
equalities = - I.
• These works possess considerable originality, and contain many new improvements in algebraic notation; the unknown (res) is denoted by a small circle, in which he places an integer corresponding
to the power.
• Evolution and involution are usually regarded as operations of ordinary algebra; this leads to a notation for powers and roots, and a theory of irrational algebraic quantities analogous to that
of irrational numbers. | {"url":"https://sentence.yourdictionary.com/notation","timestamp":"2024-11-03T22:24:21Z","content_type":"text/html","content_length":"824207","record_id":"<urn:uuid:e5f2d834-778b-49ef-948b-14f8c52d9af8>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00362.warc.gz"} |
---
title: "CNAIM"
output: rmarkdown::html_vignette
vignette: >
---
# Introduction
The green transition will require significant investment in utility infrastructure. Many countries have restrictive income caps or fixed tariffs that do not allow for investment to support the
electrification of transport, heating and agricultural processes. Incentive-based revenue caps are the answer, and will likely be adopted in many countries over the next decade. This package allows
regulators, data scientists and researchers to calculate and understand:
1. Asset lifetimes
2. Economic consequences of asset failures, both minor and major
3. Monetary risk
4. Probability of failure parameter estimates based on fault statistics
## Probability of failure
In CNAIM, the probability of failure (PoF) is modelled by truncating the Taylor series expansion of an exponential function after the third-order term.
\(\begin{align*} PoF &= K \cdot e^{(C \cdot H)} \\ &= K \cdot \sum_{n=0}^\infty \frac{(C \cdot H)^n}{n!} \\ &\approx K \cdot \left[1 + (C \cdot H) + \frac{(C \cdot H)^2}{2!} + \frac{(C \cdot H)^3}{3!}\right] \end{align*}\)
where:
• \(K\) scales the PoF to a failure rate that matches observed fault statistics
• \(C\) describes the shape of the PoF curve
• \(H\) is the health score based on observed and measured explanatory variables
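As a quick illustration of the truncation, the following is a minimal R sketch of the formula above; the function name and the values of \(K\), \(C\) and \(H\) are purely illustrative and are not calibrated CNAIM constants.

```r
# Truncated-exponential PoF: keep Taylor-series terms up to third order
pof_truncated <- function(K, C, H) {
  x <- C * H
  K * (1 + x + x^2 / factorial(2) + x^3 / factorial(3))
}

# Purely illustrative parameter values (not calibrated CNAIM constants)
pof_truncated(K = 0.0005, C = 1.09, H = 5.5)
```

Since the omitted terms of the series are all positive, the truncated value stays below \(K \cdot e^{C \cdot H}\), and the gap widens as the health score grows.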
The definition of a functional failure can be divided into three classes of failure modes:
• Incipient - minor failure
• Degraded - significant failure
• Catastrophic - total failure
Example for a 6.6/11 kV transformer
Assuming a transformer:
• has a utilization of 55%
• is placed indoors
• is sited at an altitude of 75m
• has 20km to the coast
• is sited in an area with a corrosion category index of 2
• is 25 years old
• has a low partial discharge
• has an oil acidity of 0.1mgKOH/g
• has a normal temperature reading
• is in good observed condition
• has a default reliability factor of 1
Then we can call the function as follows:
pof <- pof_transformer_11_20kv(
hv_transformer_type = "6.6/11kV Transformer (GM)",
utilisation_pct = 55,
placement = "Indoor",
altitude_m = 75,
distance_from_coast_km = 20,
corrosion_category_index = 2,
age = 25,
partial_discharge = "Low",
oil_acidity = 0.1,
temperature_reading = "Normal",
observed_condition = "Default",
reliability_factor = "Default")
sprintf("The probability of failure is %.2f%% per year", 100*pof)
[1] "The probability of failure is 0.01% per year"
[2] "The probability of failure is 50.00% per year"
The only mandatory input variable is age; the rest have defaults, so you can call the function like this:
pof <- pof_transformer_11_20kv(age = 55)
sprintf("The probability of failure is %.2f%% per year", 100*pof)
[1] "The probability of failure is 0.03% per year"
[2] "The probability of failure is 127.31% per year"
Consequences of failure
The CNAIM methodology’s second key element is the consequence of a failure. When combined with probability of failure, the consequences of failure can be used to derive the monetary network risk.
Consequence of failure calculations are based on the same failure modes as probability of failure.
The consequences of failure can be divided into four:
• Financial consequences of failure
• Safety consequences of failure
• Environmental consequences of failure
• Network performance consequences of failure
The sub-consequences have an associated asset specific reference cost of failure based on the British DNOs’ experience and other objective sources. All reference costs are currently in 2012/2013
prices. The reference cost of failure for all sub-categories are scaled with respect to the specific conditions and locations of the individual asset.
Financial consequences of failure considers the costs associated with replacement and repairs that return the asset to its condition before the incident.
Safety consequences of failure considers the likelihood and the cost that a failure could be hazardous to a member of the public or a worker, including the likelihood that the failure could be fatal. The safety
implications are taken from the Electricity Safety, Quality and Continuity Regulations (ESQCR).
Environmental consequences of failure considers the cost of a potential oil spill and mitigation of the extremely potent greenhouse gas, sulfur hexafluoride.
Network performance consequences of failure considers the cost a failure imposes on the customers served by the asset and the number of customer interruption minutes.
Example for a 6.6/11kV transformer
Assuming a transformer:
• has a rated capacity of 750 kVA
• has confined access i.e. assessed to be of a “Type B”
• is exhibiting a low risk to the public
• is exposed to a medium risk of trespassers
• is located 95 meters from a stream
• is serving 750 customers
• has an average demand of 1 kVA per customer
Financial consequences of failure
The financial reference cost of failure for a 6.6/11kV transformer is £7,739, which is scaled by the rated capacity measured in kVA and the accessibility. The financial consequences of failure are
found using:
financial_cof <- f_cof_transformer_11kv(kva = 750, type = "Type B")
sprintf("The financial consequences of failure is GBP %.f", round(financial_cof))
[1] "The financial consequences of failure is GBP 13364"
Safety consequences of failure
The safety reference cost of failure for a 6.6/11kV transformer is £4,262, which is scaled by the location and the risk the transformer represents to the public. The function below is able to
calculate the safety consequences of failure for switchgears, transformers and overhead lines:
safety_cof <- s_cof_swg_tf_ohl(type_risk = "Low",
location_risk = "Medium",
asset_type_scf = "6.6/11kV Transformer (GM)")
sprintf("The safety consequences of failure is GBP %.f", round(safety_cof))
[1] "The safety consequences of failure is GBP 4341"
Network performance consequences of failure
The reference network performance cost of failure for a 6.6/11kV transformer is £4,862. This cost is scaled according to the number of customers connected to the transformer and kVA per customer.
This function can calculate network consequences of failure for all assets with the exception of EHV and 132kV assets:
network_cof <- n_cof_excl_ehv_132kv_tf(asset_type_ncf = "6.6/11kV Transformer (GM)",
no_customers = 750,
kva_per_customer = 1)
sprintf("The network performance consequences of failure is GBP %.f", round(network_cof))
[1] "The network performance consequences of failure is GBP 16286"
Consequences of failure
The overall consequences of failure in our example can be found using:
cof_transformer <- cof(financial_cof, safety_cof, environmental_cof, network_cof)
sprintf("The consequences of failure is GBP %.f", cof_transformer)
[1] "The consequences of failure is GBP 35896"
The function adds the sub-consequences together into a total consequence of failure. For the 6.6/11kV transformer described in this example, it is now possible to derive the monetary risk.
A quick way to find the consequences of failure for the 6.6/11kV transformer in this example is:
cof_short_cut <- cof_transformer_11kv(kva = 750, type = "Type B",
type_risk = "Low", location_risk = "Medium",
prox_water = 95, bunded = "Yes",
no_customers = 750, kva_per_customer = 1)
all.equal(cof_transformer, cof_short_cut)
[1] TRUE
Monetary risk
Once probability of failure and consequences of failure have been calculated for each asset, monetary risk is calculated as:
\(Risk = PoF \cdot CoF\)
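For the 6.6/11kV transformer in the running example, this amounts to multiplying the two quantities computed earlier (a sketch; note that pof_transformer_11_20kv returned two probability figures above, so the product gives one monetary risk figure for each):
risk_gbp <- pof * cof_transformer
risk_gbp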
Risk matrices for each asset class, along with cost-benefit analyses of interventions (reinvestment and maintenance), are submitted to the regulator, allowing the utility and the regulator to reach consensus
on the right balance of cost and reliability.
Individual asset risk
Given an asset with a probability of failure of 0.08% per year and consequences of failure equal to £18,232, we can visualize and analyze which risk class this asset falls into with the following functions:
# Generate an empty 5x4 matrix
matrix_structure <- risk_matrix_structure(5,4,NA)
# Monetary risk for one asset
risk_coordinates <- risk_calculation(matrix_dimensions = matrix_structure,
id = "Transformer1",
pof = 0.08,
cof = 18232,
asset_type = "6.6/11kV Transformer (GM)")
# Plot the asset on the risk matrix (plotting function name assumed)
risk_matrix_points(matrix_dimensions = matrix_structure,
dots_vector = risk_coordinates,
dot_radius = 4)
Asset class risk
Given a population of assets within the same asset class, we can visualize how monetary risk is distributed with the following example:
# Generate an empty 5x4 matrix
risk_data_matrix <- risk_matrix_structure(5,4,NA)
risk_data_matrix$value <- sample(1:30, size = nrow(risk_data_matrix), replace = TRUE)
# Visualise how monetary risk is distributed across the matrix (plotting function name assumed)
risk_matrix_summary(risk_data_matrix)
Non-linear bins
Sometimes it is desirable to create the matrix with non-linear intervals, since each interval represents a bin of CoF and PoF, bins which typically increase in size as the CoF and health scores
increase. The inputs x_intervals and y_intervals should match the x and y dimensions of the risk matrix data frame, but can contain any values, since these are internally normalised to 1.
# Generate an empty 5x4 matrix
risk_data_matrix <- risk_matrix_structure(5,4,NA)
risk_data_matrix$value <- sample(1:30, size = nrow(risk_data_matrix), replace = TRUE)
# Plot with non-linear intervals (plotting function name assumed)
risk_matrix_summary(risk_data_matrix,
x_intervals = c(0.1,0.1,0.1,0.2,0.3),
y_intervals = c(0.75,0.75,1,1.5))
Matrices with different dimensions
Although the CNAIM standard specifies a rigid 5x4 matrix, it might be desirable to implement risk matrices of other sizes. The CNAIM R package offers this flexibility. For example, to make a 4x4
matrix:
# Generate an empty 4x4 matrix
risk_data_matrix <- risk_matrix_structure(4,4,NA)
risk_data_matrix$value <- sample(1:30, size = nrow(risk_data_matrix), replace = TRUE) | {"url":"https://cran.hafro.is/web/packages/CNAIM/vignettes/cnaim.html","timestamp":"2024-11-08T12:43:38Z","content_type":"text/html","content_length":"809209","record_id":"<urn:uuid:8d35eb33-19e1-4849-a77c-4a7ec4bb8bd9>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00754.warc.gz"}
Exponential Functions - Formula, Properties, Graph, Rules
What is an Exponential Function?
An exponential function measures an exponential decrease or increase in a particular base. For instance, let us assume a country's population doubles yearly. This population growth can be depicted as
an exponential function.
Exponential functions have numerous real-world applications. Mathematically speaking, an exponential function is displayed as f(x) = b^x.
In this piece, we discuss the essentials of an exponential function along with appropriate examples.
What is the equation for an Exponential Function?
The common equation for an exponential function is f(x) = b^x, where:
1. b is the base, and x is the exponent or power.
2. b is fixed, and x varies
For instance, if b = 2, then we get the exponential function f(x) = 2^x. And if b = 1/2, then we get the exponential function f(x) = (1/2)^x.
Provided b is greater than 0 and not equal to 1, the exponent x can be any real number.
How do you plot Exponential Functions?
To graph an exponential function, we must locate the points where the function crosses the axes. These are called the x and y-intercepts.
As the exponential function has a constant, one must set the value for it. Let's take the value of b = 2.
To discover the y-coordinates, one must set values for x. For instance, for x = 1, y will be 2; for x = 2, y will be 4.
By following this technique, we get the domain and the range values for the function. After having the values, we need to chart them on the x-axis and the y-axis.
What are the properties of Exponential Functions?
All exponential functions share similar characteristics. When the base of an exponential function is more than 1, the graph would have the following qualities:
• The line intersects the point (0,1)
• The domain is all real numbers
• The range is more than 0
• The graph is a curved line
• The graph is increasing
• The graph is smooth and continuous
• As x approaches negative infinity, the graph is asymptotic to the x-axis
• As x approaches positive infinity, the graph increases without bound.
In situations where the bases are fractions or decimals between 0 and 1, an exponential function displays the following characteristics:
• The graph passes the point (0,1)
• The range is larger than 0
• The domain is entirely real numbers
• The graph is decreasing
• The graph is a curved line
• As x nears positive infinity, the line in the graph is asymptotic to the x-axis.
• As x advances toward negative infinity, the graph increases without bound
• The graph is smooth and continuous
There are a few basic rules to bear in mind when dealing with exponential functions.
Rule 1: Multiply exponential functions with the same base, add the exponents.
For instance, if we have to multiply two exponential functions that have a base of 2, then we can write it as 2^x * 2^y = 2^(x+y).
Rule 2: To divide exponential functions with an equivalent base, subtract the exponents.
For example, if we need to divide two exponential functions that have a base of 3, we can write it as 3^x / 3^y = 3^(x-y).
Rule 3: To raise an exponential function to a power, multiply the exponents.
For instance, if we have to increase an exponential function with a base of 4 to the third power, then we can compose it as (4^x)^3 = 4^(3x).
Rule 4: An exponential function with a base of 1 is always equivalent to 1.
For instance, 1^x = 1 regardless of what the value of x is.
Rule 5: An exponential function with a base of 0 is equal to 0 for any positive exponent.
For example, 0^x = 0 for every positive value of x (0^0 and negative exponents are not defined).
Exponential functions are usually used to signify exponential growth. As the variable increases, the value of the function increases faster and faster.
Example 1
Let's look at the example of bacterial growth. Let's say we have a culture of bacteria that doubles every hour; then at the end of the first hour, we will have twice as many bacteria.
At the end of hour two, we will have quadruple as many bacteria (2 x 2).
At the end of hour three, we will have 8x as many bacteria (2 x 2 x 2).
This rate of growth can be represented using an exponential function as follows:
f(t) = 2^t
where f(t) is the total sum of bacteria at time t and t is measured hourly.
Example 2
Moreover, exponential functions can illustrate exponential decay. Let's say we had a radioactive substance that decays to half its amount every hour; then at the end of hour one, we
will have half as much substance.
After two hours, we will have 1/4 as much material (1/2 x 1/2).
At the end of the third hour, we will have an eighth as much material (1/2 x 1/2 x 1/2).
This can be represented using an exponential equation as below:
f(t) = 1/2^t
where f(t) is the volume of substance at time t and t is calculated in hours.
As shown, both of these examples use a similar pattern, which is why they can be shown using exponential functions.
In fact, any constant-ratio rate of change can be expressed using an exponential function. Keep in mind that in an exponential function the exponent is the variable while the base stays the same. This means that any growth or decay in which the base itself changes is not an exponential function.
For instance, with compound interest the interest rate, and hence the base (1 + rate), stays the same from one period to the next, while the number of compounding periods is what varies; this is why a compound-interest balance grows exponentially.
An exponential function can be graphed using a table of values. To get the graph of an exponential function, we must enter different values for x and then calculate the corresponding values for y.
Let us check out the following example.
Example 1
Graph the following exponential function:
y = 3^x
To start, let's make a table of values.
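For instance, taking x = 0, 1, 2, 3 and 4 gives:
x: 0, 1, 2, 3, 4
y: 1, 3, 9, 27, 81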
As you can see, the values of y rise very quickly as x increases. If we were to draw this exponential function graph on a coordinate plane, it would look like the following:
As shown, the graph is a curved line that rises from left to right ,getting steeper as it continues.
Example 2
Graph the following exponential function:
y = 1/2^x
To begin, let's create a table of values.
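For instance, taking x = 0, 1, 2, 3 and 4 gives:
x: 0, 1, 2, 3, 4
y: 1, 0.5, 0.25, 0.125, 0.0625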
As shown, the values of y decrease very rapidly as x increases. This is because 1/2 is less than 1.
If we were to draw the x-values and y-values on a coordinate plane, it would look like what you see below:
The above is a decay function. As shown, the graph is a curved line that falls from left to right and flattens out as it approaches the x-axis.
The Derivative of Exponential Functions
The derivative of an exponential function f(x) = a^x is f'(x) = a^x ln(a). In the special case of the natural base, f(x) = e^x, the derivative of the function is the function itself.
This can be written as follows: f'(x) = e^x = f(x).
Exponential Series
The exponential series is a power series whose terms are powers of an independent variable. The general form of the exponential series is:
e^x = 1 + x + x^2/2! + x^3/3! + ... (the sum of x^n/n! over n = 0, 1, 2, ...)
Grade Potential is Able to Help You Succeed at Exponential Functions
If you're struggling to comprehend exponential functions, or merely require a little extra assistance with math overall, consider partnering with a tutor. At Grade Potential, our Clearwater math
tutors are experts in their subjects and can supply you with the one-on-one support you need to thrive.
Call us at (727) 332-0362 or contact us now to learn more about how we can assist you in reaching your academic potential. | {"url":"https://www.clearwaterinhometutors.com/blog/exponential-functions-formula-properties-graph-rules","timestamp":"2024-11-02T14:26:32Z","content_type":"text/html","content_length":"83528","record_id":"<urn:uuid:8c50bf7c-5a69-4dff-a1ad-cf34a403a814>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00758.warc.gz"} |
SAT Math Topics and Question Format | Turito US Blog
Are you preparing for the SAT? Nervous about the SAT math topics? It is no wonder that math is a challenging portion of the SAT, and you should familiarize yourself with it through the right approach to SAT Math test preparation.
Well, don’t worry. We have brought you this article to help you prepare for the SAT math subject test. You must understand each topic and strategize plans for better preparation.
SAT Math Topics
There are mainly four topics covered in SAT Math subject test – Heart of Algebra, Problem Solving and Data Analysis, Passport to Advanced Math, and Additional Topics.
Heart of Algebra
It typically involves linear equations or inequality with one variable, systems of linear equations, and functions that are found in different fields of study. In the Heart of Algebra, SAT math
questions revolve around solving linear equations, inequalities, functions, and graphs. The College Board has defined the official topics for SAT math.
These are as follows:
• Solving linear equations and linear inequalities
• Understanding linear functions
• Linear inequality and equation word problems
• Graphing linear equations
• Linear function word problems
• Systems of linear inequalities word problems
• Interpreting how a linear graph relates to an equation or system of equations or inequalities.
Problem Solving and Data Analysis
In this section, topics include ratios, rates, proportions, percentages, units, table data scatterplots, key features of graphs, linear and exponential growth, and data inferences. It also covers the
centre, spread, and shape of distributions, data collection, and conclusions. Problem Solving and Data Analysis provide a strong foundation for the math you will solve in the future.
In this Area of Study, you will:
• Solve problems to measure ratios, rates, proportions, unit rates, or density.
• Use ratios, rates, and percentages to solve a multistep problem.
• Select an equation that best fits a scatterplot.
• Summarize data, such as probabilities, by using tables.
• Predict populations based on sample data.
• Determine mean, median, mode, range, and standard deviation by using statistics.
• Analyze graphs, tables, or text summaries.
Passport to Advanced Math
It is the third area of study in the SAT Math topics. Some of the example categories in Passport to Advanced Math include Arithmetic word problems such as per cent, ratio, and proportion; Properties
of integers like even, odd, prime numbers, divisibility, and so forth; Rational numbers; and Sets – union, intersection, elements.
In this section, problems focus on math necessary to pursue further study in science or economics and career opportunities in STEM. Official topics under Passport to Advanced Math include:
• Nonlinear expressions
• Quadratic and exponential word problems
• Radicals and rational exponents
• Operations with rational expressions and polynomials
• Nonlinear equation graphs
• Polynomial factors and graphs
• Linear and quadratic systems
• Structure in expressions
• Isolating quantities
• Functions
Additional SAT Math Topics
The SAT Math Test also involves additional topics, including geometry – including applications of volume, area, surface, and coordinate geometry. Some topics also focus on trigonometry and radian
measures that are essential for study in STEM fields and problems with complex numbers.
Preparing for the SAT Math Test
The SAT Math Test analyzes your understanding of mathematical concepts, skills, and fluency in math and the ability to apply those concepts and skills to real-world problems. The test will focus
profoundly on the above-mentioned areas of math.
Questions on the Math Test are aimed at solving the problem you will do in college math, science, and social science courses; and in your professional and personal life. The questions will assess
your skills in numerous ways and improve your ability to use mathematical ideas and methods that can be applied to an array of settings.
The SAT Math Test is divided into two portions: Calculator and No Calculator. In the no-calculator portion, you may not use a calculator, and the questions there emphasize your ability to solve problems efficiently and accurately without one. In the calculator portion, you may use a calculator, although many of those questions can be answered more quickly without it, so it is up to your judgment to decide when a calculator actually helps.
Questions in the calculator portion are generally more complex than those in the no-calculator portion. You should therefore bring an approved scientific or graphing calculator to use on some questions. Used sensibly, a calculator can reduce the time required to complete the test and help you avoid missing a question due to computation errors.
Question Format
There are two types of questions on SAT Math Practice Test – multiple-choice questions and grid-in questions. However, most questions, about 80%, are multiple-choice and consist of a question with
four options. You will be required to select the correct answer. There is no negative marking for incorrect answers, so you should attempt every question.
In grid-in questions, you will be required to provide an answer to each question in a number (fraction, decimal, or positive integer) that you will enter in the grid-like answer sheet. These types of
questions make up about 20% of the test.
The Math Test also includes reference information that can help when you answer the test questions. But you need to make sure that you have a practice test with this information beforehand. To
perform better, you should be comfortable working with these facts and formulas.
Through the SAT Math Test, you will have the chance to carry out processes flexibly, precisely, efficiently, and strategically. You will solve problems quickly by identifying and using the most
efficient solution approaches. The SAT Math Test will improve your conceptual understanding.
You will demonstrate your understanding of math concepts, operations, and relations. For example, you might be required to make connections between properties of linear equations, their graphs, and
the contexts they represent.
Here’s a Breakdown of the SAT Math Sections’ Time, Number of Questions, and Types.
Sections Number of Questions Time
No Calculator 15 multiple choices, 5 grid-ins 25 minutes
Calculator 30 multiple-choice, 8 grid-in questions (including one Extended Thinking question) 55 minutes
Total 58 Questions 80 minutes
Difficulty Level of SAT Math
Each competitive exam’s difficulty varies from person to person. The situation is similar when it comes to the SAT, particularly the SAT Math section. However, if a student has studied math in high
school and has a thorough understanding of the concepts, the test should be easy.
Well, you now have all the details about what topics are there in SAT Math section, which types of questions are asked in the test, etc. All you need to do is follow each instruction and information
provided for the test, and do a lot of practice beforehand. The more you practice, the more you immerse and prepare better for the test.
Frequently Asked Questions
1. What are the SAT math topics?
A. In the SAT Math subject test, there are primarily four subjects covered: Heart of Algebra, Problem Solving and Data Analysis, Passport to Advanced Math, and Additional Topics.
2. How Is the SAT Math Test Structured?
A. Math and Evidence-Based Reading and Writing are the two main sections of the SAT. The Evidence-Based Reading and Writing section itself has two distinct parts, the Reading Test and the Writing and Language Test, and each part has its own time limit and number of questions. Math is the second main section to consider.
3. Which Math Topics Are Tested on the SAT?
A. The SAT math section will test your knowledge of the following 4 topics:
• Heart of Algebra (33%)
• Data analysis and problem-solving (29%)
• Advanced Math Passport (28%)
• Additional Math Topics (10%)
4. How can students ace the math section of the SAT?
A. You must fully comprehend every common subject evaluated on the SAT if you want to ace it. You will be tested on a variety of reading, writing, and math skills on the SAT. You can determine the
topics you need to learn more about by analyzing the types of questions you frequently answer. | {"url":"https://www.turito.com/blog/sat/sat-math-topics-and-question-format","timestamp":"2024-11-12T06:00:15Z","content_type":"application/xhtml+xml","content_length":"160428","record_id":"<urn:uuid:7a450717-4219-4e8b-9839-29c3902a079c>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00441.warc.gz"}
Projectile deformation effects on single-nucleon removal reactions
We discuss intermediate-energy single-nucleon removal reactions from deformed projectile nuclei. The removed nucleon is assumed to originate from a given Nilsson model single-particle state and the
inclusive cross sections, to all rotational states of the residual nucleus, are calculated. We investigate the sensitivity of both the stripping cross sections and their momentum distributions to the
assumed size of the model space in the Nilsson model calculations and to the shape of the projectile and residue. We show that the cross sections for small deformations follow the decomposition of
the Nilsson state in a spherical basis. In the case of large and prolate projectile deformations the removal cross sections from prolate-like Nilsson states, having large values for the asymptotic
quantum number n_z, are reduced. For oblate-like Nilsson states, with small n_z, the removal cross sections are increased. Whatever the deformation, the residue momentum distributions are found to
remain robustly characteristic of the orbital angular momentum decomposition of the initial state of the nucleon in the projectile. | {"url":"https://researchportalplus.anu.edu.au/en/publications/projectile-deformation-effects-on-single-nucleon-removal-reaction","timestamp":"2024-11-07T03:43:21Z","content_type":"text/html","content_length":"50469","record_id":"<urn:uuid:cc5ab819-5474-4712-8c05-78f7f8618a93>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/WARC/CC-MAIN-20241107021136-20241107051136-00164.warc.gz"}
The Wheatstone bridge
Explore the mathematics behind the famous Wheatstone Bridge circuit.
The circuit below is known as a Wheatstone bridge, a device popularized by Sir Charles Wheatstone. The purpose of the Wheatstone bridge is to measure some unknown impedance. This is achieved by
'balancing' the two legs of bridge, one will contain a variable resistor and the other the component whose impedance we wish to measure.
Can you find the condition on the variable resistor needed for balance ($V = 0$) using loop current analysis?
Can you find the condition for balance using a potential divider argument?
If we now replace the DC voltage source with an AC voltage source of frequency $f$, add a capacitor $C_3$ in series with $R_3$ and added a second capacitor $C_2$ in parallel with $R_2$, at what
frequency will balance occur, in terms of $R_1,R_2,C_2,R_3,C_3$ and $R_x$?
Student Solutions
We can solve the problem using a potential divider or using loop currents.
Loop Currents:
We can assign loop currents to each loop as shown above. At balance the vector sum of the currents through the meter will be zero, so we can assign $I_2$ to both the left and right loops; the
currents through the meter then cancel.
Applying Kirchoff's voltage law to each loop we find that:
$\sum_{\text{voltages}}\ \text{(left-hand loop)} = - (I_2 - I_1)R_1 - I_2 R_x = 0 $
$\sum_{\text{voltages}}\ \text{(right-hand loop)} = -I_2 R_3 -(I_2 - I_1)R_2 = 0 $
We have two independent equations and two unknowns ($I_1$ and $I_2$).
From the left loop: $I_2 = \frac{R_1}{R_1 + R_x} I_1$
From the right loop: $I_2 = \frac{R_2}{R_3 + R_2} I_1$
Equating we see:
$R_1R_3 = R_2R_x$
Potential Divider:
At balance $V_b = V_d$
The potential at C is zero (ground). The potential at A is therefore divided between $R_x$ and $R_3$; it is likewise divided between $R_1$ and $R_2$.
By potential divider:
$V_b = \frac{R_3}{R_x + R_3} V_a$
$V_d = \frac{R_2}{R_1 + R_2} V_a$
Equating $V_b$ and $V_d$ and cross-multiplying, $R_3(R_1 + R_2) = R_2(R_x + R_3)$, which reduces to
$R_1 R_3 = R_2 R_x$
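As an optional cross-check (not part of the original solution), the same loop equations can be solved symbolically, for example with Python's sympy; the variable names below are only illustrative:
import sympy as sp

I1, I2, R1, R2, R3, Rx = sp.symbols('I1 I2 R1 R2 R3 R_x', positive=True)

# Kirchhoff's voltage law around each loop (no current through the meter at balance)
I2_left = sp.solve(sp.Eq(-(I2 - I1)*R1 - I2*Rx, 0), I2)[0]   # I2 from the left loop
I2_right = sp.solve(sp.Eq(-I2*R3 - (I2 - I1)*R2, 0), I2)[0]  # I2 from the right loop

# Equate the two expressions for I2 and clear denominators; the result vanishes
# exactly when R1*R3 equals R2*Rx, reproducing the balance condition above
condition = sp.simplify((I2_left - I2_right) * (R1 + Rx) * (R2 + R3) / I1)
print(sp.expand(condition))   # equals R1*R3 - R2*R_x (up to term ordering)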
If we replace:
$R_2 = Z_2$
We find $Z_2$ by combining the impedance of $C_2$ in parallel with $R_2$:
$Z_2 = \frac{R_2 \cdot \frac{1}{2 \pi f C_2 \mathbf{i}}}{R_2 + \frac{1}{2 \pi f C_2 \mathbf{i}}} = \frac{R_2}{1 + 2 \pi f C_2 R_2 \mathbf{i}}$
where $\mathbf{i} = \sqrt{-1}$
$R_3 = Z_3$
We find $Z_3$ by combining the impedance of $C_3$ in series with $R_3$:
$Z_3 = R_3 + \frac{1}{2 \pi f C_3 \mathbf{i}}$
where $\mathbf{i} = \sqrt{-1}$
From part 1 we know that:
$R_1Z_3 = Z_2R_x$
Substituting $Z_2$ and $Z_3$ and equating real and imaginary terms we find that:
the real part tells us nothing about the frequency (it instead fixes $R_x$ in terms of the resistances and capacitances)
The imaginary part tells us f = $\frac{1}{2 \pi} \sqrt{\frac{1}{C_3 C_2 R_3 R_2}} $ at balance | {"url":"http://nrich.maths.org/problems/wheatstone-bridge","timestamp":"2024-11-07T04:24:36Z","content_type":"text/html","content_length":"40351","record_id":"<urn:uuid:9eb59cb9-99c7-4cc5-9db1-c9ff7157322d>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00588.warc.gz"} |
Random Variables
In the past few lectures, we’ve examined the role of complexity in influencing model performance. We’ve considered model complexity in the context of a tradeoff between two competing factors: model
variance and training error. More specifically, in the last lecture we discussed how the optimal theta after regularization is the following:
\[ \hat{\theta} = \text{arg}\underset{\theta}{\text{min}}\ \textbf{Loss}(\theta, \text{data}) + \lambda\ \textbf{Regularizer}(\theta)\]
Here, the Loss term ensures that the model fits the data and the Regularizer term ensures that the model isn’t too complex. The \(\lambda\) term is the regularization hyperparameter which is tuned
using cross-validation to prevent over-fitting to our data. We also have two options for the regularizer:
• \(L_1\) (absolute) penalty \(\rightarrow\) Lasso Regression
□ Allows for sparsity where we can drop features that are less useful
• \(L_2\) (squared) penalty \(\rightarrow\) Ridge Regression
□ Allows for robustness where the weight is spread out over more features
The goal is to find a balance between bias (underfitting) and variance (overfitting). So far, our analysis has been mostly qualitative. We’ve acknowledged that our choice of model complexity needs to
strike a balance between the two, but we haven’t yet discussed why exactly this tradeoff exists.
To better understand the origin of this tradeoff, we will need to dive into random variables. The next two course notes on probability will be a brief digression from our work on modeling so we can
build up the concepts needed to understand this so-called bias-variance tradeoff. In specific, we will cover:
1. Random Variables: introduce random variables, considering the concepts of expectation, variance, and covariance
2. Estimators, Bias, and Variance: re-express the ideas of model variance and training error in terms of random variables and use this new perspective to investigate our choice of model complexity
We’ll go over just enough probability to help you understand its implications for modeling, but if you want to go a step further, take Data 140, CS 70, and/or EECS 126.
In Data 100, we want to understand the broader relationship between the following:
• Population parameter: a number that describes something about the population
• Sample statistic: an estimate of the number computed on a sample
17.1 Random Variables and Distributions
Suppose we generate a set of random data, like a random sample from some population. A random variable is a function from the outcome of a random event to a number.
It is random since our sample was drawn at random; it is variable because its exact value depends on how this random sample came out. As such, the domain or input of our random variable is all
possible outcomes for some random event in a sample space, and its range or output is the real number line. We typically denote random variables with uppercase letters, such as \(X\) or \(Y\). In
contrast, note that regular variables tend to be denoted using lowercase letters. Sometimes we also use uppercase letters to refer to matrices (such as your design matrix \(\mathbb{X}\)), but we will
do our best to be clear with the notation.
To motivate what this (rather abstract) definition means, let’s consider the following examples:
17.1.1 Example: Tossing a Coin
Let’s formally define a fair coin toss. A fair coin can land on heads (\(H\)) or tails (\(T\)), each with a probability of 0.5. With these possible outcomes, we can define a random variable \(X\) as:
\[X = \begin{cases} 1, \text{if the coin lands heads} \\ 0, \text{if the coin lands tails} \end{cases}\]
\(X\) is a function with a domain, or input, of \(\{H, T\}\) and a range, or output, of \(\{1, 0\}\). In practice, while we don’t use the following function notation, you could write the above as \[X
= \begin{cases} X(H) = 1 \\ X(T) = 0 \end{cases}\]
17.1.2 Example: Sampling Data 100 Students
Suppose we draw a random sample \(s\) of size 3 from all students enrolled in Data 100.
We can define the random variable \(Y\) as the number of data science students in our sample. Its domain is all possible samples of size 3, and its range is \(\{0, 1, 2, 3\}\).
Note that we can use random variables in mathematical expressions to create new random variables.
For example, let’s say we sample 3 students at random from lecture and look at their midterm scores. Let \(X_1\), \(X_2\), and \(X_3\) represent each student’s midterm grade.
We can use these random variables to create a new random variable, \(Y\), which represents the average of the 3 scores: \(Y = (X_1 + X_2 + X_3)/3\).
As we’re creating this random variable, a few questions arise:
• What can we say about the distribution of \(Y\)?
• How does it depend on the distribution of \(X_1\), \(X_2\), and \(X_3\)?
But, what exactly is a distribution? Let’s dive into this!
17.1.3 Distributions
To define any random variable \(X\), we need to be able to specify 2 things:
1. Possible values: the set of values the random variable can take on.
2. Probabilities: the chance that the random variable will take each possible value.
□ Each probability should be a real-number between 0 and 1 (inclusive)
□ The total probability of all possible values should be 1.
If \(X\) is discrete (has a finite number of possible values), the probability that a random variable \(X\) takes on the value \(x\) is given by \(P(X=x)\), and probabilities must sum to 1: \(\
underset{\text{all } x}{\sum} P(X=x) = 1\),
We can often display this using a probability distribution table. In the coin toss example, the probability distribution table of \(X\) is given by.
\(x\) \(P(X=x)\)
0 \(\frac{1}{2}\)
1 \(\frac{1}{2}\)
The distribution of a random variable \(X\) describes how the total probability of 100% is split across all the possible values of \(X\), and it fully defines a random variable. If you know the
distribution of a random variable you can:
• compute properties of the random variables and derived variables
• simulate the random variables by randomly picking values of \(X\) according to its distribution using np.random.choice, df.sample, or scipy.stats.<dist>.rvs(...)
The distribution of a discrete random variable can also be represented using a histogram. If a variable is continuous, meaning it can take on infinitely many values, we can illustrate its
distribution using a density curve.
We often don’t know the (true) distribution and instead compute an empirical distribution. If you flip a coin 3 times and get {H, H, T}, you may ask — what is the probability that the coin will land
heads? We can come up with an empirical estimate of \(\frac{2}{3}\), though the true probability might be \(\frac{1}{2}\).
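A rough sketch of this idea in code, using the numpy random generator (the seed and sample sizes here are arbitrary choices):
import numpy as np

rng = np.random.default_rng(42)   # arbitrary seed

# Simulate 3 flips of a fair coin (1 = heads, 0 = tails), as in the example above
flips = rng.choice([1, 0], size=3, p=[0.5, 0.5])
print(flips.mean())        # empirical estimate of P(heads) from only 3 flips

# With many more flips, the empirical estimate settles near the true value of 0.5
many_flips = rng.choice([1, 0], size=100_000, p=[0.5, 0.5])
print(many_flips.mean())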
Probabilities are areas. For discrete random variables, the area of the red bars represents the probability that a discrete random variable \(X\) falls within those values. For continuous random
variables, the area under the curve represents the probability that a continuous random variable \(Y\) falls within those values.
If we sum up the total area of the bars/under the density curve, we should get 100%, or 1. Continuous random variables are more complex (requiring a probability density function), so take a
probability class like Data 140 to learn more.
One common probability distribution is the Bernoulli distribution which is a binary variable that takes on two values: 0 or 1. For example,
\[X = \begin{cases} 1, \text{if the coin lands heads} \\ 0, \text{if the coin lands tails} \end{cases}\]
The Bernoulli distribution is parameterized by p the probability P(X=1), and the outcomes can be seen in the probability table below. Note that we use the notation X ~ Bernoulli(p) to denote that the
random variable X has a Bernoulli distribution with parameter p.
Outcome Probability
1 p
0 1-p
Rather than fully write out a probability distribution or show a histogram, there are some common distributions that come up frequently when doing data science. These distributions are specified by
some parameters (the numbers inside the parentheses), which are constants that specify the shape of the distribution. In terms of notation, the ‘~’ again means “has the probability distribution of”.
These common distributions are listed below:
1. Bernoulli(\(p\)): If \(X\) ~ Bernoulli(\(p\)), then \(X\) takes on a value 1 with probability \(p\), and 0 with probability \(1 - p\). Bernoulli random variables are also termed the “indicator”
random variables.
2. Binomial(\(n\), \(p\)): If \(X\) ~ Binomial(\(n\), \(p\)), then \(X\) counts the number of 1s in \(n\) independent Bernoulli(\(p\)) trials.
3. Categorical(\(p_1, ..., p_k\)): takes on value \(i\) with probability \(p_i\), where \(p_1 + \dots + p_k = 1\). When every value is equally likely (each probability is 1 / (number of possible values)), this is the uniform distribution on a finite set of values.
4. Uniform on the unit interval (0, 1): The density is flat at 1 on (0, 1) and 0 elsewhere. We won’t get into what density means as much here, but intuitively, this is saying that there’s an equally
likely chance of getting any value on the interval (0, 1).
5. Normal(\(\mu\), \(\sigma^2\)): The probability density is specified by \(\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}\). This bell-shaped distribution comes up fairly often in
data, in part due to the Central Limit Theorem you saw back in Data 8.
17.2 Expectation and Variance
There are several ways to describe a random variable. The methods shown above — a table of all samples \(s, X(s)\), distribution table \(P(X=x)\), and histograms — are all definitions that fully
describe a random variable. Often, it is easier to describe a random variable using some numerical summary rather than fully defining its distribution. These numerical summaries are numbers that
characterize some properties of the random variable. Because they give a “summary” of how the variable tends to behave, they are not random. Instead, think of them as a static number that describes a
certain property of the random variable. In Data 100, we will focus our attention on the expectation and variance of a random variable.
17.2.1 Expectation
The expectation of a random variable \(X\) is the weighted average of the values of \(X\), where the weights are the probabilities of each value occurring. There are two equivalent ways to compute
the expectation:
1. Apply the weights one sample at a time: \[\mathbb{E}[X] = \sum_{\text{all possible } s} X(s) P(s)\].
2. Apply the weights one possible value at a time: \[\mathbb{E}[X] = \sum_{\text{all possible } x} x P(X=x)\]
The latter is more commonly used as we are usually just given the distribution, not all possible samples.
We want to emphasize that the expectation is a number, not a random variable. Expectation is a generalization of the average, and it has the same units as the random variable. It is also the center
of gravity of the probability distribution histogram, meaning if we simulate the variable many times, it is the long-run average of the simulated values.
17.2.1.1 Example 1: Coin Toss
Going back to our coin toss example, we define a random variable \(X\) as: \[X = \begin{cases} 1, \text{if the coin lands heads} \\ 0, \text{if the coin lands tails} \end{cases}\]
We can calculate its expectation \(\mathbb{E}[X]\) using the second method of applying the weights one possible value at a time: \[\begin{align*} \mathbb{E}[X] &= \sum_{x} x P(X=x) \\ &= 1 \cdot 0.5 + 0 \cdot 0.5 \\ &= 0.5 \end{align*}\]
Note that \(\mathbb{E}[X] = 0.5\) is not a possible value of \(X\); it’s an average. The expectation of X does not need to be a possible value of X.
17.2.1.2 Example 2
Consider the random variable \(X\):
\(x\) \(P(X=x)\)
3 0.1
4 0.2
6 0.4
8 0.3
To calculate its expectation, \[\begin{align*} \mathbb{E}[X] &= \sum_{x} x P(X=x) \\ &= 3 \cdot 0.1 + 4 \cdot 0.2 + 6 \cdot 0.4 + 8 \cdot 0.3 \\ &= 0.3 + 0.8 + 2.4 + 2.4 \\ &= 5.9 \end{align*}\]
Again, note that \(\mathbb{E}[X] = 5.9\) is not a possible value of \(X\); it’s an average. The expectation of X does not need to be a possible value of X.
17.2.2 Variance
The variance of a random variable is a measure of its chance error. It is defined as the expected squared deviation from the expectation of \(X\). Put more simply, variance asks: how far does \(X\)
typically vary from its average value, just by chance? What is the spread of \(X\)’s distribution?
\[\text{Var}(X) = \mathbb{E}[(X-\mathbb{E}[X])^2]\]
The units of variance are the square of the units of \(X\). To get it back to the right scale, use the standard deviation of \(X\): \[\text{SD}(X) = \sqrt{\text{Var}(X)}\]
Like with expectation, variance and standard deviation are numbers, not random variables! Variance helps us describe the variability of a random variable. It is the expected squared error between the
random variable and its expected value. As you will see shortly, we can use variance to help us quantify the chance error that arises when using a sample \(X\) to estimate the population mean.
By Chebyshev’s inequality, which you saw in Data 8, no matter what the shape of the distribution of \(X\) is, the vast majority of the probability lies in the interval “expectation plus or minus a
few SDs.”
If we expand the square and use properties of expectation, we can re-express variance as the computational formula for variance.
\[\text{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2\]
This form is often more convenient to use when computing the variance of a variable by hand, and it is also useful in Mean Squared Error calculations, as \(\mathbb{E}[X^2] = \text{Var}(X)\) if \(X\)
is centered, that is, if \(\mathbb{E}[X]=0\).
How do we compute \(\mathbb{E}[X^2]\)? Any function of a random variable is also a random variable. That means that by squaring \(X\), we’ve created a new random variable. To compute \(\mathbb{E}[X^
2]\), we can simply apply our definition of expectation to the random variable \(X^2\).
\[\mathbb{E}[X^2] = \sum_{x} x^2 P(X = x)\]
17.2.3 Example: Die
Let \(X\) be the outcome of a single fair die roll. \(X\) is a random variable with distribution \[P(X = x) = \begin{cases} \frac{1}{6}, & \text{if } x \in \{1,2,3,4,5,6\} \\ 0, & \text{otherwise} \end{cases}\]
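Applying the two definitions above to this die roll, \[\mathbb{E}[X] = \sum_{x=1}^{6} x \cdot \frac{1}{6} = \frac{21}{6} = 3.5\] and \[\text{Var}(X) = \mathbb{E}[X^2] - (\mathbb{E}[X])^2 = \frac{1 + 4 + 9 + 16 + 25 + 36}{6} - 3.5^2 = \frac{91}{6} - \frac{49}{4} = \frac{35}{12} \approx 2.92,\] so \(\text{SD}(X) = \sqrt{35/12} \approx 1.71\).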
We can summarize our discussion so far in the following diagram:
17.3 Sums of Random Variables
Often, we will work with multiple random variables at the same time. A function of a random variable is also a random variable. If you create multiple random variables based on your sample, then
functions of those random variables are also random variables.
For example, if \(X_1, X_2, ..., X_n\) are random variables, then so are all of these:
• \(X_n^2\)
• \(\#\{i : X_i > 10\}\)
• \(\text{max}(X_1, X_2, ..., X_n)\)
• \(\frac{1}{n} \sum_{i=1}^n (X_i - c)^2\)
• \(\frac{1}{n} \sum_{i=1}^n X_i\)
Many functions of random variables that we are interested in (e.g., counts, means) involve sums of random variables, so let’s dive deeper into the properties of sums of random variables.
17.3.1 Properties of Expectation
Instead of simulating full distributions, we often just compute expectation and variance directly. Recall the definition of expectation: \[\mathbb{E}[X] = \sum_{\text{all possible}\ x} x P(X=x)\]
From it, we can derive some useful properties:
1. Linearity of expectation. The expectation of the linear transformation \(aX+b\), where \(a\) and \(b\) are constants, is:
\[\mathbb{E}[aX+b] = a\mathbb{E}[X] + b\]
2. Expectation is also linear in sums of random variables.
\[\mathbb{E}[X+Y] = \mathbb{E}[X] + \mathbb{E}[Y]\]
3. If \(g\) is a non-linear function, then in general, \[\mathbb{E}[g(X)] \neq g(\mathbb{E}[X])\] For example, if \(X\) is -1 or 1 with equal probability, then \(\mathbb{E}[X] = 0\), but \(\mathbb
{E}[X^2] = 1 \neq 0\).
Using these properties, we can again better understand how we got from the original definition of variance to the computational definition.
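Concretely, expanding the square and applying linearity of expectation (treating the number \(\mathbb{E}[X]\) like any other constant) gives \[\text{Var}(X) = \mathbb{E}[(X-\mathbb{E}[X])^2] = \mathbb{E}[X^2 - 2X\mathbb{E}[X] + (\mathbb{E}[X])^2] = \mathbb{E}[X^2] - 2(\mathbb{E}[X])^2 + (\mathbb{E}[X])^2 = \mathbb{E}[X^2] - (\mathbb{E}[X])^2.\]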
17.3.2 Properties of Variance
Let’s now get into the properties of variance. Recall the definition of variance: \[\text{Var}(X) = \mathbb{E}[(X-\mathbb{E}[X])^2]\]
Combining it with the properties of expectation, we can derive some useful properties:
1. Unlike expectation, variance is non-linear. The variance of the linear transformation \(aX+b\) is: \[\text{Var}(aX+b) = a^2 \text{Var}(X)\]
• Subsequently, \[\text{SD}(aX+b) = |a| \text{SD}(X)\]
• The full proof of this fact can be found using the definition of variance. As general intuition, consider that \(aX+b\) scales the variable \(X\) by a factor of \(a\), then shifts the
distribution of \(X\) by \(b\) units.
• Shifting the distribution by \(b\) does not impact the spread of the distribution. Thus, \(\text{Var}(aX+b) = \text{Var}(aX)\).
• Scaling the distribution by \(a\) does impact the spread of the distribution.
2. Variance of sums of random variables is affected by the (in)dependence of the random variables. \[\text{Var}(X + Y) = \text{Var}(X) + \text{Var}(Y) + 2\text{Cov}(X,Y)\] \[\text{Var}(X + Y) = \
text{Var}(X) + \text{Var}(Y) \qquad \text{if } X, Y \text{ independent}\]
17.3.3 Covariance and Correlation
We define the covariance of two random variables as the expected product of their deviations from their expectations. Put more simply, covariance generalizes variance to a pair of random variables; in particular, the covariance of \(X\) with itself is just its variance:
\[\text{Cov}(X, X) = \mathbb{E}[(X - \mathbb{E}[X])^2] = \text{Var}(X)\]
\[\text{Cov}(X, Y) = \mathbb{E}[(X - \mathbb{E}[X])(Y - \mathbb{E}[Y])]\]
We can treat the covariance as a measure of association. Remember the definition of correlation given when we first established SLR?
\[r(X, Y) = \mathbb{E}\left[\left(\frac{X-\mathbb{E}[X]}{\text{SD}(X)}\right)\left(\frac{Y-\mathbb{E}[Y]}{\text{SD}(Y)}\right)\right] = \frac{\text{Cov}(X, Y)}{\text{SD}(X)\text{SD}(Y)}\]
It turns out we’ve been quietly using covariance for some time now! Correlation (and therefore covariance) measures a linear relationship between \(X\) and \(Y\).
• If \(X\) and \(Y\) are correlated, then knowing \(X\) tells you something about \(Y\).
• “\(X\) and \(Y\) are uncorrelated” is the same as “Correlation and covariance equal to 0”
• Independent \(X\) and \(Y\) are uncorrelated, or in other words \(\text{Cov}(X, Y) =0\) and \(r(X, Y) = 0\), because knowing \(X\) tells you nothing about \(Y\)
• Note, however, that the converse is not always true: \(X\) and \(Y\) could be uncorrelated (having \(\text{Cov}(X, Y) = r(X, Y) = 0\)) but not be independent.
17.3.4 Equal vs. Identically Distributed vs. i.i.d
Suppose that we have two random variables \(X\) and \(Y\):
• \(X\) and \(Y\) are equal if \(X(s) = Y(s)\) for every sample \(s\). Regardless of the exact sample drawn, \(X\) is always equal to \(Y\).
• \(X\) and \(Y\) are identically distributed if the distribution of \(X\) is equal to the distribution of \(Y\). We say “\(X\) and \(Y\) are equal in distribution.” That is, \(X\) and \(Y\) take
on the same set of possible values, and each of these possible values is taken with the same probability. On any specific sample \(s\), identically distributed variables do not necessarily share
the same value. If \(X = Y\), then \(X\) and \(Y\) are identically distributed; however, the converse is not true (ex: \(Y = 7 - X\), \(X\) is a die)
• \(X\) and \(Y\) are independent and identically distributed (i.i.d) if
1. The variables are identically distributed.
2. Knowing the outcome of one variable does not influence our belief of the outcome of the other.
Note that in Data 100, you’ll never be expected to prove that random variables are i.i.d.
Now let's walk through an example. Let \(X_1\) and \(X_2\) be the numbers on rolls of two fair dice. \(X_1\) and \(X_2\) are i.i.d, so \(X_1\) and \(X_2\) have the same distribution. However, the sums \(Y
= X_1 + X_1 = 2X_1\) and \(Z=X_1+X_2\) have different distributions but the same expectation (7).
However, looking at the graphs and running this through simulation, we can see that the left distribution (\(Y = 2X_1\)) has a larger variance.
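A quick simulation sketch makes the difference in spread concrete (the seed and the number of simulated rolls are arbitrary choices):
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x1 = rng.integers(1, 7, size=n)   # rolls of the first die
x2 = rng.integers(1, 7, size=n)   # rolls of the second die

y = 2 * x1        # Y = X_1 + X_1
z = x1 + x2       # Z = X_1 + X_2

print(y.mean(), z.mean())   # both are close to 7
print(y.var(), z.var())     # roughly 11.7 for Y versus 5.8 for Z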
17.3.5 Summary
• Let \(X\) be a random variable with distribution \(P(X=x)\).
□ \(\mathbb{E}[X] = \sum_{x} x P(X=x)\)
□ \(\text{Var}(X) = \mathbb{E}[(X-\mathbb{E}[X])^2] = \mathbb{E}[X^2] - (\mathbb{E}[X])^2\)
• Let \(a\) and \(b\) be scalar values.
□ \(\mathbb{E}[aX+b] = a\mathbb{E}[X] + b\)
□ \(\text{Var}(aX+b) = a^2 \text{Var}(X)\)
• Let \(Y\) be another random variable.
□ \(\mathbb{E}[X+Y] = \mathbb{E}[X] + \mathbb{E}[Y]\)
□ \(\text{Var}(X + Y) = \text{Var}(X) + \text{Var}(Y) + 2\text{Cov}(X,Y)\)
Note that \(\text{Cov}(X,Y)\) would equal 0 if \(X\) and \(Y\) are independent. | {"url":"https://ds100.org/course-notes/probability_1/probability_1.html","timestamp":"2024-11-10T17:39:49Z","content_type":"application/xhtml+xml","content_length":"159454","record_id":"<urn:uuid:4e8b4752-460c-45e1-b683-144b793827e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00559.warc.gz"} |
Euclidean space
A Euclidean space or, more precisely, a Euclidean n-space is the generalization of the notions "plane" and "space" (from elementary geometry) to arbitrary dimensions n. Thus Euclidean 2-space is the
plane, and Euclidean 3-space is space.
This generalization is obtained by extending the axioms of Euclidean geometry to allow n directions which are mutually perpendicular to each other. For practical purposes, Cartesian coordinates are
introduced just as for 2 or 3 dimensions: because of the larger dimension, n coordinates are needed to identify a point of the space. This approach is called "analytic geometry" because it allows one to
use the methods of linear algebra to solve geometrical questions by calculation with real numbers.
Euclidean space
Two- and three-dimensional geometry, as it is taught in school all over the world, was first described by Euclid more than two thousand years ago in his The Elements and is still useful for dealing
with physical space even though modern physics has shown that geometry in the universe is more complicated.
This so-called Euclidean space is based on a few fundamental concepts, the notions point, straight line, plane and how they are related.
Two points determine a straight line (and a line segment), and a line together with a point not on it determine a line through that point parallel to the given line. A line and a point (not on that
line) determine a plane, and a plane and a point (not on that plane) "generate" 3-space.
Moreover, line segments have length, and the angle between intersecting lines can also be measured.
While it is difficult to picture higher dimensional objects it is easy to extend the mathematical concepts beyond three dimensions. In 4-dimensional space a plane and a point not on the plane
determine a (3-dimensional) subspace, while such a subspace and a point outside it generate 4-space. This can be iterated until an (n-1)-dimensional subspace (called a hyperplane) and an exterior point are sufficient to
generate the whole (n-dimensional) space.
Cartesian coordinates
A Euclidean space ${\displaystyle \mathbb {E} ^{n}}$ is a space of dimension n, where n is a finite natural number not equal to zero.
The n-dimensional Euclidean space is in one-to-one correspondence to the vector space ℝ^n consisting of ordered n-tuples (columns) of real numbers. The definition of a 1-1 map between the two spaces
is by choosing a point of ${\displaystyle \mathbb {E} ^{n}}$, the origin and erecting a set of axes in that point. Any point of ${\displaystyle \mathbb {E} ^{n}}$ obtains a unique set of coordinates
with respect to these axes and accordingly is represented by an ordered set of real numbers, i.e., by an element of ℝ^n. Conversely, given a column of n real numbers and a set of axes crossing in an
origin, an element of ${\displaystyle \mathbb {E} ^{n}}$ (a "point") is determined uniquely. In fact, the two spaces are so closely related that they are often identified; in that case ℝ^n is usually
referred to as Euclidean space. However, strictly speaking ℝ^n is not exactly the space appearing in Euclid's geometry, not even for n = 2 or n = 3. After all, it was almost 2000 years after Euclid
wrote his Elements that Descartes introduced in 1637 ordered 2- and 3-tuples, now known as Cartesian coordinates, to describe points in the plane and in space. In Euclid's geometry there is no
origin, all points are equal.
The definition of Euclidean space further requires a distance d(x,y) between any two of its elements x and y, i.e., a Euclidean space is an example of a metric space. The distance is defined by means
of the following positive definite inner product on ℝ^n,
${\displaystyle d(\mathbf {x} ,\mathbf {y} )\equiv \langle \mathbf {x} -\mathbf {y} ,\mathbf {x} -\mathbf {y} \rangle ^{\frac {1}{2}}\equiv \left[\sum _{i=1}^{n}(x_{i}-y_{i})^{2}\right]^{\frac {1}{2}},}$
where x_i are the components of x and y_i those of y. Further, ⟨a, b⟩ stands for an inner product between a and b. Thus, a common definition of Euclidean space is that it is the linear space ℝ^n
equipped with positive definite inner product.
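As a concrete check, for n = 2 this formula reduces to the familiar Pythagorean distance in the plane, ${\displaystyle d(\mathbf {x} ,\mathbf {y} )={\sqrt {(x_{1}-y_{1})^{2}+(x_{2}-y_{2})^{2}}}}$.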
In numerical applications one may meet a real n-dimensional linear space V with a basis {v[i]} such that the overlap matrix is not equal to the identity matrix,
${\displaystyle g_{ij}\equiv \langle v_{i},v_{j}\rangle \neq \delta _{ij},\quad i,j=1,\ldots ,n}$
where δ_ij is the Kronecker delta. The inner product between two elements x and y of the space with component vectors {x_i} and {y_j} with respect to the basis {v_i} is
${\displaystyle \langle \mathbf {x} ,\mathbf {y} \rangle =\sum _{ij=1}^{n}x_{i}g_{ij}y_{j}.}$
The overlap matrix g_ij is an example of a metric tensor. When the metric tensor is a constant, symmetric, positive definite, n×n matrix, the linear space V is in fact (isomorphic to) an n
-dimensional Euclidean space. By a choice of a new basis for V the matrix g[i j] can be transformed to the identity matrix; the new basis is an orthonormal basis. Hence a Euclidean space may be
defined as a linear inner product space that contains a basis with the identity matrix as its overlap matrix. In non-linear (curved, non-Euclidean) spaces the metric tensor is a function of position
and cannot be transformed to an identity matrix by a global transformation, i.e., by a single transformation holding on the whole space.
One can introduce the following affine map on ℝ^n:
${\displaystyle \mathbf {x} \mapsto \mathbf {x} '=\mathbf {A} \mathbf {x} +\mathbf {c} ,\quad \mathbf {x} ,\mathbf {x} '\in \mathbb {R} ^{n},}$
where A is a real n×n matrix and c is an ordered n-tuple of real numbers. If A is an orthogonal matrix this map leaves distances invariant and is called an affine motion; if furthermore c = 0 it is a
rotation. If A = E (the identity matrix), it is a translation, equivalent to a shift of origin. In the classical Euclidean geometry it is irrelevant at which points in space the geometrical objects (
circles, triangles, Platonic solids, etc.) are located. This means that Euclid assumed implicitly the invariance of his geometry under translations. Also the orientation in space of an object is
irrelevant for its geometric properties, so that Euclid, also implicitly, assumed rotational invariance as well. The set of affine motions forms a group, named the Euclidean group.
A real inner product space equipped with an affine map is an affine space. Formally, the space of high-school geometry is the 2- or 3-dimensional affine space equipped with inner product. A general
Euclidean space may be defined as an n-dimensional affine space with inner product. Although classical Euclidean geometry does not introduce explicitly an inner product, it does so implicitly by
considering lengths of line segments and magnitudes of angles.
Finally, it may be of interest to mention an example of a space that is not Euclidean, i.e., non-flat—the flatness being given by the definition of distance. The best known example of a curved space
is the surface of the Earth. Locally the surface is flat, i.e., Euclidean, but globally it is curved. Somebody planning a day's hike will see the Earth as Euclidean, but an airplane pilot planning a
flight from Europe to the US will not. Most long-distance flights follow a great circle, because that is the shortest distance on the surface of a sphere. Planes do not fly along parallels of
latitude (the equator excepted), even if the points of departure and destination are at the same latitude. Flying along a parallel seems shortest on a chart in an atlas that uses the common Mercator
projection. However, such a chart gives wrong distances because it approximates the curved surface of the Earth by a flat 2-dimensional Euclidean plane, see Riemannian manifold for more details about
the distance on curved spaces embedded in higher-dimensional Euclidean spaces. | {"url":"https://citizendium.org/wiki/Euclidean_space","timestamp":"2024-11-06T21:08:51Z","content_type":"text/html","content_length":"53025","record_id":"<urn:uuid:4a58bd21-9678-4ea7-a5d4-278b4c1dfe19>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00046.warc.gz"} |
Computational Examination of Heat and Mass Transfer Induced by Ternary Nanofluid Flow across Convergent/Divergent Channels with Pollutant Concentration
Department of Studies in Mathematics, Davangere University, Davangere 577002, India
Department of Mathematics and Statistics, University College for Women, Koti, Hyderabad 500095, India
Department of Mathematics, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru 560035, India
Department of Mathematical Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia UKM, Bangi 43600, Selangor, Malaysia
Department of Computer Science and Mathematics, Lebanese American University, Byblos 1401, Lebanon
Department of Mathematics and Social Sciences, Sukkur IBA University, Sukkur 65200, Sindh, Pakistan
Department of Mathematical Sciences, Federal Urdu University of Arts, Science & Technology, Gulshan-e-Iqbal, Karachi 75300, Pakistan
Mechanical Engineering Department, College of Engineering, King Saud University, P.O. Box 800, Al-Riyadh 11421, Saudi Arabia
Mechanical Engineering, Future University in Egypt, New Cairo 11835, Egypt
Department of Mathematics, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania
Author to whom correspondence should be addressed.
Submission received: 7 July 2023 / Revised: 5 August 2023 / Accepted: 7 August 2023 / Published: 16 August 2023
Studying waste discharge concentration across a convergent/divergent channel is essential in environmental-related applications. Successful environmental administration must understand the behavior
and concentration of waste contaminants released into these channels. Analyzing waste discharge concentrations aids in determining the efficacy of treatment techniques and regulatory controls in
lowering pollutant scales. Because of this, the current analysis examines the ternary-based nanofluid flow across convergent/divergent channels, including non-uniform heat source/sink and
concentration pollutants. The study also concentrates on understanding the movement and heat transmission characteristics in ternary-based nano-liquid systems with divergent and convergent channels
and maximizing the ternary nanofluid flow’s effectiveness. The equations representing the flow, temperature, and concentrations are transformed into a system of ODEs (ordinary differential equations)
and are obtained by proper similarity variables. Further, solutions of ODEs are gathered by using the Runge Kutta Fehlberg 4-5 (RKF-45) method and shooting procedure. The significant dimensionless
constraints and their impacts are discussed using plots. The results show that increasing the local and external pollutant source variation parameters enhances the concentration for the divergent channel while reducing it for the convergent channel. Adding a solid fraction of nanoparticles will escalate the surface drag force. These findings may enhance heat management, lessen pollutant dispersion, and
enhance the circulation of nanofluid systems.
1. Introduction
A form of fluid known as a “nanofluid” has small particles scattered throughout a base fluid. Nanoscale particles typically have sizes of less than 100 nm. The base fluid becomes more thermally
efficient when nanoparticles are added. In 1995, Choi and Eastman [1] first proposed the concept of nanofluids, and the results of their experiments show that introducing nanoparticles into base fluids enhances the fluid's thermal conductivity. Medical applications, engineering, detergent production, automotive cooling, nuclear reactor coolants, and military applications are just a few of the fields where nanofluids are used. Hybrid nanofluids are base fluids in which two distinct types of nanoparticles have been combined. The thermal efficiency of a hybrid nanofluid is higher than that of an ordinary nanofluid because it combines two distinct nanoparticles with the base fluid. Three distinct nanoparticles together with a base fluid form a ternary hybrid nanofluid, which has superior thermal efficiency and flow behavior compared with simple and hybrid nanofluids. Ternary nanofluids are used in industrial, medical, and mechanical fields; additionally, they are utilized as coolants in automobiles, heat exchangers, and other automotive components. Madhukesh et al. [2] studied the movement of a hybrid nanofluid with Newtonian heating through a curved stretching sheet. Animasaun et al. [3] explored the dynamics of a ternary-hybrid nanofluid on convectively heated surfaces by taking the impact of magnetic flux density and the presence of a heat source or sink into consideration. The findings reveal that convective heating and inclined magnetic fields have a significant positive influence on temperature distribution, and these outcomes have potential applications in enhancing heat transfer and managing thermal processes. Yaseen et al. [4] investigated the bioconvection of a ternary nanofluid flow containing gyrotactic microbes, considering various factors such as natural convection, heat source/sink, and radiation, while examining different geometries. Khan et al. [5] studied water-based ternary hybrid nanoparticles moving in a time-dependent manner over a rotating, porous stretching sphere. Ramesh et al. [6] studied the thermal behavior of ternary nanofluid flow on a porous surface with a heat source/sink. Das et al. [7] examined the theoretical modeling of a fully developed mixed convective flow of an ionic ternary hybrid nanofluid generated by electroosmosis and magnetohydrodynamics in an extended, vertical, non-conducting channel with linearly rising channel walls.
Pollutant concentration measures how much a hazardous chemical is present in a given volume of soil, water, air, or another medium. Nanoparticles have the potential to boost heat transfer and have
many kinds of features in the removal of pollutants. A major problem that has influenced how individuals move about their everyday lives is pollution. Impurities seriously affect the health of
animals, humans, plants, and living things. The heart, liver, kidneys, and respiratory systems are all negatively affected by pollution. In most cases, the main causes of air pollution in cities are
heating systems in homes and automobile emissions. Pollution is only one of the industrial operations’ negative environmental consequences on the ecosystem. Even in tiny quantities, certain
pollutants could have positive benefits. Sulfur dioxide, in small amounts, may promote plant development. Copper and zinc are vital elements in animal bodies. Some researchers have studied the
contribution of external pollutant source factors to the concentration of pollutants. Makinde et al. [8] investigated the system of equations regulating river pollution transfer using classical Lie point symmetries. Pengpom et al. [9] numerically examined pollutant concentration dispersion in a two-dimensional confluent river model. Chinyoka and Makinde [10] explored the dynamics of polymeric pollutant dispersion in a rectangular channel caused by an external source in a flowing Newtonian liquid; their numerical model provides insights into pollution scenarios resulting from an improper discharge of hydrocarbon products and offers measures for detecting contamination levels. The study by Cintolesi et al. [11] focuses on the interaction between inertial and thermal forces in urban canyons and their impact on turbulent characteristics and pollutant removal. Large-eddy simulations model square canyons with different facade temperatures, while a street-level scalar source represents traffic emissions. Heated facades induce convective flows that create an energetic, turbulent region at the canyon-ambient interface, enhancing pollutant removal and reducing urban drag; heating the upwind facade strengthens the internal vortex and decreases the overall pollutant concentration. Chinyoka and Makinde [12] investigated the transient dispersion of a pollutant in a laminar channel flow using numerical methods; the model considers density variation with pollutant concentration and provides insights into pollutant behavior, aiding in understanding and addressing improper discharge incidents and evaluating decontamination measures for water bodies. Southerland et al. [13] carried out a neighborhood-scale analysis of the distribution of air pollution health risks in cities using high-resolution data sets for the Bay Area.
When introducing a heat source/sink parameter, the amount of heat distributed throughout the area may fluctuate. The heat source/sink parameter increases the thermal dispersion rate while decreasing
the mass transfer rate. Nuclear reactors, semiconductors, and electronic materials are examples of real-world uses for the heat source/sink phenomenon. Gireesha et al. [14] studied the effect of the Biot number with an irregular heat source/sink and non-linear thermal radiation on nanofluids over a stretched surface. Khan et al. [15] examined a water-based alumina nanofluid embedded in a porous medium with a buoyancy force that caused a two-dimensional stretched wall jet to transfer heat in the fluid flow. Khan et al. [16] studied the stability analysis and the movement of two distinct types of nanoparticles present in a micropolar fluid across an extendable/shrinkable vertical surface with an inconsistent heat source/sink. Kumar et al. [17] studied hybrid ferrofluid film flow and heat transmission in the presence of radiation and erratic heat sources and sinks. Utilizing various nanofluids in three-dimensional motion across a Riga plate, a numerical analysis of the irregular heat source/sink characteristics was conducted by Ragupathi et al. [18]. Khan et al. [19] studied how a wall jet nanofluid moving under the influence of Lorentz forces was affected by activation energy, an irregular heat source/sink, thermophoretic particle deposition, and chemical reaction. Vijayalakshmi et al. [20] studied the effect of a chemical reaction and a nonuniform heat source/sink with a porous medium on Casson fluid flow over a vertical cone and a flat plate.
The ability to transform fluid energy into motion energy is a characteristic of convergent channels. Pressure may be enhanced in divergent channels while velocity is raised in converging channels.
Chemical, mechanical, biomechanical, civil, and environmental engineering are just a few practical applications in which fluid flow in a non-parallel channel is utilized. Convergent/divergent
channels are used to produce wires, dynamics, heat exchangers, aeronautical, civil, biomechanical, pharmaceutical, biomedical devices, fiberglass, rocket engines, plastic sheets, and metal casting
when a magnetic field is present. Blood movement through arteries and capillaries in the human body is an example of fluid flow in converging/diverging channels. The study of fluids over divergent/
convergent channels has attracted the attention of many researchers. Rashid et al. [21] examined Joule dissipation in the motion of water transporting zinc oxide across convergent and divergent channels with the impact of nanoparticle shape. Ramesh et al. [22] studied the impact of a permeable medium and a heat source/sink on ternary hybrid nanofluids in convergent/divergent channels. Khan et al. [23] studied the MHD flow of a viscous fluid in a channel with non-parallel walls and the effect of mass and heat transport on the concentration and temperature profiles. Mishra et al. [24] examined the combined impact of Joule heating, heat generation/absorption, and magnetohydrodynamics on nanofluid flow in stretching/shrinking, porous, divergent/convergent channels with viscous dissipation and volume fraction effects. Zahan et al. [25] studied ternary-nanoparticle efficiency for a base fluid mixture along a convergent-divergent channel. Adnan et al. [26] examined the influence of thermal radiation on the Jeffery-Hamel circulation and the motion of a viscous, incompressible liquid between two nonparallel plane walls. Saifi et al. [27] developed an innovative mathematical method for assessing heat distributions in convergent-divergent channels between non-parallel planar walls in a Jeffery-Hamel flow.
Based on the literature, no study has examined the impacts of a non-uniform heat source/sink and pollutant concentration on ternary-based nanofluid circulation across convergent/divergent channels. The current work addresses these effects. Using suitable transformations, the governing equations are simplified to ODEs and solved numerically using the RKF-45 method. The significant dimensionless constraints are analyzed with the help of graphs.
2. Mathematical Formulation
Consider a steady, incompressible, 2-dimensional flow of a ternary nanofluid circulating between convergent/divergent channels with an included angle $2\gamma$ (see Figure 1). Here $u_{\hat{r}}$ represents the uniform velocity in the channels, and it depends on both $\hat{r}$ and $\theta$. The velocity profile is taken in the form $(u_{\hat{r}}, 0, 0)$. The temperature and concentration equations contain an irregular heat source/sink and a pollutant concentration discharge term, and both quantities depend on $\hat{r}$ and $\theta$. The following equations describe the flow discussed above (see [ ]):
$\frac{1}{\hat{r}}\frac{\partial (u_{\hat{r}}\,\hat{r})}{\partial \hat{r}} = 0, \quad (1)$

$u_{\hat{r}}\frac{\partial u_{\hat{r}}}{\partial \hat{r}} = \nu_{thnf}\left[\frac{1}{\hat{r}^{2}}\frac{\partial^{2} u_{\hat{r}}}{\partial \theta^{2}} + \frac{1}{\hat{r}}\frac{\partial u_{\hat{r}}}{\partial \hat{r}} + \frac{\partial^{2} u_{\hat{r}}}{\partial \hat{r}^{2}} - \frac{u_{\hat{r}}}{\hat{r}^{2}}\right] - \frac{1}{\rho_{thnf}}\frac{\partial P}{\partial \hat{r}}, \quad (2)$

$-\frac{1}{\hat{r}\,\rho_{thnf}}\frac{\partial P}{\partial \theta} + \frac{2\,\nu_{thnf}}{\hat{r}^{2}}\frac{\partial u_{\hat{r}}}{\partial \theta} = 0, \quad (3)$

$u_{\hat{r}}\frac{\partial T_{1}}{\partial \hat{r}} = \frac{k_{thnf}}{(\rho C_{p})_{thnf}}\left[\frac{\partial^{2} T_{1}}{\partial \hat{r}^{2}} + \frac{1}{\hat{r}}\frac{\partial T_{1}}{\partial \hat{r}} + \frac{1}{\hat{r}^{2}}\frac{\partial^{2} T_{1}}{\partial \theta^{2}}\right] + \frac{\mu_{thnf}}{(\rho C_{p})_{thnf}}\left[\frac{1}{\hat{r}^{2}}\left(\frac{\partial u_{\hat{r}}}{\partial \theta}\right)^{2} + 4\left(\frac{\partial u_{\hat{r}}}{\partial \hat{r}}\right)^{2}\right] + \frac{q'''}{(\rho C_{p})_{thnf}}, \quad (4)$

$u_{\hat{r}}\frac{\partial C_{1}}{\partial \hat{r}} = D_{f}\left[\frac{\partial^{2} C_{1}}{\partial \hat{r}^{2}} + \frac{1}{\hat{r}}\frac{\partial C_{1}}{\partial \hat{r}} + \frac{1}{\hat{r}^{2}}\frac{\partial^{2} C_{1}}{\partial \theta^{2}}\right] + S_{1}(C_{1}), \quad (5)$

with the corresponding boundary conditions

$u_{\hat{r}} = U^{*},\ \frac{\partial u_{\hat{r}}}{\partial \theta} = 0,\ \frac{\partial T_{1}}{\partial \theta} = 0,\ \frac{\partial C_{1}}{\partial \theta} = 0 \ \text{ at } \theta = 0; \qquad u_{\hat{r}} = 0,\ T_{1} = T_{w1},\ C_{1} = C_{w1} \ \text{ at } \theta = \gamma. \quad (6)$
In the temperature Equation (4), $q'''$ represents the non-uniform heat source/sink, modeled as (see [ ])

$q''' = \frac{k_{f}\,U^{*}}{\hat{r}\,\nu_{f}}\left[G^{*}\,T_{w1}\,h'(\eta) + L^{*}\,T_{1}\right]. \quad (7)$

Negative and positive values of $L^{*}$ and $G^{*}$ denote the internal heat sink and internal heat source factors, respectively.
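As an illustrative aside (not from the paper), Equation (7) can be evaluated directly in code. The function below is a sketch: the argument names and the placeholder values in the call are assumptions of this illustration only.

```python
def q_source(k_f, nu_f, U_star, r_hat, G_star, L_star, T_w1, h_prime_eta, T1):
    """Non-uniform heat source/sink q''' of Equation (7).

    Positive G*, L* act as internal heat sources; negative values act as sinks.
    """
    return (k_f * U_star / (r_hat * nu_f)) * (G_star * T_w1 * h_prime_eta + L_star * T1)

# Placeholder values purely for demonstration.
print(q_source(k_f=0.613, nu_f=8.9e-7, U_star=1.0, r_hat=1.0,
               G_star=0.1, L_star=0.1, T_w1=300.0, h_prime_eta=0.5, T1=310.0))
```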
From the above-stated Equations (1)–(7), the expressions are listed below:
$υ t h n f$ : Kinematic viscosity $υ t h n f = μ t h n f / ρ t h n f$
$D f$ : Coefficient of mass diffusivity
$m 3$ : External pollutant source variation parameter
$L *$ : Temperature dependent heat source/sink
$P$ : Pressure
$G *$ : The space dependent heat source/sink
$C p t h n f$ : Specific heat
$k t h n f$ : Thermal conductivity
$ρ C p t h n f$ : Specific heat capacitance
$μ t h n f$ : Absolute viscosity
$ρ t h n f$ : Density
Furthermore, the remaining mathematical symbols used in the governing equations represent the thermophysical properties of the ternary nanofluid. The correlations of these thermophysical properties for ternary nanofluids are given below (see [ ]):

$\mu_{thnf} = \mu_{f}\,(1-\phi_{3})^{-2.5}(1-\phi_{2})^{-2.5}(1-\phi_{1})^{-2.5},$

$\frac{\rho_{thnf}}{\rho_{f}} = (1-\phi_{3})\left[\phi_{2}\frac{\rho_{2}}{\rho_{f}} + (1-\phi_{2})\left((1-\phi_{1}) + \phi_{1}\frac{\rho_{1}}{\rho_{f}}\right)\right] + \phi_{3}\frac{\rho_{3}}{\rho_{f}},$

$(\rho C_{p})_{thnf} = \phi_{1}(\rho C_{p})_{1} + \phi_{2}(\rho C_{p})_{2} + \phi_{3}(\rho C_{p})_{3} + (1-\phi_{1}-\phi_{2}-\phi_{3})(\rho C_{p})_{f},$

$k_{thnf} = k_{hnf}\,\frac{k_{3} + 2k_{hnf} - 2\phi_{3}(k_{hnf}-k_{3})}{k_{3} + 2k_{hnf} + \phi_{3}(k_{hnf}-k_{3})}, \qquad k_{hnf} = k_{nf}\,\frac{k_{2} + 2k_{nf} - 2\phi_{2}(k_{nf}-k_{2})}{k_{2} + 2k_{nf} + \phi_{2}(k_{nf}-k_{2})}, \qquad k_{nf} = k_{f}\,\frac{k_{1} + 2k_{f} - 2\phi_{1}(k_{f}-k_{1})}{k_{1} + 2k_{f} + \phi_{1}(k_{f}-k_{1})}. \quad (8)$
Here $k$ is the thermal conductivity, $C_{p}$ the heat capacity, $\mu$ the dynamic viscosity, $\rho$ the density, and $\phi$ the solid volume fraction of the nanoparticles. In the above expressions, the particular case $\phi_{3} = 0$ reduces to the hybrid nanofluid, $\phi_{3} = \phi_{2} = 0$ reduces to the nanofluid expression, and $\phi_{3} = \phi_{2} = \phi_{1} = 0$ recovers the working base liquid. Moreover, the experimentally measured thermophysical data of the base/carrier liquid and the chosen nanoparticles are presented in Table 1.
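To make the mixture rules of Equation (8) concrete, the following Python sketch (added here for illustration; it is not part of the original study) evaluates the effective properties for the water/Al2O3/Ag/Cu combination using the Table 1 data. The function name, the returned ratio ordering, and the volume fractions in the call are choices of this sketch.

```python
def ternary_properties(phi1, phi2, phi3):
    """Effective ternary nanofluid properties via the mixture rules of Equation (8).

    Base fluid: water; particles 1, 2, 3: Al2O3, Ag, Cu (Table 1 data).
    Returns ratios relative to the base fluid: (mu, rho, rho*Cp, k).
    """
    rho_f, cp_f, k_f = 997.1, 4179.0, 0.613
    rho = [3970.0, 10500.0, 8933.0]   # Al2O3, Ag, Cu densities
    cp = [765.0, 235.0, 385.0]        # specific heats
    k = [40.0, 429.0, 401.0]          # thermal conductivities

    # Viscosity ratio (Brinkman-type rule applied three times).
    mu_ratio = (1 - phi1) ** -2.5 * (1 - phi2) ** -2.5 * (1 - phi3) ** -2.5

    # Density ratio (nested mixture rule).
    rho_ratio = ((1 - phi3) * (phi2 * rho[1] / rho_f
                               + (1 - phi2) * ((1 - phi1) + phi1 * rho[0] / rho_f))
                 + phi3 * rho[2] / rho_f)

    # Heat capacitance ratio (linear mixture rule as written in Equation (8)).
    rhoCp_f = rho_f * cp_f
    rhoCp_ratio = (phi1 * rho[0] * cp[0] + phi2 * rho[1] * cp[1] + phi3 * rho[2] * cp[2]
                   + (1 - phi1 - phi2 - phi3) * rhoCp_f) / rhoCp_f

    def maxwell(k_base, k_p, phi):
        # Maxwell-type conductivity model, applied successively for each species.
        return k_base * (k_p + 2 * k_base - 2 * phi * (k_base - k_p)) / (
            k_p + 2 * k_base + phi * (k_base - k_p))

    k_nf = maxwell(k_f, k[0], phi1)
    k_hnf = maxwell(k_nf, k[1], phi2)
    k_thnf = maxwell(k_hnf, k[2], phi3)
    return mu_ratio, rho_ratio, rhoCp_ratio, k_thnf / k_f

# Placeholder volume fractions (1% of each particle), purely for demonstration.
print(ternary_properties(0.01, 0.01, 0.01))
```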
The term $S_{1}(C_{1})$ in Equation (5) represents the external pollutant concentration source and is modeled as (see [ ])

$S_{1}(C_{1}) = H^{*}\,e^{m_{3} C_{1}}, \qquad H^{*} = \frac{H_{1}\,l^{2}}{\hat{r}}. \quad (9)$
To convert Equations (1)–(5) into non-dimensional form, the following similarity variables are introduced (see [ ]):

$h(\eta) = \frac{f(\theta)}{f_{mx}}, \qquad \eta = \frac{\theta}{\gamma}, \qquad \beta^{*}(\eta) = \frac{T_{1}}{T_{w1}}, \qquad \varphi^{*}(\eta) = \frac{C_{1}}{C_{w1}}. \quad (10)$

For purely radial flow, the velocity can be represented in the form

$f(\theta) = \hat{r}\,u_{\hat{r}}(\hat{r},\theta). \quad (11)$
Eliminating the pressure term from Equations (2) and (3) and utilizing (10) and (11), we obtain

$B_{1}^{-1}B_{2}^{-1}h''' + 2\gamma\,\mathrm{Re}\,h\,h' + 4\gamma^{2}B_{1}^{-1}B_{2}^{-1}h' = 0, \quad (12)$

and Equations (4) and (5) reduce to

$\frac{B_{4}}{B_{3}}\beta^{*\prime\prime} + \frac{1}{B_{1}B_{2}}\,\mathrm{Pr}\,Ec_{1}\left(4\gamma^{2}h^{2} + h'^{2}\right) + \frac{1}{B_{3}}\gamma\,\mathrm{Re}\left(G^{*}h' + L^{*}\beta^{*}\right) = 0, \quad (13)$

$\varphi^{*\prime\prime} + \tfrac{1}{2}\gamma\,\delta_{2}\,\mathrm{Re}\,Sc_{1}\exp\left(\lambda_{1}\varphi^{*}\right) = 0, \quad (14)$

where

$B_{1} = (1-\phi_{3})^{2.5}(1-\phi_{2})^{2.5}(1-\phi_{1})^{2.5}, \qquad B_{2} = \phi_{3}\frac{\rho_{3}}{\rho_{f}} + (1-\phi_{3})\left[(1-\phi_{2})\left((1-\phi_{1}) + \phi_{1}\frac{\rho_{1}}{\rho_{f}}\right) + \phi_{2}\frac{\rho_{2}}{\rho_{f}}\right],$

$B_{3} = \phi_{3}\frac{(\rho C_{p})_{3}}{(\rho C_{p})_{f}} + (1-\phi_{3})\left[(1-\phi_{2})\left((1-\phi_{1}) + \phi_{1}\frac{(\rho C_{p})_{1}}{(\rho C_{p})_{f}}\right) + \phi_{2}\frac{(\rho C_{p})_{2}}{(\rho C_{p})_{f}}\right], \qquad B_{4} = \frac{k_{thnf}}{k_{f}}.$

The reduced boundary conditions are

$h(0) = 1,\ h'(0) = 0,\ \beta^{*\prime}(0) = 0,\ \varphi^{*\prime}(0) = 0 \ \text{ at } \eta = 0; \qquad h(1) = 0,\ \beta^{*}(1) = 1,\ \varphi^{*}(1) = 1 \ \text{ at } \eta = 1. \quad (15)$
The important dimensionless constraints appearing in the resulting equations, together with their expressions, are provided in Table 2.
The important engineering coefficients are defined as follows:

$C_{f} = \frac{\mu_{thnf}}{\rho_{f}U^{*2}}\left.\frac{1}{\hat{r}}\frac{\partial u_{\hat{r}}}{\partial \theta}\right|_{\theta=\gamma}, \qquad Nu = -\frac{\hat{r}\,k_{thnf}}{k_{f}\,T_{w1}}\left.\frac{1}{\hat{r}}\frac{\partial T_{1}}{\partial \theta}\right|_{\theta=\gamma}, \qquad Sh = -\frac{\hat{r}\,D_{f}}{D_{f}\,C_{w1}}\left.\frac{1}{\hat{r}}\frac{\partial C_{1}}{\partial \theta}\right|_{\theta=\gamma}. \quad (16)$

Equation (16) reduces to the following form:

$\mathrm{Re}\,C_{f} = \frac{h'(1)}{B_{1}}, \qquad Nu = -\frac{B_{4}\,\beta^{*\prime}(1)}{\gamma}, \qquad Sh = -\frac{\varphi^{*\prime}(1)}{\gamma}. \quad (17)$
3. Numerical Procedure
The Runge–Kutta–Fehlberg 4th–5th order (RKF-45) method is utilized to solve the obtained Equations (12)–(14) with boundary conditions (15). These equations constitute a two-point, higher-order boundary value problem. By introducing additional variables, the system is converted to first order, which helps to solve the equations and gives stability and exactness:

$h = d_{1},\ h' = d_{2},\ h'' = d_{3},\ \beta^{*} = d_{4},\ \beta^{*\prime} = d_{5},\ \varphi^{*} = d_{6},\ \varphi^{*\prime} = d_{7}. \quad (18)$

By using Equation (18), Equations (12)–(14) can be written as follows:

$h''' = -B_{1}B_{2}\left[2\,\mathrm{Re}\,\gamma\,d_{1}d_{2} + \frac{1}{B_{1}B_{2}}4\gamma^{2}d_{2}\right], \quad (19)$

$\beta^{*\prime\prime} = -\frac{B_{3}}{B_{4}}\left[\frac{1}{B_{1}B_{2}}\,\mathrm{Pr}\,Ec_{1}\left(4\gamma^{2}d_{1}^{2} + d_{2}^{2}\right) + \frac{1}{B_{3}}\gamma\,\mathrm{Re}\left(G^{*}d_{2} + L^{*}d_{4}\right)\right], \quad (20)$

$\varphi^{*\prime\prime} = -\tfrac{1}{2}\gamma\,\delta_{2}\,\mathrm{Re}\,Sc_{1}\exp\left(\lambda_{1}d_{6}\right), \quad (21)$

and the boundary conditions become

$d_{1}(0) = 1,\ d_{2}(0) = 0,\ d_{5}(0) = 0,\ d_{7}(0) = 0 \ \text{ at } \eta = 0; \qquad d_{1}(1) = 0,\ d_{4}(1) = 1,\ d_{6}(1) = 1 \ \text{ at } \eta = 1. \quad (22)$
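The following Python sketch (added here for illustration; it is not the authors' code) mirrors the shooting procedure described above: the unknown initial values $h''(0)$, $\beta^{*}(0)$ and $\varphi^{*}(0)$ are found with a root solver so that the conditions at $\eta = 1$ in Equation (22) are met, while the initial value problem is integrated with SciPy's adaptive RK45 integrator as a stand-in for the RKF-45 scheme. The parameter values are the Table 2 defaults; the opening angle, the clear-fluid values $B_{1} = B_{2} = B_{3} = B_{4} = 1$, and the $\tfrac{1}{2}\gamma$ factor in Equation (21) follow the assumptions and reconstruction stated above.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Default dimensionless values from Table 2 (gamma > 0: divergent channel).
Pr, Re, Sc1, Ec1 = 6.3, 5.0, 0.8, 3.0
delta2, lam1, Gs, Ls = 0.1, 0.1, 0.1, 0.1
gamma = np.deg2rad(5.0)                # assumed opening half-angle for this sketch
B1, B2, B3, B4 = 1.0, 1.0, 1.0, 1.0    # clear-fluid limit (phi1 = phi2 = phi3 = 0)

def rhs(eta, d):
    d1, d2, d3, d4, d5, d6, d7 = d
    h3 = -B1 * B2 * (2 * Re * gamma * d1 * d2 + 4 * gamma**2 * d2 / (B1 * B2))   # Eq. (19)
    b2 = -(B3 / B4) * (Pr * Ec1 * (4 * gamma**2 * d1**2 + d2**2) / (B1 * B2)
                       + gamma * Re * (Gs * d2 + Ls * d4) / B3)                  # Eq. (20)
    p2 = -0.5 * gamma * delta2 * Re * Sc1 * np.exp(lam1 * d6)                    # Eq. (21)
    return [d2, d3, h3, d5, b2, d7, p2]

def shoot(unknowns):
    # Unknown initial values: h''(0), beta*(0), phi*(0); the rest come from Eq. (22).
    h2_0, beta0, phi0 = unknowns
    d0 = [1.0, 0.0, h2_0, beta0, 0.0, phi0, 0.0]
    sol = solve_ivp(rhs, (0.0, 1.0), d0, method="RK45", rtol=1e-8, atol=1e-10)
    d1, _, _, d4, _, d6, _ = sol.y[:, -1]
    return [d1 - 0.0, d4 - 1.0, d6 - 1.0]     # residuals of the eta = 1 conditions

guess = fsolve(shoot, x0=[-2.0, 1.0, 1.0])
sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0, guess[0], guess[1], 0.0, guess[2], 0.0],
                method="RK45", dense_output=True, rtol=1e-8, atol=1e-10)

# Engineering coefficients from Eq. (17), using the wall derivatives at eta = 1.
d_end = sol.y[:, -1]
ReCf = d_end[1] / B1
Nu = -B4 * d_end[4] / gamma
Sh = -d_end[6] / gamma
print(ReCf, Nu, Sh)
```

For a ternary nanofluid case, $B_{1}$ to $B_{4}$ would instead be evaluated from Equation (8) and Table 1, and a negative $\gamma$ gives the convergent channel scenario.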
The properties of the nanofluid described in Equation (8) and the thermophysical characteristics listed in Table 1 are used to solve the resulting equations. Table 2 lists the default parameter values used throughout the process. The current results are validated against previous studies (see [22]) under restricted scenarios, as seen in Table 3.
4. Result and Discussion
A complete review of the outcomes of the numerical analysis is given in this section. Graphs are used to illustrate the most important parameters $Ec_{1}, G^{*}, L^{*}, \lambda_{1}, \delta_{2}, \gamma, \mathrm{Re}$ and $\phi_{3}$ and their effects on the corresponding profiles. This section also covers additional important quantities, including the skin friction, the Nusselt number, and the Sherwood number. The discussion also clarifies how ternary nanofluids in convergent and divergent channels have a significant impact, along with the practical implications.
The impact of $\phi_{3}$ on the velocity and $\beta^{*}$ profiles can be observed in Figure 2. Figure 2a explains that increasing $\phi_{3}$ will increase the velocity in convergent channels while decreasing it in divergent channels. The behavior of the flow will be impacted by the presence of solid nanoparticles. The nanofluid will have a higher concentration as a result of the addition of nanoparticles, which will also strengthen the contact between the liquid and solid particles and improve the velocity. When solid particles are present, the expanding process becomes difficult in a diverging channel, which reduces the flow velocity. The enhancement in the values of $\phi_{3}$ will decrease the thermal distribution for both channel situations, as seen in Figure 2b. This is because the solid particles act as conductive channels that carry heat throughout the fluid. As $\phi_{3}$ increases, a narrower temperature distribution is produced because more solid particles contribute to the overall thermal conductivity. The fact that the thermal distribution of the convergent channel is different from that of the divergent channel provides additional confirmation that the convergent form facilitates higher heat transfer. Fluid particles are brought closer together by the converging walls of the channel, which improves the transfer of heat energy. Thermal dispersion and an increased range of temperatures occur because of the divergent channel shape.
Figure 3 illustrates the impact of $Ec_{1}$ on $\beta^{*}$ for both the convergent and divergent channel scenarios. Increasing $Ec_{1}$ increases $\beta^{*}$ in both convergent and divergent channels. However, in the case of $Ec_{1}$ the divergent channel exhibits a higher thermal dispersion than the converging channel. This is because the improved thermal distribution seen with higher $Ec_{1}$ is caused by the fluid's enhanced thermal conduction movement. Convective transfer of heat seems to contribute more than other types of heat transmission when $Ec_{1}$ values are higher. A more efficient exchange of thermal energy between the liquid and its surroundings is made possible by this increased convective heat transfer, leading to better thermal distribution.
The impact of the space-dependent heat source/sink parameter on $\beta^{*}$ is shown in Figure 4. Figure 4a shows that, in the case of a convergent channel, an increase in the $G^{*}$ ($G^{*} > 0$) values improves the thermal distribution, whereas contrary behavior is observed in the case of a divergent channel. The $G^{*}$ values represent the local heat production and absorption inside the liquid. Positive $G^{*}$ values indicate that the fluid is producing heat, which raises the temperature, whereas a negative value of $G^{*}$ indicates the presence of a heat sink, which causes the liquid to cool. Since the heat generated is partly confined in the convergent channel, where the liquid is compressed, the heat distribution is greatly improved. A decrease in temperature with rising sink values in the convergent channel can be attributed to heat being absorbed from the fluid. The temperature drops as a result of the fluid losing thermal energy due to the heat sink effect. The thermal energy within the fluid rises due to the generation of local heat, but as the fluid spreads in the diverging channel, the extra heat is diffused across a larger area. As a result, the rise in temperature becomes less substantial, which causes a fall in temperature. The distribution of temperature is evened out as a result of the liquid's thermal energy being drained by the heat sink effect. A more consistent temperature profile is produced as a result of heat absorption, which smooths out temperature variations.
Figure 4b represents the impact of the temperature-dependent heat source/sink ($L^{*}$) on $\beta^{*}$ for both the divergent and convergent channels. Negative values of $L^{*}$ represent the temperature-dependent heat sink, and positive values of $L^{*}$ represent the temperature-dependent heat source. The thermal distribution drops in the convergent channel with increased values of the temperature-dependent heat source constraint but increases in the divergent channel. The thermal distribution diminishes in the divergent channel but rises in the convergent channel with stronger temperature-dependent heat sink constraints. $L^{*} > 0$ indicates that heat is generated within the fluid as the temperature increases, leading to a rise in temperature, while $L^{*} < 0$ indicates that heat is removed from the fluid as the temperature improves, subsequently lowering the thermal profile. When $L^{*} > 0$, the fluid's internal heat generation grows as the liquid's temperature does. This additional heat increases the channel's overall temperature distribution, resulting in a more stable temperature profile. As the fluid temperature increases, the amount of heat produced inside the fluid increases. However, the fluid gets squeezed in the convergent channel, which causes the additional heat to be more constrained and less distributed, leading to a decrease in temperature. When $L^{*} < 0$, the influence of the heat sink increases as the temperature of the liquid improves, increasing the heat absorption from the fluid; the heat sink becomes more efficient at removing thermal energy from heated locations as the temperature of the liquid increases. A more consistent temperature profile is produced as a result of this heat removal, which evens out temperature variations in the convergent channel.
Figure 5a,b demonstrate the distribution of the concentration profile for $\delta_{2}$ and $\lambda_{1}$. The concentration will drop as the $\delta_{2}$ and $\lambda_{1}$ values rise in a convergent channel, whereas the converse is true in a divergent channel. Both the external pollutant source variation constraint and the local pollutant external source constraint pertain to fluctuations or external sources of contaminants injected into the liquid stream. An upsurge in these variables denotes a rise in pollutant intake or variability, which raises the concentration. When more pollutants are provided, or the diversity of pollutant sources grows, the overall concentration of pollutants in the liquid stream rises. In the diverging channel, the increase in concentration can be observed throughout, as the greater concentration is spread out across a larger volume. In the convergent channel, where the fluid gets squeezed, the concentration of the pollutant is less distributed and more constrained, so the fluid's pollutant concentration decreases even when more pollutants are supplied.
Figure 6a–c show the engineering coefficients $C_{f}$, $Nu$, and $Sh$ against various non-dimensional parameters, respectively. The influence on the skin friction over $\mathrm{Re}$ for growth in the magnitude of $\phi_{3}$ is shown in Figure 6a. The addition of $\phi_{3}$ and improved values of $\mathrm{Re}$ will result in a surge in the surface drag force for both the convergent and divergent channels. Additionally, divergent channels will have a greater surface drag force compared to convergent channels. Higher $\mathrm{Re}$ values and an increase in the solid volume percentage will enhance the contact between the liquid and solid nano-sized particles. The flow encounters a larger surface area in a divergent channel scenario than in a convergent channel situation. Therefore, there is more surface drag force in the case of a diverging channel.
Figure 6b represents the change in the thermal distribution rate over $L^{*}$ for different values of $G^{*}$. The rate of heat transmission will decrease as these two parameters get higher. The graph also makes it very evident that $Nu$ is higher in the divergent channel case than in the convergent channel scenario. When heat generation is more localized within the fluid and the parameters of heat transport change with temperature, thermal energy is diffused more slowly. The rate of heat dispersion decreases as a result. The fluid is compressed in the convergent channel, where the thermal energy is more concentrated, leading to a decreased rate of heat distribution. In the diverging channel, where the fluid expands, the thermal energy is distributed more uniformly, leading to a higher rate of heat dispersion.
The mass transfer rate over $Sc_{1}$ for a rise in the value of the external pollutant source variation parameter is shown in Figure 6c. Increases in these two variables will speed up mass transport in the fluid flow. In addition, it has been found that divergent channels have a higher mass transfer rate than convergent channels. When the Schmidt number rises and the pollution input or variation from outside the fluid increases, the mass transfer inside the fluid becomes more effective. The rate of mass transfer consequently rises, indicating that contaminants move between locations faster. The fluid expands in the diverging channel, where there is more mixing and contact with the sources of pollution. As a result, mass transfer happens more quickly. The convergent channel, on the other hand, is subjected to greater compression, which minimizes the dissolution and contact between the liquid and the sources of pollution, resulting in a slower mass transmission rate.
5. Conclusions
The current work concentrates on the irregular heat source/sink impact and pollutant concentration in ternary nanofluid flow between convergent/divergent channels. The study’s principal findings are
listed below.
• For the convergent channel situation, the inclusion of $ϕ 3$ values will improve the velocity profile, but the divergent channel exhibits the opposite characteristic.
• The concentration profile in both channels will be improved by both $λ 1$ and $δ 2$.
• In the presence of the Eckert number, the heat spreading is more concentrated in divergent channels than convergent channels.
• Convergent channels display a lower surface drag than divergent channels due to an increase in the solid volume percentage and Reynolds number.
• When an external pollutant source parameter is present, the rate of mass transfer increases.
The domains of energy systems, environmental engineering, and thermal management systems can all benefit from the study’s conclusions. Information from pollutant concentration studies can assist
environmental organizations, businesses, and governments in controlling and lowering pollution levels, which promotes sustainable development. In addition, the realistic behavior of these materials
may differ owing to temperature-dependent changes, particle aggregation, and other phenomena not completely included in the model. The analysis may assume laminar flow conditions, ignoring
turbulence, which is important in certain applications. Nonetheless, the results might improve heat management, minimize pollutant dispersion, and increase nanofluid circulation. The external
pollutant source variation parameter affects the system’s behavior and pollution control performance. Increasing this parameter raises pollutant concentrations, influencing fluid flow patterns as
well as heat and mass transfer rates. Reducing it leads to smoother flow and lower pollutant concentrations, making pollution management more achievable for lower pollutant loads.
Author Contributions
Conceptualization, A.Z., V.K. and J.K.M.; methodology, A.Z., V.K. and J.K.M.; software, A.Z., V.K. and J.K.M.; validation, M.S., A.Z., V.K. and J.K.M.; formal analysis, M.S., A.Z., V.K., A.M.H. and
J.K.M.; investigation, A.M.H., U.K. and I.P.; resources, I.P.; data curation, U.K. and I.P.; writing—original draft preparation, M.S., A.M.H., U.K., I.P. and E.-S.M.S.; writing—review and editing,
M.S., A.M.H., U.K., I.P. and E.-S.M.S.; visualization, E.-S.M.S.; supervision, E.-S.M.S.; project administration, E.-S.M.S.; funding acquisition, E.-S.M.S. All authors have read and agreed to the
published version of the manuscript.
This work was funded by the Researchers Supporting Project number (RSP2023R33), King Saud University, Riyadh, Saudi Arabia.
Data Availability Statement
Not applicable.
The authors are thankful for the support of Researchers Supporting Project number (RSP2023R33), King Saud University, Riyadh, Saudi Arabia.
Conflicts of Interest
The authors declare no conflict of interest.
$P$ Pressure
$T 1$ Temperature
$k$ Thermal conductivity
$S 1$ The external pollutant concentration
$D f$ Coefficient of mass diffusivity
$C 1$ Concentration
$q ′ ′ ′$ Non-uniform heat source/sink
$G *$ Space dependent heat source/sink
$L *$ Temperature dependent heat source/sink
$C P$ Specific heat
$Pr$ Prandtl number
$Re$ Reynolds number
$S c 1$ Schmidt number
$m 3$ External pollutant source variation parameter
$E c 1$ Eckert number
$u r ⌢$ Uniform velocity
$C f$ Skin friction
$S h$ Sherwood number
$N u$ Nusselt number
Greek symbols
$υ$ Kinematic viscosity
$μ$ Dynamic viscosity
$ρ$ Density
$δ 2$ Local pollutant external source parameter
$λ 1$ External pollutant source variation parameter
$ϕ$ Solid volume fraction
$β * η$ Temperature profile
$φ * η$ Concentration profile
$f$ Fluid
$n f$ Nanofluid
$h n f$ Hybrid nanofluid
$t h n f$ Ternary hybrid nanofluid
1. Choi, S.U.S.; Eastman, J.A. Enhancing thermal conductivity of fluids with nanoparticles. In Proceedings of the 1995 International Mechanical Engineering Congress and Exhibition, San Francisco,
CA, USA, 12–17 November 1995. [Google Scholar]
2. Madhukesh, K.; Kumar, R.N.; Gowda, R.J.P.; Prasannakumara, B.C.; Ramesh, G.K.; Khan, M.I.; Khan, S.U.; Chu, Y.-M. Numerical simulation of AA7072-AA7075/water-based hybrid nanofluid flow over a
curved stretching sheet with Newtonian heating: A non-Fourier heat flux model approach. J. Mol. Liq. 2021, 335, 116103. [Google Scholar] [CrossRef]
3. Animasaun, I.; Yook, S.-J.; Muhammad, T.; Mathew, A. Dynamics of ternary-hybrid nanofluid subject to magnetic flux density and heat source or sink on a convectively heated surface. Surf.
Interfaces 2022, 28, 101654. [Google Scholar] [CrossRef]
4. Yaseen, M.; Rawat, S.K.; Shah, N.A.; Kumar, M.; Eldin, S.M. Ternary Hybrid Nanofluid Flow Containing Gyrotactic Microorganisms over Three Different Geometries with Cattaneo–Christov Model.
Mathematics 2023, 11, 1237. [Google Scholar] [CrossRef]
5. Khan, U.; Kumar, R.N.; Zaib, A.; Prasannakumara, B.; Ishak, A.; Galal, A.M.; Gowda, R.P. Time-dependent flow of water-based ternary hybrid nanoparticles over a radially contracting/expanding and
rotating permeable stretching sphere. Therm. Sci. Eng. Prog. 2022, 36, 101521. [Google Scholar] [CrossRef]
6. Ramesh, G.; Madhukesh, J.; Das, R.; Shah, N.A.; Yook, S.-J. Thermodynamic activity of a ternary nanofluid flow passing through a permeable slipped surface with heat source and sink. Waves Random
Complex Media 2022, 1–21. [Google Scholar] [CrossRef]
7. Das, S.; Ali, A.; Jana, R.N.; Makinde, O. EDL impact on mixed magneto-convection in a vertical channel using ternary hybrid nanofluid. Chem. Eng. J. Adv. 2022, 12, 100412. [Google Scholar] [
8. Makinde, O.; Moitsheki, R.; Tau, B. Similarity reductions of equations for river pollution. Appl. Math. Comput. 2007, 188, 1267–1273. [Google Scholar] [CrossRef]
9. Pengpom, N.; Vongpradubchai, S.; Rattanadecho, P. Numerical Analysis of Pollutant Concentration Dispersion and Convective Flow in a Two-dimensional Confluent River Model. Math. Model. Eng. Probl.
2019, 6, 271–279. [Google Scholar] [CrossRef]
10. Chinyoka, T.; Makinde, O.D. Modelling and Analysis of the Dispersal of a Polymeric Pollutant Injected into a Channel Flow of a Newtonian Liquid. Diffus. Found. Mater. Appl. 2023, 33, 23–56. [
Google Scholar] [CrossRef]
11. Cintolesi, C.; Barbano, F.; Di Sabatino, S. Large-Eddy Simulation Analyses of Heated Urban Canyon Facades. Energies 2021, 14, 3078. [Google Scholar] [CrossRef]
12. Chinyoka, T.; Makinde, O.D. Analysis of Nonlinear Dispersion of a Pollutant Ejected by an External Source into a Channel Flow. Math. Probl. Eng. 2010, 2010, e827363. [Google Scholar] [CrossRef]
13. Southerland, V.A.; Anenberg, S.C.; Harris, M.; Apte, J.; Hystad, P.; van Donkelaar, A.; Martin, R.V.; Beyers, M.; Roy, A. Assessing the Distribution of Air Pollution Health Risks within Cities: A
Neighborhood-Scale Analysis Leveraging High-Resolution Data Sets in the Bay Area, California. Environ. Health Perspect. 2021, 129, 37006. [Google Scholar] [CrossRef]
14. Gireesha, B.J.; Gorla, R.S.R.; Krishnamurthy, M.R.; Prasannakumara, B.C. Biot number effect on MHD flow and heat transfer of nanofluid with suspended dust particles in the presence of nonlinear
thermal radiation and non-uniform heat source/sink. Acta Comment. Univ. Tartu. Math. 2018, 22, 91–114. [Google Scholar] [CrossRef]
15. Khan, U.; Zaib, A.; Ishak, A.; Elattar, S.; Eldin, S.M.; Raizah, Z.; Waini, I.; Waqas, M. Impact of Irregular Heat Sink/Source on the Wall Jet Flow and Heat Transfer in a Porous Medium Induced by
a Nanofluid with Slip and Buoyancy Effects. Symmetry 2022, 14, 2212. [Google Scholar] [CrossRef]
16. Khan, U.; Zaib, A.; Ishak, A.; Alotaibi, A.M.; Eldin, S.M.; Akkurt, N.; Waini, I.; Madhukesh, J.K. Stability Analysis of Buoyancy Magneto Flow of Hybrid Nanofluid through a Stretchable/Shrinkable
Vertical Sheet Induced by a Micropolar Fluid Subject to Nonlinear Heat Sink/Source. Magnetochemistry 2022, 8, 188. [Google Scholar] [CrossRef]
17. Kumar, K.A.; Sandeep, N.; Sugunamma, V.; Animasaun, I.L. Effect of irregular heat source/sink on the radiative thin film flow of MHD hybrid ferrofluid. J. Therm. Anal. Calorim. 2020, 139,
2145–2153. [Google Scholar] [CrossRef]
18. Ragupathi, P.; Hakeem, A.K.A.; Al-Mdallal, Q.M.; Ganga, B.; Saranya, S. Non-uniform heat source/sink effects on the three-dimensional flow of Fe3O4 /Al2O3 nanoparticles with different base fluids
past a Riga plate. Case Stud. Therm. Eng. 2019, 15, 100521. [Google Scholar] [CrossRef]
19. Khan, U.; Zaib, A.; Ishak, A.; Waini, I.; Raizah, Z.; Boonsatit, N.; Jirawattanapanit, A.; Galal, A.M. Significance of Thermophoretic Particle Deposition, Arrhenius Activation Energy and Chemical
Reaction on the Dynamics of Wall Jet Nanofluid Flow Subject to Lorentz Forces. Lubricants 2022, 10, 228. [Google Scholar] [CrossRef]
20. Vijayalakshmi, P.; Gunakala, S.R.; Animasaun, I.L.; Sivaraj, R. Chemical Reaction and Nonuniform Heat Source/Sink Effects on Casson Fluid Flow over a Vertical Cone and Flat Plate Saturated with
Porous Medium. In Applied Mathematics and Scientific Computing; Kumar, B.R., Sivaraj, R., Prasad, B.S.R.V., Nalliah, M., Reddy, A.S., Eds.; Springer International Publishing: Cham, Switzerland,
2019; pp. 117–127. [Google Scholar]
21. Rashid, U.; Iqbal, A.; Liang, H.; Khan, W.; Ashraf, M.W. Dynamics of water conveying zinc oxide through divergent-convergent channels with the effect of nanoparticles shape when Joule dissipation
are significant. PLoS ONE 2021, 16, e0245208. [Google Scholar] [CrossRef]
22. Ramesh, G.; Madhukesh, J.; Shehzad, S.; Rauf, A. Ternary nanofluid with heat source/sink and porous medium effects in stretchable convergent/divergent channel. Proc. Inst. Mech. Eng. Part E J.
Process. Mech. Eng. 2022, 09544089221081344. [Google Scholar] [CrossRef]
23. Khan, U.; Ahmed, N.; Mohyud-Din, S.T. Thermo-diffusion, diffusion-thermo and chemical reaction effects on MHD flow of viscous fluid in divergent and convergent channels. Chem. Eng. Sci. 2016, 141
, 17–27. [Google Scholar] [CrossRef]
24. Mishra, A.; Pandey, A.K.; Chamkha, A.J.; Kumar, M. Roles of nanoparticles and heat generation/absorption on MHD flow of Ag–H[2]O nanofluid via porous stretching/shrinking convergent/divergent
channel. J. Egypt Math. Soc. 2020, 28, 17. [Google Scholar] [CrossRef]
25. Zahan, I.; Nasrin, R.; Khatun, S. Thermal performance of ternary-hybrid nanofluids through a convergent-divergent nozzle using distilled water—Ethylene glycol mixtures. Int. Commun. Heat Mass
Transf. 2022, 137, 106254. [Google Scholar] [CrossRef]
26. Adnan; Asadullah, M.; Khan, U.; Ahmed, N.; Mohyud-Din, S.T. Analytical and numerical investigation of thermal radiation effects on flow of viscous incompressible fluid with stretchable convergent
/divergent channels. J. Mol. Liq. 2016, 224, 768–775. [Google Scholar] [CrossRef]
27. Saifi, H.; Sari, M.R.; Kezzar, M.; Ghazvini, M.; Sharifpur, M.; Sadeghzadeh, M. Heat transfer through converging-diverging channels using Adomian decomposition method. Eng. Appl. Comput. Fluid
Mech. 2020, 14, 1373–1384. [Google Scholar] [CrossRef]
Figure 6. (a) Variation of $C f$ on $Re$ for variation in $ϕ 3$ (b) variation of $N u$ on $L *$ for variation in $G *$ (c) variation of $S h$ on $S c 1$ for variation in $λ 1$.
Table 1.
The effective thermophysical characteristics of the chosen nanoparticles and the carrier fluid are given below (see [ ]).
Properties Unit $H 2 O$ $A l 2 O 3$ $A g$ $C u$
$ρ$ $kg / m 3$ 997.1 3970 10,500 8933
$C p$ $m 2 s − 2 K − 1$ 4179 765.0 235 385.0
$k$ $kgms − 3 K − 1$ 0.613 40 429 401
Table 2.
Sl. No Name and Expression for the Constraint Fixed Value
1 Prandtl number $Pr = \frac{\mu_{f} C_{pf}}{k_{f}}$ 6.3
2 Reynolds number $Re = \frac{f_{mx}\,\gamma}{\nu_{f}}$ 5
3 Schmidt number $Sc_{1} = \frac{\nu_{f}}{D_{f}}$ 0.8
4 Eckert number $Ec_{1} = \frac{U^{*2}}{C_{pf}\,T_{w1}}$ 3
5 Local pollutant external source parameter $\delta_{2} = \frac{H_{1}\,l}{C_{w1}\,U^{*}}$ 0.1
6 External pollutant source variation parameter $\lambda_{1} = C_{w1}\,m_{3}$ 0.1
7 Internal heat absorption ($G^{*},\,L^{*} < 0$) $G^{*} = L^{*} = -0.1$
8 Internal heat generation ($G^{*},\,L^{*} > 0$) $G^{*} = L^{*} = 0.1$
Special cases:
1 Divergent channel scenario if $γ > 0$
2 Convergent channel scenario if $γ < 0$
Table 3.
The verification of the study’s solutions using work by (see [
]) without the presence of nanoparticles.
Convergent Channel Case
[22] Current Work
$h η$ $h η$
$η$ ADM RK-4 HAM RKF-45
1 $0$ $0$ $0$ $0$
0.8 $0.423183$ $0.423183$ $0.423183$ $0.423187$
0.6 $0.705698$ $0.705698$ $0.705698$ $0.705701$
0.4 $0.879028$ $0.879028$ $0.879028$ $0.879031$
0.2 $0.971234$ $0.971234$ $0.971234$ $0.971235$
$0$ $1.0$ $1.0$ $1.0$ $1.000000$
Divergent Channel Case
$η$ ADM RK-4 HAM RKF-45
1 $0$ $0$ $0$ $0$
0.8 $0.288378$ $0.288378$ $0.288378$ $0.288381$
0.6 $0.559036$ $0.559036$ $0.559036$ $0.559039$
0.4 $0.788205$ $0.788205$ $0.788205$ $0.788207$
0.2 $0.944324$ $0.944324$ $0.944324$ $0.944326$
$0$ $1.0$ $1.0$ $1.0$ $1.000000$
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
K, V.; Sunitha, M.; Madhukesh, J.K.; Khan, U.; Zaib, A.; Sherif, E.-S.M.; Hassan, A.M.; Pop, I. Computational Examination of Heat and Mass Transfer Induced by Ternary Nanofluid Flow across Convergent/Divergent Channels with Pollutant Concentration. Water 2023, 15, 2955. https://doi.org/10.3390/w15162955
Article Metrics | {"url":"https://www.mdpi.com/2073-4441/15/16/2955?utm_campaign=releaseissue_waterutm_medium=emailutm_source=releaseissueutm_term=titlelink62","timestamp":"2024-11-12T13:13:31Z","content_type":"text/html","content_length":"568900","record_id":"<urn:uuid:d61eb018-c79a-4069-b39d-94c1acff076c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00419.warc.gz"} |
What you can learn from this centuries-old math mystery - Ozan Varol
In 1637, Pierre de Fermat scribbled a note on a textbook margin that would baffle mathematicians for more than three centuries.
Fermat had a theory. He proposed that there are no positive whole-number solutions to the equation a^n + b^n = c^n for any n greater than 2. “I have a truly marvelous demonstration of this proposition,” he wrote, “which this
margin is too narrow to contain.”
And that’s all he wrote.
Fermat died before supplying the missing proof for what came to be known as Fermat’s Last Theorem. The teaser he left behind continued to tantalize mathematicians for centuries (and made them wish
Fermat had a bigger book to write on). Generations of mathematicians tried—and failed—to prove Fermat’s Last Theorem.
Until Andrew Wiles came along.
For most 10-year-olds, the definition of a good time doesn’t include reading math books for fun. But Wiles was no ordinary 10-year-old. He would hang out at his local library in Cambridge, England,
and surf the shelves for math books.
One day, he spotted a book devoted entirely to Fermat’s Last Theorem. He was tantalized by the mystery of a theorem that was so easy to state, yet so difficult to prove. Lacking the mathematical
chops to tackle the proof, he set it aside for over two decades.
He returned to the theorem later in life as a math professor and devoted seven years to working on it in almost total secrecy. In an ambiguously-titled 1993 lecture in Cambridge, Wiles publicly
revealed that he had solved the centuries-old mystery of Fermat’s Last Theorem. The announcement sent mathematicians in attendance, and around the globe, into a tizzy: “It’s the most exciting thing
that’s happened in — geez — maybe ever, in mathematics,” said Dr. Leonard Adleman. Even The New York Times ran a front-page story on the discovery, exclaiming “At Last Shout of ‘Eureka!’ in Age-Old
Math Mystery.”
But the celebrations proved premature. Wiles had made a mistake in a critical part of his proof. The mistake emerged during the peer-review process after Wiles submitted his proof for publication.
It would take another year, and collaboration with another mathematician, to repair the proof. Describing the day he found the missing piece, Wiles said, “I walked around the department, and I’d keep
coming back to my desk looking to see if it was still there. It was still there. I couldn’t contain myself.” At the end, the proof was 150 pages long—far longer than any book margin would have
allowed Fermat to write on.
Reflecting on how he managed to prove the theorem, Wiles compared the process of discovery to navigating a dark mansion. You start in the first room, he said, and spend months groping, poking, and
bumping into things in a hit-or-miss process. After tremendous disorientation and confusion, you might eventually find the light switch. You then move on to the next dark room and begin the process
all over again.
These breakthroughs, Wiles explains, are “the culmination of—and couldn’t exist without—the many months of stumbling around in the dark that precede them.”
In school, we’re given the false impression that scientists took a straight path to the light switch. Textbooks with lofty titles—The Principles of Physics—magically reveal “the principles” in three
hundred digestible pages. An authority figure then steps up to the lectern to feed us “the truth.” We learn about Newton’s “laws”—as if they arrived by a grand divine visitation or a stroke of
genius—but not the years he spent exploring, revising, and tweaking them. The laws that Newton failed to establish—most notably his experiments in alchemy, which attempted, and spectacularly failed,
to turn lead into gold—don’t make the cut.
The path to the light switch is not a straight one. There are fits and starts, mistakes and corrections, failures and successes.
Be careful if the paths you’re taking to the light switches in your life are straight. If the drugs you’re developing were certain to work, if your client were certain to be acquitted in court, if
your Mars rover were certain to get to its destination, your jobs wouldn’t exist.
It’s the ability to make the most out of uncertainty that creates the most potential value.
Where certainty ends, progress begins.
Sources used:
Stuart Firestein, Ignorance: How It Drives Science, 2012.
Simon Singh, Fermat’s Last Theorem, 1997.
Solving Fermat: Andrew Wiles, https://www.pbs.org/wgbh/nova/proof/wiles.html.
At Last, Shout of ‘Eureka!’ In Age-Old Math Mystery, https://www.nytimes.com/1993/06/24/us/at-last-shout-of-eureka-in-age-old-math-mystery.html;
A Year Later, Snag Persists In Math Proof, https://www.nytimes.com/1994/06/28/science/a-year-later-snag-persists-in-math-proof.html. | {"url":"https://ozanvarol.com/what-you-can-learn-from-this-centuries-old-math-mystery/","timestamp":"2024-11-02T09:30:45Z","content_type":"text/html","content_length":"323607","record_id":"<urn:uuid:9c168660-cf5b-4165-90b0-eaf0d5d091f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00675.warc.gz"} |
Is a 45 out of 50 a good grade?
A 45 out of 50 grade would be equivalent to a 90% or an A- grade, assuming a standard grading scale where 90-100% is an A, 80-89% is a B, 70-79% is a C, and so on.
What is 45 out of 50 as a grade?
The total answer count is 50, which corresponds to 100%; so, to get the value of 1%, divide 50 by 100, giving 0.50. Next, calculate the percentage for 45: divide 45 by the 1% value (0.50), and you get 90.00% as your percentage grade.
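As a quick check of this arithmetic, here is a small Python sketch (not from the original page) that converts a raw score into a percentage and maps it onto the standard 90/80/70/60 letter scale mentioned above; the cutoffs are the common ones and may differ by school.

```python
def percent(score, total):
    return 100 * score / total

def letter(pct):
    # Common US scale: A 90-100, B 80-89, C 70-79, D 60-69, F below 60.
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if pct >= cutoff:
            return grade
    return "F"

p = percent(45, 50)
print(p, letter(p))   # 90.0 A
```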
Is a 45 a failing grade?
If the scale is out of 100, a 45 might be a failing grade. However, if the scale is different, such as out of 50 or another numerical range, the interpretation of a 45 would be different. Weight of Assignments: Assess the weight of the assignments or assessments that contribute.
What is a good score out of 50?
- In a competitive exam where the average score is 25/50, a score of 33/50 may be considered good, since it's higher than the average. - In a standardized test where the highest possible score is 50/
50, a score of 33/50 may be considered average, since it's around 66% of the total score.
What is a good score out of 45?
A score of 40 or above out of a maximum of 45 is considered a strong IB score that can enhance a student's chances of being admitted to top universities.
Is 45 out of 50 a pass?
45 out of 50 answers correct on a test is 90% and gives you an A- or B+ depending on the marking scale your teacher uses.
Is 95 an A or A+?
Common examples of grade conversion are: A+ (97–100), A (93–96), A- (90–92), B+ (87–89), B (83–86), B- (80–82), C+ (77–79), C (73–76), C- (70–72), D+ (67–69), D (65–66), D- (below 65).
Is 50 an F grade?
The issue arises from the fact that in most US grading systems A is 90% and above, B is 80-89, C is 70-79, D is 60-69, and F is anything below 60.
Is 48 out of 50 a good score?
Scoring 48 out of 50 is an excellent result. It's natural to feel disappointed about missing out on a perfect score, but it's important to remember that you still did incredibly well. It might be
helpful to focus on the questions you did answer correctly and acknowledge the effort you put into preparing for the exam.
How much will a 50 affect my grade if I have a 94?
If the 94 is based on the remaining 50% of the overall grade, your grade for the course will be 72 percent.
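That figure is just a weighted average of the two parts; assuming each part counts for 50% of the course, a quick check (illustrative code, not from this page) gives the same result:

```python
def weighted_grade(parts):
    """Weighted average of (score, weight) pairs; the weights should sum to 1."""
    return sum(score * weight for score, weight in parts)

print(weighted_grade([(50, 0.5), (94, 0.5)]))   # 72.0
```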
Is 97 an A+?
An A+ letter grade is equivalent to a 4.0 GPA, or Grade Point Average, on a 4.0 GPA scale, and a percentage grade of 97–100.
Is 93 a good grade?
A is the highest grade you can receive on an assignment, and it's between 90% and 100%. B is still a pretty good grade! This is an above-average score, between 80% and 89%. C is a grade that rests right in the middle.
Is a D+ a passing grade?
Grade Point per Credit
The grades of "A" through "D-," "P," and "S" are passing grades, and credit is earned for courses in which they are awarded. Grades of "D+," "D" or "D-," while considered passing for undergraduate
students, indicate weak performance.
Is a 90 an A?
Thus, an A is a 95, halfway between 90 and 100. An A- is a 91.25, halfway between 90 and 92.5. Etc. Grades between these are averages.
Is 42 50 a good score?
Grade cutoffs, raw score (percent out of 50): A: 42-50 (84-100%); B: 35-41 (70-82%); C: 28-34 (56-68%).
Is 30 50 a passing score?
50% to 59% is Pass. Some schools Fail at 49% or below, while others fail at 39% or below.
Is 69 failing?
A letter grade of a D is technically considered passing because it not a failure. A D is any percentage between 60-69%, whereas a failure occurs below 60%.
Why do grades skip E?
Below that, they added in the dreaded F.” In the 1930s, as the letter-based grading system grew more and more popular, many schools began omitting E in fear that students and parents may misinterpret
it as standing for “excellent.” Thus resulting in the A, B, C, D, and F grading system.
Is A ++ a real grade?
However, in general, an A+ or A grade is typically the highest grade attainable, usually representing a score of 90-100%. Some institutions may have variations on this scale, such as an A++, but
these are relatively rare.
Is a 3.0 A bad GPA?
Is a 3.0 GPA in high school considered good? A 3.0 GPA indicates a grade average of “B” and makes you eligible to apply to a wide range of schools, so yes! A 3.0 GPA is generally considered “good.”
What is a 5.0 GPA?
It indicates that the student only took coursework with a 5.0 grade point average and received all A's (or A+'s). However, when classes are weighted, perfect straight-A grades can result in a 5.0
instead of a standard 4.0. (or even higher). | {"url":"https://www.spainexchange.com/faq/is-a-45-out-of-50-a-good-grade","timestamp":"2024-11-15T01:18:59Z","content_type":"text/html","content_length":"340302","record_id":"<urn:uuid:5987f3e0-6cb5-4873-a18b-40f7b64041ff>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00107.warc.gz"} |
Value of information
Value of information (VOI) in decision analysis is the amount a decision maker would be willing to pay for information prior to making a decision.^[1]. Value of information is specific to a
combination of a particular decision with several options, a particular objective (i.e., outcome of interest that can be quantitatively estimated), and a particular issue that is affected by the
decision and is relevant for the objective. If all such issues are considered at the same time, we talk about expected value of perfect information.
How can value of information be calculated in an assessment in such a way that
• it helps in understanding the impacts of uncertainties on conclusions and
• it helps to direct further assessment efforts to improve guidance to decision making?
This code is an example of how the VOI (value of information) function can be used. The actual function is defined in the code under the title Input.
To calculate value of information, you need
• a decision to be made with at least two different options a decision maker can choose from,
• an objective (i.e., outcome of interest or indicator) that can be quantitatively estimated and optimised,
• an optimising function to be used as the criterion for the best decision,
• an uncertain variable of interest (optional, needed only if partial VOI is calculated for the variable; if omitted, combined value of information is estimated for all uncertain variables in the
assessment model).
The code below is deprecated. Use the code above (Op_en2480/VOI) instead.
Value of information, i.e. the amount of money that the decision-maker is willing, in theory, to pay to obtain a piece of information. Value of information can also be measured in other units than
money, e.g. disability-adjusted life years if health impacts only are considered.
• There are different kinds of indicators under value of information, depending on what level of information is compared with the current situation:
Expected value of perfect information (everything is known perfectly)
Expected value of partial perfect information (one variable is known perfectly, otherwise current knowledge)
Expected value of imperfect information (things are known better but not perfectly)
Expected value of partial imperfect information (one variable is known better but not perfectly, otherwise current knowledge)
Expected value of including uncertainty (a decision analysis can ignore uncertainties and go with expected value of each variable, or include uncertainty and propagate that through the model.
There is a difference especially if the uncertainty distributions are skewed.)
Expected value of including an option (is there any value of including a non-optimal decision option in the final assessment?)
An example output from an ovariable ova with a decision index D1 (with options BAU and D1b) and one index C1. Ncuu means net cost under uncertainty. The other values in the table are also costs, so negative values are savings. Syntax used:
VOI(ova, "D1", indices = "C1")
┃ Var │ evpiResult │ ncuuResult │ Result │ evioResult │ evppiResult ┃
┃ EVPI │ 2.395021 │ 2.482161 │ -8.714023e-02 │ NA │ NA ┃
┃ BAU │ 2.395021 │ NA │ -1.542299e-01 │ 2.549250 │ NA ┃
┃ D1b │ 2.395021 │ NA │ -8.714023e-02 │ 2.482161 │ NA ┃
┃ C1 │ NA │ 2.482161 │ 4.440892e-16 │ NA │ 2.482161 ┃
A previous Analytica version of VOI calculation is archived. The related model file is File:VOI analysis.ANA.
Impact of a strong correlation between the decision and a variable
There is a problem with the approach using the decision as a random variable. The problem occurs with variables that are strongly correlated with the decision variable. The iterations are categorised
into "VOI bins" based on the variable to be studied. In addition, iterations are categorised into "decision bins" based on the value of the decision variable. The idea is to study one VOi bin at a
time and find the best decision bin within that VOI bin. If the best decision is different in different VOI bin, there is some value of knowing to which VOI bin the true value of the variable
belongs. However, if the variable correlates strongly with the decision, it may happen that all iterations that are in a particular VOI bin are also in a particular decision bin. Then, it is
impossible to compare different decision bins to find out which decision is the best in that VOI bin.
This problem can be overcome by assessing counter-factual worlds, because then there is always the same number of iterations in every decision bin. The conclusion of this is that the VOI analysis
using decisions as random variables is a simple and quick screening method, but it cannot be reliably used for a final VOI analysis. In contrast, the counter-factual assessment is the method of
choice for that. Originally developed by Jouni Tuomisto and Marko Tainio, National Public Health Institute (KTL), Finland, 2005. The screening version was developed by Jouni Tuomisto, National
Institute for Health and Welfare (THL), 2009. (c) CC-BY-SA.
This test run shows that the VOI estimates only stabilise if there are more than 17 bins used. The number of iterations was 10000.
Value of information score
The VOI score is the current expected value of perfect information (EVPI) for that variable in an assessment where it is used. If the variable is used in several assessments, it is the sum of EVPIs
across all assessments.
Value of information (VOI) is a decision analysis method that estimates the benefits of collecting additional information. Yokota and Thompson (2004a) described the VOI method as "…a decision analytic
technique that explicitly evaluates the benefits of collecting additional information to reduce or eliminate uncertainty." ^[2] The term value of information covers a number of different analyses
with different requirements and objectives.
To be able to perform a value of information analysis, the researcher needs to define possible decision options, consequences of each option, and uncertainty of each input variable. With the VOI
method, the researcher can estimate the effect of additional information to decision making and guide the further development of the model. Thus, the VOI analysis can be used as a sensitivity
analysis tool.
This review will briefly consider different VOI methods, the requirements of the analysis, the mathematical background and applications. At the end, a short summary of the previously published VOI reviews by Yokota and Thompson (2004a, 2004b) ^[2] ^[3] is provided.
A family of analyses
The term value of information covers a number of different decision analyses. The expected value of perfect information (EVPI) analysis estimates the value of completely eliminating uncertainty
from the particular decision. The EVPI analysis does not consider the sources of uncertainty, but how much the decision would benefit if uncertainty was removed. The VOI of a particular input
variable X can be analysed with an expected value of perfect X information (EVPXI), or expected value of partial perfect information (EVPPI), analysis. The sum of all individual EVPXIs from all input
variables is always less than EVPI.
The situations where uncertainty of the decision could be reduced to zero are exceptional, especially in the field of environmental health. Therefore, the results of EVPI and EVPXI analyses should be treated as the maximum gain that could be achieved by reducing uncertainty. For a more realistic approach, the expected value of sample information (EVSI) and expected value of sample X information (EVSXI) analyses (also called imperfect and partial imperfect information, i.e. EVII and EVPII, respectively) could be used to estimate the value of reducing the uncertainty of the model to a certain level, or of reducing the uncertainty of a certain input variable to a certain level, respectively. The use of these two analyses increases the requirements of the model since the targeted uncertainty level must be defined. The expected value of
including uncertainty (EVIU) evaluates the effect of uncertainty in the specific decision problems and is out of the scope of this review.
Estimating the value of information
The VOI analyses estimate the difference between expected utility of the optimal decision, given new information, and the expected utility of the optimal decision given current information. The
complete review of different mathematical solutions is beyond the scope of this review, and thus only the EVPI is presented here. For those interested in knowing more, the book Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis (Morgan and Henrion 1992) ^[4] and the more recent methodological review by Yokota and Thompson (2004b) ^[3] describe in more detail the mathematical background of the different VOI analyses and the solutions used in past analyses.
EVPI is calculated using the following equation:
EVPI = E(Max(U(d[i],θ))) - Max(E(U(d[i],θ))),
where E=expectation over uncertain parameters θ, Max=maximum over decision options i, U=utility of decision d (i.e., the value of outcome after a particular decision option i is chosen, measured in
money, DALY, or another quantitative metric covering all relevant impacts).
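To make the formula concrete, here is a minimal Monte Carlo sketch in Python (our own illustration, not code from the Opasnet page; the two-option utility model and the distribution of θ are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(loc=0.0, scale=1.0, size=100_000)   # samples of the uncertain parameter

# Utility U(d[i], theta) of each decision option in every sampled "world" (toy model).
U = np.column_stack([
    2.0 * theta,                 # option 0: pays off only when theta is favourable
    np.full_like(theta, 0.5),    # option 1: safe, constant payoff
])

e_max = U.max(axis=1).mean()     # E(Max(U(d[i], theta))): choose per world, then average
max_e = U.mean(axis=0).max()     # Max(E(U(d[i], theta))): choose one option up front
print("EVPI =", e_max - max_e)   # always >= 0
```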
The general formula for EVPII is:
EVPII = E[θ2](U(Max(E[θ2](U(d[i],θ2))),θ2)) - E[θ2](U(Max(E[θ1](U(d[i],θ1))),θ2)),
where θ1 is the prior information and θ2 is the posterior (improved) information. EVPPI can be calculated with the same formula in the case where P(θ2)=1 if and only if θ2=θ1. If θ includes all
variables of the assessment, the formula gives total, not partial, value of information.
The interpretation of the formula is the following (starting from the innermost parenthesis). The utility of each decision option d[i] is estimated in the world of uncertain variables θ. Expectation
over θ is taken (i.e. the probability distribution is integrated over θ), and the best option i of d is selected. The point is that in the first part of the formula, θ is described with the better
posterior information, while the latter part is based on the poorer prior information. Once the decision has been made, the expected utility is estimated again based on the better posterior
information in both the first and second part of the formula. Finally, the difference between the utility after the better and poorer information, respectively, gives the value of information.
The set-up of the analyses
To be able to perform a VOI analysis a modeller needs information on (i) the available decision options, (ii) the consequences of each option, and (iii) the uncertainties and reliability of the data.
In addition to these, both gains and losses of the options must be quantified with common metrics (monetary or non-monetary). In the following chapter these requirements are discussed in more detail.
The first requirement for the VOI analysis is that the available options have been defined. In the economic literature the decision is usually seen e.g. whether or not to invest. In the field of
environmental health the decisions could be e.g. choices between different control technologies or choices between available regulations. In ideal case the possible options have been defined
explicitly by the authorities or the customer of the study. More often the available options are defined during the risk assessment process and risk communication has a crucial part when identifying
the different options. In pure academic research the possible options can be defined by the modeller or the modelling team.
The second requirement is that the consequences of each possible option must be defined (e.g. the effect of some control technology on emissions and consequently on human health). A number of methods, such as DPSEEA or IEHIA, have been used in the field of environmental health to identify and define the causal connections.
The third requirement is that the uncertainties and reliability of the data have been defined explicitly in the model. Again, in the ideal case the uncertainties of the data have been defined or the data
is available so that the modeller can assess the uncertainties. In reality, the data is sparse and the uncertainties must be assessed based on e.g. two different point estimates reported in the
different studies. Expert elicitation ^[5] and similar methods are available to define the uncertainties explicitly. In the absence of data the modeller's choice (author judgement) could be used to
estimate the uncertainties.
The outcomes of the actions must be quantified with a monetary or non-monetary metric. Again, in the economic analyses the common metric is by definition monetary. In the environmental field the
common metric could also be a health effect or some summary metric of health effects (e.g. life expectancy, QALY, DALY). Of course, the use of e.g. QALYs increases the complexity and uncertainty of the analysis.
Applications for risk assessment
The value of information analyses can be used to guide the information gathering and model building. In the decision making, the decisions can be made based on available information or wait and
collect more information. The VOI analysis can estimate the value of additional information for the decision and guide the decision between immediate actions and data collection. In the economic
literature this is often seen as the main value of the VOI analysis. However, in the field of environmental health and risk assessment, situations where the decision maker can allocate more funding
for additional research and data collection are rare, and this kind of use of VOI analysis is more the exception than the rule.
Another way to use VOI analyses is to guide the process of model building. In this case, the decision maker is the modeller or the modeller team who makes the decisions of the modelling work. Thus,
the VOI analyses can be used like a sensitivity analysis method. This is also the most prominent use of VOI analyses in the field of environmental health and risk assessment. The decisions that can
be addressed are e.g. (i) whether (and which parts of) the model should contain explicit uncertainties, (ii) what are the key input parameters or assumptions in the model, and (iii) which parts of
the model should be specified in more detail. All of these start from the question of whether or not model uncertainties have an effect on decision making.
VOI analyses in past risk assessments
The use of value of information analyses in the medical and environmental field applications has been extensively reviewed by Yokota and Thompson in two different papers ^[3]. The first review ^[2]
covers issues such as (i) the use of VOI analyses in different fields, (ii) the use of different VOI analyses, and (iii) motivations behind the analyses, while the second review ^[3] focused in more
detail on environmental health applications and the methodological development and problems. The following summary of the use of VOI analyses is based on these two reviews.
The concept of VOI was defined in the 1960s. The first identified applications in the medical and environmental fields are from the 1970s, but only after 1985 did the use of VOI analyses spread more widely and grow rapidly. In most of the analyses the number of uncertain input variables has been 1-4. EVPI or EVSI analyses have been the most common, while the EVPXI and
EVSXI analyses have been more exceptional. The reviewers noticed that the VOI analyses have been applied in a number of different fields from toxicology to water contamination studies.
The reviewers' view of the published analyses was that most of them were performed to show the usefulness of the analyses rather than actually use the results of analyses in the decision making. The
review showed "a lack of cross-fertilization across topic areas and the tendency of articles to focus on demonstrating the usefulness of the VOI approach rather than applications to actual management
decisions"^[2]. This result may illustrates the complexity of the environmental and risk assessment field decisions. Authors also concluded that inside the medical and environmental field the
different research groups are doing VOI analyses separately without citing or learning from other groups' work.
In the second review, the authors raised several analytical challenges in the VOI analyses ^[3]. These included, e.g., the difficulty of modelling the decisions, valuing the outcomes and characterizing uncertainties. Although the development of personal computers has increased the analytical possibilities, a number of analytical problems still exist.
Standard VOI approach with counter-factual world descriptions
Counter-factual world descriptions mean that we are looking at two or more different world descriptions that are equal in all other respects except for a decision that we are assessing. In the
counter-factual world descriptions, different decision options are chosen. By comparing these worlds, it is possible to learn about the impacts of the decision. With perfect information, we could
make the theoretically best decision by always choosing the right option. If we think about these worlds as Monte Carlo simulations, we run our model several times to create descriptions about
possible worlds. Each iteration (or row in our result table about our objective) is a possible world. For each possible world (i.e., row), we create one or more counter-factual worlds. They are
additional columns which differ from the first column only by the decision option. With perfect information, we can go through our optimising table row by row, and for each row pick the decision
option (i.e., the column) that is the best. The expected outcome of this procedure, subtracted by the outcome we would get by optimising the expectation (net benefit under uncertainty), is the
expected value of perfect information (EVPI). ^[2] ^[3] ^[4] ^[5]
Screening approach with decisions as random variables
In this case, we do not create counter-factual world descriptions, but only a large number of possible world descriptions. The decision that we are considering is treated like any other uncertain
variable in the description, with a probability distribution describing the uncertainty about what actually will be decided. In this case, we are comparing world descriptions that contain a
particular decision option with other world descriptions that contain another decision option. It is important to understand that we are not comparing two counter-factual world descriptions, but we
are comparing a group of possible world descriptions to another group of world descriptions.
The major benefit of the screening approach is that it is not necessary to define decision variables beforehand. Basically any variable can be taken to be a decision, as long as it is meaningful as
a decision and the model has a number of possible worlds simulated with Monte Carlo or another method such as Bayesian belief network (BBN). The idea is to conditionalise the decision variable to one
decision option at a time and then compare these conditionalisations to find out which one of them gives the optimal outcome in the objective.
In this approach, it is not possible to calculate EVPI in such a straightforward way as with counter-factual world descriptions. Therefore, with this approach, we are pretty much restricted to
calculating expected value of partial perfect (and imperfect) information, or EVPPI and EVPII, respectively. Some sophisticated mathematical methods may be developed to calculate this, but it is
beyond my competence. One approach sounds promising to me at the moment. It is used with probabilistic inversion, i.e. using bunches of probability functions instead of point-wise estimates.^[6]
There is a major difference between the two approaches. Counter-factual world descriptions are actually utilising the Do operator described by Pearl ^[7], which looks at impacts of forced changes of
a variable. In contrast, the latter case has the structure of an observational study, which looks at natural changes where several variables change at the same time. Therefore, it is subject to
confounders, which are typical problems in epidemiology: a variable is associated with the effect, but not because it is its cause but because it correlates with the true cause.
Because of this confounding effect, the latter method for value-of-information analysis may result in false negatives: a decision seems to be obvious (i.e., the VOI is zero), but a more careful
analysis of confounders would show that it is not. Therefore, a value-of-information analysis based on a Bayesian net should be repeated with an analysis of counter-factual world descriptions. In
Uninet, counter-factual world descriptions can be created with analytical conditioning, but it does not work with functional nodes, and its applicability is therefore limited.
The value of information is a decision analysis method that has been used, and could be used, in a number of situations in the field of environmental health. The value of information covers a variety of
different analyses with different scopes and requirements. The most difficult analytical challenges relate to the assessment of uncertainties in a model, valuing outcomes, and, especially, modelling
different decisions. In the field of environmental health and risk assessment, identifying and modelling different decisions is probably the most challenging part of the analysis.
Value of information, decision analysis, uncertainty, decision making, optimising
Related files
See also | {"url":"https://dev.opasnet.org/w/Value_of_information_analysis","timestamp":"2024-11-08T09:27:28Z","content_type":"text/html","content_length":"106715","record_id":"<urn:uuid:001a6106-5f98-433c-900d-6b60ed400b81>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00333.warc.gz"} |
Bloblang Arithmetic | Bento
Bloblang supports a range of comparison operators !, >, >=, ==, <, <=, &&, || and mathematical operators +, -, *, /, %. How these operators behave is dependent on the type of the values they're used
with, and therefore it's worth fully understanding these behaviors if you intend to use them heavily in your mappings.
All mathematical operators (+, -, *, /, %) are valid against number values, and addition (+) is also supported when both the left and right hand side arguments are strings. If a mathematical operator
is used with an argument that is non-numeric (with the aforementioned string exception) then a recoverable mapping error will be thrown.
Number Degradation
In Bloblang any number resulting from a method, function or arithmetic is either a 64-bit signed integer or a 64-bit floating point value. Numbers from input documents can be any combination of size
and be signed or unsigned.
When a mathematical operation is performed with two or more integer values Bloblang will create an integer result, with the exception of division. However, if any number within a mathematical
operation is a floating point then the result will be a floating point value.
In order to explicitly coerce numbers into integer types you can use the .ceil(), .floor(), or .round() methods.
The not (!) operator reverses the boolean value of the expression immediately following it, and is valid to place before any query that yields a boolean value. If the following expression yields a
non-boolean value then a recoverable mapping error will be thrown.
If you wish to reverse the boolean result of a complex query then simply place the query within brackets (!(this.foo > this.bar)).
The equality operators (== and !=) are valid to use against any value type. In order for arguments to be considered equal they must match in both their basic type (string, number, null, bool, etc) as
well as their value. If you wish to compare mismatched value types then use coercion methods.
Number arguments are considered equal if their value is the same when represented the same way, which means their underlying representations (integer, float, etc) do not need to match in order for
them to be considered equal.
Numerical comparisons (>, >=, <, <=) are valid to use against number values only. If a non-number value is used as an argument then a recoverable mapping error will be thrown.
Boolean comparison operators (||, &&) are valid to use against boolean values only (true or false). If a non-boolean value is used as an argument then a recoverable mapping error will be thrown. | {"url":"https://warpstreamlabs.github.io/bento/docs/guides/bloblang/arithmetic/","timestamp":"2024-11-06T13:48:56Z","content_type":"text/html","content_length":"23561","record_id":"<urn:uuid:08ae2369-7871-4978-973f-9cc0231bd722>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00880.warc.gz"} |
Quantitative Seismology
Quantitative Seismology
By: Keiiti Aki, Paul Richards
Publication date: September 2002
ISBN: 9781891389634
This book provides a unified treatment of seismological methods that will be of use to advanced students, seismologists, and scientists and engineers working in all areas of seismology.
Title information
This new edition of the classic text by Aki and Richards has at last been updated throughout to systematically explain key concepts in seismology. Now in one volume, the book provides a unified
treatment of seismological methods that will be of use to advanced students, seismologists, and scientists and engineers working in all areas of seismology.
Language: English
Publisher: University Science Books
1. Introduction
Suggestions for Further Reading
2. Basic Theorems in Dynamic Elasticity
2.1 Formulation
2.2 Stress-Strain Relations and the Strain-Energy Function
2.3 Theorems of Uniqueness and Reciprocity
2.4 Introducing Green's Function for Elastodynamics
2.5 Representation Theorems
2.6 Strain-Displacement Relations and Displacement-Stress Relations in General Orthogonal Curvilinear Coordinates
Suggestions for Further Reading
3. Representation of Seismic Sources
3.1 Representation Theorems for an Internal Surface: Body-Force Equivalents for Discontinuities in Traction and Displacement
3.2 A Simple Example of Slip on a Buried Fault
3.3 General Analysis of Displacement Discontinuities across an Internal Surface E
3.4 Volume Sources: Outline of the Theory and Some Simple Examples
Suggestions for Further Reading
4. Elastic Waves from a Point Dislocation Source
4.1 Formulation: Introduction of Potentials
4.2 Solution for the Elastodynamic Green Function in a Homogeneous, Isotropic Unbounded Medium
4.3 The Double-Couple Solution in an Infinite Homogeneous Medium
4.4 Ray Theory for Far-Field P-waves and S-waves from a Point Source
4.5 The Radiation Pattern of Body Waves in the Far Field for a Point Shear Dislocation of Arbitrary Orientation in a Spherically Symmetric Medium
Suggestions for Further Reading
5. Plane Waves in Homogeneous Media and Their Reflection and Transmission at a Plane Boundary
5.1 Basic Properties of Plane Waves in Elastic Media
5.2 Elementary Formulas for Reflection/Conversion/Transmission Coefficients
5.3 Inhomogeneous Waves, Phase Shifts, and Interface Waves
5.4 A Matrix Method for Analyzing Plane Waves in Homogeneous Media
5.5 Wave Propagation in an Attenuating Medium: Basic Theory for Plane Waves
5.6 Wave Propagation in an Elastic Anisotropic Medium: Basic Theory for Plane Waves
Suggestions for Further Reading
6. Reflection and Refraction of Spherical Waves; Lamb's Problem
6.1 Spherical Waves as a Superposition of Plane Waves and Conical Waves
6.2 Reflection of Spherical Waves at a Plane Boundary: Acoustic Waves
6.3 Spherical Waves in an Elastic Half-Space: The Rayleigh Pole
6.4 Cagniard-De Hoop Methods for Line Sources
6.5 Cagniard-De Hoop Methods for Point Sources
6.6 Summary of Main Results and Comparison between Different Methods
Suggestions for Further Reading
7. Surface Waves in a Vertically Heterogeneous Medium
7.1 Basic Properties of Surface Waves
7.2 Eigenvalue Problem for the Displacement-Stress Vector
7.3 Variational Principle for Love and Rayleigh Waves
7.4 Surface-Wave Terms of Green's Function for a Vertically Heterogeneous Medium
7.5 Love and Rayleigh Waves from a Point Source with Arbitrary Seismic Moment
7.6 Leaky Modes
Suggestions for Further Reading
8. Free Oscillations of the Earth
8.1 Free Oscillations of a Homogeneous Liquid Sphere
8.2 Excitation of Free Oscillations by a Point Source
8.3 Surface Waves on the Spherical Earth
8.4 Free Oscillations of a Self-Gravitating Earth
8.5 The Centroid Moment Tensor
8.6 Splitting of Normal Modes Due to the Earth's Rotation
8.7 Spectral Splitting of Free Oscillations Due to Lateral Inhomogeneity of the Earth's Structure
Suggestions for Further Reading
9. Body Waves in Media with Depth-Dependent Properties
9.1 Cagniard's Method for a Medium with Many Plane Layers: Analysis of a Generalized Ray
9.2 The Reflectivity Method for a Medium with Many Plane Layers
9.3 Classical Ray Theory in Seismology
9.4 Inversion of Travel-Time Data to Infer Earth Structure
9.5 Wave Propagation in Media Having Smoothly Varying Depth-Dependent Velocity Profiles within Which Turning Points Are Present
9.6 Body-Wave Problems for Spherically Symmetric Earth Models in Which Discontinuities are Present between Inhomogeneous Layers
9.7 Comparison between Different Methods
Suggestions for Further Reading
10. The Seismic Source: Kinematics
10.1 Kinematics of an Earthquake as Seen at Far Field
10.2 Kinematics of an Earthquake as Seen at Near Field
Suggestions for Further Reading
11. The Seismic Source: Dynamics
11.1 Dynamics of a Crack Propagating with Prescribed Velocity
11.2 Dynamics of Spontaneous Planar Rupture Propagation
Suggestions for Further Reading
12. Principles of Seismometry
12.1 Basic Instrumentation
12.2 Frequency and Dynamic Range of Seismic Signals and Noise
12.3 Detection of Signal
Suggestions for Further Reading
Appendix 1: Glossary of Waves
Appendix 2: Definition of Magnitudes
“The continued popularity of this text is testament to the meticulous detail with which important mathematical formulas are derived; there simply is nothing of any importance that is glossed over or …”
-Pure and Applied Geophysics, 2005 (162)
“An excellent study and reference book for seismologists well-grounded in the methods of mathematical physics. This updated version of Aki and Richard’s classical geophysical text deserves a place in
every serious geophysicist’s library.”
-The Leading Edge
“For more than twenty years, Aki and Richards’ classic has maintained its position as the most complete and accessible text on theoretical seismology. Now brought up to date throughout, and with
several completely revised chapters, this book will remain “The Bible” of the subject for years to come.”
-Bruce R. Julian, USGS
“An important renewal of a classic geophysics text. The clarity of the chapters that describe fundamental seismic wave propagation remains undiminished.”
-Jeffrey Park, Yale University
“Still the preeminent text on analytic theory and methods in seismology.”
-D.J. Andrews, USGS
From reviews of the first edition:
“This truly exquisite text/monograph provides advanced students and professionals with a wonderfully detailed and comprehensive but lucid account of physical, mathematical and instrumentational
principles that lie at the quantitative heart of modern seismology…An extraordinary publishing event.”
-Sci Tech Book News | {"url":"https://uscibooks.directfrompublisher.com/9781891389634","timestamp":"2024-11-11T00:49:37Z","content_type":"application/xhtml+xml","content_length":"38222","record_id":"<urn:uuid:111db7ec-a202-432e-bc88-bf8e59f96d57>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00109.warc.gz"} |
Introduction to Chemical Engineering Processes/Numerical Root Finding Methods - Wikibooks, open books for an open world
Rootfinding is the determination of solutions to single-variable equations or to systems of n equations in n unknowns (provided that such solutions exist). The basics of the method revolve around the
determination of roots
A root of a function ${\displaystyle F(x_{1},x_{2},...)}$ in any number of variables is defined as the solution to the equation ${\displaystyle F(x_{1},x_{2},...)=0}$. In order to use any of the
numerical methods in this section, the equation should be put in a specific form, and this is one of the more common ones, used for all methods except the iterative method.
However, it is easy to put a function into this form. If you start with an equation of the form:
${\displaystyle F_{1}(x_{1},x_{2},...)=F_{2}(x_{1},x_{2},...)}$
then subtracting ${\displaystyle F_{2}}$ will yield the required form. Do not forget to do this, even if there is only a constant on one side!
If you want to use the bisection method later in this section to find one of the solutions of the equation ${\displaystyle 1=x^{2}}$, you should rewrite the equation as ${\displaystyle 0=x^{2}-1}$ so
as to put it in the correct form.
Since any equation can be put into this form, the methods can potentially be applied to any function, though they work better for some functions than others.
An analytical solution to an equation or system is a solution which can be arrived at exactly using some mathematical tools. For example, consider the function ${\displaystyle y=ln(x)}$ (graphed in the original page).
The root of this function is, by convention, when ${\displaystyle y=0}$, or when this function crosses the x-axis. Hence, the root will occur when ${\displaystyle ln(x)=0\rightarrow x=e^{0}=1}$
The answer x=1 is an analytical solution because through the use of algebra, we were able to come up with an exact answer.
On the other hand, attempting to solve an equation like:
${\displaystyle -x=ln(x)}$
analytically is sure to lead to frustration because it is not possible with elementary methods. In such a case it is necessary to seek a numerical solution, in which guesses are made until the answer
is "close enough", but you'll never know what the exact answer is.
All that the numerical methods discussed below do is give you a systematic method of guessing solutions so that you'll be likely (and in some cases guaranteed) to get closer and closer to the true
answer. The problem with numerical methods is that most are not guaranteed to work without a good enough initial guess. Therefore, it is valuable to try a few points until you get somewhere close and
then start with the numerical algorithm to get a more accurate answer. They are roughly in order from the easiest to use to the more difficult but faster-converging algorithms.
Iterative solutions in their purest form will solve the desired function so that it is in the form:
${\displaystyle x=f(x)}$
Then, a value for x is guessed, and f(x) is calculated. The new value of x is then re-inserted into f(x), and the process is repeated until the value of x changes very little.
The following example illustrates this procedure.
Use an iterative solution to calculate the root of ${\displaystyle x+ln(x)=0}$
Solution: Solve the equation for x:
${\displaystyle e^{-x}=x}$
First we need to guess an x to get it started. Let's try ${\displaystyle x=0.5}$
Then we have:
${\displaystyle x=e^{-0.5}=0.6065}$
${\displaystyle x_{2}=e^{-0.6065}=0.5453}$
${\displaystyle x_{3}=e^{-0.5453}=0.5796}$
${\displaystyle x_{4}=e^{-0.5796}=0.5601}$
${\displaystyle x_{5}=e^{-0.5601}=0.5711}$
${\displaystyle x_{6}=e^{-0.5711}=0.5649}$
${\displaystyle x_{7}=e^{-0.5649}=0.5684}$
Thus to two decimal places the root is ${\displaystyle x=0.56}$. More iterations could be performed to get a more accurate answer if desired.
This method has some rather severe limitations as we'll see in this example:
Repeat the above but this time solve for x a different way. What do you find?
Solution: To illustrate the point, let's start with a guess of ${\displaystyle x=0.56}$
The other way to solve for x is the more obvious way: ${\displaystyle x=-ln(x)}$
${\displaystyle x=-ln(0.56)=0.5798}$
${\displaystyle x_{2}=-ln(0.5798)=0.5451}$
${\displaystyle x_{3}=-ln(0.5451)=0.6068}$
Clearly, even though we started with a very good guess, the solution is diverging!
This example shows that the success of the iteration method strongly depends on the properties of the function on the right-hand side. In particular, it has to do with how large the slope of the
function is at the root. If the slope is too large, the method will not converge, and even if it is small the method converges slowly. Therefore, it is generally undesirable to use this method,
though some more useful algorithms are based on it (which is why it is presented here).
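As a minimal sketch of the procedure (not part of the original Wikibooks text; the function name is our own), the iteration can be written in a few lines of Python:

```python
import math

def fixed_point(f, x0, tol=1e-6, max_iter=100):
    """Iterate x_{n+1} = f(x_n) until successive guesses agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# x + ln(x) = 0 rewritten as x = exp(-x), starting from the guess 0.5:
print(fixed_point(lambda x: math.exp(-x), 0.5))   # ~0.567143
```

Rewriting the same equation as x = -ln(x) instead makes this plain iteration diverge, exactly as Example 2 above shows.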
Although the iterative solution method has its downfalls, it can be drastically improved through the use of averaging. In this method, the function is still solved for x in the form:
${\displaystyle x=f(x)}$
From the initial guess ${\displaystyle x_{0}}$, the function f(x) is used to generate the second guess ${\displaystyle x_{1}}$. However, rather than simply putting ${\displaystyle x_{1}}$ into f(x),
a weighted average of ${\displaystyle x_{0}}$ and ${\displaystyle x_{1}}$ is made:
${\displaystyle x_{1}(New)=\alpha *x_{0}+(1-\alpha )*x_{1}(old),0\leq \alpha \leq 1}$
The term ${\displaystyle \alpha }$ is called the weight. The most common value of the weight is one-half, in which case the next value to plug into f(x) is simply the average of ${\displaystyle x_
{0}}$ and ${\displaystyle x_{1}(old)}$:
${\displaystyle x_{1}(New)={\frac {x_{0}+x_{1}(Old)}{2}}}$
This new value is then plugged into f(x), averaged with the result, and this is repeated until convergence.
The following examples show that this method converges faster and with more reliability than normal iterative solution.
Find the root of ${\displaystyle x+ln(x)=0}$ using the iterative method with a weight of ${\displaystyle \alpha ={\frac {1}{2}}}$
Solution: Let's start with a guess of 0.5 like last time, and compare what happens this time from what happened with normal iteration.
${\displaystyle x_{1}=e^{-0.5}=0.6065}$
${\displaystyle x_{1}(new)={\frac {0.5+0.6065}{2}}=0.5533}$
${\displaystyle x_{2}=e^{-0.5533}=0.5751}$
${\displaystyle x_{2}(new)={\frac {0.5533+0.5751}{2}}=0.5642}$
${\displaystyle x_{3}=e^{-0.5642}=0.5688}$
Here, after only three evaluations of the function (which usually takes the longest time of all the steps), we have the root to the same accuracy as seven evaluations with the other method!
The method is not only faster-converging but also more stable, so that it can actually be used solving the equation the other way too.
Starting with an initial guess of ${\displaystyle x=0.5}$ and using ${\displaystyle x=-ln(x)}$ and the weighted iteration method with ${\displaystyle \alpha ={\frac {1}{2}}}$, find the root of the
Solution: Starting with ${\displaystyle x_{0}=0.5}$ we have:
${\displaystyle x_{1}=-ln(0.5)=0.693}$
${\displaystyle x_{1}(new)={\frac {0.693+0.5}{2}}=0.597}$
${\displaystyle x_{2}=-ln(0.597)=0.517}$
${\displaystyle x_{2}(new)={\frac {0.517+0.597}{2}}=0.557}$
${\displaystyle x_{3}=-ln(0.557)=0.5856}$
${\displaystyle x_{3}(new)={\frac {0.5856+0.557}{2}}=0.571}$
${\displaystyle x_{4}=-ln(0.571)=0.560}$
${\displaystyle x_{4}(new)={\frac {0.560+0.571}{2}}=0.565}$
${\displaystyle x_{5}=-ln(0.565)=0.570}$
Therefore we can (slowly) converge in this case using the weighted iteration method to the solution.
Notice that in this case, if we use regular iteration the result only converged if the equation was solved in a certain way. Using weighted iteration, it is possible to solve it either way and obtain
a solution, but one way is clearly faster than the other. However, weighting will accelerate the algorithm in most cases and is relatively easy to implement, so it is a worthwhile method to use.
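A hedged sketch of the weighted variant (again our own illustration) only changes how the next guess is formed:

```python
import math

def weighted_iteration(f, x0, alpha=0.5, tol=1e-6, max_iter=100):
    """Damped fixed-point iteration: x_next = alpha*x + (1 - alpha)*f(x)."""
    x = x0
    for _ in range(max_iter):
        x_new = alpha * x + (1 - alpha) * f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# With alpha = 1/2, both rearrangements of x + ln(x) = 0 converge from x0 = 0.5:
print(weighted_iteration(lambda x: math.exp(-x), 0.5))   # ~0.567143
print(weighted_iteration(lambda x: -math.log(x), 0.5))   # ~0.567143
```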
Let us consider an alternative approach to rootfinding. Consider an equation f(x) = 0 whose roots we desire to find. If we let a second variable ${\displaystyle y=f(x)}$, then y will (almost
always) change sign between the left-hand side of the root and the right-hand side. This can be seen in the above picture of ${\displaystyle y=ln(x)}$, which changes from negative to the left of the
root ${\displaystyle x=1}$ to positive to its right.
The bisection method works by taking the observation that a function changes sign between two points, and narrowing the interval in which the sign change occurs until the root contained within is
tightly enclosed. This only works for a continuous function, in which there are no jumps or holes in the graph, but a large number of commonly-used functions are like this including logarithms (for
positive numbers), sine and cosine, and polynomials.
As a more formalized explanation, consider a function ${\displaystyle y=f(x)}$ that changes sign between ${\displaystyle x=a}$ and ${\displaystyle x=b}$ We can narrow the interval by:
1. Evaluating the function at the midpoint
2. Determining whether the function changes signs or not in each sub-interval
3. If the continuous function changes sign in a sub-interval, that means it contains a root, so we keep the interval.
4. If the function does not change sign, we discard it. This can potentially cause problems if there are two roots in the interval, so the bisection method is not guaranteed to find ALL of the roots.
Though the bisection method is not guaranteed to find all roots, it is guaranteed to find at least one if the original endpoints had opposite signs.
The process above is repeated until you're as close as you like to the root.
Find the root of ${\displaystyle y=x+ln(x)}$ using the bisection method
By plugging in some numbers, we can find that the function changes sign between ${\displaystyle x=0.5}$ ${\displaystyle (y=-0.193)}$ and ${\displaystyle x=1}$ ${\displaystyle (y=1)}$. Therefore,
since the function is continuous, there must be at least one root in this interval.
• First Interval: ${\displaystyle 0.5(-)<x<1(+)}$
• Midpoint: ${\displaystyle x=0.75}$
• y at midpoint: ${\displaystyle y=0.75+ln(0.75)=0.462}$ Therefore, the sign changes between 0.5 and 0.75 and does not between 0.75 and 1.
• New Interval: ${\displaystyle 0.5(-)<x<0.75(+)}$
• Midpoint: ${\displaystyle x=0.625}$
• y at midpoint: ${\displaystyle y=0.155}$
• New Interval: ${\displaystyle 0.5(-)<x<0.625(+)}$
• Midpoint: ${\displaystyle x=0.5625}$
• y at midpoint: ${\displaystyle y=-0.0129}$
We could keep doing this, but since this result is very close to the root, lets see if there's a number smaller than 0.625 which gives a positive function value and save ourselves some time.
• x Value: ${\displaystyle x=0.57}$
• y value: ${\displaystyle y=0.00788}$
Hence x lies between 0.5625 and 0.57 (since the function changes sign on this interval).
Note that convergence is slow but steady with this method. It is useful for refining crude approximations to something close enough to use a faster but non-guaranteed method such as weighted iteration.
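A compact implementation of the procedure might look like the following sketch (our own code, not from the Wikibooks page):

```python
import math

def bisect(f, a, b, tol=1e-6, max_iter=100):
    """Find a root of a continuous f in [a, b], given f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        mid = (a + b) / 2
        fmid = f(mid)
        if abs(fmid) < tol or (b - a) / 2 < tol:
            return mid
        if fa * fmid < 0:          # sign change in [a, mid]: keep the left half
            b, fb = mid, fmid
        else:                      # otherwise the sign change is in [mid, b]
            a, fa = mid, fmid
    return (a + b) / 2

print(bisect(lambda x: x + math.log(x), 0.5, 1.0))   # ~0.567143
```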
The Regula Falsi method is similar to the bisection method. You must again start with two x values between which the function f(x) you want to find the root of changes sign. However, this method attempts to
find a better place than the midpoint of the interval to split it.It is based on the hypothesis that instead of arbitrarily using the midpoint of the interval as a guide, we should do one extra
calculation to try and take into account the shape of the curve. This is done by finding the secant line between two endpoints and using the root of that line as the splitting point.
More formally:
• Draw or calculate the equation for the line between the two endpoints (a,f(a)) and (b,f(b)).
• Find where this line intersects the x-axis (or when y = 0), giving you x = c
• Use this x value to evaluate the function, giving you f(c)
• The sub-intervals are then treated as in the bisection method. If the sign changes between f(a) and f(c), keep the interval; otherwise, throw it away. Do the same between f(c) and f(b).
• Repeat until you're at a desired accuracy.
Use these two formulas to solve for the secant line y = mx + B:
${\displaystyle m={\frac {f(b)-f(a)}{b-a}}}$
${\displaystyle B=f(b)-m*b=f(a)-m*a}$ (you can use either)
The regula falsi method is guaranteed to converge to a root, but it may or may not be faster than the bisection method, depending on how long it takes to calculate the slope of the line and the shape
of the function.
Find the root of ${\displaystyle x+ln(x)=0}$ but this time use the regula falsi method.
Solution: Be careful with your bookkeeping with this one! It's more important to keep track of y values than it was with bisection, where all we cared about was the sign of the function, not its
actual value.
For comparison with bisection, let's choose the same initial guesses: ${\displaystyle a=0.5}$ and ${\displaystyle b=1}$, for which ${\displaystyle f(a)=-0.193}$ and ${\displaystyle f(b)=1}$.
• First interval: ${\displaystyle 0.5<x<1,-0.193(-)<f(x)<1(+)}$
• Secant line: ${\displaystyle y=2.386x-1.386}$
• Root of secant line: ${\displaystyle x=0.581}$
• Function value at root: ${\displaystyle f(x)=0.581+ln(0.581)=0.038(+)}$
Notice that in this case, we can discard a MUCH larger interval than with the bisection method (which would use ${\displaystyle x=0.75}$ as the splitting point)
• Second interval: ${\displaystyle 0.5<x<0.581,-0.193(-)<f(x)<0.038(+)}$
• Secant line: ${\displaystyle y=2.852x-1.619}$
• Root of secant line: ${\displaystyle x=0.5676}$
• Function value at root: ${\displaystyle f(x)=0.0013}$
We come up with practically the exact root after only two iterations!
In some cases, the regula falsi method will take longer than the bisection method, depending on the shape of the curve. However, it is generally worth trying for a couple of iterations due to the
drastic speed increases possible.
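The same interval-narrowing idea with a secant-based splitting point can be sketched as follows (an illustrative implementation, not taken from the original page):

```python
import math

def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """False-position method: split [a, b] at the x-intercept of the secant line."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # root of the secant through (a, fa) and (b, fb)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:                    # sign change in [a, c]
            b, fb = c, fc
        else:                              # sign change in [c, b]
            a, fa = c, fc
    return c

print(regula_falsi(lambda x: x + math.log(x), 0.5, 1.0))   # ~0.567143
```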
In this method, we attempt to find the root of a function y = f(x) using the tangent lines to functions. This is similar to the secant method, except it "cuts loose" from the old point and only
concentrates on the new one, thus hoping to avoid hang-ups such as the one experienced in the example.
Since this class assumes students have not taken calculus, the tangent will be approximated by finding the equation of a line between two very close points, which are denoted (x) and ${\displaystyle
(x+\delta x)}$. The method works as follows:
1. Choose one initial guess, ${\displaystyle x_{1}}$
2. Evaluate the function f(x) at ${\displaystyle x=x_{1}}$ and at ${\displaystyle x=x_{1}+\delta x}$ where ${\displaystyle \delta x}$ is a small number. These yield two points on your (approximate)
tangent line.
3. Find the equation for the tangent line using the formulas given above.
4. Find the root of this line. This is ${\displaystyle x_{2}}$
5. Repeat steps 2-4 until you're as close as you like to the root.
This method is not guaranteed to converge unless you start off with a good enough first guess, which is why the guaranteed methods are useful for generating one. However, since this method, when it
converges, is much faster than any of the others, it is preferable to use if a suitable guess is available.
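Before working the example, here is a minimal sketch of the method in Python (our own code; the step size δx and names are illustrative):

```python
import math

def tangent_method(f, x0, dx=1e-3, tol=1e-6, max_iter=100):
    """Newton-like iteration using a finite-difference slope in place of the derivative."""
    x = x0
    for _ in range(max_iter):
        slope = (f(x + dx) - f(x)) / dx    # approximate slope of the tangent at x
        x_new = x - f(x) / slope           # root of that approximate tangent line
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

print(tangent_method(lambda x: x + math.log(x), 0.5))   # ~0.567143
```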
Find the root of ${\displaystyle x+ln(x)=0}$ using the tangent method.
Solution: Let's guess ${\displaystyle x_{1}=0.5}$ for comparison with iteration. Choose ${\displaystyle \delta (x)=0.001}$
• ${\displaystyle f(x_{1})=f(0.5)=-0.193}$
• ${\displaystyle f(x_{1}+\delta x)=f(0.501)=-0.190}$
• Tangent line: ${\displaystyle y=2.85x-1.618}$
• Root of tangent line: ${\displaystyle x=0.5677}$
Already we're as accurate as any other method we've used so far after only one calculation! | {"url":"https://en.wikibooks.org/wiki/Introduction_to_Chemical_Engineering_Processes/Numerical_Root_Finding_Methods","timestamp":"2024-11-03T00:37:27Z","content_type":"text/html","content_length":"183038","record_id":"<urn:uuid:bf096d78-3dd7-4032-9f5e-f7638c7aec32>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00067.warc.gz"} |
MODELING AND ANALYSIS OF AC OUTPUT POWER FACTOR FOR WIRELESS CHARGERS IN ELECTRIC VEHICLES - Nexgen Technology
by nexgentech | Oct 31, 2017 | ieee project
This paper presents a general mathematical expression and characteristic analysis of the output power factor before rectification on the receiver side for wireless chargers in electric vehicles.
This power factor is usually regarded as unity (i.e., the AC output voltage is in phase with the current), based on fundamental harmonic approximation (FHA). However, the default unity power factor
assumption is not accurate for output power derivation even at resonance frequency. This study explores not only output power factor characteristics for different frequencies or power levels, but
also the phase relationships of the input and output AC voltages. The continuous conduction mode (CCM) and discontinuous conduction mode (DCM) are both analyzed. An integrated LCC compensation
topology is selected as the research object, and its analysis process can be readily extended to other common topologies. Furthermore, this study is beneficial for the implementation of some control
strategies requiring precise power computation/estimation, e.g. feedforward control or model prediction control. Finally, a comparison of numerical and experimental results with various misalignment
cases validates correctness of the proposed theoretical derivation and analysis methodology.
A typical WPT system includes several stages, such as a rectifier with power factor correction (PFC), an inverter, a compensation network on the transmitter side, a magnetic coupler (including
transmitter and receiver coils), a compensation network on the receiver side and a rectifier for charging the DC battery. A DC-DC converter may be added between the rectifier and inverter on the
transmitter side for input DC voltage adjustment. Four basic compensation topologies are labeled as series-series (SS), series-parallel (SP), parallel-series (PS) and parallel-parallel (PP),
according to the way the capacitors are connected to the transmitter and receiver coils. Some other novel compensation topologies have been proposed recently. In one work, a series-parallel-series (SPS) compensation topology is presented. In this new design, one capacitor is connected in series while the other is connected in parallel with the transmitter coil. On the receiver side, one capacitor is connected in series with the coil. Thus both SS and PS characteristics appear in the topology. An LCL network has also been proposed, in which the transmitter is featured as a constant current source. In another design, a series-parallel LCC compensation is used for better performance, despite the tricky parameter design needed for control stability and soft-switching realization. An integrated LCC compensation has been proposed to reduce the size and weight of the additional inductors, and a further work introduces a CLCL network where bidirectional power transfer can be achieved.
Reliable acquisition of output power factor via thorough theoretical derivation is beneficial for the implementation of several control algorithms (e.g., feedforward control, model prediction
control, etc.), which require real-time precise estimation of output power, supposing the input and output voltages and switching frequency are known. On the other hand, the exploration of voltage/
current phase relationships and power analysis at various frequencies could make contributions to the design of a novel compensation topology. Therefore, this work is meaningful to the development of
effective control strategies in WPT systems and circuit design as well. A LCL converter can be formed by adding an LC compensation network on the primary side or on both primary (transmitter) and
secondary (receiver) sides. The advantage for the LCL converter at the resonant frequency is that the current in the primary side coil can be independent of the load condition, or in other words, the
LCL network performs like a current source. However, the design of an LCL converter usually requires additional inductors. To reduce the additional inductor size and cost, usually a capacitor is put
in series with the primary side coil, which forms an LCC compensation network. By utilizing an LCC compensation network, a zero current switching (ZCS) condition could be achieved for higher
efficiency by tuning the compensation network parameters. Also, when the LCC compensation network is adopted at the secondary side, the reactive power at the secondary side could be somehow
compensated and the current distortion might be reduced. Consequently, in order to verify the proposed theoretical derivation, an integrated LCC compensation topology is selected as a specific
research object. Extension of the presented analysis to other topologies is based on simple transformation rules.
In this paper, the exploration of the AC power factor characteristics and voltage phase relationships in wireless chargers of EVs is proposed, in order to correct a common misunderstanding that the
AC output power factor of a WPT system is always unity. Continuous conduction mode (CCM) and discontinuous conduction mode (DCM) with various frequencies are discussed, covering expected operation
conditions. An equivalent output voltage curve is introduced to decrease the calculation complexity in DCM. With simple transformation, the presented methodology for an integrated LCC compensation
topology can be readily extended to other WPT systems. It also contributes the new topology design and realization of some control strategies with precise power calculation/estimation required. The
comparison of experimental and calculated results proves the correctness and validity of the proposed strategy.
[1] M. A. Delucchi, C. Yang, A. F. Burke, J. M. Ogden, K. Kurani, J. Kessler, and D. Sperling, "An assessment of electric vehicles: technology, infrastructure requirements, greenhouse-gas emissions, petroleum use, material use, lifetime cost, consumer acceptance and policy initiatives," Philos. Trans. R. Soc. A, Math. Phys. Eng. Sci., vol. 372, no. 2006, pp. 325-351, Jan. 2014.
[2] J. Seixas, S. Simoes, L. Dias, A. Kanudia, P. Fortes, and M. Gargiulo, "Assessing the cost-effectiveness of electric vehicles in European countries using integrated modeling," Energy Policy, vol. 80, pp. 165-176, May 2015.
[3] M. Ettorre and A. Grbic, "A transponder-based, nonradiative wireless power transfer," IEEE Antennas and Wirel. Propag. Lett., vol. 11, pp. 1150-1153, Oct. 2012.
[4] L. Xie, Y. Shi, Y. T. Hou, and A. Loiu, "Wireless power transfer and applications to sensor networks," IEEE Wirel. Commun., vol. 20, no. 4, pp. 140-145, Aug. 2013.
[5] K. Ghate and L. Dole, "A review on magnetic resonance based wireless power transfer system for electric vehicles," in Proc. 2015 Int. Conf. Pervasive Comput. (ICPC), 2015, 3 pp.
[6] S. Kong, B. Bae, J. J. Kim, S. Kim, D. H. Jung, and J. Kim, "Electromagnetic radiated emissions from a repeating-coil wireless power transfer system using a resonant magnetic field coupling," in Proc. 2014 IEEE Wirel. Power Transf. Conf. (WPTC), 2014, pp. 138-141.
[7] W. Zhong and S. Y. R. Hui, "Auxiliary circuits for power flow control in multifrequency wireless power transfer systems with multiple receivers," IEEE Trans. Power Electron., vol. 30, no. 10, pp. 5902-5910, Oct. 2015.
[8] K. Lee and D. H. Cho, "Diversity analysis of multiple transmitters in wireless power transfer system," IEEE Trans. Magn., vol. 49, no. 6, pp. 2946-2952, Jun. 2013.
[9] D. Ahn and S. Hong, "Wireless power transfer resonance coupling amplification by load-modulation switching controller," IEEE Trans. Ind. Electron., vol. 62, no. 2, pp. 898-909, Feb. 2015.
[10] R. Feng, Q. Li, Q. Zhang, and J. Qin, "Robust secure transmission in MISO simultaneous wireless information and power transfer system," IEEE Trans. Veh. Technol., vol. 64, no. 1, pp. 400-405, Jan. 2015.
Soil Calculator - Calculate Soil Volume for your Garden - [100% Free]
When it comes to purchasing soil for the garden, people either overspend or end up returning to the garden shop to buy some more. Either way, it’s money and effort wasted. But things need not be so
if you use a soil calculator to help you calculate the amount of garden soil you might need.
How to use the soil calculator?
Starting a garden in itself is a huge task. It's an expert's job, but that should not deter you from planning a garden. You need to consider a lot of factors, and one of the most important is how much soil
to use. This soil calculator is a handy tool you can use to find out just that. It’s also known as a garden soil calculator, a soil volume calculator or a cubic feet calculator for soil and here are
the steps to use it:
• First, enter the value of the Length and choose the unit of measurement from the drop-down menu.
• Then enter the value of the Width and choose the unit of measurement from the drop-down menu.
• Next, enter the value of the Depth and choose the unit of measurement from the drop-down menu.
• After that, enter the value of the Area and choose the unit of measurement from the drop-down menu.
• The next thing to enter is the value of the Volume Needed and choose the unit of measurement from the drop-down menu.
• Then enter the value of the Density and choose the unit of measurement from the drop-down menu.
• Finally, enter the value of the Weight Needed and choose the unit of measurement from the drop-down menu.
• After entering all of these values, the soil calculator will automatically give you the values of the Price per Unit Mass, Price per Unit Volume, and Total Cost.
How do I calculate how much soil I need?
One of the most asked questions when going into an adventure in gardening is “How much soil should I buy?” This is best answered if you have established the required volume of soil. You can do this
by using the garden soil calculator or by performing the following steps:
• Determine the width and length of the area you plan to cover with the soil. Let’s assume a width W = 10 yards and a length L = 20 yards.
• Calculate the area by simply multiplying the value of the length and the width. In this case
A = L x W = 20 x 10
The computation yields an area A = 200 yd².
In cases when the area is of a unique shape, manually calculate the area and enter it directly into the soil volume calculator.
• Next, decide on the depth or thickness of your garden’s topsoil layer. For this, let’s assume this value to be D = 0.5 yards.
• Multiply the depth with the area to get the volume:
V = 200 x 0.5 = 100 yd³.
This is the volume of soil needed to cover an area with a length of 20 yd, a width of 10 yd and a depth of 0.5 yd. You can check the accuracy of this value using the cubic feet calculator for soil.
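For readers who prefer to script it, here is a minimal sketch of the same calculation, using the worked example above; the function and variable names are chosen for this illustration only.

```python
def soil_volume(length_yd: float, width_yd: float, depth_yd: float) -> float:
    """Return the soil volume in cubic yards for a rectangular bed."""
    area = length_yd * width_yd          # yd^2
    return area * depth_yd               # yd^3

volume_yd3 = soil_volume(20, 10, 0.5)    # the worked example: 20 yd x 10 yd x 0.5 yd deep
volume_ft3 = volume_yd3 * 27             # 1 cubic yard = 27 cubic feet
print(f"{volume_yd3:.1f} yd^3  ({volume_ft3:.0f} ft^3)")   # 100.0 yd^3 (2700 ft^3)
```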
How much dirt do I need for a raised bed?
How much soil to purchase depends upon the depth and size of your garden bed. However, many of us aren’t sure about how high to make the raised bed. Here, you need to consider the types of plants you
plan to grow. For instance, some plants could be deep-rooted but some require only a shallow soil to cover the roots completely.
If you plan to raise different types of plants, it would be a logical decision to select a bed height which works for the plants with the deepest roots. This also accommodates the short-rooted ones.
Here are some pointers to consider when it comes to raised bed height required to grow some of the more popular vegetables, flowers, and herbs:
Plants that grow best on raised beds which are 6″ high
• arugula
• chives
• basil
• cilantro
• dill
• leeks
• lettuce
• marigolds
• mint
• onions
• oregano
• parsley
• radishes
• spinach
• strawberries
• thyme
• other types of annual flowers
Plants that grow best on raised beds which are 12″ high
• beets
• beans
• Brussels sprouts
• broccoli
• cabbage
• carrots
• cauliflower
• cantaloupe
• collards
• garlic
• cucumbers
• kale
• rosemary
• sage
• summer squash
• sweet peas
• Swiss chard
• snapdragons
• turnips
• lavender
• borage
• calendula
• lantana
• cosmos
• nasturtiums
• and everything on the list of the 6″ high raised bed
Plants that grow best on raised beds which are 20″ high
• artichokes
• eggplant
• asparagus
• okra
• peppers
• parsnips
• pineapple
• sage
• sweet potatoes
• watermelon
• tomatoes
• winter squash
• and everything on the list of the 6” and the 12” raised beds
The soil in raised beds eventually breaks down over time. But you don’t have to completely replace the soil in the raised garden to maintain its vibrancy, bounty, and beauty. Just add some soil
revitalizer before planting for the growing season.
How do you measure for topsoil?
As long as you have the required measurements, you can use the soil calculator to determine the correct amount of topsoil needed for your garden. You can also use the following steps as your guide. A
bit of important information: a cubic yard is equal to 27 cubic feet.
• Square feet measurements
Determine the area you plan to cover in square feet. Multiply the length and the width of the area which needs topsoil. The unit of measurement here is feet. If you aren’t able to get a perfect
measurement, just round the value off to the nearest foot.
After getting these dimensions, multiply the length and the width to get the area in square feet.
• Depth measurement
Now that you’ve solved for the area, you have to decide on the depth and this depends on you. Express the depth in inches.
• Conversion
Multiply the value of the area (in square feet) by the depth in inches. Then take this new value and divide it by 324. This gives you the volume in cubic yards (324 = 12 inches per foot × 27 cubic feet per cubic yard).
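A tiny sketch of that conversion follows; the example bed dimensions are made up for illustration.

```python
def topsoil_cubic_yards(length_ft: float, width_ft: float, depth_in: float) -> float:
    """Area in square feet times depth in inches, divided by 324, gives cubic yards
    (324 = 12 inches per foot x 27 cubic feet per cubic yard)."""
    return length_ft * width_ft * depth_in / 324

print(round(topsoil_cubic_yards(30, 18, 3), 2))   # a 30 ft x 18 ft area, 3 in deep -> 5.0 yd^3
```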
Optimization of Division and Reconfiguration Locations of the Medium-Voltage Power Grid Based on Forecasting the Level of Load and Generation from Renewable Energy Sources (2024)
1. Introduction
This article concerns medium voltage power networks, which operate in Poland at a nominal voltage level of 15 kV. Distribution network operators in Poland strive to maintain a radial MV network
layout, mainly due to the limitation of short-circuit current values. However, radial networks do not ensure the appropriate level of flexibility of network operation and continuity of supply to
consumers. The solution to this problem is the construction of the MV network as a closed network (meshed, supplied from two sides) and the introduction of division points that preserve the radial character of
the network.
Physically, a division point is a location on the network equipped with a circuit breaker that remains open during normal network operation. It closes during emergencies to ensure continuity of
supply to customers who have lost power from the primary source.
Various configurations are provided for medium voltage networks, allowing the continuity of electricity supply to be maintained in the event of a failure. The term "configuration of the power network" denotes here a specific arrangement of a given network structure, obtained by changing the state of circuit breakers. In practice, such a change in the network configuration is implemented using network division points. The location of network division points is usually constant, and switching is performed only in the event of a failure, in order to maintain the continuity of electricity supply.
The problem of selecting the optimal configuration of medium voltage networks, particularly the determination of the optimal location for network division points, has been a topic of scientific
research and publications for a long period of time, both in Poland [1,2,3,4,5,6] and worldwide [7,8]. However, to the authors’ knowledge, the solutions described in the literature on the subject
have not been translated into practical applications. The issue of selecting the optimal location of network division points is sometimes marginalized by distribution system operators (DSOs) and
treated as a purely academic problem with no practical significance. Therefore, the network division points remain unchanged in practice. Meanwhile, in addition to its original function, i.e.,
limiting the level of short-circuit power in the medium-voltage network, correctly locating network division points may also reduce power and energy losses, improve the reliability of the network’s
operation, reduce the costs of electricity distribution, and improve network operating parameters such as voltage levels and power quality indicators [9].
Initially, mathematical relationships based on derivatives of functions were used to solve the optimization problem. Currently, various types of optimization algorithms based on heuristics are used.
In recent years, the issue of optimal network configuration has once again become a topic of interest for scientists and distribution system operators. Medium voltage distribution networks are
expanded and modernized every year. They are characterized by numerous branches and often connect to various types of distributed generation sources. Due to the high variability of load profiles and
generation connected to the power grid, the functioning of the power system is non-deterministic. The increasing saturation of renewable energy sources or energy storage in MV networks causes a
change in the nature of their operation. Therefore, distribution networks with built-in functionality enabling frequent configuration changes are much more efficient than networks with layout
unchanged for most of the year.
The justification for undertaking this type of research is also the fact that DSOs, in order to counteract the negative effects resulting from the continuous increase in the number and capacity of
RES in their networks, are gradually increasing financial resources allocated to modernizing their network resources. When planning modernization, they are increasingly willing to install remotely
controlled switching equipment. This has enormous potential for the process of planning network modernization (optimization of network split points) and for automatic network reconfiguration.
This work is the result of a cooperation with one of the Polish DSOs to develop and implement a concept for optimal control of a selected network area using the available hardware and software resources.
2. Literature Review
The issue of configuration changes in medium voltage networks has been studied for a long period of time, with the review of the most recent research output showing that it remains an important and
currently relevant issue in the field of power engineering. Previous research has focused on the main function of network configuration changes, i.e., minimizing power and energy losses [10].
Various optimization methods are used in the process of reconfiguration of the medium voltage distribution networks [11]. Optimization methods can be classified as classical methods, heuristic
methods, and hybrid methods. Classical optimization methods consist of searching for an optimal solution in a given space starting from a selected starting point and searching for the minimum of the
objective function in its vicinity. Heuristic optimization methods are an alternative to the classical optimization methods and allow for solving various types of problems that cannot be solved using
classical methods or when the use of these methods is too time-consuming [12]. Hybrid optimization methods combine the features of classical and heuristic methods to eliminate the unfavorable or
enhance favorable features of classical and heuristic methods. Heuristic optimization methods cope better than classical ones with complex problems of network configuration changes and are thus more
often used. The most used algorithms include the following: particle swarm optimization (PSO) [13,14], genetic algorithm (GA) [15,16], and tabu search algorithm (TS) [17]. It is also worth mentioning
algorithms such as the spinning tree algorithm [18] and single-commodity flow and multi-commodity flow [19], which are also used in the process of optimizing the MV network configuration.
In [20], the grasshopper optimization algorithm (GOA) was used to optimize the network configuration, utilizing the analogy to the natural behavior of a grasshopper. The aim of the research was to
determine the optimal location of switches that can be used to change the network configuration in order to minimize power losses in the system. The simulation results showed that network
reconfiguration reduced power losses in the system by approximately 38%. The conclusions from this study indicate that the grasshopper optimization algorithm achieves better results than other
optimization methods (mentioned in the publication [20]). However, as the simulation studies were carried out on a relatively small, 33-node network model, it remains uncertain whether the algorithm
would also work on a larger network model. Moreover, the study did not take into account renewable energy sources, which have a significant impact on the operation of modern distribution networks.
A multi-criteria particle swarm optimization (PSO) study [21] presents an effective way to improve the operating parameters of a distribution network. The network reconfiguration allowed for a change
in the network division points while maintaining the radial structure of the network and ensuring power supply to all connected loads. The goal was to minimize power losses in the network while
improving the voltage profile of the system. The results proved the effectiveness of the proposed solution with power losses reduced by approximately 30% and a significant improvement in the voltage
profile. For their example, the authors obtained promising results, but the tested network model was also relatively small and did not take RES into account.
Renewable energy sources were included in other studies on changes in the configuration of the medium-voltage power grid [22,23], with [23] discussing the use of a genetic algorithm to optimize the
network configuration in order to improve its efficiency. Simulation studies were carried out on a network model compatible with distributed generation. Improving network efficiency came down to
reducing power losses in the network and improving the voltage profile. Simulation studies using a genetic algorithm showed opportunities to reduce power losses and improve the voltage profile. This
study, however, contains several shortcomings and unjustified generalizations. The paper does not present details on how to model power generated from RES. Moreover, it was found that renewable
energy sources connected to the grid had a positive effect on reducing power losses and improving the voltage profile. This is true only within a limited range of network operation and is a function
of numerous variables, including network configuration, RES saturation level, and load variability profiles.
Another study [24] presented the possibilities of increasing the efficiency of the network by modifying its topology. Improving network efficiency came down to optimizing the configuration using the
BPSO (binary PSO) optimization algorithm to minimize active power losses in the network. Simulation tests showed the possibility of reducing power losses in the network by approximately 34%; an
improvement in the voltage profile was also observed. This approach to increasing network efficiency seems correct and effective. However, it is worth conducting similar research using a larger
network model with a high concentration of RES to confirm the effectiveness of the developed algorithm.
In [25], the problem of reconfiguration of a power grid working with renewable energy sources was raised. The authors rightly noticed the uncertainties related to generating power from RES,
forecasting them based on the probability distribution. The disadvantages of the proposed solution include the fact that typical probability distributions are not always reflected in reality.
Moreover, neither historical data nor current weather data were fully utilized in the study.
The research described in [26] presented a methodology for reconfiguring a distribution network operating with distributed generation with the aim of minimizing energy losses. The study was carried
out on a small network model using a dragonfly optimization algorithm. A daily simulation was performed, and the daily energy losses were effectively reduced. The approach used to reconfigure the
network and reduce daily losses was successful. However, the tests were carried out on a small network model. Confirmation of the results on a larger part of a real distribution network was not carried out.
The paper [27] presents the process of distribution network reconfiguration and reactive power compensation using the hybrid simulated annealing—minimum spanning tree algorithm. The study considered
node load variations according to a Gaussian distribution and wind farm generation variations according to a Weibull distribution. Daily load curves for weekday and weekend days and the presence of
solar panels were also taken into account. The study showed that the hybrid method gave significantly better results than the simulated annealing method. The results of the study confirm the
effectiveness of the developed approach, but again it was only carried out on a small grid model.
Authors of [28] addressed, among other things, the issue of reconfiguring the power grid and forecasting the level of power generation from RES. The authors correctly perceived and described the
problem of uncertainty in solar and wind energy generation. The research was based on randomly generated scenarios. However, the method for forecasting power from renewable energy sources is not
specified or clearly described.
The paper [29] presents a network configuration optimization process aimed at minimizing active power losses using an improved radiality maintenance algorithm (IRMA). The authors of the paper highlight
the high performance and efficiency of the algorithms developed, but the research is carried out on very small network models.
The paper [30] presents a process for optimizing the performance of a distribution network cooperating with distributed generation sources. A multi-criteria optimization using the horned lizard
optimization algorithm (HLOA) is proposed. The developed approach is interesting because it uses Monte Carlo simulation (MCS) based on probability density functions (PDF) together with a scenario
reduction algorithm (SRA). Furthermore, the research incorporates a probabilistic approach to predict, among other things, the power generated by photovoltaic sources. They make use of well-known
mathematical models that define the probability distributions of random phenomena. The results obtained are promising, but again, the research has only been carried out on small test grid models.
None of the medium-voltage network configuration optimization methods described above used current, online weather data correlated with the location of the RES sources to predict the generation level
of the RES sources. A summary of the literature review and the algorithms used in specific studies on the problems and optimization of MV network configuration is presented in Table 1.
The analysis of the subject literature prompted the authors to develop their own algorithm optimizing the network reconfiguration process, taking into account load forecasting and generated power
based on current and historical load and generation data and current data from weather services. The authors did not find a description of similar studies in the literature. In their considerations,
the authors omitted the financial aspect resulting from the need to install and increase the wear of switches. The initial assumption was to use the developed methodology to optimize the number and
location of switches enabling remote network reconfiguration. The solution developed as a result of this study can be used, in addition to the aforementioned functionality, to optimize the structure
of the power grid.
This article presents an original approach to the process of optimizing the operation of medium voltage networks using heuristic optimization methods and a statistical and probabilistic approach.
Algorithms for forecasting the load of transformer stations and generation from renewable energy sources were developed. The algorithm for forecasting the load of transformer stations used historical
measurement data and determined the probability of a specific level of received power using the Monte Carlo method. The algorithm for forecasting generation from RES (photovoltaic and wind) used
historical measurement data and current weather data obtained from weather API interfaces (Solcast API and OpenWeatherMap API). The use of heuristic optimization methods combined with a statistical
and probabilistic approach and the use of current weather data to forecast the power generated from RES present an innovative approach to the process of optimizing the operation of medium voltage networks.
3. Methodology
The process of optimizing the operation of the medium voltage network was carried out using heuristic optimization methods. Heuristic optimization methods are an alternative to classical methods of
solving optimization problems, enabling solving various types of problems that cannot be solved using classical methods or when the use of those methods is too time-consuming or labor-intensive.
Heuristic optimization methods are currently used more frequently than classical methods due to the high complexity of problems occurring in the field of power engineering. They are used to solve
problems of optimal energy flow, minimize various types of costs, and solve problems of shipping economy and multilateral systems [31,32,33].
The cuckoo search algorithm [34] was selected due to its relatively easy implementation and high effectiveness in solving problems in the field of power engineering. The cuckoo search algorithm
mimics the aggressive reproductive strategy used by cuckoos, which involves laying eggs in other birds’ nests. In order to increase the efficiency of exploring the solution space, the cuckoo search
algorithm was extended using a jumping mechanism based on the Lévy distribution. The Lévy distribution is a continuous, heavy-tailed probability distribution for non-negative random variables. A random walk whose step lengths are drawn from the Lévy distribution is called a "Lévy flight."
The cuckoo algorithm is based on three idealized principles [34]:
• Each cuckoo lays one egg and drops it into a randomly selected nest;
• The best nests with high egg quality are passed on to the next generation;
• The number of available nests is constant, and an egg dropped by a cuckoo is detected with a certain probability.
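As a compact illustration of these mechanics, the sketch below runs a cuckoo search with Lévy-flight steps (generated via Mantegna's algorithm) on a toy continuous objective. It is only a sketch under assumed parameter values: in the study the objective would be the network power loss evaluated through power-flow calculations, and the decision variables would be the discrete states of the division-point switches rather than the continuous vector used here.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(beta: float, size: int) -> np.ndarray:
    """Draw a heavy-tailed Lévy-flight step using Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, size)
    v = np.random.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(objective, dim, n_nests=15, pa=0.25, iters=200, lb=-5.0, ub=5.0):
    nests = np.random.uniform(lb, ub, (n_nests, dim))
    fitness = np.array([objective(x) for x in nests])
    best = nests[fitness.argmin()].copy()
    for _ in range(iters):
        # 1) Each cuckoo lays an egg (candidate) via a Lévy flight and drops it into a random nest.
        for i in range(n_nests):
            candidate = np.clip(nests[i] + 0.01 * levy_step(1.5, dim) * (nests[i] - best), lb, ub)
            j = np.random.randint(n_nests)
            if objective(candidate) < fitness[j]:
                nests[j], fitness[j] = candidate, objective(candidate)
        # 2) A fraction pa of the worst nests is abandoned and rebuilt at random positions.
        n_worst = max(1, int(pa * n_nests))
        worst = fitness.argsort()[-n_worst:]
        nests[worst] = np.random.uniform(lb, ub, (n_worst, dim))
        fitness[worst] = [objective(x) for x in nests[worst]]
        best = nests[fitness.argmin()].copy()
    return best, fitness.min()

best, val = cuckoo_search(lambda x: np.sum(x ** 2), dim=5)  # toy objective; a loss function would go here
print(best, val)
```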
Optimization studies were conducted on a network model reflecting a real fragment of the national MV power grid, a part of the energy region of one of the Polish DSOs. The network model was selected
due to the desire to conduct research using a real fragment of the medium voltage distribution network typical of solutions used in the Polish National Power System. In addition, it allowed for the
use of real historical data obtained courtesy of the distribution network operator. This approach allowed for obtaining verifiable research results and facilitated the validation of the effectiveness
of the developed approach for the network operation optimization process. The basic data of the modeled network are presented in Table 2.
The network diagram adopted for the research is shown in Figure 1.
The research procedure was divided into two parts. In the first part of the research procedure, a model of the medium-voltage power grid was prepared, and calculations of power flows were conducted,
as well as the preliminary optimization of the network configuration, the main goal of which was to determine the optimal places for dividing the network. The optimized power grid model was used in
the second part of the research procedure, where simulation tests were carried out in which changes were made to the network operating system in response to changes in its operating conditions. The
simulation studies took into account the variability of demand and generation using the historical measurement data and current weather data.
The course of the first part of the research procedure can be presented in the form of the following algorithm:
• Preparation of a power grid model;
• Power flow calculations for the base model;
• Update with the data on power generated from RES in the network model, which was determined for each source based on historical data;
• Update with the data on power demand in the network model, which was determined for each MV/LV transformer based on historical data;
• Initialization of the optimization procedure;
• Determining the optimal solution.
The second part of the research procedure run according to the following algorithm:
• Loading the power grid model obtained in the first part of the research procedure;
• Performing power flow calculations;
• Initialization of the optimization procedure;
• Determining the optimal solution;
• Comparison of the results obtained in the research procedure with the results obtained from power flow calculations.
The following assumptions were made for the optimization studies:
• Varying load levels in the network;
• Varying level of power generated from RES;
• Photovoltaic and wind sources are connected to the grid;
• Possibility of implementing network division in all sections;
• The optimization process was carried out taking into account the variability of the load and power generated from RES;
• The power demand forecast was determined based on historical measurement data;
• The forecast of power generated from RES was determined based on historical data and current weather data;
• The set of acceptable solutions included solutions that met the following criteria: maintaining the radial system of the network, maintaining voltages within the required range, and lack of
network overload.
The location of medium-voltage network division points, treated as an optimization problem, becomes more complicated in the case of extensive networks composed of many major power supply points and
cooperating with RES connected at different nodes. The randomness and unpredictability of generation, as well as the variable load, further complicate system analysis. Certainly, a radically higher
number of operating states may be considered; however, this multiplication will not guarantee reaching the optimal point. Nevertheless, full optimization of a given operating state of the power
system occurs only when all operating conditions of the transmission network and the related limitations are taken into account [35,36,37]. By introducing the symbols of the three vectors as follows:
• state x—containing node voltage magnitudes and their arguments;
• forcing f—containing the powers received at the nodes;
• control c—containing the power generated in the nodes.
The optimization task can be written in a general form:
F_obj(x, f, c) = F_obj(z) → min,
subject to the equality and inequality constraints of the power-flow problem.
The above issue is classified as an OPF (optimal power flow) task. In order to determine the optimal division points, the total active power losses are taken as the objective function.
The detailed form of the equality and inequality constraints results from the formulation of the classic power-flow problem. The following limitations are considered in this work:
• For the elements of the control vector, i.e., the active and reactive powers generated at node j (j = 1…G), where G is the number of generators in the network;
• Resulting from the permissible current-carrying capacity of the lines (k, l = 1…N), where N is the number of network nodes;
• Resulting from the permissible voltage values in network nodes (i = 1…N), where N is the number of network nodes;
• Resulting from the balance of active and reactive power generated and consumed.
Balancing equations must be satisfied for each network node (i = 1…N), where N is the number of network nodes: at every node, the active and reactive power generated, consumed, and exchanged with the rest of the network must sum to zero.
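The constraint formulas corresponding to the list above can be written, assuming conventional OPF notation (generated powers P_gj, Q_gj; branch currents I_kl; node voltage magnitudes U_i; nodal injections P_i(x), Q_i(x) computed from the state vector x), in the standard form sketched below; this is a generic reconstruction rather than the authors' exact equations.

```latex
% Generic sketch of the constraint set, assuming conventional OPF notation.
\begin{align*}
  P_{gj}^{\min} &\le P_{gj} \le P_{gj}^{\max}, &
  Q_{gj}^{\min} &\le Q_{gj} \le Q_{gj}^{\max}, & j &= 1,\dots,G \\
  |I_{kl}| &\le I_{kl}^{\max}, & & & k,l &= 1,\dots,N \\
  U_{i}^{\min} &\le U_{i} \le U_{i}^{\max}, & & & i &= 1,\dots,N \\
  P_{gi} - P_{li} - P_{i}(\mathbf{x}) &= 0, &
  Q_{gi} - Q_{li} - Q_{i}(\mathbf{x}) &= 0, & i &= 1,\dots,N
\end{align*}
```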
OPF and SCOPF (security constrained optimal power flow) tasks are relatively difficult to solve using methods similar to classical ones. Despite the simple form of the objective function (power
loss), the need to take into account the above-mentioned limitations, which are the result of power flow calculations, poses a considerable problem. The situation becomes even more complicated when
the calculations diverge. Additionally, the discrete nature of the decision variables (a finite set of possible division points) makes the analysis even more difficult.
The developed approach to the process of optimizing the configuration of medium voltage networks is multi-platform. Figure 2 shows the solution architecture diagram along with its components and
interconnections. This solution architecture includes the following components:
• PowerWorld Simulator (version 23)—software for simulating the operation of the power system, which enables visualization, simulation, and analysis of the operation of the power system, which is
based on the calculation of power flows in the system;
• Simulator Automation Server—an add-on to the PowerWorld Simulator software, which allows for extending its functionality by running and controlling the PowerWorld Simulator from an external
• OpenWeatherMap API—online service that provides access to global weather data via API as well as access to current weather data;
• Solcast API—online service that provides current and forecast data on solar radiation and photovoltaic energy worldwide;
• Weather API—a web application that is an adapter between the OpenWeatherMap API and Solcast API services and the MATLAB environment;
• MATLAB—a programming environment for numerical calculations in which the calculations for the research procedure were carried out.
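As a rough illustration of the weather-adapter idea, the sketch below wraps the OpenWeatherMap current-weather endpoint behind a plain function that the optimization code could call; the function name and returned fields are stand-ins for the "Weather API" component, not the authors' implementation, and the Solcast part of the adapter is omitted here.

```python
import requests

OWM_URL = "https://api.openweathermap.org/data/2.5/weather"  # OpenWeatherMap current-weather endpoint

def current_wind_and_temperature(lat: float, lon: float, api_key: str) -> dict:
    """Fetch the current wind speed and temperature for a RES location."""
    resp = requests.get(
        OWM_URL,
        params={"lat": lat, "lon": lon, "appid": api_key, "units": "metric"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return {
        "wind_speed_ms": data["wind"]["speed"],   # m/s when units=metric
        "temperature_c": data["main"]["temp"],    # degrees Celsius when units=metric
    }
```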
Medium voltage power networks are characterized by significant load variability over time [38]. Various estimation methods are used to determine the load on transformer stations [39,40,41]. The
developed algorithm used historical measurement data for a period of one year to determine the active power demand forecast. Demand registrations were made with a 15 min resolution. Historical
measurement data were organized and subjected to statistical analysis. Incorrect measurements and measurements outside the pool of acceptable states have been omitted. It was also necessary to scale
the values, taking into account the rated power of the transformer. An algorithm was developed, which, based on the prepared statistical data, determines the probability of a specific load value
occurring at a given transformer station.
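A minimal sketch of this load-forecasting step is given below: one year of 15-min readings for a station is filtered, the readings recorded at the requested hour form an empirical distribution, and Monte Carlo draws from that distribution yield the forecast. The column names, the filtering rule and the choice of the mean as the forecast statistic are assumptions made for this illustration.

```python
import numpy as np
import pandas as pd

def load_forecast(history: pd.DataFrame, hour: int, n_draws: int = 1000) -> float:
    """Empirical-distribution load forecast for one transformer station.

    `history` is assumed to hold one year of 15-min readings with columns
    ['timestamp', 'p_kw'] (timestamp parsed as datetime)."""
    df = history.dropna(subset=["p_kw"])
    df = df[df["p_kw"] >= 0]                                   # drop measurements outside acceptable states
    sample = df[df["timestamp"].dt.hour == hour]["p_kw"].to_numpy()
    draws = np.random.choice(sample, size=n_draws, replace=True)  # Monte Carlo sampling
    return float(draws.mean())
```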
The level of power generated from photovoltaic sources depends on many factors, mainly solar radiation intensity, ambient temperature, and wind speed. Solar insolation is the most useful value for
estimating the power generated from photovoltaic sources. The developed approach to determine the generation forecast from PV sources used measurement data for a period of one year and current data
on the intensity of solar radiation obtained from the APIs [42]. An algorithm has been developed that, based on historical data and current meteorological data obtained from the APIs, determines the
probability of occurrence of solar radiation intensity of a specific value. The algorithm is based on the NOCT (normal operating cell temperature) standard for determining the maximum power generated
from a photovoltaic source, in which the maximum power of the photovoltaic source is achieved at a solar radiation intensity of 800 W/m². The algorithm ignores the ambient temperature and wind speed
due to the negligible effect of these parameters on the generated power.
The algorithm used to forecast power generated from PV sources includes the following elements:
• Preparation and loading of input data consisting of historical data from an Excel spreadsheet (xlsx file) containing actual measurement results;
• Downloading current weather data from the Solcast API;
• Determination of the space of acceptable solutions based on historical data;
• Correction of the space of acceptable solutions after taking into account weather data obtained from the Solcast API;
• Drawing of probability values using the Monte Carlo method;
• Determination of the generated power of individual sources.
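The sketch below illustrates the PV-forecast idea: historical irradiance samples are corrected towards the current value reported by the weather service, the power is scaled against the 800 W/m² reference used above, and the Monte Carlo mean is returned. The 50/50 blending weight, the variable names and the stand-in historical data are assumptions for this illustration, not details taken from the paper.

```python
import numpy as np

def pv_power_forecast(p_rated_kw: float, ghi_history: np.ndarray,
                      ghi_now: float, n_draws: int = 1000) -> float:
    """Monte Carlo PV forecast from historical irradiance corrected by current weather data."""
    draws = np.random.choice(ghi_history, size=n_draws, replace=True)
    corrected = 0.5 * draws + 0.5 * ghi_now                # correction using current weather data
    power = p_rated_kw * np.clip(corrected / 800.0, 0.0, 1.0)  # 800 W/m^2 reference irradiance
    return float(power.mean())

history = np.clip(np.random.normal(450, 180, 4000), 0, 1000)  # stand-in for one year of measurements
print(pv_power_forecast(p_rated_kw=500, ghi_history=history, ghi_now=620))
```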
The level of power generated from wind sources also depends on a number of different factors. The wind speed, apart from the structural dimensions of a wind turbine, is a decisive factor [43]. The
developed approach to determine the generation forecast from wind power sources used historical measurement data for a period of one year and current weather data obtained from the APIs [44]. An
algorithm has been developed that, based on historical data and current meteorological data obtained from the APIs, determines the probability of generating power of a specific value. The developed
algorithm for forecasting power generated from wind sources is based on the following assumptions:
• The power of the wind power source is determined based on wind speed, and other factors have been omitted due to their much smaller impact on the generated power;
• The maximum power generated from the wind power source occurs at a wind speed of 15 m/s;
• The wind speed obtained from the OpenWeatherMap API is used as a correction factor in forecasting the power of the wind power source.
The algorithm used to forecast the power generated from wind sources consists of the following steps:
• Preparation and loading of input data from an xlsx file (historical measurement data) from measurements;
• Downloading current weather data from OpenWeatherMap API;
• Determination of the space of acceptable solutions based on historical data;
• Correction of the space of acceptable solutions after taking into account data obtained from OpenWeatherMap API;
• Drawing of probability values using the Monte Carlo method;
• Determination of the power generated by individual sources.
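A simplified wind-power mapping consistent with the assumptions above is sketched below: the rated power is reached at the 15 m/s speed stated in the paper, while the cut-in and cut-out speeds and the cubic shape of the curve are generic assumptions added for this illustration.

```python
def wind_power(p_rated_kw: float, wind_speed_ms: float,
               cut_in: float = 3.0, rated_speed: float = 15.0, cut_out: float = 25.0) -> float:
    """Simplified turbine power curve: zero below cut-in, cubic growth up to the rated
    speed, rated power up to cut-out, zero above cut-out."""
    if wind_speed_ms < cut_in or wind_speed_ms >= cut_out:
        return 0.0
    if wind_speed_ms >= rated_speed:
        return p_rated_kw
    return p_rated_kw * ((wind_speed_ms - cut_in) / (rated_speed - cut_in)) ** 3

# e.g. wind_speed_ms taken from the OpenWeatherMap current-weather response
print(wind_power(p_rated_kw=800, wind_speed_ms=9.5))
```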
The block diagram of the developed approach to the process of optimizing network configuration using proprietary algorithms for forecasting load and generation from renewable energy sources is shown
in Figure 3.
4. Optimization of the Operation of the MV Network
Simulation tests were carried out on the prepared network model, which included the following:
• determining the basic network configuration and calculating power flows;
• initial optimization of the network configuration;
• 24 h reconfiguration of the network in response to changes in load and generation levels from renewable energy sources.
The medium voltage network model is built in a closed bus system with 12 permanent network division points. The number of split points corresponds to the actual number of remotely controlled switches
installed in the modeled network area. The number of split points was not optimized. Only the places where the network was divided were subject to optimization.
The developed algorithm for forecasting generation from renewable sources was first used to prepare a network model. For each network model, a basic state was determined by forecasting the load at
the start of the simulation. In its basic configuration, the network model works with all installed RES. For the network configuration prepared in this way, power flow calculations were made, and the
basic network operating parameters were determined. Selected network operating parameters are shown in Table 3.
The initial optimization of the network configuration was then carried out, the main goal of which was to reduce power losses in the network. Optimization tests were carried out in the MATLAB
environment using the cuckoo search optimization algorithm. Selected network operating parameters after optimization are presented in Table 4.
After the initial optimization, a new network configuration was determined. The location of the optimal network division points turned out to be different from the one currently present in the tested
network (as determined by the DSO). The initial optimization of the network configuration allowed for reducing power losses in the network by approximately 25%, and the voltage profile improved as well.
Network operating conditions over 24 h were also simulated using the network model for the following variants:
• Case 1—a network without renewable energy sources;
• Case 2—a network working with wind generation;
• Case 3—a network working with photovoltaic generation;
• Case 4—a network working with wind and photovoltaic generation.
Proprietary load forecasting algorithms and generation forecasting algorithms from RES were used for simulation studies. The load forecast for individual transformer stations was completed using a
load forecasting algorithm based on historical data. The forecast of generation from photovoltaic sources was completed using an algorithm for forecasting power from photovoltaic sources based on
historical data and current weather data obtained from the SOLCAST API. The forecast of generation from wind sources was made using an algorithm for forecasting power from wind sources based on
historical data and current weather data obtained from the OpenWeatherMap API.
For the network fragment under consideration, after consultation with the operator, it was assumed that the benefits of reducing power losses become comparable with the costs of reconfiguring the network (by changing the network division points) once the achievable loss reduction exceeds 20%; the network was therefore reconfigured only at those times. A different value could be assumed for the purpose of simulation, and it would affect the simulation results, i.e., the number of switching operations and the locations of the network division points during the day. A list of the selected parameters of the 24 h simulation is presented in Table 5.
The simulation was performed in a Windows 10 x 64 environment on a PC with an i5 class processor. The duration of a simulation (one iteration run every hour) on this class of computer was 7 min 36 s.
The time given may indicate that iterations could be performed more frequently, e.g., every 15 min. While this is technically possible, the desirability of reducing the iteration time is debatable.
Figure 4 shows the variation in the state of all circuit breakers at different hours of the day. The white color indicates no change in the state of a circuit breaker in a given hour compared to the
previous hour, the red color indicates a circuit breaker closing, and the green color indicates a circuit breaker opening. The numbers in the switch labels correspond to the node numbers on the
network model diagram (Figure 1 and the high-resolution jpg drawing attached to the article).
The initial operating point (0:00) is identical for all cases considered and takes into account the optimization performed in the first step described above. The apparent change in the state of the
circuit breakers means that the optimization carried out showed the need to reconfigure the network with respect to the current network layout used by the operator (the research was carried out on a
real network model). In the evening hours (21:00–5:00), due to the low variability of generation and load, the algorithm did not identify the need to reconfigure the network. Comparing all cases, the
significant impact of RES generation on the need to reconfigure the modeled network should be noted. Except for a certain group of circuit breakers that do not participate in the reconfiguration of
the network, the state, number, and configuration of the circuit breakers do not repeat during the day. The above shows the importance of considering the different types of RES and their interaction.
The algorithm developed by the authors can be used not only to optimize the operation of the network but also to optimize the location of the division points. For the case under consideration, if
limited to the day analyzed, one would have to conclude that circuit breakers not involved in the network reconfiguration are unnecessary. Obviously, such an analysis should be carried out at the
stage of the decision to modernize the network with the installation of circuit breakers and after simulation studies lasting at least one year. Nevertheless, the functionality of the developed tool
goes beyond the current optimization of network operation.
Table 6 presents a list of selected network operating parameters for a 24 h simulation without renewable energy sources during the hours when, according to the assumptions, network reconfiguration is recommended.
The first simulation of the 24 h network operating conditions consists of a variant in which the power grid does not work with RES. The network configuration was optimized every hour of the day, and
opportunities to reduce power losses were presented. In this variant of network operation, the reconfiguration allowed for a reduction in power losses ranging from 9% to 30%. For the analyzed day,
and for the adopted assumptions, the required number of reconfigurations of the modeled network system was six per day. The proposed method of network reconfiguration made it possible to reduce power
losses by 1.85 MWh per day.
Table 7 presents a list of selected network operating parameters for a 24 h simulation with wind generation at hours when, according to the assumptions, network reconfiguration is recommended.
The second simulation of 24 h network operating conditions consists of a variant in which the power grid works with wind generation sources. In this variant of network operation, reconfiguration made
it possible to reduce power losses ranging from 5% to 31%. For this configuration of network operation, in accordance with the adopted assumptions, it is recommended to switch the network division
points five times a day. Reconfiguration of the network over a 24 h period made it possible to reduce power losses by 1.75 MWh per day.
Table 8 presents a list of selected network operating parameters for a 24 h simulation with photovoltaic generation during the hours when, according to the assumptions, network reconfiguration is recommended.
The third simulation of 24 h network operating conditions consists of a variant in which the power grid works with photovoltaic generation sources. In this variant of network operation,
reconfiguration made it possible to reduce power losses ranging from 8% to 31%. For this configuration of network operation, in accordance with the adopted assumptions, it is recommended to switch
the network division points four times a day. Reconfiguration of the network over a 24 h period allowed for reducing power losses by 1.55 MWh per day.
Table 9 presents a list of selected network operating parameters for a 24 h simulation with wind and photovoltaic generation during the hours when, according to the assumptions, network
reconfiguration is recommended.
The last simulation of 24 h network operating conditions consists of a variant in which the power grid works with wind and photovoltaic generation sources. In this variant of network operation, the
reconfiguration allowed for a reduction in power losses ranging from 4% to 34%. For this configuration of network operation, in accordance with the adopted assumptions, it is recommended to switch
the network division points three times a day. Reconfiguration of the network over a 24 h period allowed for reducing power losses by 1.69 MWh per day.
5. Results and Discussion
An MV network model was prepared to verify the operation of the developed research procedures. The correct operation of the medium-voltage network optimization process and the developed algorithms
for forecasting load and generation from renewable energy sources were verified. A basic configuration was determined for the network model, for which power flows were calculated, and the basic
network operating parameters were determined. The initial optimization of the network configuration was carried out using the network model prepared in this way. Optimization tests demonstrated that
the basic network configuration was not optimal, and it was possible to reduce power losses by changing the location of network division points. The initial optimization of the network configuration
made it possible to reduce power losses by approximately 25%. The network model in the new configuration was verified in a 24 h optimization process.
In the second part of the research procedure, a process of cyclical network reconfiguration was carried out for various variants of network operation in order to test the developed algorithms. The
research was carried out in a 24 h cycle for four selected network operation variants in accordance with the adopted assumptions.
First, a simulation of 24 h network operating conditions was performed for the variant without RES. An optimization procedure was launched every hour of the day, in which the optimal network
configuration was determined, and its parameters were controlled. Optimization in the 24 h cycle made it possible to reduce the loss of active power in the network in the range of 9% to 30%. The 24 h
reconfiguration made it possible to reduce power losses by 1.85 MWh per day. Assuming the average sales price of electricity on the competitive market calculated by the Energy Regulatory Office in
2023 at the level of EUR 177.27 per MWh, the gain amounts to EUR 327.94 in savings related to power losses.
Then, a 24 h simulation was performed for a network with wind generation. Optimization over a 24 h cycle made it possible to reduce the loss of active power in the network in the range of 5% to 31%.
The 24 h reconfiguration made it possible to reduce power losses by 1.75 MWh per day. Assuming the average sales price of electricity on the competitive market calculated by the Energy Regulatory
Office in 2023 at the level of EUR 177.27 per MWh, the gain amounts to EUR 310.22 in savings related to power losses.
Another simulation was performed for a network with photovoltaic generation. Optimization in the 24 h cycle made it possible to reduce the loss of active power in the network in the range of 8% to
31%. The 24 h reconfiguration made it possible to reduce power losses by 1.55 MWh per day. Assuming the average sales price of electricity on the competitive market calculated by the Energy
Regulatory Office in 2023 at the level of EUR 177.27 per MWh, the gain amounts to EUR 274.76 in savings related to power losses.
The last simulation was performed for a network working with wind and photovoltaic generation. Optimization in the 24 h cycle made it possible to reduce the loss of active power in the network in the
range of 4% to 34%. Reconfiguration of the network over a 24 h period made it possible to reduce power losses by 1.69 MWh per day. Assuming the average sales price of electricity on the competitive
market calculated by the Energy Regulatory Office in 2023 at the level of EUR 177.27 per MWh, the gain amounts to EUR 299.75 in savings related to power losses.
By making certain simplifications, it is possible to estimate the financial benefits resulting from the use of the medium voltage distribution network reconfiguration algorithm presented in the
article. The previous paragraphs contain estimated savings resulting from reducing power losses based on 24 h simulation. Assuming that not every day will be the same, and taking into account the
fact that this approximate analysis has already shown the possibility of saving amounts ranging from EUR 150 to as much as EUR 300 per day, assuming an average amount of savings of EUR 200 per day, approximately EUR
73,000 can be achieved in annual savings.
Cost accounting should also include the costs associated with the greater wear of the circuit breakers and the associated higher operating costs. The cost of one medium voltage circuit breaker is
approximately EUR 3500. Assuming that the circuit breaker should be replaced after 10,000 switch operations and taking into account the fact that each of them will potentially be able to perform
three switching operations per day related to the discussed algorithm, the circuit breaker will perform approximately 1100 operations per year. This means that the circuit breaker should be replaced after
approximately 10 years of operation, making the balance of benefits and costs rather favorable. This is an estimated value for one switch. The authors are currently working on a comprehensive
cost-benefit analysis, and the results of this analysis will be published in the next article.
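The back-of-the-envelope figures quoted above can be reproduced directly; the EUR 200 average daily saving and the three switching operations per day are the assumptions stated in the text.

```python
avg_daily_saving_eur = 200                      # assumed average daily saving from loss reduction
annual_saving_eur = avg_daily_saving_eur * 365  # -> 73,000 EUR per year
ops_per_year = 3 * 365                          # three switching operations per day -> 1,095 per year
breaker_life_years = 10_000 / ops_per_year      # -> about 9.1 years to reach 10,000 operations
print(annual_saving_eur, ops_per_year, round(breaker_life_years, 1))
```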
However, regardless of whether the balance is more or less favorable, it is worth paying attention to the benefits resulting from significantly increasing the flexibility of the medium-voltage
network operation and consequently improving the quality of control.
Research conducted by the authors confirmed that a one-time optimization of the network configuration is not sufficient to ensure the optimal operation of the power system. For the power system to
function optimally, it requires ongoing monitoring and control. The research results confirmed that network reconfiguration including several changes in the location of network division points allows
for reducing power losses while maintaining the required voltage levels and other network parameters.
6. Conclusions
The article discusses issues related to the optimization of the operation of medium voltage networks, focusing on the optimal network configuration and reducing power and energy losses. The article
provides a critical review of selected studies from recent years on the issue of network configuration optimization. The analysis of the solutions proposed in the publications prompted the authors to
develop a new approach to the process of reconfiguring a medium-voltage power grid. The algorithms presented in the article are based on statistical and probabilistic approaches and also use current
data obtained from the weather API. A research procedure was developed and verified through simulation tests on a medium voltage power grid model. Simulation tests were carried out for four variants
of network operation in order to check the correct operation of the developed approach to the network configuration optimization process. The simulations confirmed the effectiveness of the approach
used and the developed algorithms. In addition to the efficiency of the developed solution, it was shown that considering the operation of different RES sources significantly affects the optimization
of the network configuration.
The research results show that even a relatively small increase in the frequency of network reconfiguration leads to an improvement in the quality of operation of the medium-voltage network and, in
particular, to a reduction in power losses and, consequently, to a reduction in the costs associated with the distribution of electricity.
Given the results obtained, it can be concluded that the proposed solution provides considerable possibilities of practical applications. It can be used to optimize the location of network division
points (reducing economic costs related to the modernization of network infrastructure) as well as to optimize the reconfiguration process in order to reduce power losses while keeping voltages within the required limits.
The authors also see a further possibility of using the discussed algorithm. Running it for a network model before modernization, assuming that the division points can be located anywhere in the network, will indicate the places in the distribution network where the installation of circuit breakers, e.g., remotely controlled ones, should be a priority, in view of the potential benefits of the MV distribution network reconfiguration process.
Supplementary Materials
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/en17194933/s1, Figure S1: The network model adopted for research.
Author Contributions
Conceptualization, P.M. and R.M.; methodology, P.M. and K.S.; software, K.S.; validation, P.M.; formal analysis, P.M., R.M. and M.I.; investigation, K.S.; resources, K.S.; data curation, K.S. and
P.M.; writing—original draft preparation, P.M., R.M., M.I. and K.S.; writing—review and editing, P.M., R.M. and M.I.; visualization, K.S.; supervision, P.M.; project administration, P.M.; funding
acquisition, P.M. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Data are contained within the article.
Conflicts of Interest
The authors declare no conflicts of interest.
Figure 1. The network model adopted for research (network diagram in higher resolution available in a jpg file—supplementary file; Figure S1).
Figure 2. Solution architecture diagram.
Figure 3. Flow chart of the network configuration optimization process.
Figure 4. Changes in the state of circuit breakers in the analyzed network during the simulation studies: (a) Case 1—a network without renewable energy sources; (b) Case 2—a network working with wind generation; (c) Case 3—a network working with photovoltaic generation; (d) Case 4—a network working with wind and photovoltaic generation.
Table 1. Summary of the literature review.
Ref. | Year | Optimization Method | Real Network | Includes RES | Load Forecast | Forecast RES | Online *
[3] 2014 Evolutionary algorithm NO NO YES NO NO
[4] 2015 Genetic algorithm YES NO YES NO NO
[5] 2017 Particle swarm optimization YES NO YES NO NO
[9] 2018 Cuckoo search NO YES YES YES NO
[20] 2019 Particle swarm optimization NO NO YES NO NO
[21] 2021 Genetic algorithm NO YES YES YES NO
[22] 2021 Particle swarm optimization NO YES YES YES NO
[23] 2021 Genetic algorithm NO YES YES YES NO
[24] 2022 Binary particle swarm optimization NO YES YES YES NO
[25] 2023 Spanning tree algorithm NO YES YES YES NO
[26] 2023 Dragonfly optimization algorithm NO YES YES YES NO
[27] 2023 Simulated annealing—minimum spanning tree algorithm hybrid NO YES YES YES NO
[28] 2024 Single commodity flow method NO YES YES YES NO
[29] 2024 Radiality maintenance algorithm NO YES YES YES NO
[30] 2024 Horned lizard optimization algorithm NO YES YES YES NO
This study 2024 Cuckoo search YES YES YES YES YES
* Online—whether the algorithm uses external data from the Internet (e.g., online services that access current weather data).
Table 2. Selected parameters of the test network in the base configuration.
Network Element Value
Power stations 110/15 kV 4
MV nodes 783
Loads Power range: 10–630 kW 732
PV and FW Power range: 200 kW–1 MW 48
Table 3. Selected network operating parameters in the basic configuration.
Parameter Name Value Unit
Active power losses 1.21 MW
Load 63.5 MW
Power generated 64.71 MW
Voltage—min 1.02 pu
Voltage—max 1.10 pu
Table 4. Selected network operating parameters after initial configuration optimization.
Parameter Name Value Unit
Active power losses 0.91 MW
Load 63.5 MW
Power generated 64.41 MW
Voltage—min 1.05 pu
Voltage—max 1.10 pu
Table 5. Selected simulation parameters.
Parameter Name Value
Optimization algorithm CuckooSearch
Number of iterations 200
Number of cuckoos 5
Start time 00:00
End time 23:00
Simulation step 1 h
Table 6. List of selected network operating parameters for the first simulation variant.
Time | Load Level [MW] | Power Loss Level [MW] | Power Loss Difference [%]
00:00 30.21 0.265 23.45
07:00 43.35 0.531 20.90
11:00 56.30 0.952 22.60
15:00 50.56 0.725 21.50
19:00 37.65 0.403 23.07
21:00 30.25 0.253 30.06
Table 7. List of selected network operating parameters for the second simulation variant.
Time | Load Level [MW] | Power Loss Level [MW] | Power Loss Difference [%]
00:00 34.50 0.249 24.50
08:00 36.54 0.280 20.55
12:00 62.48 1.017 31.20
16:00 40.14 0.355 21.80
21:00 32.12 0.213 23.70
Table 8. List of selected network operating parameters for the third simulation variant.
Time | Load Level [MW] | Power Loss Level [MW] | Power Loss Difference [%]
05:00 30.14 0.210 22.50
09:00 40.92 0.401 27.40
15:00 42.69 0.430 31.20
19:00 32.34 0.294 20.50
Table 9. List of selected network operating parameters for the fourth simulation variant.
Time | Load Level [MW] | Power Loss Level [MW] | Power Loss Difference [%]
00:00 24.48 0.259 34.74
09:00 45.73 0.651 20.55
19:00 29.20 0.465 23.07
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https: | {"url":"https://senderoislam.net/article/optimization-of-division-and-reconfiguration-locations-of-the-medium-voltage-power-grid-based-on-forecasting-the-level-of-load-and-generation-from-renewable-energy-sources","timestamp":"2024-11-12T22:10:36Z","content_type":"text/html","content_length":"171056","record_id":"<urn:uuid:4b2173b1-0d0d-4245-869c-d6c90d1f8e73>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00275.warc.gz"} |
seminars - A Historical Survey of Zeta Functions and L-Functions X
※ Time: 2:30–3:30 / 3:45–4:45
※ Zoom meeting ID: 220 305 8101
In this lecture series, I will give a historical overview of zeta functions and L-functions following the footsteps of Euler, Dirichlet, Riemann, Dedekind, and many others, culminating with the
Langlands program. Along the way I will touch upon interesting interactions with analysis, algebra, number theory, and algebraic geometry. | {"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&sort_index=date&order_type=desc&page=41&l=en&document_srl=813550","timestamp":"2024-11-08T12:11:33Z","content_type":"text/html","content_length":"45654","record_id":"<urn:uuid:4c0c0652-7add-4830-9194-545157541223>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00329.warc.gz"} |
Lockers - math puzzle
A high school has a strange principal. On the first day, he has his students perform an odd opening day ceremony:
There are one thousand lockers and one thousand students in the school. The principal asks the first student to go to every locker and open it. Then he has the second student go to every second
locker and close it. The third goes to every third locker and, if it is closed, he opens it, and if it is open, he closes it. The fourth student does this to every fourth locker, and so on. After the
process is completed with the thousandth student, how many lockers are open? | {"url":"http://www.pzzls.com/lockers_puzzle.html","timestamp":"2024-11-05T07:51:45Z","content_type":"application/xhtml+xml","content_length":"10838","record_id":"<urn:uuid:4acaa60a-bafb-4d5e-9f8d-9c2423b45f55>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00678.warc.gz"} |
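The locker procedure described above is straightforward to simulate. The following minimal C++ sketch (added here as an illustration; it is not part of the original puzzle page) toggles each locker according to the rules and counts how many end up open, so it can be used to check an answer worked out by hand.

```cpp
#include <iostream>
#include <vector>

int main() {
    const int n = 1000;                   // number of lockers and students
    std::vector<bool> open(n + 1, false); // lockers start closed; index 0 unused

    // Student k toggles every k-th locker.
    for (int k = 1; k <= n; ++k)
        for (int locker = k; locker <= n; locker += k)
            open[locker] = !open[locker];

    int count = 0;
    for (int locker = 1; locker <= n; ++locker)
        if (open[locker]) ++count;

    std::cout << "Open lockers: " << count << "\n";
    return 0;
}
```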
Percents To Decimals Worksheet - Decimal Worksheets
Percents To Decimals Worksheet
Percents To Decimals Worksheet – Free percent-to-decimal worksheets are available online to help your students practice conversions between fractions, decimals, and percents. The HTML and PDF versions are printable and editable, and an answer key is included with every worksheet. Each answer key is generated automatically from randomly chosen numbers and is provided in the downloadable file. To make the practice easier to track, print the worksheets with your child's name and the date.
Free fraction-to-decimal conversion worksheets
Learning to convert fractions into decimals can be a difficult task. There are many free fraction-to-decimal conversion worksheets that can help children improve their math skills and master the concept. Most of these printable materials are available in both HTML and PDF formats, can be customized to a child's needs, and include an automatically generated answer key.
The most common way to convert a fraction into a decimal is simple division: divide the numerator by the denominator, then simplify or round the result as needed. Good worksheets also explain each step of the conversion, which makes the work easier for students to follow. Fraction-to-decimal conversion worksheets are especially useful for elementary school pupils who need help with their arithmetic skills.
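As a concrete illustration of the division method just described, the following short C++ sketch (an added example, not one of the worksheets themselves) converts a sample fraction to a decimal and then to a percentage; the fraction 3/4 is an arbitrary choice.

```cpp
#include <iostream>

int main() {
    // Example fraction: 3/4 (numerator and denominator chosen arbitrarily).
    int numerator = 3;
    int denominator = 4;

    double decimal = static_cast<double>(numerator) / denominator; // 0.75
    double percent = decimal * 100.0;                              // 75%

    std::cout << numerator << "/" << denominator
              << " = " << decimal
              << " = " << percent << "%\n";
    return 0;
}
```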
Whether you homeschool your children or send them to school, printable decimal, fraction, and percent conversion worksheets can help them understand the concepts. Besides helping kids learn decimal and percent notation, these exercises support them in learning to compare fractions in a way that carries over to word problems. The worksheets can be used at any time throughout the year and are well suited for introducing children to the concept.
Another advantage of fraction-to-decimal conversion practice is that it encourages children to tackle problems in an enjoyable way. Using worksheets to practice these techniques not only makes the lesson easier to remember, it also teaches children to apply their mental math skills in real-world situations, and the games and activities that accompany the worksheets keep them motivated.
Free fraction, decimal, and percent worksheets
Whether you are a homeschooling parent or a teacher, these decimal, percent, and fraction worksheets will help your child understand the difference between fractions and decimals. The worksheets can help your child convert fractions into decimals, solve word problems, order fractions, and compare the two forms. There are also worksheets that teach students to round decimals to the nearest tenth or hundredth.
Unlike many other math worksheets, these help your child learn how to convert a fraction into its equivalent percentage. It helps to include games and other educational activities in your lessons; a fraction-to-percentage worksheet with solutions is a good way to make the process enjoyable and simple for children. When converting a fraction to a percentage, it is best to start with easy calculations and then progress gradually to more complex ones.
A fraction-to-percent worksheet has two parts: converting a fraction to a decimal, and converting that decimal into a percent. A fraction can be converted to a decimal or a percentage by dividing the numerator by the denominator. Fraction, percent, and decimal worksheets cover both of these steps, and each worksheet includes an example of the fraction-to-decimal conversion together with exercises using the corresponding decimals.
Another tool that can be used to convert fractions into decimals is a fraction chart. A fraction chart shows how different fractions line up on the number line and which family a particular fraction belongs to, which makes it easier to read off the equivalent decimal; the same chart can also be used to convert decimals back into fractions. Besides conversion worksheets, there are worksheets that provide an introduction to different kinds of fractions.
Gallery of Percents To Decimals Worksheet
Converting Percents To Decimals Worksheets 99Worksheets
Fraction Decimal Percent Conversion Worksheet Converting Between
Grade 6 Math Worksheet Percents And Decimals Conversion K5 Learning
What is the definition of instantaneous rate of change for a function? | Socratic
What is the definition of instantaneous rate of change for a function?
1 Answer
Since the instantaneous rate of change of a function is the same as the derivative of the function, the definition is
$f ' \left(x\right) = {\lim}_{h \to 0} \frac{f \left(x + h\right) - f \left(x\right)}{h}$
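One way to see this limit in action is to evaluate the difference quotient numerically for smaller and smaller h. The sketch below (an illustration added to this answer, not part of the original) estimates the instantaneous rate of change of f(x) = x^2 at x = 3; the printed values should approach f'(3) = 6.

```cpp
#include <iostream>

double f(double x) { return x * x; }   // example function f(x) = x^2

int main() {
    double x = 3.0;
    // Evaluate the difference quotient (f(x+h) - f(x)) / h for shrinking h.
    for (double h = 1.0; h >= 1e-6; h /= 10.0) {
        double quotient = (f(x + h) - f(x)) / h;
        std::cout << "h = " << h << "  ->  " << quotient << "\n";
    }
    return 0;   // the printed values approach f'(3) = 6
}
```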
Impact of this question
3075 views around the world | {"url":"https://socratic.org/questions/what-is-the-definition-of-instantaneous-rate-of-change-for-a-function","timestamp":"2024-11-01T20:05:06Z","content_type":"text/html","content_length":"32461","record_id":"<urn:uuid:e333d134-11d0-4f53-a1c0-364b85cf5261>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00812.warc.gz"} |
WHAT ARE THE GEOGRAPHICAL COORDINATES OF ATHENS? - The best site for horoscopes daily, weekly, monthly, yearly online free
WHAT ARE THE GEOGRAPHICAL COORDINATES OF ATHENS?
Athens is the capital of Greece. "I need to know where Athens is. Is it north, south, east, west, south-east, north-west, north-east, or south-west of Greenwich? How do I find the exact European location of the city of Athens? Who can tell me how to find the exact location of the Greek city in the world? Who can tell me what the latitude and longitude of Athens are? I need the latitude and longitude of Athens for a school project. How can I search for these geographic coordinates online?" How do you calculate the latitude and longitude of a point, and how do you identify an exact point on Earth? Where exactly on the globe is the city of Athens with respect to the equator and the Greenwich meridian? If you do not know the coordinates of Athens with respect to the equator, here you will find what you need. Latitude and longitude are the two fundamental coordinates used to locate any point on the Earth's surface; even the position of a person can be traced precisely using these two coordinates, which are measured in degrees and fractions of a degree. Before giving the exact latitude and longitude of the city of Athens, we give a precise definition of these two very important coordinates, on which modern satellite systems such as the GPS built into smartphones and tablets are based. Below the two values you will find the Google map with a satellite view of the city.
Latitude: in geography, a coordinate used to determine the position of a point on the Earth's surface, namely the angular distance of the point from the equator. This coordinate is measured in degrees and fractions of a degree along the arc of the meridian passing through that point.
Longitude: in geography, a coordinate used to determine the position of a point on the Earth's surface, namely the angular distance of the point from the Greenwich meridian. This coordinate is measured in degrees and fractions of a degree along the arc of the parallel passing through that point.
ATHENS LATITUDE: 37° 59' 01" N
ATHENS LONGITUDE: 23° 43' 39" E
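Degrees-minutes-seconds coordinates like the ones above can be converted to decimal degrees by adding minutes/60 and seconds/3600 to the whole degrees. The following C++ sketch (added here as an illustration; the helper function name is our own choice) applies that formula to the Athens values quoted above.

```cpp
#include <iostream>

// Convert degrees, minutes, seconds to decimal degrees.
// 'negative' should be true for southern latitudes or western longitudes.
double dmsToDecimal(int deg, int min, int sec, bool negative) {
    double value = deg + min / 60.0 + sec / 3600.0;
    return negative ? -value : value;
}

int main() {
    double lat = dmsToDecimal(37, 59, 1, false);  // 37° 59' 01" N
    double lon = dmsToDecimal(23, 43, 39, false); // 23° 43' 39" E
    std::cout << "Athens: " << lat << " N, " << lon << " E\n"; // about 37.9836 N, 23.7275 E
    return 0;
}
```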
ATHENS ON GOOGLE MAPS
Famous cities - geographical position: | {"url":"https://www.oroscopodioggiedomani.it/latitudine-longitudine-citta_english/what-are-the-geographical-coordinates-of-athens.html","timestamp":"2024-11-06T10:20:54Z","content_type":"text/html","content_length":"8156","record_id":"<urn:uuid:85a02fed-7897-4d7f-80ea-118551eae2e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00536.warc.gz"} |
explorations in mathematics (part 1)
I've started going through the book Foundations of Higher Mathematics: Exploration and Proof (Fendel and Resek, 1990) again. It's a great math book, probably the best I've seen, because of its fresh
approach — with a focus on personal exploration rather than endless problem solving. I'm posting the results of my first few explorations, primarily in hopes that somebody smarter than me will post a
message providing new insights or pointing out places where I'm mistaken. If you want to buy the book, you should probably stop reading so as to not spoil the exercises for yourself.
Note: If you cannot view some of the math on this page, you may need to add MathML support to your browser. If you have Mozilla/Firefox, go here and install the fonts. If you have Internet Explorer,
go here and install the MathPlayer plugin.
Exploration 1: SubDivvy — a game
SubDivvy is a simple game played by two players. It proceeds as follows:
1. Choose at random a natural number greater than one. Let's call it `N`. (You may want to set a reasonable upper limit.)
2. On your turn, choose a positive divisor of `N`, that is not equal to `N`. Subtract the divisor from `N`. The result of the subtraction is given to the other player.
3. Play continues until the result reaches 1. The player who produces the result 1 is the winner.
An example game: The number chosen randomly is 68. Player 1 chooses 4 (a divisor of 68) and subtracts, yielding 64. Player 2 chooses 16 and subtracts, yielding 48. Player 1 chooses 24, yielding 24.
Player 2 chooses 8, yielding 16. Player 1 chooses 8, yielding 8. Player 2 chooses 4, yielding 4. Player 1 chooses 2, yielding 2. Player 2 is forced to choose 1, yielding 1, and player 2 is the winner.
My analysis: (Because the game takes place with the natural numbers, the statements below assume the domain of natural numbers.)
1. An odd number can only be reduced to an even number because odd numbers have only odd divisors, and an odd number minus an odd number is an even number.
2. An even number can always be reduced to an odd number by subtracting 1, which is a divisor of all natural numbers.
3. The number 2 wins because 2 can be reduced to 1. In fact, 2 is the only number that can be reduced to 1 in a single step by subtracting a divisor. This is because to reduce `N` to 1, it must have
a divisor equal to `N - 1`. That is to say, `N/(N - 1)` must be an whole number, and that is only true when `N=2`. (`2/1` is a whole number, but not `3/2`, `4/3`, `5/4`, ... `100/99`, etc.) So
getting an even number, 2, is needed to win.
4. A player with an even number can continue getting even numbers by reducing the number to an odd number (by #2), which the opponent will be forced to reduce back into an even number (by #1).
Eventually the number will be reduced to 2 by this process, and the number 2 wins (by #3).
5. Therefore, having an even number is a winning position, because there is strategy that guarantees a win from that position. Due to #1, an odd number is a losing position, because it can only be
reduced into an even number, which is a winning position for the opponent. Thus, with optimal play, the winner is determined solely by whether the original number is odd or even.
6. A good strategy is to subtract the largest odd divisor if you have an even number, and subtract 1 if you have an odd number. This strategy is guaranteed to win if you have an even number. If you
have an odd number, subtracting 1 is not a winning move, but gives the opponent more choices of divisors, more turns, and therefore more opportunities to blunder, which is the only hope one has
with an odd number.
7. Given an initial number `N`, the maximum number of turns is `N - 1`, with 1 being subtracted on each turn. The minimum number of turns results from subtracting the greatest possible divisor on each turn, and is given by the recursion `"minturns"(N) = {(1, if N=2),(1+"minturns"(N-"greatestDivisor"(N)), if N>2):} ~~ |~log_2 N~|`. As `N` increases, `"minturns"(N)` is less likely to be equal to `|~log_2 N~|`, but I believe that `0 <= "minturns"(N) - |~log_2 N~| < 1`, although I haven't proven it. (Can you prove or disprove that? A small computational sketch for checking it appears just after this list.) (On a rereading, this seems nonsensical. I think I made a mistake typing this up, but I don't remember what I was originally thinking...)
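Here is a small computational sketch of the recursion from item 7 (added as an illustration, with `greatestDivisor` meaning the greatest proper divisor); it prints `minturns(N)` next to `|~log_2 N~|` for small `N`, which makes it easy to test the conjectured inequality.

```cpp
#include <cmath>
#include <iostream>

// Greatest proper divisor of n (for n >= 2).
int greatestDivisor(int n) {
    for (int d = 2; d * d <= n; ++d)
        if (n % d == 0) return n / d;   // smallest factor d gives the largest proper divisor n/d
    return 1;                           // n is prime, so its only proper divisor is 1
}

// Number of turns to reach 1 when the greatest proper divisor is subtracted at every step.
int minturns(int n) {
    return (n == 2) ? 1 : 1 + minturns(n - greatestDivisor(n));
}

int main() {
    for (int n = 2; n <= 40; ++n) {
        int ceilLog = static_cast<int>(std::ceil(std::log2(static_cast<double>(n))));
        std::cout << n << ": minturns = " << minturns(n)
                  << ", ceil(log2 n) = " << ceilLog << "\n";
    }
    return 0;
}
```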
Exploration 2: Sets of integers that are closed under subtraction
A set `S` is said to be closed under subtraction if for all pairs of members `a` and `b` chosen from the set, `a-b` is also in the set, including the case when `a=b`.
Here are my observations of such sets:
1. Sets of integers closed under subtraction must contain zero, because any member minus itself is zero.
2. They are symmetrical around zero because they contain zero (by #1) and any member `m` subtracted from zero is `-m`. That is to say, if `m` exists, `-m` must exist.
3. The only finite set that is closed under subtraction is `{0}`, because if a set contains any member `m` besides zero, then `-m` must exist (by #2), and `m` could be subtracted from `-m` yielding
`-2m`. Similarly, `-m` could be subtracted from `m` yielding `2m`. The process could be repeated yielding `-3m` and `3m`, and so on. Therefore, if `m` exists, all multiples of `m` must exist.
4. More specifically, infinite sets (see #3) contain every multiple of the GCD (greatest common divisor) of all non-zero members. That is, if the GCD of the non-zero members is 3, then the set
contains (and only contains) all multiples of three. This is because every non-zero member `m` is divisible by the GCD by definition, so every such `m` can be represented as `m=n*d` where `n` is
an integer and `d` is the GCD. Given two such members, `m_1` and `m_2`, represented as `n_1*d` and `n_2*d` respectively, `m_1 - m_2 = (n_1*d) - (n_2*d) = d(n_1 - n_2)`, which is also divisible by
`d`. Therefore, the difference of any two members is another number, divisible by the GCD, which also exists (by #3). This also can be visualized by imagining each number as a line of dots, with
the number `n` having `n` dots. So 4 is a row of four dots: • • • •.
Can you think of any other properties of sets of integers closed under subtraction?
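These observations can be tested computationally. The sketch below (an addition to the post, not part of it) repeatedly closes a starting set under subtraction, restricted to a finite window since the true closure is infinite; the output can be compared against the multiples of the GCD. The starting set {12, 18} is an arbitrary example with GCD 6.

```cpp
#include <iostream>
#include <set>

int main() {
    const int limit = 30;            // only track elements in [-limit, limit]
    std::set<int> s = {12, 18};      // starting set (arbitrary example, GCD = 6)

    // Repeatedly add a - b for all pairs until nothing new appears in the window.
    bool grew = true;
    while (grew) {
        grew = false;
        std::set<int> next = s;
        for (int a : s)
            for (int b : s) {
                int d = a - b;
                if (d >= -limit && d <= limit && next.insert(d).second) grew = true;
            }
        s = next;
    }

    for (int x : s) std::cout << x << " ";
    std::cout << "\n";   // expected: every multiple of 6 between -30 and 30
    return 0;
}
```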
Exploration 3: Sets of integers that are closed under addition
A set `S` is said to be closed under addition if for all pairs of members `a` and `b` chosen from the set, `a+b` is also in the set, including the case when `a=b`.
Here are my observations of such sets:
1. The only finite set closed under addition is `{0}`, because if a non-zero member `m` exists, it can be added to itself, yielding `2m`, and added again yielding `3m`, etc. So if `m` exists, all
multiples of `m` with the same sign must also exist. Numbers of the opposite sign needn't exist because adding two numbers with the same sign always produces a result with that same sign. (This
means that if 2 exists, then 4, 6, 8, etc. must also exist, but -4 needn't exist.)
2. If the set contains both positive and negative numbers, it must also be closed under subtraction (see above) because of the definition of subtraction: `a-b=a+(-b)`. By #1, if a set closed under
addition contains 2, it must contain 4, 6, 8, etc., and if it contains -4, it must contain -8, -12, -16, etc. But consider the set `S`, closed under addition, with a subset {-4, 2}. Because `2 +
(-4) = 2 - 4 = -2`, -2 must also exist, so by #1 the positive series (2, 4, 6, 8, ...) is duplicated on the negative side (-2, -4, -6, -8, ...). By similar reasoning, the negative series (-4, -8,
-12, ...) is also duplicated on the positive side. This symmetry means that if `m` exists, then `-m` exists, and because `m + (-m) = 0`, 0 must also exist. See the discussion of sets closed under
subtraction, above.
3. If a set `S` is closed under addition but not subtraction (ie, it does not contain numbers of opposite sign) and contains non-zero members, it contains (and only contains) multiples of the GCD
(greatest common divisor) of all non-zero members, with the same sign as those members. In addition, it contains all multiples of the GCD greater than or equal to twice the value of the smallest
non-zero member. So far this is rather tautological, but it allows you to determine the closure of a set over addition. For example, the closure of the set {9,12} (which has a GCD of 3) is {9, 12, 18, 21, 24, 27, 30, ...}: the numbers 9 and 12 themselves, together with every multiple of 3 greater than or equal to 18.
4. If the set is not closed under subtraction, then it may contain zero, although it is not required. The presence of zero does not imply the existence any other members, because `a+0=a`.
The sets of integers closed under addition are more complex than those closed under subtraction, so I'm more likely to be mistaken here, or missing some other properties. Can you find any other
properties of these sets?
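To double-check the {9, 12} example from item 3, the closure under addition can be generated directly, again truncated at a finite bound. This sketch is an added illustration, not part of the original post.

```cpp
#include <iostream>
#include <set>

int main() {
    const int limit = 60;        // only track sums up to this bound
    std::set<int> s = {9, 12};   // starting set, GCD = 3

    bool grew = true;
    while (grew) {
        grew = false;
        std::set<int> next = s;
        for (int a : s)
            for (int b : s) {
                int sum = a + b;
                if (sum <= limit && next.insert(sum).second) grew = true;
            }
        s = next;
    }

    for (int x : s) std::cout << x << " ";
    std::cout << "\n";   // expected: 9 and 12, then every multiple of 3 from 18 up to the bound
    return 0;
}
```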
Exploration 4: Make It More — another game
Make It More is another two-player game. Each player has a 3-digit number to fill in, so a playing sheet might look like:
Player 1: __ __ __
Player 2: __ __ __
Player 1 begins by choosing any digit except 0 or 5, and writing it in his ones column. Player 2 doubles the digit player 1 chose, and chooses any digit except 0 or 5 that is less than or equal to
the ones digit of the doubled value, and writes it into his ones column.
Player 1 doubles the digit player 2 just chose and, like player 2, picks a digit other than 0 or 5 that is less than or equal to the ones digit of the doubled value, and writes it in his tens column.
This process continues until both numbers are filled out. The winner is the player whose final 3-digit number is larger.
Here is a sample game:
Move | Player 1 | Player 2 | Comments
1 | _ _ 4 | _ _ _ | (Player 1 chooses 4 for the ones column.)
2 | _ _ 4 | _ _ 7 | (Twice 4 is 8; player 2 chooses 7, which is less than or equal to 8, for the ones column.)
3 | _ 3 4 | _ _ 7 | (Twice 7 is 14; player 1 chooses 3, which is less than or equal to 4 [the ones digit of 14], for the tens column.)
4 | _ 3 4 | _ 6 7 | (Twice 3 is 6; player 2 chooses 6 for the tens column.)
5 | 1 3 4 | _ 6 7 | (Twice 6 is 12; player 1 chooses 1, which is less than or equal to 2, for the hundreds column.)
6 | 1 3 4 | 2 6 7 | (Twice 1 is 2; player 2 chooses 2 for the hundreds column.)
267 is greater than 134, so player 2 wins this game.
This doesn't seem like much of a game to me, but I could be wrong. Here's what I think:
1. The digit 9 can only be chosen during move 1. After the first move, the previous digit is doubled. Since only the ones digit of the doubled value is considered, the doubling is effectively done
modulo 10, and no integer number times 2, modulo 10, is greater than or equal to 9. It won't be equal to 9 because 9 is odd, and multiplying by 2 makes a number even, and it won't be greater
because 9 is the maximum integer modulo 10. On the other hand, the values 1 and 2 are available to be chosen on every turn, because even if the smallest digit, 1, is chosen for the previous turn,
when doubled it will be equal to 2.
2. The hundreds column dominates the other columns. The other columns don't even need to be considered unless the two choices for the hundreds column are equal.
3. Because the hundreds column dominates the other columns, and because the digits 1 or 2 can always be chosen, player 2 can win this game every time. If player 2 chooses 1 or 2 for his tens column,
player 1's choices for his hundreds column will be limited to 1, 2, 3, or 4, and no matter which one of those he chooses, player 2 can double it and win.
subdivvy 2018-09-28 05:49AM
Hi Adam,
I'm wondering if I can convince you to take this down. It's a great problem, one that we use to give students an opportunity to learn how to explore and problem-solve. Unfortunately, some students
can't resist the temptation to search the internet and find answers and explanations without doing the exploring themselves.
an anonymous William Blatner | {"url":"http://www.adammil.net/blog/v84_explorations_in_mathematics_part_1_.html","timestamp":"2024-11-13T01:58:21Z","content_type":"text/html","content_length":"20966","record_id":"<urn:uuid:97809e1a-4cca-4a8c-ac69-90ecce95640e>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00466.warc.gz"} |
All articles - Tutcoach
Introduction Heaps are an essential data structure in computer science, widely used for various applications such as priority queues, heap sort, and graph algorithms like Dijkstra’s shortest path. In
a heap, every parent node is either greater than or equal to (Max-Heap) or less than or equal to (Min-Heap) its child nodes. In this article … Read more
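As a quick, added illustration of that ordering property (this is not code from the article itself), the C++ standard library already exposes heap behavior through std::priority_queue:

```cpp
#include <functional>
#include <iostream>
#include <queue>
#include <vector>

int main() {
    std::priority_queue<int> maxHeap;                                      // largest element on top
    std::priority_queue<int, std::vector<int>, std::greater<int>> minHeap; // smallest element on top

    for (int x : {5, 1, 9, 3}) {
        maxHeap.push(x);
        minHeap.push(x);
    }

    std::cout << "Max-heap top: " << maxHeap.top() << "\n"; // 9
    std::cout << "Min-heap top: " << minHeap.top() << "\n"; // 1
    return 0;
}
```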
C++ Program to Implement Floyd-Warshall Algorithm
Introduction The Floyd-Warshall algorithm is a fundamental algorithm in computer science, widely used for finding shortest paths in a weighted graph with positive or negative edge weights. This
algorithm is particularly important for its ability to handle graphs with negative weight edges, provided there are no negative weight cycles. The essence of the Floyd-Warshall algorithm … Read more
C++ Program to Implement Dijkstra’s Algorithm Using Priority Queue
Dijkstra’s algorithm is a popular algorithm for finding the shortest paths between nodes in a graph, which may represent, for example, road networks. In this article, we will learn to code a C++
Program to Implement Dijkstra’s Algorithm Using Priority Queue and show how to implement it using a priority queue in C++. We will … Read more
C++ Program to Check Leap Year
Checking whether a given year is a leap year is a common problem in programming, often used in applications related to calendars, date calculations, and more. In this article, we will learn to write
a C++ Program to Check Leap Year. We will include multiple examples with different implementations and detailed explanations of each, ensuring … Read more
Power of a Number Using a Loop in C++
Calculating the power of a number is a fundamental task in programming, often used in mathematical computations and algorithms. In this article, we will delve into how to calculate the power of a
number using a loop in C++. We will provide a clear introduction to the concept, followed by multiple examples of different implementations … Read more
C++ Program to Find the Sum of Harmonic Series
Finding the sum of the harmonic series, which is expressed as 1 + 1/2 + 1/3 + … + 1/n, is a common mathematical problem that can be implemented easily using C++. In this article, we will provide an in-depth explanation of the harmonic series, the prerequisites for understanding and implementing it, and present three different solutions with detailed explanations and … Read more
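A minimal sketch of the straightforward loop approach (an added illustration, not the article's own listing) might look like this:

```cpp
#include <iostream>

int main() {
    int n = 5;            // number of terms (example value)
    double sum = 0.0;
    for (int i = 1; i <= n; ++i)
        sum += 1.0 / i;   // add 1/i to the running total
    std::cout << "Sum of the first " << n << " harmonic terms: " << sum << "\n"; // about 2.28333
    return 0;
}
```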
C++ Program to Implement Matrix Multiplication
Matrix multiplication is a fundamental operation in many scientific and engineering applications, including computer graphics, physics simulations, and data analysis. Understanding how to implement
matrix multiplication in C++ is essential for developers who work in these fields. In this article, we will explore the concept of matrix multiplication, the necessary prerequisites, and provide
three different … Read more
C++ Program to Find the Longest Increasing Subsequence
Introduction In the world of computer science, one common problem is finding the Longest Increasing Subsequence (LIS) in a given sequence. This problem is significant in various applications such as
data analysis, bioinformatics, and pattern recognition. The Longest Increasing Subsequence is a subsequence that appears in the same order as the original sequence but is … Read more
Solve the LCS problem using C++
Introduction The Longest Common Subsequence (LCS) problem is a classic computer science problem that involves finding the longest subsequence that is common to two sequences. A subsequence is a
sequence derived by deleting some or none of the elements without changing the order of the remaining elements. The LCS problem is important in fields such … Read more
C++ Program to Implement Selection Sort
Introduction Selection sort is a simple and intuitive comparison-based sorting algorithm. Despite its simplicity, it is not suitable for large datasets as its average and worst-case time complexity
is quite high. In this article, we will delve into the concept of selection sort, understand its working principle, and implement it in C++ through various examples. … Read more
C++ Program to Convert Days into Years Weeks and Days
Introduction In this article, we will explore how to implement a C++ Program to Convert Days into Years, Weeks and Days. We’ll cover the prerequisites, provide detailed explanations, and include
outputs for each example to ensure a comprehensive understanding. Prerequisites Before we start, ensure you have a basic understanding of the following: With these prerequisites, … Read more
C++ Program to Implement a Basic Calculator
Introduction A basic calculator is a simple yet powerful tool that performs arithmetic operations like addition, subtraction, multiplication, and division. Implementing a calculator in C++ is an
excellent exercise for beginners to understand fundamental programming concepts such as input/output handling, control structures, and functions. In this article, we’ll explore how to create a C++
Program … Read more
C++ Program to Demonstrate Friend Functions
Introduction In C++, friend functions are a powerful feature that allows functions to access the private and protected members of a class. This can be particularly useful when you need to provide
access to non-member functions without compromising encapsulation. In this article, we will explore how to write a C++ Program to Demonstrate Friend Functions. … Read more
C++ Program to Implement Single Inheritance
Introduction Inheritance is one of the core principles of object-oriented programming (OOP). It allows us to create a new class that reuses, extends, or modifies the behavior defined in another
class. Single inheritance is the simplest form of inheritance where a class inherits from only one base class. In this article, we will explore how … Read more
C++ Program to Implement Multiple Inheritance
Introduction In the world of object-oriented programming, inheritance is a powerful concept that allows the creation of a new class based on an existing class. Multiple inheritance, where a class can
inherit from more than one base class, is a unique feature of C++ that can be particularly useful in modeling complex relationships. In this … Read more
C++ Program to Count the Number of Vowels in a String
Introduction In the world of programming, string manipulation is a fundamental skill that comes into play in various applications. One common task is to count the number of vowels in a given string.
This article will guide you through creating a C++ Program to Count the Number of Vowels in a String. We’ll cover the … Read more
C++ Program to Convert Temperature from Fahrenheit to Celsius
Introduction: Temperature conversion is a common task in programming, especially in applications dealing with weather, physics, or engineering. In this article, we’ll explore a C++ Program to Convert
Temperature from Fahrenheit to Celsius. Understanding this conversion is essential for anyone working with temperature data or systems that use different temperature scales. Temperature conversion is
provided … Read more
C++ Program to Calculate Compound Interest
Imagine you have some money saved up and you’re curious about how much it could grow over time with compound interest. In the world of programming with C++, understanding compound interest and being
able to calculate it can open doors to various financial simulations and predictions. This article is your guide to creating a C++ … Read more
C++ programs to print all ASCII values and their equivalent characters
Introduction Welcome to our exploration of ASCII values and their equivalent characters in C++. ASCII (American Standard Code for Information Interchange) is a widely used character encoding standard
that assigns numeric codes to represent characters. Understanding ASCII values and their corresponding characters is fundamental in programming, especially when dealing with text processing,
character manipulation, and … Read more
C++ Program to Calculate Simple Interest
Understanding and calculating simple interest is a fundamental concept in both finance and mathematics. Simple interest represents the amount of money earned or paid on a principal amount over a
certain period, based on an annual interest rate. It is a straightforward calculation that plays a crucial role in various financial transactions, such as loans, … Read more
C++ Program to Implement Depth First Search (DFS)
Introduction Depth First Search (DFS) is a fundamental algorithm used in graph theory to explore nodes and edges of a graph. It is a technique that starts at the root node and explores as far as
possible along each branch before backtracking. In this article, we will delve into how to write a C++ Program … Read more
C++ Program to Implement Bellman-Ford Algorithm
Introduction The Bellman-Ford Algorithm is a well-known algorithm for finding the shortest path from a single source to all other vertices in a weighted graph. Unlike Dijkstra’s algorithm,
Bellman-Ford can handle graphs with negative weight edges, making it a versatile tool in graph theory. In this article, we will explore how to write a C++Program … Read more
C++ Program to Find the Subset Sum Using Backtracking
Introduction Subset Sum is a classic problem in computer science, where the goal is to determine whether a subset of a given set of integers sums up to a specific target value. This problem can be
efficiently solved using various methods, with backtracking being one of the most intuitive and powerful approaches. In this article, … Read more
C++ Program to Implement LRU Cache
Introduction In the world of computing, efficient data retrieval is a critical aspect of performance. One widely used technique to optimize data retrieval is through the use of caching mechanisms.
One such mechanism is the Least Recently Used (LRU) Cache. In this article, we will explore how to implement an LRU Cache in C++ with … Read more
C++ Program to Implement Ant Colony Optimization Algorithm
Introduction Ant Colony Optimization (ACO) is a nature-inspired algorithm used for solving computational problems, particularly those involving finding optimal paths. Inspired by the foraging
behavior of ants, ACO utilizes the concept of pheromone trails to guide ants (agents) towards optimal solutions. In this article, we will explore how to write C++ Program to Implement Ant … Read more
C++ Program to Implement a Trie
Introduction A Trie, also known as a prefix tree, is a specialized tree used to store associative data structures. Tries are especially useful for applications that involve searching for words in a
dictionary, auto-completion, and spell checking. In this article, we will explore how to write a C++ Program to Implement a Trie with detailed … Read more
C++ Program to Implement a B-Tree
Introduction In this article, we will delve into how to implement C++ Program to Implement a B-Tree with detailed explanations and examples. B-Trees are a type of self-balancing tree data structure
that maintains sorted data and allows for efficient insertion, deletion, and search operations. B-Trees are widely used in databases and file systems due to … Read more
C++ Program to Implement AVL Tree
Introduction AVL Trees are a type of self-balancing binary search tree, named after their inventors Adelson-Velsky and Landis. In this article, we will learn to write a C++ Program to Implement AVL
Tree with detailed explanations and examples. AVL Trees are essential for maintaining balanced tree structures, ensuring efficient insertion, deletion, and lookup operations. By … Read more
C++ Program to Implement Red-Black Tree
Introduction Red-Black Trees are a type of self-balancing binary search tree, essential for maintaining sorted data efficiently. This data structure is crucial in various applications such as
implementing associative arrays, priority queues, and maintaining order statistics. In this article, we will explore how to write C++ Program to Implement Red-Black Tree with real-world examples and
… Read more
C++ Program to Implement a Circular Linked List with Doubly Linked Nodes
In the world of data structures, a circular linked list with doubly linked nodes offers a unique and powerful way to manage collections of elements. This structure allows you to traverse the list
from any node to any other node, making it both versatile and efficient. In this article, we will explore how to implement … Read more
C++ Program to Implement a Deque
A deque (double-ended queue) is a versatile data structure that allows insertion and deletion of elements from both ends. Deques are commonly used in scenarios where elements need to be added or
removed from both ends efficiently. This article will guide you through implementing a deque in C++ using three different methods, providing multiple solutions … Read more
C++ Program to Implement a Priority Queue
A priority queue is a special type of queue in which each element is associated with a priority and the element with the highest priority is served before the others. Priority queues are widely used
in scenarios such as CPU scheduling, bandwidth management in network routers, and many more. This article will guide you through … Read more
C++ Program to Find the Transpose of a Matrix
Finding the transpose of a matrix is a common operation in linear algebra and has numerous applications in various fields, including computer graphics, data analysis, and more. This article will
explore different methods to compute the transpose of a matrix using C++, providing multiple solutions and example outputs for each approach. Prerequisites To effectively follow … Read more
C++ Program to Find the Determinant of a Matrix
Finding the determinant of a matrix is a fundamental operation in linear algebra, with applications in areas such as systems of linear equations, geometry, and more. This article will guide you
through various methods to compute the determinant of a matrix using C++, providing detailed explanations and example outputs for each approach. Prerequisites To effectively … Read more
C++ Program to Implement Bubble Sort
Bubble Sort is a fundamental sorting algorithm that is easy to understand and implement. It repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong
order. This article will provide an in-depth look at Bubble Sort in C++, presenting multiple examples with different solutions and their respective outputs. … Read more
C++ Program to Check if a Matrix is Symmetric
Checking if a matrix is symmetric is a fundamental task in linear algebra, crucial for many applications in mathematics, physics, and engineering. A matrix is symmetric if it is equal to its
transpose. This article will explore various methods to determine if a matrix is symmetric using C++, providing detailed explanations and multiple solutions with … Read more
C++ Program to Find the Inverse of a Matrix
Finding the inverse of a matrix is a fundamental operation in linear algebra, essential for solving systems of linear equations, performing transformations, and more. This article will explore
various methods to compute the inverse of a matrix in C++, offering multiple solutions and detailed explanations. Prerequisites To effectively follow along with this article, you should … Read more
C++ Program to Find the Sum of the Diagonals of a Matrix
Finding the sum of the diagonals of a matrix is a common task in matrix operations, essential for many computational problems in mathematics, computer science, and engineering. This article explores
different C++ Program to Find the Sum of the Diagonals of a Matrix providing comprehensive examples to illustrate various approaches. Prerequisites To effectively understand and … Read more
C++ Program to Calculate the Sum of Each Row and Column of a Matrix
Calculating the sum of each row and column of a matrix is a common operation in matrix manipulation and analysis. This operation is widely used in various fields such as data analysis, machine
learning, and scientific computing. In this article, we will explore how to write a C++ program to calculate the sum of each … Read more
C++ Program to Find the Sum of the Diagonals of a Matrix
Finding the sum of the diagonals of a matrix is a common operation in linear algebra and is used in various applications, including graphics, data analysis, and scientific computing. In this article,
we will explore how to write a C++ program to find the sum of the diagonals of a matrix. We will provide three … Read more
C++ Program to Perform Addition of Two Matrices
Matrix addition is a fundamental operation in linear algebra, commonly used in various fields such as computer graphics, data analysis, and machine learning. This article will cover how to perform
the addition of two matrices using C++. We’ll explore three different examples with varying levels of complexity and explain each method step-by-step. Introduction In this … Read more
C++ Program to Perform Subtraction of Two Matrices
Matrix operations are fundamental in various fields of computer science, engineering, and mathematics. One such operation is matrix subtraction, where each element of one matrix is subtracted from
the corresponding element of another matrix. In this comprehensive guide, we will explore how to perform the subtraction of two matrices using C++. We will cover three … Read more
C++ Program to Convert Infix Expression to Postfix
Converting infix expressions (where operators are between operands, such as A+B) to postfix expressions (where operators follow their operands, such as AB+) is a fundamental problem in computer
science, especially in the fields of compiler construction and expression evaluation. This process simplifies the parsing and evaluation of expressions. In this comprehensive guide, we will explore …
Read more
C++ Program to Evaluate a Postfix Expression
Evaluating postfix expressions (also known as Reverse Polish Notation) is a common problem in computer science, particularly in the domain of compilers and calculators. A postfix expression is a
mathematical notation in which every operator follows all of its operands, eliminating the need for parentheses to dictate the order of operations. This comprehensive guide will … Read more
Sum of Natural Numbers Using Recursion
Calculating the sum of natural numbers is a fundamental task in mathematics and programming. Natural numbers are the sequence of numbers starting from 1, 2, 3, and so on. This task is frequently used
in data analysis, algorithm design, and problem-solving. In this comprehensive guide, we will explore how to find the sum of natural … Read more
R Program to Make a Simple Calculator
Creating a simple calculator is an excellent way to practice basic programming concepts, such as functions, control structures, and user input handling. In this comprehensive guide, we will explore
how to make a simple calculator using R programming. We will cover three different solutions, each with detailed explanations and outputs. Before diving into the examples, … Read more
Find LCM of a Number Using R
The Least Common Multiple (LCM) of two numbers is the smallest positive integer that is divisible by both numbers. This concept is fundamental in mathematics and has various applications in number
theory, algebra, and computational tasks. In this comprehensive guide, we will explore how to find the LCM of a number using R. We will … Read more
R Program to Find H.C.F. or G.C.D
The Highest Common Factor (H.C.F.) or Greatest Common Divisor (G.C.D.) of two numbers is the largest number that divides both of them without leaving a remainder. This concept is fundamental in
number theory and has various applications in mathematics and computer science. In this comprehensive guide, we will explore how to find the H.C.F. or … Read more
Fibonacci Sequence Using Recursion in R
The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, typically starting with 0 and 1. This sequence has numerous applications in computer science,
mathematics, and nature. In this comprehensive guide, we will explore how to generate the Fibonacci sequence using recursion in R. We will … Read more
R Program to Find the Factors of a Number
Finding the factors of a number is a fundamental task in mathematics and programming. Factors are the numbers that divide a given number exactly without leaving a remainder. This task has various
applications in number theory, cryptography, and problem-solving. In this comprehensive guide, we will explore different methods to find the factors of a number … Read more | {"url":"https://tutcoach.com/blog/page/2/","timestamp":"2024-11-10T02:28:27Z","content_type":"text/html","content_length":"171011","record_id":"<urn:uuid:1e3a6917-3b10-4ed9-85b3-b6b65983ec18>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00861.warc.gz"} |
[GAP Forum] Blocks and block systems for non-transitive groups
wh at icparc.ic.ac.uk
Wed Dec 1 20:55:18 GMT 2004
Dear Beth, Dear GAP Forum,
On Wed, Dec 01, 2004 at 03:53:25PM +0000, Petra Holmes wrote:
> You can look at each orbit of G on the points separately. To do this, you
> can use the code:
> o:=Orbits(G); ims:=[];
> for oo in o do
> hom:=ActionHomomorphism(G,oo,OnPoints);
> Add(ims,Image(hom));
> od;
> Then you can look at blocks in each orbit of G separately, as you are now
> dealing with a collection of transitive groups.
Thank you. That would indeed allow me to find all the blocks I wanted
within each orbit using standard routines.
Having given some more thought to the problem, however, I am particularly
interested in blocks than span more than one orbit for intransitive groups.
To give an example, the group generated by (1,2,3,4)(5,6,7,8) is
intransitive: it has two orbits. Considering the orbits independently one
obtains the representative blocks [1,3] and [5,7]. But there are also block
systems with representative blocks [1,3,5,7] and [1,5] that I would like to
find but cannot by considering the orbits independently. Perhaps they can
be recovered/extracted later? But unless that's straightforward to do it
seems as though one might as well use a block-finding algorithm that works
directly with intransitive groups (unless that turns out to be unexpectedly
I guess I should go study the source code for AllBlocks some more and see if
I can figure out how it works. :)
Courses
Take the following courses:
MA-130 Calculus I
An introduction to calculus including differentiation and integration of elementary functions of a single variable, limits, tangents, rates of change, maxima and minima, area, volume, and other
applications. Integrates the use of computer algebra systems, and graphical, algebraic and numerical thinking.
4 CreditsN, QM
MA-160 Linear Algebra
An introduction to systems of linear equations, matrices, determinants, vector spaces, linear transformations, eigenvalues, and applications.
3 CreditsN, QMPrerequisites: MA130.
MA-230 Calculus II
Expands the treatment of two-space using polar and parametric equations. Emphasizes multivariable calculus, including vectors in three dimensions, curves and surfaces in space, functions of several
variables, partial differentiation, multiple integration, and applications.
4 CreditsN, QMPrerequisite: MA130
MA-235 Calculus III
A continuation of the calculus sequence. Topics include methods of integration by Simpson's Rule, applications, Taylor and Fourier series; introduction to ordinary differential equations; integration
in polar, cylindrical, and spherical coordinates; differential and integral vector calculus.
4 CreditsN, QMPrerequisites: MA230.
MA-335 Differential Equations
Theory and application of ordinary differential equations. Emphasis on modern qualitative techniques, with numerical and analytical approaches used when appropriate. Contains a brief introduction to
partial differential equations.
4 CreditsN, QMPrerequisites: MA130 and MA230 and MA235 or MA233.
Complete one of the following options below:
OPTION 1:
PC-202 Intro Physics I
A calculus-based introduction to the basic principles of mechanics (including periodic motion and dynamics), heat and thermodynamics, and special relativity.
3 CreditsN, QM, WK-FRCorequisite: PC-202L. Corequisite or Prerequisite: MA-130 or MA-230.
PC-202L Intro Physics Lab I
This lab is a calculus-based introductory laboratory experience that is designed to accompany PC-202. Individual experiments will correlate with the course, including kinematics, Newton's Laws,
energy, and momentum.
1 CreditNCorequisite: PC-202. Prerequisite or corequisite: MA-130 or MA-230.
OPTION 2:
PC-204 University Physics
A calculus-based introduction to the basic principles of mechanics (including periodic motion, statics, and dynamics), heat and thermodynamics, and special relativity. This course includes an
integrated introductory laboratory experience. This course is designed to be taken by students interested in a Program of Emphasis in Physics or Engineering Physics.
4 CreditsN, QM, WK-FRPre- or Co-requisites: MA-130; FYC-101
Take the following courses:
PC-203 Intro Physics II
A calculus-based introduction to basic principles of electricity, magnetism, electromagnetic waves and optics. Additional topics may include atoms and molecules, nuclear physics, relativity and solid
state physics.
3 CreditsN, QMPrerequisite: Take PC-202 or PC-204. Corequisite: PC-203L.
PC-203L Intro Physics Lab II
An algebra-based introductory laboratory experience designed to accompany PC-203. The individual experiments will involve topics in circuits, light and optics, and nuclear physics.
1 CreditNPrerequisite: PC-202 or PC-204. Corequisite: PC-203.
PC-189 Physics Seminar I
Seminar series, required of all freshmen Physics/Physics-Engineering POEs, consisting of research seminars given by invited speakers and members of the department, both faculty and students.
Discussions regarding specific career opportunities and preparation for graduate studies will also be an integral part of the seminar series.
1 Credit
PC-289 Physics Seminar II
Seminar series, required of all sophomore Physics/Physics-Engineering POEs, consisting of research seminars given by invited speakers and members of the department, both faculty and students.
Discussions regarding specific career opportunities and preparation for graduate studies will also be an integral part of the seminar series.
1 Credit Prerequisites: PC189.
PC-300 Intermediate Physics Lab
The origin and progress of physics in the 20th century, including relativity and quantum theory with applications in atomic and molecular physics, nuclear physics, elementary particles and possibly
some solid state physics. (Previously titled Modern Physics Lab)
3 CreditsN, CWPrerequisites: MA-230 and PC-203. Corequisite: PC-301.
PC-301 Modern Physics
The origins and progress of Physics in the 20th century, including relativity and quantum theory with applications in atomic and molecular physics, nuclear physics, elementary particles and possibly
some solid state physics. (Previously titled Theoretical Modern Physics)
3 CreditsNPrerequisite: MA-230 or PC-203. Pre- or co-requisite: MA-235.
PC-307 Advanced Physics Lab
Provides laboratory projects at the intermediate level. A series of projects is offered which best meet the educational needs of the student.
3 CreditsN, QS, CWPrerequisite: PC300. Special fee assessed.
PC-340 Mathematical Methods in Physics
An introduction to the mathematics used in advanced physical science courses. The emphasis is on early exposure to mathematical techniques and their applications rather than on rigorous derivation.
Topics include series analysis, complex variables, theory, matrix mechanics, ordinary and partial differential equations, vector and tensor analysis, and Fourier series.
3 CreditsNPrerequisites: PC203 and MA230.
PC-389 Physics Seminar III
Seminar series, required of all junior Physics/Physics-Engineering POEs, consisting of research seminars given by invited speakers and members of the department, both faculty and students.
Discussions regarding specific career opportunities and preparation for graduate studies will also be an integral part of the seminar series.
1 Credit Prerequisite: PC289.
PC-402 Quantum Mechanics
This course continues the discussion of the Schrodinger Equation, the particle-in-a-box, the harmonic oscillator, angular momentum, the hydrogen atom, and electron spin started in PC-301 and/or
CH-305, but at a level that is mathematically much more detailed and proceeds from the postulates of quantum mechanics in a logical manner. With this beginning, the course then focuses on more
complex problems such as the behavior of multi-electron atoms and molecules. Issues of the meaning of measurement such as embodied in the EPR paradox, the Bell Inequality, and the interpretation of
associated experiments are also discussed. The course is heavily problem-oriented, requiring a strong mathematical background.
4 Credits. N. Prerequisites: MA-235 and either PC-301 or CH-305.
PC-410 Mechanics
A study of classical mechanics including Newtonian, Lagrangian and Hamiltonian approaches. Emphasis is placed on developing the student's ability to analyze physical problems involving particles,
systems of particles and rigid bodies. Insight is provided into a variety of techniques for solving such problems.
4 Credits. N. Prerequisites: PC-203 and PC-340.
PC-489 Physics Seminar IV
Seminar series, required of all senior Physics/Physics-Engineering POEs, consisting of research seminars given by invited speakers and members of the department, both faculty and students.
Discussions regarding specific career opportunities and preparation for graduate studies will also be an integral part of the seminar series. Prerequisite: PC-389, and restricted to Seniors with POE
of Physics or Engineering Physics.
1 Credit
PC-491 Electricity & Magnetism
A study of electromagnetic phenomena, including electrostatics, electric fields in matter, magnetostatics, magnetic fields in matter, introductory electrodynamics including Maxwell's equations, and
electromagnetic waves, potentials, and fields.
4 Credits. N. Prerequisite: PC-203.
In addition to the required Physics and Mathematics courses, at least two of the following courses must be taken (graduate schools may expect additional courses):
PC-209 Electronics
An introduction to the theory and application of analog and digital electronics, starting with basic AC and DC circuits. The unit explains the principles of operation of the power supply, amplifier,
oscillator, logic circuits, micro controllers, and other basic circuits. An associated laboratory component allows construction of and measurements on the circuits under consideration. Note: a
special fee is assessed.
3 Credits. N.
PC-239 Nuclear Threat
This course examines the development and ramifications of nuclear weapons. Students will learn the basic physics upon which these devices operate, and explore moral issues that arose in the
interactions of communities impacted by their construction, use, and testing, including the perspectives of scientists, government officials, and affected citizenry. Current issues and concerns
regarding nuclear weapons will be studied as well.
4 Credits. CA, N, H, CW, WK-SP.
PC-350 Thermodynamics
An intermediate level course treating the concept of temperature and its measurement, the concepts of heat and work, the laws of thermodynamics, applications of these concepts to physical systems,
the elements of statistical mechanics and as many topics of current concern as time allows.
3 Credits. N. Prerequisites: MA-235 and PC-301.
PC-430 Optics
The wave theory of light as applied to interference, diffraction, polarization, and image formation. Major emphasis on Fourier techniques. Study of geometrical optics, quantum optics, and radiometry
as time permits.
3 Credits. N. Prerequisite: PC-300 or PC-301.
Take one of the following courses:
ND-498 Natural Sciences Capstone
The natural sciences capstone course is appropriate for any student in the natural sciences needing to fulfill the capstone requirement of the Juniata Curriculum. The course may be taken by any
student with a natural sciences POE in their last 30 credits at Juniata. Offered asynchronously online, the course will be graded satisfactory/unsatisfactory. Guided by a series of tutorial videos,
students are required to submit an up-to-date resume and two portfolio contributions. Through these assignments, students will reflect on how their Juniata experience has shaped their intellectual
and personal growth.
1 Credit
PC-450 Physics Research I
An opportunity for the student to do an independent research project under the guidance of a faculty member. Note: listed as Research: (title); may be taken multiple times for credit. Prerequisite:
1-4 Credits. N.
AS-450 Astronomy Research II
Observational, computational, or theoretical research into a topic in astronomy or astrophysics, under the guidance of a faculty member. A formal written report and public presentation of research
results are required. May be taken multiple times for credit.
2-4 Credits. Prerequisites: AS-350 and permission of instructor.
POE Credit Total = 60-63
Students must complete at least 18 credits at the 300/400-level. Any course exception must be approved by the advisor and/or department chair.
Physics is the science that explores all aspects of the complex interactions of matter and energy, from the forces that bind atoms to those that build bridges. Physicists study and develop concepts
that are used in a precise mathematical description of nature and construct experiments to test their ideas. Skills cultivated in a study of Physics include critical reasoning, problem-solving,
logical thought, and the ability to clearly communicate the value of this work to both peers and the public. Physics is at the core of a liberal arts education in a technological society.
The Physics Program of Emphasis is structured to allow a student to prepare for graduate school or to seek immediate employment. The first two years of physics consist of a broad introduction to the
field, providing basic knowledge and initial analytical skill development. Some laboratory work is included to ensure contact with concrete phenomena, while the mathematics sequence offers the
necessary problem-solving techniques and discipline required for the upper-level physics courses at Juniata. At the upper level this program trains students in the fundamentals of experimentation and
The program as stated provides minimal preparation for graduate school and many schools would expect more of their entrants. A person starting early in the field and heading clearly toward graduate
school needs to develop a program with greater depth. The Department therefore recommends that a serious student take as large a fraction of the elective courses in physics as possible, and, in
addition, acquire research experience. | {"url":"https://connect.juniata.edu/academics/physics/physics-courses.php","timestamp":"2024-11-02T17:41:09Z","content_type":"text/html","content_length":"64663","record_id":"<urn:uuid:4c21237d-0632-4542-b68f-829ceadc961f>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00302.warc.gz"} |
Hi, I am a new user at your platform. I have couple of strategies and I was wondering if there is any machine learning extensions/tools available to help me optimize my strategy. Also I am really
keen to use ML to find the probability of profit of future trades based on historical patterns.
It is called finantic.Learning. It will contain 37 Machine Learning (ML) algorithms for both classification and regression.
It works on top of the finantic.IndicatorSelection extension.
The workflow will be like this:
You use Indicator Profiler or finantic.IndicatorSelection to find a set of promising indicators, i.e. some indicators that have a (however small) correlation between their indicator value and future
profits, i.e. show a certain predictive power.
Then you feed these (limited number of) indicators to finantic.Learning.
A machine learning algorithm will then build a model that calculates the best combination of the input indicators and predicts future profit.
Here is a preview of the finantic.Learning extension:
It shows the combobox with algorithms to choose from and also the input page that shows the set of indicators used as inputs for the various ML algorithms.
@DrKoch, can you record a videocast about finantic.Learning & IndicatorSelection extensions and what are their advantages for building profitable strategies?
A lot of great learning algorithms in that list. I can't see them all. Could you please list them all or point to the ML library being used?
Over the weekend I went through my notes and trading diary from 2009 when I was using NeuroSolutions. Best models had fewer inputs and these inputs were mostly RSI and percent change (ROC). Best ANN
algorithms were time lag recurrent flavors.
Also the models did not degrade to losses as quickly as I remembered. Took 1-2 months until they were not profitable anymore. They never lost money due to good money management and position sizing.
Always profitable.
Not true swing algo trading since I was babying them intraday a little to get better entries and exits.
The other important thing with ML/AI is what to predict. Neurodimension had a great idea with their optimal signal. The signals that the ANN was trained on.
Looks like this could be a machine learning tool and not an AI (ANN) tool.
Could you please list them all
Here is the complete list of ML algorithms supported by the upcoming finantic.Learning extension:
AdaBoost: Adaptive Boosting (Regression)
AdaBoost: Adaptive Boosting (Classification)
Averaged Perceptron (Classification)
Decision Trees (Regression)
Decision Trees (Classification)
Extremely Randomized Trees (Regression)
Extremely Randomized Trees (Classification)
Fast Forest / Random Forest (Classification)
Fast Forest / Random Forest (Regression)
Fast Tree / MART gradient boosting (Classification)
Fast Tree / MART gradient boosting (Regression)
Fast Tree with Tweedie Loss (Regression)
Generalized additive model (GAM) (Classification)
Generalized additive model (GAM) (Regression)
Gradient Boost/Absolute Loss (Regression)
Gradient Boost/Binomial Deviance (Classification)
Gradient Boost/Huber Loss (Regression)
Gradient Boost/Quantile Loss (Regression)
Gradient Boost/Square Loss (Regression)
L-BFGS Logistic Regression (Classification)
Light GBM (Classification)
Light GBM (Regression)
Linear SVM (Classification)
Local Deep SVM / LD-SVM (Classification)
Neural Net (Regression)
Neural Net (Classification)
Online Gradient Descent (OGD) (Regression)
Ordinary Least Squares (OLS) (Regression)
Random Forest (Regression)
Random Forest (Classification)
SDCA (Regression)
SDCA Binary Logistic Regression (Classification)
SDCA Non Calibrated (Classification)
Symbolic SGD Logistic Regression (Classification)
These algorithms come from two popular ML libraries: SharpLearning and Microsoft.ML.
Best models had fewer inputs and these inputs were mostly RSI and percent change (ROC)
I found a similar pattern: Do not use too many indicators as inputs. And preselect the indicators with the best predictive power.
The finantic.IndicatorSelection extension is designed for the task to find indicators that match your trading strategy best, i.e. indicators that can predict if your trades will result in profit or loss.
Because it selects among all available indicators (several thousand variants), chances are that you find things that work better than RSI and ROC.
The other important thing with ML/AI is what to predict.
My suggestion is this:
Start with a trading strategy that already works reasonably well.
Then use all the trades of this strategy, i.e. the profit of each individual position.
Use the machine learning algorithms for classification
to predict if a trade will be a winner or a loser.
Or use the machine learning algorithms for regression
to predict the profit of each trade.
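To make that workflow concrete, here is a minimal sketch in Python/scikit-learn. It is purely illustrative and is not the finantic.Learning API (which builds on Microsoft.ML and SharpLearning); the indicator matrix and profit series below are synthetic stand-ins for your own data.

```python
# Hedged illustration: synthetic data, generic gradient boosting models.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
indicators = rng.normal(size=(500, 5))                    # e.g. RSI, ROC, ... at each entry
profits = indicators @ rng.normal(size=5) + rng.normal(scale=2.0, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(indicators, profits, test_size=0.3, random_state=0)

# Classification: predict whether a trade will be a winner or a loser
clf = GradientBoostingClassifier().fit(X_tr, y_tr > 0)
print("P(win) for the first 3 test trades:", clf.predict_proba(X_te[:3])[:, 1])

# Regression: predict the profit of each trade
reg = GradientBoostingRegressor().fit(X_tr, y_tr)
print("Predicted profit for the first 3 test trades:", reg.predict(X_te[:3]))
```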
Checking back if there is an update on finantic.Learning
Checking back again...Not that we are impatient, there are a lot of tools for that, but an integration in WL would be nice in these days.
I had two beta testers for finantic.Learning. Unfortunately they found too many issues, so I decided to do another iteration of developments and improvements before I release this extension.
Together with the finantic.NLP extension this one is the most complex extension I ever made. It takes much longer than expected...
... but will result in a quantum leap for WL.
I can imagine that. Thanks for the feedback! | {"url":"https://wealth-lab.com/Discussion/Is-there-a-Machine-Learning-extension-10835","timestamp":"2024-11-02T07:39:16Z","content_type":"text/html","content_length":"36933","record_id":"<urn:uuid:3a326279-ccdd-446d-b739-50df2fc11dbe>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00796.warc.gz"} |
firebird-support - Re: Bug: "-0" <> "0" in FB indexes !!!
Subject Re: Bug: "-0" <> "0" in FB indexes !!!
Author bjorgeitteno
Post date 2004-03-26T11:33:42Z
--- In
, "Steffen Heil" <lists@s...>
> You are writing 0 and -0, not 0.0 and -0.0, but I assume you really
> mean floating point numbers.
> So what do floating point numbers mean? They are approximations.
> +0.0 means somewhere near 0.0, maybe a little more.
> -0.0 means somewhere near 0.0, maybe a little less.
if you run this:
select (0.0*-1.0)*10e307 val from rdb$database
...you'll get a result of 0. If the "approximation" of (0.0*-1.0)
contained any digits <> 0 at all, it would be present. I believe,
with the binary formats of floats, '0.0' is an exact value. Of course
it is with the currently used formats, because a '0.0' is a variable
filled with 0's, and a '-0.0' is the same except for the sign bit set
to 1.
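Just to illustrate the bit-level point (this is plain Python, whose floats are IEEE 754 doubles, not Firebird itself):

```python
import math
import struct

pos, neg = 0.0, -0.0
print(pos == neg)                      # True: IEEE 754 defines +0.0 and -0.0 as equal
print(struct.pack('>d', pos).hex())    # 0000000000000000
print(struct.pack('>d', neg).hex())    # 8000000000000000 -> only the sign bit differs
print(math.copysign(1.0, neg))         # -1.0: the sign survives, which is why 1/x gives -infinity
```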
But if one *wanted* two zero types, it should be consistent. One
should not treat the values differently when dealing with indexes and
with normal field value retrieval.
I believe one doesn't want a database engine to distinguish between
positive and negative zero. If one was to deal with complex maths,
one might want to, of course.
A statement like:
select * from tbl where float_field = 0.0
correctly (or *not*, if you want to distinguish, of course) returns
the rows containing "float_field = 0" or "float_field = -0". *If* a
usable index is not present.
> [In fact, 0.0 is here used to explain, it does NOT exist at all.
> Floating point systems only know about +0.0 and -0.0]
> Another point to see, that they are NOT equal:
> +1.0 / +0.0 = +infinity
> +1.0 / -0.0 = -infinity
...which is about the only practical use for this number...
Assuming that the value stored (-0.0) is the correct outcome of the
-1 * 0
..then the behaviour of FB shown in the select statement above is not
correct provided you want the two values to be distinct.
No matter what the conclusion to this is, one thing needs being fixed:
- The values 0.0 and -0.0 should either be equal or different
- And, if distinction is implemented: the value -0.0 should sort
between 0 and -1, not as today's 'negative infinity'. Client-side
components must also be able to display '-0'.
I'm really curious about the internals of FB - why -0.0 is evaluated
to be the "most negative number"....
Anyone shed a light on this ? | {"url":"http://fb-list-archive.s3-website-eu-west-1.amazonaws.com/firebird-support/2004/3/39350.html","timestamp":"2024-11-09T20:20:11Z","content_type":"text/html","content_length":"8997","record_id":"<urn:uuid:e900f274-7692-4ba5-9158-00bca06732b4>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00189.warc.gz"} |
Definite Integration
Our platform provides a comprehensive learning experience for students interested in exploring definite integration in calculus. By joining us, you can enhance and develop the following skills:
1. Calculating areas: Learn how to use definite integrals to find the area under curves and between curves. Develop proficiency in setting up and solving integration problems to determine the area
of irregular shapes.
2. Finding volumes: Explore the concept of definite integration in the context of finding volumes of solids of revolution. Learn how to set up and evaluate definite integrals to calculate the volume
of three-dimensional objects.
3. Solving real-world problems: Apply the principles of definite integration to solve real-world problems in various fields such as physics, engineering, and economics. Develop problem-solving
skills by using definite integrals to analyze and solve practical scenarios.
4. Numerical methods: Gain knowledge of numerical methods, such as Riemann sums and trapezoidal approximations, which are used to estimate definite integrals (a short sketch follows this list). Understand the concept of convergence
and how to improve accuracy in approximating definite integrals.
5. Integration techniques: Deepen your understanding of integration techniques, including u-substitution, integration by parts, and trigonometric substitutions. Develop proficiency in applying these
techniques to evaluate definite integrals efficiently.
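For instance, here is a small, self-contained Python sketch of the composite trapezoidal rule mentioned in item 4 above; the integrand and interval are arbitrary illustrative choices, and accuracy improves as the number of subintervals grows.

```python
def trapezoid(f, a, b, n):
    """Approximate the definite integral of f on [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

# Approximate the integral of x**2 on [0, 1]; the exact value is 1/3.
for n in (4, 16, 64):
    print(n, trapezoid(lambda x: x**2, 0.0, 1.0, n))
```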
For educators, our platform offers several advantages:
1. Comprehensive resources: Access a wide range of worksheets, examples, and instructional materials to support your teaching on definite integration in calculus.
2. Customization: Adapt the available resources to meet the needs and proficiency levels of your students, ensuring an effective and personalized learning experience.
3. Practice and assessment tools: Utilize the provided worksheets and assessments to evaluate student understanding, track progress, and provide targeted feedback on definite integration concepts
and techniques.
4. Real-world connections: Explore practical applications of definite integration in various fields to help students see the relevance and applicability of this fundamental concept.
5. Collaboration and support: Engage with a community of educators, share insights and resources, and receive support in teaching definite integration in calculus.
Join our platform to explore the world of definite integration, enhance your skills in finding areas and volumes, and gain a deeper understanding of this fundamental concept in calculus. Whether you
are a student looking to excel in calculus or an educator seeking comprehensive resources, our platform offers a supportive and enriching environment for studying definite integration and its applications.
Definite Integration
Join our platform to explore the first fundamental theorem of calculus and enhance your skills in definite integration. Access a wide range of worksheets and resources to deepen your understanding of
this fundamental concept in calculus and its applications in finding areas, evaluating definite integrals, and solving related problems.
Visit Newpath Worksheets | {"url":"https://k12xl.com/calculus/definite-integration","timestamp":"2024-11-15T04:54:11Z","content_type":"text/html","content_length":"32174","record_id":"<urn:uuid:c521a43b-02ad-4ada-9f37-354754f6209d>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00804.warc.gz"} |
Syllabus for Elementary Algebra
Course Objectives:
Some basic concepts of arithmetic and algebra; real numbers; equations, inequalities, and
problem solving; formulas and problem solving; exponents and polynomials; factoring,
solving equations, and problem solving; algebraic fractions; coordinate geometry and
linear systems; roots and radicals; quadratic equations; additional topics.
Homework, Quizzes, and Exams:
Homework from the textbook will be assigned and reviewed in class. Homework will be
collected and graded. Since the exams will closely resemble homework exercises, success
in the course strongly depends on diligently completing all assignments in a timely
fashion. I cannot stress this last point enough! Students must complete all assignments on
time and come to class prepared. We will have 4 in-class, closed-book, and no-calculator
exams. The exams may cover any material discussed in class up to and in some cases
including material covered the day before. All exams should be considered cumulative.
We will also have 10 pre-announced quizzes. No make-up exams will be given.
Successful students should plan to spend at least two hours of study outside of class for
each hour of discussion. This translates into a minimum of ten additional hours per
Read the textbook:
I strongly encourage you to read the text carefully. The lectures are designed as a
supplement to and not an alternative for the textbook. Students are expected to master all
topics in the textbook unless otherwise indicated and regardless of whether they are
mentioned in the lecture.
Class comportment:
All students are expected to arrive on time. Late arrivals are disruptive to both the
lecturer and students. Students must turn off all pagers and cell phones while in class.
Students are encouraged to ask questions and make comments on the lecture material.
This should be done in a courteous manner by raising one’s hand and being recognized.
Math 115
Learning Outcomes
1) Categorize numbers
2) Manipulate numbers using basic numerical operations
3) Evaluate arithmetic expressions containing exponents
4) Solve first-degree equations and inequalities
5) Solve formulas for a given variable
6) Recognize and solve proportion problems; analyze and solve word problems
7) Simplify, add, subtract, multiply and divide polynomials
8) Simplify expressions containing negative exponents
9) Factor polynomials using appropriate methods
10) Apply factoring techniques to solve second-degree equations
11) Factor polynomials using appropriate methods
12) Solve equations and word problems using factoring
13) Manipulate and simplify algebraic fractions
14) Solve equations containing algebraic fractions
15) Graph equations in two variables
16) Determine equations of lines
17) Solve systems of equations
18) Determine roots; simplify, add, subtract, multiply, and divide radicals
19) Solve equations containing radicals
20) Solve quadratic equations using completing the square and the quadratic formula
21) Define and evaluate relations and functions | {"url":"https://www.softmath.com/tutorials-3/relations/syllabus-for-elementary-5.html","timestamp":"2024-11-04T21:23:32Z","content_type":"text/html","content_length":"34473","record_id":"<urn:uuid:4aa14a79-f937-4772-9e85-037b6d7007ee>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00824.warc.gz"} |
Dynamic Generative Control
Chinese version: 动态生成式控制 - 知乎 (zhihu.com)
In several previous blog posts, we formulated the control problem as a generative model in the following fashion:
\[[v,w,x] = g(z), \quad z \sim \mathbb{D},\]
in which \(v\) is the collection of control objectives to be minimized, \(w\) contains the output control signals, and \(x\) represents the input signals that are thought to offer information to the
control problem.
When the system is dynamic, the form above lacks the explicit modeling of dynamic changes. The simplest form of a dynamic generative control model can be written as
\[\begin{bmatrix} v_t & w_t & x_t \\ v_{t-1} & w_{t-1} & x_{t-1} \end{bmatrix} = g(z), \quad z \sim \mathbb{D}.\]
Now, \(g(z)\) is able to generate system states before and after a control step, therefore after training, it must be able to model the dynamic changes in the controlled system. The effect of this is
a simpler dynamic control algorithm than that of a static model:
\[
\begin{aligned}
& \text{Initialize } [v_0, x_0] = r(u_0, w_0) \\
& \text{for } t = 1 \text{ to } \infty \text{ do} \\
& \quad \text{Initialize } [v', w', x'] = [v_{t-1}, w_{t-1}, x_{t-1}] \\
& \quad \text{for } N \text{ steps do} \\
& \quad \quad z' = \underset{z}{\text{argmin}} \left\| g(z) - \begin{bmatrix} v' - \epsilon & w' & x' \\ v_{t-1} & w_{t-1} & x_{t-1} \end{bmatrix} \right\| \\
& \quad \quad \begin{bmatrix} v' & w' & x' \\ \_ & \_ & \_ \end{bmatrix} = g(z') \\
& \quad w_t = w' \\
& \quad [v_t, x_t] = r(u_t, w_t)
\end{aligned}
\]
Similarly to before, this algorithm proceeds by attempting to minimize the control objectives \(v\) until it has achieved the equilibrium. The difference is not needing to tune the step size of
objectives \(\epsilon\), because the variables of step \(t\) are anchored by those at step \(t-1\). As long as \(g\) has been trained properly to model the system transitions, it should be able to
find the control parameters that can maximally minimize the objectives in \(v\).
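As a rough illustration of how the inner argmin over \(z\) might be carried out in practice, here is a minimal sketch in Python/PyTorch. The decoder \(g\), the dimension bookkeeping, and every hyperparameter are assumptions made for the example, not the implementation behind this post.

```python
# Minimal sketch, assuming `g` is a pretrained torch module mapping a latent z to the
# flat vector [v_t, w_t, x_t, v_{t-1}, w_{t-1}, x_{t-1}] and `dims` holds the sizes of v, w, x.
import torch

def split(vec, dims):
    dv, dw, dx = dims
    return vec[:dv], vec[dv:dv + dw], vec[dv + dw:dv + dw + dx]

def control_step(g, latent_dim, dims, v_prev, w_prev, x_prev,
                 eps=0.1, n_outer=10, n_inner=100, lr=1e-2):
    anchor = torch.cat([v_prev, w_prev, x_prev])                    # row for step t-1
    v_c, w_c, x_c = v_prev.clone(), w_prev.clone(), x_prev.clone()  # running v', w', x'
    for _ in range(n_outer):
        target = torch.cat([v_c - eps, w_c, x_c, anchor])
        z = torch.zeros(latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(n_inner):                  # z' = argmin_z || g(z) - target ||
            opt.zero_grad()
            loss = torch.norm(g(z) - target)
            loss.backward()
            opt.step()
        out = g(z).detach()
        v_c, w_c, x_c = split(out[:anchor.numel()], dims)           # keep only the step-t row
    return w_c                                    # control signal w_t to send to the plant
```

In a real deployment, this inner optimization is exactly the part that the amortization discussed next would replace with a single forward pass.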
Without a doubt, the inner loop of the algorithm disambiguates the uncertainty between all variables, just as discussed before, making it possible to amortize the algorithm at each control step.
This will reduce the model much more aggressively to the computation level of an MCU rather than a CPU / GPU / NPU, making it very cheap to use generative control compared to alternatives such as
LLMs. See here for a good mathematical tutorial on amortization, and here for a philosophical discussion.
Furthermore, letting \(x = [v_{t-1}, w_{t-1}, x_{t-1}, x_t]\), the formulation above is equivalent to the static model \([v,w,x] = g(z)\) and the dynamic algorithm above is equivalent to the static
algorithm. This also matches with our previous discussion that the more we can provide useful signals to \(x\), the more powerful our generative world model can be. This equivalence is left to the
readers to explicitly write out. | {"url":"https://www.xzh.me/2024/08/dynamic-generative-control.html","timestamp":"2024-11-15T03:26:00Z","content_type":"application/xhtml+xml","content_length":"83233","record_id":"<urn:uuid:06370b7d-2827-4608-a593-81b03d49629a>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00571.warc.gz"} |
Exploring bitcoin cross-blockchain interoperability: estimation through Hurst exponent
• International College of Liberal Arts, Yamanashi Gakuin University, Kofu, Japan
This study aims to investigate the interoperability of the Bitcoin blockchain by comparing the US dollar prices of five cryptocurrencies derived from the Bitcoin price with their corresponding market
prices. The deviation rate between the derived price and the market price, referred to as the arbitrage return rate, is examined with respect to its adherence to the efficient market hypothesis and
martingale theory principles, specifically regarding mean-reversion and serial independence. Hurst exponents are estimated using R/S and DFA methods, and their dynamics are analyzed using a sliding
window technique. Our findings demonstrate that the Bitcoin blockchain effectively facilitates transactions among the five cryptocurrencies, though evidence suggests a potential structural change in
Bitcoin blockchain interoperability following April 2023.
1 Introduction
The notion of blockchain interoperability is gaining significant traction both in academic research and industrial applications. Belchior et al. (2022a) illustrate this trend by noting a substantial
increase in Google Scholar search results, from two in 2015 to 207 in 2020. This surge in research interest underscores the growing industry demand for interoperability among many existing
blockchains. Historically, individual blockchains were developed to address specific use cases and challenges in isolation, neglecting cross-chain interoperability (Abebe et al., 2019; Jin et al.,
2018). The adaptability of a blockchain to the requirements of its stakeholders has emerged as a critical driver behind the proliferation of new and diverse blockchains, resulting in a heterogeneous
blockchain ecosystem and consequent fragmentation of the blockchain landscape (Belchior et al., 2022b; Pillai et al., 2020; Xu et al., 2017). Presently, practitioners and researchers are confronted
with the challenge of balancing novelty and stability as they consider blockchain interoperability to enhance the scalability of existing systems and unlock new use cases (Belchior et al., 2022a).
In financial markets, blockchain technology finds significant application in cryptocurrencies, where digital tokens are viewed as financial assets with potential monetary use. Among the vast array of
cryptocurrencies tracked across numerous exchanges, including Bitcoin (BTC), Ethereum (ETH), Tether (USDT), Binance (BNB), Solana (SOL), and Ripple (XRP), the top six cryptocurrencies dominate the
market shares. As of 18 March 2024, these cryptocurrencies collectively hold 77.1% of the market capitalization, with Bitcoin alone accounting for 49.4%. Given its substantial market share, the
Bitcoin blockchain is a prime candidate for hosting other blockchains. In this context, bitcoin cross-chain interoperability refers to the capability of cryptocurrencies to be exchanged with one
another over the Bitcoin blockchain.
Wegner (1996) defines interoperability as the ability of multiple software components to collaborate effectively despite differences in language, interface, and execution platform. The National
Interoperability Framework Observatory (NIFO) (European Commission, 2020) identifies seven layers of interoperability: technical, semantic, organizational, legal, integrated public service
governance, and interoperability governance. Various factors influencing interoperability are categorized within each layer (Campmas et al., 2022). While current efforts primarily focus on the
technical layer, we concentrate on semantic-level interoperability. For example, facilitating a transaction from Ethereum to a Ripple user involves at least two blockchains: the source blockchain,
Ethereum, and the target blockchain, Ripple.
The process of transferring assets across different blockchains involves three fundamental steps: (i) locking an asset on the source blockchain, (ii) committing to the blockchain transfer, and (iii)
creating a representation of the asset (known as a token) on the target blockchain (Belchior et al., 2022b; Hargreaves et al., 2021). Belchior et al. (2022a) distinguish transfers between
heterogeneous blockchains as cross-blockchain communication (CBC), contrasting with cross-chain communication (CCC) between homogeneous blockchains. Zamyatin et al. (2021) demonstrate that no CCC
protocol can tolerate misbehaving nodes without a trusted third party. The choice of a trusted third party presents two options: centralized or decentralized (Montgomery et al., 2020). A centralized
trusted party could be an exchange or institution. Zamyatin et al. (2021) suggest that consensus among all distributed ledgers could be an abstraction for a trusted third party. Conversely, a
decentralized trusted party could be another blockchain. Borkowski et al. (2018) propose that the source blockchain should replicate the consensus mechanism of the target blockchain. Lafourcade and
Lombard-Platet (2020) argue that achieving fully decentralized blockchain interoperability is impractical. These findings underscore a crucial realization: Cross-blockchain transactions necessitate a
trusted third party, which may involve institutions or consensus mechanisms from the blockchains involved.
We discuss two scenarios for establishing third-party consensus: Cryptocurrency exchanges could use either dollars or bitcoins to facilitate cross-blockchain transactions. For example, Ethereum and
Ripple transactions can follow two distinct paths (see Figure 1). The first path involves utilizing a fiat currency, such as the U.S. dollar, as an intermediary to facilitate the consensus processes
in both the Ethereum and Ripple blockchains. For instance, one Ether may be valued at $3,480, and with the assistance of an exchange, $3,480 could be exchanged for 5,800 Ripples. This process
establishes an exchange rate between Ethers and Ripples, referred to as the cross-blockchain (CBC) exchange rate for Ripple in terms of Ether, denoted ETHXRP. The second path involves employing
another blockchain, such as the Bitcoin blockchain, as the intermediary to complete the consensus between Ethereum and Ripple. This approach results in two distinct CBC exchange rates, ETHBTC and
XRPBTC, along with the Bitcoin-derived CBC exchange rate, BETHXRP. Therefore, for cross-blockchain transactions between Ethereum and Ripple, there are at least two types of CBC exchange rates,
contingent upon the choice of fiat currency or blockchain used. The fiat-currency-derived CBC exchange rate, specifically ETHXRP, serves as a reference point for investigating Bitcoin
cross-blockchain interoperability, as BETHXRP represents.
Figure 1. The cross-blockchain transactions between Ethereum and Ripple, showcasing two distinct paths.
It is observed that the proposed CBC transaction model can be simplified into a halfway model. This means that using ETHUSD represents the direct path of a CBC transaction while utilizing the
bitcoin-derived ETHUSD, denoted as BETHUSD = ETHBTC × BTCUSD, signifies the indirect path of a CBC transaction. The advantage of this approach is that it allows focusing on one cryptocurrency against
bitcoin at a time.
This study aims to explore the interoperability of the Bitcoin blockchain with the top five cryptocurrencies in terms of market capitalization: ETH, USDT, BNB, SOL, and XRP. The methodology involves
comparing the prices of these five cryptocurrencies, namely, ETHUSD, USDTUSD, BNBUSD, SOLUSD, and XRPUSD, with their bitcoin-derived counterparts: BETHUSD, BUSDTUSD, BBNBUSD, BSOLUSD, and BXRPUSD.
Tether is the most widely used dollar-pegged stablecoin. Incorporating Tether in the analysis is worthwhile because of its profitable correlation with Bitcoin (Bianchi et al., 2020) and its noted
instability in terms of price, returns, volatility, and trading volume (Grobys and Huynh, 2022; Hoang and Baur, 2021).
In the context of cross-blockchain interoperability, Ether’s price in dollars, facilitated by the Bitcoin blockchain, should not consistently deviate from Ether’s dollar price. Alternatively, the
arbitrage return rate, defined as the difference between these two prices, should not consistently present predictable patterns. The absence of long-term memory in the arbitrage returns not only
aligns with efficient market theories but also suggests that the Bitcoin blockchain can effectively facilitate transactions across other blockchains.
Fama’s (1970) efficient market hypothesis (EMH) states that abnormal returns only exist by chance and that no individual can consistently predict future prices using current information. Samuelson’s
(1973) martingale model suggests that successive returns should not exhibit serial dependence. However, anomalies cannot simply be dismissed as random errors (Tversky and Kahneman, 1988). Frankfurter
and McGoun (2001) argue that anomalies are generic in nature and suggest a certain type of market efficiency. Latif et al. (2011) find that calendar, fundamental, and technical anomalies can lead to
abnormal profit. Fama (1990) contends that “such anomalies can be explained only in the context of some particular situations.”
One example of an anomaly is the slow response of investors to new information. Jegadeesh and Titman (1993) observe that adjustments to announcements usually take 12 months, with variations ranging
from 6 months to 2 years Barberis and Shleifer (2012) attribute this slow adjustment to under-reaction and overreaction. As Fama (1998) concludes in his work, “Market efficiency survives the
challenge from the literature on long-term return anomalies.”
Bariviera et al. (2017) use the detrended fluctuation analysis (DFA) method to analyze Bitcoin’s intraday returns over 5–12 h with 500 data points. They argue that DFA is better suited for
nonstationary data than the R/S method, which tends to confuse short-term and long-term memory. Their study finds that long-term memory is not linked to market liquidity and decreases over time.
Fousekis and Tzaferi (2021) utilize frequency connectedness analysis, a method introduced by Baruník and Křehlík (2018), to examine how shocks in one stochastic process affect another at various
frequencies, exploring the relationship between returns and trading activities. They suggest that asymmetry, characterized as a temporal link between returns and trading volume, can be influenced by
the strength of spillovers. Assaf et al. (2022) explore long-term memory by employing a matrix derived from the wavelet-based multivariate long-memory estimator developed by (Achard and Gannaz, 2016
). They discover significant long-term correlations between Bitcoin and five other cryptocurrencies. El Alaoui et al. (2019) observe a nonlinear interaction between Bitcoin returns and the growth
rate of trading volume using multifractal detrended cross-correlation analysis (MF-DCCA). Stosic et al. (2019) apply the multifractal detrended fluctuation analysis (MF-DFA) to Bitcoin, concluding
that Bitcoin returns do not exhibit long-term memory but show anti-persistent long-term correlations in volume changes.
This study proposes employing the rescaled range analysis (R/S) method and detrended fluctuation analysis (DFA) to estimate Hurst exponents using sliding windows. The Hurst exponent is a statistical
metric for predictability and is commonly utilized to assess whether a time series exhibits long-term memory, which manifests as volatility clustering in return time series (Bariviera, 2017). A
higher Hurst exponent also enhances the accuracy of backpropagation Neural Networks (Qian and Rasheed, 2004). Additionally, Bai-Perron tests for breakpoints complement the Hurst exponent methodology.
This study contributes to the literature by investigating long-term memory in Bitcoin-related arbitrage returns. The analysis is conducted using the R programming language.
The remainder of the paper is structured as follows: Section 2 outlines the dataset and methodology, Section 3 presents the results and discussion, and Section 4 concludes the paper.
2 Data and methodology
The data is collected daily from Yahoo Finance. This dataset includes six cryptocurrencies: Bitcoin (BTC), Ethereum (ETH), Tether (USDT), Binance (BNB), Solana (SOL), and Ripple (XRP). Each
cryptocurrency has its price time series: BTCUSD, ETHUSD, USDTUSD, BNBUSD, SOLUSD, and XRPUSD. Since we use the Bitcoin blockchain as the trusted third-party abstraction, there are five
cross-blockchain (CBC) exchange rates: ETHBTC, USDTBTC, BNBBTC, SOLBTC, and XRPBTC. For simplicity, we exclude “CBC” from the notation. The investigation period spans from 11 November 2017, to 18
March 2024, totaling 2319 observations after discarding two missing values. However, Solana, launched in 2020, extends from 10 April 2020, to 18 March 2024, comprising 1438 observations. Using Yahoo
Finance as the primary data source for cryptocurrency prices carries the risk of data inaccuracies. However, it provides consistency and minimizes timestamp issues.
In order to assess the interoperability of the Bitcoin blockchain, it is necessary to generate a Bitcoin-derived dollar price for each cryptocurrency, denoted as BCRYPTOUSD. This process involves
evaluating how closely the consensus mechanism of the bitcoin blockchain mirrors that of other cryptocurrency blockchains and, subsequently, how it translates this consensus into a dollar value
within its own mechanism. The computation for the bitcoin-derived cryptocurrency price is outlined by Nan and Kaizoji (2017, 2019).
where “CRYPTO” represents any given cryptocurrency.
For instance, the calculation of BETHUSD on 18 March 2024, can be derived from BTCUSD and ETHBTC using Equation 1: $\text{BETHUSD} = \text{ETHBTC} \times \text{BTCUSD} = 0.0525 \times 67832.22 \approx 3561$.
This indicates that one ether is valued at $3561 when bridged by the bitcoin blockchain. Notably, ETHUSD was quoted at $3561.764 on the same day.
Similarly, there are 2319 observations for four bitcoin-derived cryptocurrency prices: BETHUSD, BUSDTUSD, BBNBUSD, and BXRPUSD, while BSOLUSD has 1438 observations.
For each cryptocurrency and US dollar pair, two prices are quoted: one direct price and one indirect price bridged by the bitcoin blockchain. The arbitrage return rate between these two prices can be
constructed using the equation provided by Nan and Kaizoji (2020) and Pichl and Kaizoji (2017).
where $R_{\text{ETHUSD}}$ represents the arbitrage return rate between BETHUSD and ETHUSD. Similarly, we can calculate the other four arbitrage return rates using Equation 2: $R_{\text{USDTUSD}}$, $R_{\text{BNBUSD}}$, $R_{\text{XRPUSD}}$,
and $R_{\text{SOLUSD}}$.
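(Purely as an illustration: the analysis reported in this paper was carried out in R, and the exact functional form of Equation 2 is not reproduced above, so the sketch below uses the log-difference convention common in the cited arbitrage literature; the column names are assumptions about the data layout.)

```python
import numpy as np
import pandas as pd

def arbitrage_return(df: pd.DataFrame, crypto: str) -> pd.Series:
    derived = df[f"{crypto}BTC"] * df["BTCUSD"]            # Equation 1: bitcoin-derived dollar price
    return np.log(derived) - np.log(df[f"{crypto}USD"])    # deviation of derived vs quoted price

# The figures quoted in the text for 18 March 2024:
row = pd.DataFrame({"BTCUSD": [67832.22], "ETHBTC": [0.0525], "ETHUSD": [3561.764]})
print((row["ETHBTC"] * row["BTCUSD"]).iloc[0])   # ~3561.19, the bitcoin-derived Ether price
print(arbitrage_return(row, "ETH").iloc[0])      # ~-0.00016, essentially no deviation
```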
The Hurst exponent, denoted as H, quantifies the degree of serial dependence in a time series. Initially developed to measure long-term memory in hydrological time series by Hurst (1951), it was
later introduced by Mandelbrot and Wallis (1968) for analyzing financial time series. Theoretically, the value of the Hurst exponent categorizes a time series into three groups, as outlined by Qian
and Rasheed (2004): (i) a white noise when 0 < H < 0.5, (ii) a random walk when H = 0.5, and (iii) a persistent series when H > 0.5. As H approaches 0, the strength of serial
dependence weakens, while it strengthens as H approaches 1.
Various estimators of the Hurst exponent based on scaling properties exist, with two commonly used ones being the R/S estimator, which relies on the rescaled range statistic, and the detrended
fluctuation analysis (DFA) estimator. Additionally, we employ a sliding window method to assess the dynamics of the estimated Hurst exponents.
2.1 R/S estimator
The rescaled range analysis (R/S) method scales the range of the cumulative sum of deviations of a time series from its mean (Bariviera, 2017). Let $x_t, t = 1, \ldots, L$, be a time series, and the R/S
estimator can be found through the following procedure (Lee, 2022):
(i) Split $x_t$ into $N_s$ subseries.
Each subperiod has an equal length of $L_s$. The number of subperiods is $N_s$, so $N_s \times L_s \ge L$, and the last subseries may contain NAs.
Note that the subscripts have different meanings:
$t$ = whole time index, $t = 1, \ldots, L$;
$s$ = subseries index, $s = 1, \ldots, N_s$;
$i$ = sub time index, $i = 1, \ldots, L_s$.
(ii) Calculate the mean $M_s$ and standard deviation $S_s$ for each subperiod:
$M_s = \frac{1}{L_s}\sum_{i=1}^{L_s} x_i$ and $S_s = \left[\frac{1}{L_s}\sum_{i=1}^{L_s}\left(x_i - M_s\right)^2\right]^{1/2}$
Note that the last subperiod $s = N_s$ has NAs, so $L_s$ should be adjusted accordingly.
(iii) Calculate the demeaned $d_{i,s}$ from the original time series $x_{i,s}$ for each subperiod:
$d_{i,s} = x_{i,s} - M_s$
where $i = 1, \ldots, L_s$.
(iv) Calculate the cumulative series of $d_{i,s}$, $y_{i,s}$, for each subperiod:
$y_{i,s} = \sum_{k=1}^{i} d_{k,s}$
(v) Find the range $R_s$ for each subperiod:
$R_s = \max_{1 \le i \le L_s} y_{i,s} - \min_{1 \le i \le L_s} y_{i,s}$
(vi) Rescale the range $R_s$ by its standard deviation $S_s$ to get $R_s/S_s$, and then calculate the mean of the rescaled range using Equation 3:
$(R/S)_{L_s} = \frac{1}{N_s}\sum_{s=1}^{N_s} R_s/S_s \quad (3)$
(vii) Repeat steps (i) through (vi) by varying $L_s$.
The length of the subseries, $L_s$, is a variable. Typically, we can select $L_s$ using the following method:
$L_s = L/2^{j}, \quad j = 1, \ldots, k,$
where we aim for $L/2^k \approx 23$; hence $k = \operatorname{round}\left(\log(L/23)/\log 2\right)$.
The R/S statistic is known to asymptotically follow the relation shown in Equation 4 (Hurst, 1951):
$(R/S)_{L_s} \sim c\,L_s^{H} \quad (4)$
where $H$ is the Hurst exponent. Hence, $H$ can be estimated using a simple linear regression based on Equation 5:
$\log (R/S)_{L_s} = \log c + H \log L_s \quad (5)$
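(Illustrative sketch only: the paper's estimates were produced in R, and this compact Python version of the R/S procedure omits small-sample corrections.)

```python
import numpy as np

def rs_hurst(x):
    x = np.asarray(x, dtype=float)
    L = len(x)
    k = int(round(np.log(L / 23) / np.log(2)))             # halve the series down to ~23 points
    sizes = [L // 2**j for j in range(1, k + 1)]
    log_n, log_rs = [], []
    for n in sizes:
        rs_vals = []
        for start in range(0, L - n + 1, n):               # non-overlapping subperiods
            seg = x[start:start + n]
            y = np.cumsum(seg - seg.mean())                 # cumulative demeaned series
            r = y.max() - y.min()                           # range R_s
            s = seg.std()
            if s > 0:
                rs_vals.append(r / s)                       # rescaled range R_s / S_s
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    H, _ = np.polyfit(log_n, log_rs, 1)                     # slope of the log-log regression
    return H

print(rs_hurst(np.random.default_rng(0).normal(size=2048)))  # near 0.5 (slightly above; R/S is upward biased)
```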
2.2 DFA estimator
The detrended fluctuation analysis (DFA), introduced by Peng et al. (1995), mitigates spurious detection of long-range dependence (Bariviera, 2017). Unlike the R/S method that measures
the maximum range in both directions, DFA calculates the average of the squared vertical distance of $y_{i,s}$ from the ordinary least squares (OLS) line (Mielniczuk and Wojdyłło, 2007).
The DFA procedure involves five steps (Penzel et al., 2003).
(i) Determine the cumulative demeaned series $y_t$ of $x_t, t = 1, \ldots, L$, referred to as the "profile":
$y_t = \sum_{k=1}^{t} \left(x_k - M\right)$
where $M$ is the mean of $x_t, t = 1, \ldots, L$.
(ii) Divide $y_t$ into $N_s$ non-overlapping subseries of equal length $L_s$:
$N_s = \operatorname{int}\left(L/L_s\right),$
which may result in a short segment at the end of the profile. To mitigate its impact, the procedure is repeated from the opposite end, resulting in a total of $2N_s$ subperiods.
(iii) Compute the local trend for each subperiod using an OLS regression. Then, determine the variance for each subperiod using Equation 6:
$F^2(L_s, s) = \frac{1}{L_s}\sum_{i=1}^{L_s}\left\{ y_{(s-1)L_s + i} - p_s(i) \right\}^2 \quad (6)$
where $s = 1, \ldots, N_s$, and $p_s(i)$ is the fitting polynomial in segment $s$. In the OLS fitting procedure, the polynomial could be linear, quadratic, cubic, or higher order, conventionally called DFA1,
DFA2, DFA3, respectively. We chose DFA1 as the fitting polynomial, as DFA2 does not improve accuracy relative to the degrees of freedom of the noise-like return time series.
(iv) Obtain the fluctuation function by taking the square root of averaged variances over $2N_s$ subperiods, as shown in Equation 7:
$F(L_s) \equiv \left[\frac{1}{2N_s}\sum_{s=1}^{2N_s} F^2(L_s, s)\right]^{1/2} \quad (7)$
(v) Repeat steps (i)-(iv) for different time scales $L_s$.
If the time series $x_t$ exhibits long-range correlation following a power law, the fluctuation function should follow the relation (Peng et al., 1995):
$F(L_s) \sim L_s^{H}$
Similarly, $H$ can be estimated using a log-log regression:
$\log F(L_s) = \log c + H \log L_s$
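(A matching illustrative sketch of DFA1 with linear detrending, again in Python rather than the R used for the reported results.)

```python
import numpy as np

def dfa_hurst(x):
    x = np.asarray(x, dtype=float)
    L = len(x)
    profile = np.cumsum(x - x.mean())                       # step (i): the "profile"
    k = int(round(np.log(L / 23) / np.log(2)))
    sizes = [L // 2**j for j in range(1, k + 1)]
    log_n, log_f = [], []
    for n in sizes:
        variances = []
        for series in (profile, profile[::-1]):             # both ends -> 2*Ns segments
            for s in range(L // n):
                seg = series[s * n:(s + 1) * n]
                t = np.arange(n)
                trend = np.polyval(np.polyfit(t, seg, 1), t)   # DFA1: linear local trend
                variances.append(np.mean((seg - trend) ** 2))
        log_n.append(np.log(n))
        log_f.append(0.5 * np.log(np.mean(variances)))       # log F(Ls), F = sqrt(mean variance)
    H, _ = np.polyfit(log_n, log_f, 1)
    return H

print(dfa_hurst(np.random.default_rng(0).normal(size=2048)))  # close to 0.5 for white noise
```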
2.3 Sliding window
The sliding window technique involves employing a fixed window size, denoted as $W$, which is moved through the time series $x_t$ to analyze the dynamics of Hurst exponents. We tested window sizes of
$W=256$, $W=512$, and $W=1024$, respectively, but only the results for $W=512$ are presented.
This approach impacts the estimation procedures of R/S and DFA by substituting $W$ for $L$. Consequently, the number of estimations for each time series is determined by $N_e = L - W + 1$. For instance,
considering the length of $R_{\text{ETHUSD}}$ as 2319, if $W = 512$, then $N_e = L - W + 1 = 2319 - 512 + 1 = 1808$.
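(Illustrative sketch of the sliding-window application, reusing either estimator from the sketches above.)

```python
import numpy as np

def sliding_hurst(returns, W=512, estimator=rs_hurst):      # or estimator=dfa_hurst
    returns = np.asarray(returns, dtype=float)
    L = len(returns)
    return np.array([estimator(returns[t:t + W]) for t in range(L - W + 1)])

# With L = 2319 and W = 512 this yields 2319 - 512 + 1 = 1808 estimates per return series.
```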
3 Results
Table 1 displays the summary statistics of cryptocurrency and bitcoin-derived cryptocurrency prices. Notably, the bitcoin-derived prices do not exhibit significant deviations from their direct prices
regarding minimum, maximum, mean, and standard deviation. This observation suggests that the Bitcoin blockchain's interoperability maintains unbiasedness from an unconditional statistical perspective.
Table 1
Table 2 provides the descriptive statistics of the return rates associated with arbitrage between the bitcoin-derived and direct prices. While the mean of the arbitrage return rates is nearly zero
across all cases, the daily standard deviations range between 1% and 2%, implying that approximately 95% of the data points exhibit deviations from the mean within the range of −6%–6%. Some extreme
values are observed, such as a 38% negative deviation for Ripple and a 12% positive deviation for USDT. Furthermore, these return series exhibit skewed leptokurtic and non-normal characteristics.
Table 2
Figure 2 displays the time series of arbitrage returns, with red dashed lines indicating breakpoints. Each series exhibits a mean-reverting characteristic with occasional spikes. However, a sudden
increase in fluctuation magnitudes was observed towards the end of the time series across all five cryptocurrencies. We employ the Bai-Perron test (Bai and Perron, 2003) to identify potential
structural changes in each arbitrage return series and determine the locations of these breakpoints. Two breakpoints are highlighted in Figure 2 with red dashed lines. Notably, one common breakpoint
occurred for all five return processes between 13 March 2023, and 13 April 2023. These observations raise questions about the Bitcoin blockchain’s interoperability: Are these fluctuations persistent
and predictable? What caused such a change? Partial answers to these questions lie in the values of the Hurst exponents. Table 3 provides summary statistics of the Hurst exponents estimates.
Figure 2
Table 3
Figure 3 illustrates Ethereum’s Hurst exponents estimated using the R/S and DFA methods. Both series exhibit a similar trend, with the R/S estimates mostly above the DFA’s. The DFA estimator
(H_dfa_ETH) displays increased volatility in the middle section of the series. These findings suggest that while the DFA broadly aligns with the R/S regarding changes in H values, they diverge in
terms of their levels: the R/S tends to overestimate H, whereas the DFA tends to underestimate H but is more sensitive to anomalies. Both methods indicate that Ethereum’s arbitrage return series
lacks strong, predictable persistence over the sample period: the R/S implies a random walk process, while the DFA suggests a mean-reverting process. However, from July 2021 to April 2023, the serial
dependence strengthened, particularly in the DFA, which exhibits evident long-term memory despite its volatility. Interestingly, both estimators return to non-persistent levels after April 2023,
indicating that the highly oscillating period in the latter part of Ethereum's arbitrage returns (see Panel (a) of Figure 2) does not enhance predictability. Though mean-reverting processes could
suggest predictability, Fama (1998) concludes that anomalies tend to reverse, so unpredictability still holds in the long term.
Figure 3
The stablecoin examined in our study, Tether, has demonstrated significant serial dependence strength since April 2021 (refer to Figure 4). This suggests that the Bitcoin blockchain encountered
challenges in facilitating transactions between USDT users and U.S. dollars. This period of malfunction could lead to predictable profits. However, after the conclusion of 2023, there appears to be a
declining trend in the estimated Hurst exponents for USDT. Correspondingly, USDT’s arbitrage return behavior exhibited volatility during this period.
Figure 4
Regarding Binance, the Hurst exponents estimated through the R/S method indicate either a random walk pattern or a limited level of serial dependence strength, as depicted in Figure 5. The DFA
estimator tended to highlight a strong trend post-July 2021 but has reverted to indicating a random walk process since the onset of 2024.
Figure 5
The disparity between the R/S Hurst and DFA Hurst exponents is most pronounced for Ripple’s arbitrage returns, as illustrated in Figure 6. This indicates that the R/S method perceives the return
pattern as more identifiable and predictable, while the DFA considers it white noise. However, since the conclusion of 2022, Ripple’s arbitrage returns have exhibited a strong trend, which has
dissipated since July 2023. The observed disparity between these methods, particularly for Ripple’s arbitrage returns, can be attributed to the distinct sensitivities of each technique to different
types of data noise and trends. Specifically, the DFA method mitigates the spurious detection of long-range dependence (Bariviera, 2017).
Figure 6
Finally, Solona’s arbitrage returns demonstrated significant persistence from January 2022 to June 2022 (see Figure 7). Afterwards, the DFA Hurst exponents displayed high oscillations, suggesting
increased and cyclic predictability from June 2022 to March 2023. Subsequently, there was a divergence between the R/S and DFA estimators. Both estimators indicated a higher level of dependence
strength at the beginning of 2024.
Figure 7
4 Conclusion
Cross-blockchain interoperability is a critical concern for facilitating transactions across various existing cryptocurrencies, with the interoperability heavily reliant on a trusted third party. One
viable option is to utilize a blockchain as this intermediary party to bridge cross-blockchain transactions. Given its substantial market capitalization, the Bitcoin blockchain is a strong contender
for fulfilling this role, encompassing approximately 50% of the market share. Consequently, our investigation focused on assessing the interoperability of the bitcoin blockchain with the other top
five cryptocurrencies by market capitalization: Ethereum, Tether, BNB, Solana, and Ripple.
We introduced a middle-ground approach wherein each cryptocurrency is linked to US dollars via the Bitcoin blockchain instead of directly facilitating transactions between two cryptocurrencies. This
approach mitigates ambiguity arising from asymmetric influences of individual cryptocurrencies. Subsequently, interoperability was examined by comparing the dollar price derived through the Bitcoin
blockchain with each cryptocurrency’s “direct” dollar price. The deviation rate between these two prices served as the arbitrage-return rate. Adhering to efficient market theories, we scrutinized
whether long-term serial dependence existed in all arbitrage-return time series.
The dynamics of Hurst exponents, estimated using the R/S and DFA methods, indicate weak evidence of memory persistence, with characteristics leaning more towards a random walk or white noise pattern
over our sample period from 11 November 2017, to 18 March 2024. These results are consistent with Fama’s (1970, 1990) efficient market hypothesis, suggesting that prices follow a random walk with
returns reverting to a trivial mean.
Some sub-periods exhibited pronounced strength of dependence, accompanied by volatility and cyclical behavior. The observed autocorrelation may originate from a sluggish reaction to new information (
Barberis and Shleifer, 2012). Cyclical behavior, likely caused by the investor overreaction (De Bondt and Thaler, 1985), aligns with Fama’s (1998) assertion that most long-term return anomalies tend
to dissipate, leading to a pattern where past winners become future losers and vice versa. Concerning the high volatility observed in all five arbitrage return series after April 2023, Malkiel (2003)
concludes his research by saying that regardless of how high the price volatility is, capital markets may still be efficient if the price is less predictable. These findings suggest that the values
of cryptocurrencies passing through the Bitcoin blockchain do not result in predictable deviations from the prices determined within their respective blockchains through their consensus mechanisms.
Consequently, we infer that the semantic layer of the Bitcoin blockchain’s interoperability generally operated effectively, assuming the exclusion of the technical layer and other layers beyond the
scope of this study. Nonetheless, we observed a significant increase in volatility clustering since April 2023 across all cryptocurrencies. These phenomena may stem from structural changes in the
functionality of the Bitcoin blockchain. We propose further exploration into the potential decreased dependency of other cryptocurrencies on Bitcoin as a topic for future research.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.
Author contributions
ZN: Conceptualization, Data curation, Formal Analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization,
Writing–original draft, Writing–review and editing.
Funding
The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.
Acknowledgments
We are deeply grateful to the referees for their valuable suggestions and insightful comments, which have greatly improved our paper's quality and clarity.
Conflict of interest
The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the
reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fbloc.2024.1410191/full#supplementary-material
Abebe, E., Behl, D., Govindarajan, C., Hu, Y., Karunamoorthy, D., Novotny, P., et al. (2019). “Enabling enterprise blockchain interoperability with trusted data transfer (industry track),” in
Proceedings of the 20th international middleware conference industrial track, 29–35. doi:10.1145/3366626.3368129
Achard, S., and Gannaz, I. (2016). Multivariate wavelet whittle estimation in long-range dependence. J. Time Ser. Analysis 37 (4), 476–512. doi:10.1111/jtsa.12170
Assaf, A., Bhandari, A., Charif, H., and Demir, E. (2022). Multivariate long memory structure in the cryptocurrency market: the impact of COVID-19. Int. Rev. Financial Analysis 82, 102132.
Bai, J., and Perron, P. (2003). Critical values for multiple structural change tests. Econ. J. 6 (1), 72–78. doi:10.1111/1368-423x.00102
Bariviera, A. F. (2017). The inefficiency of Bitcoin revisited: a dynamic approach. Econ. Lett. 161, 1–4. doi:10.1016/j.econlet.2017.09.013
Bariviera, A. F., Basgall, M. J., Hasperué, W., and Naiouf, M. (2017). Some stylized facts of the Bitcoin market. Phys. A Stat. Mech. Its Appl. 484, 82–90. doi:10.1016/j.physa.2017.04.159
Baruník, J., and Křehlík, T. (2018). Measuring the frequency dynamics of financial connectedness and systemic risk. J. Financial Econ. 16 (2), 271–296. doi:10.1093/jjfinec/nby001
Belchior, R., Vasconcelos, A., Correia, M., and Hardjono, T. (2022a). Hermes: fault-tolerant middleware for blockchain interoperability. Future Gener. Comput. Syst. 129, 236–251. doi:10.1016/
Belchior, R., Vasconcelos, A., Guerreiro, S., and Correia, M. (2022b). A survey on blockchain interoperability: past, present, and future trends. ACM Comput. Surv. 54 (8), 1–41. doi:10.1145/3471140
Borkowski, M., Ritzer, C., McDonald, D., and Schulte, S. (2018) Caught in chains: claim-first transactions for cross-blockchain asset transfers, 56. Whitepaper: Technische Universität Wien, 57–58.
Campmas, A., Iacob, N., and Simonelli, F. (2022). How can interoperability stimulate the use of digital public services? An analysis of national interoperability frameworks and e-Government in the
European Union. Data and Policy 4, e19. doi:10.1017/dap.2022.11
De Bondt, W. F. M., and Thaler, R. (1985). Does the stock market overreact? J. Finance 40 (3), 793–805. doi:10.1111/j.1540-6261.1985.tb05004.x
Fama, E. F. (1970). Efficient capital markets: a review of theory and empirical work. J. Finance 25 (2), 383–417. doi:10.2307/2325486
Fama, E. F. (1990). Stock returns, expected returns, and real activity. J. Finance 45 (4), 1089–1108. doi:10.1111/j.1540-6261.1990.tb02428.x
Fama, E. F. (1998). Market efficiency, long-term returns, and behavioral finance. J.Finan. Econ. 49 (3), 283–306. doi:10.1016/S0304-405X(98)00026-9
Fousekis, P., and Tzaferi, D. (2021). Returns and volume: frequency connectedness in cryptocurrency markets. Econ. Model. 95, 13–20. doi:10.1016/j.econmod.2020.11.013
Frankfurter, G. M., and McGoun, E. G. (2001). Anomalies in finance: what are they and what are they good for? Int. Rev. Financial Analysis 10 (4), 407–429. doi:10.1016/s1057-5219(01)00061-8
Grobys, K., and Huynh, T. L. D. (2022). When tether says Jump Bitcoin asks How low? Finance Res. Lett. 47, 102644. doi:10.1016/j.frl.2021.102644
Hoang, L. T., and Baur, D. G. (2021). How stable are stablecoins? Eur. J. Finance, 1–17. doi:10.1080/1351847X.2021.1949369
Hurst, H. E. (1951). Long-term storage capacity of reservoirs. Trans. Am. Soc. Civ. Eng. 116 (1), 770–799. doi:10.1061/TACEAT.0006518
Jegadeesh, N., and Titman, S. (1993). Returns to buying winners and selling losers: implications for stock market efficiency. J. Finance 48 (1), 65–91. doi:10.1111/j.1540-6261.1993.tb04702.x
Jin, H., Dai, X., and Xiao, J. (2018). “Towards a novel architecture for enabling interoperability amongst multiple blockchains,” in 2018 IEEE 38th international conference on distributed computing
systems (ICDCS), 1203–1211. Available at: https://ieeexplore.ieee.org/abstract/document/8416383/.
Lafourcade, P., and Lombard-Platet, M. (2020). About blockchain interoperability. Inf. Process. Lett. 161, 105976. doi:10.1016/j.ipl.2020.105976
Latif, M., Arshad, S., Fatima, M., and Farooq, S. (2011). Market efficiency, market anomalies, causes, evidences, and some behavioral aspects of market anomalies. Res. J. Finance Account. 2 (9),
Malkiel, B. G. (2003). The efficient market hypothesis and its critics. J. Econ. Perspect. 17 (1), 59–82. doi:10.1257/089533003321164958
Mandelbrot, B. B., and Wallis, J. R. (1968). Noah, joseph, and operational hydrology. Water Resour. Res. 4 (5), 909–918. doi:10.1029/WR004i005p00909
Mielniczuk, J., and Wojdyłło, P. (2007). Estimation of Hurst exponent revisited. Comput. Statistics and Data Analysis 51 (9), 4510–4525. doi:10.1016/j.csda.2006.07.033
Montgomery, H., Borne-Pons, H., Hamilton, J., Bowman, M., Somogyvari, P., Fujimoto, S., et al. (2020). Hyperledger cactus whitepaper. Retrieved On, 24.
Nan, Z., and Kaizoji, T. (2017). Market efficiency of the bitcoin exchange rate: evidence from Co-integration tests. Available at: https://Ssrn.Com/Abstract=3179981.
Nan, Z., and Kaizoji, T. (2019). Market efficiency of the bitcoin exchange rate: weak and semi-strong form tests with the spot, futures and forward foreign exchange rates. Int. Rev. Financial
Analysis 64, 273–281. doi:10.1016/j.irfa.2019.06.003
Nan, Z., and Kaizoji, T. (2020). “The optimal foreign exchange futures hedge on the bitcoin exchange rate: an application to the US dollar and the euro,” in Advanced studies of financial technologies
and cryptocurrency markets (Springer), 163–181.
Peng, C.-K., Havlin, S., Stanley, H. E., and Goldberger, A. L. (1995). Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series. Chaos An Interdiscip. J.
Nonlinear Sci. 5 (1), 82–87. doi:10.1063/1.166141
Penzel, T., Kantelhardt, J. W., Grote, L., Peter, J.-H., and Bunde, A. (2003). Comparison of detrended fluctuation analysis and spectral analysis for heart rate variability in sleep and sleep apnea.
IEEE Trans. Biomed. Eng. 50 (10), 1143–1151. doi:10.1109/tbme.2003.817636
Pichl, L., and Kaizoji, T. (2017). Volatility analysis of bitcoin price time series. Quantitative Finance Econ. 1, 474–485. doi:10.3934/qfe.2017.4.474
Pillai, B., Biswas, K., and Muthukkumarasamy, V. (2020). Cross-chain interoperability among blockchain-based systems using transactions. Knowl. Eng. Rev. 35, e23. doi:10.1017/s0269888920000314
Samuelson, P. A. (1973). Proof that properly discounted present values of assets vibrate randomly. Bell J. Econ. Manag. Sci. 4 (2), 369–374. doi:10.2307/3003046
Stosic, D., Stosic, D., Ludermir, T. B., and Stosic, T. (2019). Multifractal behavior of price and volume changes in the cryptocurrency market. Phys. A Stat. Mech. Its Appl. 520, 54–61. doi:10.1016/
Tversky, A., and Kahneman, D. (1988). “Rational choice and the framing of decisions,” in Decision making: descriptive, normative, and prescriptive interactions, 167–192.
Wegner, P. (1996). Interoperability. ACM Comput. Surv. 28 (1), 285–287. doi:10.1145/234313.234424
Xu, X., Weber, I., Staples, M., Zhu, L., Bosch, J., Bass, L., et al. (2017). A taxonomy of blockchain-based systems for architecture design. IEEE Int. Conf. Softw. Archit. (ICSA), 243–252. Available
at: https://ieeexplore.ieee.org/abstract/document/7930224/. doi:10.1109/ICSA.2017.33
Zamyatin, A., Al-Bassam, M., Zindros, D., Kokoris-Kogias, E., Moreno-Sanchez, P., Kiayias, A., et al. (2021). “SoK: communication across distributed ledgers,”. Financial cryptography and data
security. Editors N. Borisov,, and C. Diaz (Springer Berlin Heidelberg), 12675, 3–36. doi:10.1007/978-3-662-64331-0_1
Keywords: cryptocurrency, bitcoin, blockchain, cross-chain interoperability, Hurst exponent, DFA
Citation: Nan Z (2024) Exploring bitcoin cross-blockchain interoperability: estimation through Hurst exponent. Front. Blockchain 7:1410191. doi: 10.3389/fbloc.2024.1410191
Received: 31 March 2024; Accepted: 20 August 2024;
Published: 30 August 2024.
Copyright © 2024 Nan. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is
permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use,
distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Zheng Nan, nijelnan@gmail.com | {"url":"https://www.frontiersin.org/journals/blockchain/articles/10.3389/fbloc.2024.1410191/full","timestamp":"2024-11-08T21:44:03Z","content_type":"text/html","content_length":"507926","record_id":"<urn:uuid:8acd450f-6d99-4dc7-b93d-5a60e5209a4b>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00827.warc.gz"} |
The Stacks project
Lemma 35.13.2. Let $\{ f_ i : X_ i \to X\} _{i \in I}$ be a family of morphisms of affine schemes. The following are equivalent
1. for any quasi-coherent $\mathcal{O}_ X$-module $\mathcal{F}$ we have
\[ \Gamma (X, \mathcal{F}) = \text{Equalizer}\left( \xymatrix{ \prod \nolimits _{i \in I} \Gamma (X_ i, f_ i^*\mathcal{F}) \ar@<1ex>[r] \ar@<-1ex>[r] & \prod \nolimits _{i, j \in I} \Gamma (X_ i
\times _ X X_ j, (f_ i \times f_ j)^*\mathcal{F}) } \right) \]
2. $\{ f_ i : X_ i \to X\} _{i \in I}$ is a universal effective epimorphism (Sites, Definition 7.12.1) in the category of affine schemes.
| {"url":"https://stacks.math.columbia.edu/tag/0EUA","timestamp":"2024-11-02T23:49:05Z","content_type":"text/html","content_length":"17865","record_id":"<urn:uuid:e793b0c4-f941-40e1-bf8d-6030b0a1fd92>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00632.warc.gz"}
Monetary Policy Rule and Taylor Principle in Mongolia: GMM and DSGE Approaches
Graduate School of Humanities and Social Sciences, Saitama University, 225 Shimo-Okubo, Sakura-ku, Saitama 338-8570, Japan
Author to whom correspondence should be addressed.
Submission received: 22 October 2020 / Revised: 10 November 2020 / Accepted: 11 November 2020 / Published: 16 November 2020
This article aims to examine the monetary policy rule under an inflation targeting in Mongolia with a focus on its conformity to the Taylor principle, through two kinds of approaches: a monetary
policy reaction function by the generalized-method-of-moments (GMM) estimation and a New Keynesian dynamic stochastic general equilibrium (DSGE) model with a small open economy version by the
Bayesian estimation. The main findings are summarized as follows. First, the GMM estimation identified an inflation-responsive rule fulfilling the Taylor principle in the recent phase of the
Mongolian inflation targeting. Second, the DSGE-model estimation endorsed the GMM estimation by producing a consistent outcome on the Mongolian monetary policy rule. Third, the Mongolian rule was
estimated to have a weaker response to inflation than the rules of the other emerging Asian adopters of an inflation targeting.
JEL Classification:
E52; E58; O53
1. Introduction
Mongolia has evolved her monetary policy framework, since she transformed the economic system from a centrally planned economy to a market-based economy in the early 1990s. In 1991, the Bank of
Mongolia (BOM) started implementing monetary policy as a central bank. At the first stage until 2006, the BOM adopted a monetary aggregate targeting with its reserve money being an operational
target. Since the mid-2000s, however, the linkage between reserve money and inflation became unstable due to financial deepening processes, and thus the monetary aggregate target lost its
effectiveness. Under this background, the BOM introduced an inflation targeting framework in 2007 as the second stage. In this framework, the BOM equipped the policy mandates of announcing a mid-term
targeted inflation rate to the public and of taking every possible measures to maintain the inflation rate within its targeted range. At the same time, in July 2007 the BOM adopted the one-week
central bank bills’ rate as a policy rate, so that the policy rate could work as an operating target to attain its targeted inflation rate. Having received and completed the IMF Stand-by program
during the wave of the world financial crisis in 2009, the BOM has taken several steps to upgrade the inflation targeting system as a recent stage. The BOM developed the Forecasting and Policy
Analysis System (FPAS) since 2011, aiming at forecast-based policy formation and decision making, and effective communication with the public under the inflation targeting. The BOM has also improved
its operational framework by establishing an interest rate corridor to enhance the policy rate transmission mechanism from 2013.
When it comes to an analytical issue on rule-based monetary policies, the “monetary policy reaction function” has been a useful instrument to evaluate monetary policy rules practiced by central banks
in a quantitative way. The function, a more generalized form of the so-called Taylor rule proposed by Taylor, describes the policy rules in such a way that central banks adjust their policy rates in response to the gaps between expected inflation and output and their respective targets. The functions were
first estimated in the seminal work by Clarida et al. for examining monetary policies of two sets of countries: the G3 (Germany, Japan, and the US) and the E3 (UK, France, and Italy). Since then, the functions have been widely applied for analyzing or
describing the monetary policy rules not only in advanced economies but also in emerging-market economies.
In examining the monetary policy reaction function, one of the most crucial criteria to judge the workability of monetary policy rules to control inflation would be whether the rules fulfill the
“Taylor principle”: for inflation to be stable, the central bank must respond to an increase in inflation with an even greater increase in a nominal interest rate (
Mankiw 2016
). In case a policy rate’s upward reaction to a hike of inflation rate is less than unity, the suppressed “real” interest rate could further accommodate inflation, thereby leading to a vicious circle
of spiraling inflation. The Taylor principle is in general considered to hold in advanced economies through
Clarida et al. and the subsequent empirical studies (e.g.,
Belke and Polleit 2007
). For developing and emerging-market economies like Mongolia, however, the validity of the Taylor principle is questionable and has also been limitedly studied, even though the economies adopted an
inflation targeting in their monetary policy frameworks.
Another point worth noting in analyzing monetary policy rules in developing and emerging-market economies is that their rules are often supposed to take into account not only inflation and output gap
but also exchange rate fluctuations.
Calvo and Reinhart
) argued that there seemed to be an epidemic case of the “fear of floating”, particularly among emerging-market economies. The fear of floating comes from a lack of confidence in currency value,
especially given that their external debt is primarily denominated in US dollars, which is often referred to as the “original sin” hypothesis (
Eichengreen and Hausmann 1999
). The principle of the “impossible trinity”, on the other hand, demonstrates that an economy has to give up one of three goals: fixed exchange rate, independent monetary policy, and free capital
flows. Thus, given the capital mobility, emerging-market economies tend to face the trade-off in their policy targets between exchange-rate stability and price stability: their efforts to avoid
exchange rate volatility prevent their monetary authorities from concentrating fully on an inflation targeting.
This article aims to examine the monetary policy rule under an inflation targeting in Mongolia with a focus on its conformity to the Taylor principle, through the two kinds of models: a monetary
policy reaction function by the generalized-method-of-moments (GMM) estimation and a New Keynesian dynamic stochastic general equilibrium (DSGE) model by the Bayesian estimation. The contributions of
this study are summarized as follows. First, this study focuses on the conformity to the Taylor principle in Mongolian monetary policy rule, while there has been a limited amount of evidence in the
literature. Second, this study examines Mongolian monetary policy rule in a DSGE macroeconomic framework as well as in a single policy reaction function with the GMM estimation, so that the Taylor
principle could be identified in a robust manner. Third, this study also estimates the Mongolian policy rate’s reaction to exchange rate, so that the degree of the fear of floating could be verified.
The rest of the paper is structured as follows.
Section 2
gives an overview of Mongolian monetary policy after the adoption of an inflation targeting in 2007.
Section 3
reviews the literature on the studies of monetary policy rules in emerging Asian economies including Mongolia, and highlights this study’s contributions.
Section 4
conducts the empirical analyses of the Mongolian monetary policy rule by GMM and DSGE estimations with descriptions of the methodology, as well as estimation results and their discussions.
Section 5
summarizes and concludes.
2. Overview of Mongolian Monetary Policy
This section overviews the trends in Mongolian monetary policy after the adoption of an inflation targeting in 2007.
Figure 1
displays the BOM’s policy rate and the interest rate corridor, and also compares the actual inflation rate with the targeted rate in terms of annual rate at each year end. The targeted inflation rate
was updated by the BOM’s Monetary Policy Guidelines for each year.
Soon after the BOM introduced an inflation targeting in 2007, the Mongolian economy was hit by the world financial crisis in 2009, and the Mongolian government accepted the IMF Stand-by Program in
that year. At that time, the BOM raised its policy rate towards 14 percent in March 2009, as there was the need to restore confidence in the local currency and to stop the deposit flight out of its
economy. The BOM afterwards reduced its policy rate gradually to 10 percent in September 2009 with the decline in inflation rate.
For the period from 2010 to 2012, the Mongolian economy entered the booming stage with a double-digit inflation rate. The fueling of inflation came from the price elevation of such necessities as
food and fuel on the supply side, and the expansionary fiscal policy and the soring of capital inflows in the mining sector on the demand side. Thus the BOM raised its policy rate continuously
towards 13.25 percent until January 2013. At the same time, the BOM together with the government initiated the “Medium-term Price Stabilization Program” containing programs to stabilize food and fuel
prices in October 2012 to decrease the supply side pressure on inflation.
Since around 2013, however, the Mongolian economy has been getting into a phase of economic slowdown. During 2014–2015, in particular, the net inward foreign direct investment to Mongolia fell
significantly (in 2014 by 17 times less than its peak in 2011), due to the downturn of the Chinese economy. Afterwards, the economy with inward foreign direct investment was in a moderate recovery
process until 2019. During this phase, the inflation rate calmed down with the rate falling down to one percent level in 2015–2016, and still kept itself below the targeted rate until 2019. The
monetary policy in this phase, on the other hand, represented rather complicated reactions: the BOM raised its policy rate in 2014–2015 and in the middle of 2016 to avoid the balance-of-payment
crises, while continuous monetary easing was expected under moderate inflation.
As for monetary policy framework, there has been progress in the inflation targeting system. The BOM developed the Forecasting and Policy Analysis System (FPAS) from 2011, aiming at forecast-based
policy formation and decision making, and effective communication with the public. The BOM also improved its operational framework by establishing an interest rate corridor from February 2013 as
shown in
Figure 1
All in all, the whole period with the inflation targeting since 2007 has two different phases: the first phase had such disturbances as the repercussion from the world financial crisis and the surge
of inflation-hike with economic booming; and the second phase experienced economic slow-down and moderate inflation with the improvements in monetary policy frameworks.
3. Literature Review and This Study’s Contributions
This section reviews the literature on the studies of monetary policy rules in emerging Asian countries including Mongolia, and highlights the contributions of this study. The review organizes the
previous studies with a focus on the conformity to the Taylor principle as well as the forward- or backward-looking modes in the policy rules of the emerging Asian countries who have adopted an
inflation targeting framework.
The Asian countries who experienced the currency crises in the late 1990s initiated an inflation targeting as their monetary policy frameworks in the post-crisis period: Indonesia in July 2005, Korea
in April 1998, the Philippines in January 2002, and Thailand in May 2000. Since they switched their exchange rate regime from a pegged one to a floating one in the crisis times, they intended to
utilize an inflation targeting as an alternative anchor for price stability (e.g., Mishkin 2000).
Since then, their monetary policy rules under an inflation targeting have been discussed and examined quantitatively in academic circles. The conformity to the Taylor principle, which requires the policy rate's response to inflation to be over unity, has been identified in the adopters of an inflation targeting by the following studies: for Indonesia, Taguchi and Kato, and Taguchi et al.; for Korea, Kim and Park; for the Philippines, Taguchi et al., among others; and for Thailand, Taguchi and Kato, among others.
In terms of the forward- or backward-looking modes in the policy rate's response to inflation, some differences are found among the studies above: for Indonesia, a contemporaneous- and backward-looking rule in Taguchi and Kato, among others, versus a forward-looking rule in Taguchi et al.; and for Thailand, a backward- and contemporaneous-looking rule in Taguchi and Kato, among others, versus a forward-looking rule in Taguchi et al. These differences might come from differences in the sample periods used in the studies: the upgrading toward forward-looking rules by using updated samples might reflect the recent improvements in forecasting and managing capacities of the adopters of an inflation targeting through accumulated experience of operating its system.
Another strand of the literature on monetary policy rules is the identification of a nonlinear Taylor rule. Since an earlier study verified a nonlinear rule for the European Central Bank and the Bank of England, nonlinearity has also been identified for Korea by Koo et al. and for five emerging-market economies including Indonesia, Korea, and Thailand by Caporale et al. This study, however, does not apply a nonlinear approach due to the limited sample size in Mongolia.
Regarding the monetary policy rule of Mongolia, one of the adopters of an inflation targeting in Asia, there have been no studies except Taguchi and Khishigjargal, which examined it quantitatively with a policy reaction function. Taguchi and Khishigjargal described the recent Mongolian rule as an inflation-responsive rule with a forward-looking manner, but with a response weak enough to be pro-cyclical to inflation pressure (against the Taylor principle) due to the “fear of floating”.
This study's contribution, particularly relative to Taguchi and Khishigjargal, could be highlighted as follows. First, this study re-examines the Mongolian monetary policy reaction function by using updated sample data. Adding the sample period for 2018–2019 on a quarterly basis seems to be critical, since inflation during that period was well-controlled under the improved management of an inflation targeting, as was observed in Section 2. Second, this study has the GMM estimation of a policy reaction function double-checked by the Bayesian estimation of a macroeconomic DSGE model, so that the validity of the Taylor principle could be examined in a robust manner.
4. Empirical Analyses of Mongolian Monetary Policy Rule
This section conducts the empirical analyses of the Mongolian monetary policy rule. The section starts with the GMM estimation followed by the DSGE analysis and the discussions of their estimation
4.1. GMM Estimation
This subsection estimates a monetary policy reaction function by the GMM method for describing the Mongolian monetary policy rule under an inflation targeting with a focus on its conformity to the
Taylor principle.
The monetary policy reaction function is specified following the initial work of Clarida et al. and subsequent studies such as Belke and Polleit. The original form of the function is denoted by Equation (1), and the empirical specification is presented by Equation (2).
r[t]* = ř + β (E[π[t+n]|Ω[t]] − π*) + γ (E[y[t]|Ω[t]] − y*)
where r[t]* is a target for the central bank's policy rate in period t; ř is a natural rate of nominal interest rate; π[t+n] is the inflation rate between periods t and t + n; y[t] is real output; π* and y* are the respective optimal points for the inflation rate and real output; E is an expectation operator; and Ω[t] is the information available to the central bank at the time it sets the policy rate. Equation (1) is transformed into Equation (2) for empirical estimation.
r[t] = (1 − ρ) (α + β π[t+n] + γ x[t]) + ρ r[t−1] + ε[t]
where r[t], the actual policy rate, comes from r[t] = (1 − ρ) r[t]* + ρ r[t−1] with ρ ∈ [0, 1] being the degree of policy rate smoothing; the unobserved forecast variables, E[π[t+n]|Ω[t]] and E[y[t]|Ω[t]], are replaced by the realized variables, π[t+n] and y[t]; α and x[t] are defined as α ≡ ř − β π* and x[t] ≡ y[t] − y* (output gap); and ε[t] is a combination of the central bank's forecast errors of inflation and output, and exogenous disturbances.
Among the parameters, one of the greatest concerns is β, the degree of the policy rate's responsiveness to the inflation rate. In order for the Taylor principle to hold, β should be over unity (β > 1): the policy rate must react more than one-for-one to inflation; otherwise the real policy rate would accommodate inflation in a pro-cyclical manner. The subscript n of π[t+n] in this study takes the values of 1, 0 and −1 to denote forward-, contemporaneous-, and backward-looking specifications, respectively.
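A quick back-of-the-envelope check of why β > 1 matters (illustrative numbers only, not estimates from this paper):
```python
# Change in the long-run real policy rate after a 1-point rise in inflation,
# under different values of beta in Equation (2).
def real_rate_change(beta, d_inflation=1.0):
    return beta * d_inflation - d_inflation    # nominal response minus inflation

for beta in (0.8, 1.0, 1.2):
    print(f"beta = {beta}: real rate changes by {real_rate_change(beta):+.1f} points")
# beta < 1 lowers the real rate and accommodates inflation; beta > 1 raises it.
```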
Aside from the policy rate’s responses to inflation and output, this study also confirms its reaction to exchange rate to see the degree of the fear of floating.
r[t] = (1 − ρ) (α + δ e[t]) + ρ r[t−1] + ε[t]
where e[t] is the change in the exchange rate in terms of local currency (tugrik) value per US dollar. In the case that the central bank prioritizes exchange-rate stabilization in its policy rule, the coefficient δ should take a significantly positive value.
The estimation uses quarterly data running from the third quarter of 2007 (2007Q3) to the present, the fourth quarter of 2019 (2019Q4), during which the BOM operated an inflation targeting. All the
data was retrieved from the International Financial Statistics (IFS) of the International Monetary Fund (IMF).
The empirical monetary policy reaction functions in Equations (2) and (3) require the data for the following four indicators: the series of “Central Bank Policy Rate” for policy rate r; “Consumer
Prices Index (2010 = 100)” for price index, which is transformed into a year-on-year change rate as inflation rate π; “Industrial Production, Seasonally adjusted, Index (2010 = 100)” for industrial
production, which is processed into output gap x by subtracting from the industrial production a Hodrick–Prescott-filter of that series as a proxy of a potential production level; and “National
Currency per US Dollar, Period Average” for exchange rate, which is expressed as a year-on-year change rate e.
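As a rough sketch of how the output-gap series described above could be built (the data here are random placeholders, not the paper's actual series):
```python
# Detrend an industrial production index with the Hodrick-Prescott filter and
# take the deviation from trend as the output gap.
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(0)
ip_index = pd.Series(100 + np.cumsum(rng.normal(0.5, 1.0, 50)),
                     index=pd.period_range("2007Q3", periods=50, freq="Q"))
cycle, trend = hpfilter(ip_index, lamb=1600)   # lambda = 1600 is the quarterly convention
output_gap = ip_index - trend                  # equals the cycle component
print(output_gap.tail())
```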
Before conducting the estimation, the study investigates the stationary property of the data for each variable, by employing the augmented Dickey–Fuller (ADF) unit root test (
Said and Dickey 1984
) on the null hypothesis that each variable has a unit root in the test equation including “intercept”.
Table 1
reports the test result for the data for all the indicators, i.e., policy rate r, inflation rate π, output gap x, and exchange rate e for their level data. The test rejected a unit root in all the
data at the conventional level of significance by more than 95 percent, thereby their data showed a stationary property. Thus their data are justified to be used for the subsequent estimation.
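The ADF test reported in Table 1 can be reproduced with standard tools; the snippet below is a generic sketch on placeholder data, not the authors' code:
```python
# Unit root check in the spirit of Table 1. Random placeholder series stand in
# for the actual policy rate, inflation, output gap and exchange-rate change.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
placeholder_series = {name: rng.normal(size=50) for name in ("r", "pi", "x", "e")}
for name, series in placeholder_series.items():
    stat, pvalue, *_ = adfuller(series, regression="c")   # test equation with intercept
    print(f"{name}: ADF t-stat {stat:.3f}, p-value {pvalue:.3f}")
```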
For the technique to estimate the parameter vector [α, β, γ, δ, ρ], the study adopts the generalized method of moments (GMM). One of the assumptions required for regression analysis is that the
explanatory variables are uncorrelated with the disturbance term. In the case that the equation contains endogenously determined variables as explanatory ones, however, the assumption is violated and
the estimator of ordinary least squares is biased and inconsistent. The case could be applied to the estimation Equations (2) and (3) in this study, since the policy interest rate might also affect
the explanatory variables. The standard approach to eliminate the effect of variable and residual correlation is to estimate the equation using “instrumental variables” regression. In this context,
the GMM estimator is excellent in terms of consistency, asymptotic normality, and efficiency, and has been widely used since seminal works such as that of Hansen and Singleton applied the estimator to empirical problems. Thus this study adopts the GMM estimator and uses one-, two- and three-quarter lags of the explanatory variables π, x, and e as instruments in the estimation Equations (2) and (3). The J-statistic implies that these instrumental variables are valid in the sense that the over-identifying restrictions cannot be rejected in the models (Table 2 and Table 3).
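For readers who want to see the mechanics, the sketch below runs a plain two-stage least squares version of Equation (2) on simulated data with lagged values as instruments; the paper's GMM adds an optimal weighting matrix and the J-test on top of this idea, and all series and parameter values here are made up for illustration:
```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
pi = np.empty(T); x = np.empty(T); r = np.empty(T)
pi[0], x[0], r[0] = 8.0, 0.0, 12.0
for t in range(1, T):
    pi[t] = 2.4 + 0.7 * pi[t - 1] + rng.normal(0, 1.0)          # persistent inflation
    x[t] = 0.6 * x[t - 1] + rng.normal(0, 1.0)                  # persistent output gap
    r[t] = 0.9 * r[t - 1] + 0.1 * (2.0 + 1.2 * pi[t] + 0.3 * x[t]) + rng.normal(0, 0.2)

# regressand and regressors: constant, pi_t, x_t, r_{t-1}
y = r[3:]
X = np.column_stack([np.ones(T - 3), pi[3:], x[3:], r[2:-1]])
# instruments: constant, one- and two-quarter lags of pi and x, and r_{t-1}
Z = np.column_stack([np.ones(T - 3), pi[2:-1], pi[1:-2], x[2:-1], x[1:-2], r[2:-1]])

PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)          # projection onto the instrument space
coef = np.linalg.solve(X.T @ PZ @ X, X.T @ PZ @ y)
const, b_pi, b_x, rho = coef
print("long-run beta:", b_pi / (1 - rho))       # should land near the true value of 1.2
```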
The GMM estimation is conducted for the total sample period (2007Q3–2019Q4), and also for the first half (2007Q3–2011Q4) and the second half (2012Q1–2019Q4) periods, since the whole period with an
inflation targeting has two different phases as was described in
Section 2
: the first phase with economic disturbances and high inflation, and the second phase with moderate inflation and policy improvements. The breakpoint in the total sample is set at 2012Q1 following
the previous study of Taguchi and Khishigjargal, and this study also reconfirmed the breakpoint statistically by Chow's breakpoint test: the F-statistic (7.166) rejected the hypothesis of parameter stability over different periods with the
breakpoint being 2012Q1 with a probability of more than 99 percent.
Table 2
reports the estimation outcomes of monetary policy reaction functions with forward-, contemporaneous-, and backward- looking specifications for different sample periods: the total, the first part and
the second part ones.
Table 3
shows the reaction to the change in exchange rate for three different sample periods. In each table, based on the estimated short-term coefficients in the upper part, the long-term coefficients [α,
β, γ] are computed and displayed in the lower part.
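The long-term coefficients in the lower panels follow from the short-term ones by undoing the smoothing term, i.e., dividing by (1 − ρ); for example, using the second-half contemporaneous estimates quoted in Table 2:
```python
rho = 0.905
short_run_beta = 0.110
print(short_run_beta / (1 - rho))   # about 1.16; Table 2 reports 1.172 (the inputs above are rounded)
```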
Focusing on the long-term coefficients in
Table 2
, it is only in the second-half-period estimation that the coefficients of inflation are positive at the significance level of more than 95 percent, and more importantly, the coefficient is beyond unity in the contemporaneous-looking specification (β = 1.172 in the π case), implying conformity to the Taylor principle. The other coefficients, including those of the output gap, are insignificant or weakly significant, or otherwise impossible to calculate since the degree of smoothing ρ is over unity. As for Table 3, no meaningful coefficients are found in the estimation of the policy rate's responses to exchange rate fluctuations.
All in all, the Mongolian monetary policy rule in the recent phase of an inflation targeting is characterized by an inflation-responsive rule fulfilling the Taylor principle, and the fear of floating
is not serious enough to disturb the inflation-responsive rule.
4.2. New Keynesian DSGE Estimation
This section turns to a New Keynesian DSGE estimation in order to re-check the validity of the Taylor principle verified by the GMM estimation in the previous section. This section first specifies
the model structure, and then presents the estimation result.
A New Keynesian DSGE model builds on the micro-founded structure of a Real Business Cycle model (Kydland and Prescott 1982) with nominal rigidities. By virtue of advances in estimation techniques, especially since the seminal works of Christiano et al. and Smets and Wouters, the New Keynesian model has been widely used for macroeconomic studies in recent decades. One of the extensions of the simple New Keynesian model is to model a small open economy rather than a closed economy. Gali and Monacelli, for instance, laid out a small open economy version of the model with Calvo-type staggered price-setting and with equilibrium conditions reflecting the degree of openness and world output fluctuations, and used it for analyzing the macroeconomic implications of alternative monetary policy regimes including an exchange rate peg.
This study applies the small open economy version of a New Keynesian DSGE model to examine the Mongolian monetary policy rule, since the Mongolian economy is considered to be a typical small open economy.
The estimable model consists of the following ten equations based on Gali and Monacelli:
$\tilde{x}_t = E_t[\tilde{x}_{t+1}] - (1/\sigma_\alpha)\,(\tilde{r}_t - E_t[\pi_{H,t+1}] - \overline{rr}^{\,n}_t)$
$\overline{rr}_t = -\sigma_\alpha \Gamma (1-\rho_a)\, a_t + \alpha \sigma_\alpha (\Theta + \Psi)\, E_t[\Delta \tilde{y}^{*}_{t+1}]$
$\pi_{H,t} = \beta E_t[\pi_{H,t+1}] + \kappa_\alpha \tilde{x}_t + e_t$
$\tilde{r}_t = \phi_r \tilde{r}_{t-1} + (1-\phi_r)(\phi_\pi \pi_t + \phi_x \tilde{x}_t) + \varepsilon_{r,t}$
$s_t = \sigma_\alpha (\tilde{y}_t - \tilde{y}^{*}_t)$
$\tilde{y}_t = \tilde{x}_t + (\Gamma a_t + \alpha \Psi \tilde{y}^{*}_t)$
$\tilde{y}^{*}_t = \rho_w \tilde{y}^{*}_{t-1} + \varepsilon_{w,t}$
The list of endogenous and exogenous variables, and that of fixed and estimated parameters including the definitional identities, are presented in Table 4 and Table 5, respectively.
The first four equations from (4) to (7) constitute the major structure of a New Keynesian model (with a small open economy version), characterizing the dynamic behavior of three key macroeconomic
indicators: output gap, inflation, and nominal interest rate. Equation (4), called the “expectational IS curve”, corresponds to the log-linearization of an optimizing household’s Euler equation.
Equation (5) represents the determination of the natural rate of interest rate. Equation (6), called the New-Keynesian Phillips curve, describes the optimizing behavior of monopolistically
competitive firms that set their prices in a randomly staggered, Calvo-type fashion. Equation (7) represents the monetary policy rule, corresponding to Equations (1) and (2) shown in Section 4.1. The subsequent three equations from (8) to (10) describe the nexus between CPI inflation (the change in consumer prices) and domestic inflation (the change in domestic goods prices), representing
the property of a small open economy, i.e., the linkage between a small open economy and the world economy through economic openness and terms of trade.
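As a small illustration of what Equation (7) alone implies (this only traces the policy rule forward for an assumed inflation path; it is not a solution of the full rational-expectations system, which the paper obtains with Dynare), with parameter values taken from the priors discussed below:
```python
import numpy as np

phi_r, phi_pi, phi_x = 0.905, 1.172, 0.0
T = 12
pi_dev = 1.0 * 0.7 ** np.arange(T)        # an inflation deviation that decays geometrically
x_dev = np.zeros(T)
r_dev = np.zeros(T + 1)                   # start from the steady state (zero deviation)
for t in range(T):
    r_dev[t + 1] = phi_r * r_dev[t] + (1 - phi_r) * (phi_pi * pi_dev[t] + phi_x * x_dev[t])
print(np.round(r_dev[1:], 3))             # the response builds up and decays slowly (high phi_r)
```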
This paper uses the Bayesian method to estimate the parameters of the model above, though some parameters are fixed in advance.
For more details of the estimation, see, for example, An and Schorfheide. Regarding the observed data, the DSGE estimation uses them for three endogenous variables: the output gap $\tilde{x}$, domestic inflation $\pi_{H}$, and the nominal interest rate $\tilde{r}$. The data for domestic inflation (the change in domestic goods prices) are calculated as the year-on-year change in the GDP deflator, obtained by dividing nominal GDP by GDP at constant prices (retrieved from the National Statistics Office of Mongolia). As for the data for the output gap and the nominal interest rate, the output gap x and policy rate r of Section 4.1 are applied, although the policy rate data are processed into a detrended series by subtracting a Hodrick–Prescott filter of that series, since the model is expressed as deviations from the steady-state level.
The sample period corresponds to the second-half one in Section 4.1.
This study focuses on the estimation of the parameters that appear in the monetary policy rule in Equation (7), namely [ϕ[π], ϕ[x], ϕ[r]], and thus the other parameters are treated as fixed. As shown in Table 5, the parameter on the degree of economic openness α is set to 0.58, which corresponds to the average import/GDP ratio over the sample period. Second, the parameters [β, γ, η, θ, σ, φ, ρ[a], ρ[e], ρ[w]] are set according to various DSGE literature studies such as Smets and Wouters and Gali and Monacelli. Finally, the parameters [κ[α], λ, σ[α], ω, Γ, Θ, Ψ] are set in the same way as in Gali and Monacelli. The prior means of [ϕ[π], ϕ[x], ϕ[r]] are set to the values estimated by the GMM in Section 4.1.
Table 6 reports the prior-value settings in the left-hand columns. The prior means of the parameters on the reaction to inflation ϕ[π] and the smoothing degree ϕ[r] correspond to the GMM-estimated parameters of the π case in the second-half sample period (β = 1.172 and ρ = 0.905), which satisfy the Taylor principle. The prior mean of the parameter on the reaction to the output gap ϕ[x] is set to zero, however, as the GMM-estimated coefficient was insignificant in that case.
The outcomes of the Bayesian estimations are summarized in terms of the posterior distributions in Table 6 and Figure 2. In the comparison between prior and posterior distributions in Table 6, the shift away from the priors to the posteriors implies that the observed data add important information to the estimation of the posteriors. It is worth noting that the posterior means of the parameters on the reaction to inflation ϕ[π] and the smoothing degree ϕ[r] have almost the same values as their prior means: in the reaction to inflation, 1.152 (posterior) versus 1.172 (prior); and in the smoothing degree, 0.891 (posterior) versus 0.905 (prior). Regarding the reaction to the output gap, the posterior mean turns out to be positive but still insignificant, judging from the Highest Posterior Density (HPD) interval, whose lower bound is negative.
Figure 3
shows the impulse response functions to monetary policy shock (one percent point of nominal interest rate shock). It shows that CPI inflation as well as domestic inflation respond negatively to
monetary policy shock over ten quarters. It should be noted that the negative response of CPI inflation is sharper than that of domestic inflation, since the negative impact of terms of trade is
added on to the CPI inflation response as Equation (8) in the model suggests.
In sum, the Bayesian estimation of the New Keynesian DSGE model with a small open economy version could endorse the GMM estimation of policy reaction function on the monetary policy rule in Mongolia,
in the sense that the outcome of both the estimations on the policy rate reaction to inflation are similar.
4.3. Discussions on Estimation Outcomes
This section discusses how to interpret the estimation outcomes in the context of the Mongolian official monetary policy stance and in comparison with the previous studies presented in Section 3. Both the GMM and DSGE estimations in Section 4.1 and Section 4.2 identified an inflation-responsive monetary policy rule fulfilling the Taylor principle in Mongolia. This result is consistent with the Mongolian policy mandate of an inflation targeting in that the
BOM should take all possible measures to attain the targeted inflation through the policy rate operation. In particular, the conformity to the Taylor principle confirmed for the second half sample
period fits well with the upgraded inflation targeting called the FPAS that has been adopted since 2011.
Compared with the previous studies on the Mongolian monetary policy rule, this study and Taguchi and Khishigjargal commonly verify an inflation-responsive rule, but it is in this study, not in Taguchi and Khishigjargal, that conformity to the Taylor principle is identified. This is probably due to this study's updating of the sample data by adding the period 2018–2019, during which inflation was well-controlled under the improved management of an inflation targeting. The strength of the policy rate reaction to inflation in Mongolia could also be compared with those in the other emerging Asian countries and an advanced country like the US. The Mongolian coefficient of inflation responsiveness is estimated to be under 1.2 in both the GMM and DSGE approaches in this study. It is a rather weaker reaction compared with those of the other Asian adopters of an inflation targeting according to the latest study of Taguchi et al.: 1.3 in Thailand, 1.4 in the Philippines, and 1.8 in Indonesia; and further with the US Fed reaction of 2.27–2.57 exhibited by Belke and Polleit. Thus there seems to be still room to investigate whether the Mongolian policy rate reaction to inflation, although fulfilling the Taylor principle, would be powerful enough to control inflation in the case of high inflationary pressure.
This study focused on Mongolia to examine a monetary policy rule through the GMM and DSGE approaches. These approaches could also be applied to the investigation of monetary policy rules in the other
emerging market economies, as they have improved their inflation targeting management. In fact, Taguchi et al. adopted the GMM and DSGE approaches for analyzing the monetary policy rules in Indonesia, the Philippines and Thailand, although their New Keynesian DSGE estimation was based on a closed economy's
version. It is expected that these approaches will be used widely for examining monetary policy rules in emerging-market economies with extended versions of a New Keynesian DSGE model.
5. Concluding Remarks
This article examined the monetary policy rule under an inflation targeting in Mongolia with a focus on its conformity to the Taylor principle, through two kinds of approaches: a monetary policy
reaction function by the GMM estimation and a New Keynesian DSGE model with a small open economy version by the Bayesian estimation. This study contributes to the enrichment of evidence in assessing
an inflation targeting adopted by emerging market economies. The main findings are summarized as follows. First, the GMM estimation identified the contemporaneous inflation-responsive rule
fulfilling the Taylor principle in the recent phase of an inflation targeting. Second, the DSGE-model estimation endorsed the GMM estimation by producing a consistent outcome on the monetary policy
rule. Third, the Mongolian rule was estimated to have a weaker response to inflation than the rules of the other emerging Asian adopters of an inflation targeting.
Author Contributions
Conceptualization, methodology and formal analysis, H.T. and G.G.; investigation and data curation, G.G.; writing—original draft preparation and writing—review and editing, H.T.; All authors have
read and agreed to the published version of the manuscript.
This research received no external funding.
We appreciate Kenichi Tamegawa, Yamagata University, for his contribution to inputting the necessary knowledge in the estimation of the DSGE model with the small open economy version.
Conflicts of Interest
The authors declare no conflict of interest.
The Mongolian economic size is 13.0 billion US dollars at current prices in terms of GDP in 2018, while the average size of Asian developing economies is 621.7 billion US dollars in the same year.
The Mongolian ratio of “imports of goods and services” to GDP is 55.6 percent on the average during the period from 1995 to 2018, while the average ratio in Asian developing economies is 35.1
percent during the same period. The data of the GDP and the ratio of “imports of goods and services” to GDP are retrieved respectively from UNCTAD STAT:
3 The equations from (4) to (10) except (7) correspond to those in Gali and Monacelli (2005) as follows: Equations (4) and (5) to (37) in p. 719 of Gali and Monacelli (2005); (6) to (36) in p. 718; (8) to (14) in p. 712; (9) to (29) in p. 717; and (10) to (35) in p. 718.
4 For the Bayesian estimation, the study uses the software of Dynare and Matlab.
6 The data of domestic inflation and output gap have no need to be processed under the assumption of zero-inflation steady-state.
7 The import/GDP ratio is the one that divides “Imports of Goods and Services” by GDP in Mongolia, using the data of International Financial Statistics of International Monetary Fund.
Figure 1. Trend in Monetary Policy in Mongolia. Source: Author’s description based on the website of the Bank of Mongolia.
Figure 3. Impulse Responses to Monetary Policy Shock under the Dynamic Stochastic General Equilibrium (DSGE) Model. Source: Author’s estimation.
Variable t-Statistic Probability
r −4.207 *** 0.001
π −3.535 ** 0.011
x −5.081 *** 0.000
e −3.643 *** 0.008
Note: ***, ** denote the rejection of null hypothesis at the 99% and 95% level of significance. Sources: Author’s estimation.
[Total Period: 2007Q3–2019Q4]
Coefficient πt − 1 π πt + 1
(1 − ρ)*α 4.861 −0.657 2.753
(0.693) (−0.053) (0.391)
(1 − ρ)*β −0.001 0.038 0.016
(−0.021) (0.454) (0.255)
(1 − ρ)*γ 0.007 0.013 0.018
(0.455) (0.888) (1.318)
ρ 0.574 1.031 0.749
(1.009) (1.026) (1.323)
J-statistics 3.766 0.707 0.316
(0.287) (0.871) (0.956)
Long-term Coefficients
α 11.414 - 11.001
β -0.003 - 0.064
γ 0.018 - 0.074
[First-half Period: 2007Q3–2011Q4]
Coefficient πt-1 π πt + 1
(1 − ρ)*α 9.415 *** 0.804 10.110 **
(9.284) (0.222) (2.565)
(1 − ρ)*β −0.066 *** 0.016 −0.048
(-5.538) (0.568) (−1.169)
(1 − ρ)*γ −0.008 * 0.001 −0.005
(−2.020) (0.123) (−0.522)
ρ 0.226 ** 0.940 ** 0.136
(2.670) (2.955) (0.419)
J-statistics 3.760 1.238 1.924
(0.288) (0.743) (0.588)
Long-term Coefficients
α 12.174 *** 13.429 11.706 **
β −0.086 *** 0.278 −0.056
γ −0.011 * 0.017 −0.006
[Second-half Period: 2012Q1–2019Q4]
Coefficient πt − 1 π πt + 1
(1 − ρ)*α 2.461 0.183 0.125
(0.430) (0.074) (0.029)
(1 − ρ)*β 0.083 ** 0.110 *** 0.067 **
(2.362) (2.927) (2.351)
(1 − ρ)*γ 0.086 * 0.139 0.002
(1.717) (1.066) (0.069)
ρ 0.736 0.905 *** 0.923 **
(1.542) (4.007) (2.606)
J-statistics 3.040 1.034 3.631
(0.385) (0.793) (0.2C9)
Long-term Coefficients
α 9.334 1.954 1.653
β 0.316 ** 1.172 *** 0.885 **
γ 0.327 * 1.480 0.031
Note: ***, **, * denote the rejection of null hypothesis at the 99%, 95%, and 90% level of significance. The numbers in parentheses are t-values, except that those in J-statistics are their
probabilities. Sources: Author’s estimation.
Coefficient 2007q3–2019q4 2007q3–2011q4 2012q1–2019q4
(1 − ρ)*α −0.231 2.652 ** 9.574
(−0.036) (2.905) (0.487)
(1 − ρ)*δ −0.039 −0.051 * 0.022
(−0.939) (−1.914) (0.448)
ρ 1.053 * 0.790 *** 0.167
(1.789) (6.874) (0.100)
J-statistics 0.158 0.131 0.588
(0.690) (0.716) (0.442)
Long-term Coefficients
α - 12.685 ** 11.495
δ - −0.244 * 0.027
Note: ***, **, * denote the rejection of null hypothesis at the 99%, 95%, and 90% level of significance. The numbers in parentheses are t-values, except that those in J-statistics are their
probabilities. Sources: Author’s estimation.
[Endogenous Variables]
$\tilde{x}$ Output gap
$\tilde{y}$ Output
π CPI inflation (the rate of change in consumer prices)
π[H] Domestic inflation (the rate of change in domestic goods prices)
$\tilde{r}$ Nominal interest rate
$\overline{rr}$ Natural rate of interest rate
s Terms of trade
E Expectation operator
[Exogenous Variables]
$\tilde{y}^{*}$ World output that follows a first-order autoregressive process with i.i.d. shock, ε[w]
a Productivity shock that follows first-order autoregressive with i.i.d. shock, ε[a]
e Cost-push shock that follows first-order autoregressive with i.i.d. shock, ε[e]
ε[r] Monetary policy shock with i.i.d.
Descriptions (˜ denotes the deviation from the steady-state level)
[Fixed Parameters] Descriptions Assumption Notes
α Degree of economic openness 0.58 Import/GDP ratio in the sample average
β Discount factor for households 0.99
γ Substitutability between goods produced in different foreign countries 1.00
η Substitutability between domestic and foreign goods 1.00
θ Probability a firm does not change its price 0.75
σ Parameter on utility of consumption under constant relative risk aversion (CRRA) 1.00 Log utility of consumption
φ Parameter on disutility of labor 0.00 Linear disutility of labor
ρ[a] Autoregressive parameter for productivity shock 0.90
ρ[e] Autoregressive parameter for cost-push shock 0.90
ρ[w] Autoregressive parameter for world GDP shock 0.90
[Definitional Identities]
κ[α] ≡ λ (σ[α] + φ)
λ ≡ {(1 – β θ) (1 – θ)}/θ
σ[α] ≡ σ/(1 – α) + α ω
ω ≡ σ γ + (1 – α) (σ η – 1)
Γ ≡ (1 + φ)/(σ[α] + φ)
Θ ≡ (σ γ – 1) + (1 – α) (σ η – 1)
Ψ ≡ – Θ σ[α]/(σ[α] + φ)
[Estimated Parameters: Monetary policy rule]
ϕ[r] Smoothing degree of policy rate
ϕ[π] Policy rate reaction to CPI inflation
ϕ[x] Policy rate reaction to output gap
Parameters Priors Posterior
Dist. Mean Stdev. Mean 90% HPD Interval
Monetary policy rule – – – – –
Inflation ϕ[π] norm 1.172 0.050 1.152 1.071–1.232
GDP gap ϕ[x] norm 0.000 0.050 0.011 −0.003–0.027
Smoothing ϕ[r] norm 0.905 0.050 0.891 0.854–0.928
Monetary Policy ε[r][t] invg 1.000 1.000 1.276 0.764–1.777
Productivity ε[a][t] invg 1.000 1.000 0.927 0.313–1.607
Cost-push ε[e][t] invg 1.000 1.000 2.124 1.658–2.567
World GDP ε[w][t] invg 1.000 1.000 17.176 13.101–22.539
Sources: Author’s estimation.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Taguchi, H.; Gunbileg, G. Monetary Policy Rule and Taylor Principle in Mongolia: GMM and DSGE Approaches. Int. J. Financial Stud. 2020, 8, 71. https://doi.org/10.3390/ijfs8040071 | {"url":"https://www.mdpi.com/2227-7072/8/4/71","timestamp":"2024-11-06T01:48:18Z","content_type":"text/html","content_length":"450228","record_id":"<urn:uuid:51fd2692-ab37-4c3f-b133-711a552edf51>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00072.warc.gz"}
Simulate the Solar Eclipse with SOLIDWORKS!
| {"url":"https://www.solidsolutions.ie/Blog/2015/03/Simulate-the-Solar-Eclipse-with-SOLIDWORKS/","timestamp":"2024-11-03T00:19:10Z","content_type":"text/html","content_length":"113206","record_id":"<urn:uuid:155737a9-384a-4841-bad0-c491fa3d9c72>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00843.warc.gz"}
Section 3.1, 3.2, 3.3 and 3.4 in Matter and Interactions (4th edition)
Non-constant Force: Newtonian Gravitation
Earlier, you read about the gravitational force near the surface of the Earth. This force was constant and was always directed “downward” (or rather toward the center of the Earth). In these notes,
you will read about Newton's formulation of the gravitational force that (in his day) helped explain the motion of the solar system including why the Sun was at the center of the solar system.
Lecture Video
The Gravitational Force
Using a number of empirical observations (by Tycho Brahe and Johannes Kepler) of the motion of various astronomical objects, Isaac Newton was able to develop an empirical formula for the interactions
of the those objects that could predict the future (and explain the past) motion of those objects. This formula became known as Newton's Universal Law of Gravitation. We will refer to it as the Model
of the Gravitational Force ^1).
Newton found that the interaction between two objects with mass is attractive, directly proportional to the product of their masses, inversely proportional to the square of their separation, and
directed along the line between their centers. The figure to the right illustrates the force that planet 2 exerts on planet 1.
To be explicit, consider the vector ($\vec{r}$) that points from planet 2 to planet 1. If the location of planet 1 relative to the origin is $\vec{r}_1$ and the location of planet 2 relative to the
same origin is $\vec{r}_2$, then this relative position or separation vector can be mathematically represented like this:
$$\vec{r} = \vec{r}_1 - \vec{r}_2$$
The separation vector is represented by the black arrow in the figure to the right. The length of this separation vector ($|\vec{r}|$) is the how far apart the two planets are. The unit vector that
points from planet 2 to planet 1 is given by,
$$\hat{r} = \dfrac{\vec{r}}{|\vec{r}|}$$
With these vectors written, you can now write down Newton's model of the gravitational force from the description above,
$$\vec{F}_{grav} = -G\dfrac{m_1 m_2}{|\vec{r}|^2}\hat{r}$$
where $G$ is a constant of proportionality that characterizes the strength of the gravitational force. This force is represented by the red arrow in the figure to the right. In SI units, $G = 6.67384
\times 10^{-11} \dfrac{m^3}{kg\:s^2}$.
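A direct translation of this formula into code (a generic numpy sketch, not part of the course materials):
```python
# Force on object 1 due to object 2, following the formula above.
import numpy as np

G = 6.67384e-11  # m^3 / (kg s^2)

def grav_force(m1, r1, m2, r2):
    """Gravitational force (N) on mass m1 at position r1 due to mass m2 at r2."""
    r = np.asarray(r1, dtype=float) - np.asarray(r2, dtype=float)  # separation vector
    r_mag = np.linalg.norm(r)
    r_hat = r / r_mag
    return -G * m1 * m2 / r_mag**2 * r_hat    # minus sign makes the force attractive

# Example: force on the Moon due to the Earth (Earth at the origin)
F = grav_force(7.35e22, [3.84e8, 0, 0], 5.97e24, [0, 0, 0])
print(F)   # points in the -x direction, toward the Earth; magnitude about 2e20 N
```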
Why the minus sign?
The gravitational force is an attractive force. That is, two objects that interact gravitationally are attracted to each other. The gravitational force formula uses the separation vector ($\vec{r}$)
that points from the object that exerts the force to the object that experiences the force. For example, in the figure above, $m_2$ exerts the force on $m_1$, so the separation vector points from
$m_2$ to $m_1$ (black arrow in the figure above).
But, the force that $m_1$ experiences is directed towards $m_2$; it is attracted towards $m_2$. The minus sign ensures that the force (red arrow in the figure above) points in this direction.
Newton's 3rd Law
The gravitational force provides the first example of Newton's 3rd Law, which you might have heard colloquially as “For every action, there is an equal and opposite reaction.” Unfortunately, this
colloquialism is a terribly inaccurate definition that gets applied incorrectly quite often, even by the Mythbusters!
Newton's 3rd Law results from the idea that a force quantifies the interaction between two objects. You can also think of it as an empirical fact, which stems from our definition of force. That is,
we observe when one object exerts a force on another object, the second object exerts a force on the first object of the same size but opposite in direction.
To be more concrete, you can think about the gravitational interaction between the Earth and the moon (shown in the figure below). The magnitude of these gravitational forces are the same (see the
equation above), but the vector direction for each always points directly towards the other object.
We will find other examples of Newton's 3rd Law pairs when you learn about contact interactions; as it turns out, contact interactions are the result of the electrostatic force.
If the forces are the same size, why isn't the motion the same?
The motion of systems is governed by the Momentum Principle. In this case, you might find it useful to think about the acceleration of the system, which tells you how the velocity of the system
changes. While the Earth and Moon experience the same size gravitational force, the small mass of the Moon (compared to the Earth) results in a much larger acceleration for the Moon, and this change
in the Moon's velocity is large (compared to the Earth's).
Acceleration due to the gravitational force
Consider a person ($m_{person}$) who is standing on the surface of the Earth ($R_{Earth}$ from the center of the Earth). The magnitude of the force acting on either the person due to the Earth or on
the Earth due to the person is the same size, namely,
$$|F_{grav}| = G\dfrac{m_{person}M_{Earth}}{R_{Earth}^2}$$
where $|F_{grav}|$ is simply the magnitude of the gravitational force. If you want to find the magnitude of the acceleration that the person experiences as a result of the gravitational force, simply
divide the above equation by the mass of the person (i.e., $a = F/m$ for the net force),
$$|a_{person}| = \dfrac{|F_{grav}|}{m_{person}} = G\dfrac{M_{Earth}}{R_{Earth}^2}$$
This acceleration is fully defined by known quantities (i.e., $G$, $M_{Earth}$, and $R_{Earth}$) and turns out to give the Near-Earth Gravitational acceleration ($g=9.81 \dfrac{m}{s^2}$). If instead,
you are interested in the acceleration the Earth experiences due to the person, you divide by the mass of the Earth (a mass that is $10^{22}$ times larger than the person's mass),
$$|a_{Earth}| = \dfrac{|F_{grav}|}{M_{Earth}} = G\dfrac{m_{person}}{R_{Earth}^2}$$
Thus, the acceleration that the Earth would experience due to a single person is about 0.0000000000000000000001*$g$! This value is incredibly small; we often neglect changes in the motion of the Earth
due to objects that are not astronomically large. In these notes, the vector acceleration due to gravitational interactions is calculated explicitly.
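As a quick numerical check of the two accelerations above, here is a minimal sketch (not from the original notes); the person's mass of 70 kg and the rounded values of the Earth's mass and radius are assumptions.

```python
G = 6.67384e-11   # m^3 / (kg s^2)
M_earth = 5.97e24 # kg (rounded)
R_earth = 6.37e6  # m  (rounded)
m_person = 70.0   # kg (assumed)

a_person = G * M_earth / R_earth**2   # ~9.8 m/s^2 -- the familiar near-Earth value g
a_earth  = G * m_person / R_earth**2  # ~1e-22 m/s^2 -- utterly negligible
print(a_person, a_earth)
```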
(More) Modern Gravitational Models
Newton's model of the gravitational force was considered one of the simplest and most explanatory models for many years. We have since made observations that no longer fit with Newton's model (e.g.,
Gravitational lensing). Our best model for gravitation, which observations continue to fit, is called "general relativity" (GR) and was developed by Albert Einstein. While this model provides us with
far better predictions and explanations of a variety of observations, we still use Newton's model of the gravitational force for two reasons: (1) it can provide reasonable predictions for many cases,
and (2) the mathematics that is used in GR is sufficiently sophisticated that you will need more physics and mathematics experience to gain deep insight into its use.
We call this “law” a model because, as with all physical formulae, there are limitations to its predictive power. Famously, the observed motion of Mercury could not be fully predicted by this “law”; in fact, a new model had to be developed. | {"url":"https://msuperl.org/wikis/pcubed/doku.php?id=183_notes:gravitation","timestamp":"2024-11-05T18:41:29Z","content_type":"application/xhtml+xml","content_length":"45869","record_id":"<urn:uuid:39ceba3e-a7e2-4857-9107-55d9ae955b14>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00695.warc.gz"} |
Tensor description of X-ray magnetic dichroism at the Fe L[2,3]-edges of Fe[3]O[4]
^aDebye Institute for Nanomaterials Science, Utrecht University, 99 Universiteitsweg, Utrecht 3584 CG, The Netherlands, ^bInstitute of Theoretical Physics, Heidelberg University, 19 Philosophenweg,
Heidelberg 69120, Germany, ^cJülich Centre for Neutron Science, Forschungszentrum Juelich GmbH, Jülich 52425, Germany, ^dFaculty of Science, Helwan University, Cairo 11795, Egypt, and ^eDiamond Light
Source, Harwell Science and Innovation Campus, Didcot OX11 0DE, United Kingdom
^*Correspondence e-mail: h.m.e.a.elnaggar@uu.nl, f.m.f.degroot@uu.nl
Edited by K. Kvashnina, ESRF – The European Synchrotron, France (Received 1 September 2020; accepted 11 November 2020)
A procedure to build the optical conductivity tensor that describes the full magneto-optical response of the system from experimental measurements is presented. Applied to the Fe L[2,3]-edge of a
38.85nm Fe[3]O[4]/SrTiO[3] (001) thin-film, it is shown that the computed polarization dependence using the conductivity tensor is in excellent agreement with that experimentally measured.
Furthermore, the magnetic field angular dependence is discussed using a set of fundamental spectra expanded on spherical harmonics. It is shown that the convergence of this expansion depends on the
details of the ground state of the system in question and in particular on the valence-state spin–orbit coupling. While a cubic expansion up to the third order explains the angular-dependent X-ray
magnetic linear dichroism of Fe^3+ well, higher-order terms are required for Fe^2+ when the orbital moment is not quenched.
1. Introduction
The determination of the electronic and magnetic structure of engineered magnetic nanostructures is essential to tailor their properties for technological applications such as information storage,
spin transport and sensing technology. These devices often rely on magnetic thin-films and nanostructures comprising multiple layers, such as for instance transition metal-oxides magnetic tunnel
junctions and exchange biased systems. X-ray magnetic dichroism spectroscopy is a powerful tool that can provide element- and site-specific magnetic information in heteromagnetic nanostructures
(Kuiper et al., 1993 ; Nunez Regueiro et al., 1995 ; Alders et al., 1998 ; Scholl et al., 2000 ; Hillebrecht et al., 2001 ; Haverkort et al., 2004 ; van der Laan, 2013 ; Luo et al., 2019 ). X-ray
magnetic circular dichroism (XMCD) can be used to determine the spin and orbital magnetic moments using sum rules (Carra et al., 1993 ) while X-ray magnetic linear dichroism (XMLD) can be used to
determine the site symmetry, anisotropic magnetic moments and spin–orbit interaction (Lüning et al., 2003 ; Csiszar et al., 2005 ; Arenholz et al., 2006 ; Finazzi et al., 2006 ; van der Laan et al.,
2011 ; Chen et al., 1992 , 2010 ; Iga et al., 2004 ). However, using dichroism experiments for magnetometry is far from being straightforward because it requires an understanding of the spectral
shape and magnitude of the dichroism signal as well as its dependence on the relative orientation of the X-ray polarization, the exchange field and the crystallographic axes.
The aim of this work is to provide a general method to construct and analyse dichroism effects in dipole transitions such as at the Fe L[2,3]-edge in magnetite (Fe[3]O[4]). We illustrate the
procedure to build the conductivity tensor from a few well chosen experimental measurements describing all possible dichroism effects at a single magnetic field orientation. Furthermore, the angular
dependence of the magnetic field is discussed using a set of fundamental spectra expanded using spherical harmonics which can describe the full magneto-optical response of the system (Haverkort et
al., 2010 ). Such expansions have been used previously to explain the angular dependence of XMLD (Arenholz et al., 2006 , 2007 ; van der Laan et al., 2008 , 2011 ), yet the new aspect we provide in
this work is a thorough inspection of the convergence of the expansion using a comprehensive set of XMLD data measured on Fe[3]O[4] in combination with theoretical calculations. Fe[3]O[4] serves as
an adequate model system: it is a ferrimagnetic mixed-valence strongly correlated system containing two different Fe sites, where Fe^3+ ions reside in tetrahedrally (T[d]) coordinated interstices (A
sites), while both Fe^2+ and Fe^3+ ions are in octahedrally (O[h]) coordinated interstices (B sites). This provides us with an opportunity to study the effect of the electronic structure on the quality
of the expansion between the orbital singlet Fe^3+ and the orbital triplet Fe^2+ ions.
2. Methods
2.1. Experimental
The Fe[3]O[4] thin-film was grown on a conductive 0.1% Nb-doped SrTiO[3] (001) TiO[2]-terminated substrate using pulsed laser deposition as reported by Hamed et al. (2019 ). The film thickness and
surface roughness were concluded to be 38.85nm and 0.4nm, respectively, from X-ray reflectivity measurements (Fig. 10 of Appendix A ). The Verwey transition was observed at 114.97 ± 0.29K from the
magnetization versus temperature measurements in zero field cooling mode with 500Oe applied field (Fig. 11 of Appendix A ). Hysteresis measurements were also performed along the [1,0,0] direction to
inspect the saturation of the thin-film below and above the Verwey transition (Fig. 11 of Appendix A ). The largest coercivity is observed for the lowest temperature (H[c] = 0.1T) and an external
magnetic field of ∼0.25T is required to saturate the in-plane magnetization (see Fig. 12 of Appendix A ). In contrast, the magnetization is not saturated along the [0,0,1] direction with a field
of H = 2T, as shown by the XMCD measurement in Fig. 13 of Appendix A .
X-ray absorption spectroscopy (XAS) measurements were carried out on beamline I06 of Diamond Light Source, UK. The beam spot at the sample position was estimated to be ∼200µm × 100µm. The
polarization of the beam can be controlled using an Apple-II type undulator to produce linearly and circularly polarized X-rays. A vector magnet set to 1T was used to saturate the magnetization to
(nearly) any arbitrary direction. All measurements were performed at T = 200K in a normal-incidence configuration, i.e. with the incoming beam impinging at an angle of 90° with respect to the sample
surface. The energy resolution was estimated to be ∼200meV full width at half-maximum (FWHM). The measurements were performed in total electron yield mode. All experimental spectra were first
normalized to the incident photon flux. The spectra were then fitted using a model consisting of two error functions to take into account the L[2,3]-edge jumps. In addition, a set of Gaussian
functions were used to fit the multiplet features of the spectra (refer to Appendix B for more details). The L[2,3]-edge jumps were subtracted from the spectra and the spectra were renormalized to
the spectral area.
2.2. Computational
The data treatment and the Kramers–Kronig transformation were performed using Python. Crystal field multiplet calculations were performed using the quantum many-body program Quanty (Haverkort et al.,
2012 ). The Hamiltonian we use is of the form
The electron–electron Hamiltonian (H[e−e]) is of the form
where F^k (f[k]) and G^k (g[k]) are the Slater–Condon parameter for the radial (angular operators) part of the direct and exchange Coulomb interactions, respectively. The radial integrals are
obtained from atomic Hartree–Fock calculation scaled to 70% and 80% for valence and valence-core interactions, respectively, to take into account interatomic screening and mixing effects. This is in
line with works in the literature such as those by Arenholz et al. (2006 ) and Pattrick et al. (2002 ). This reduction is related to two effects: (i) the 80% reduction is to correct the Hartree–Fock
calculations such that they agree with atomic data, as shown by Cowan (1981 ) and many others; (ii) the additional reduction to 70% is to take into account the effects of charge transfer; in other
words, the nephelauxetic effects.
The spin–orbit Hamiltonian (H[SO]) is of the form
where l[i] and s[i] are the one electron orbital and spin operators, respectively, and the sum over i is over all electrons. The prefactor ξ is an atom-dependent constant (which is to a good
approximation material independent) and hence we used here tabulated data for this with ξ = 0.052eV for 3d orbitals and ξ = 8.20eV for 2p orbitals.
The crystal field Hamiltonian (H[CF]) is of the form
where C[k,m](θ,ϕ) are the angular crystal field operators expanded on renormalized spherical harmonics and A[k,m] are proportional to the distortion parameters used in crystal field theory, 10D[q], D
[s] and D[t]. In cubic symmetry we consider only 10D[q] which we found to be 1.25eV and 0.5eV for the B and A sites, respectively, by fitting to the XAS and XMCD spectra. The optimized parameters
used for the calculations can be found in Tables 4, 5 and 6. Details of the ground state for the three Fe ions in Fe[3]O[4] are shown in Appendix C . We note that we have not taken into account
charge transfer effects explicitly in our model. The mixing of iron and oxygen orbitals gives rise to charge transfer effects in core level spectroscopies. It has been shown that neutral experiments
on relatively ionic systems map very accurately to the crystal field multiplet model [refer to de Groot & Kotani (2008 ) for example]. This is the basis of crystal field theory, where the
hybridization is effectively taken care of by the reduction of the Slater integrals from their atomic values, i.e. an extra reduction with respect to the 80% reduction of the Hartree–Fock values.
Finally, the magnetic exchange Hamiltonian is given as
where S is the spin operator, n is a unit vector giving the direction of the magnetization and J[exch] is the magnitude of the mean-field exchange interaction which we use as 90meV in our
calculation. This value is based on previous 2p3d RIXS measurements that showed that the spin-flip excitation is observed at this energy [see, for example, Huang et al. (2017 ) and Elnaggar et al.
(2019a ,b )].
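The equations referenced in this section (equations 1–5) did not survive the text extraction. For orientation only, standard forms consistent with the verbal descriptions above would read roughly as follows; the exact expressions, signs and normalizations are those of the published article, not of this sketch:

$$H = H_{\mathrm{e-e}} + H_{\mathrm{SO}} + H_{\mathrm{CF}} + H_{\mathrm{exch}}$$
$$H_{\mathrm{e-e}} = \sum_k \left(F^k f_k + G^k g_k\right), \qquad H_{\mathrm{SO}} = \xi \sum_i \vec{l}_i \cdot \vec{s}_i$$
$$H_{\mathrm{CF}} = \sum_{k,m} A_{k,m}\, C_{k,m}(\theta,\phi), \qquad H_{\mathrm{exch}} = J_{\mathrm{exch}}\, \vec{S} \cdot \hat{n}$$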
3. Results and discussion
3.1. Construction of the conductivity tensor
The general XAS cross-section can be expressed by equation (6) where ε is the polarization vector, Im is the imaginary part of the equation and σ is the conductivity tensor describing the material
properties (Haverkort et al., 2010 ),
The conductivity tensor is a 3 × 3 matrix for a dipole transition as shown in equation (7) . The matrix elements of the conductivity tensor are defined in equation (8) where ψ is the ground state
wavefunction, T[x(y)] = ε[x(y)]·r[x(y)] is the dipole transition operator, H is the Hamiltonian (taking into account the core-hole effect) and γ is the Lorentzian broadening given by the core-hole
lifetime [L[3] = 200meV and L[2] = 500meV half width at half-maximum (HWHM) (de Groot, 2005 ); L[2] has a larger lifetime broadening due to the Coster–Kronig Auger decay].
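To make equations (6)–(8) concrete, a minimal numerical sketch of the polarization dependence is given below (Python/NumPy, not from the article); the overall sign, prefactor and the handedness convention for the circular polarizations are assumptions.

```python
import numpy as np

def xas_intensity(sigma, eps):
    """Polarization-dependent XAS from a 3x3 conductivity tensor (one energy point).

    sigma : (3, 3) complex ndarray, conductivity tensor in the crystal frame
    eps   : length-3 (possibly complex) polarization vector

    Returns Im[eps* . sigma . eps]; sign and prefactors are omitted.
    """
    eps = np.asarray(eps, dtype=complex)
    eps = eps / np.linalg.norm(eps)
    return float(np.imag(np.conj(eps) @ sigma @ eps))

# Example polarizations for a beam along z (handedness convention assumed):
eps_LH = np.array([1.0, 0.0, 0.0])                    # linear horizontal
eps_LV = np.array([0.0, 1.0, 0.0])                    # linear vertical
eps_CL = np.array([1.0, 1.0j, 0.0]) / np.sqrt(2.0)    # circular "left"
eps_CR = np.array([1.0, -1.0j, 0.0]) / np.sqrt(2.0)   # circular "right"
```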
In the most general case, nine independent measurements are required to fully reconstruct the conductivity tensor. However, the crystal symmetry can simplify the conductivity tensor by dictating the
equivalence between matrix elements or cancelling out some of the matrix elements. For a cubic crystal system with the magnetic field aligned parallel to the high symmetry [1,0,0] direction, only
five of the nine matrix elements are non-zero (three diagonal elements: σ[xx], σ[yy], σ[zz]; and two off-diagonal elements: σ[yz] and σ[zy]). The cubic crystal field implies that the x, y and z
directions are equivalent by symmetry; however, if the external magnetic field is aligned to the x axis (and consequently the magnetization), it breaks the equivalency. For this reason, σ[xx] will be
different from σ[zz] and σ[yy]. In addition, the magnetization along x induces off-diagonal terms σ[yz] (σ[zy]) leading to a scenario where an electric field in the y(z) direction can produce an
excitation in the z(y) direction. The off-diagonal terms cannot be directly measured; however, they can be reconstructed from linear combinations of XAS measurements. We first focus on reconstructing
the terms σ[xx], σ[yy], σ[xy] and σ[yx]; hence four independent XAS measurements were performed for this purpose as shown in Table 1 .
Measured Constructed
ε[LH] σ[xx] = XAS[LH]
σ[yy] = XAS[CL] + XAS[CR] − XAS[LH]
The response function is a complex quantity and one needs to compute the real part of the function. The real and the imaginary parts of the response function are related to each other through the
Kramers–Kronig relation, which allows the computation of the real part from the XAS measurements [see Figs. 1 (b) and 1(c)]. Linear combinations of these measurements can now be created according to
Table 1 to give the matrix elements of the conductivity tensor. The four matrix elements (σ[xx], σ[yy], σ[xy] and σ[yx]) are shown in Fig. 1 (d). One notices that σ[xx], σ[yy] are different which
results in a significant XMLD [see Fig. 1 (e)]. The off-diagonal terms σ[xy] and σ[yx] are about 50 times smaller than the diagonal terms and are roughly equal. These symmetric off-diagonal
contributions are possibly due to the presence of small non-cubic distortion in the thin-film. In contrast to the XMLD, the XMCD is negligible as can be seen in Fig. 1 (f).
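A hedged sketch of the reconstruction step described above (combining the measured spectra according to Table 1 and obtaining the real part through a Kramers–Kronig step) might look as follows; the array names are placeholders for the measured data, and the discrete Hilbert transform is only a crude stand-in for a careful Kramers–Kronig integration (sign conventions and the finite energy window would need attention).

```python
import numpy as np
from scipy.signal import hilbert

def kk_real_part(im_part):
    """Crude real part from the imaginary part via a discrete Hilbert transform;
    a careful Kramers-Kronig treatment would extend and taper the finite window."""
    return np.imag(hilbert(im_part))

def diagonal_elements(xas_lh, xas_cl, xas_cr):
    """sigma_xx and sigma_yy from the measured spectra, following Table 1.
    xas_lh, xas_cl, xas_cr: 1-D arrays on a common energy grid (placeholder names)."""
    im_xx = np.asarray(xas_lh, dtype=float)
    im_yy = np.asarray(xas_cl, dtype=float) + np.asarray(xas_cr, dtype=float) - im_xx
    sigma_xx = kk_real_part(im_xx) + 1j * im_xx
    sigma_yy = kk_real_part(im_yy) + 1j * im_yy
    return sigma_xx, sigma_yy
```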
The same procedure can be used to reconstruct the conductivity tensor with the magnetic field aligned to the [0,0,1] direction from four XAS measurements. These matrix elements are shown in Fig. 2 .
A striking difference can be observed in comparison with Fig. 1 (d): the off-diagonal terms σ[xy] and σ[yx] are nearly an order of magnitude stronger and are antisymmetric where σ[xy] ≃ −σ[yx]. The
off-diagonal term differences seen are likely due to small misalignments in the orientation of the magnetization in particular given that the [0,0,1] direction is a magnetically hard direction and
does not saturate with 1T (see Fig. 13 of Appendix A ) in combination with the presence of small non-cubic crystal distortion in the thin-film as we showed in earlier work (Elnaggar et al., 2020 ).
The antisymmetric off-diagonal elements result in a significant XMCD as seen in Fig. 2 (c). On the other hand, the XMLD signal is negligible because σ[xx] ≃ σ[yy] [refer to Fig. 2 (b)].
The full conductivity tensor can be created by merging together the matrix elements obtained with B || [1,0,0] and B || [0,0,1] as shown in Fig. 3 . This procedure assumes that the crystal field is
cubic, which is an acceptable assumption given that the off-diagonal matrix elements related to the non-cubic crystal field are very small. With the full conductivity tensor at hand, we can compute
XAS for any arbitrary polarization using equation (6) . As such, we consider the polarization dependence as it is rotated from [1,0,0] to [0,1,0] in Fig. 4 . The computed isotropic XAS and the
polarization dependence at E = 706.4eV (red), 707.4eV (green), 708.4eV (magenta) and 709.4eV (blue) are shown in Figs. 4 (a) and 4(b), respectively. The measured polarization dependences at these
energies are shown in the bottom row of Fig. 4 (b) and agree very well with the computation using the full conductivity tensor.
3.2. Magnetic field dependence
The conductivity tensor shown in Fig. 3 gives the response function of the system at a certain magnetic field direction. It is of interest to find the full magneto-optical response of the system as
it provides information about the anisotropic magnetic spin–orbit interaction and magnetic moments (van der Laan, 1998 ; Dhesi et al., 2001 , 2002 ). Symmetry operations of the crystal can be used to
relate the conductivity tensor with different magnetic field directions. For example, in a cubic crystal system the conductivity tensors with the magnetic field along x, y and z transform into each
other through a 90° rotation. Similar symmetry arguments can be used to relate the conductivity tensor as a function of the magnetic field for different crystal symmetries. Haverkort et al. showed
that the conductivity tensor can be expressed as a sum of linear independent spectra multiplied by functions depending on the local magnetization direction as given in equation (9) (Haverkort et al.,
2010 ),
Here θ and ϕ define the direction of the local moment with θ being the polar angle, and ϕ being the azimuthal angle. Y[k,m](θ,ϕ) is a spherical harmonic function and σ[i,j] is the i,j component of
the conductivity tensor on a basis of linear polarized light in the coordinate system of the crystal. This expression allows one to describe the full, magnetic field directional dependent,
magneto-optical response of a system by using only a few linearly independent fundamental spectral functions. This expression may be simplified for certain crystal systems, as we will discuss in the following subsections.
3.2.1. Spherical field expansion
The crystal field splitting can in some systems be small (in comparison with other interactions such as spin–orbit coupling in rare-earth compounds) and the crystal symmetry can be considered to be
nearly spherical. This approximation implies that the spectral shape modification is solely determined by the relative orientation between the magnetization and the polarization. Three fundamental
spectra (σ^(0), σ^(1) and σ^(2)) connected to the spherical harmonics Y[0,0](θ,ϕ), Y[1,0](θ,ϕ) and Y[2,0](θ,ϕ) are required to describe the conductivity tensor with an arbitrary magnetization
direction [equation (10) ],
The three fundamental spectra of the spherical expansion, σ^(0), σ^(1) and σ^(2), in Fe[3]O[4] are shown in Fig. 5 (a) and can be used to compute XAS spectra for any orientation of the magnetization.
To evaluate the quality of the expansion, we start by comparing the measured and the computed magnetic field angular dependence of XMLD [I[XMLD] = I(ϕ) − I(90°)] with linear horizontal polarization
where the magnetic field is rotated from [1,0,0] to [0,0,1] [see Fig. 5 (b)]. The expansion reproduces the measured angular dependence well and only minor discrepancies in the absolute intensities
are observed. The angular dependence in this case is given by equation (11) where the two fundamental spectra σ^(0) and σ^(2) come into play,
A more interesting case can be observed when the magnetic field angular dependence is measured with the polarization rotated 30° clockwise from the [1,0,0] direction {i.e. ε || [cos(30°),− sin
(30°),0]} as shown in Fig. 5 (c). Contrary to the results with linear horizontally polarized light, a strong deviation from spherical symmetry is now observed. The expected angular dependence from
the spherical field expansion should follow equation (12) ; however, the spherical field expansion completely breaks down when the polarization is aligned to a low symmetry direction. This is not a
surprising result for Fe[3]O[4] as its crystal structure is cubic and the crystal field splitting between the t[2g] and e[g] orbitals parametrized through 10D[q] is ∼1eV for Fe in Fe[3]O[4] while
the spin–orbit coupling is ∼0.05eV and the mean field exchange interaction is ∼0.09eV. These values suggest that the crystal field cannot be neglected, and a spherical field expansion consequently
cannot describe the magneto-optical response of Fe in Fe[3]O[4] well.
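For concreteness, a short numerical sketch of how the spherical expansion of equation (10) would be evaluated is given below; the normalization of the Y[k,0] harmonics and the way the fundamental spectra enter are assumptions chosen for illustration and may differ from the paper's conventions.

```python
import numpy as np

def sigma_of_theta(theta, sig0, sig1, sig2):
    """One component of the conductivity tensor for a magnetization at polar
    angle theta (radians), built from the k = 0, 1, 2 fundamental spectra.
    Real Y_k0 harmonics are written out explicitly; overall normalization
    is an assumption."""
    y00 = 1.0 / np.sqrt(4.0 * np.pi)
    y10 = np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(theta)
    y20 = np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * np.cos(theta) ** 2 - 1.0)
    return sig0 * y00 + sig1 * y10 + sig2 * y20

def xmld(theta_deg, sig0, sig1, sig2):
    """XMLD as used in the text: I(angle) - I(90 deg)."""
    return (sigma_of_theta(np.radians(theta_deg), sig0, sig1, sig2)
            - sigma_of_theta(np.radians(90.0), sig0, sig1, sig2))
```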
3.2.2. Cubic field expansion
The local symmetry of the Fe in Fe[3]O[4] is nearly cubic (Bragg, 1915 ), and therefore a more realistic treatment would be to perform a cubic field expansion. In this case, distinctly different
measurements can be taken, for example along the fourfold and the threefold symmetry axes, and the fundamental spectra of order k branch according to their symmetry representations in the cubic point
group. This is shown in equation (13), where σ^(2) branches into its cubic symmetry components [Fig. 6 (a)]. Furthermore, higher-order k terms also appear in the cubic expansion (Haverkort et al., 2010).
A comparison between the measured and computed magnetic field angular dependence of XMLD with linear horizontal polarization can be seen in Fig. 6 (b). The agreement between the measurements and the
computed field dependence for the spherical and cubic field expansions are of similar quality. The field dependence in this case is given by equation (14) which is of the same form as the spherical
field expansion [compare equation (14) with equation (11) ] and the fundamental spectra involved are very similar [compare Fig. 6 (a) with Fig. 5 (a)],
However, contrary to the spherical expansion results, an excellent agreement between the cubic field expansion and the magnetic field angular dependence performed with rotated polarization is now
observed in Fig. 6 (c). The reason for this improvement is the branching of the σ^(2) fundamental spectrum into its cubic components, which highlights the role of XAS with the polarization aligned to a low symmetry direction as a sensitive probe of the crystal symmetry.
3.2.3. Convergence of the field expansion
In symmetries lower than spherical, the expansion of the spin (or the magnetic field) direction on spherical harmonics does not truncate at finite k. There is thus, in principle, an infinite number
of linearly independent fundamental spectra. Not all of them are important and most of them will be of very low intensity. We have included in our previous analysis terms up to k = 3. The cubic field
expansion showed a satisfactory agreement with the experimental data and only small discrepancies were observed. Here we investigate theoretically the origin of these discrepancies and the convergence
of the field expansion. The quality of an expansion on the spin (or magnetic field direction) is foreseen to depend on the details of the ground state. This is because the magnetization direction
depends on both the orbital and spin moments and hence whether the valence orbital moment is quenched or not will affect the efficiency of the expansion. Fe[3]O[4] contains both types of ions, Fe^3+
and Fe^2+, providing us with an excellent opportunity to test the effect of the ground state for the two cases. We approach this by calculating the field dependence of XMLD in two ways:
(1) Performing a new full XMLD calculation for every magnetic field orientation.
(2) Computing once the conductivity tensor in equation (13) and then generating the field dependence of XMLD from the cubic fundamental spectra.
The first method is exact and involves no approximations. On the other hand, the accuracy of the second method depends on the order of the expansion used in the calculation. Let us first consider the
magnetic field (B) angular dependence probed with the linear polarized X-rays aligned to [1,0,0] where B is rotated about [0,0,1] at ϕ = 0°. The XMLD signal [computed as XMLD = XAS(ϕ) − XAS(90°)] for
Fe^3+ and Fe^2+ in O[h] symmetry is shown in Figs. 7 (a) and 7(b), respectively. The exact calculations (solid lines) and the cubic field expansion (dashed lines) match well which can initially
suggest that the series expanded up to k = 3 is sufficient to describe XMLD in 3d transition metal oxides. Similar conclusions were reached by Arenholz et al. (2006 , 2007 ) and van der Laan et al.
(2008 , 2011 ). However, a difference between the convergence of the series for both ions can be seen when the polarization is aligned parallel to [cos(30°),−sin(30°),0] [Figs. 7 (c) and 7(d)]. Only
minor discrepancies are observed for Fe^3+ while a larger disagreement is observed for Fe^2+.
The reason behind the mismatch observed lies in the ground state of Fe^2+. In the absence of a magnetic/exchange field, the ground state of the Fe^2+ ion in O[h] symmetry is ^5T[2g] composed of
15-fold degenerate states. This degeneracy is split by exchange and spin–orbit interactions leading to a ground state characterized by the spin and orbital momenta projections s[z] = 1.971 (±0.01)
and l[z] = 0.98 (±0.20). On the other hand, the ground state of Fe^3+ is characterized by the spin and orbital projections s[z] = 2.498 (±0.001) and l[z] = 0.001 (±0.001). These values are obtained
using the wavefunction calculated by solving equation (1) . The reported errors are obtained from the errors in the distortion parameters that are obtained by fitting the XMCD signal using our
calculations. We note that the ground state of Fe^3+ in T[d] symmetry is almost identical to that in O[h] symmetry and therefore we focus here on the O[h] sites (refer to the Appendix C for more
details). As the magnetic field is rotated, the spin moment follows the field for the Fe^3+ as shown in Fig. 8 (a). In the case of Fe^2+, however, the coupling between the orbital and spin momenta
results in a scenario where neither the spin nor the orbital moments follow the rotation of the magnetic field [see Fig. 8 (b)] due to the magnetocrystalline anisotropy (Alders et al., 2001 ). The
spin moment can be phase shifted from the direction of the magnetic field by ∼0.5°, while the orbital moment can lag by ∼4° in some directions. This causes the series to converge more slowly and hence
higher orders of k are required. This is further confirmed by the calculation in Fig. 9 (a) where the valence spin–orbit coupling is artificially switched off for Fe^2+. Now the cubic field expansion
reproduces the XMLD exquisitely well and no phase shift is observed [Fig. 9 (b)].
4. Conclusions
In conclusion, we illustrated the procedure to build the conductivity tensor from experimental measurements which describes the full magneto-optical response of the system. Applied to the Fe L[2,3]
-edge of a 38.85nm Fe[3]O[4]/SrTiO[3] (001) thin-film, we showed that the convergence of the cubic expansion depends on the details of the ground state. The key aspect that affects the convergence
of the expansion in this work is the valence state spin–orbit interaction. While the cubic expansion explains the angular dependence of the XMLD of Fe^3+ with terms up to the third order,
higher-order terms are required for Fe^2+. This conclusion is expected to apply for other systems where the valence orbital moments are not quenched.
Sample characterization
A1. X-ray reflectivity measurement
The Fe[3]O[4] (001)/SrTiO[3] film thickness, interface and surface roughness were examined by X-ray reflectivity using a Philips XPert MRD with Cu K[α] radiation (see Fig. 10 ). The film thickness
was concluded to be ∼38.85nm. The surface roughness was concluded to be ∼0.4nm on average.
A2. Magnetic measurement
Bulk magnetic properties of the Fe[3]O[4] (001)/SrTiO[3] thin-film were investigated using a quantum design dynacool physical properties measurement system. Magnetic moment versus temperature was
measured in zero-field cooling mode with 500Oe applied field [Fig. 11 (a)]. A clear peak can be observed at T = 114.97 ± 0.29K in the derivative signal (the Verwey transition) confirming the good
stoichiometry and quality of the thin-film. A comparison between the derivative signal between the thin-film used for this work and a single crystal of the same orientation is shown in Fig. 11 (b).
The Verwey transition is significantly broader for the thin-film (FWHM = 13.03 ± 0.71K, centre = 114.97 ± 0.29K) in comparison with the single crystal (FWHM = 1.40 ± 0.01K, centre = 127.08 ±
0.01K). This could be related to defects and domain formations in the thin-film which is typical for growth on SrTiO[3].
Hysteresis loop measurements were also performed along the [1,0,0] axis to inspect the saturation of the film below and above the Verwey transition parallel to the in-plane [1,0,0] direction (Fig. 12
). The largest coercivity is observed for the lowest temperature (H[c] = 0.1T) and an external magnetic field of 0.25T is required to saturate the in-plane magnetization. In contrast, the
magnetization is not saturated along the [0,0,1] direction with a field of H = 2T, as shown by the XMCD signal in Fig. 13 .
Data treatment
In order to compare the XAS results measured at different magnetic field orientations it is necessary to normalize the spectra. The edge jumps (L[3] and L[2]) were fitted by two error functions
positioned at 708.8eV and 721.5eV, respectively [refer to the grey dashed line in Fig. 14 (a)]. The multiplet features of the spectra were fitted by a set of Gaussian functions. Six Gaussian
functions were used to fit the L[3] part of the spectra [red peaks in Fig. 14 (a)] and four to fit the L[2] [blue peaks in Fig. 14 (a)]. Only the amplitudes of the functions were allowed to vary
between data sets, while the energy positions and widths were kept constant for all the data. Table 2 (Table 3 ) shows the centre and width of the Gaussian peaks used for the L[3] (L[2]) edge. We
normalized the spectra by setting the L[2] edge jump to unity. The L[2,3] edge jumps were subtracted from the data as shown in Fig. 14 (b). Finally, the spectra were normalized to the spectral area.
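A hedged sketch of the normalization model described above (two error-function edge jumps plus Gaussians with fixed centres and widths, only the amplitudes floating) could be set up as follows; the edge-jump width and height, and the interpretation of the tabulated widths as HWHM, are assumptions.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def edge_jump(E, E0, height, width=0.3):
    """Smooth step (error function) modelling an absorption edge jump."""
    return 0.5 * height * (1.0 + erf((E - E0) / width))

def gaussian(E, centre, hwhm, amp):
    sigma = hwhm / np.sqrt(2.0 * np.log(2.0))   # tabulated widths assumed to be HWHM
    return amp * np.exp(-0.5 * ((E - centre) / sigma) ** 2)

# Fixed centres/widths from Table 2 (L3 edge); only the amplitudes float.
CENTRES = [706.424, 707.637, 708.631, 710.097, 712.131, 715.302]
WIDTHS  = [0.643, 0.561, 0.793, 1.115, 1.252, 2.447]

def l3_model(E, jump_height, *amps):
    y = edge_jump(E, 708.8, jump_height)        # edge-jump width/height are assumptions
    for c, w, a in zip(CENTRES, WIDTHS, amps):
        y = y + gaussian(E, c, w, a)
    return y

# popt, pcov = curve_fit(l3_model, energy, spectrum, p0=[1.0] + [0.5] * len(CENTRES))
```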
Peak Centre (eV) HWHM (eV)
1 706.424 0.643
2 707.637 0.561
3 708.631 0.793
4 710.097 1.115
5 712.131 1.252
6 715.302 2.447
Peak Centre (eV) HWHM (eV)
1 719.122 0.527
2 720.908 0.963
3 722.880 1.343
4 724.852 1.422
Ground state of Fe ions in Fe[3]O[4]
C1. Fe^2+ in octahedral symmetry
The high-spin Fe^2+ ion in octahedral symmetry has a 15-fold degenerate ^5T[2g] state [shown in Fig. 15 (a)], which is split by the exchange interaction [90meV applied according to equation (5) ], lowering the degeneracy
and leading to a triplet ground state as illustrated in Fig. 15 (b). The spin–orbit coupling finally lifts all the degeneracies as shown in Fig. 15 (c). The ground state is 99.8% pure in terms of crystal
field configuration, has an occupation |(t[2g])^3.9997(e[g])^2.0003〉 and is 97.21% composed of the state characterized by m[z] = −2 and l[z] = −1. We point out that the first and second excited
states are ∼22 and 53meV higher in energy than the ground state which leads to a Boltzmann occupation of ∼75.8%, 20.6% and 3.6% of the ground, first and second excited states, respectively, at
200K. These have been included in the calculation of the spectra. All the parameters used in the multiplet calculations of Fe^2+ XAS are reported in Table 4 .
Initial state Final state
Parameter (eV) (eV) Comment
10Dq 1.25 1.25 Similar to the values reported by Pattrick et al. (2002 ) and Arenholz et al. (2006 ) for Fe[3]O[4]. This crystal field parameter reproduces well XAS,
XMCD and XMLD measurements in Fe[3]O[4].
F^ 2[dd] 7.676 8.245 Atomic Hartree–Fock calculation scaled to 70% to take interatomic screening and mixing. This is in line with the literature such as the work by
F^ 4[dd] 4.771 5.129 Pattrick et al. (2002 ) and Arenholz et al. (2006 ).
F^ 2[pd] – 5.434 Atomic Hartree–Fock calculation scaled to 80%. This is in accordance to the literature such as Pattrick et al. (2002 ) and Arenholz et al. (2006 ).
G^1[pd] – 3.208
G^3[pd] – 2.274
ξ[d] 0.052 0.052 Atomic value which is a reasonable approximation as the spin–orbit coupling is nearly an atomic quantity that is material independent.
ξ[p] – 8.2
J[exch] 0.09 0.09 This value is based on previous 2p3d RIXS measurements that showed that the spin-flip excitation is observed at this energy [see, for example Huang et
al. (2017 ) and Elnaggar et al. (2019a ,b )].
Lifetime broadening L[3]: 0.2, L[2]: 0.5 The lifetime broadening for the L[3] used is 0.2eV and for L[2] is 0.5eV.
C2. Fe^3+ in octahedral and tetrahedral symmetry
The ground states of Fe^3+ in O[h] and T[d] symmetries are almost identical with s[z] = 2.499 and l[z] = 0.001. Indeed, the ^6A[1] splits to the doublet E[2] and quartet G states; however, this
splitting can be neglected (the splitting is less than 0.1meV and is thermally populated). This is illustrated in Figs. 16 and 17 where the ^6A[1(g)] state [panel (a)] is split mainly due to
exchange interaction [panel (b)] while spin–orbit coupling has negligible effect [panel (c)] for Fe^3+ in both T[d] and O[h] symmetries. We point out that the first excited state is ∼90meV higher in
energy than the ground state which leads to a Boltzmann occupation of ∼99.6% and 0.4% of the ground and first excited state, respectively, at 200K. We therefore only used the ground state in the
calculation of the spectra. All the parameters used in the multiplet calculations of Fe^3+ O[h] and T[d] XAS are reported in Tables 5 and 6 , respectively.
Initial state Final state
Parameter (eV) (eV) Comment
10Dq 1.25 1.25 Similar to the values reported by Pattrick et al. (2002 ), Liu et al. (2017 ) and Arenholz et al. (2006 ) for Fe[3]O[4]. This crystal field parameter
reproduces well XAS, XMCD and XMLD measurements in Fe[3]O[4].
F^ 2[dd] 8.429 8.972 Atomic Hartree–Fock calculation scaled to 70% to take interatomic screening and mixing. This is in line with the literature such as the work by Pattrick
F^ 4[dd] 5.274 5.616 et al. (2002 ) and Arenholz et al. (2006 ).
F^ 2[pd] – 5.956 Atomic Hartree–Fock calculation scaled to 80%. This is in accordance with the literature such as Pattrick et al. (2002 ) and Arenholz et al. (2006 ).
G^1[pd] – 4.450
G^3[pd] – 2.532
ξ[d] 0.052 0.052 Atomic value which is a reasonable approximation as the spin–orbit coupling is nearly an atomic quantity that is material independent.
ξ[p] – 8.2
J[exch] 0.09 0.09 This value is based on previous 2p3d RIXS measurements that showed that the spin-flip excitation is observed at this energy [see, for example, Huang et
al. (2017 ) and Elnaggar et al. (2019a ,b )].
Lifetime broadening L[3]: 0.2, L[2]: 0.5 The lifetime broadening for the L[3] used is 0.2eV and for L[2] is 0.5eV
Initial state Final state
Parameter (eV) (eV) Comment
10Dq −0.5 −0.5 Similar to the values reported by Pattrick et al. (2002 ), Liu et al. (2017 ) and Arenholz et al. (2006 ) for Fe[3]O[4]. This crystal field parameter reproduces well
XAS, XMCD and XMLD measurements in Fe[3]O[4]. We note that this is the total crystal field parameter as used in the crystal field multiplet model, i.e. including the
effective effects of charge transfer.
F^ 2[dd] 8.429 8.972 Atomic Hartree–Fock calculation scaled to 70% to take interatomic screening and mixing. This is in line with the literature such as the work by Pattrick et al. (2002
F^ 4[dd] 5.274 5.616 ) and Arenholz et al. (2006 ).
F^ 2[pd] – 5.956 Atomic Hartree–Fock calculation scaled to 80%. This is in accordance with the literature such as Pattrick et al. (2002 ) and Arenholz et al. (2006 ).
G^ 1[pd] – 4.450
G^ 3[pd] – 2.532
ξ[d] 0.052 0.052 Atomic value which is a reasonable approximation as the spin–orbit coupling is nearly an atomic quantity that is material independent.
ξ[p] – 8.2
J[exch] −0.09 −0.09 This value is based on previous 2p3d RIXS measurements that showed that the spin-flip excitation is observed at this energy [see, for example, Huang et al. (2017 );
Elnaggar et al. (2019a ,b )].
Lifetime broadening L[3]: 0.2, L[2]: 0.5 The lifetime broadening for the L[3] used is 0.2eV and for L[2] is 0.5eV.
We are thankful to R.-P. Wang, M. Ghiasi and M. Delgado for helping with the synchrotron measurements. The synchrotron experiments were performed at the I06 beamline, Diamond Light Source, UK under
proposal number SI-17588. We are grateful for the help of the beamline staff to setup and perform the experiments.
Funding information
The following funding is acknowledged: European Research Council (grant No. 340279 to Frank M. F. de Groot; award No. SI-17588 to Hebatalla Elnaggar).
Alders, D., Coehoorn, R. & de Jonge, W. J. M. (2001). Phys. Rev. B, 63, 054407.
Alders, D., Tjeng, L. H., Voogt, F. C., Hibma, T., Sawatzky, G. A., Chen, C. T., Vogel, J., Sacchi, M. & Iacobucci, S. (1998). Phys. Rev. B, 57, 11623–11631.
Arenholz, E., van der Laan, G., Chopdekar, R. V. & Suzuki, Y. (2006). Phys. Rev. B, 74, 094407.
Arenholz, E., van der Laan, G., Chopdekar, R. V. & Suzuki, Y. (2007). Phys. Rev. Lett. 98, 197201.
Bragg, W. H. (1915). Nature, 95, 561.
Carra, P., Thole, B. T., Altarelli, M. & Wang, X. (1993). Phys. Rev. Lett. 70, 694–697.
Chen, C. T., Tjeng, L. H., Kwo, J., Kao, H. L., Rudolf, P., Sette, F. & Fleming, R. M. (1992). Phys. Rev. Lett. 68, 2543–2546.
Chen, J. M., Hu, Z., Jeng, H. T., Chin, Y. Y., Lee, J. M., Huang, S. W., Lu, K. T., Chen, C. K., Haw, S. C., Chou, T. L., Lin, H.-J., Shen, C. C., Liu, R. S., Tanaka, A., Tjeng, L. H. & Chen, C. T. (2010). Phys. Rev. B, 81, 201102.
Cowan, R. (1981). The Theory of Atomic Structure and Spectra. University of California Press.
Csiszar, S. I., Haverkort, M. W., Hu, Z., Tanaka, A., Hsieh, H. H., Lin, H.-J., Chen, C. T., Hibma, T. & Tjeng, L. H. (2005). Phys. Rev. Lett. 95, 187205.
Dhesi, S. S., van der Laan, G. & Dudzik, E. (2002). Appl. Phys. Lett. 80, 1613–1615.
Dhesi, S. S., van der Laan, G., Dudzik, E. & Shick, A. B. (2001). Phys. Rev. Lett. 87, 067201.
Elnaggar, H., Sainctavit, P., Juhin, A., Lafuerza, S., Wilhelm, F., Rogalev, A., Arrio, M.-A., Brouder, C., van der Linden, M., Kakol, Z., Sikora, M., Haverkort, M. W., Glatzel, P. & de Groot, F. M. F. (2019a). Phys. Rev. Lett. 123, 207201.
Elnaggar, H., Wang, R.-P., Ghiasi, M., Yañez, M., Delgado-Jaime, M. U., Hamed, M. H., Juhin, A., Dhesi, S. S. & de Groot, F. (2020). Phys. Rev. Mater. 4, 024415.
Elnaggar, H., Wang, R.-P., Lafuerza, S., Paris, E., Tseng, Y., McNally, D., Komarek, A., Haverkort, M., Sikora, M., Schmitt, T. & de Groot, F. M. F. (2019b). Appl. Mater. Interfaces, 11, 36213–36220.
Finazzi, M., Brambilla, A., Biagioni, P., Graf, J., Gweon, G.-H., Scholl, A., Lanzara, A. & Duò, L. (2006). Phys. Rev. Lett. 97, 097202.
Groot, F. de (2005). Coord. Chem. Rev. 249, 31–63.
Groot, F. M. F. de & Kotani, A. (2008). Core Level Spectroscopy of Solids, 1st ed. CRC Press.
Hamed, M. H., Hinz, R. A. L., Lömker, P., Wilhelm, M., Gloskovskii, A., Bencok, P., Schmitz-Antoniak, C., Elnaggar, H., Schneider, C. M. & Müller, M. (2019). Appl. Mater. Interfaces, 11, 7576–7583.
Haverkort, M. W., Csiszar, S. I., Hu, Z., Altieri, S., Tanaka, A., Hsieh, H. H., Lin, H.-J., Chen, C. T., Hibma, T. & Tjeng, L. H. (2004). Phys. Rev. B, 69, 020408.
Haverkort, M. W., Hollmann, N., Krug, I. P. & Tanaka, A. (2010). Phys. Rev. B, 82, 094403.
Haverkort, M. W., Zwierzycki, M. & Andersen, O. K. (2012). Phys. Rev. B, 85, 165113.
Hillebrecht, F. U., Ohldag, H., Weber, N. B., Bethke, C., Mick, U., Weiss, M. & Bahrdt, J. (2001). Phys. Rev. Lett. 86, 3419–3422.
Huang, H. Y., Chen, Z. Y., Wang, R.-P., de Groot, F. M. F., Wu, W. B., Okamoto, J., Chainani, A., Singh, A., Li, Z.-Y., Zhou, J.-S., Jeng, H.-T., Guo, G. Y., Park, J.-G., Tjeng, L. H., Chen, C. T. & Huang, D. J. (2017). Nat. Commun. 8, 15929.
Iga, F., Tsubota, M., Sawada, M., Huang, H. B., Kura, S., Takemura, M., Yaji, K., Nagira, M., Kimura, A., Jo, T., Takabatake, T., Namatame, H. & Taniguchi, M. (2004). Phys. Rev. Lett. 93, 257207.
Kuiper, P., Searle, B. G., Rudolf, P., Tjeng, L. H. & Chen, C. T. (1993). Phys. Rev. Lett. 70, 1549–1552.
Laan, G. van der (1998). Phys. Rev. B, 57, 5250–5258.
Laan, G. van der (2013). J. Phys. Conf. Ser. 430, 012127.
Laan, G. van der, Arenholz, E., Chopdekar, R. V. & Suzuki, Y. (2008). Phys. Rev. B, 77, 064407.
Laan, G. van der, Telling, N. D., Potenza, A., Dhesi, S. S. & Arenholz, E. (2011). Phys. Rev. B, 83, 064409.
Liu, B., Piamonteze, C., Delgado-Jaime, M. U., Wang, R. P., Heidler, J., Dreiser, J., Chopdekar, R., Nolting, F. & de Groot, F. (2017). Phys. Rev. B, 96, 054446.
Lüning, J., Nolting, F., Scholl, A., Ohldag, H., Seo, J. W., Fompeyrine, J., Locquet, J.-P. & Stöhr, J. (2003). Phys. Rev. B, 67, 214433.
Luo, C., Ryll, H., Back, C. H. & Radu, F. (2019). Sci. Rep. 9, 18169.
Núñez Regueiro, M. D., Altarelli, M. & Chen, C. T. (1995). Phys. Rev. B, 51, 629–631.
Pattrick, R. A. D., Van Der Laan, G., Henderson, C. M. B., Kuiper, P., Dudzik, E. & Vaughan, D. J. (2002). Eur. J. Mineral. 14, 1095–1102.
Scholl, A., Stöhr, J., Lüning, J., Seo, J. W., Fompeyrine, J., Siegwart, H., Locquet, J.-P., Nolting, F., Anders, S., Fullerton, E. E., Scheinfein, M. R. & Padmore, H. A. (2000). Science, 287, 1014–1016.
This is an open-access article distributed under the terms of the Creative Commons Attribution (CC-BY) Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided
the original authors and source are cited. | {"url":"https://journals.iucr.org/s/issues/2021/01/00/ok5028/index.html","timestamp":"2024-11-07T23:52:33Z","content_type":"application/xhtml+xml","content_length":"275286","record_id":"<urn:uuid:32730a8d-4894-4d37-a886-31f20765e2fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00807.warc.gz"} |
SPLV: Invesco S&P 500 Low Volatility ETF | Logical Invest
What do these metrics mean?
'Total return, when measuring performance, is the actual rate of return of an investment or a pool of investments over a given evaluation period. Total return includes interest, capital gains,
dividends and distributions realized over a given period of time. Total return accounts for two categories of return: income including interest paid by fixed-income investments, distributions or
dividends and capital appreciation, representing the change in the market price of an asset.'
Using this definition on our asset we see for example:
• Compared with the benchmark SPY (109.2%) in the period of the last 5 years, the total return of 43.2% of Invesco S&P 500 Low Volatility ETF is lower, thus worse.
• During the last 3 years, the total return, or increase in value is 22.2%, which is lower, thus worse than the value of 33.3% from the benchmark.
'The compound annual growth rate isn't a true return rate, but rather a representational figure. It is essentially a number that describes the rate at which an investment would have grown if it had
grown at the same rate every year and the profits were reinvested at the end of each year. In reality, this sort of performance is unlikely. However, CAGR can be used to smooth returns so that they may
be more easily understood when compared to alternative investments.'
Applying this definition to our asset in some examples:
• The compounded annual growth rate (CAGR) over 5 years of Invesco S&P 500 Low Volatility ETF is 7.5%, which is lower, thus worse compared to the benchmark SPY (15.9%) in the same period.
• Compared with SPY (10.1%) in the period of the last 3 years, the annual return (CAGR) of 6.9% is smaller, thus worse.
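A minimal sketch of how these two quantities can be computed from a (dividend-adjusted) price series, assuming daily data and made-up numbers:

```python
import numpy as np

def total_return(prices):
    """Total return over the whole series, e.g. 0.432 means +43.2%."""
    return prices[-1] / prices[0] - 1.0

def cagr(prices, periods_per_year=252):
    """Compound annual growth rate from a regularly sampled price series."""
    years = (len(prices) - 1) / periods_per_year
    return (prices[-1] / prices[0]) ** (1.0 / years) - 1.0

prices = np.array([100.0, 103.0, 101.0, 108.0, 115.0])  # made-up daily closes
```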
'Volatility is a statistical measure of the dispersion of returns for a given security or market index. Volatility can either be measured by using the standard deviation or variance between returns
from that same security or market index. Commonly, the higher the volatility, the riskier the security. In the securities markets, volatility is often associated with big swings in either direction.
For example, when the stock market rises and falls more than one percent over a sustained period of time, it is called a 'volatile' market.'
Which means for our asset as example:
• Compared with the benchmark SPY (20.9%) in the period of the last 5 years, the 30 days standard deviation of 18.8% of Invesco S&P 500 Low Volatility ETF is lower, thus better.
• Looking at volatility of 12.9% in the period of the last 3 years, we see it is relatively smaller, thus better in comparison to SPY (17.6%).
'The downside volatility is similar to the volatility, or standard deviation, but only takes losing/negative periods into account.'
Applying this definition to our asset in some examples:
• Compared with the benchmark SPY (14.9%) in the period of the last 5 years, the downside risk of 13.6% of Invesco S&P 500 Low Volatility ETF is lower, thus better.
• During the last 3 years, the downside deviation is 9.1%, which is smaller, thus better than the value of 12.3% from the benchmark.
'The Sharpe ratio (also known as the Sharpe index, the Sharpe measure, and the reward-to-variability ratio) is a way to examine the performance of an investment by adjusting for its risk. The ratio
measures the excess return (or risk premium) per unit of deviation in an investment asset or a trading strategy, typically referred to as risk, named after William F. Sharpe.'
Applying this definition to our asset in some examples:
• Compared with the benchmark SPY (0.64) in the period of the last 5 years, the Sharpe Ratio of 0.26 of Invesco S&P 500 Low Volatility ETF is smaller, thus worse.
• Compared with SPY (0.43) in the period of the last 3 years, the Sharpe Ratio of 0.34 is smaller, thus worse.
'The Sortino ratio, a variation of the Sharpe ratio only factors in the downside, or negative volatility, rather than the total volatility used in calculating the Sharpe ratio. The theory behind the
Sortino variation is that upside volatility is a plus for the investment, and it, therefore, should not be included in the risk calculation. Therefore, the Sortino ratio takes upside volatility out
of the equation and uses only the downside standard deviation in its calculation instead of the total standard deviation that is used in calculating the Sharpe ratio.'
Applying this definition to our asset in some examples:
• Looking at the ratio of annual return and downside deviation of 0.36 in the last 5 years of Invesco S&P 500 Low Volatility ETF, we see it is relatively lower, thus worse in comparison to the
benchmark SPY (0.9)
• Looking at the ratio of annual return and downside deviation of 0.48 in the period of the last 3 years, we see it is relatively lower, thus worse in comparison to SPY (0.62).
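The four risk measures above can be computed from a series of periodic returns as sketched below; the annualization factor, the zero risk-free rate/target, and the exact windows (e.g. the 30-day standard deviation used on this page) are simplifying assumptions.

```python
import numpy as np

def ann_volatility(returns, periods_per_year=252):
    return np.std(returns, ddof=1) * np.sqrt(periods_per_year)

def downside_deviation(returns, periods_per_year=252, target=0.0):
    downside = np.minimum(np.asarray(returns) - target, 0.0)
    return np.sqrt(np.mean(downside ** 2)) * np.sqrt(periods_per_year)

def sharpe_ratio(returns, risk_free=0.0, periods_per_year=252):
    ann_return = np.mean(returns) * periods_per_year
    return (ann_return - risk_free) / ann_volatility(returns, periods_per_year)

def sortino_ratio(returns, risk_free=0.0, periods_per_year=252):
    ann_return = np.mean(returns) * periods_per_year
    return (ann_return - risk_free) / downside_deviation(returns, periods_per_year)

# returns would typically be computed as np.diff(prices) / prices[:-1]
```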
'The Ulcer Index is a technical indicator that measures downside risk, in terms of both the depth and duration of price declines. The index increases in value as the price moves farther away from a
recent high and falls as the price rises to new highs. The indicator is usually calculated over a 14-day period, with the Ulcer Index showing the percentage drawdown a trader can expect from the high
over that period. The greater the value of the Ulcer Index, the longer it takes for a stock to get back to the former high.'
Which means for our asset as example:
• The Ulcer Index over 5 years of Invesco S&P 500 Low Volatility ETF is 8.81 , which is lower, thus better compared to the benchmark SPY (9.32 ) in the same period.
• Looking at the Ulcer Index of 7.18 in the period of the last 3 years, we see it is relatively smaller, thus better in comparison to SPY (10 ).
'Maximum drawdown is defined as the peak-to-trough decline of an investment during a specific period. It is usually quoted as a percentage of the peak value. The maximum drawdown can be calculated
based on absolute returns, in order to identify strategies that suffer less during market downturns, such as low-volatility strategies. However, the maximum drawdown can also be calculated based on
returns relative to a benchmark index, for identifying strategies that show steady outperformance over time.'
Which means for our asset as example:
• Compared with the benchmark SPY (-33.7%) in the period of the last 5 years, the maximum reduction from previous high of -36.3% of Invesco S&P 500 Low Volatility ETF is lower, thus worse.
• Looking at the maximum reduction from previous high of -17.3% in the period of the last 3 years, we see it is relatively higher, thus better in comparison to SPY (-24.5%).
'The Maximum Drawdown Duration is an extension of the Maximum Drawdown. However, this metric does not explain the drawdown in dollars or percentages, rather in days, weeks, or months. It is the
length of time the account was in the Max Drawdown. A Max Drawdown measures a retrenchment from when an equity curve reaches a new high. It’s the maximum an account lost during that retrenchment.
This method is applied because a valley can’t be measured until a new high occurs. Once the new high is reached, the percentage change from the old high to the bottom of the largest trough is
Which means for our asset as example:
• Looking at the maximum time in days below previous high water mark of 545 days in the last 5 years of Invesco S&P 500 Low Volatility ETF, we see it is relatively greater, thus worse in comparison
to the benchmark SPY (488 days)
• Compared with SPY (488 days) in the period of the last 3 years, the maximum days under water of 545 days is larger, thus worse.
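A sketch of the drawdown-based metrics (drawdown series, maximum drawdown, drawdown durations and the Ulcer Index) from a price series; durations are counted in observations (trading days for daily data), and the Ulcer Index is computed over the whole period rather than a rolling 14-day window, which are simplifying assumptions.

```python
import numpy as np

def drawdown_series(prices):
    """Fractional drawdown from the running peak (0 at every new high)."""
    peaks = np.maximum.accumulate(prices)
    return prices / peaks - 1.0

def max_drawdown(prices):
    return drawdown_series(prices).min()           # e.g. -0.363 for a -36.3% drawdown

def max_drawdown_duration(prices):
    """Longest stretch (in observations) spent below a previous high."""
    longest = current = 0
    for dd in drawdown_series(prices):
        current = current + 1 if dd < 0 else 0
        longest = max(longest, current)
    return longest

def avg_drawdown_duration(prices):
    """Average length of the individual under-water stretches."""
    lengths, current = [], 0
    for dd in drawdown_series(prices):
        if dd < 0:
            current += 1
        elif current:
            lengths.append(current)
            current = 0
    if current:
        lengths.append(current)
    return float(np.mean(lengths)) if lengths else 0.0

def ulcer_index(prices):
    """Root-mean-square percentage drawdown over the whole period."""
    return float(np.sqrt(np.mean((drawdown_series(prices) * 100.0) ** 2)))
```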
'The Drawdown Duration is the length of any peak to peak period, or the time between new equity highs. The Avg Drawdown Duration is the average amount of time an investment has seen between peaks
(equity highs), or in other terms the average of time under water of all drawdowns. So in contrast to the Maximum duration it does not measure only one drawdown event but calculates the average of all of them.'
Which means for our asset as example:
• Looking at the average days below previous high of 170 days in the last 5 years of Invesco S&P 500 Low Volatility ETF, we see it is relatively higher, thus worse in comparison to the benchmark
SPY (123 days)
• During the last 3 years, the average days under water is 215 days, which is larger, thus worse than the value of 176 days from the benchmark. | {"url":"https://logical-invest.com/app/etf/splv/invesco-s-p-500-low-volatility-etf","timestamp":"2024-11-11T08:20:23Z","content_type":"text/html","content_length":"59463","record_id":"<urn:uuid:da56234a-2ff4-433e-93e7-79a2f74c611b>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00504.warc.gz"} |
Reflektometrie - Helmholtz-Zentrum Dresden-Rossendorf, HZDR
X-ray reflectometry (XRR)
The scattering geometry of an incident plane wave is scetched in Fig.1 and Fig.2.
Fig. 1 Scattering geometry of XRR
Fig. 2 Reflection and transmission at a plane surface
Because the refractive index of X-rays n differs for matter only by small amounts (e.g., δ ~ 10^-5, β ~ 10^-6) from the vacuum value (n[0]=1), n = 1 - δ - iβ, and |n|<1, total external reflection
occurs for incident angles α[i] smaller than the critical angle α[c]. The critical angle is (for β << δ)
α[c] ~ (2δ)^1/2 ~ Z (Z - atomic number)
The critical angle of total external reflection is small (~0.2°- 1° for a wavelength λ of ~ 0.1 nm) and interferences from thin films are only visible in a small range close to the critical angle.
Reflectometry is sensitive to the electron density and absorption and does not depend on crystallinity and crystal texture. The reflected intensity R[F ]is given by the known Fresnel equations. Fig.3
shows the R[F] of a perfect Si-vacuum interface.
Fig. 3 Reflectivity of a perfect Si-vacuum interface for different values of absorption and a wavelength of λ = 0.154 nm
• For α[i] << α[c] and β → 0 --> R = 1
(total reflection)
• For α[i] > α[c] --> R[F] ~ K[z]^-4
R[F] ~ α[i]^-4
Fig. 4 In some cases the surface roughness σ[rms] can be described as a Gaussian distribution of the height h of the hills and valleys around an average surface level h[0]
• Surface roughness σ[rms] is introduced by a Debye-Waller factor:
• R[F]^rough = R[F] exp(-K[z]^2σ[rms]^2)
• valid if the height distribution is Gaussian-like, as shown in Fig. 4 (see the sketch below).
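A minimal sketch of the Fresnel reflectivity with the roughness factor above (Python/NumPy, not from this page); units of nm, the identification of K[z] with the momentum transfer 2k sin α[i], and the example value of δ are assumptions.

```python
import numpy as np

def fresnel_reflectivity(alpha_deg, delta, beta, sigma_rms=0.0, wavelength=0.154):
    """Reflectivity of a single ideal (or rough) surface with n = 1 - delta - i*beta.

    alpha_deg in degrees; sigma_rms and wavelength in nm.  K_z is taken as the
    momentum transfer 2*k*sin(alpha), an assumption about the notation above."""
    a = np.radians(np.atleast_1d(alpha_deg))
    k = 2.0 * np.pi / wavelength
    kz0 = k * np.sin(a)                                                  # vacuum
    # +i*beta sign chosen for the exp(+ikz) convention used here; |r|^2 is unaffected
    kz1 = k * np.sqrt((1.0 - delta + 1j * beta) ** 2 - np.cos(a) ** 2)   # inside the medium
    r = (kz0 - kz1) / (kz0 + kz1)                                        # Fresnel coefficient
    Kz = 2.0 * kz0
    return np.abs(r) ** 2 * np.exp(-(Kz * sigma_rms) ** 2)               # Debye-Waller damping

# Critical angle for Si at Cu K-alpha (delta ~ 7.6e-6): about 0.22 degrees
alpha_c_deg = np.degrees(np.sqrt(2.0 * 7.6e-6))
```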
XRR on thin films and Multilayers
The reflected intensity may be calculated using a recurrence formalism which calculates the reflection coefficient starting from the lowest boundary (the substrate) up to the last one (the surface/vacuum interface).
With a code (based on the Parratt or matrix formalism) a simulation of XRR spectra can be done. The parameters film thickness (d), density (ρ) and roughness (σ[rms]) can be extracted from the
interference spectra, the critical angle, and the decrease of the reflectivity as schematically shown in Fig.5 for a Ta layer on a silicon substrate.
Fig. 5 Specular reflectivity of an oxidized Ta layer deposited on a Si-wafer (measured at ROBL with a wavelength of λ = 0.1033 nm)
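A hedged sketch of the recurrence (Parratt) formalism for a layer stack on a substrate is given below; the Nevot–Croce-like roughness damping and the optical constants in the usage comment are illustrative assumptions, not the model behind Fig. 5.

```python
import numpy as np

def parratt_reflectivity(alpha_deg, layers, substrate, wavelength=0.154):
    """Specular XRR of a layer stack on a substrate via the Parratt recursion.

    alpha_deg : grazing incidence angles in degrees (array-like)
    layers    : list of (delta, beta, thickness_nm, sigma_rms_nm), top layer first
    substrate : (delta, beta, sigma_rms_nm)
    """
    a = np.radians(np.atleast_1d(alpha_deg))
    k = 2.0 * np.pi / wavelength

    # media: 0 = vacuum, 1..N = layers, N+1 = substrate
    # (+i*beta sign chosen so absorption damps the wave for the exp(+ikz) phase below)
    n = [1.0 + 0.0j] + [1.0 - d + 1j * b for d, b, _, _ in layers] \
        + [1.0 - substrate[0] + 1j * substrate[1]]
    kz = [k * np.sqrt(nm ** 2 - np.cos(a) ** 2) for nm in n]

    thick = [lay[2] for lay in layers]                    # thickness of media 1..N
    sigma = [lay[3] for lay in layers] + [substrate[2]]   # one roughness per interface

    X = np.zeros_like(a, dtype=complex)
    for j in range(len(n) - 2, -1, -1):                   # deepest interface upwards
        r = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])     # Fresnel coefficient
        r = r * np.exp(-2.0 * kz[j] * kz[j + 1] * sigma[j] ** 2)   # roughness damping
        if j == len(n) - 2:                               # substrate interface: nothing below
            X = r
        else:
            phase = np.exp(2j * kz[j + 1] * thick[j])     # propagation in medium j+1
            X = (r + X * phase) / (1.0 + r * X * phase)
    return np.abs(X) ** 2

# Order-of-magnitude guesses (Cu K-alpha): a ~40 nm Ta-like oxide layer on Si
# alpha = np.linspace(0.05, 3.0, 600)
# R = parratt_reflectivity(alpha, [(4e-5, 3e-6, 40.0, 0.5)], (7.6e-6, 1.7e-7, 0.3))
```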
Experimental setup for XRR
The Figs. 6 and 7 show the diffractometer D5000 (SIEMENS) in theta-theta geometry with the special cutting slit device for reflecometry. The divergent beam of a sealed copper tube is matched into a
parallel beam by a Göbel mirror. With such a beam the cutting slit may have a larger slit width and, furthermore, in most cases one does not need a collimator-analyzer device. The gain in intensity
reduces the measuring time for specular scans extended to higher incidence angles. This is of special interest for the study of multilayers.
XRR is a non-destructive technique and can be used for samples with a sufficiently smooth surface.
Fig. 6 Scheme of experimental setup
Fig. 7 D5000 (SIEMENS) in theta-theta geometry with Göbel mirror and cutting slit device. | {"url":"https://www.hzdr.de/db/Cms?pNid=309&pOid=11657","timestamp":"2024-11-03T01:26:17Z","content_type":"text/html","content_length":"29039","record_id":"<urn:uuid:f2eb1a04-7aa5-4e7b-83c1-3881170424f9>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00339.warc.gz"} |
• Approximate matching and string distance calculations for R.
• All distance and matching operations are system- and encoding-independent.
• Built for speed, using openMP for parallel computing.
The package offers the following main functions:
• stringdist computes pairwise distances between two input character vectors (shorter one is recycled)
• stringdistmatrix computes the distance matrix for one or two vectors
• stringsim computes a string similarity between 0 and 1, based on stringdist
• amatch is a fuzzy matching equivalent of R’s native match function
• ain is a fuzzy matching equivalent of R’s native %in% operator
• seq_dist, seq_distmatrix, seq_amatch and seq_ain for distances between, and matching of integer sequences.
These functions are built upon C-code that re-implements some common (weighted) string distance functions. Distance functions include:
• Hamming distance;
• Levenshtein distance (weighted); a plain, unweighted version is sketched below, after this list
• Restricted Damerau-Levenshtein distance (weighted, a.k.a. Optimal String Alignment)
• Full Damerau-Levenshtein distance
• Longest Common Substring distance
• Q-gram distance
• cosine distance for q-gram count vectors (= 1-cosine similarity)
• Jaccard distance for q-gram count vectors (= 1-Jaccard similarity)
• Jaro, and Jaro-Winkler distance
• Soundex-based string distance
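To make the Levenshtein entry above concrete, here is a minimal, unweighted C++ sketch of the classic dynamic-programming recurrence; it is an illustration only and bears no relation to the package's optimized, weighted C implementation.
#include <cstdio>
#include <string>
#include <vector>
#include <algorithm>
// Plain, unweighted Levenshtein distance (illustration only).
int levenshtein(const std::string &a, const std::string &b)
{
    std::vector<int> prev(b.size() + 1), curr(b.size() + 1);
    for (size_t j = 0; j <= b.size(); ++j) prev[j] = (int)j;  // distance from the empty string
    for (size_t i = 1; i <= a.size(); ++i) {
        curr[0] = (int)i;
        for (size_t j = 1; j <= b.size(); ++j) {
            int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;        // substitution cost
            curr[j] = std::min({ prev[j] + 1,                 // deletion
                                 curr[j - 1] + 1,             // insertion
                                 prev[j - 1] + cost });       // substitution
        }
        std::swap(prev, curr);
    }
    return prev[b.size()];
}
int main()
{
    printf("%d\n", levenshtein("leia", "leela")); // prints 2
    return 0;
}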
Also, there are some utility functions:
• qgrams() tabulates the qgrams in one or more character vectors.
• seq_qgrams() tabulates the qgrams (sometimes called ngrams) in one or more integer vectors.
• phonetic() computes phonetic codes of strings (currently only soundex)
• printable_ascii() is a utility function that detects non-printable ascii or non-ascii characters.
Some of stringdist’s underlying C functions can be called directly from C code in other packages. The description of the API can be found either by typing ?stringdist_api in the R console or by opening the
vignette directly as follows:
vignette("stringdist_C-Cpp_api", package="stringdist")
Examples of packages that link to stringdist can be found here and here.
• A paper on stringdist has been published in the R-journal
• Slides of a talk given at te useR!2014 conference. | {"url":"https://cran.hafro.is/web/packages/stringdist/readme/README.html","timestamp":"2024-11-11T17:10:59Z","content_type":"application/xhtml+xml","content_length":"4330","record_id":"<urn:uuid:65f92153-e389-412d-9915-78cf7a5ea071>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00434.warc.gz"} |
Why HTTP/2.0 does not seem interesting
Why HTTP/2.0 does not seem interesting¶
This is the email I sent to the IETF HTTP Working Group:
From: Poul-Henning Kamp <phk@phk.freebsd.dk>
Subject: HTTP/2 Expression of luke-warm interest: Varnish
To: HTTP Working Group <ietf-http-wg@w3.org>
Message-Id: <41677.1342136900@critter.freebsd.dk>
Date: Thu, 12 Jul 2012 23:48:20 GMT
This is Varnish’ response to the call for expression of interest in HTTP/2[1].
Presently Varnish[2] only implements a subset of HTTP/1.1 consistent with its hybrid/dual “http-server” / “http-proxy” role.
I cannot at this point say much about what Varnish will or will not implement protocol wise in the future.
Our general policy is to only add protocols if we can do a better job than the alternative, which is why we have not implemented HTTPS for instance.
Should the outcome of the HTTP/2.0 effort result in a protocol which gains traction, Varnish will probably implement it, but we are unlikely to become an early implementation, given the current
proposals at the table.
Why I’m not impressed¶
I have read all, and participated in one, of the three proposals presently on the table.
Overall, I find all three proposals are focused on solving yesteryears problems, rather than on creating a protocol that stands a chance to last us the next 20 years.
Each proposal comes out of a particular “camp” and therefore all seem to suffer a certain amount from tunnel-vision.
It is my considered opinion that none of the proposals have what it will take to replace HTTP/1.1 in practice.
What if they made a new protocol, and nobody used it ?¶
We have learned, painfully, that an IPv6 which is only marginally better than IPv4 and which offers no tangible benefit for the people who have the cost/trouble of the upgrade, does not penetrate the
network on its own, and barely even on governments' mandate.
We have also learned that a protocol which delivers the goods can replace all competition in virtually no time.
See for instance how SSH replaced TELNET, REXEC, RSH, SUPDUP, and to a large extent KERBEROS, in a matter of a few years.
Or I might add, how HTTP replaced GOPHER[3].
HTTP/1.1 is arguably in the top-five most used protocols, after IP, TCP, UDP and, sadly, ICMP, and therefore coming up with a replacement should be approached humbly.
Beating HTTP/1.1¶
Fortunately, there are many ways to improve over HTTP/1.1, which lacks support for several widely used features, and sports many trouble-causing weeds, both of which are ripe for HTTP/2.0 to pounce on.
Most notably HTTP/1.1 lacks a working session/endpoint-identity facility, a shortcoming which people have pasted over with the ill-conceived Cookie hack.
Cookies are, as the EU commission correctly noted, fundamentally flawed, because they store potentially sensitive information on whatever computer the user happens to use, and as a result of various
abuses and incompetences, EU felt compelled to legislate a “notice and announce” policy for HTTP-cookies.
But it doesn’t stop there: The information stored in cookies have potentially very high value for the HTTP server, and because the server has no control over the integrity of the storage, we are now
seeing cookies being crypto-signed, to prevent forgeries.
The term “bass ackwards” comes to mind.
Cookies are also one of the main wasters of bandwidth, disabling caching by default, sending lots of cookies where they are not needed, which made many sites register separate domains for image
content, to “save” bandwidth by avoiding cookies.
The term “not really helping” also comes to mind.
In my view, HTTP/2.0 should kill Cookies as a concept, and replace it with a session/identity facility, which makes it easier to do things right with HTTP/2.0 than with HTTP/1.1.
Being able to be “automatically in compliance” by using HTTP/2.0 no matter how big dick-heads your advertisers are or how incompetent your web-developers are, would be a big selling point for HTTP/
2.0 over HTTP/1.1.
However, as I read them, none of the three proposals try to address, much less remedy, this situation, nor for that matter any of the many other issues or troubles with HTTP/1.x.
What’s even worse, they are all additive proposals, which add a new layer of complexity without removing any of the old complexity from the protocol.
My conclusion is that HTTP/2.0 is really just a grandiose name for HTTP/1.2: An attempt to smooth out some sharp corners, to save a bit of bandwidth, but not get anywhere near all the architectural
problems of HTTP/1.1 and to preserve faithfully its heritage of badly thought out sedimentary hacks.
And therefore, I don’t see much chance that the current crop of HTTP/2.0 proposals will fare significantly better than IPv6 with respect to adoption.
HTTP Routers¶
One particular hot-spot in the HTTP world these days is the “load-balancer” or as I prefer to call it, the “HTTP router”.
These boxes sit at the DNS resolved IP numbers and distribute client requests to a farm of HTTP servers, based on simple criteria such as "Host:", URI patterns and/or server availability, sometimes
with an added twist of geo-location[4].
HTTP routers see very high traffic densities, the highest traffic densities, because they are the focal point of DoS mitigation, flash mobs and special event traffic spikes.
In the time frame where HTTP/2.0 will become standardized, HTTP routers will routinely deal with 40Gbit/s traffic and people will start to architect for 1Tbit/s traffic.
HTTP routers are usually only interested in a small part of the HTTP request and barely in the response at all, usually only the status code.
The demands for bandwidth efficiency has made makers of these devices take many unwarranted shortcuts, for instance assuming that requests always start on a packet boundary, “nulling out” HTTP
headers by changing the first character and so on.
Whatever HTTP/2.0 becomes, I strongly urge IETF and the WG to formally recognize the role of HTTP routers, and to actively design the protocol to make life easier for HTTP routers, so that they can
fulfill their job, while being standards compliant.
The need for HTTP routers does not disappear just because HTTPS is employed, and serious thought should be turned to the question of mixing HTTP and HTTPS traffic on the same TCP connection, while
allowing a HTTP router on the server side to correctly distribute requests to different servers.
One simple way to gain a lot of benefit for little cost in this area, would be to assign “flow-labels” which each are restricted to one particular Host: header, allowing HTTP routers to only examine
the first request on each flow.
SPDY has come a long way, and has served as a very worthwhile proof of concept prototype, to document that there are gains to be had.
But as Frederick P. Brooks admonishes us: Always throw the prototype away and start over, because you will throw it away eventually, and doing so early saves time and effort.
Overall, I find the design approach taken in SPDY deeply flawed.
For instance identifying the standardized HTTP headers, by a 4-byte length and textual name, and then applying a deflate compressor to save bandwidth is totally at odds with the job of HTTP routers
which need to quickly extract the Host: header in order to route the traffic, preferably without committing extensive resources to each request.
It is also not at all clear if the built-in dictionary is well researched or just happens to work well for some subset of present day websites, and at the very least some kind of versioning of this
dictionary should be incorporated.
It is still unclear for me if or how SPDY can be used on TCP port 80 or if it will need a WKS allocation of its own, which would open a ton of issues with firewalling, filtering and proxying during deployment.
(This is one of the things which makes it hard to avoid the feeling that SPDY really wants to do away with all the “middle-men”)
With my security-analyst hat on, I see a lot of DoS potential in the SPDY protocol, many ways in which the client can make the server expend resources, and foresee a lot of complexity in implementing
the server side to mitigate and deflect malicious traffic.
Server Push breaks the HTTP transaction model, and opens a pile of cans of security and privacy issues, which should not be sneaked in during the design of a transport-encoding for HTTP/1+ traffic,
but rather be standardized as an independent and well analysed extension to HTTP in general.
HTTP Speed+Mobility¶
Is really just SPDY with WebSockets underneath.
I’m really not sure I see any benefit to that, except that the encoding chosen is marginally more efficient to implement in hardware than SPDY.
I have not understood why it has “mobility” in the name, a word which only makes an appearance in the ID as part of the name.
If the use of the word "mobility" refers only to bandwidth usage, I would call its use borderline-deceptive.
If it covers session stability across IP# changes for mobile devices, I have missed it in my reading.
I have participated a little bit in this draft initially, but it uses a number of concepts which I think are very problematic for high performance (as in 1Tbit/s) implementations, for instance
variant-size length fields etc.
I do think the proposal is much better than the other two, taking a much more fundamental view of the task, and if for no other reason, because it takes an approach to bandwidth-saving based on
enumeration and repeat markers, rather than throwing everything after deflate and hope for a miracle.
I think this protocol is the best basis to start from, but like the other two, it has a long way to go, before it can truly earn the name HTTP/2.0.
Overall, I don’t see any of the three proposals offer anything that will make the majority of web-sites go “Ohh we’ve been waiting for that!”
Bigger sites will be enticed by small bandwidth savings, but the majority of the HTTP users will see scant or no net positive benefit if one or more of these three proposals were to become HTTP/2.0.
Considering how sketchy the HTTP/1.1 interop is described it is hard to estimate how much trouble (as in: “Why doesn’t this website work ?”) their deployment will cause, nor is it entirely clear to
what extent the experience with SPDY is representative of a wider deployment or only of ‘flying under the radar’ with respect to people with an interest in intercepting HTTP traffic.
Given the role of HTTP/1.1 in the net, I fear that the current rush to push out a HTTP/2.0 by purely additive means is badly misguided, and approaching a critical mass which will delay or prevent
adoption on its own.
At the end of the day, a HTTP request or a HTTP response is just some metadata and an optional chunk of bytes as body, and if it already takes 700 pages to standardize that, and HTTP/2.0 will add
another 100 pages to it, we’re clearly doing something wrong.
I think it would be far better to start from scratch, look at what HTTP/2.0 should actually do, and then design a simple, efficient and future proof protocol to do just that, and leave behind all the
aggregations of badly thought out hacks of HTTP/1.1.
But to the extent that the WG produces a HTTP/2.0 protocol which people will start to use, the Varnish project will be interested.
Poul-Henning Kamp
Author of Varnish
[1] http://trac.tools.ietf.org/wg/httpbis/trac/wiki/Http2CfI
[2] https://www.varnish-cache.org/
[3] Yes, I’m that old.
[4] Which is really a transport level job, but it was left out of IPv6
along with other useful features, to not delay adoption[5].
[5] No, I’m not kidding. | {"url":"https://varnish-cache.org/docs/trunk/phk/http20.html","timestamp":"2024-11-04T22:16:06Z","content_type":"text/html","content_length":"21872","record_id":"<urn:uuid:291d7802-84d5-45ba-ac1b-3ee025489622>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00029.warc.gz"} |
Bubble Sort | Cratecode
Bubble Sort
Note: this page has been created with the use of AI. Please take caution, and note that the content of this page does not necessarily reflect the opinion of Cratecode.
Bubble Sort is a simple yet effective algorithm for sorting a list of elements. It's like organizing a line of dancers by height, where each time two dancers swap their positions, the entire line
becomes more sorted. Bubble Sort involves repeatedly comparing and potentially swapping adjacent elements until the entire list is sorted.
How Bubble Sort Works
Imagine we have a list of numbers that we want to put in ascending order. The Bubble Sort algorithm compares each pair of adjacent numbers and swaps them if they are out of order. This process
continues until the entire list is sorted. Here's a step-by-step breakdown:
1. Start at the first element of the list.
2. Compare it with the next element.
3. If the first element is greater than the second, swap them.
4. Move on to the next pair and repeat steps 2-3 until the end of the list is reached.
5. If any swaps were made in the previous pass, go back to step 1.
The name "Bubble Sort" comes from the fact that smaller elements "bubble" to the beginning of the list, while larger elements "sink" to the end.
Implementation in Python
Here's a simple implementation of Bubble Sort in Python:
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        # Initialize a flag to check if any swaps were made
        swapped = False
        # Iterate through the unsorted part of the list
        for j in range(0, n - i - 1):
            # Swap elements if they are out of order
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        # If no swaps were made, the list is already sorted
        if not swapped:
            break
    return arr
Time Complexity
Bubble Sort has a worst-case and average time complexity of O(n^2), where n is the number of elements in the list. This makes it inefficient for large lists. However, it has a best-case time
complexity of O(n) when the list is already sorted, which makes it advantageous in certain situations.
Bubble Sort is best suited for small datasets or lists that are already partially sorted. It's also a popular choice for educational purposes due to its simplicity and ease of understanding.
While you're unlikely to use Bubble Sort in production environments or for large datasets, it's still an essential algorithm to learn and understand as a programmer. It lays the foundation for more
advanced sorting algorithms and helps you appreciate the importance of algorithmic efficiency.
What is the Bubble Sort algorithm?
Bubble Sort is a simple sorting algorithm that works by repeatedly stepping through the list, comparing each pair of adjacent items and swapping them if they are in the wrong order. The algorithm
continues to do this until it makes a complete pass through the list without having to make any swaps, at which point the list is considered sorted.
How does the basic implementation of Bubble Sort look like in Python?
Here's a basic implementation of the Bubble Sort algorithm in Python:
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

example_list = [64, 34, 25, 12, 22, 11, 90]
bubble_sort(example_list)
print("Sorted list is:", example_list)
What is the time complexity of Bubble Sort?
The time complexity of Bubble Sort is O(n^2) in the worst and average cases, where 'n' is the number of items being sorted. This means that the algorithm's performance significantly degrades as the
size of the input list increases. In the best case (when the list is already sorted), Bubble Sort has a time complexity of O(n).
Can Bubble Sort be optimized for better performance?
Yes, Bubble Sort can be optimized by adding a flag to check if any swaps occurred during an iteration. If no swaps occur, it means the list is already sorted, and we can break out of the loop early.
Here's the optimized version in Python:
def optimized_bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        swapped = False
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        # Stop early if no swaps were made: the list is already sorted
        if not swapped:
            break
What are some common use cases for Bubble Sort?
Bubble Sort is generally not recommended for large datasets due to its poor performance (O(n^2) time complexity). However, it can be useful for educational purposes, as it's easy to understand and
implement. It's also suitable for small datasets or partially sorted lists, where its simplicity and ease of coding might outweigh the benefits of more complex and efficient algorithms. | {"url":"https://cratecode.com/info/bubble-sort","timestamp":"2024-11-02T21:21:40Z","content_type":"text/html","content_length":"108562","record_id":"<urn:uuid:6d7496a6-8586-4836-9369-b5b90fec4753>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00354.warc.gz"} |
I realize I posted this question as a response in my thread about unioning polygons, but decided to post it as its own thread (for the sake of search engines and the fact that because Boost.Geometry
is fairly new it helps those looking for answers if the thread topics identify the questions being asked).
I am having the problem that after unioning polygons and simplifying the resulting multi_polygon, it doesn't simplify as much as I thought it would. The following illustrates the problem.
(Disclaimer: I hope the ASCII art I include in this email ends up looking right for those reading. If not, copy and paste into an editor with a fixed width font or look at the end of the code listing
I included)
Suppose I have two rectangles with the following relative size and position
[ASCII sketch: two equal rectangles placed edge to edge]
When unioned together to make a square, I would expect it would look like the following without any simplification
[ASCII sketch: the outline of the unioned square, with two extra collinear vertices (marked *) left on the edges where the rectangles met]
After simplification I would expect it to look like the following
[ASCII sketch: the expected simplified result - a plain square with only its four corner vertices]
However, it really looks like the following after simplification
[ASCII sketch: the actual simplified result - a square that still carries one extra vertex, marked (*), on its boundary]
Here the (*) denotes the point that is the start point and end point of the simplified polygon. It seems that the simplification algorithm doesn't realize that there are two co-linear line segments
that cross the start/end boundary of the polygon exterior ring. I have tried sifting through the Boost.Geometry source code but can't find exactly where this simplification process is executed. From
reading the docs, it seems that in order to create my own simplify routine I will need to learn a lot more about how the Concept/Algorithm pattern is implemented. If I could get some guidance on the
best way to go about this, it would be greatly appreciated.
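For reference, a minimal sketch of the kind of test case described above; this is not the original poster's code, and the exact header layout and behavior of union_/simplify may differ between Boost versions:
#include <iostream>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <boost/geometry/geometries/multi_polygon.hpp>

namespace bg = boost::geometry;
typedef bg::model::d2::point_xy<double> point_t;
typedef bg::model::polygon<point_t> polygon_t;
typedef bg::model::multi_polygon<polygon_t> multi_t;

int main()
{
    // Two rectangles sharing a common edge, so their union is a single square.
    polygon_t a, b;
    bg::read_wkt("POLYGON((0 0, 0 1, 2 1, 2 0, 0 0))", a);
    bg::read_wkt("POLYGON((0 1, 0 2, 2 2, 2 1, 0 1))", b);
    bg::correct(a);
    bg::correct(b);

    multi_t unioned;
    bg::union_(a, b, unioned);       // collinear points remain on the shared edge

    multi_t simplified;
    bg::simplify(unioned, simplified, 0.01);

    // If a collinear point survives at the ring's start/end, it shows up here.
    std::cout << bg::wkt(simplified) << std::endl;
    return 0;
}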
John Swensen
Geometry list run by mateusz at loskot.net | {"url":"https://lists.boost.org/geometry/2011/10/1574.php","timestamp":"2024-11-11T21:41:52Z","content_type":"text/html","content_length":"11967","record_id":"<urn:uuid:137773f3-d683-49cf-b618-3bd88a365305>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00740.warc.gz"} |
Mathematical Foundations of Monte Carlo Methods
Sampling Distribution
Reading time: 34 mins.
Read this chapter carefully. It describes the most important concepts for understanding the Monte Carlo method. Because there is so much to read in this chapter already, a lot of the mathematical
proofs for the equations presented in this chapter will be provided in the next chapter.
Sampling Distribution: Statistic from Statistics
It is not possible to calculate the population mean of an infinite population (such a population can be thought of as the outcome of an infinite number of coin tosses. Of course this concept is only
theoretical and such a population has only a hypothetical existence). Even when the population is finite but very large, computing a mean can reveal itself to be an impractical task. When facing
either one of these situations, what we can do instead is draw samples from the population (where each of these samples can be seen as a random variable) and divide the sum of these drawn random
variables by the number of samples to get what we call a sample mean \(\bar X\). What this number gives is an estimation (not an approximation) of the population mean \(\mu\). We have already
introduced these two concepts in the previous chapter.
In statistics when some characteristic of a given population can be calculated using all the elements or items in this population, we say that the resulting value is a parameter of the population.
The population mean for example is a population parameter that is used to define the average value of a quantity. Parameters are fixed values. On the other hand, when we use samples to get an
estimation of a population parameter we say that the value resulting from the samples is a statistic. Parameters are often given greek letters (such as \(\sigma\)) while statistics are usually given
roman letters (e.g. \(S\)). Note that because it is possible to draw more than one sample from a given population, the value of that statistic is likely to vary from sample to sample.
The way we calculate things such as the mean or variance depends on whether we use all the elements of the population or just samples from the population. The following tables help to compare the
difference between the two cases:
Population (parameters); note that \(\bar x \approx \mu\), i.e. the sample mean estimates the population mean:
• A parameter is a value, usually unknown (and which therefore has to be estimated), used to represent a certain population characteristic. For example, the population mean is a parameter that is often used to indicate the average value of a quantity.
• \(N\) is the total size of the population (e.g. the total population of a country).
• Mean: \(\mu = \dfrac{\sum_{i=1}^N {x_i}}{N}\)
• Variance: \(\sigma^2 = \dfrac{\sum_{i=1}^N {(x_i - \mu)^2}}{N} = \dfrac{\sum_{i=1}^N x_i^2}{N} - \mu^2\)
Sample (statistics); again \(\bar x \approx \mu\):
• A statistic is a quantity that is calculated from a sample of data. It is used to give information about unknown values in the corresponding population. For example, the average of the data in a sample is used to give information about the overall average in the population from which that sample was drawn.
• \(n\) is the size of the sample (e.g. the number of times we toss the coin).
• Mean: \(\bar X = \dfrac{\sum_{i=1}^n x_i }{n}\)
• Variance: \(S_n^2 = \dfrac{\sum_{i=1}^n (x_i - \bar X)^2 }{n} = \dfrac{\sum_{i=1}^n x_i^2}{n} - \bar X^2\)
Note that the variance of a statistic is denoted \(S_n^2\) (where the subscript n denotes the number of elements drawn from the population) to differentiate it from the population variance \(\sigma^2
\). The second formulation for the variance is helpful because, from a programming point of view, you can use it to compute the variance as you draw elements from the population (rather than waiting
until you have all samples). Now that we have established the distinction between the two cases and their relationship (a statistic is an estimation of a population's parameter) let's pursue a
practical example. To make the demonstration easier, we wrote a C++ program (see source code below) generating a population of elements defined by a parameter which is an integer between 0 and 20
(this is similar to the experiment with cards labeled with numbers). To generate this population, we arbitrarily pick up a number between 1 and 50 for each value contained in the range [0,20]. This
number represents the number of elements in the population having a particular value (either 0, 1, 2, etc. up to 20). The size of the population is the sum of these numbers. The parameters
(population size, population mean, and variance) of this population are summarized in the next table:
Population Size: 572 | Population Mean: 8.970280 | Population Variance: 35.476387
The rest of the program will sample this population 1000 times, where the size of the sample (the number of elements drawn from the population to calculate the sample mean and the sample variance)
will vary from 2 to 20; this will help us to see how differently the sampling technique works for low and high sample sizes. We randomly select a sample size of n and then draw n samples from
the population where for each draw, we first randomly select an item from the population (any random number between 1 and the population size).
The process by which we find out the value held by this item is a bit unusual. We incrementally remove the number of items holding a particular value (starting for the group of items holding the
value 0, then 1, etc.) from the item index, and we break from this loop, as soon as the item index gets lower than 0.
int item_index = pick_item_distr(rng), k;
for (k = 0; k <= MAX_NUM; ++k) {
    item_index -= population[k];
    if (item_index < 0) break;
}
The value of k in the code when we break from that loop is the value held by the chosen element. This technique works because the index of the drawn element is randomly chosen (i.e. the likelihood of
choosing an element in the group of 1s is the same as choosing an element in the group of 2s, 3s, ... or 20s).
Figure 1: distribution of samples (red = 2 samples, blue = 20 samples).
As the elements are drawn from the population we update the variables used for computing the sample mean and the sample variance. At the end of this process, we get 1000 samples (to start with) with
their associated sample mean and variance. Plotting these results will help us to make some interesting observations about the process of drawing samples from a population. Note that in the graph
(figure 1), samples with a small size are drawn in red, while samples with a large size are drawn in blue (the sample color is interpolated between red and blue when its size is somewhere between 2
and 20). The green spot in the figure indicates the point on the graph where the population parameters (the mean and the variance) coincide. Any sample which is close to this point can be considered
as providing a good estimate of the population mean and variance. The further away a sample is from this point (in either direction), the poorer the estimation.
If you still have a hard time understanding why samples (or statistics) are all over the place in graph 1, consider the simple case on the right, where we plotted say the weight of 9 people. If we
were to measure the mean of the population weight, we would get some value somewhere in the middle of the line (labeled population mean in the associated image). Now consider that while taking
samples from this population, we get the three items on the right circled in red. As you can see the sample mean calculated from these three items is very different from the population mean. Now
consider the case where the three items from the sample are the ones circled in green. Taking an average of these samples gives on the other hand a pretty good estimation of the population mean. The
set of items drawn, and how "randomly" they happen to be distributed, represents only one of all the possible combinations of items from the population. How spread out the sample means are depends (as we will shortly see) on
factors such as the sample size and the population distribution itself.
Here is the source code of the program used to generate the data in Figure 1.
#include <random>
#include <cstdlib>
#include <cstdio>

static const int MAX_NUM = 20;  // items in the population are numbered (number between 0 and 20)
static const int MAX_FREQ = 50; // number of items holding a particular number varies between 1 and 50

int main(int argc, char **argv)
{
    int minSamples = atoi(argv[1]); // minimum sample size
    int maxSamples = atoi(argv[2]); // maximum sample size
    std::mt19937 rng;
    int population[MAX_NUM + 1];
    int popSize = 0;
    float popMean = 0;
    float popVar = 0;
    static const int numSamples = 1000;

    // create the population
    std::uniform_int_distribution<uint32_t> distr(1, MAX_FREQ);
    for (int i = 0; i <= MAX_NUM; ++i) {
        population[i] = distr(rng);
        popSize += population[i];
        popMean += population[i] * i;     // freq * x_i
        popVar  += population[i] * i * i; // freq * x_i^2
    }
    popMean /= popSize;
    popVar  /= popSize;
    popVar  -= popMean * popMean;
    fprintf(stderr, "size %d mean %f var %f\n", popSize, popMean, popVar);

    std::uniform_int_distribution<uint32_t> n_samples_distr(minSamples, maxSamples);
    std::uniform_int_distribution<uint32_t> pick_item_distr(0, popSize - 1);
    float expectedValueMean = 0, varianceMean = 0; // accumulated in the modified versions of the program shown further below

    // now that we have some data and stats to work with, sample the population
    for (int i = 0; i < numSamples; ++i) {
        int n = n_samples_distr(rng); // sample size
        float sample_mean = 0, sample_variance = 0;
        // draw n items from the population and compute the sample statistics
        for (int j = 0; j < n; ++j) {
            int item_index = pick_item_distr(rng), k;
            for (k = 0; k <= MAX_NUM; ++k) {
                item_index -= population[k];
                if (item_index < 0) break;
            }
            // k is the value we picked up from the population,
            // this is the outcome, a number between [0:20]
            sample_mean += k;
            sample_variance += k * k;
        }
        sample_mean /= n;
        sample_variance /= n;
        sample_variance -= sample_mean * sample_mean;
        // color the sample from red (small n) to blue (large n)
        float c1[3] = { 1, 0, 0 };
        float c2[3] = { 0, 0, 1 };
        float t = (n - minSamples) / (float)(maxSamples - minSamples);
        float r = c1[0] * (1 - t) + c2[0] * t;
        float g = c1[1] * (1 - t) + c2[1] * t;
        float b = c1[2] * (1 - t) + c2[2] * t;
        printf("sample mean: %f sample variance: %f col: %f %f %f;\n", sample_mean, sample_variance, r, g, b);
    }
    return 0;
}
Figure 2: the sample averages converge in probability and "almost" surely to the expected value \(\mu\) as n increases (red = 16 samples, blue = 32 samples).
What observations can we make from looking at this graph? First, note that the graph can be read along the abscissa (x) and ordinates (y) axis. Any sample close to the sample mean line (the vertical
cyan line) can be considered as providing a good estimate of the population mean. Any sample close to the population variance line (the horizontal cyan line) provides a good estimation of the
population variance. However, truly "good" samples are samples that are close to both lines. We have illustrated this in figure 1 with a yellow shape showing a cluster of samples around the green dot
which can be considered good samples because they are in reasonable proximity to these two parameters (if you are interested in this topic, read about confidence interval). Note also that most
samples in this cluster are blue but not all of them are (there is still a small portion of red or reddish samples). This tells us that the bigger the sample size, the more likely we are to come up
with an estimate converging to the population's parameters. Note though that we said, "we are more likely". We didn't say the bigger the sample size the better the estimate. This is capturing the
difference (however subtle and philosophical) there is between statistics and the concept of approximation. In statistics, there is always a possibility (even if very small) that the values given by
a sample will be the same as the population's parameters. However unlikely, this chance exist no matter how big (or small) the sample size. However, a sample whose size is very small is more likely
to provide an estimate which is way off, than a sample whose size is much larger even though the sample estimate is more likely to converge to the population's parameters.
Note 1: indeed, by the law of large numbers (which we have presented in chapter 2), the sample averages converge in probability and "almost" surely to the expected value \(\mu\) as n approaches infinity.
Note how we used "almost" in the previous definition. We can't predict for sure that it will converge, but we can say that the probability that it will, gets higher as the sample size increases. It
is all about probabilities. Figure 2 shows what happens to the "quality" of the samples as the sample size increases. Samples are still distributed around the population mean and variance but the
cluster of samples shrank, indicating indeed that the quality of the estimation is on average better than what we had in Figure 1. In other words, the cluster gets smaller as n increases (indicating
an overall improvement in the quality of the estimation), or to word it differently, you can say that the statistics get more and more concentrated around some value as \(n\) increases. From there we
can ask two questions: what is that value, and what happens to these statistics as \(n\) approaches infinity? But we already gave the answers to these questions when we introduced the Law of Large
Numbers. The value is the population mean itself and the sample mean approaches this population mean as \(n\) approaches infinity. Remember this definition from the chapter on the Expected Value:
$$ \begin{array}{l} \lim_{n \rightarrow \infty} \text{ Pr }(|\bar X - \mu| < \epsilon) = 1 \\ \bar X \rightarrow \mu. \end{array} $$
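To see this convergence at work numerically, one can track a running sample mean as n grows. The sketch below (not part of the original program; the die example is an illustrative assumption) rolls a fair six-sided die, whose population mean is 3.5:
#include <cstdio>
#include <random>
int main()
{
    std::mt19937 rng(17);
    std::uniform_int_distribution<int> die(1, 6); // fair die, population mean 3.5
    double sum = 0;
    for (int n = 1; n <= 1000000; ++n) {
        sum += die(rng);
        // print the running sample mean at a few checkpoints
        if (n == 10 || n == 100 || n == 10000 || n == 1000000)
            printf("n = %7d  sample mean = %f\n", n, sum / n);
    }
    return 0;
}
The printed running means drift toward 3.5 as n increases, which is exactly the statement above.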
Figure 3: population distribution. Number of cards holding a particular number between 0 to 20. This distribution is arbitrary.
Figure 4: statistics or sample distribution. Number of samples whose sample mean is equal to any value between 0 and 20. This distribution is not arbitrary and follows a normal distribution.
As you can see in figure 3 the population generated by our program has an arbitrary distribution. This population is not distributed according to any particular probability distribution, and
especially not a normal distribution. The reason why we made this choice will become clear very soon. Because the distribution is discrete and finite, this population of course has a well-defined
mean and variance which we already computed above. What we are going to do now is take a sample of size \(n\) from this population, compute its sample mean and repeat this experiment 1000 times. The
sample mean value will be rounded off to the nearest integer value (so that it takes any integer value between 0 and 20). At the end of the process, we will count the number of samples whose means
are either 0, 1 or 2, ... up to 20. Figure 4 shows the results. Quite remarkably, as you can see, the distribution of samples follows a normal distribution. This is not the distribution of cards here
that we are looking at but the distribution of samples. Be sure to understand that difference quite clearly. It is a distribution of statistics. Note also that this is not a perfect normal
distribution (you now understand why we have been very specific about this in the previous chapter) because clearly, there is some difference between the results and a perfect normal distribution
(curve in red). In conclusion, even though the distribution of the population is arbitrary, the distribution of samples or statistics is not (but it converges in distribution to the normal
distribution. We will come back to this idea later).
Question from a reader: on which basis do you claim that the sampling distribution is not a perfect normal distribution? And how do you get the curve to fit your data (the red curve in figure 4)? We
know that a normal distribution is defined by a mathematical equation and two parameters which are the mean and the standard deviation (check the previous chapter). From a computing point of view, it
is possible to measure the skew and the kurtosis of our sampling distribution (though we are not showing how in this version of the lesson). If either of these two parameters is nonzero then we know for sure
that our sampling distribution is not a perfect normal distribution (check the previous chapter for an explanation of these two terms). As for the second question, it is possible to compute the
expected value of the samples and their standard deviation (we show how below) and from these numbers, we can draw a curve such as the one you see in Figure 4. Keep in mind that this curve is not
meaningful. What you care about are the samples and their sampling distribution. You can interpret the curve as showing the overall tendency of the sampling distribution.
The following code snippet shows the changes we made to our code to compute the data for the chart in figure 4.
int main(int argc, char **argv)
{
    ...
    int meansCount[MAX_NUM + 1];
    for (int i = 0; i <= MAX_NUM; ++i) meansCount[i] = 0;
    ...
    // now that we have some data and stats to work with, sample it
    for (int i = 0; i < numSamples; ++i) {
        ...
        // round the sample mean to the nearest integer value and count it
        meansCount[(int)(sample_mean + 0.5f)]++;
    }
    for (int i = 0; i <= MAX_NUM; ++i) printf("%d %d\n", i, meansCount[i]);
    ...
}
We have repeated many times that the sample means and the sample variance are random variables of their own (each sample is likely to have a different value than the others), and thus like any other
random variables, we can study their probability distribution. In other words, instead of studying for example how the height (the property) of all adults from a given country (the population) is
distributed, we take samples from this population to estimate the population's average height and look at how these samples are distributed with regards to each other. In statistics, the distribution
of samples (or statistics) is called a sampling distribution. Similarly to the case of population distribution, sampling distributions can be defined using models (i.e. probability distributions). It
defines how all possible samples are distributed for a given population and samples of a given size.
Note 2: the sampling distribution of a statistic is the distribution of that statistic, considered as a random variable when derived from a random sample of size n. In other words, the sampling
distribution of the mean is a distribution of sample means.
Keep in mind that a statistic on its own is a function of some random variables, and consequently is itself a random variable. As with all other random variables it hence has a distribution. That
distribution is what we call the sampling distribution of the statistic \(\bar X\). The values that a statistic can have and how likely it is to take on any of these values are defined by the
statistic's probability distribution.
An important point to make from these definitions is that if you know the sampling distribution of some statistic \(\bar X\), then you can find out how likely it is that \(\bar X\) will
take on certain values, before observing any data.
The next concept is an extension of the technique we have been using so far. Remember how we learned to calculate an expected value in chapter 1? In the case of discrete random variables (the concept
is the same for continuous r.v. but the concept is easier to describe with discrete r.v.), we know the expected value or mean is nothing more than the sum of each outcome weighted by its probability.
Note 3: just as the population can be described with parameters such as the mean, so can the sampling distribution. In other words, we can apply to samples or statistics the same method for computing
a mean as the method we used to calculate the mean of random variables. When applied to samples, the resulting value is called the expected value of the distribution of mean and can be defined as:
$\mu_{\bar X} = E(\bar X) = \sum_{i=1}^n p_i \bar X_i,$
where \(p_i\) is the probability associated with the sample mean \(\bar X_i\) and where these probabilities are themselves distributed according to sampling distribution. When the samples are drawn
with the same probability (i.e. uniform distribution) the probability is just \(1 \over n\) where \(n\) here stands for the number of samples or statistics (not the sample size).
For reasons beyond the scope of this lesson, \(\mu_{\bar X} \) can not be considered the same thing as a mean (because as the sample size approaches infinity this value is undefined), however, it can
be interpreted or seen as something similar to it. This expected value of the distribution of means can be seen somehow as a weighted average of a finite number of sample means. Not the average, but
the weighted average where the weight is the frequency (or probability) of each possible outcome:
$$E(\bar X) = \sum_{i=1}^n w_i \bar X_i.$$
This is the same equation as before, but we replaced \(p_i\) with \(w_i\); in this context though, the two terms are synonymous.
It can be proven that the expected value of the distribution of means \(\mu_{\bar X} \) is equal to the population mean \(\mu\): \(\mu_{\bar X} = \mu\). The proof of this equality can be found at the
end of this chapter (see Properties of Sample Mean). However, let's just say for now that in statistics, when the expected value of a statistic (such as \(\mu_{\bar X}\)) equals a population
parameter, the statistic itself is said to be an unbiased estimator of that parameter. For example, we could say that \(\bar X\) is an unbiased estimator of \(\mu\). Again this important concept will
be reviewed in detail in the next chapter. We will also show in the chapter on Monte Carlo, that Monte Carlo is an (unbiased) estimator.
Figure 4: while increasing the number of samples, the sampling distribution becomes closer to a perfect normal distribution \(N(\mu, 0)\).
If now increase the sample size (i.e. increase \(n\)) then the samples are more likely to give a better estimate of the population mean (we have already illustrated this result in figure 2). Not all
samples are close to the mean, but generally the higher the sample size (\(n\)), the closer the samples are to the mean. The distribution of samples has the shape of a normal distribution as in the
previous case, but if you compare the graphs from Figures 3 and 4 you can see that the second distribution is tighter than the first one. As \(n\) increases, you get closer to using all the elements
from the population for estimating the mean, thus logically the sample means are more likely on average, to be close to the population mean (or to say it another way, the chance of having sample
means away from the population mean is lower). This experiment shows something similar to what we have already looked at in Figures 1 and 2 (check note 1 again) which is that the higher the sample
size, the more likely we are in probability to converge to the true mean (i.e. the population mean). We could easily confirm this experimentally by looking at the way the expected value of the
distribution of means \(\mu_{\bar X}\) (as well as its associated standard deviation \(\sigma_{\bar X}\)) varies as we increase the sample size. We should expect the expected value of the sample means
\(\mu_{\bar X}\) to converge to the population mean \(\mu\) and the normal distribution (measured in terms of its standard deviation) to keep becoming tighter as the sample size increases. Here are the
changes we made to the code to compute these two variables, named expectedValueDistrMeans (the equivalent of \(\mu_{\bar X}\)) and varianceDistrMeans (the equivalent of \(\sigma_{\bar X}\)). In this
version of the code, each sample has the same, fixed sample size (passed on the command line) and we also increased the total number of samples drawn from the population to improve the robustness of the estimation (we went
from 1,000 to 10,000 statistics or samples):
int main(int argc, char **argv)
{
    ...
    static const int numSamples = 10000;
    ...
    float expectedValueDistrMeans = 0, varianceDistrMeans = 0;
    for (int i = 0; i < numSamples; ++i) {
        int n = atoi(argv[1]); // fixed sample size, passed on the command line
        float sample_mean = 0, sample_variance = 0;
        for (int j = 0; j < n; ++j) {
            ... // draw an item from the population as before and accumulate sample_mean / sample_variance
        }
        sample_mean /= n;
        sample_variance /= n;
        sample_variance -= sample_mean * sample_mean;
        expectedValueDistrMeans += sample_mean;
        varianceDistrMeans += sample_mean * sample_mean;
    }
    expectedValueDistrMeans /= numSamples;
    varianceDistrMeans /= numSamples;
    varianceDistrMeans -= expectedValueDistrMeans * expectedValueDistrMeans;
    fprintf(stderr, "Expected Value of the Mean %f Standard Deviation %f\n", expectedValueDistrMeans, sqrt(varianceDistrMeans));
    return 0;
}
Are you lost? Some readers told us that at this point they tend to be lost. They don't understand what we try to calculate anymore. This is generally the problem in statistics when you get to this
point because the names themselves tend to become confusing. For example, it might take you some time to understand what we refer to when we speak of the mean of the sampling distribution of the
means. To clarify things, we came up with the following diagram:
First off, you start with a population. Then you draw elements from this population randomly. In this particular diagram in each experiment, we make what we call 3 observations, in other words, we
draw 3 items from the population. These are random variables, but because they are observed outcomes of the experiment, we label them with the lowercase \(x\). If we now take the weighted average of these 3
drawn items, we get what we call a statistic or sample whose size is \(n = 3\). To compute the value of this sample, we use the equation for the expected value (or mean). Each sample on its own is a
random variable, but because now they represent the mean of a certain number n of items in the population, we label them with the uppercase letter \(X\). We can repeat this experiment \(N\) times, which
gives us a series of samples: \(X_1, X_2, ... X_N\). This collection of samples is what we call a sampling distribution. Because samples are random, we can also compute their mean the same way we
computed the mean of the items in the population. This is what we called the expected value (or mean) of the sampling distribution of means and denoted \(\mu_{\bar X}\). And once we have this value
we can compute the variance of the distribution of means \(\sigma_{\bar X}\).
We ran the program several times, each time multiplying the sample size by 2. The following table shows the results (keep in mind that the population mean computed in the program is 8.970280):
Sample Size (n) Mean \(\mu_{\bar X}\) Standard Deviation \(\sigma_{\bar X}\)
2 8.9289 4.275
4 8.9417 3.014
8 8.9559 2.130
16 8.9608 1.512
32 8.9658 1.074
... ... ...
16384 8.9703 0.050
First, the data seems to confirm the theory: as the sample size increases, the mean of all our samples (\(\mu_{\bar X}\)) approaches the population mean (which is 8.970280). Furthermore, the standard
deviation of the distribution of means \(\sigma_{\bar X}\) decreases as expected (you can visualize this as the curve of the normal distribution becomes narrower). Thus as stated before, as \(n\)
approaches infinity, the sampling distribution turns into a perfect normal distribution of mean \(\mu\) (the population mean) and standard deviation 0: \(N(\mu, 0)\). We say that the random sequence
of random variables \(X_1, ... X_n\), converges in distribution to a normal distribution.
Try to connect the concept of convergence in distribution with the concept of convergence in probability which we have talked about in the chapter on expected value.
This is important because mathematicians like to have the proof that eventually the mean of the samples \(\mu_{\bar X}\) and the population mean \(\mu\) are the same and that the method is thus valid
(from a theoretical point of view because obviously in practice, the infinitely large sample size is impossible). In other words, we can write (and we also checked this result experimentally) that:
$$\mu_{\bar X} = E[\bar X] = \mu.$$
And if you don't care so much about the mathematics and just want to understand how this applies to you (and the field of rendering) you can just see this as "your estimation becomes better as you
keep taking samples (i.e. as \(n\) increases)". Eventually, you have so many samples, that your estimation and the value of what you are trying to estimate are very close to each other and even the
same in theory when you have an infinity of these samples. That's all it "means".
Let's now talk about something that is certainly of great importance in statistics but even more in rendering (at last something useful to us). If you look at the table again you may have noticed
that the difference between the expected value of the distribution of means when n=2 and n=16 is greater (8.9608 - 8.9289 = 0.0319) than the difference when n=32 and n=16384
(8.9703 - 8.9658 = 0.0045). In other words, going from 32 to 16384 samples improves the estimation by only a fraction (0.0045 against 0.0319, roughly 7 times less) of what going from 2 to 16 samples did!
When you keep increasing the sample size \(n\) (and we will prove this relationship at the end of the chapter), the standard deviation of the distribution of means decreases according to the following equation:
$\sigma_{\bar X} = {\dfrac{\sigma}{\sqrt{n}}}.$
Where \(\sigma\) is the standard deviation of the population and \(n\) is the sample size. In other words, the rate at which the standard deviation of the distribution of means (which you can
interpret as the error in the estimation of the population mean, in fact in statistics, the standard deviation of the distribution of means \(\sigma_{\bar X}\) is called the standard error) decreases
with the number of samples (\(\sqrt{n}\)) and is non-linear (the square root operator is a nonlinear operator). We can say that the rate of convergence is \(O(\sqrt{n})\) if you wish to use the big O
notation (you can read this as "the algorithm's performance is directly proportional to the square root of n"). Note that the variance \(\sigma_{\bar X}^2\), also known in statistics as the Mean
Squared Error (MSE), varies as \(1/n\) (it is inversely proportional to the sample size), since the variance is the square of the standard deviation.
The consequence of this relationship is that four times more samples are needed to decrease the error of the estimation by half.
We will come back to this observation when we get to the chapter on the Monte Carlo method.
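A back-of-the-envelope check of this rule (a minimal sketch, not part of the original program, taking a population standard deviation of 1 for simplicity):
#include <cstdio>
#include <cmath>
int main()
{
    const double sigma = 1; // population standard deviation (assumed, for illustration)
    // each time n is multiplied by 4, the standard error sigma / sqrt(n) is halved
    for (int n = 4; n <= 4096; n *= 4)
        printf("n = %5d  standard error = %f\n", n, sigma / std::sqrt((double)n));
    return 0;
}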
Figure 5: convergence rate is \(O(\sqrt{n})\).
The following sequence of results helps to understand the "four times more samples are needed to decrease the error of the estimation by half" better. Imagine that the standard deviation of the
population is 1 to start with and that you start with a sample size of 4, thus your standard error is \(error = 1/\sqrt{4} = 0.5\). Now if you want to decrease the error by 2 you need 4 times as many
samples, that is 16 samples. Let's check: \(error = 1/\sqrt{16}=0.25\), which is correct. And again if you want to decrease this error by half one more time, you need 4 times 16 samples, which is 64
samples: \(error = 1/\sqrt{64}=0.125\). Note that the processing time increases linearly with respect to the sample size, thus quadrupling the number of samples increases the processing time by four
(assuming a linear relationship between the number of samples and the processing time, which is a reasonable assumption). This should start giving you intuition as to why Monte Carlo methods can
quickly become expensive. You can see a plot of this rate of convergence in Figure 5. From a practical point of view what this means though, is that it quickly becomes prohibitively expensive to
even make a small improvement in the quality of your estimation, but on the other hand the method gives a reasonably good estimate with relatively small sample sizes. The proof of this relationship
will be given in the next chapter, however, we can already test it experimentally. We modified our program to print out the standard error we computed from the samples and the standard error computed
from the equation \(\sigma \over \sqrt{n}\) and ran the program with different sample sizes.
int main(int argc, char **argv)
{
    ...
    int n = atoi(argv[1]); // was: n_samples_distr(rng); the sample size is now fixed
    for (int i = 0; i < numSamples; ++i) {
        ...
        for (int j = 0; j < n; ++j) {
            ... // draw an item from the population and accumulate sample_mean as before
        }
        ...
        expectedValueMean += sample_mean;
        varianceMean += sample_mean * sample_mean;
    }
    expectedValueMean /= numSamples;
    varianceMean /= numSamples;
    varianceMean -= expectedValueMean * expectedValueMean;
    fprintf(stderr, "Std Err (theory): %f Std Err (data): %f\n", sqrtf(popVar) / sqrtf(n), sqrt(varianceMean));
    return 0;
}
The results reported in the table below show without a doubt that the standard error computed from the samples matches closely the value given by the equation.
All this work leads to what is known in mathematics as the Central Limit Theorem (or CLT); it is considered one of the most important concepts in statistics (and even in mathematics), and most of sampling theory is based on it.
The Central Limit Theorem states that the mean of the sampling distribution of the mean \(\mu_{\bar X}\) equals the mean of the population \(\mu\) and that the standard error of the distribution of
means \(\sigma_{\bar X}\) is equal to the standard deviation of the population \(\sigma\) divided by the square root of \(n\). In addition, the sampling distribution of the mean will approach a
normal distribution \(N(\mu, {{\sigma}/{\sqrt{n}}})\). These relationships may be summarized as follows:
$$ \begin{array}{l} \mu_{\bar X} = \mu \\ \sigma_{\bar X} = \dfrac{\sigma}{\sqrt{n}}. \end{array} $$
Again, the power of this theorem is that even when we don't know anything about the population distribution, the distribution of the sample mean always approaches a normal distribution if we have enough
samples (assuming these samples are i.i.d.).
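To see the theorem at work with a distribution that is clearly not normal, one can average n i.i.d. exponential draws and histogram the standardized sample means. A minimal, illustrative sketch (not part of the lesson's program; the bin layout and counts are arbitrary choices):
#include <cstdio>
#include <cmath>
#include <random>
int main()
{
    std::mt19937 rng(29);
    std::exponential_distribution<double> expo(1.0); // population mean 1, standard deviation 1
    const int n = 64, numSamples = 100000, numBins = 21;
    int bins[numBins] = {};
    for (int i = 0; i < numSamples; ++i) {
        double sampleMean = 0;
        for (int j = 0; j < n; ++j) sampleMean += expo(rng);
        sampleMean /= n;
        double z = std::sqrt((double)n) * (sampleMean - 1.0) / 1.0; // standardized sample mean
        int b = (int)((z + 3.5) / 7.0 * numBins);                   // map [-3.5, 3.5] to a bin
        if (b >= 0 && b < numBins) bins[b]++;
    }
    for (int b = 0; b < numBins; ++b) {
        printf("%+5.2f ", -3.5 + (b + 0.5) * 7.0 / numBins);
        for (int s = 0; s < bins[b] / 400; ++s) printf("*");        // crude text histogram
        printf("\n");
    }
    return 0;
}
Even though the exponential distribution is strongly skewed, the histogram of standardized sample means comes out roughly bell-shaped, as the theorem predicts.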
For the curious reader: you may come across the term "asymptotic distribution" in the literature on statistics. This is an advanced topic that we will look into in a separate lesson, however, we will
just give a quick answer here by saying that it is the term given to a function to which a sampling distribution converges, as n approaches infinity. In the context of the sample mean, we explained
that as n approaches infinity, the sampling distribution converges in distribution to a normal distribution of mean \(\mu\) and standard deviation \(\sigma\) divided by the square root of n: \(\mathcal{N}(\mu, \sigma / \sqrt{n})\).
Thus the normal distribution is said to be the asymptotic distribution of \(\bar X_n\). In fact, by subtracting \(\mu\) from \(\bar X\), dividing by the standard deviation \(\sigma\), and
multiplying the whole by \(\sqrt{n}\) this even becomes the standard normal distribution. We can say that \(\sqrt{n}(\bar X - \mu) / \sigma\) converges in distribution to \(\mathcal{N}(0, 1)\), the
standard normal distribution.
Properties of the Sample Mean
We will now review the properties of the sample mean:
$$\bar X = \dfrac{1}{n} (X_1 + ... + X_n),$$
and especially give the proofs for the mean and variance of the sample mean. First, let's review the expected value of the sample mean which, as we said before, is equal to the population mean (Eq. 1):
$$E[\bar X_n] = \dfrac{1}{n} \sum_{i=1}^n E[X_i] = \dfrac{1}{n} \cdot { n \mu } = \mu.$$
The expected value of the sample mean is nothing else than the average of the expected value of all the random variables making up that sample mean. To understand this proof you will need to get back
to the first and second properties of expected value (which you can find in this chapter):
$$ \begin{array}{l} E[aX+b] = aE[X] + b\\E[X_1 + ... + X_n] = E[X_1] + ... + E[X_n]. \end{array} $$
Since the sample mean is equal to \(\bar X = \dfrac{1}{n} (X_1 + ... + X_n)\), we can write:
$$ \begin{array}{rcl} E[\bar X]&=&E[\dfrac{1}{n}(X_1 + ... + X_n)]\\ &=&\dfrac{1}{n}E[X_1 + ... + X_n]\\ &=&\dfrac{1}{n} \sum_{i=1}^n E[X_i]. \end{array} $$
We also know that the expected value of a random variable is by definition the population mean thus we can replace \(E[X_i]\) in the equation by \(\mu\). The end of the demonstration is trivial. You
have a sum of n \(E[X]\) or n times the population mean. The two n cancel out and you are left with \(\mu\) (Eq. 1). Let's now move on to the variance of the sample mean (Eq. 2):
$$Var(\bar X) = \dfrac{\sigma^2}{n}.$$
For this proof we need to use the second and third properties of variance (which you can find earlier in this chapter):
$$ \begin{array}{l} Var(aX + b) = a^2Var(X)\\Var(X_1+...+X_n) = Var(X_1) + ... + Var(X_n). \end{array} $$
With these two properties in hand, the derivation is simple:
$$ \begin{array}{rcl} Var(\bar X)&=&Var(\dfrac{1}{n}(X_1 + ... + X_n))\\ &=&\dfrac{1}{n^2 } Var(X_1 + ... + X_n)\\ &=&\dfrac{1}{n^2 } \sum_{i=1}^n Var(X_i). \end{array} $$
And to finish, you just need to substitute \(\sigma^2\) for each \(Var(X_i)\). As with the mean of the sample mean, we have n of them, so the sum equals \(n\sigma^2\); one factor of n cancels with the
\(n^2\) and we are left with \(\sigma^2 / n\). Finally, we can write that the standard deviation, which is the square root of the variance, is \(\sigma / \sqrt{n}\). Note that this property is
interesting because, in essence, it says that the variance of the sample mean is lower than the variance of the population distribution itself, which also means that the sample mean
\(\boldsymbol{ \bar X }\) is more likely to be close to \(\boldsymbol{ \mu }\) than is the value of a single observation \(\boldsymbol{X_i}\).
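To make this concrete, here is a small helper (our own sketch, not part of the lesson's program; the function name is arbitrary) that estimates the mean of a set of observations and the standard error of that mean, using the sample's own standard deviation in place of the unknown population \(\sigma\):

#include <math.h>

// Estimate the mean of n observations and the standard error of that mean.
// The population sigma is unknown, so we approximate it with the standard
// deviation measured from the sample itself.
void meanAndStdError(const float *x, int n, float *mean, float *stdErr)
{
    float sum = 0, sumSq = 0;
    for (int i = 0; i < n; ++i) {
        sum += x[i];
        sumSq += x[i] * x[i];
    }
    *mean = sum / n;
    float variance = sumSq / n - (*mean) * (*mean); // E[X^2] - E[X]^2
    *stdErr = sqrtf(variance / n);                  // sigma_hat / sqrt(n)
}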
Visualizing a Binomial Distribution with an Experiment: the Bean Machine
Let's finish this chapter with this great video (link to the original and complete video on YouTube) showing a simulation of what is known as a bean machine. The bean machine (also known as the
quincunx, or Galton box) was invented by Sir Francis Galton to demonstrate the central limit theorem (in particular, that the binomial distribution is well approximated by the normal distribution). The idea
is to roll a tiny little ball down the face of a board adorned by a lattice of equally spaced pins. On their way down the balls bounce right and left as they hit the pins (randomly, independently)
and are collected into one-ball-wide bins at the bottom. Over time, the height of the ball columns in the bins approximates a bell curve. Play the video to see this quite amazing phenomenon happening
right in front of your eyes. This experiment gives us an insight into the central limit theorem. The sample mean behaves like the ball bouncing down the face of the board: sometimes it bounces off to
the left, sometimes it bounces off to the right, but on average it lands around the middle and the result is a bell shape.
What You Need to Remember and What's Next?
This is a long and dense chapter. You should remember the concept of a population parameter, the sample mean, and most importantly the concept behind the Central Limit Theorem (which is that as you
sample a population, the distribution of the sample means approaches the normal distribution, and this holds independently of the population distribution itself). Finally, keep in mind that the rate
of convergence is proportional to the square root of the sample size.
Previous CPP conferences
CPP 2016, Saint Petersburg, Florida, USA, January 18-19, 2016 (collocated with POPL'16)
CPP 2015, Mumbai, India, January 13-14, 2015 (collocated with POPL'15)
CPP 2013, Melbourne, Australia, December 11-13, 2013 (collocation with APLAS'13)
CPP 2012, Kyoto, Japan, December 13-15, 2012 (collocation with APLAS'12)
CPP 2011, Kenting, Taiwan, December 7-9, 2011 (collocation with APLAS'11)
The CPP Manifesto (from 2011)
In this manifesto, we advocate for the creation of a new international conference in the area of formal methods and programming languages, called Certified Programs and Proofs (CPP). Certification
here means formal, mechanized verification of some sort, preferably with the production of independently checkable certificates. CPP would target any research promoting formal development of
certified software and proofs, that is:
• The development of certified or certifying programs
• The development of certified mathematical theories
• The development of new languages and tools for certified programming
• New program logics, type systems, and semantics for certified code
• New automated or interactive tools and provers for certification
• Results assessed by an original open source formal development
• Original teaching material based on a proof assistant
Software today is still developed without precise specification. A developer often starts the programming task with a rather informal specification. After careful engineering, the developer delivers
a program that may not fully satisfy the specification. Extensive testing and debugging may shrink the gap between the two, but there is no assurance that the program accurately follows the
specification. Such inaccuracy may not always be significant, but when a developer links a large number of such modules together, these "noises" may multiply, leading to a system that nobody can
understand and manage. System software built this way often contains hard-to-find "zero-day vulnerabilities" that become easy targets for Stuxnet-like attacks. CPP aims to promote the development of
new languages and tools for building certified programs and for making programming precise.
Certified software consists of an executable program plus a formal proof that the software is free of bugs with respect to a particular dependability claim. With certified software, the dependability
of a software system is measured by the actual formal claim that it is able to certify. Because the claim comes with a mechanized proof, the dependability can be checked independently and
automatically in an extremely reliable way. The formal dependability claim can range from making almost no guarantee, to simple type safety property, or all the way to deep liveness, security, and
correctness properties. It provides a great metric for comparing different techniques and making steady progress in constructing dependable software.
The conventional wisdom is that certified software will never be practical because any real software must also rely on the underlying runtime system which is too low-level and complex to be
verifiable. In recent years, however, there have been many advances in the theory and engineering of mechanized proof systems applied to verification of low-level code, including proof-carrying code,
certified assembly programming, local reasoning and separation logic, certified linking of heterogeneous components, certified protocols, certified garbage collectors, certified or certifying
compilation, and certified OS-kernels. CPP intends to be a driving force that would facilitate the rapid development of this exciting new area, and be a natural international forum for such work.
The recent development in several areas of modern mathematics requires mathematical proofs containing enormous computation that cannot be verified by mathematicians in an entire lifetime. Such
development has puzzled the mathematical community and prompted some of our colleagues in mathematics and computer science to start developing a new paradigm, formal mathematics, which requires
proofs to be verified by a reliable theorem prover. As particular examples, such an effort has been made for the four-color theorem and has started for the sphere packing problem and the
classification of finite groups. We believe that this emerging paradigm is the beginning of a new era. No essential existing theorem in computer science has yet been considered worth a similar
effort, but it could well happen in the very near future. For example, existing results in security would often benefit from a formal development allowing us to exhibit the essential hypotheses under
which the result really holds. CPP would again be a natural international forum for this kind of work, either in mathematics or in computer science, and would participate strongly in the emergence of
this paradigm.
On the other hand, there is a recent trend in computer science to formally prove new results in highly technical subjects such as computational logic, at least in part. In whichever scientific area,
formal proofs have three major advantages: no assumption can be missing, as is sometimes the case; the result cannot be disputed by a wrong counterexample, as sometimes happens; and more importantly,
a formal development often results in a better understanding of the proof or program, and hence results in easier and better implementation. This new trend is becoming strong in computer science
work, but is not recognized yet as it should be by traditional conferences. CPP would be a natural forum promoting this trend.
There are not many proof assistants around. There should be more, because progress benefits from competition. On the other hand, there is much theoretical work that could be implemented in the form
of a proof assistant, but this does not really happen. One reason is that it is hard to publish a development work, especially when this requires a long-term effort as is the case for a proof
assistant. It is even harder to publish work about libraries which, we all know, are fundamental for the success of a proof assistant. CPP would pay particular attention in publishing, publicizing,
and promoting this kind of work.
Finally, CPP also aims to be a publication arena for innovative teaching experiences, in computer science or mathematics, using proof assistants in an essential way. These experiences could be
submitted in an innovative format to be defined.
Simulating a Subregion for Economic Feasibility
The coal seam must be of a minimum thickness, called a cutoff value, for a mining operation to be profitable. Suppose that, for a subregion of the measured area, the cost of mining is higher than in
the remaining areas due to the geology of the overburden. This higher cost results in a higher thickness cutoff value for the subregion. Suppose also that it is determined from a detailed cost
analysis that at least 60% of the subregion must exceed a seam thickness of 39.7 feet for profitability.
How can you use the SRF model and the measured seam thickness values to determine, in some approximate way, whether at least 60% of the subregion exceeds this minimum?
Spatial prediction does not appear to be helpful in answering this question. Although it is easy to determine whether a predicted value at a location in the subregion is above the 39.7-feet cutoff
value, it is not clear how to incorporate the standard error associated with the predicted value. The standard error is what characterizes the stochastic nature of the prediction (and the underlying
SRF). It is clear that it must be included in any realistic approach to the problem.
A conditional simulation, on the other hand, seems to be a natural way of obtaining an approximate answer. By simulating the SRF on a sufficiently fine grid in the subregion, you can determine the
proportion of grid points in which the mean value over realizations exceeds the 39.7-feet cutoff and compare it with the 60% value needed for profitability.
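As a rough sketch of the bookkeeping involved (illustrative C-style pseudocode, not PROC SIM2D syntax; the array layout, sizes, and names are assumptions), the proportion could be computed from the simulated values like this:

#define NREAL 500   /* number of conditional realizations (assumed) */
#define NGRID 2500  /* number of grid points in the subregion (assumed) */

/* Fraction of grid points whose mean simulated thickness, taken over
   all realizations, exceeds the cutoff (39.7 feet in this example). */
double proportionAboveCutoff(const double sims[NREAL][NGRID], double cutoff)
{
    int exceed = 0;
    for (int g = 0; g < NGRID; ++g) {
        double mean = 0.0;
        for (int r = 0; r < NREAL; ++r)
            mean += sims[r][g];
        mean /= NREAL;
        if (mean > cutoff)
            ++exceed;
    }
    return (double)exceed / NGRID;
}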
It is desirable in any simulation study that the quantity being estimated (in this case, the proportion that exceeds the 39.7-feet cutoff) not depend on the number of simulations performed. For
example, suppose that the maximum seam thickness is simulated. It is likely that the maximum value increases as the number of simulations performed increases. Hence, a simulation is not useful for
such an estimate. A simulation is useful for determining the distribution of the maximum, but there are general theoretical results for such distributions, making such a simulation unnecessary. See
Leadbetter, Lindgren, and Rootzen (1983) for details.
In the case of simulating the proportion that exceeds the 39.7-feet cutoff, it is expected that this quantity will settle down to a fixed value as the number of realizations increases. At a fixed
grid point, the quantity being compared with the cutoff value is the mean over all simulated realizations; this mean value settles down to a fixed number as the number of realizations increases. In
the same manner, the proportion of the grid where the mean values exceed the cutoff also becomes constant. This can be tested using PROC SIM2D.
A crucial, nonprovable assumption in applying SRF theory to the coal seam thickness data is that the measured values represent a single realization from the set of all possible realizations
consistent with the SRF model. A conditional simulation repeatedly produces other possible simulated realizations consistent with the model and data. However, the only concern of the mining company is this
single unique realization. It is not concerned about similar coal fields to be mined sometime in the future; it might never see another coal field remotely similar to this one, or it might not be in
business in the future.
Hence the proportion found by generating repeated simulated realizations must somehow relate back to the unique realization that is the coal field (seam thickness). This is done by interpreting the
proportion found from a simulation as the spatial mean proportion for the unique realization. The term “spatial mean” is simply an appropriate integral over the fixed (but unknown) spatial function.
(The SRF is a random function of location; a particular realization is a deterministic function of the spatial coordinates.)
This interpretation requires an ergodic assumption, which is also needed in the original estimation step. See Cressie (1993, pp. 53–58) for a discussion of ergodicity and Gaussian SRFs.
Writing a Numerical Expression for a Phrase
Write a numerical expression for each phrase, and simplify the expression. The sum of -18 and \(-6,\) divided by the product of 2 and -4
Short Answer
The expression is \(\frac{-18 + (-6)}{2 \times (-4)}\), which simplifies to \(3\).
Step by step solution
Understand the Problem
The task is to write a numerical expression for the given phrase and then simplify it. The phrase is: 'The sum of -18 and -6, divided by the product of 2 and -4'.
Write the Expression for the Sum
First, find the sum of -18 and -6. This can be written as \[-18 + (-6) = -18 - 6\].
Simplify the Sum
Now simplify the expression for the sum: \[-18 - 6 = -24\].
Write the Expression for the Product
Next, find the product of 2 and -4. This can be written as \[2 \times (-4)\].
Simplify the Product
Now simplify the expression for the product: \[2 \times (-4) = -8\].
Write the Division Expression
Now that you have both the sum and the product, write the entire expression for the division: \[\frac{-24}{-8}\].
Simplify the Division
Finally, simplify the division: \[\frac{-24}{-8} = 3\].
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
Arithmetic Operations
Arithmetic operations are the basic building blocks of math. They include addition, subtraction, multiplication, and division. In the given exercise, we need to use addition (or sum), multiplication
(or product), and division.
First, let's review these operations in the context of the problem:
• Addition: Adding two numbers together. Example: \((-18) + (-6)\)
• Subtraction: Subtracting one number from another. Example: \((-18) - 6\)
• Multiplication: Multiplying two numbers. Example: \(2 \times (-4)\)
• Division: Dividing one number by another. Example: \(\frac{-24}{-8}\)
Understanding these operations is crucial, as they are often combined to solve complex problems.
Expression Simplification
Expression simplification is the process of making an expression simpler and easier to understand. In the exercise, we started with the phrase: 'The sum of -18 and -6, divided by the product of 2 and -4'.
The simplification was done in steps:
• First, we simplified the sum \((-18) + (-6) = -24\)
• Next, we simplified the product \(2 \times (-4) = -8\)
• Finally, we simplified the division \(\frac{-24}{-8} = 3\)
As you can see, breaking down the expression into steps makes the problem easier to solve.
Division
Division is an arithmetic operation where one number is divided by another. In this exercise, after finding the sum and product, we needed to perform division.
In mathematical notation, division is represented by the symbol '/' or a fraction bar. For example, \(\frac{-24}{-8}\) can be read as '-24 divided by -8'.
Division is crucial when solving problems that involve multiple steps like this exercise. The key is to simplify the numbers as much as possible before performing division.
Here, \(\frac{-24}{-8}\) simplified to 3 because dividing two negative numbers results in a positive number:
$$\frac{-24}{-8} = \frac{24}{8} = 3$$
Always remember, dividing negative by negative gives a positive result, and dividing positive by negative (or vice versa) gives a negative result.
Backtracking, Famous Chess Problems, and You
Can you solve the problem that stumped mathematicians for centuries?
Backtracking algorithms are very powerful, and are often the keystone in a successful combinatorial search algorithm. They can allow us to proceed in finding complex solutions where other algorithms
fall far short of the mark.
Many famous problems in computer science can be solved using backtracking, including: The Eight Queens Problem, The Sudoku Puzzle, The Graph Coloring Problem, and The Knapsack Problem.
This article will explore the ideas of combinatorial search and backtracking within the context of a classical chess problem: The Eight Queens Problem.
The Eight Queens Problem
If you are unfamiliar with the rules of chess, or the Eight Queens problem, you will want to read this.
The Eight Queens Problem asks us, "How many ways can we position eight queens on a chessboard such that no queen threatens another?"
A chessboard is an 8x8 grid, and the queen is a powerful piece, capable of attacking any square on her row, column, or diagonal. For the visually inclined, that looks like this:
In chess, a piece threatens another piece if it is able to capture the other in a single move. In the above picture, the two queens are not threatening one another.
Stated formally, the problem has two rules for a successful candidate:
1. There must be 8 queens on the board
2. No queen threatens another
Then, we are to count the number of successful candidates.
Wait, but why do we need backtracking at all? Could we not solve this problem using some other algorithm?
Let's take a deeper look at the problem.
Analyzing the Problem
Let's analyze the problem a bit. First, let us relax the constraints a bit and ask ourselves, "How many ways could we arrange eight queens on the board?"
This is a simpler problem, with a much clearer answer: We have 64 squares, and need to place 8 pieces. Since every piece is a queen, the order does not matter. And since the order doesn't matter, it
comes down to a very common problem in computer science: n choose k.
n choose k is a way of calculating how many combinations we can create when the number of things to choose from is greater than or equal to the number of things chosen.
In our case, we have 64 choose 8 possible combinations, which is equal to:
$$\frac{64!}{8!(64-8)!} = 4,426,165,368$$
That's a lot of different ways we could arrange eight queens on a chess board. However, we should be thankful that the order doesn't matter. For instance, if we were placing 8 different pieces on
a chessboard, we'd be looking at a very large number of permutations:
$$\frac{64!}{(64-8)!} = 178,462,990,000,000$$
Yep, that's 178 trillion.
These numbers are a challenge even to modern computers, and they indicate that a simple brute force algorithm will not work.
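If you want to double-check these figures yourself, a small helper along these lines (an illustrative sketch, separate from the solver below) computes binomial coefficients without ever forming 64! explicitly:

#include <stdio.h>

/* n choose k, built up incrementally; each intermediate value is itself
   a binomial coefficient, so the division at every step is exact. */
unsigned long long choose(unsigned n, unsigned k) {
    unsigned long long result = 1;
    for (unsigned i = 1; i <= k; ++i) {
        result = result * (n - k + i) / i;
    }
    return result;
}

int main(void) {
    printf("%llu\n", choose(64, 8)); /* prints 4426165368 */
    return 0;
}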
Since we are searching for specific combinations that fit a certain constraint (no queen threatens another), this is a combinatorial search problem. Fortunately, we can use the constraints of the
problem against it - by backtracking.
The Essence of Backtracking
Enter backtracking.
In a single sentence, backtracking is recognizing when a partial candidate has failed to meet the constraints of the problem, and turning the F around.
But let's get a little more in depth than that.
The thing that really helped backtracking click for me was visualizing the path from the empty board to any one of the solutions as a tree.
Let us start with the root of this tree: The empty chessboard. From the empty board, we can place a single queen, and since the board is empty we can choose any of the 64 squares, like so:
By adding a piece to the board, we can visualize that we are descending lower into the tree.
So, we have 64 possible first moves. What about second moves? Naturally, the second comes after the first, and we can't place a piece on an occupied square, so we have one less square to choose from.
Therefore, from any of the 64 first moves, we can make 63 possible second moves.
Already it is difficult to show a tree that demonstrates all of the possible second moves. Instead, I'm going to zoom in on a specific part of that tree, so that we can make two critical observations
that will help us write a correct algorithm:
So, we've made our first and second moves. But wait! Something is going on here!
1. We've just created the same combination! Since all queens are the same, there is no difference between these two board states. This is critical. We must make sure our algorithm does not revisit
previously encountered board states. If we fail to do this, our tree goes from 4.4 billion to 178 trillion board states in size! Bad!
2. Both of these are also wrong! These queens are threatening each other! Even if we create an algorithm that weeds out duplicate combinations, we can see that there is no point in adding another piece
if our last move fails to meet the constraints of the problem.
Point 2 is the very essence of backtracking. We need to use the constraints of the problem to reject bad moves while we descend into the tree (by adding pieces), not after we've placed all 8 queens.
If we reject the board state in the case shown above, we will have effectively pruned 62 choose 6 incorrect board states from the search tree:
$$\frac{62!}{6!(62-6)!} = 61,474,519$$
That is 61.4 million bad decisions gone in the blink of an eye. If only I had this kind of power in real life.
These savings add up fast. In the end, they are what makes a backtracking solution superior to a naive brute force solution.
In summary, we need an algorithm that finds the set of appropriate next moves for a given board state, and rejects the moves that fail to meet the problem constraints immediately. Additionally, we
must skip board states that have already been encountered.
Wow, sounds cool, but how do we write such legendary code? Well...
How to Write a Backtracking Algorithm
This is the section where we will begin looking at some code. I'm not cool, so I'm going to write the code in C, like a crusty dinosaur. Keep in mind the techniques used here work for any language.
The tree described in the previous section is a recursive data structure, and therefore the backtracking algorithm is a recursive algorithm. It is effectively a variant of depth-first search, but it
does have a few more moving parts.
Take a read through these function signatures, and then we'll talk about them:
void backtrack(state_t *p);
int reject(state_t *p);
int accept(state_t *p);
Let's go through these in the general sense. backtrack is the main function here. It is responsible for calling both reject and accept, both of which are predicate functions that return true or false
(or in C, non-zero or zero). Additionally, backtrack has the responsibility of iterating through the available next moves, and calling itself for each one.
reject returns true if the board state fails to meet the constraints of the problem. This is pretty important for the turning the fuck around part of backtracking.
accept returns true if the board state has met the acceptance criteria. You'll see that this function doesn't actually have much to do, since it gets called after reject.
Optionally, you may wish to write routines for finding the set of possible next moves, making a move, and undoing a move. That's largely dependent on the complexity of the problem at hand. I left
them out because a chess board is just an 8x8 grid, so it was easy enough solve this problem by looping through the rows and columns.
Moving on, let's take a quick look at one possible way to model the Eight Queens problem: a stack!
typedef struct move_t {
int rank; // chess lingo for "row"
int file; // chess lingo for "column"
} move_t;
typedef struct state_t {
move_t moves[DEPTH]; // a stack of moves, DEPTH = 8
int len; // the stack index & array len
} state_t;
I found it easy enough to model the board state as a stack of moves. Doing so makes it quite efficient for making a move (pushing to the stack), and undoing a move (popping off the stack).
At any given recursion into backtrack, our reject predicate just needs to check the validity of the newest move against the previous moves, which amounts to iterating an array of grid coordinates of
size 8. accept only needs to check the size of the stack!
Let's look at the full code:
#include <stdlib.h> /* for abs() */

#define RANKS 8
#define FILES 8
#define DEPTH 8

typedef struct move_t {
int rank; // chess lingo for "row"
int file; // chess lingo for "column"
} move_t;
typedef struct state_t {
move_t moves[DEPTH]; // a stack of moves, DEPTH = 8
int len; // the stack index & array len
} state_t;
void backtrack(state_t *p);
int reject(state_t *p);
int accept(state_t *p);
int threatens(move_t *x, move_t *y); // helper for reject
void backtrack(state_t *p) {
    if (reject(p)) {
        return; /* constraint violated: turn around and try something else */
    }
    if (accept(p)) {
        /* you win, do something awesome */
        return;
    }
    /* if a move has been made, the next move should skip
       to the rank after, as this prevents revisiting
       previous board states */
    for (int rank = p->len ? p->moves[p->len - 1].rank + 1 : 0;
         rank < RANKS; ++rank) {
        for (int file = 0; file < FILES; ++file) {
            /* push next move to the stack */
            p->moves[p->len++] =
                (move_t){.rank = rank, .file = file};
            backtrack(p); /* descend one level deeper */
            /* pop last move off the stack */
            --p->len;
        }
    }
}
int reject(state_t *p) {
    /* impossible to fail if less than 2 moves */
    if (p->len < 2) {
        return 0;
    }
    /* test newest move against previous moves */
    move_t *new_move = &p->moves[p->len - 1];
    for (move_t *previous_move = p->moves;
         previous_move != new_move; ++previous_move) {
        if (threatens(new_move, previous_move)) {
            return 1;
        }
    }
    return 0;
}
/* since this is called after reject, we can simply return
   true if the size of the stack has reached DEPTH */
int accept(state_t *p) {
    return p->len == DEPTH;
}
int threatens(move_t *x, move_t *y) {
    return x->file == y->file || x->rank == y->rank ||
           abs(x->file - y->file) == abs(x->rank - y->rank);
}
All right, recursive stuff can be a lot to take in, but the main take away here is the relationship that backtrack has with reject, accept, and itself. Here's a breakdown of the important parts:
1. We have a rejection policy. Remember, this is the thing that is capable of pruning millions of bad paths out of our search tree!
2. We iterate the set of next possible moves at each level of the tree. As I mentioned briefly before, some problems may lend themselves to making separate routines for this. However, the chess
board is actually linear enough that I felt row-major looping was sufficient.
3. We don't revisit previously seen board states. The clever trick here is that you should always position new moves after the last played move. For problems other than this, you may need to use a
hash set or something like that.
All right, at this point you might be wondering, "What the hell is the actual answer to the Eight Queens problem? I can't believe I read all that garbage code not to find out at the end!"
Well, the bad news is that if you don't know, I'm not going to spoil it for you. Please don't hulk out on me. Instead, write some code and run it.
Backtracking is a very powerful technique, and the key take away is this: If the problem constraints allow you to know whether a partial candidate has failed, use it to your advantage!
Many problems can be solved with backtracking. I hope this article helped you in understanding backtracking well enough that you feel prepared to have a go at something a bit more exciting than the
Eight Queens Problem.
Why don't you try your hand at writing a Sudoku solver?
Thanks for reading!
Further Reading
The pseudocode section of the backtracking wiki helped me understand the structure and relationships of these functions a great deal, and basically served as my primary reference for the structure of
the above code. Definitely worth a read.
Weickert, J. 30 October 1998 (has links) (PDF)
Nonsteady Navier-Stokes equations represent a differential-algebraic system of strangeness index one after any spatial discretization. Since such systems are hard to treat in their original form,
most approaches use some kind of index reduction. Processing this index reduction it is important to take care of the manifolds contained in the differential-algebraic equation (DAE). We
investigate for several discretization schemes for the Navier-Stokes equations how the consideration of the manifolds is taken into account and propose a variant of solving these equations along
the lines of the theoretically best index reduction. Applying this technique, the error of the time discretisation depends only on the method applied for solving the DAE.
MSC 76D05
Bernert, K. 30 October 1998 (has links) (PDF)
The paper deals with tau-extrapolation - a modification of the multigrid method, which leads to solutions with an improved con- vergence order. The number of numerical operations depends linearly
on the problem size and is not much higher than for a multigrid method without this modification. The paper starts with a short mathematical foundation of the tau-extrapolation. Then follows a
careful tuning of some multigrid components necessary for a successful application of tau-extrapolation. The next part of the paper presents numerical illustrations to the theoretical
investigations for one- dimensional test problems. Finally some experience with the use of tau-extrapolation for the Navier-Stokes equations is given.
Almost One
Choose some fractions and add them together. Can you get close to 1?
Here is a set of six fractions: $$\frac{1}{6} \quad \frac{1}{25} \quad \frac{3}{5} \quad \frac{3}{20} \quad \frac{4}{15} \quad \frac{5}{8} $$
Choose some of the fractions and add them together. You can use as many fractions as you like, but you can only use each fraction once.
Can you get an answer that is close to 1?
What is the closest to 1 that you can get?
With thanks to Colin Foster who introduced us to this problem.
Getting Started
You could begin by choosing a fraction bigger than $\frac{1}{2}$ and adding on smaller fractions to get close to 1.
You could approximate each fraction to fractions that you are familiar with (with small denominators) and then use your approximations to estimate possible sums.
It is often easiest to add fractions when they have the same denominator...
Student Solutions
We received many solutions to this problem, and well done to everyone who found a sum close to 1.
Xaviar from Temora Public School in Australia, Demilade from Green Springs School in Nigeria, Ibrahim from Nigerian Tulip International College, Jonathan, Alex and Shafi from Greenacre Public School
in Australia, Marissa, Jessica and Logan from Matamata Intermediate in New Zealand, Ethan from King Geroge V School in Hong Kong, Year 9 class at Wyedean School in England, John from South Hunsley
Secondary School in the UK and Kenzo from ISL in Switzerland added fractions together to see what they could find.Ethan found:
$$\tfrac1{25}+\tfrac35+\tfrac3{20}$$ I "simplified" them so it was easier to add them together.
It isn't very close but this is the closest one I managed to find.
Ibrahim found:
$$\begin{split}\tfrac35+\tfrac58+\tfrac3{20}&=\tfrac{24+25+6}{40}\\&=\tfrac{55}{40}\\&=1.375\end{split}$$ which, when given to 1 significant figure, is $1$
$$\begin{split}\tfrac16+\tfrac4{15}+\tfrac3{5}&=\tfrac{5+8+18}{30}\\&=\tfrac{31}{30}\\&=1.03\end{split}$$ which is very close to $1$
Logan took the idea of equivalent fractions a bit further:
First I analyzed the 6 fractions and looked at which ones could add together to get a number close to one. After a few tries not working I then chose to add $\frac58$ and $\frac4{15}$ which was
equivalent to $\frac{107}{120}$ as $120$ is the least common multiple of the two denominators $15$ and $8$. I then found any other denominators that could multiply into $120$ and the fractions were $
\frac35$, $\frac16$ and $\frac1{20}$. These fractions were equivalent to $\frac{72}{120}$, $\frac{20}{120}$ and $\frac{6}{120}$.
After adding all these numbers on in separate equations the closest answer was $127$ after adding the $\frac16$ or $\frac{20}{120}$. This is equivalent to $1.058$, just $0.058$ off the number $1$.
William from New End Primary School in the UK, James from The King's School Grantham in the UK, Will from WWSPS in Australia, Rishika from Nonsuch High School for Girls in the UK, Yesh from
Manchester Grammar School in the UK, Aryaman from Bangkok Patana School in Thailand, Hondfa, Tony, Jacob, Nathan, Laurenc and Allan from Greenacre Public School, Isaac from Rugby School in the UK,
Victor from South Hunsley in the UK, Daanyal from Caerleon Comprehensive School in Wales and Paul from Coventry University in the UK all used this method of expressing all of the fractions over the
same denominator.
This is Rishika's working:
The easiest way to find an answer closest to 1 is to find the LCM
(lowest common multiple)
of all the denominators first, for which I used prime factorisation:
$6 = 2\times3\\
25= 5\times5 \hspace{3mm}(5^2)\\
5 = 5\\
20 = 2\times2\times5\hspace{3mm} (2^2\times5)\\
15 = 3\times5\\
8 = 2\times2\times2\hspace{3mm} (2^3)\\$
To find the LCM we need to multiply together the highest number of $2$s, $3$s and $5$s present in each set, overall. From above, we know that the highest number of $2$s is $3$ ($2\times2\times2$),
the highest number of $3$s is $1$ ($2\times3$ and $3\times5$) and
the highest number of $5$s is $2$ ($5\times5$).
Therefore we multiply $2\times2\times2\times3\times5\times5 = 600$, which is the LCM.
Then we can put all the fractions over the LCM:
$\frac{100}{600} \quad \frac{24}{600} \quad \frac{360}{600} \quad \frac{90}{600} \quad \frac{160}{600} \quad \frac{375}{600}$
To find the fractions that add to give an answer closest to $1$, I first added all the above fractions together, giving $\frac{1109}{600}$ (remember $1 =\frac{600}{600}$).
I needed $\frac{509}{600}$ less to make $1$.
The fractions that add to make the closest to $\frac{509}{600}$ were: $\frac{100}{600}, \frac{24}{600}$ and $\frac{375}{600}$ ($\frac16,\frac1{25}$ and $\frac{5}{8}$), summing to $\frac{499}{600}$.
Therefore, I needed to add other fractions to give me the answer closest to $1$:
$$\begin{split}\tfrac35+\tfrac3{20}+\tfrac4{15}&=\tfrac{360}{600}+\tfrac{90}{600}+\tfrac{160}{600}\\&=\tfrac{610}{600}\\&=1\tfrac1{60}\end{split}$$ which is the closest answer to $1$.
Daanyal used a denominator of $1800$ instead of $600$, and wanted to prove that
was the best possible sum. This is Daanyal's working:
I used trial and error to find a combination close to $1800$. Quite quickly I got to: $$\tfrac{1080}{1800}+\tfrac{270}{1800}+\tfrac{480}{1800}=\tfrac{1830}{1800}=\tfrac{61}{60}$$ I decided that this
was the closest I could get by trial and error. Next, I needed to prove/disprove that this was the closest to $1$ you can get.
To improve this equation, you need to:
a) Remove one or more terms
b) Replace it with one or more terms
To keep it brief, I am going to refrain from using the denominators as they are not very significant.
Three numerators used: $1080$, $270$, $480$. Three numerators not used: $300$, $72$, $1125$.
The three used numerators can be used to create a list of seven options for removing terms from the equation, and likewise the three unused numerators give a similar list of seven replacement options.
Removal $\hspace{27mm}$ Replacement
$$480+270+1080=1830\hspace{27mm}300+72+1125=1497$$ If you remove any option on the left and replace it with any option on the right, it results in a total which is further from $1800$ than $1830$ is. This
proves that $\frac{1830}{1800}$ or $\frac{61}{60}$ is the closest to $1$ you can get.
Paul said that
we could probably solve this by programming some software to try every iteration.
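Paul's suggestion is easy to carry out. Here is a minimal sketch (our own illustration, not a submitted solution) that tries every one of the $2^6$ subsets, working with the numerators over the common denominator of $600$:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* 1/6, 1/25, 3/5, 3/20, 4/15, 5/8 written over a denominator of 600 */
    const int num[6] = {100, 24, 360, 90, 160, 375};
    int bestMask = 0, bestDist = 600;          /* distance from 600/600 = 1 */
    for (int mask = 1; mask < 64; ++mask) {    /* every non-empty subset */
        int sum = 0;
        for (int i = 0; i < 6; ++i)
            if (mask & (1 << i))
                sum += num[i];
        int dist = abs(sum - 600);
        if (dist < bestDist) {
            bestDist = dist;
            bestMask = mask;
        }
    }
    printf("best subset mask: %d, off from 1 by %d/600\n", bestMask, bestDist);
    return 0;
}

Running it confirms the answer above: the best subset is $\frac35+\frac3{20}+\frac4{15}$, which overshoots $1$ by $\frac{10}{600}=\frac1{60}$.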
Flynn from Parkside Primary School in Australia and Aidan from Sheldon School in England expressed all of the fractions as fractions with denominators of 100, which led to some very strange fractions
(which mathematicians don't usually allow). Flynn got
$\frac{62.5}{100}+\frac{16.6666666}{100}+\frac{15}{100}+\frac{4}{100}$ or $1.02$ (Rounded)
and Aidan got
$\frac{60}{100}+\frac{26.7}{100}+\frac{15}{100}\hspace{3mm}\text{(rounded up)}\hspace{3mm}=\frac{101.7}{100}$
This is similar to what Saroja from India did using percentages:
$\frac35,\frac3{20}$ and $\frac4{15}$.
The first fraction is 60%.
The third one is a little more than 25%.
The total of these two is about 85%
Hence we have to choose a fraction which is about 15%.
Hence $\frac3{20}$
(equivalent to 15%)
fits in well. The sum of the three fractions is $\frac{61}{60}$ - nearly 1.
Zane from Shireland Collegiate Academy, Soham from Sutton Grammar School and Sheila, all in the UK, converted the fractions into decimals to solve the problem. Click here to see Soham's complete
solution, with explanation.
Teachers' Resources
The suggestions in these notes are adapted from Colin Foster's article, Sum Fractions.
Why do this problem?
Adding and subtracting fractions is a procedure which students often find very difficult to master. It is important to address the area without it feeling like an exact repetition of what they have
done many times before.
One way to avoid the tedium of lots of repetitive practice is to embed practice in a bigger problem which students are trying to solve. This idea is explored in Colin Foster's article, Mathematical
Etudes, and this problem is an example of a mathematical etude.
Possible approach
"What can you say about these six fractions?" $$\frac{1}{6} \quad \frac{1}{25} \quad \frac{3}{5} \quad \frac{3}{20} \quad \frac{4}{15} \quad \frac{5}{8} $$
Students might note that they are all different, that they are all less than 1, that they are all positive, that they are all expressed in their simplest terms, that four are less than a half and two
are greater than a half, that they are not in order of size, and so on.
Encourage students to say as many things as they can think of. Questions like this are a good way to encourage students to be mathematically observant.
"Which fraction do you think is the largest? Which is the smallest? Why?"
Since all of the fractions are expressed in their simplest terms, it is easy to see that none of them are equal. Students may compare fractions by making their denominators equal or converting them
to decimals.
Encourage students to use 'informal' methods of comparing fractions, and only calculate when it becomes absolutely necessary. For example, $\frac{1}{20}$ is bigger than $\frac{1}{25}$, so $\frac{3}
{20}$ will certainly be bigger than $\frac{1}{25}$.
"Write a fraction that is equal to $\frac35$. And another, and another..."
Students could write their fractions on mini-whiteboards.
They will probably list equivalent fractions such as $\frac{6}{10}, \frac{30}{50}$ etc.
You could encourage a wider range of answers by introducing some constraints, for example... "Write down one with an odd denominator" or "Write down one where the numerator is a five-digit number
that does not end in 0".
Then ask them to do the same with $\frac58$.
"How would you add $\frac35$ and $\frac58$ without a calculator?"
"The answer to $\frac35 +\frac58$ is a little bit more than 1. Is there any way that you could have predicted that the answer was going to be more than 1 without working it out exactly?"
Both fractions are more than $\frac12$, so their total must be more than 1.
This sort of reasoning can be very useful for estimating the size of an answer so that mistakes can be spotted. Estimation will be important in the main activity that follows.
Return to the original set of six fractions:
$$\frac{1}{6} \quad \frac{1}{25} \quad \frac{3}{5} \quad \frac{3}{20} \quad \frac{4}{15} \quad \frac{5}{8} $$
"Choose some of the fractions and add them together. You can use as many fractions as you like, but you can only use each fraction once."
"Can you get an answer that is close to 1?"
"What is the closest to 1 that you can get?"
Make it clear that calculators are not to be used!
If some students are unsure how to start, encourage them to talk to their partner.
Give students some time to work on the problem. This will be a good opportunity to circulate and see how students are getting on.
If students obtain an answer like $\frac{11}{12}$ (from $\frac{1}{6} + \frac{3}{5} + \frac{3}{20}$), they may think that they are as close as possible, as their answer is “only 1 away” , but because
the “one” is “one twelfth” they are not really that close ($\frac{1}{12}$ is more than 8%), so they should aim to get even closer!
Allow plenty of time at the end of the lesson for students to share their approaches and reasoning.
Possible support
If students are not secure with equivalent fractions they could do some work with Fractional Wall.
Possible extension
Students could be asked to find the set of fractions which add up to as near to $\frac{1}{2}$ as possible.
For other rich contexts that offer students an opportunity to practise manipulating fractions see Peaches Today, Peaches Tomorrow and Keep it Simple.
What’s all this number stuff?
Collecting the calibration numbers can be quite intimidating at first. This section tries to give you an idea what they are used for, in the hope it will make the collection process less bothersome.
The programs Cal and Sim3 create 81 values from the static and dynamic logs that you have gathered.
There are 3 sensor groups — the gyros, the accelerometers, and the magnetometers — and each have an X, Y and Z axis. So there are 9 data points affecting everything the calculations do.
The values generated correct for the manufacturing differences of each sensor, how accurately they are mounted, temperature effects, and how they are affected by external influences like wires, ESCs,
motors, etc.
Each sensor axis has a Bias value and a Scale value for each axis. These values vary with temperature, so the programs Cal and Sim3 try to find 3 numbers that best fit a 3rd order polynomial equation
that represents the corrections needed over the operating temperature range. There are temperature corrections for Bias for all three axis’ on all three sensors. There are also corrections for Scale
for the accs and mags. Since the gyros would require some sort of calibrated rate of change, the Scale is taken from the manufacturer’s data sheet.
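As a generic illustration only (not AutoQuad's exact code; the names and the exact form of the correction are assumptions), a temperature-compensated reading for one axis might look like this:

/* Generic illustration: apply a 3rd-order temperature polynomial bias and
   scale correction to one axis (coefficient arrays are hypothetical). */
float correctAxis(float raw, float t, const float b[4], const float s[4])
{
    float bias  = b[0] + t * (b[1] + t * (b[2] + t * b[3]));
    float scale = s[0] + t * (s[1] + t * (s[2] + t * s[3]));
    return (raw - bias) * scale;
}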
So, if you've been calculating all this, we're up to 63 different values: 24 for the accels, 24 for the mags, and only 15 for the gyros, since we leave out the scale temperature calibration and only
apply the bias temperature calibration.
The define names clearly explain what each value is used for. If you look closer at the source code, you can see how they are skillfully applied.
Currently, more accurate gyro bias values are generated on AQ startup, so those values are not used at this time, but the temperature corrections are being applied.
There are also 18 values created that correct for the slight misalignment of the sensors. That brings it up to 81, and with Magnetic Inclination and Declination we get a rousing 83 values that will
make the AQ fly better than its competitors… if we generate good numbers!
The proprietary programs Cal and Sim3 are used to take all the data provided, sometimes leveraging one against the other to come up with a unique set of numbers for each board’s installation.
The idea is simple: the more accurate the data, the better the machine will fly.
The coefficients of \(x^2y^2\), \(yzt^2\) and \(xyzt\) in the expansion of \((x+y+z+t)^4\)
The coefficients of \(x^2y^2\), \(yzt^2\) and \(xyzt\) in the expansion of \((x+y+z+t)^4\) are in the ratio
Coefficient of \(x^2y^2\) in \((x+y+z+t)^4\) is \(\dfrac{4!}{2!\,2!} = 6\)
Coefficient of \(yzt^2\) is \(\dfrac{4!}{1!\,1!\,2!} = 12\)
Also, coefficient of \(xyzt\) is \(\dfrac{4!}{1!\,1!\,1!\,1!} = 24\)
Required ratio is \(6 : 12 : 24 = 1 : 2 : 4\)
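For reference, these counts are instances of the multinomial theorem: the coefficient of \(x^{a}y^{b}z^{c}t^{d}\) in \((x+y+z+t)^{4}\), where \(a+b+c+d=4\), is \(\dfrac{4!}{a!\,b!\,c!\,d!}\), which is where each of the numbers above comes from.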
Bard on Lotto
I asked Google Bard, “What are the odds of winning the Washington State Lotto jackpot?” Its reply began with two sentences:
Google Bard attempting to answer a question on probability
1. The odds of winning the Washington State Lotto jackpot are 1 in 6.99 million.
2. This means that for every 6.99 million tickets sold, one ticket will win the jackpot.
#1 is correct and a useful parsing of information from the Lotto website. #2 is either false or misleading, depending on how you read it.
The WA Lotto is a 6-of-49 number picking game. The game will pick six numbers from 1 to 49 without replacement (no repeats) in any order, and you have to guess in advance which numbers will be
picked. If you guess all six numbers, you win the jackpot. The odds of winning the jackpot in a single pick are 1 in 49!/(6! * 43!) = 13,983,816. You get two picks per $1 ticket, so the odds that a
ticket will win the jackpot are 1 in 6.99 mil.
$$\frac{n!}{r!(n-r)!} = \frac{49!}{6! \times 43!} = 13983816$$
To guarantee a jackpot win for a single lottery, you can buy $7 mil worth of tickets and pick every of the 13,983,816 possible combinations. But that’s not what Bard said. Statement #2 suggests that
13.98 mil random picks (as if 6.99 mil people bought a $1 ticket) would guarantee a jackpot win. This is false.
What are the chances that given 13.98 mil random picks, at least one of them will win the jackpot? One way to figure this is to calculate the odds of each possible win pattern: pick #1 wins and all
the others lose, pick #2 wins and all other lose, pick #1 and pick #2 both win, etc. Then add them up. But there’s an easier way.
Given that either an event happens or it doesn't happen, the odds of the event not happening is 1 minus the odds of it happening, because the total odds of all possibilities must add to 1. A coin is
guaranteed (probability 1) to land either heads (1/2) or tails (1/2): $1/2 + 1/2 = 1$. If the odds that one pick gets the jackpot are $P(M) = $ 1 in 13,983,816, the odds
that one pick doesn't get the jackpot are $1 - P(M)$.
$$1 - P(M) = 1 - \frac{1}{13983816} = \frac{13983815}{13983816}$$
To determine the odds of at least one of multiple picks winning the lottery, we consider its opposite: what are the odds that nobody wins the jackpot? If the odds of one pick not matching are $1 - P
(M)$, then the odds of all 13,983,816 random picks not matching are $1 - P(M)$ multiplied by itself 13,983,816 times.
$$(1 - P(M))^t = (1 - \frac{1}{13983816})^{13983816}$$
So the odds of at least one pick out of 13.9 million picks matching are 1 minus this number, or about 63%. It’s likely, but not “guaranteed.”
$$1 - (1 - \frac{1}{13983816})^{13983816} = 0.63212057175$$
Google Calculator gets it right
It’s always amusing to me to realize that this accounts for all of the possibilities of which picks are winners, including the possibility that everyone picked exactly the same numbers and they
matched, or everyone won but you, etc. Importantly, these are the total odds of somebody out of millions of people winning, not the odds of you winning.
Chatbots being wrong isn’t news. What I think is interesting here is that chatbots seem to be wrong about mathematics more often than with other subjects. Chatbots don’t understand anything. They’re
guessing at how someone might answer a question based on the text of billions of people discussing billions of things. This makes chatbots more likely to repeat common misconceptions when those
misconceptions appear more often than their corrections in the training data.
This Paper Could Be The Key to Solving a 160-Year-Old, Million-Dollar Maths Problem
Humans06 April 2017
When it comes to the fabled Riemann zeta function, the world's best mathematicians have suffered 160 years of dead-end after dead-end.
But now a trio of mathematicians have discovered a new approach to solving what's been called "the greatest unsolved problem in mathematics", and it just got someone closer to a US$1 million cash
First proposed in a 1859 paper by German mathematician Bernhard Riemann, the Riemann zeta function would provide us with a foolproof way to wrangle the famously unwieldy world of prime numbers - if
we can prove it.
The reason prime numbers are so difficult to identify and predict is that they appear to be completely random - although there are hints that there might be some kind of order that we're yet to uncover.
They also contain no factors - numbers we can multiply together to get another number - other than 1 and themselves, which makes them almost frustratingly simple entities.
But what the Riemann zeta function hypothetically allows you do to is use one formula to calculate how many primes there are below any given threshold.
It's so incredibly powerful, that over several decades, countless algorithms - especially in security and cryptography - have been formulated assuming that it's true.
If it can be proven, it will open up an entire world of new mathematics, akin to how algebraic number theory revolutionised the field before it.
But if someone figures out how to disprove Riemann's hypothesis, "the failure … would create havoc in the distribution of prime numbers", Italian number theorist Enrico Bombieri once wrote.
The reason the problem has been included as one of seven Millennium Prize Problems in mathematics that each carry a $1 million payout is because it's not only crucial that we figure out its veracity,
doing so is mind-bogglingly difficult.
It's based on what's referred to as the Zeta Function zeroes - algorithms that begin with any two coordinates and use those to perform a set calculation to figure out a value.
"If you imagine the two initial coordinates to be values for latitude and longitude, for example, then the Zeta Function returns the altitude for every point, forming a kind of mathematical landscape
full of hills and valleys," Matt Parker explains for The Guardian.
"Riemann was exploring this landscape when he noticed that all of the locations that have zero altitude (points at 'sea level' in our example) lie along a straight line with a 'longitude' of 0.5 -
which was completely unexpected."
Riemann used these zeroes to come up with a formula to define prime number distribution, but he was unable to prove that they all fell on the same straight line.
And not for lack of trying - you could individually prove that the first 100 billion or 10 trillion zeros all fall on that line, but what about the zeros that come after? How do you prove that
infinite zeroes will still follow this trend?
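As a toy illustration of what checking individual zeros looks like (this snippet is mine, not from the paper, and relies on the mpmath Python library), you can compute the first few nontrivial zeros numerically and confirm they sit on the line with real part 0.5 - which of course says nothing about the infinitely many zeros beyond them:

```python
# Locate the first few nontrivial zeros of the Riemann zeta function and
# check that each lies on the "critical line" Re(s) = 0.5.
from mpmath import mp, zetazero, zeta

mp.dps = 25                      # work with 25 digits of precision
for n in range(1, 6):
    z = zetazero(n)              # n-th nontrivial zero, e.g. 0.5 + 14.1347...j
    print(n, z, abs(zeta(z)))    # |zeta(z)| should be vanishingly small
```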
Now a new paper by three mathematicians in the US, Canada, and the UK proposes that we use quantum mechanics to solve the problem, latching onto a decades-old idea that there could exist a quantum
system whose energy states correspond to the hypothetical zeros of the zeta function.
If this pans out, it could give a route to proving the Riemann Hypothesis, the Holy Grail of math. Stay tuned. Not there yet, but maybe... https://t.co/XPvCDSG83m
— Steven Strogatz (@stevenstrogatz) March 30, 2017
They've defined a component called a Hamiltonian operator (denoted as H) as being the key to the existence of this quantum system, and the mathematics community is now buzzing over their concluding statement:
"If the analysis presented here can be made rigorous to show that Η is manifestly self-adjoint, then this implies that the Riemann hypothesis holds true."
To put that more simply, says Kevin Knudson at Forbes, "Should such a system exist, the Riemann Hypothesis would follow immediately."
The hypothesis is strong enough to have gotten everyone's attention, but the jury's still out on whether this is the key to unlocking what's been defined as the most important open question in pure mathematics.
Because if anything requires some serious mulling time, it's this.
"I would need more time to give a relevant opinion about the significance of their findings as a strategy towards the Riemann hypothesis," Paul Bourgade, a mathematician at New York University,
told Natalie Wolchover at Quanta Magazine.
The research has been published in Physical Review Letters, and you can find out more about Riemann zeta function below: | {"url":"https://www.sciencealert.com/this-paper-could-be-the-key-to-solving-a-160-year-old-million-dollar-maths-problem","timestamp":"2024-11-13T16:23:56Z","content_type":"text/html","content_length":"142847","record_id":"<urn:uuid:c463e949-451b-4d63-98e9-6ded88f16141>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00383.warc.gz"} |
Need help with states and text
Hey guys, trying to learn states and I've run into a problem.
I've got this function. And when it is called, I need it to create some text. States and breaking my Phaser game into several files is a first for me and I can't for the life of me figure this out.
I need to create some text from the animationPass function, but I get this error:
Uncaught TypeError: Cannot read property 'text' of undefined - on line 7, and that's where the create introText line is.
In my intro.js I've got:
function animationPass() {
    introText = this.add.text(320, 240, 'Write some text');
    // I've tried this.introText - but that didn't work either.
}
var introMap = function (game) {
    var introText;
};

introMap.prototype = {
    create: function () {},
    update: function () {}
};
And this has worked fine so far. In fact I've used this structure in another file and it works just fine there. Here, however, I get:
Uncaught TypeError: Cannot read property 'text' of undefined - on line 7, and that's where the create introText line is.
My preload.js looks like this
preload = function (game) {
    WebFontConfig = {
        google: {
            families: ['Press Start 2P']
        }
    };
};

preload.prototype = {
    preload: function () {
        this.load.spritesheet('intro', 'img/introAnim.png', 172, 124, 25);
    },
    create: function () {}
};
I would not ask if i hadn't spend a great deal of time trying to figure out the answer myself.
You should either invoke `animationPass` with `call` and the current state
function animationPass() {
    this.introText = this.add.text(320, 240, 'Write some text');
}

var IntroMap = function() {};
IntroMap.prototype = {
    create: function() {
        animationPass.call(this); // 'this' is the current state
    },
    update: function() {}
};
or convert it to a method of your state object and invoke it with `this` (more common):
var IntroMap = function() {};
IntroMap.prototype = {
    create: function() {
        this.animationPass();
    },
    update: function() {},
    animationPass: function() {
        this.introText = this.add.text(320, 240, 'Write some text');
    }
};
A whole bunch of thanks to you. This worked perfectly
Is there some reading you would recommend that elaborates on this?
And another question, if you have the time
In your second example how would I combine this.animationPass(); and setInterval?
I've got this snip here
introMap.prototype = {
    create: function () {
        myInterval = setInterval(this.animationPass, 1000);
    },
    update: function () {},
    animationPass: function () {
        this.introText = this.add.text(320, 240, 'Write some text');
        this.introText.addColor("#E0AF33", 0);
    }
};
When I do this, I get: cannot read property 'text' of undefined on the introText line.
When I just go

create: function () {
    this.animationPass();
},

introText is added just fine.
1 hour ago, DudeshootMankill said:
Is there some reading you would recommend that elaborates on this?
JavaScript/Reference/Operators/this and https://rainsoft.io/gentle-explanation-of-this-in-javascript/.
1 hour ago, DudeshootMankill said:
In your second example how would i combine this.animationPass(); and setInterval?
When you pass a method as a callback, you need to specify the calling context (`this` value). Usually you want the callback invoked in the same context as the current one (`this` == the current game
state), so you can just pass `this`. (Use time.events.loop instead of setInterval, but the concept is the same.)
IntroMap.prototype = {
    create: function() {
        this.timer = 0;
        this.myInterval = this.time.events.loop(1000, this.animationPass, this);
    },
    update: function() {},
    shutdown: function() {
    },
    animationPass: function() {
        this.introText = this.add.text(320, 240, 'Write some text');
        this.introText.addColor("#E0AF33", 0);
    }
};
Thank you friend. All of this worked out perfectly. | {"url":"https://www.html5gamedevs.com/topic/25674-need-help-with-states-and-text/","timestamp":"2024-11-12T05:42:55Z","content_type":"text/html","content_length":"131784","record_id":"<urn:uuid:ab6fe7b6-e98d-4af1-8de4-778ab4a98988>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00813.warc.gz"} |
psignal, sys_siglist, sys_signame — system signal messages
#include <signal.h>
void psignal(unsigned int sig, const char *s);
extern char *sys_siglist[];
extern char *sys_signame[];
The psignal() function locates the descriptive message string for the given signal number sig and writes it to the standard error.
If the argument s is not NULL it is written to the standard error file descriptor prior to the message string, immediately followed by a colon and a space. If the signal number is not recognized (see
sigaction(2) for a list), the string “Unknown signal” is produced.
The message strings can be accessed directly using the external array sys_siglist, indexed by recognized signal numbers. The external array sys_signame is used similarly and contains short,
upper-case abbreviations for signals which are useful for recognizing signal names in user input. The defined value NSIG contains a count of the strings in sys_siglist and sys_signame.
The psignal() function appeared in 4.2BSD. | {"url":"https://man.openbsd.org/OpenBSD-5.9/psignal.3","timestamp":"2024-11-11T11:12:30Z","content_type":"text/html","content_length":"8810","record_id":"<urn:uuid:4cbefb7a-aa0f-499a-9ddd-1cfe4b798a88>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00350.warc.gz"} |
Geometry for Elementary School/The Right angle-Hypotenuse-Side congruence theorem
In this chapter, we will discuss the right angle-hypotenuse-side congruence theorem, often shortened to RHS. Some people call it the hypotenuse-leg congruence theorem, which is shortened to HL. It is
special because it can only be used on right-angled triangles. | {"url":"https://en.m.wikibooks.org/wiki/Geometry_for_Elementary_School/The_Right_angle-Hypotenuse-Side_congruence_theorem","timestamp":"2024-11-02T03:10:40Z","content_type":"text/html","content_length":"23414","record_id":"<urn:uuid:4c7c7c20-c934-4fab-83e2-661c8a760ea0>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00433.warc.gz"} |
Basic & On Demand
Enrollment for Basic and On Demand courses is open on a rolling basis.
Teachers can use the complete Plus course curriculum for flexible instruction.
Course Summary
Build geometry fundamentals and increase your problem-solving ability by applying geometry concepts to calculate angles and intersecting lines, perimeters, polygons, area and volume, and more.
Students develop their ability to justify conclusions by writing formal proofs for geometric theorems and begin the process of constructing these proofs independently. Students acquire the geometric
fundamentals needed to succeed in more advanced math courses.
This is a 2-semester course. We do not recommend taking both semesters simultaneously.
Unit 1: Geometry Beginnings
• Nets and Drawings for Visualizing Geometry
• Points, Lines and Planes
• Measuring Segments
• Measuring Angles
• Exploring Angle Pairs
• Midpoint and Distance in the Coordinate Plane
Unit 2: Geometric Reasoning
• Basic Constructions
• Patterns and Inductive Reasoning
• Conditional Statements
• Biconditionals and Definitions
• Deductive Reasoning
• Reasoning in Algebra and Geometry
• Proving Angles Congruent
Unit 3: Lines and Angles
• Lines and Angles
• Properties of Parallel Lines
• Proving Lines Parallel
• Parallel and Perpendicular Lines
• Parallel Lines and Triangles
Unit 4: Congruent Triangles
• Congruent Figures
• Triangle Congruence by SSS and SAS
• Triangle Congruence by ASA and AAS
• Using Corresponding Parts of Congruent Triangles
• Isosceles and Equilateral Triangles
• Congruence in Right Triangles
• Congruence in Overlapping Triangles
Unit 5: Relationships Within Triangles
• Mid-segments of Triangles
• Perpendicular and Angle Bisectors
• Bisectors in Triangles
• Medians and Altitudes
• Indirect Proof
• Inequalities in One Triangle
• Inequalities in Two Triangles
Unit 6: Right Triangles
• The Pythagorean Theorem and Its Converse
• Special Right Triangles
• Trigonometry
• Angles of Elevation and Depression
• Areas of Regular Polygons
Unit 7: Transformations
• Translations
• Reflections
• Rotations
• Compositions of Isometries
• Congruence Transformations
Unit 8: Similarity
• Similar Polygons
• Proving Triangles Similar
• Similarity in Right Triangles
• Proportions in Triangles
• Dilations
• Similarity Transformations
Unit 9: Polygons and Quadrilaterals
• The Polygon Angle-Sum Theorems
• Properties of Parallelograms
• Proving That a Quadrilateral is a Parallelogram
• Properties of Rhombuses, Rectangles and Squares
• Conditions for Rhombuses, Rectangles and Squares
• Trapezoids and Kites
• Applying Coordinate Geometry
• Proofs Using Coordinate Geometry
Unit 10: Perimeter and Area
• Perimeter and Area in the Coordinate Plane
• Areas of Parallelograms and Triangles
• Areas of Trapezoids, Rhombuses, and Kites
• Polygons in the Coordinate Plane
Unit 11: Surface Area and Volume
• Surface Areas of Prisms and Cylinders
• Surface Areas of Pyramids and Cones
• Volumes of Prisms and Cylinders
• Volumes of Pyramids and Cones
• Surface Areas and Volumes of Spheres
• Areas and Volumes of Similar Solids
Unit 12: Circles
• Circles and Arcs
• Areas of Circles and Sectors
• Tangent Lines
• Chords and Arcs
• Inscribed Angles
• Angle Measures and Segment Lengths
Unit 13: Probability
• Experimental and Theoretical Probability
• Permutations and Combinations
• Compound Probability
• Probability Models
• Conditional Probability Formulas | {"url":"https://ucscout.org/courses/geom","timestamp":"2024-11-01T22:40:49Z","content_type":"text/html","content_length":"53741","record_id":"<urn:uuid:619f757c-894e-4f42-95ae-031e465d1cab>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00256.warc.gz"} |
Chances of Short-Term Cooling Estimated from a Selection of CMIP5-Based Climate Scenarios during 2006–35 over Canada
1. Introduction
Climate scientists recognize that, although Earth’s surface temperatures have entered a period of general long-term increase, temporary stalling or cooling periods may occur at any spatial scale (
Easterling and Wehner 2009; Knight et al. 2009; Meehl et al. 2011; de Elía et al. 2013; Deser et al. 2014). Most long-term trends for the twenty-first century are projected as positive with a high
degree of confidence, because anthropogenic greenhouse gases are the dominant factor for that period and their radiative forcing is expected to increase (van Vuuren et al. 2011; Stocker et al. 2013).
Short-term trend distributions are tied to this global forcing as well but are also expected to reflect the fast-changing influence of secondary factors like natural forcings (e.g., volcanic aerosols
and solar cycles) and unforced natural variability [e.g., the North Atlantic Oscillation (NAO) and the El Niño–Southern Oscillation (ENSO)].
The recent period (~1998–2012) offers an example of temporary warming slowdown on the global scale (Morice et al. 2012). This could have resulted from naturally prevailing La Niña–like conditions in
the tropical Pacific (Kosaka and Xie 2013) accompanied by subsurface ocean heat uptake higher than in decades that experienced more warming (Meehl et al. 2011; England et al. 2014; Risbey et al. 2014
). The NAO could be implicated as well, at least for the Northern Hemisphere (Li et al. 2013). Other possible causes, involving a decrease in top-of-atmosphere radiative forcing, have been proposed:
decreased ozone-depleting substances and methane emissions (Estrada et al. 2013), decreased solar irradiance during cycle 23’s declining phase (Lean 2010; Kopp and Lean 2011), increased sulfur
emissions in Asia (Kaufmann et al. 2011), and declining stratospheric water vapor concentrations after 2000 (Solomon et al. 2010). The recent slowdown may be in part a measurement artifact due to
incomplete global coverage (Cowtan and Way 2013), but in this case it may be only a matter of time before a 10- or 15-yr global cooling trend actually occurs.
At the regional and local scales, short-term cooling trends are expected to occur with higher probabilities, because interannual and interdecadal variability is generally higher at smaller spatial
scales (Deser et al. 2012; Maraun 2013). For North America, Deser et al. (2014) have shown that internal variability can result in as much as ~40% of simulations generated by state-of-the-art global
coupled models showing cooling trends over some regions during the period 2010–35.
Although the possibility of future cooling trends in a long-term warming path is undisputed, most studies on the subject are based on direct outputs from climate models, which present differences in
many statistical properties when compared with observational products. For example, van Oldenborgh et al. (2013) have examined the Coupled Model Intercomparison Project phase 5 (CMIP5) multimodel
ensemble and concluded that the distribution of surface temperature trends (per degree of global warming) for 1950–2011 and north of 45°S is underdispersive; over Canada, their analysis suggests an
underforecasting bias, with observed trends almost all within the 20th–99th-percentile bracket of CMIP5 trends. In another study, Bhend and Whetton (2013) also concluded that there were
inconsistencies between observed and CMIP3–CMIP5 trends, including for regions in Canada. Other relevant studies of temperature trends at different time and space scales in multimodel ensembles
include those of Santer et al. (2011), Sakaguchi et al. (2012), Stott et al. (2013), and Knutson et al. (2013).
Reported inconsistencies between simulated and observed trends are not the only reason why computing future probability of cooling trend (P[cool]) values based on climate model simulations is
controversial (Curry 2011; Stephenson et al. 2012). Another important reason is that a “true” value of P[cool] would have to be based on a perfectly representative distribution of trends, which
multimodel ensembles cannot guarantee (von Storch and Zwiers 2013). Other issues related to probabilistic projections of climate include the difficulty of assessing reliability given a single real
climate trajectory, inferring models’ future skill from past skill, subjectivity in model weighting, model tuning, and anthropogenic forcing uncertainty (Räisänen and Palmer 2001; Tebaldi and Knutti
2007; Yokohata et al. 2013). However, it may be argued that estimations of P[cool] should be provided by climate experts when needed, with limitations clearly stated. The rationale is that otherwise,
decisions are made “with implied assessments of relative likelihoods that depart, perhaps significantly, from experts’ best estimates” (Hall et al. 2005, p. 346). Notably, Räisänen and Palmer (2001),
Schneider (2002), and Collins (2007) encouraged probabilistic climate forecasting.
This study’s main objective is to estimate P[cool] for short-term durations (from 5 to 25 yr) over Canada during the current period (2006–35) in ways that reflect how numerical climate simulations
are actually used by climate service institutions to provide decision makers with local climate scenarios (Huard et al. 2014). A local univariate climate scenario is defined here as a time series
that describes a path one climate variable may plausibly follow over a very small area and during a given period of time. Plausibility requires statistical properties consistent with observations for
the past segment, continuity at the past–future junction, and physical credibility for the future segment. Climate scenarios are designed by merging statistical properties from numerical model
simulations and observational products, with the direct usage of simulations seen as one of the options. No single scenario is a prediction; however, as an ensemble, scenarios aim to cover the range
that includes the real future climate trajectory.
A secondary objective is to evaluate whether climate scenario design methods other than the direct use of simulations reduce offsets between observational and simulated P[cool] distributions during
the calibration period (1962–2010). Two such alternative methods are used. One method performs statistical adjustment of the simulated interannual variability, while the other involves using a simple
autoregressive model to generate variability.
It is important to emphasize local short-term cooling trends in the context of global warming for at least two reasons. First, some decisions do require information related to the climate’s
short-term evolution or are based on changes from climate normal, and it may be important to know the range of temperature trends. Second, global warming and consequential environmental changes
represent a scientific challenge but also a political issue, implying that it might be tempting for some interest groups to create and spread disinformation. For example, the recent warming slowdown
is known to have fueled contrarian views, as discussed by Kaufmann et al. (2011) and Cowtan and Way (2013). Thus it must be clearly demonstrated that short-term cooling trends and long-term global
warming represent two intertwined phenomena and that the occurrence of the former is in no way a refutation of the latter (Santer et al. 2011).
The paper is divided as follows. In section 2, datasets and methodological choices are described. Hindcast skill results for 1962–2010 and probabilistic forecasts for 2006–35 are presented in
sections 3 and 4, respectively. Section 5 consists of a summary with discussion.
2. Data and methods
Numerical model simulations are often used directly as scenarios for making inferences about aspects of Earth’s real future climate (e.g., Deser et al. 2014). This is justified when statistical
properties of interest are judged to be realistic. But a more general approach to climate scenario design would be to merge information from simulations and observations, with all the weight on
simulations being just one option. The rationale for generalizing the approach is that there exist statistical offsets between simulations and local realizations of nature, and part of this is due to
scale mismatch and the inherent imperfections of models. This causes discontinuities at the junction of the past-observed and future-simulated segments, among other problems. However, part of these
offsets may also result from limitations in observational products (particularly in northern Canada, where the network of stations is sparse) and from the comparison period being too short for both
observations and simulations to represent the full phase space of the local climate. For this and other reasons, there is no scientific consensus on the legitimacy of methods akin to bias correction
(Themeßl et al. 2012; Haerter et al. 2011; Ehret et al. 2012).
In this study, P[cool] uncertainty related to the choice of scenario design is addressed by considering three different methods. These are based on the same observational product and simulation
ensemble but merge the statistical properties differently. The property of interest here is the short-term trend distribution. Because it is difficult to design climate scenarios in a way that
directly ensures their short-term trends will be consistent with observations, methods (other than the direct use of simulations) address the problem indirectly by seeking agreement in interannual
variability (standard deviation of residuals around a background state or long-term time-varying trend). Offsets in climate simulation variability have the potential to affect trend detection as well
as consistency with observations (Knutson et al. 2013). None of the methods is based on seeking scenario–observation agreement in long-term trends, although this statistical property is strongly
linked with short-term trend distributions. Such an agreement could be imposed on the past segment of a scenario simply by borrowing the observational regression. However, extrapolating for the
future segment is unlikely to give credible results.
a. Datasets
1) Numerical simulations
Monthly data from CMIP5 are used (Taylor et al. 2012). The CMIP5 dataset has been chosen because it is recent and well explored by the climate community (Stocker et al. 2013) and reflects a variety
of modeling efforts. Nonetheless, this ensemble’s representation of the historical climate has limitations (Sheffield et al. 2013), and models are not completely independent (Räisänen 2007; Masson
and Knutti 2011; Stephenson et al. 2012; Knutti et al. 2013). We use 60 simulations from 15 Earth system models (ESMs) and four representative concentration pathways (RCPs). Only ESMs presenting at
least one member per RCP at the time of analysis have been selected; these are listed in Table 1. The principle of “model democracy” is applied to the selected ESMs, with only the member with
identification code r1i1p1 kept for each ESM and RCP (most models had only one member at the time the analysis was conducted). However, in practice, some models may have more weight than others in
this study, since different models from the same center may, in fact, be closely related (GFDL and MIROC each contribute three models in this study).
Table 1.
List of ESMs and associated modeling groups whose RCP experiment simulations with member code r1i1p1 are used in this study. (Here, QCCCE is Queensland Climate Change Centre of Excellence, NIMR/KMA
is National Institute of Meteorological Research/Korea Meteorological Administration, MOHC/INPE is Met Office Hadley Centre/Instituto Nacional de Pesquisas Espaciais, and NCC is Norwegian Climate
Centre; expansions for the ESMs and the other modeling groups are available at http://www.ametsoc.org/PubsAcronymList.)
2) Observational data
The product used for scenario design is a Natural Resources Canada (NRCan) dataset based on daily observations from Environment Canada’s station network interpolated to a regular 10 km × 10 km grid
over Canada (Hutchinson et al. 2009; Hopkinson et al. 2011). Version 2 is used. It contains the daily minimal (T[min]) and maximal (T[max]) temperatures, obtained by applying the Australian National
University Splines (ANUSPLIN) package interpolation procedure to station observations. This procedure accounts for terrain elevation but does not take account of any other physical principle. Thus,
biases are potentially large where the stations network is sparse (e.g., in the Arctic). Although this dataset does not strictly correspond to observations, this term will be used herein for the sake
of brevity. Daily mean temperatures (T[mean]) are obtained by the approximation T[mean] = (T[min] + T[max])/2. This approximation was notably used by Jones et al. (1999) and has known limitations
(e.g., Zeng and Wang 2012).
b. Study domain and period
Figure 1 presents the Canadian toponyms referred to in this study. Scenarios are built for 5146 locations over the country, a number obtained by selecting each 50th point of the NRCan grid. Thus,
scenarios are produced on an irregular grid, with a sampling distance of the order of ~44 km. This relatively short distance should not be interpreted as a magnification of ESMs’ resolutions
(~100–300 km); scenarios are calculated separately at each location, but neighboring locations are climatically not independent. The sampling distance stems from a compromise between limiting
computing time and being able to visualize interesting geographical patterns. Each value in a time series corresponds to a single year and may represent the whole year (ANN) or a single season
[December–January (DJF), March–May (MAM), June–July (JJA), and September–November (SON)]. For winter values of a given year, December data from the previous year are used, so the average value
actually represents three consecutive months.
Fig. 1.
Map of Canada with toponyms referred to in the study. Red is for cities, blue is for seas, black is for provinces and territories, and green is for other geographical features. NF and NB stand for
Newfoundland and New Brunswick, respectively.
Statistical calculations of trends are performed for the 2006–35 segment of the scenarios created. During that period, the mean state of the scenarios ensemble follows a fairly linear trend (not
shown), with no obvious RCP-based segregation. Figure 2 shows the linear trends for the grid point nearest to Quebec City (46.71°N, −71.21°E) during the periods 2006–35 and 2036–65, for each
combination ESM–RCP (annual time series are used, which are obtained by interpolating the simulations at the NRCan grid point closest to the targeted location). The uncertainty in the trend appears
to be much more dependent on ESM choice and initial conditions than on RCP choice over 2006–35. The rates of warming generally increase from 2006–35 to 2036–65 in the case of RCP8.5, are about
constant for RCP6.0, and decrease for RCP4.5, whereas temperatures for RCP2.6 stabilize at warmer values than currently observed.
Fig. 2.
Trends in annual T[mean] time series for the 15 ESMs listed in Table 1 for periods (a) 2006–35 and (b) 2036–65, near Quebec City. Symbols corresponding to the same ESM are connected for better visual
As a matter of indication, the p value of the two-sample Kolmogorov–Smirnov test between RCP2.6 and RCP8.5 linear trend distributions is ~0.14 for 2006–35, and <0.01 for 2036–65. Considering a 5%
significance level, this means the null hypothesis that the two samples are drawn from the same parent distribution is rejected for the 2036–65 period only. Applying the Kolmogorov–Smirnov test on
all 5146 locations results in the rejection of the null hypothesis 77 and 5146 times for 2006–35 and 2036–65, respectively. This supports the assumption that socioeconomic scenarios associated with
RCPs do not have a major impact on trends during the study period (i.e., the warming is, in large part, already committed). Hence the dataset can reasonably be considered to include four members per
ESM, and trend statistics on the scenarios may be performed without regard for the RCP. This approximation of a small forcing-related uncertainty might not hold when another ensemble of simulations
is considered. Time series for the different RCPs during 2006–35 are also practically indistinguishable in terms of two other statistical properties: namely, detrended variance and lag-1
autocorrelation values (not shown).
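As an illustrative sketch of this kind of check (not the authors' code; the trend values below are synthetic stand-ins, and each sample would in practice hold the 15 per-ESM linear trends obtained under one RCP at a given location):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-ins for the 15 per-ESM 2006-35 linear trends (degC per decade)
# under two different RCPs at one location.
trends_rcp26 = rng.normal(0.40, 0.15, size=15)
trends_rcp85 = rng.normal(0.45, 0.15, size=15)

stat, pval = stats.ks_2samp(trends_rcp26, trends_rcp85)
print(stat, pval, pval < 0.05)   # True would reject "same parent distribution" at the 5% level
```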
Finally, it is important to mention that the 5146 locations investigated are not all climatically independent. According to a simple test, the maximal number of uncorrelated NRCan time series across
Canada during 1962–2010 is six for each season except summer (11). For two time series to be considered uncorrelated, this test requires that the Pearson linear correlation coefficient (r[Pea])
between their respective linear regression residuals be nonsignificant according to Fisher’s z-transformation test at the 95% level (Wilks 2006, section 5.4.2).
c. Scenario design
1) Method 1: Interpolated model outputs
The first method consists of interpolating each simulation at the 5146 study locations. No statistical adjustment is performed. Spatial interpolation has been used as a form of rudimentary
downscaling by Ahmed et al. (2013; see also references therein).
2) Method 2: Statistical adjustment of interannual variability
In the second method, each simulation is adjusted according to its level of agreement with observational variability over a calibration period (1962–2010). The transient background (long-term) trends
are still dictated by the simulation. Adjustment is performed independently for each simulation, season, and location. After spatial interpolation, the first step is calculating the time-varying
background state of the simulation B[SIM], as a fourth-order regression (Hawkins and Sutton 2009). The B[SIM] is then subtracted from the simulated time series to get the residual distribution {r
[SIM]}. A fourth-order regression is also used to separate the observational residuals {r[OBS]} from the background transient state B[OBS]. A transfer function is next determined, which ensures the
mapping of {r[SIM]} into {r[OBS]} for elements of the calibration period, on a quantile basis. The transfer function is then applied to the 1962–2035 residuals, and a scenario is obtained by adding B
[SIM] back to the adjusted residuals.
The statistical adjustment of interannual variability is inspired by techniques applied to daily model outputs and known under different names, including quantile mapping (Themeßl et al. 2012) and
statistical bias correction (Piani et al. 2010). Here the transfer function is linked with the residuals rather than the actual values, mostly because otherwise different observed and simulated
long-term trends would lead to inappropriate variability adjustment (Scherrer et al. 2005).
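A rough numpy sketch of the residual quantile mapping described above (my own illustration, not the authors' code; the fourth-order fit and the empirical transfer function are implemented in the simplest possible way):

```python
import numpy as np

def background(years, series, order=4):
    """Fourth-order polynomial regression used as the transient background state."""
    return np.polyval(np.polyfit(years, series, order), years)

def adjust_variability(sim_years, sim, obs_years, obs, n_quantiles=101):
    b_sim = background(sim_years, sim)
    r_sim = sim - b_sim                              # simulated residuals
    r_obs = obs - background(obs_years, obs)         # observed residuals
    calib = np.isin(sim_years, obs_years)            # calibration-period overlap

    # Empirical transfer function: map each simulated residual onto the observed
    # residual with the same quantile rank over the calibration period.
    q = np.linspace(0.0, 1.0, n_quantiles)
    r_adj = np.interp(r_sim, np.quantile(r_sim[calib], q), np.quantile(r_obs, q))

    return b_sim + r_adj    # scenario = simulated background + adjusted residuals
```

Residuals falling outside the calibration range are clamped to the end quantiles by np.interp; a production implementation would need an explicit choice about how to extrapolate the transfer function.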
3) Method 3: Stochastic interannual variability
In this method, simulations are used to provide the long-term background state, whereas observations are used to create year-to-year fluctuations around that state. Each simulation provides a
background state B[SIM] independently, which is determined as in method 2 (fourth-order regression). The same technique is used to obtain B[OBS], and the corresponding residuals {r[OBS]} are then used to create a first-order autoregressive or AR(1) model (Wilks 2006, section 8.3.1). In such a statistical model, each value x[t] of a time series partially depends on the previous one, x[t−1], following the equation

x[t] − μ = φ(x[t−1] − μ) + ε[t],

where φ corresponds to the lag-1 autoregressive parameter of {r[OBS]}, μ is the average of {r[OBS]}, and ε[t] is a random number picked from a distribution {ε}. This distribution is assumed to be Gaussian, with mean 0 and standard deviation σ[ε] = (1 − φ²)^(1/2) σ[r], σ[r] being the standard deviation of {r[OBS]}. Scenario residuals are then generated using the first value of {r[OBS]} as the initial x[0] value and randomly picking from within {ε}. Here 50 values beyond the number needed to cover the period 1962–2035 are generated, and only the last 74 values are kept, to eliminate the influence of the arbitrary x[0] value chosen to start the series. Once the time series for the scenario residuals is obtained, it is added to B[SIM] to obtain a T[mean] scenario. Whether the use of a higher-order autoregressive integrated moving average (ARIMA) model would be more appropriate has not been investigated, as was done, for example, by Foster and Rahmstorf (2011) for global temperature.
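The AR(1) generator can be sketched in a few lines of Python (again my own illustration rather than the authors' code; the lag-1 parameter, innovation standard deviation, starting value, and burn-in follow the description above):

```python
import numpy as np

def ar1_residuals(r_obs, n_out, burn_in=50, seed=0):
    """Synthetic residuals with the lag-1 autocorrelation and variance of r_obs."""
    rng = np.random.default_rng(seed)
    r = np.asarray(r_obs, dtype=float)
    mu = r.mean()
    phi = np.corrcoef(r[:-1], r[1:])[0, 1]           # lag-1 autoregressive parameter
    sigma_eps = np.sqrt(1.0 - phi ** 2) * r.std()    # innovation standard deviation
    x = np.empty(n_out + burn_in)
    x[0] = r[0]                                      # arbitrary starting value
    for t in range(1, x.size):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.normal(0.0, sigma_eps)
    return x[burn_in:]                               # drop burn-in, keep the last n_out values

# A method 3 scenario is then b_sim + ar1_residuals(r_obs, n_out=b_sim.size).
```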
Successively using each simulation of our work dataset provides an ensemble of 60 scenarios. Note that for each simulation, any number of scenarios could have been generated just by changing X[0] or
the pseudorandom seed. However, using the same number of scenarios as provided by methods 1 and 2 renders the intermethod comparison easier.
4) Methods summary
Table 2 offers an overall view of how simulations and observations are combined to produce scenarios in each method, indicating which dataset is relied upon for the background state, the sequence
(alternation of minima and maxima), and the variability. The alternative method referred to in Table 2 is not used to generate future P[cool] values and is described in section 4. Figure 3 shows what
this means concretely by showing the three scenarios that stem from the FGOALS-s2 historical/RCP6.0 simulation, for annual T[mean] values interpolated at the grid point nearest to Yellowknife
(62.13°N, −114.46°E). It is not necessary to adjust the manifest offset between simulated and observed long-term averages, because trends and P[cool] values are invariant under a constant shift.
However, to facilitate viewing, the method 3 scenario is shown with a constant shift corresponding to the difference between the means of B[SIM] and B[OBS] over the calibration period. The more
subtle offset in variability is adjusted for in method 2. In this case, standard deviations of the 1962–2010 residuals for observations, the simulation, and the method 2 scenario are 1.15°, 1.51°,
and 1.17°C, respectively. For method 3, the standard deviation of the residuals is 1.30°C; the number of time steps is too small to ensure convergence toward the prescribed standard deviation.
Table 2.
Methods summarized by source [simulations (SIM) or observations (OBS)] for three major scenario characteristics: trend, sequence, and variability. The alternative method is used for hindcast analysis
Fig. 3.
Time series used in and resulting from the different scenario design methods for annual T[mean] values from the FGOALS-s2–RCP6.0 simulation interpolated at the grid point nearest to Yellowknife. The
method 3 scenario is shifted by a constant value corresponding to the difference between the means of B[SIM] and B[OBS] over the calibration period.
d. Short-term trend calculations
Once scenarios are created, short-term trend distributions are obtained, with the aim of estimating a probability for the occurrence of a negative trend (P[cool]) for each case location/season/method
/duration. Short-term durations considered are 5, 10, 15, 20, and 25 yr. For each case, a trend distribution is obtained by running a block of the specified duration within the 2006–35 period and
calculating linear trends by least squares regression. For each case, this gives 1560 values for 5-yr durations (26 blocks × 60 scenarios), and 1260, 960, 660, and 360 values for the other durations
as they increase. It is important to note that blocks are not independent, because they overlap. As will be discussed later, this lack of independence is taken into account in assessing confidence
intervals (CIs). For each distribution, some trends are negative and some are positive, and the fraction of negative values during 2006–35 is directly interpreted as P[cool].
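For concreteness, the trend bookkeeping can be written as a few lines of illustrative Python (not the paper's code; `scenarios` is assumed to be a 60 x 30 array holding one location's annual values for 2006–35):

```python
import numpy as np

def p_cool(scenarios, years, duration):
    """Fraction of negative least-squares trends over all overlapping blocks
    of the given duration, pooled across every scenario in the ensemble."""
    trends = []
    for series in scenarios:                               # 60 scenarios
        for start in range(len(years) - duration + 1):     # overlapping blocks
            t = years[start:start + duration]
            y = series[start:start + duration]
            trends.append(np.polyfit(t, y, 1)[0])          # linear slope
    return float(np.mean(np.asarray(trends) < 0.0))

# Example: p_cool(scenarios, np.arange(2006, 2036), duration=10) pools
# 21 blocks x 60 scenarios = 1260 trend values, as described above.
```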
There are more sophisticated ways to obtain probabilities in situations like this—for example, using Bayesian approaches. However, there is no methodological consensus, and differences between
results based on various methods are sometimes large (e.g., Tebaldi and Knutti 2007). It is reasonable to consider that the more frequently negative trends occur in scenarios based on ESMs, the more
likely they are to occur in the real world.
3. Hindcast skill
a. Verification rank histograms
A consistent hindcast requires that the observed value’s rank varies with equiprobability among the different projected outcomes (Wilks 2006, section 7.7.2). Such an assessment has been performed for
method 1 and every case season/duration during the period 1962–2010, as shown in Fig. 4. Ranks are converted into percentiles. Weights are 1/(2N) for each tail and 1/N for each interval between
successive scenario P[cool] values, with N = 60 being the number of scenarios. Linear interpolation is performed between ranks, and 5-percentile bins are used for the histograms (see van Oldenborgh
et al. 2013 for further details). Each location contributes one P[cool] value based on a number of trend values, which depends on the duration. Overlap in periods for calculating trends and spatial
correlation limit the effective sample size in two different manners. For example, for the case ANN/10-yr duration, there are around 6 climatically independent regions across the country, and only 4
of the 40 trend values determining the local P[cool] value are completely independent. This indicates that the histograms include some redundant information, rendering the estimation of statistical
significance challenging (with respect to the disparities between observation and scenario P[cool] distributions). Nevertheless, rank histograms are probably among the best tools for judging how
observations differ from the hindcast ensemble. For each case in Fig. 4, observations (red histogram) differ from the ideal result (black line). But when individual simulations are compared with the
other 59, histograms also depart from the ideal result (for each bin, the gray shading ranges from the lowest to the highest frequency resulting from the intersimulation verifications). The general
location of the red histogram within the gray shading indicates that observations differ from simulations approximately as much as simulations differ from one another. However, for DJF and 15–25-yr
durations there is an overforecasting bias (the hindcast shows perceptibly fewer low P[cool] values than the observations). Also, for the case SON/25 yr, observational values occur atypically often
within the 80th–95th percentiles. The overall picture is the same with methods 2 and 3 (not shown).
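A simplified version of this bookkeeping is sketched below (my own illustration; it uses a mid-rank convention for ties instead of the exact 1/(2N) tail and 1/N interval weighting with linear interpolation described above):

```python
import numpy as np

def obs_percentile(obs_value, ensemble):
    """Approximate percentile rank of the observed P_cool within the scenario ensemble."""
    ens = np.asarray(ensemble, dtype=float)
    below = np.sum(ens < obs_value)
    equal = np.sum(ens == obs_value)
    return (below + 0.5 * equal) / ens.size

# One percentile per location, then a histogram with 5-percentile bins:
# counts, edges = np.histogram(percentiles, bins=np.arange(0.0, 1.05, 0.05))
```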
Fig. 4.
Verification rank histograms for P[cool] considering method 1. Each panel represents a case season/duration. Red histograms are for the observational percentiles within the 60 scenarios and represent
the frequency of occurrence when each of the 5146 observational P[cool] values is inserted within its corresponding distribution of 60 scenario P[cool] values. Given 5-percentile bins, a perfectly
reliable hindcast would be the black line at 0.05. Gray limits for each 5-percentile bin represent the minimal and maximal frequencies from interscenario verifications.
The fairly good verification rank histograms do not highlight the fact that P[cool] geographical patterns vary substantially between observations and individual models. Related correlation
coefficients r[Pea] (not shown) are generally small but more often positive than negative. Forcing local scenarios to adopt the observational background state improves P[cool] correlational results
substantially. This is discussed in more detail in the next section.
b. Distributions
In this section, the models’ P[cool] distributions are compared with the corresponding observational values during the calibration period (1962–2010). This procedure’s objective is mainly to assess
whether statistically adjusting the variability (with methods 2 and 3) has a beneficial impact in hindcast mode. An alternative scenario design method that consists of reproducing the observed
variability around its fourth-order regression with an AR(1) model is also used. This method has not been used beyond the calibration period, because the observed regression would have to be
extrapolated, and this procedure has been judged to be too simplistic. The alternative method’s role in the current section is limited to assessing the potential for agreement between scenario and
observational P[cool] values that could be reached were their background state the same. Such an agreement during the past period would not guarantee future agreement, but a disagreement would
clearly indicate some limitation in the scenario design method.
In Fig. 5, winter and summer 5-yr duration P[cool] distributions for 1962–2010 across Canada are shown for observations (red histograms) as well as for the individual 15 ESMs [gray histograms; black
bars connect the 4th and 12th highest frequency values for each 2.5% bin, corresponding roughly to the interquartile range (IQR)]. Each location contributes one P[cool] value. Again, overlapping
periods for trend calculation and spatial correlation limit the effective degrees of freedom and hinder clean calculations of statistical significance in the disparities. Disparities themselves are
assessed using the Kolmogorov–Smirnov distance (D[KS]), for which a value of 0 means distributions match perfectly, whereas a value of 1 indicates distributions do not even overlap. Minimal, median,
and maximal values among the 15 computed D[KS] values are provided on each panel of Fig. 5. For these cases, raw simulations (method 1) often present a P[cool] histogram with a light left tail
relative to the observational one. Another noteworthy result is that there is no marked difference between methods 1 and 2, as revealed graphically and by D[KS] values. Method 3 leads to model
histograms much closer to one another for cases shown in Fig. 5. One potential explanation is spatial decorrelation of the variability, which produces much richer effective sampling. The difference
between results for method 3 and those for the alternative method previously described illustrates how the right background state may contribute to obtaining the right P[cool] values.
Fig. 5.
Histograms of P[cool] values across Canada. Each panel represents a case method/season. Each histogram is based on 5146 5-yr P[cool] values (one value per location). Red histograms are for
observational values and gray histograms for scenarios from each of the 15 models. Black bars connect the 4th and 12th highest model frequencies in each 2.5% bin. For each case, each of the 15
scenario distributions is compared with the observational distribution through the Kolmogorov–Smirnov statistic; minimal, median, and maximal values are shown.
For a more complete picture, Fig. 6 presents the D[KS] values for each case season/duration/method/ESM. Each model’s P[cool] distribution is compared with the observational one. Again, each black bar
gives the IQR among the 15 ESMs. The green marker corresponds to D[KS] between observations and the multimodel ensemble. Results can be summarized as follows. Adjusting variability with method 2
generally does not bring P[cool] distributions closer to the observed ones. This results partially from compensational effects, whereby occurrence numbers for inflation and compression of standard
deviation are comparable (this is not the case for winter, where models produce many more cases of excessive variability than the reverse). Altered trends must also cross the zero line in order to
make a difference in P[cool], so it is in some ways harder to impact P[cool] than to impact the underlying trend distribution. Generating variability with an AR(1) model (in method 3) often—but not
always—leads to a narrower IQR, but this does not necessarily co-occur with a general D[KS] decrease. Thus, altering variability and/or lag-1 autocorrelation toward observed values does not
systematically lead to closer P[cool] distributions, although very high D[KS] values are often diminished with method 3. The multimodel distribution is found most often within the IQR of individual
models and sometimes in the best quartile. There is no case where it is found in the worst quartile.
Fig. 6.
Kolmogorov–Smirnov distance between observational and scenario distributions for each case season/duration/method. Each black bar connects the 4th and 12th highest values among the 15 ESMs
(distributions from the different RCPs are concatenated). A green marker corresponds to D[KS] between observations and the multimodel ensemble.
Results for the alternative method suggest at least two things. First, using the same background state as observations often brings a more substantial improvement than using the same standard
deviation. Second, with the same background states, it seems that 4 AR(1) runs per location are enough for having convergence toward some P[cool] distribution (this is not the case for an individual
location). The spread associated with method 3 is thus more due to the simulations having diverse offsets in their background state than to random effects of the AR(1) procedure. One case in which
the alternative method clearly results in worse D[KS] values is the DJF/10-yr case (annual results are impacted as well). For this case, scenario histograms are shifted by ~4% toward lower P[cool]
values, because of a wide region around Hudson Bay where the observed P[cool] values are much larger than expected from the alternative method (not shown). In this region, standard deviation of the
observed residuals has increased over time, and the AR(1) model cannot systematically reproduce this heteroscedasticity. Method 3 thus obtains good results for the wrong reason (incorrect
scedasticity partially compensating for inadequate background states). Another potential inadequacy of an AR(1) model could stem from residuals not being Gaussian (Gluhovsky and Agee 2007).
The availability of metrics to assess how realistic simulated P[cool] distributions are during the recent-past period renders the recourse to unequal model weights tempting. For example, a relative
weight equal to 1 − D[KS] could be a simple, defensible choice. However, the principle of model democracy used here has been preserved for two reasons. First, better past skill does not guarantee
better future skill, because the relative importance of the physical processes can evolve in a transient climate (Reifen and Toumi 2009). Second, relative model performances vary substantially across
seasons and trend durations (but less across methods). For example, each raw simulation is found in the best quartile for at least one case season/duration, and the result for the worst quartile is the same.
In this section, it has been shown that forcing simulated standard deviations to fit the observed values does not lead to systematic benefits or shortcomings in hindcasting P[cool] distributions.
This occurs because other statistical properties, such as long-term trend and scedasticity, are also important. Taking this into account, results for 2006–35 (next section) have been computed using
each of the three methods, and differences are interpreted as uncertainty related to postprocessing of simulated interannual variability.
4. Estimation of 2006–35 P[cool] values
a. A specific case (Quebec City)
Figure 7 shows smoothed histograms (distributions) of the trends during 2006–35 near Quebec City, for the five durations considered. Annual values and method 2 are used for this figure, with bins of
size 0.333°C decade^−1 (the smoothing procedure is to facilitate viewing and consists of connecting the bin centers). Unsurprisingly, medians are located in the positive trend zone (between 0.38° and
0.47°C decade^−1), the distributions widen as the duration shortens, and the fraction of the area under the curve located in the negative zone decreases as duration increases.
Corresponding P[cool] values are presented in Fig. 8, along with results for each season and for the other scenario design methods. Square symbols indicate the P[cool] value (frequency of
occurrence), whereas associated solid lines illustrate CIs at the 95% level (see the appendix). The gradual decrease in P[cool] as duration increases appears to be robust through seasons and methods.
Values approach 50% for the 5-yr duration, but this threshold is not exceeded (however, this does happen at a few locations for the 5-yr duration and method 3, with the extremal case at P[cool] =
51.9%). The MAM and DJF seasons generally show the highest P[cool] values, whereas ANN and JJA generally show the lowest values.
Fig. 7.
Smoothed histograms of short-term trends in annual T[mean] during 2006–35 near Quebec City as obtained using method 2. The same bins with width ~0.333°C decade^−1 are used for all durations.
Fig. 8.
Annual and seasonal P[cool] values (square symbols) near Quebec City as a function of the duration considered, obtained using scenarios generated by (a) method 1, (b) method 2, and (c) method 3. Bars
correspond to confidence intervals calculated following the method described in the appendix.
It is important to mention that the method for estimating CIs only accounts for stochastic uncertainty, through the assumption that each sample is drawn from a Gaussian distribution. Other issues,
including model interdependence and RCP representativeness, are not accounted for by the CIs. Dependence on the postprocessing method is, however, explicitly shown in the three panels. The variety of
error sources associated with estimating probabilities strongly suggests not interpreting P[cool] values too precisely. Hence, there must be more focus on geographical patterns and comparisons
between seasons, methods, and durations than on the P[cool] values themselves. CIs will not be presented for the remainder of the paper, for the sake of brevity.
b. Results on the Canadian scale
Probabilities of negative trends have been calculated for the 5146 grid points covering Canada. Figures 9, 10, and 11 show results obtained per season for the 20-yr duration using methods 1, 2 and 3,
respectively. Patterns are relatively similar between methods, although maps appear noticeably noisier for method 3, because of the fact that variability is spatially uncorrelated. Roughly speaking,
the regions with the highest P[cool] values are parts of the Prairies (as defined in Fig. 1), British Columbia, southern Yukon, and Newfoundland–Labrador for DJF, Newfoundland–Labrador and the
Eastern Baffin Island area for MAM, land surrounding the Beaufort Sea and a thin segment along the Labrador coast for JJA, and parts of the Prairies for SON. On the other hand, the regions with the
lowest P[cool] values correspond to parts of Nunavut for DJF and MAM, to southern Quebec, New Brunswick, and parts of the Canadian Cordillera for JJA, and to southeastern Canada and parts of Nunavut
and southern Ontario for SON.
Fig. 9.
Seasonal P[cool] values over Canada during the period 2006–35 for a 20-yr duration using method 1: (a) DJF, (b) MAM, (c) JJA, and (d) SON.
Increasing (decreasing) the duration generally leads to the same geographical patterns, with an overall decrease (increase) in P[cool] (not shown). This overall variation linked with duration is
illustrated in Table 3, which presents the P[cool] interdecile range [IDR (10th–90th percentiles)] of the distribution of the 5146 values over Canada, for each case duration/season/method. As
duration increases (for fixed season and method), the IDR moves toward lower probability values, and generally widens. With regards to the method’s impact (keeping duration and season fixed), method
3 often results in IDRs with the lowest probability values. Finally, seasonal influence could roughly be characterized by the following ranking of P[cool] for fixed duration and method: MAM, DJF,
JJA, SON, and ANN (from highest to lowest values, based on 10th and 90th percentile values). Spring and winter are associated with higher probabilities on the Canadian scale, but this is not
necessarily the case for every location.
Table 3.
The 10th–90th percentiles over P[cool] values (%) at 5146 grid points of Canadian inland territory, inferred from 60 scenarios (methods 1–3) during 2006–35.
Results provided here consist of the statistical aggregate of a variety of climate trajectories, which, moreover, stem from models with different (though related) physics formulations. A physical
interpretation would necessitate an investigation of physical processes underlying the trajectories of individual simulations, as performed, for example, by Deser et al. (2014). Such an analysis is
beyond the scope of the present paper, but how variability and long-term linear trend patterns correlate with P[cool] patterns has been investigated. Results using Spearman rank correlation
coefficients (r[Spe]) are shown in Table 4 for the 5-yr duration. For each case season/method/location, 30-yr linear regressions are performed on each of the 60 scenarios to determine individual
variability and trend values, which are next averaged. Results verify a posteriori that variability is generally linked with P[cool], although the low number of climatically independent regions
implies a low level of statistical significance. The long-term trend appears more important, and the variability-to-trend ratio is an even better indicator (absolute values of r[Spe] are the same
when the inverse signal-to-noise ratio is used). Results are qualitatively the same for longer durations but have not been shown since it is trivial that, for example, 25-yr P[cool] values are
correlated with 30-yr linear trends. These results suggest that, for example, the low winter (high summer) P[cool] values for the western Arctic Archipelago relative to other regions (Figs. 9–11) may
be explained in terms of relatively low (high) ratios of the variability over the background trend.
Table 4.
Spearman rank correlation coefficient between 5-yr P[cool] and other statistical properties of scenarios over 5146 grid points of Canadian inland territory during 2006–35. Variability (standard
deviation of linearly detrended time series) and linear trends are calculated for each scenario and then averaged over the 60 scenarios at each grid point before variability/trend ratios are
calculated and correlated with P[cool].
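The Table 4 diagnostic can be sketched with SciPy's Spearman correlation (illustrative only; the fields below are synthetic stand-ins for the per-grid-point averages described above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_points = 5146
trend = rng.uniform(0.2, 0.6, size=n_points)        # mean 30-yr linear trend (degC per decade)
variability = rng.uniform(0.5, 1.5, size=n_points)  # mean detrended standard deviation (degC)
# Toy stand-in for P_cool: probability mass below zero of a normal distribution
# centred on the trend with spread set by the variability.
p_cool = stats.norm.cdf(0.0, loc=trend, scale=variability)

for name, field in [("variability", variability),
                    ("trend", trend),
                    ("variability/trend", variability / trend)]:
    rho, _ = stats.spearmanr(p_cool, field)
    print(name, round(float(rho), 2))
```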
5. Summary with discussion
This paper aims to produce probabilistic scenarios for short-term cooling trend occurrence over Canada during 2006–35 and to evaluate the impact of climate variability postprocessing on results. The
frequency of negative temperature trends in an ensemble of 60 climate scenarios has been interpreted as the probability of cooling trend (P[cool]), using three different postprocessing methods:
direct usage of locally interpolated simulations (method 1), calibration of variability based on a procedure akin to bias correction (method 2), and use of an autoregressive model (method 3).
It has been verified in hindcast mode (during 1962–2010) that P[cool] distributions across Canada are relatively reliable. Indeed, observational distributions do not differ from scenario
distributions more than these differ from one another, except for winter and 15–25-yr durations (atypical overforecasting bias) and for autumn and 25-yr duration (underforecasting). Hindcast results
also show that postprocessing of the variability has a slight impact on P[cool] distributions but does not improve reliability much. Moreover, it appears that short-term P[cool] reliability is more
limited by inadequacies in the models' long-term trends than by inadequacies in their variability. Results for 2006–35 suggest that the most influential indicator may be the signal-to-noise ratio, because short-term P[cool] across Canada is more closely correlated with the ratio of background trend to variability than with the background trend or the variability alone.
Calculations for 2006–35 unsurprisingly lead to higher P[cool] values for shorter durations. For example, probabilities of cooling in mean annual values across Canada decrease from ~40%–46% for 5-yr
durations to ~2%–18% for 25-yr durations (interdecile ranges). The case of Quebec City highlights that stochastic uncertainty in P[cool] is considerable and may be larger than that associated with
the interannual variability postprocessing method. However, it must be mentioned that a larger ensemble would have led to less stochastic uncertainty and that the assigned number of effective degrees
of freedom is somewhat subjective, in part because models are not fully independent.
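The interdecile ranges quoted above are simply the 10th and 90th percentiles of the P[cool] field over all grid points; a minimal sketch, assuming a hypothetical array p_cool_map holding P[cool] (%) at each of the 5146 points:

```python
import numpy as np

def interdecile_range(p_cool_map):
    """10th and 90th percentiles of P[cool] (%) over all grid points."""
    lo, hi = np.percentile(p_cool_map, [10, 90])
    return lo, hi

# e.g. interdecile_range(p_cool_5yr_annual) -> roughly (40, 46), per the values quoted above
```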
It must be emphasized that besides the stochastic issue, there is also epistemic uncertainty. Indeed, the CMIP5 simulations used are based on four different representative concentration pathways
(RCPs), which do not cover all socioeconomic possibilities. For example, the deployment of a geoengineering scheme involving sulfate aerosols (Keith 2013) or a nuclear conflict (Özdoğan et al. 2013)
are two conceivable events for which regional climate impacts could be outside of the range covered by RCP-based simulations. Volcanism and solar activity, not accounted for in RCPs because they are
natural forcings, could also impact P[cool] calculations, were they more predictable. One interesting approach would be to use occurrence probabilities for significant volcanic eruptions (Hyde and
Crowley 2000) and for the onset of a grand solar minimum (Lockwood 2010).
Despite the limitations just mentioned, it seems clear that, during the coming decades, cooling trends of durations up to 25 yr (and possibly more) do represent plausible outcomes at any location in
Canada. This in no way disproves global warming, because these cooling trends occur in simulations whose background state is warming. It also appears that, as a counterpoint, short-term warming
trends could occur that are much stronger than the expected range of long-term trends.
Acknowledgments
We acknowledge the World Climate Research Programme’s Working Group on Coupled Modelling, which is responsible for CMIP, and we thank the climate modelling groups (listed in Table 1 of this paper)
for producing and making their model output available. The U.S. Department of Energy’s Program for Climate Model Diagnosis and Intercomparison provides coordinating support and led the development of
software infrastructure for CMIP in partnership with the Global Organization for Earth System Science Portals. We also thank Dan McKenney and his team at the Canadian Forest Service of Natural
Resources Canada for providing the observational product used here. Finally, we acknowledge the useful work of three anonymous reviewers.
Confidence Intervals for P[cool]
Uncertainty in P[cool] is determined by creating a model of the population of trends for each case, based on considerations found in Scheaffer and McClave (1990, section 7.3). To explain the method used, let us consider the sample of trends for the 10-yr duration shown in Fig. 4. This distribution is based on 1260 values, corresponding to 60 independent simulations and 21 blocks falling within 2006–35 (the first 10-yr block would be 2006–15), and has a mean \(m\) and a variance \(s^2\). The sample is assumed to be drawn from a population with a Gaussian distribution with mean \(\mu\) and variance \(\sigma^2\), so that P[cool] of the population is given by
\[
P_{\mathrm{cool}} = \Phi\!\left(\frac{0 - \mu}{\sigma}\right), \tag{A1}
\]
where \(\Phi\) denotes the cumulative distribution function of the standard Gaussian distribution. The parameters \(\mu\) and \(\sigma\) cannot be determined unequivocally from \(m\) and \(s\), and confidence intervals (CIs) must be set, at the 95% level in this case. The CI for \(\sigma^2\) is given by
\[
\frac{(n-1)\,s^2}{\chi^2_{\alpha/2}} \leq \sigma^2 \leq \frac{(n-1)\,s^2}{\chi^2_{1-\alpha/2}},
\]
where \(n-1\) stands for the number of degrees of freedom and \(\chi^2_{\alpha/2}\) and \(\chi^2_{1-\alpha/2}\) represent the corresponding critical values for a chi-square distribution, appropriate for the variance of a Gaussian distribution. A conservative minimum of 60 degrees of freedom is considered, because blocks overlap and to account for possible autocorrelation. The CI for \(\mu\) is normally given by
\[
m - t_{\alpha/2}\,\frac{s}{\sqrt{n}} \leq \mu \leq m + t_{\alpha/2}\,\frac{s}{\sqrt{n}},
\]
where \(\pm t_{\alpha/2}\) represent the critical values for a Student's \(t\) distribution, appropriate for the mean of a Gaussian distribution. As \(n > 30\), the \(t\) distribution approaches the Gaussian distribution, and the critical values \(\pm t_{\alpha/2} = \pm 1.96\) are used. In the present case, \(s\) is conservatively replaced by the upper bound of the CI for \(\sigma\).
Once the CIs for μ and σ are determined, the four combinations of the CI bounds are successively introduced in Eq. (A1), and the two extremal values among these four are used as the CI for P[cool].
Consistency is ensured by checking that P[cool], calculated as the proportion of negative values in the sample, falls within the corresponding CI. For the case referred to above, the sample has m =
0.41°C and s = 0.84°C, which gives a CI for P[cool] of [17.2%, 43.9%], whereas the proportion of negative values in the sample is 30.4%.
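The appendix procedure can be summarized in a short numerical sketch: a chi-square CI for the variance, a Gaussian-approximation CI for the mean (with s conservatively replaced by the upper bound on σ), and the extremal values of Eq. (A1) over the four bound combinations. The handling of the conservative degrees of freedom in the square roots is an assumption here, so the numbers only approximately reproduce the interval quoted above.

```python
# Sketch of the appendix procedure for a 95% CI on P[cool]; the treatment of the
# conservative degrees of freedom is assumed, so results should be close to, but not
# necessarily identical with, the interval quoted in the text.
import numpy as np
from scipy import stats

def p_cool_ci(m, s, dof=60, alpha=0.05):
    """CI for P[cool] from sample mean m and std s of trends, via Eq. (A1)."""
    # Chi-square CI for the population variance (dof = conservative effective d.o.f.)
    chi2_hi = stats.chi2.ppf(1 - alpha / 2, dof)
    chi2_lo = stats.chi2.ppf(alpha / 2, dof)
    sigma_lo = np.sqrt(dof * s**2 / chi2_hi)
    sigma_hi = np.sqrt(dof * s**2 / chi2_lo)

    # Gaussian-approximation CI for the mean, with s replaced by the upper bound on sigma
    half_width = 1.96 * sigma_hi / np.sqrt(dof)
    mu_lo, mu_hi = m - half_width, m + half_width

    # P[cool] = Phi(-mu/sigma); take the extremes over the four bound combinations
    p = [stats.norm.cdf(-mu / sig) for mu in (mu_lo, mu_hi) for sig in (sigma_lo, sigma_hi)]
    return min(p), max(p)

# Usage with the worked values above (m = 0.41, s = 0.84):
# lo, hi = p_cool_ci(0.41, 0.84)   # roughly (0.17, 0.44), cf. the quoted [17.2%, 43.9%]
```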
• Ahmed, K. F., G. Wang, J. Silander, A. M. Wilson, J. M. Allen, R. Horton, and R. Anyah, 2013: Statistical downscaling and bias correction of climate model outputs for climate change impact
assessment in the U.S. northeast. Global Planet. Change, 100, 320–332, doi:10.1016/j.gloplacha.2012.11.003.
• Bhend, J., and P. Whetton, 2013: Consistency of simulated and observed regional changes in temperature, sea level pressure and precipitation. Climatic Change, 118, 799–810, doi:10.1007/
• Collins, M., 2007: Ensembles and probabilities: A new era in the prediction of climate change. Philos. Trans. Roy. Soc. London, A365, 1957–1970, doi:10.1098/rsta.2007.2068.
• Cowtan, K., and R. G. Way, 2013: Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends. Quart. J. Roy. Meteor. Soc., 140, 1935–1944, doi:10.1002/qj.2297.
• Curry, J., 2011: Reasoning about climate uncertainty. Climatic Change, 108, 723–732, doi:10.1007/s10584-011-0180-z.
• de Elía, R., S. Biner, and A. Frigon, 2013: Interannual variability and expected regional climate change over North America. Climate Dyn., 41, 1245–1267, doi:10.1007/s00382-013-1717-9.
• Deser, C., R. Knutti, S. Solomon, and A. S. Phillips, 2012: Communication of the role of natural variability in future North American climate. Nat. Climate Change, 2, 775–779, doi:10.1038/
• Deser, C., A. S. Phillips, M. A. Alexander, and B. V. Smoliak, 2014: Projecting North American climate over the next 50 years: Uncertainty due to internal variability. J. Climate, 27, 2271–2296,
• Easterling, D. R., and M. F. Wehner, 2009: Is the climate warming or cooling? Geophys. Res. Lett., 36, L08706, doi:10.1029/2009GL037810.
• Ehret, U., E. Zehe, V. Wulfmeyer, K. Warrach-Sagi, and J. Liebert, 2012: HESS opinions “Should we apply bias correction to global and regional climate model data?” Hydrol. Earth Syst. Sci., 16,
3391–3404, doi:10.5194/hess-16-3391-2012.
• England, M. H., and Coauthors, 2014: Recent intensification of wind-driven circulation in the Pacific and the ongoing warming hiatus. Nat. Climate Change, 4, 222–227, doi:10.1038/nclimate2106.
• Estrada, F., P. Perron, and B. Martínez-López, 2013: Statistically derived contributions of diverse human influences to twentieth-century temperature changes. Nat. Geosci., 6, 1050–1055, doi:
• Foster, G., and S. Rahmstorf, 2011: Global temperature evolution 1979–2010. Environ. Res. Lett., 6, doi:10.1088/1748-9326/6/4/044022.
• Gluhovsky, A., and E. Agee, 2007: On the analysis of atmospheric and climatic time series. J. Appl. Meteor. Climatol., 46, 1125–1129, doi:10.1175/JAM2512.1.
• Haerter, J. O., S. Hagemann, C. Moseley, and C. Piani, 2011: Climate model bias correction and the role of timescales. Hydrol. Earth Syst. Sci., 15, 1065–1079, doi:10.5194/hess-15-1065-2011.
• Hall, J., C. Twyman, and A. Kay, 2005: Influence diagrams for representing uncertainty in climate-related propositions. Climatic Change, 69, 343–365, doi:10.1007/s10584-005-2527-9.
• Hawkins, E., and R. Sutton, 2009: The potential to narrow uncertainty in regional climate predictions. Bull. Amer. Meteor. Soc., 90, 1095–1107, doi:10.1175/2009BAMS2607.1.
• Hopkinson, R. F., D. W. McKenney, E. J. Milewska, M. F. Hutchinson, P. Papadopol, and L. A. Vincent, 2011: Impact of aligning climatological day on gridding daily maximum–minimum temperature and
precipitation over Canada. J. Appl. Meteor. Climatol., 50, 1654–1665, doi:10.1175/2011JAMC2684.1.
• Huard, D., D. Chaumont, T. Logan, M.-F. Sottile, R. D. Brown, B. Gauvin St-Denis, P. Grenier, and M. Braun, 2014: A decade of climate scenarios: The Ouranos consortium modus operandi. Bull. Amer.
Meteor. Soc., 95, 1213–1225, doi:10.1175/BAMS-D-12-00163.1.
• Hutchinson, M. F., D. W. McKenney, K. Lawrence, J. H. Pedlar, R. F. Hopkinson, E. Milewska, and P. Papadopol, 2009: Development and testing of Canada-wide interpolated spatial models of daily
minimum–maximum temperature and precipitation for 1961–2003. J. Appl. Meteor. Climatol., 48, 725–741, doi:10.1175/2008JAMC1979.1.
• Hyde, W. T., and T. J. Crowley, 2000: Probability of future climatically significant volcanic eruptions. J. Climate, 13, 1445–1450, doi:10.1175/1520-0442(2000)013<1445:LOFCSV>2.0.CO;2.
• Jones, P. D., M. New, D. E. Parker, S. Martin, and I. G. Rigor, 1999: Surface air temperature and its changes over the past 150 years. Rev. Geophys., 37, 173–199, doi:10.1029/1999RG900002.
• Kaufmann, R. K., H. Kauppi, M. L. Mann, and J. H. Stock, 2011: Reconciling anthropogenic climate change with observed temperature 1998–2008. Proc. Natl. Acad. Sci. USA, 108, 11 790–11 793, doi:
• Keith, D., 2013: A Case for Climate Engineering. MIT Press, 194 pp.
• Knight, J., and Coauthors, 2009: Do global temperature trends over the last decade falsify climate predictions? [in “State of the Climate in 2008”]. Bull. Amer. Meteor. Soc., 90 (8), S22–S23.
• Knutson, T. R., F. Zeng, and A. T. Wittenberg, 2013: Multimodel assessment of regional surface temperature trends: CMIP3 and CMIP5 twentieth-century simulations. J. Climate, 26, 8709–8743, doi:
• Knutti, R., D. Masson, and A. Gettelman, 2013: Climate model genealogy: Generation CMIP5 and how we got there. Geophys. Res. Lett., 40, 1194–1199, doi:10.1002/grl.50256.
• Kopp, G., and J. L. Lean, 2011: A new, lower value of total solar irradiance: Evidence and climate significance. Geophys. Res. Lett., 38, L01706, doi:10.1029/2010GL045777.
• Kosaka, Y., and S.-P. Xie, 2013: Recent global-warming hiatus tied to equatorial Pacific surface cooling. Nature, 501, 403–407, doi:10.1038/nature12534.
• Lean, J., 2010: Cycles and trends in solar irradiance and climate. Wiley Interdiscip. Rev.: Climate Change, 1, 111–122, doi:10.1002/wcc.18.
• Li, J., C. Sun, and F.-F. Jin, 2013: NAO implicated as a predictor of Northern Hemisphere mean temperature multidecadal variability. Geophys. Res. Lett., 40, 5497–5502, doi:10.1002/2013GL057877.
• Lockwood, M., 2010: Solar change and climate: An update in the light of the current exceptional solar minimum. Proc. Roy. Soc. London, A466, 303–329, doi:10.1098/rspa.2009.0519.
• Maraun, D., 2013: Bias correction, quantile mapping, and downscaling: Revisiting the inflation issue. J. Climate, 26, 2137–2143, doi:10.1175/JCLI-D-12-00821.1.
• Masson, D., and R. Knutti, 2011: Climate model genealogy. Geophys. Res. Lett., 38, L08703, doi:10.1029/2011GL046864.
• Meehl, G. A., J. M. Arblaster, J. T. Fasullo, A. Hu, and K. E. Trenberth, 2011: Model-based evidence of deep-ocean heat uptake during surface-temperature hiatus periods. Nat. Climate Change, 1,
360–364, doi:10.1038/nclimate1229.
• Morice, C. P., J. J. Kennedy, N. A. Rayner, and P. D. Jones, 2012: Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4
data set. J. Geophys. Res., 117, D08101, doi:10.1029/2011JD017187.
• Özdoğan, M., A. Robock, and C. J. Kucharik, 2013: Impacts of a nuclear war in South Asia on soybean and maize production in the Midwest United States. Climatic Change, 116, 373–387, doi:10.1007/
• Piani, C., J. O. Haerter, and E. Coppola, 2010: Statistical bias correction for daily precipitation in regional climate models over Europe. Theor. Appl. Climatol., 99, 187–192, doi:10.1007/
• Räisänen, J., 2007: How reliable are climate models? Tellus, 59A, 2–29, doi:10.1111/j.1600-0870.2006.00211.x.
• Räisänen, J., and T. N. Palmer, 2001: A probability and decision-model analysis of a multimodel ensemble of climate change simulations. J. Climate, 14, 3212–3226, doi:10.1175/1520-0442(2001)014
• Reifen, C., and R. Toumi, 2009: Climate projections: Past performance no guarantee of future skill? Geophys. Res. Lett., 36, L13704, doi:10.1029/2009GL038082.
• Risbey, J. S., S. Lewandowsky, C. Langlais, D. P. Monselesan, T. J. O’Kane, and N. Oreskes, 2014: Well-estimated global surface warming in climate projections selected for ENSO phase. Nat.
Climate Change, 4, 835–840, doi:10.1038/nclimate2310.
• Sakaguchi, K., X. Zeng, and M. A. Brunke, 2012: The hindcast skill of the CMIP ensembles for the surface air temperature trend. J. Geophys. Res., 117, D16113, doi:10.1029/2012JD017765.
• Santer, B. D., and Coauthors, 2011: Separating signal and noise in atmospheric temperature changes: The importance of timescale. J. Geophys. Res., 116, D22105, doi:10.1029/2011JD016263.
• Scheaffer, R. L., and J. T. McClave, 1990: Probability and Statistics for Engineers. 3rd ed. Duxbury Press, 696 pp.
• Scherrer, S. C., C. Appenzeller, M. A. Liniger, and C. Schär, 2005: European temperature distribution changes in observations and climate change scenarios. Geophys. Res. Lett., 32, L19705, doi:
• Schneider, S. H., 2002: Can we estimate the likelihood of climatic scenarios at 2100? Climatic Change, 52, 441–451, doi:10.1023/A:1014276210717.
• Sheffield, J., and Coauthors, 2013: North American climate in CMIP5 experiments. Part I: Evaluation of historical simulations of continental and regional climatology. J. Climate, 26, 9209–9245,
• Solomon, S., K. H. Rosenlof, R. W. Portmann, J. S. Daniel, S. M. Davis, T. J. Sanford, and G.-K. Plattner, 2010: Contributions of stratospheric water vapor to decadal changes in the rate of
global warming. Science, 327, 1219–1223, doi:10.1126/science.1182488.
• Stephenson, D. B., M. Collins, J. C. Rougier, and R. E. Chandler, 2012: Statistical problems in the probabilistic prediction of climate change. Environmetrics, 23, 364–372, doi:10.1002/env.2153.
• Stott, P., P. Good, G. Jones, N. Gillett, and E. Hawkins, 2013: The upper end of climate model temperature projections is inconsistent with past warming. Environ. Res. Lett., 8, 014024, doi:
• Taylor, K. E., R. J. Stouffer, and G. A. Meehl, 2012: An overview of CMIP5 and the experiment design. Bull. Amer. Meteor. Soc., 93, 485–498, doi:10.1175/BAMS-D-11-00094.1.
• Tebaldi, C., and R. Knutti, 2007: The use of the multi-model ensemble in probabilistic climate projections. Philos. Trans. Roy. Soc. London, A365, 2053–2075, doi:10.1098/rsta.2007.2076.
• Themeßl, M. J., A. Gobiet, and G. Heinrich, 2012: Empirical-statistical downscaling and error correction of regional climate models and its impact on the climate change signal. Climatic Change,
112, 449–468, doi:10.1007/s10584-011-0224-4.
• van Oldenborgh, G. J., F. J. Doblas-Reyes, S. S. Drijfhout, and E. Hawkins, 2013: Reliability of regional climate model trends. Environ. Res. Lett., 8, 014055, doi:10.1088/1748-9326/8/1/014055.
• van Vuuren, D. P., and Coauthors, 2011: The representative concentration pathways: An overview. Climatic Change, 109, 5–31, doi:10.1007/s10584-011-0148-z.
• von Storch, H., and F. Zwiers, 2013: Testing ensembles of climate change scenarios for “statistical significance.” Climatic Change, 117, 1–9, doi:10.1007/s10584-012-0551-0.
• Wilks, D. S., 2006: Statistical Methods in the Atmospheric Sciences. 2nd ed. International Geophysics Series, Vol. 59, Academic Press, 627 pp.
• Yokohata, T., and Coauthors, 2013: Reliability and importance of structural diversity of climate model ensembles. Climate Dyn., 41, 2745–2763, doi:10.1007/s00382-013-1733-9.
• Zeng, X., and A. Wang, 2012: What is monthly mean land surface air temperature? Eos Trans. Amer. Geophys. Union, 93, 156, doi:10.1029/2012EO150006.