Study on vibration characteristics of rolling mill based on vibration absorber
Vertical vibration often occurs during rolling production and affects the accuracy of the rolling mill. In order to effectively suppress the vertical vibration of the rolling equipment, a rolling mill model with a vibration absorber device was established. Based on the main resonance singularity of the rolling mill system, the best combination of opening parameters was obtained; this combination keeps the rolling mill system working in a stable region. Finally, the effects of different vibration absorber parameters on the vibration characteristics of the rolling mill system were analyzed. Results show that the vibration absorber device can effectively improve the stability of the rolling mill system.
• The rolling mill model with a vibration absorber was established.
• The main resonance singularity of the rolling mill with a vibration absorber was studied.
• The parameters of the vibration absorber seriously affect the stability of the rolling mill.
1. Introduction
With the advancement of technology, high-precision rolled products are becoming more and more significant in intelligent manufacturing. However, the vertical vibration of the rolling mill seriously affects the quality of the rolled products [1-3]. The higher the rolling speed, the more significant the vertical vibration of the rolling mill; when the rolling speed reaches a critical value, the vibration damages the mill equipment [4-6]. Therefore, the vertical vibration not only affects the precision of the rolled products but also damages the rolling equipment, especially by leaving vibration marks on the roll surface.
In recent years, the vertical vibration problem of rolling mills has received more and more attention. Researchers have studied the vertical vibration behavior of rolling mills from different angles and proposed effective methods based on numerical simulation and physical models. For instance, Fan et al. not only field-tested the vibration frequency of a CSP (Compact Strip Production) hot tandem mill and its natural frequency, but also obtained the mutual influence relationship between the vibration frequency of the hot tandem mill and its natural frequency [7]. Lemma et al. carried out numerical simulation experiments on a physical model of the rolling mill and then predicted the life of the rolls by analyzing the simulated response of the rolling mill system [8]. Brusa et al. investigated the vibration behavior of the Sendzimir mill; in order to effectively suppress the torsional and vertical vibration of the rolling mill, they developed a virtual numerical simulator in the Matlab/Simulink environment [9]. Soon after, Liu et al. set up a nonlinear vertical vibration model of the mill equipment and studied the vertical vibration characteristics of the rolling mill under a nonlinear restraint force [10]. They analyzed the relationship between mark spacing and the vibration source, and finally found a way to suppress the vertical vibration of the rolling mill.
A large number of simulation experiments and physical models have also been reported on the vertical vibration behavior, and the various mechanical parameters affecting the vertical vibration of the rolling mill have been analyzed. Sun et al. discussed the vertical vibration characteristics of the rolls under different load forces and found that the pressure distribution on the rolled products seriously affects the vibration behavior of the rolling mill [11]. Zhang et al. researched the vibration behavior of the rolling mill caused by the deformation of the rolled product, established an electromechanical coupling model of the rolling mill, and proposed a cause of the unsteady vibration of the mill [12]. Yang et al. investigated the influence of the roll process parameters on the vertical vibration of the mill; the results show that the nonlinear spring force and nonlinear friction seriously affect the stability of the mill system [13]. Fujita et al. applied electrical discharge coating as a means of improving the wear resistance of the roll surface, and the influence of the electrical machining conditions on the layer thickness, hardness, and surface roughness was evaluated and discussed [14]. Yang Xu et al. took into account the interaction between the roller and the rolled piece and established a vertical vibration model of the roll system based on the dynamic friction equation of the roll gap [15].
High-quality steel is becoming more and more significant in the field of machine manufacturing, but the vibration problem still affects the quality of rolled products. In order to suppress the vertical vibration of the rolling mill more effectively, some scholars began to study the rolling mill system from the standpoint of control theory. Ling et al. added a lateral hydraulic cylinder in the horizontal direction of the roll-system bearing seats to suppress the horizontal vibration of the rolling mill, but this method is prone to primary resonance and super-harmonic resonance [16]. Yan applied a second-order torsional vibration observer in the main drive control system; ultimately, the torsional vibration could be restrained by the second-order main drive system [17]. Liu Shuang et al. established the dynamic equation of a nonlinear torsional vibration system with two masses; after adding adaptive continuous perturbation control, the amplitude of the system decreases, and the motion transforms from chaotic to periodic [18].
In practice, it is difficult to effectively suppress the vertical vibration of the rolling mill through control algorithms and optimization of the rolling parameters alone [19, 20]. Although many researchers have suppressed the vertical vibration of the rolling mill with control algorithms, they did not give a detailed description. Therefore, a rolling mill system model with a vibration absorber was established, which relies on the interaction between the rolling mill roll system and the vibration absorber device. The main resonance singularity of the system was analyzed. In addition, an appropriate combination of opening parameters was chosen, which enables the rolling mill system to work in a stable region. Finally, the effects of the mass, spring force, and friction of the vibration absorber on the vibration behavior of the rolling mill were tested. These results provide a reliable basis for suppressing the vertical vibration of the rolling mill.
2. A mill roll model with a vibration absorber device
The influence of the nonlinear spring force of the hydraulic cylinder system on the vibration characteristics of the rolling mill system was studied from the viewpoint of nonlinear dynamics [21]. The hydraulic cylinder is placed between the rack and the upper backup roll. Strips of different thickness are rolled by changing the size of the gap between the two work rolls. The structure of the four-high mill system is shown in Fig. 1.
Owing to the symmetry of the four-high mill structure, the characteristics of the upper roll system are the same as those of the lower roll system. Therefore, only the upper roll system of the rolling mill is analyzed. In this paper, a physical model of the rolling mill system under the constraint of a nonlinear spring force is established. The vibration model of the mill rolls is shown in Fig. 2.
In this model, $m_1$ is the equivalent mass of the backup roll and work roll, $c_1$ is the equivalent damping of the rolled piece, $k_1$ is the equivalent stiffness of the rolled piece, $F_l = F\cos(\omega t)$ is the equivalent load force, $F$ is the external excitation amplitude, and $\omega$ is the angular frequency of the external excitation. $F_s(x) = k_1'' x_1 + k_3'' x_1^3$ is the spring force of the hydraulic cylinder, where $k_1''$ is the equivalent stiffness of the hydraulic cylinder, $k_3''$ is the nonlinear spring-force coefficient between the frame and the upper roll system, and $x_1$ is the vibration displacement of the backup roll and work roll.
The vibration absorber and the rolling mill roll are connected through the elastic element and damping element of the vibration absorber. The vibration absorber device is installed on the bracket of the rolling mill rolls. The vibration absorber device and the upper roller system constitute a two-degree-of-freedom system. The installation of the vibration absorber device in the rolling mill system is shown in Fig. 3.
Fig. 1. The structure diagram of mill rolls. 1. Rack, 2. Hydraulic cylinder, 3. Backup roll, 4. Roll gap, 5. Work roll, 6. Rolled piece
Fig. 2. The vibration model of mill rolls
Fig. 3. Structure diagram of the rolling mill with vibration absorber. 1. Vibration absorber, 2. Hydraulic cylinder, 3. Backup roll, 4. Upper work roll
The balance positions of the rolling mill system and the vibration absorber device at standstill are taken as the origins of motion. The magnitude of the vibration displacement of the rolling mill system reflects the vibration intensity. Therefore, in order to control the vertical vibration of the rolling mill system, it is necessary to reduce the vibration displacement of the rolling mill system. The vibration energy of the rolling mill system is transferred to the vibration absorber device through the elastic and damping elements of the vibration absorber device. The transferred vibration energy is mainly divided into two parts: one part is dissipated by the friction force of the vibration absorber device, and the other part is converted into the kinetic energy of the vibration absorber device. In this way, the vibration displacement of the rolling mill system can be effectively reduced by the vibration absorber device.
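In energy terms (a paraphrase added here for clarity, not an equation from the paper), the power extracted from the roll system by the absorber's damping element is $c_2(\dot{x}_1 - \dot{x}_2)^2$, which is dissipated as heat through friction, while the energy carried by the absorber itself is $\tfrac{1}{2}m_2\dot{x}_2^2 + \tfrac{1}{2}k_2(x_2 - x_1)^2$, its kinetic energy plus the energy stored in the coupling spring.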
Fig. 4. System model of two-degrees-of-freedom with vibration absorber
In Fig. 4, $m_2$ is the mass of the dynamic vibration absorber, $x_2$ is the absolute displacement of the vibration absorber, $k_2$ is the equivalent stiffness between the vibration absorber and the upper roller system of the rolling mill, and $c_2$ is the equivalent damping between the vibration absorber and the upper roller system of the rolling mill. On the basis of d'Alembert's principle [22], the dynamic balance equations are expressed as:
$$\begin{cases}m_{1}\ddot{x}_{1}+c_{1}\dot{x}_{1}+c_{2}\left(\dot{x}_{1}-\dot{x}_{2}\right)+k_{1}x_{1}+k_{2}\left(x_{1}-x_{2}\right)+\left(k_{1}''x_{1}+k_{3}''x_{1}^{3}\right)=F\cos\left(\omega t\right),\\ m_{2}\ddot{x}_{2}-c_{2}\left(\dot{x}_{1}-\dot{x}_{2}\right)+k_{2}\left(x_{2}-x_{1}\right)=0,\end{cases}\tag{1}$$
where $\dot{x}_1$ is the first derivative of $x_1$, $\ddot{x}_1$ is the second derivative of $x_1$, $\dot{x}_2$ is the first derivative of $x_2$, and $\ddot{x}_2$ is the second derivative of $x_2$.
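To make the model concrete, Eq. (1) can be integrated numerically. The following C++ sketch (added here for illustration; it is not part of the original paper) applies a fixed-step fourth-order Runge-Kutta scheme to Eq. (1) rewritten as four first-order equations. Every parameter value below is a placeholder chosen only to produce a bounded response, not a value from the study.

#include <array>
#include <cmath>
#include <cstdio>

// State vector: y = {x1, v1, x2, v2}
using State = std::array<double, 4>;

// Placeholder parameters (illustrative assumptions, not from the paper)
const double m1 = 100.0, m2 = 10.0;        // masses
const double c1 = 50.0,  c2 = 20.0;        // damping coefficients
const double k1 = 1.0e5, k2 = 8.0e3;       // linear stiffnesses
const double k1pp = 2.0e4, k3pp = 1.0e6;   // hydraulic-cylinder terms k1'', k3''
const double F = 500.0, omega = 30.0;      // load amplitude and frequency

// Right-hand side of Eq. (1) as four first-order ODEs
State rhs(double t, const State& y) {
    double x1 = y[0], v1 = y[1], x2 = y[2], v2 = y[3];
    double a1 = (F * std::cos(omega * t) - c1 * v1 - c2 * (v1 - v2)
                 - k1 * x1 - k2 * (x1 - x2)
                 - k1pp * x1 - k3pp * x1 * x1 * x1) / m1;
    double a2 = (c2 * (v1 - v2) + k2 * (x1 - x2)) / m2;
    return {v1, a1, v2, a2};
}

int main() {
    State y{0.001, 0.0, 0.0, 0.0};   // small initial roll displacement
    double t = 0.0, h = 1e-4;        // integration step
    for (long i = 0; i < 200000; ++i) {
        State k1s = rhs(t, y);
        State k2s, k3s, k4s, tmp;
        for (int j = 0; j < 4; ++j) tmp[j] = y[j] + 0.5 * h * k1s[j];
        k2s = rhs(t + 0.5 * h, tmp);
        for (int j = 0; j < 4; ++j) tmp[j] = y[j] + 0.5 * h * k2s[j];
        k3s = rhs(t + 0.5 * h, tmp);
        for (int j = 0; j < 4; ++j) tmp[j] = y[j] + h * k3s[j];
        k4s = rhs(t + h, tmp);
        for (int j = 0; j < 4; ++j)
            y[j] += h / 6.0 * (k1s[j] + 2 * k2s[j] + 2 * k3s[j] + k4s[j]);
        t += h;
        if (i % 10000 == 0)
            std::printf("t=%.2f  x1=%+.6e  x2=%+.6e\n", t, y[0], y[2]);
    }
}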
3. Solution of two-degree-of-freedom system based on vibration absorber control
Assuming that the rolling mill system is subjected to a periodic external load force, set $F_l = F\cos(\omega t)$. By transposition and substitution, Eq. (1) is transformed into the standard form:

$$\begin{cases}\ddot{x}_{1}+\lambda_{1}^{2}x_{1}=\delta x_{2}-\xi\left(\dot{x}_{1}-\dot{x}_{2}\right)-\rho\dot{x}_{1}-k_{3}^{*}x_{1}^{3}+F_{0}\cos\left(\omega t\right),\\ \ddot{x}_{2}+\lambda_{2}^{2}x_{2}=\gamma\left(\dot{x}_{1}-\dot{x}_{2}\right)+\lambda_{2}^{2}x_{1},\end{cases}\tag{2}$$

where

$$\lambda_{1}^{2}=\frac{k_{1}+k_{2}+k_{1}''}{m_{1}},\quad\lambda_{2}^{2}=\frac{k_{2}}{m_{2}},\quad\gamma=\frac{c_{2}}{m_{2}},\quad\rho=\frac{c_{1}}{m_{1}},$$
$$\xi=\frac{c_{2}}{m_{1}},\quad\delta=\frac{k_{2}}{m_{1}},\quad F_{0}=\frac{F}{m_{1}},\quad k_{3}^{*}=\frac{k_{3}''}{m_{1}}.$$
Based on the optimal control principle of the vibration absorber [23], $k_2$ can be approximated as $k_{2}=\frac{\mu}{\left(1+\mu\right)^{2}}k_{1}$ and $c_2$ as $c_{2}=\frac{\mu}{1+\mu}\sqrt{\frac{3\mu k_{1}m_{1}}{2\left(1+\mu\right)}}$, where $\mu=\frac{m_{2}}{m_{1}}$ is the ratio of the mass of the vibration absorber to the mass of the upper roller system.
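As a quick numerical illustration (added here; the mass ratio is an assumed value, not one from the paper), taking $\mu = 0.1$ gives

$$k_{2}=\frac{0.1}{(1.1)^{2}}k_{1}\approx 0.083\,k_{1},\qquad c_{2}=\frac{0.1}{1.1}\sqrt{\frac{3(0.1)k_{1}m_{1}}{2(1.1)}}\approx 0.034\sqrt{k_{1}m_{1}},$$

so a relatively light absorber requires only a small fraction of the primary stiffness and damping.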
Set $\delta=\epsilon\delta_{1}$, $\xi=\epsilon\xi_{1}$, $\rho=\epsilon\rho_{1}$, $F_{0}=\epsilon F_{10}$, $\gamma=\epsilon\gamma_{1}$, $\lambda_{2}^{2}=\epsilon\omega_{1}$, and $k_{3}^{*}=\epsilon k_{31}^{*}$. With this parameter replacement, Eq. (2) becomes:
$$\begin{cases}\ddot{x}_{1}+\lambda_{1}^{2}x_{1}=\epsilon\left[\delta_{1}x_{2}-\xi_{1}\left(\dot{x}_{1}-\dot{x}_{2}\right)-\rho_{1}\dot{x}_{1}-k_{31}^{*}x_{1}^{3}+F_{10}\cos\left(\omega t\right)\right],\\ \ddot{x}_{2}+\lambda_{2}^{2}x_{2}=\epsilon\left[\gamma_{1}\left(\dot{x}_{1}-\dot{x}_{2}\right)+\omega_{1}x_{1}\right].\end{cases}\tag{3}$$
In order to obtain a nonlinear approximate solution of Eq. (3), set $T_0 = t$ and $T_1 = \epsilon t$. The time derivatives are then expressed as:

$$\begin{cases}d/dt=D_{0}+\epsilon D_{1}+\dots,\\ d^{2}/dt^{2}=D_{0}^{2}+2\epsilon D_{0}D_{1}+\dots,\end{cases}\tag{4}$$

where $D_{n}=\partial/\partial T_{n}$ and $\epsilon$ is a small parameter. On the basis of the multi-scale approach [24], the solution of Eq. (3) is set as:
$$\begin{cases}x_{1}=x_{11}\left(T_{0},T_{1}\right)+\epsilon x_{12}\left(T_{0},T_{1}\right),\\ x_{2}=x_{21}\left(T_{0},T_{1}\right)+\epsilon x_{22}\left(T_{0},T_{1}\right).\end{cases}\tag{5}$$
Substituting Eq. (4) and Eq. (5) into Eq. (3) and equating terms of like powers of $\epsilon$, Eq. (3) separates into:
$$\begin{cases}D_{0}^{2}x_{11}+\lambda_{1}^{2}x_{11}=0,\\ D_{0}^{2}x_{21}+\lambda_{2}^{2}x_{21}=0,\end{cases}\tag{6}$$
$$\begin{cases}D_{0}^{2}x_{12}+\lambda_{1}^{2}x_{12}=F_{10}\cos\left(\omega t\right)+\delta_{1}x_{21}-\xi_{1}D_{0}x_{11}+\xi_{1}D_{0}x_{21}-\rho_{1}D_{0}x_{11}-k_{31}^{*}x_{11}^{3}-2D_{0}D_{1}x_{11},\\ D_{0}^{2}x_{22}+\lambda_{2}^{2}x_{22}=\gamma_{1}D_{0}x_{11}-\gamma_{1}D_{0}x_{21}+\omega_{1}x_{11}-2D_{0}D_{1}x_{21}.\end{cases}\tag{7}$$
The solution of Eq. (6) is set as:
$$\begin{cases}x_{11}=A_{1}\left(T_{1}\right)e^{i\lambda_{1}T_{0}}+\overline{A}_{1}\left(T_{1}\right)e^{-i\lambda_{1}T_{0}},\\ x_{21}=A_{2}\left(T_{1}\right)e^{i\lambda_{2}T_{0}}+\overline{A}_{2}\left(T_{1}\right)e^{-i\lambda_{2}T_{0}}.\end{cases}\tag{8}$$
Substituting Eq. (8) into Eq. (7) and introducing the small detuning parameters $\sigma$ and $\sigma_1$, the frequencies are redefined as $\omega=\lambda_{1}+\epsilon\sigma$ and $\lambda_{2}=\lambda_{1}+\epsilon\sigma_{1}$. To avoid secular terms, Eq. (7) must satisfy the following conditions:
$$\begin{cases}0.5F_{10}e^{i\sigma T_{1}}+\left(\delta_{1}+\xi_{1}\right)B_{2}e^{i\sigma_{1}T_{1}}-\left(\xi_{1}+\rho_{1}\right)i\lambda_{1}B_{1}-2i\lambda_{1}D_{1}B_{1}-3k_{31}^{*}B_{1}^{2}\overline{B}_{1}=0,\\ \gamma_{1}i\lambda_{1}B_{1}e^{-i\sigma_{1}T_{1}}-\gamma_{1}i\omega B_{2}+\omega_{1}B_{1}e^{-i\sigma_{1}T_{1}}-2i\omega D_{1}B_{2}=0.\end{cases}\tag{9}$$
To solve Eq. (9), the solution is expressed in polar form: $B_{1}=0.5ae^{i\phi_{1}}$ and $B_{2}=0.5be^{i\phi_{2}}$, where $a$, $b$, $\phi_{1}$, and $\phi_{2}$ are all functions of $T_{1}$. To decouple the equation set, introduce the intermediate variables $\theta=\sigma T_{1}-\phi_{1}$ and $\theta_{1}=\phi_{2}-\phi_{1}+\sigma_{1}T_{1}$. Substituting $B_{1}$, $B_{2}$, $\theta$, and $\theta_{1}$ into Eq. (9), the modulation equations are expressed as:
$$\begin{cases}0.5F_{10}\sin\theta+0.5\delta_{1}b\sin\theta_{1}-0.5\xi_{1}\lambda_{1}a+0.5\xi_{1}b\sin\theta_{1}-0.5\rho_{1}\lambda_{1}a-\lambda_{1}\dot{a}-\tfrac{3}{8}k_{31}^{*}a^{3}=0,\\ 0.5F_{10}\cos\theta+0.5\delta_{1}b\cos\theta_{1}+0.5\xi_{1}b\cos\theta_{1}+a\lambda_{1}\dot{\phi}_{1}=0,\\ 0.5\gamma_{1}a\lambda_{1}\cos\theta_{1}-0.5\gamma_{1}b\omega-0.5\omega_{1}a\sin\theta_{1}-\omega\dot{b}=0,\\ 0.5\gamma_{1}a\lambda_{1}\sin\theta_{1}+0.5\omega_{1}a\cos\theta_{1}+b\omega\dot{\phi}_{2}=0.\end{cases}\tag{10}$$
Eliminating $\theta$ and $\theta_{1}$, the frequency response is obtained in terms of two coupled equations:

$$\left(\gamma_{1}a\lambda_{1}\right)^{2}=\left(\omega_{1}a\right)^{2}+\left(\gamma_{1}b\lambda_{2}\right)^{2}+2\gamma_{1}b\lambda_{2}\omega_{1}a\sin\theta_{1}+\left[2b\lambda_{2}\left(\sigma-\sigma_{1}\right)\right]^{2},\tag{11}$$

$$\begin{aligned}F_{10}^{2}={}&\left[\left(\delta_{1}+\xi_{1}\right)b\right]^{2}+\left(\xi_{1}\lambda_{1}a+\rho_{1}\lambda_{1}a+0.75k_{31}^{*}a^{3}\right)^{2}\\ &-2\left(\delta_{1}+\xi_{1}\right)b\sin\theta_{1}\left(\xi_{1}\lambda_{1}a+\rho_{1}\lambda_{1}a+0.75k_{31}^{*}a^{3}\right)\\ &+\left(2a\lambda_{1}\sigma\right)^{2}+4a\lambda_{1}\sigma\left(\delta_{1}+\xi_{1}\right)b\cos\theta_{1},\end{aligned}\tag{12}$$

where the auxiliary angle satisfies

$$\sin\theta_{1}=\frac{\left(\delta_{1}+\xi_{1}\right)^{2}b^{2}+\left(\xi_{1}\lambda_{1}a+\rho_{1}\lambda_{1}a\right)^{2}+\left(2\lambda_{1}a\sigma\right)^{2}-F_{10}^{2}}{2a^{2}b\left(\delta_{1}+\xi_{1}\right)N}+\frac{\lambda_{1}\sigma MN-\lambda_{1}^{2}\sigma\left(\delta_{1}+\xi_{1}\right)b^{2}\gamma_{1}\omega_{1}\lambda_{1}}{2a^{2}b\omega_{1}NA}+\frac{\lambda_{1}\sigma L}{2a^{2}b\omega_{1}NA\left(\delta_{1}+\xi_{1}\right)},$$

$$\cos\theta_{1}=\frac{MN-\left(\delta_{1}+\xi_{1}\right)b^{2}\gamma_{1}\omega_{1}\lambda_{2}}{4ab\omega_{1}\left[\gamma_{1}\lambda_{1}\lambda_{2}+\lambda_{2}\left(\sigma-\sigma_{1}\right)N\right]}+\frac{L}{4ab\omega_{1}\left[\gamma_{1}\lambda_{1}\lambda_{2}+\lambda_{2}\left(\sigma-\sigma_{1}\right)N\right]\left(\delta_{1}+\xi_{1}\right)},$$

with

$$\begin{cases}L=\gamma_{1}\lambda_{1}\lambda_{2}F_{10}^{2}-\gamma_{1}\lambda_{2}\omega_{1}a^{2}N^{2}-\gamma_{1}\lambda_{2}\omega_{1}\left(2a\lambda_{1}\sigma\right)^{2},\\ M=\gamma_{1}^{2}a^{2}\lambda_{1}^{2}-\omega_{1}^{2}a^{2}-\gamma_{1}^{2}b^{2}\lambda_{2}^{2}-4b^{2}\lambda_{2}^{2}\left(\sigma-\sigma_{1}\right)^{2},\\ N=\xi_{1}\lambda_{1}+\rho_{1}\lambda_{1}+0.75k_{31}^{*}a^{2}.\end{cases}$$
4. Simulation research
4.1. Bifurcation characteristic analysis of rolling mill system with vibration absorber
Substituting Eq. (11) into Eq. (12) and eliminating $a$, the bifurcation equation is expressed as:

$$z^{6}+\alpha z^{4}+\beta z^{2}+\mu=0,\tag{13}$$

where $\alpha$ and $\beta$ are the unfolding parameters linked to the internal parameters of the rolling mill system, and $\mu$ is the bifurcation parameter, mainly associated with the external excitation. On the basis of singularity theory, Eq. (13) is the universal unfolding of the normal form $z^{6}+\mu=0$. Since the equation has many parameters, the bifurcation characteristics cannot be displayed directly in a single planar diagram, so the parameter plane of the rolling mill system with the vibration absorber is divided into three regions separated by three critical lines. The transition sets of the rolling mill system parameters are shown in Fig. 5.
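One convenient way to read Eq. (13) (a remark added here, not taken from the source): substituting $u=z^{2}$ reduces it to the cubic

$$u^{3}+\alpha u^{2}+\beta u+\mu=0,$$

so the number of positive real roots $u$, and hence of steady-state amplitudes $z=\sqrt{u}$, changes exactly when the parameter combination $(\alpha,\beta,\mu)$ crosses one of the transition sets.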
Fig. 5. Transition sets of system parameters
Fig. 6. Bifurcation diagram of the system
As shown in Fig. 6, when the opening parameter combination of the rolling mill system lies in region I, the corresponding bifurcation topology of the system has a single stable solution. If the opening parameter combination crosses the critical state of the set $H_1$, the vibration amplitude of the system jumps as the bifurcation parameter changes; the vibration amplitude of the rolling equipment shows an unstable multi-valued phenomenon when the bifurcation parameter is close to zero. When the opening parameter combination moves from the critical state of the bipolar-point set D into region III, the bifurcation-parameter interval over which the vibration amplitude of the rolling mill system is multi-valued gradually enlarges. Obviously, the rolling mill vibration system becomes more sensitive in region III. The opening parameters $\alpha$ and $\beta$ are mainly related to the cubic coefficient of the system. Therefore, the rolling process parameters should be selected properly so that, as far as possible, the combination of the opening parameters $\alpha$ and $\beta$ is kept within region I.
4.2. Internal resonance characteristic analysis
The time-domain and phase diagrams before and after adding the vibration absorber device are shown in Fig. 7 and Fig. 8. When the rolling mill system does not have a vibration absorber device, the vibration displacement still gradually converges to a steady state, but the convergence time is prolonged, and the system phase diagram is a closed curve. It can be seen that the system is in a relatively steady state, but with a trend toward unstable development. When the mill system is equipped with a vibration absorber device, the vibration displacement gradually converges to the steady state with the passage of time, and the system phase diagram is again a closed curve; the system is in a steady state.
Fig. 7. The rolling mill system without a vibration absorber device
Fig. 8. The mill system equipped with a vibration absorber device
From Fig. 9 to Fig. 11, the influence of the mass, spring force, and friction force of different vibration absorbers on the amplitude-frequency characteristic curve of the rolling mill vibration is analyzed. In Fig. 9, the amplitude-frequency characteristic curves for vibration absorbers of different masses differ in curvature and height. In Fig. 10, the stiffness coefficient of the vibration absorber changes the curvature of the amplitude-frequency characteristic curve, thereby changing the range of system stability. In Fig. 11, the frictional force of the vibration absorber changes the height of the amplitude-frequency characteristic curve.
Fig. 9. Amplitude-frequency characteristics of different absorbers
Fig. 10. Amplitude-frequency characteristics of the spring force of different vibration absorbers
Fig. 11. Amplitude-frequency characteristics of the friction force of different vibration absorbers
5. Conclusions
Taking the impact of the vertical vibration of the rolling mill system into consideration, a rolling mill model with a vibration absorber was established. It was found that the vibration energy of the rolling mill system could be transferred to the vibration absorber device: one part of the vibration energy was converted into heat by the friction of the vibration absorber device, and the other part was converted into the kinetic energy of the vibration absorber device. Therefore, the vibration displacement of the rolling mill system is effectively reduced by the vibration absorber device.
On the basis of the main-resonance amplitude-frequency response equation, the main resonance singularity of the rolling mill system with a vibration absorber was studied. The bifurcation topology of the rolling mill system showed a stable, unique solution when the parameter combination of $\alpha$ and $\beta$ was in region I. Finally, the best combination of the opening parameters $\alpha$ and $\beta$ was selected by analyzing the transition sets of the system parameters and the bifurcation topology.
From the influence of the mass, spring force, and friction force of the vibration absorber on the stability of the amplitude-frequency characteristic curve of the rolling mill, some conclusions are drawn. Within a certain range, the smaller the mass of the vibration absorber device, the smaller the height and curvature of the amplitude-frequency characteristic curve of the rolling mill system. The greater the spring force of the vibration absorber, the smaller the unstable area of the mill system. The greater the friction of the vibration absorber, the smaller the vibration amplitude of the mill system. Therefore, it is important to choose the best mass, spring force, and friction of the vibration absorber to effectively suppress the vertical vibration of the rolling mill system.
• Niziol J., Swiatoniowski A. Numerical analysis of the vertical vibrations of rolling mills and their negative effect on the sheet quality. Journal of Materials Processing Technology, Vol. 162,
Issue 1, 2005, p. 546-550.
• Heidari A., Forouzan M. R. Optimization of cold rolling process parameters in order to increasing rolling speed limited by chatter vibrations. Journal of Advanced Research, Vol. 4, Issue 1, 2013,
p. 27-34.
• Yarita I., Furukawa K., Seino Y. Analysis of chattering in cold rolling for ultrathin gauge steel strip. Transactions of the Iron and Steel Institute of Japan, Vol. 18, Issue 1, 1978, p. 1-10.
• Wu S., Shao Y., Wang L., et al. Relationship between vibration marks and rolling force fluctuation for twenty-high roll mill. Engineering Failure Analysis, Vol. 55, Issue 1, 2015, p. 87-99.
• Yildiz S. K., Forbes J. F., Huang B., et al. Dynamic modeling and simulation of a hot strip finishing mill. Applied Mathematical Modeling, Vol. 33, Issue 7, 2009, p. 3208-3225.
• Kim Y., Kim C. W., Lee S., et al. Dynamic modeling and numerical analysis of a cold rolling mill. International Journal of Precision Engineering and Manufacturing, Vol. 14, Issue 3, 2013.
• Fan X. B., Zang Y., et al. Vibration of CSP hot strip mill. Journal of Mechanical Engineering, Vol. 43, Issue 8, 2007, p. 198-201.
• Brusa E., Lemma L. Numerical and experimental analysis of the dynamic effects in compact cluster mills for cold rolling. Journal of Materials Processing Technology, Vol. 209, Issue 5, 2009.
• Brusa E., Lemma L., Benasciutti D. Vibration analysis of a Sendzimir cold rolling mill and bearing fault detection. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of
Mechanical Engineering Science, Vol. 224, Issue 8, 2010, p. 1645-1654.
• Liu F., Liu B., Liu H., et al. Vertical vibration of strip mill with the piecewise nonlinear constraint arising from hydraulic cylinder. International Journal of Precision Engineering and
Manufacturing, Vol. 16, Issue 9, 2015, p. 1891-1898.
• Sun J. L., Peng Y., Gao Y. A., et al. Simulation and experimental study of horizontal vibration of hot strip mill. Journal of Central South University, Vol. 46, Issue 12, 2015, p. 4497-4503.
• Zhang Y., Yan X., Ling Q. Electromechanical coupling vibration of rolling mill excited by variable frequency harmonic. Advanced Materials Research, Vol. 912, Issue 914, 2014, p. 662-665.
• Yang X., Tong C. N., Meng J. J. Rolling force model of vibration zone in cold strip mill. Journal of Vibration, Testing and Diagnosis, Vol. 30, Issue 4, 2010.
• Fujita N., Kimura Y., Kobayashi K., et al. Dynamic control of lubrication characteristics in high speed tandem cold rolling. Journal of Materials Processing Technology, Vol. 229, 2016.
• Yang X., Li Q., Tong C., et al. Vertical vibration model for unsteady lubrication in rolls-strip interface of cold rolling mills. Advances in Mechanical Engineering, Vol. 4, Issue 12, 2012.
• Ling Q. H., Yan X. Q., Zhang Y. F. Research on nonlinear horizontal vibration suppression of hot strip mill. Journal of Chang’an University (Natural Science), Vol. 35, Issue 6, 2015, p. 145-151.
• Yan X. Q. Electro-hydraulic coupling vibration control of hot strip mill. Journal of Mechanical Engineering, Vol. 47, Issue 17, 2011, p. 61-65.
• Liu S., Ai H. L., Lin Z. J., et al. Analysis of vibration characteristics and adaptive continuous perturbation control of some torsional vibration system with backlash. Chaos Solitons and
Fractals, Vol. 103, 2017, p. 151-158.
• Zhu Y., Jiang W., Liu S., et al. Research on influences of nonlinear hydraulic spring force on nonlinear dynamic behaviors of electro-hydraulic servo system. China Mechanical Engineering, Vol.
26, Issue 8, 2015, p. 1085-1091.
• Han M., Romanovski V. G., Zhang X. Equivalence of the Melnikov function method and the averaging method. Qualitative Theory of Dynamical Systems, Vol. 15, Issue 2, 2016, p. 471-479.
• Wang Z., Tian Q., Hu H., et al. Nonlinear dynamics and chaotic control of a flexible multibody system with uncertain joint clearance. Nonlinear Dynamics, Vol. 86, Issue 3, 2016, p. 1571-1597.
• Glocker C. H. Discussion of d’Alembert’s principle for non-smooth unilateral constraints. ZAMM Journal of applied mathematics and mechanics: Zeitschrift für angewandte Mathematik und Mechanik,
Vol. 79, 1999, p. 91-94.
• Guohua Z., Aiguo C., Zhen W., et al. Analysis of lightweight composite body structure for electrical vehicle using the multiscale approach. Journal of Mechanical Engineering, Vol. 52, Issue 6,
2016, p. 145-152.
• Li L. Optimal parameters selection and engineering implementation of dynamic vibration absorber attached to boring bar. INTER-NOISE and NOISE-CON Congress and Conference Proceedings, 2016.
About this article
Keywords: vibration absorber, rolling mill, opening parameters, parameters optimization
Copyright © 2019 Jialei Jiang, et al.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
E-learning course overview
• Registration deadline: open-end
To take this course, you must register following the instructions on this page. You will need an email address of the Ruhr-University or the Technical University Dortmund for registration. If you
are an exchange student without such an email address or come from another university within the Ruhr-Alliance, contact us by email as instructed there. When registering, please fill in your
degree program (for example, "MSC Angewandte Informatik", not just "Master of Science"). This is important information for us to manage exams and credit points.
This course will be taught in the inverted classroom format combining presence and online features.
For each lecture, a video will be made available that students should watch BEFORE the lecture hour. During the lecture hour, we'll discuss the material presented in the video. So to profit from
that, you MUST have seen the video beforehand. That discussion takes place in the classroom, but can also be followed in real time through a Zoom channel. Both students in the class room and students
following online should and can ask questions.
The same format and same Zoom channel will be used for the exercise sessions. In these, the solutions of the corrected exercises will be discussed. The exercise session can also be used to ask
general questions.
The course uses e-learning features provided through the present webpages (see under "E-LEARNING"). This course is NOT managed through moodle! Once registered, the e-learning webpages at ini.rub.de
will give you access to the video lectures, the zoom room, the lecture slides, readings, exercise sheets, and more. You will upload your exercise solutions and will see the marked corrections to your
solutions there. You can also ask questions in the "discussion forum".
To take the course, you must, therefore, register through this webpage: Go to "e-learning", select this course, and follow the instructions there. The email-address requirements and degree-program details are the same as described at the top of this page.
This course lays the foundations for a neurally grounded understanding of the fundamental processes in perception, in cognition, and in motor control, that enable intelligent action in the world. The
theoretical perspective is aligned with ideas from embodied and situated cognition, but embraces concepts of neural representation and aims to reach higher cognition. Neural grounding is provided at
the level of populations of neurons in the brain that form strongly recurrent neural networks and are ultimately linked to the sensory and motor surfaces.
The theoretical concepts on which the course is based come from dynamical systems theory. These concepts are used to characterize neural processes in strongly recurrent neural networks as neural dynamic systems, in which stable activation states emerge from the connectivity patterns within neural populations. These connectivity patterns imply that neural populations represent low-dimensional feature spaces. This leads to neural dynamic fields of activation as the building blocks of neural cognitive architectures. Dynamic instabilities induce change of attractor states from which
cognitive functions such as detection, change, or selection decisions, working memory, and sequences of processing stages emerge.
The course partially follows a textbook (Dynamic Thinking—A primer on Dynamic Field Theory, Schöner, Spencer, and the DFT research group. Oxford University Press, 2016), of which chapters will serve
as reading material. Exercises will focus on hands-on simulation experiments, but also involve readings and the writing of short essays on interdisciplinary research topics. See
www.dynamicfieldtheory.org for some of that material. Tutorials on mathematical concepts are provided, so that training in calculus and differential equations is useful, but not a prerequisite for
the course.
Lecturer (+49) 234-32-27965 gregor.schoener@ini.rub.de NB 3/31
Teaching Assistant (primary contact) daniel.sabinasz@ini.rub.de
Tutor (+49) 234-32-27973 raul.grieben@ini.rub.de NB 02/74
Tutor rebecca.baldi@ini.rub.de
Tutor (+49) 234-32-27971 lukas.bildheim@ini.rub.de NB 02/76
Tutor (+49) 234-32-15884 stephan.sehring@ini.rub.de NB 02/75
Tutor (+49) 234-32-27976 minseok.kang@ini.rub.de NB 02/75
Course type
6 CP
Winter Term 2022/2023
Takes place every week on Thursday from 14:15 to 16:00 in room NB 3/57.
First appointment is on 13.10.2022
Last appointment is on 02.02.2023
Takes place every week on Thursday from 16:00 to 16:45 in room NB 3/57.
First appointment is on 20.10.2022
Last appointment is on 02.02.2023
This course requires some basic math preparation, typically as covered in two semesters of higher mathematics (functions, differentiation, integration, differential equations, linear algebra). The
course does not make extensive use of the underlying mathematical techniques, but uses the mathematical concepts to express scientific ideas. Students without prior training in the relevant
mathematics may be able to follow the course, but will have to work harder to familiarize themselves with the concepts.
Exercises are organized by Daniel Sabinasz. Details on grading are available in the course rules below.
The course will be based on selected chapters of a textbook (Dynamic Thinking: A Primer on Dynamic Field Theory by Schöner, G., Spencer, J, and the DFT Research Group, Oxford University Press). The
Introduction and the first two chapters are available for download in the course materials below. These and others will also serve as readings for some of the exercises.
For the mathematical background in dynamical systems an excellent resource is a book that is available online as a free download (thanks to the author's generosity): Edward R. Scheinerman's
Invitation to Dynamical Systems. This book covers both discrete and continuous time dynamical systems, while in the course we will only make use of continuous time dynamical systems formalized as
differential equations.
Teaching Units
Organization of the course
Lecture slides
This will be discussed in the first live session...
Document Rules for credit
Watch this for an introduction to the topic and an overview of the course.
Lecture slides Introduction
Dynamical systems tutorial
Lecture Dynamical systems tutorial part 1
Dynamical systems tutorial part 1
Video This lecture given by Sophie Aerdker gives a brief introduction into foundational concepts from the mathematics of dynamical systems as preparation for the neural dynamics in Dynamic Field
Theory, covered in the rest of the course.
Dynamical systems tutorial part 2
slides The second part of the dynamical systems tutorial presented by Sophie Aerdker as background for the Neural Dynamics course. This covers bifurcations and their significance for modeling.
Video Dynamical systems tutorial part 2
Simple simulator
This Matlab code illustrates the ideas of numerical simulation of differential equations... As a RUB student you have access to Matlab here.
Exercise 1
Note: this is the most mathematical of all exercises. For mathematically skilled participants, this should not be hard, but hopefully insightful. For those who have not practiced math for a while, this will be hard. But the remainder of the course and of the exercises will NOT be in this style, so do not despair if you struggle with this exercise sheet.
Braitenberg vehicles
Lecture Embodied nervous systems: Braitenberg vehicles
Embodied nervous systems: Braitenberg vehicles
Video This lecture is part of the introductory portion of the course in neural dynamics. It uses the metaphor proposed by Valentino Braitenberg of organisms as vehicles, with sensors, motors, a body that connects these mechanically, and a nervous system that connects these neurally, embedded in a structured environment. We see how behavior emerges from intuitive mental simulation and how this can be made exact based on models of the environment and of the vehicle. This leads to the notion of behavioral attractor dynamics. I discuss the relation to cybernetic thinking and point to how neural dynamics goes beyond this simple case.
Exercises Exercise 2: Braitenberg vehicles
Neural dynamics
Lecture Neural dynamics
Neural dynamics
Video This is the first lecture in the course that introduces neural dynamics properly speaking. Motivated by the dynamics of the membrane potential of neurons, the basic equation is introduced and illustrated. The simplest recurrent network of one neuron coupled excitatorily to itself is used to introduce the detection instability. Two neurons that are inhibitorily coupled exemplify competitive selection.
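As an illustration of the self-excitation mechanism mentioned above, here is a minimal sketch (added to this page; it is not course material and is independent of the course's Matlab simulator) that Euler-integrates a standard neural dynamics of the form tau*du/dt = -u + h + s(t) + c*g(u). All parameter values are illustrative assumptions. Ramping the stimulus s up and then back down exposes the detection instability as hysteresis: the activation switches on at a larger s than the s at which it switches off.

#include <cmath>
#include <cstdio>

// Sigmoidal output nonlinearity g(u) with steepness beta
double g(double u, double beta = 4.0) {
    return 1.0 / (1.0 + std::exp(-beta * u));
}

int main() {
    const double tau = 0.02;   // time constant
    const double h   = -2.0;   // negative resting level
    const double c   = 2.0;    // self-excitation strength (c*beta/4 > 1: a bistable range exists)
    const double dt  = 1e-4;   // Euler time step
    const int    n   = 30000;  // steps per ramp phase

    double u = h;              // start at the resting level
    for (int phase = 0; phase < 2; ++phase) {      // 0: ramp s up, 1: ramp s down
        for (int i = 0; i <= n; ++i) {
            double s = (phase == 0) ? 3.0 * i / n : 3.0 * (n - i) / n;
            u += dt / tau * (-u + h + s + c * g(u));
            if (i % 5000 == 0)
                std::printf("phase %d  s=%.2f  u=%+.3f\n", phase, s, u);
        }
    }
    // With these parameters the activation switches on near s ~ 1.3 on the
    // way up but only switches off near s ~ 0.7 on the way down: hysteresis,
    // the signature of the detection instability.
}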
Exercises Exercise 4 on neural dynamics
Foundations of Dynamic Field Theory (DFT)
Lecture DFT: Foundations and detection
DFT: Foundations and detection
Video This is the core lecture on Dynamic Field Theory for the Neural Dynamics course. It introduces the notion of a neural dynamic field, making sense of the dimensions over which such fields
are defined, and proceeds to discuss the basic attractor states and their instabilities. The detection instability is discussed in some depth and linked to psychophysical evidence.
Exercises Exercise 5: detection
Lecture DFT: Selection
DFT: selection
Video This is the second core lecture on Dynamic Field Theory for the Neural Dynamics course. It focusses on selection decisions. I first review how such decisions are made in DFT and what functional properties emerge from that mechanism. Then I discuss the limited evidence we have about "free" choice decisions by reviewing work on saccadic eye movements. The reaction time paradigm is then discussed in light of DFT accounts. Finally, selection decisions in the timed-movement-initiation paradigm are presented as a major source of empirical support for the DFT framework of selection.
Exercises Exercise 6: Selection
Document Reading for Exercise 6: movement preparation
DFT: Memory
Video This lecture reviews both the memory trace and working memory as two further foundational elements of Dynamic Field Theory. I also introduce briefly the 3-layer field model of change
detection and show how this accounts for signatures of visual working memory. The A not B paradigm of Piaget is used to illustrate all these ideas.
Lecture DFT: Memory
Exercises Exercise 7 Memory
Lecture DFT embodied
DFT embodied
This short lecture illustrates how neural dynamic fields can be directly driven from time varying sensory inputs and can conversely drive motor behaviors in closed loop.
Lecture DFT Neural basis
DFT neural basis
A short lecture about how neural dynamic fields are linked to distributions of neural population activation.
Higher dimensional fields
Higher dimensional fields: Binding, search, and coordinate transforms
Video This lecture explores neural dynamic fields in higher dimensions. When these dimensions combine different features, they represent bound objects. We show how this enables new cognitive functions, most prominently search, exemplified in visual search. The scaling problem with an increasing number of dimensions is addressed and localist representations are contrasted to distributed representations. The binding through space of Feature Integration Theory provides a solution for our localist approach in DFT. Finally, I show how coordinate transforms are enabled by bound representations and how they are a possible reason for the attentional bottleneck in visual (and other) cognition.
Lecture Higher dimensional fields: binding, search, coordinate transforms
What is DFT?
Exercises Essay exercise 8: What is DFT?
Sequence generation
Lecture slides Sequence generation
Sequence generation
This is an edited version of a video from the 2022 DFT summer school that provides the core ideas around autonomous generation of sequences of mental or motor states.
Toward higher cognition
Video DFT models of grounded cognition
Lecture DFT models of grounded cognition
DFT models of compositionality (scibo)
Video In this lecture, Daniel Sabinasz reviews the famous notion of productivity, namely, the ability to flexibly join "atomic" linguistic units into "molecular" linguistic units, and to join molecular linguistic units into more complex molecular linguistic units. He further reviews the notion of compositionality, which accounts for how we understand molecular expressions by virtue of understanding the meanings of their parts and the way that the parts are combined. This leads to a discussion of how compositionality may be achieved by neural systems in a way that is consistent with the principles formalized in Dynamic Field Theory.
Video DFT models of compositionality (youtube)
Lecture DFT models of compositionality
Background reading
Here are some chapters from the book "Dynamic Thinking -- A primer on Dynamic Field Theory" (Schöner, Spencer and the DFT Research Group, Oxford University Press, 2016), which may serve as background
reading for the course.
CEDAR Tutorial
Project: Implementing a simple Visual Search architecture with CEDAR
Exercises This tutorial will walk you through the steps of implementing a simple visual search architecture with CEDAR. This is an optional exercise for you and will give you practice in how to
build cognitive architectures.
Template for the visual search project
Reference Solution to visual search exercise
The Institut für Neuroinformatik (INI) is a central research unit of the Ruhr-Universität Bochum. We aim to understand the fundamental principles through which organisms generate behavior and
cognition while linked to their environments through sensory systems and while acting in those environments through effector systems. Inspired by our insights into such natural cognitive systems, we
seek new solutions to problems of information processing in artificial cognitive systems. We draw from a variety of disciplines that include experimental approaches from psychology and
neurophysiology as well as theoretical approaches from physics, mathematics, electrical engineering and applied computer science, in particular machine learning, artificial intelligence, and computer vision.
Universitätsstr. 150, Building NB, Room 3/32
D-44801 Bochum, Germany
Tel: (+49) 234 32-28967
Fax: (+49) 234 32-14210
Corresponding Angles: Definition, Types and Examples | Turito
What are Corresponding Angles?
Corresponding angles are the angles formed in matching corners where a transversal crosses two lines. When two parallel lines are intersected by any other line, i.e., the transversal, corresponding angles are created. For example, angles p and w are corresponding angles in the given figure. These are the angles that occupy the same relative position at each intersection with the transversal. If the lines are parallel, the corresponding angles formed are also equal. Corresponding angles are one type of angle pair; a transversal also creates alternate interior angles and alternate exterior angles.
Types of Corresponding Angles
Now that we understand what corresponding angles are, let us look at their types. We know that a transversal line can intersect two parallel or non-parallel lines. Thus, these angles are of two types:
1. Corresponding Angles, Including Parallel Lines and Transversals
When a transversal line crosses two given parallel lines, the corresponding angles formed have equal measure. For example, the two parallel lines in the figure have a transversal intersecting them, creating eight angles in total. Each angle at the intersection of the first line with the transversal is equal to the corresponding angle at the intersection of the second line with the transversal. Hence,
• ∠p = ∠w
• ∠q = ∠x
• ∠r = ∠y
• ∠s = ∠z
∠p = ∠s, ∠q = ∠r, ∠w = ∠z and ∠x = ∠y are pairs of vertically opposite angles.
2. Corresponding Angles, Including Non-parallel Lines and Transversals
When a transversal line intersects two non-parallel lines, the corresponding angles formed will not have any relation and will be unequal. They will be corresponding but not equal.
• Two corresponding angles cannot be adjacent angles.
• Two corresponding angles cannot be consecutive interior angles as they do not touch.
• The angles lying opposite to transversal are alternate angles.
• The two corresponding angles will be equal when the transversal line intersects two parallel lines.
• An interior and exterior angle correspond to each other by being on the same transversal side.
Types of Corresponding Angles According to the Sum
Based on their sum, corresponding angle pairs are of two types:
• Supplementary corresponding angles (when the sum is 180 degrees)
• Complementary corresponding angles (when the sum is 90 degrees)
Corresponding Angles Theorem
The corresponding angles theorem states that if a transversal intersects two parallel lines, the corresponding angles are congruent. In other words, whenever a transversal crosses two mutually parallel lines, each pair of corresponding angles is equal.
Corresponding Angles in a Triangle
In a triangle, the angles of a congruent pair of sides of two congruent or identical triangles are corresponding angles. Therefore, these angles have the same value or are equal.
Corresponding Angle Proposition
This proposition, or theorem, of corresponding angles states:
"When a transversal intersects two parallel lines, the corresponding angles formed in the regions of intersection are congruent."
The Corresponding Angles Theorem Converse
The corresponding angle theorem also works in reverse, so the converse statement can be formed as:
"If a transversal intersects two lines such that the corresponding angles formed are congruent, then the two lines are parallel."
Applications of Corresponding Angles
Corresponding angles have a wide range of applications that we often ignore. Let us study a few practical applications of corresponding angles.
• Usually, windows have grills in the form of square boxes or diamond blocks. They make corresponding angles.
• The bridge on the gigantic pillar stands strong because the pillars are connected in such a way that corresponding angles are equal.
• The railway tracks are professionally designed so that corresponding angles are equal.
Types of Angles:
Different types of Angles can form by the intersection of two or more lines. Let us discuss them briefly:
• Acute angle: An angle whose value lies between 0° and 90° is an acute angle.
• Obtuse angle: An angle whose value lies between 90° and 180° is an obtuse angle.
• Right angle: An angle whose value is 90° is a right angle.
• Straight angle: An angle whose value is 180° is a straight angle.
• Supplementary angles: When the addition of two angles is equal to 180°, then the angles are called supplementary angles. Two right angles are always supplementary angles.
• Complementary angles: When the addition of two angles is equal to 90°, these angles are complementary angles.
• Adjacent angles: Adjacent angles are the angles that have a common vertex and a common arm.
• Vertically opposite angles: If two lines intersect, the angles created opposite to each other at the point of intersection are vertically opposite angles.
Corresponding Angles Examples
Example 1: If the two corresponding angles are 6x + 12 and 70. Find the value of x?
Solution: Let the two angles be congruent corresponding angles.
6x + 12 = 70
6x = 70 – 12
6x = 58
x = 58 / 6 ≈ 9.67
Example 2: Two corresponding angles are 8y – 15 and 6y + 7. What is the value of each corresponding angle?
Solution: Given values of corresponding angles are
8y – 15 and 6y + 7
We will now find the value of the variable y.
We know that these are congruent corresponding angles.
8y – 15 = 6y + 7
8y – 6y = 7 + 15
2y = 22
y = 11
The magnitude of each corresponding angle,
8y – 15 = 8(11) – 15 = 73
6y + 7 = 6(11) + 7 = 73
Example 3: Given:
∠1 = 5x + 1 and ∠3 = 6x – 3, are two corresponding angles.
Find the value of x.
Solution: As these are corresponding angles, they will be congruent since the lines are said to be parallel.
We will now equate both the angles, and solve for x.
∠1 = 5x + 1 and ∠3 = 6x – 3,
5x + 1 = 6x – 3
1 + 3 = 6x – 5x
4 = x
Hence the value of x is 4.
Example 4: When two corresponding angles are ∠2 = 6x + 4 and ∠6 = 5x + 12. Find the value of x.
Solution: As these are corresponding angles, they will be congruent since the lines are said to be parallel.
We will now equate both the angles, and solve for x
∠2 = 6x + 4 and ∠6 = 5x + 12
6x + 4 = 5x + 12
6x – 5x = 12 – 4
x = 8
Hence the value of x is 8.
Example 5: When two corresponding angles are ∠7 = 5x + 6 and ∠3 = 9x – 10. Find the value of x.
Solution: As they are corresponding angles and the lines are parallel, they must be congruent.
Equate the given expressions ∠7 = 5x + 6 and ∠3 = 9x – 10 and find the value of x.
5x + 6 = 9x – 10
6 + 10 = 9x – 5x
16 = 4x
x = 16 / 4
x = 4
Hence the value of x is 4.
Frequently Asked Questions
1. Can Corresponding Angles be Supplementary?
Answer: Yes. If the transversal intersects two parallel lines perpendicularly, each corresponding angle measures 90 degrees, so each pair of corresponding angles sums to 180 degrees (i.e., they are supplementary).
2. Are all Corresponding Angles Equal?
Answer: No, not all corresponding angles are the same. The corresponding angles are equal when a transversal intersects two parallel lines.
3. What is the Angle Rule for Corresponding Angles?
Answer: The corresponding angles postulates or the angle rule of corresponding angles states that the corresponding angles are equal if a transversal cuts two parallel lines.
4. What do Corresponding Angles Look Like?
Answer: Corresponding angles sit in matching corners. At each intersection of the transversal with the two lines, the angles of a corresponding pair occupy the same relative position, so in a diagram they look like copies of the same corner repeated at both crossings.
5. Do corresponding angles add up to 360?
Answer: Angles measured around a point will always total 360 degrees. All the angles above add up to 360°. 53° + 80° + 140° + 87° = 360°.
Depth First Search in Graphs
This lesson will teach you how to write a recursive code for depth first search in graphs.
What is Depth First Search?
Depth First Search is a way to traverse and search all nodes in a graph. The algorithm starts from the root node and traverses all the way down a branch until it reaches a leaf, i.e., a node with no unvisited children, and then backtracks. This continues until all nodes are traversed. The illustration below gives a better picture of DFS.
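Since the lesson's own code is not reproduced on this preview page, here is a minimal sketch of recursive DFS over an adjacency-list graph, written in C++ (the language of this course); the small graph in main is a made-up example, not the lesson's.

#include <iostream>
#include <vector>

// Recursive depth-first search: visit a node, then recurse into each
// unvisited neighbor, backtracking automatically as the calls return.
void dfs(int node, const std::vector<std::vector<int>>& adj,
         std::vector<bool>& visited) {
    visited[node] = true;
    std::cout << node << ' ';
    for (int next : adj[node]) {
        if (!visited[next]) {
            dfs(next, adj, visited);   // go deeper along this branch
        }
    }
    // Reaching here means every neighbor is visited: backtrack.
}

int main() {
    // A small example graph (0 is the root):
    //        0
    //       / \
    //      1   2
    //     / \   \
    //    3   4   5
    std::vector<std::vector<int>> adj = {
        {1, 2}, {0, 3, 4}, {0, 5}, {1}, {1}, {2}
    };
    std::vector<bool> visited(adj.size(), false);
    dfs(0, adj, visited);              // prints: 0 1 3 4 2 5
    std::cout << '\n';
}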
Hassan E.
What do you want to work on?
About Hassan E.
Career Experience
Results-oriented mechanical engineer who has proven experience in mechanical power engineering (power production and turbomachinery), electrical engineering, automatic control, and business
I Love Tutoring Because
learners want to feel as if we are both going through this journey together. I am helping them achieve something they want, and the journey is much more important than the destination. Tutoring can
be challenging, especially when it comes to motivating each learner.
Other Interests
Math - Algebra
He was very helpful and explained everything out to me
Math - Algebra
The tutors are helpful but take a long time to answer simple questions
Math - Algebra
He was so helpful he helped improve my grades and succeed in school!
Math - Algebra
I just started today so it hasnt helped me with my grades but i feel like it will and I would like to say thank you for your help
Topology in condensed matter systems (WS 2024/25)
Proseminar: Selected topics in topological condensed matter physics (WS 2024/25)
Mathematical Foundations of Physics B (2024)
Topology in condensed matter systems (2023)
Quantum computing proseminar (2022)
Intensive Course on topology (2021)
Topology in condensed matter systems (2016)
Seminar talk: Anyons a la Leinaas and Myrheim
Genua 2024: Mid spectrum anomalies in easy-plane quantum magnets
ICSM Fethiye, Turkiye, 2023: Machida-Shibata states (Artificial atomic structures on superconductors for engineering quantum states: Theoretical insights)
World Quantum Day 2023, Topological magnetism for quantum technologies?
iTHEMS talk 2022 - boundary effects
Wolfgang Pauli Center - blackboard seminar (2021)
Kaiserslautern 2020: Simplifying Majorana Braiding
Kaiserslautern 2020: The Majorana Knot (Mathematica file)
ICEQT 2019: Prospects on topological qubit manipulation in quantum spin helices (pdf)
ICEQT 2019: Prospects on topological qubit manipulation in quantum spin helices (pptx, including videos)
DPG Spring meeting 2019: Winding up quantum spin helices and classical-quantum topological crossover
Anyon physics of ultracold atomic gases (2018): Anyon models in 2D and 1D
Anyon physics of ultracold atomic gases (2018): Many-particle theory of anyons in 1D | {"url":"https://www.posske.de/lectures.php","timestamp":"2024-11-11T13:48:51Z","content_type":"text/html","content_length":"4116","record_id":"<urn:uuid:195c20bf-38ad-4747-a298-a37282b88b8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00703.warc.gz"} |
Counting Consecutive Negative Numbers
Please Note: This article is written for users of the following Microsoft Excel versions: 97, 2000, 2002, and 2003. If you are using a later version (Excel 2007 or later), this tip may not work for
you. For a version of this tip written specifically for later versions of Excel, click here: Counting Consecutive Negative Numbers.
Counting Consecutive Negative Numbers
Written by Allen Wyatt (last updated April 13, 2019)
This tip applies to Excel 97, 2000, 2002, and 2003
Lori has a series of numbers, in adjacent cells, that can be either positive or negative. She would like a way to determine the largest sequence of negative numbers in the range. Thus, if there were
seven negative numbers in a row in this sequence, she would like a formula that would return the value 7.
We've looked high and low and can't find a single formula that will do what is wanted. You can, however, do it with an intermediate column. For instance, if you have your numbers in column A
(beginning in A1), then you could put the following formula in cell B1:
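One plausible reconstruction (the original formula is not reproduced in this copy), flagging a negative value with a count of 1 and anything else with 0:

=IF(A1<0,1,0)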
Then, in cell B2 enter the following:
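Again a plausible reconstruction: extend the running count while the values stay negative, and reset it otherwise:

=IF(A2<0,B1+1,0)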
Copy this down to all the other cells in column B for which there is a value in column A. Then, in a different cell (perhaps cell C1) you can put the following formula:
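A plausible reconstruction, taking the maximum of the running counts in column B:

=MAX(B:B)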
This value will represent the largest number of consecutive negative values in column A.
If you don't want to create an intermediate column to get the answer, you could create a user-defined function that will return the value.
Function MaxNegSequence(rng As Range)
    ' search for the largest sequence
    ' of negative numbers in the range
    Dim c As Range
    Dim lCounter As Long
    Dim lMaxCount As Long

    lCounter = 0
    lMaxCount = 0
    On Error Resume Next
    For Each c In rng.Cells
        If c.Value < 0 Then
            ' extend the current run and remember the longest seen so far
            lCounter = lCounter + 1
            If lCounter > lMaxCount Then
                lMaxCount = lCounter
            End If
        Else
            ' a non-negative value ends the current run
            lCounter = 0
        End If
    Next c
    MaxNegSequence = lMaxCount
End Function
To use the function, just place a formula similar to the following in your worksheet:
= MaxNegSequence(A1:A512)
ExcelTips is your source for cost-effective Microsoft Excel training. This tip (3533) applies to Microsoft Excel 97, 2000, 2002, and 2003. You can find a version of this tip for the ribbon interface
of Excel (Excel 2007 and later) here: Counting Consecutive Negative Numbers.
2020-10-15 20:03:48
Hello Allen,
Thank you for the Counting Consecutive Negative Numbers code. I have a question regarding this line of code: Dim lMaxCount As Long
When stepping through the code, the lMaxCount variable becomes a Boolean data type. Since this variable is clearly dimmed as a Long data type, why does it become a Boolean data type when the code starts running?
Thank you for any insights .
2019-04-13 14:36:28
Rick Rothstein
Here is another, more compact way to write your MaxNegSequence function...
Function MaxNegSequence(Rng As Range) As Long
    Dim V As Variant
    ' Build an array holding 1 for each negative cell ("" otherwise), join it into a
    ' space-delimited string, collapse each run of "1 "s into a block of 1s, then split
    ' on the remaining spaces: the longest token is the longest run of negatives.
    For Each V In Split(Replace(Join(Evaluate("TRANSPOSE(IF(" & Rng.Address & "<0,1,""""))"), " "), "1 ", 1))
        If Len(V) > MaxNegSequence Then MaxNegSequence = Len(V)
    Next V
End Function | {"url":"https://excel.tips.net/T003533_Counting_Consecutive_Negative_Numbers","timestamp":"2024-11-08T22:01:45Z","content_type":"text/html","content_length":"46828","record_id":"<urn:uuid:1f1efd67-203d-41b0-8036-bb16d01f8e06>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00263.warc.gz"} |
How to Improve Math Skills (for high school students)
“Math is not for me!”
“All is well, except you know – math! Bleh!”
“I can’t cope with the speed at which they teach math. Formulas, questions, more theorems, more questions and it goes on before I can understand it!”
Heard these often? Math is a monster that menaces most high schoolers.
According to an educational study, about 40% of all 17-year-olds in the US lack math skills. This is not surprising! Time and again, in Gallup surveys, math has come out as the subject perceived by
students as the most difficult amongst all others. But boldface the word ‘perceived’ in your mind as you read more facts about math education in the US:
• 60 percent of American students finish high school unprepared for college classes, compared to a much smaller number in other developed countries.
• The US ranked 42nd in a global comparison of math skills by country, placing below the global average by 20 points
Why is it that American scores in math fall way behind other countries and the world average? Honestly, it is because most U.S. high schools teach math differently than those in other countries.
“Classes often focus on formulas and procedures rather than teaching students to think creatively about solving complex problems involving all sorts of mathematics”, experts say. This approach makes
it harder for students to compete globally, be it on an international exam or in colleges and careers that value sophisticated thinking and data science.
With the Covid-19 situation, things have gotten tougher with math all the more! There are many teachers who are not comfortable with teaching math online. On top of it, a lot of teachers have
dismissed the idea of teaching online.
Read how this student aced his grades with Talentnook’s help when a math teacher went absent at his junior college.
The good news is that there are still a lot of ways to strengthen your math skills without being entirely dependent on schools' pedagogy. Get started with these top tips brought to you by experts
at Talentnook.com:
1. Don’t just practice. Also, play around!
You read it right! While every other person suggests that extensive practice is the master key to unlock your inner math genius, we beg to differ! Statistics show that the human brain is at its peak
of ‘learnability’ when it is in the ‘fun’ or ‘play’ mode. So never believe that mastering math is proportional to the number of notebooks filled with practice questions. Instead, do this:
• Learn derivations of formulae and theorems to understand their usage better
• Play with problem-solving approaches. Try different methods (even shortcuts) to arrive at the correct answer
• Use tables and diagrams to organize information and to see patterns
• Approach each question like a riddle or a puzzle and not as a complex math word problem – this shift in perception is everything, try it to believe it!
• Try your hand at making educated guesses and approximations
2. Take baby steps every day
1.01 raised to the power 365 is 37.8 and 0.99 raised to the power 365 is 0.03. Apply this mathematical lesson to your daily life! Increase your knowledge by just 1% every day, and in one year you’ll
be 38X smarter.
Do these little things daily and watch your prowess in math grow manifold:
• Analyze daily life problems mathematically (e.g. alligation questions are all around you – grab a bottle of lemon soda and a bottle of water: how much of each should you mix to get 10% lemon
essence in the final mixture?)
• Solve random math puzzles (especially the ones not even remotely linked to your syllabus), slowly you will start seeing math as fun rather than a subject!
• Expand your learning horizons – make use of digital tools, online free tutorials, mock tests, and other resources like formula flashcards
• Be receptive to understanding the application of math in daily life (discounts are a great way to start appreciating the power of math! For example: would you benefit more from a flat 20% discount or from a 10%
on 10% discount? Solve it to see why the counter-intuitive answer is right; the arithmetic is worked out just after this list!)
• Avoid using a calculator, do mental math as much as possible to remove the fear of numbers in general (next challenge: you must sum up the bill at the grocery store faster than the cashier even
after applying all discounts and offers!)
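(Working out the discount puzzle: a 10% discount followed by another 10% leaves 0.9 × 0.81 is wrong to guess at; compute it: 0.9 × 0.9 = 0.81 of the original price, i.e., an effective 19% off, while the flat 20% discount leaves 0.80. The single 20% discount is the better deal, even though "10% plus 10%" sounds like it should match it.)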
Also read: How Teaching is changing in a Post Pandemic society
3. Get a tutor’s help to create your winning formula
A tutor is more than a person who helps you get better grades temporarily. With a dedicated, professional tutor you can not only customize your learning journey according to your preferences and
goals, but you can also develop habits like critical thinking and discipline.
To address the need of students struggling with math in the United States, we at Talentnook have launched a math program called ELITE. Dedicated to the mission of making you achieve higher scores and
develop more confidence in math. Some key highlights of this booster course designed to help improve math skills for high school students:
• The program will help students get better at math. The program is in fact aimed at preparing them for not only good grades but also for Advanced placement exams (AP exams)
• Utilizes a holistic approach to teaching math and focuses on the development of critical thinking skills rather than cramming of formula cheat sheets
• Every student who enrolls in ELITE shall get a personal academic advisor who will offer a highly personalized and effective learning program
• By utilizing the power of deep learning and continuous assessment, the program is designed to help students build confidence and a love for math. The idea is to take away the focus from hacking
a method to score higher grades in the short-run (which anyways is a by-product of great confidence and personalized learning!)
The program guarantees satisfaction and success. Don’t believe us? You can simply sign up for a free trial!
Sign up for the Talentnook ELITE Math Program
4. Collaborate and Grow your Math skills
The best way to improve math skills for high school students is to collaborate! Join math forums online where you can discuss math questions, solve them for others, ask your doubts, and discuss
general concepts. The discussions have moved online after all! This is the time to utilize the power of the internet, especially during the pandemic. Imagine discussing math with students from
Switzerland, India, Australia, etc.
The learning could be immense if you utilize the power of discussions with other students. Some key benefits are:
• Learning alternate approaches by evaluating methods used by others,
• Getting access to free resources like practice sets
• Getting access to peer-to-peer discussions without any inhibitions.
Take the collaboration to a next level and try the following if you like:
• Working on joint math projects
• Participate in math contests online
• Level up and present your math ideas/ learnings to your friends or schoolmates in a newsletter! (e.g. a newsletter containing fun facts and uses of Pythagoras theorem! It will not only help
others but will also massively motivate you to learn math at much deeper and yet in a fun way!)
• Step up by trying to ‘create’ math questions to activate creative and critical thinking at the same time. For example, send a mock test of 10 questions to your best friend and get them to send
you one next week!
• Don’t Google, ask your tutor first, friends next, and books last. If your doubts still remain, then hop on to the search engine for answers. Spend enough time with a problem before looking out
for a quick solution to replicate!
Ready to step up in the game of math? Signup to Talentnook and unlock access to experienced tutors offering personalized lessons aimed at improving math skills for high school students. | {"url":"https://talentnook.com/improve-math-skills-for-high-school-students","timestamp":"2024-11-09T07:31:48Z","content_type":"text/html","content_length":"214727","record_id":"<urn:uuid:80a18349-2680-4d8d-b2ed-16f7a7630f08>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00756.warc.gz"} |
Finite total curvature and soap bubbles with almost constant higher-order mean curvature
Published Paper
Inserted: 23 aug 2023
Last Updated: 8 oct 2024
Journal: IMRN
Year: 2024
Doi: https://doi.org/10.1093/imrn/rnae159
Links: Journal article
Given $ n \geq 2 $ and $ k \in \{2, \ldots , n\} $, we study the asymptotic behaviour of sequences of bounded $C^2$-domains of finite total curvature in $ \mathbb{R}^{n+1} $ converging in volume and
perimeter, and with the $ k $-th mean curvature functions converging in $ L^1 $ to a constant. Under a natural mean convexity hypothesis, and assuming an $ L^\infty $-control on the mean curvature
outside a set of vanishing area, we prove that finite unions of mutually tangent balls are the only possible limits. This is the first result where such a uniqueness is proved without assuming
uniform bounds on the exterior or interior touching balls. | {"url":"https://cvgmt.sns.it/paper/6187/","timestamp":"2024-11-07T22:43:06Z","content_type":"text/html","content_length":"8750","record_id":"<urn:uuid:2b2a4ee8-2e50-429d-810c-d7c8fd8a3196>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00887.warc.gz"} |
Creating Real Mathematicians - A Mathematician’s Lament
A Mathematician’s Lament
by Paul Lockhart
A musician wakes from a terrible nightmare. In his dream he finds himself in a society where
music education has been made mandatory. “We are helping our students become more
competitive in an increasingly sound-filled world.” Educators, school systems, and the state are
put in charge of this vital project. Studies are commissioned, committees are formed, and
decisions are made— all without the advice or participation of a single working musician or composer.
Since musicians are known to set down their ideas in the form of sheet music, these curious
black dots and lines must constitute the “language of music.” It is imperative that students
become fluent in this language if they are to attain any degree of musical competence; indeed, it
would be ludicrous to expect a child to sing a song or play an instrument without having a
thorough grounding in music notation and theory. Playing and listening to music, let alone
composing an original piece, are considered very advanced topics and are generally put off until
college, and more often graduate school.
As for the primary and secondary schools, their mission is to train students to use this
language— to jiggle symbols around according to a fixed set of rules: “Music class is where we
take out our staff paper, our teacher puts some notes on the board, and we copy them or
transpose them into a different key. We have to make sure to get the clefs and key signatures
right, and our teacher is very picky about making sure we fill in our quarter-notes completely.
One time we had a chromatic scale problem and I did it right, but the teacher gave me no credit
because I had the stems pointing the wrong way.”
In their wisdom, educators soon realize that even very young children can be given this kind
of musical instruction. In fact it is considered quite shameful if one’s third-grader hasn’t
completely memorized his circle of fifths. “I’ll have to get my son a music tutor. He simply
won’t apply himself to his music homework. He says it’s boring. He just sits there staring out
the window, humming tunes to himself and making up silly songs.”
In the higher grades the pressure is really on. After all, the students must be prepared for the
standardized tests and college admissions exams. Students must take courses in Scales and
Modes, Meter, Harmony, and Counterpoint. “It’s a lot for them to learn, but later in college
when they finally get to hear all this stuff, they’ll really appreciate all the work they did in high
school.” Of course, not many students actually go on to concentrate in music, so only a few will
ever get to hear the sounds that the black dots represent. Nevertheless, it is important that every
member of society be able to recognize a modulation or a fugal passage, regardless of the fact
that they will never hear one. “To tell you the truth, most students just aren’t very good at music.
They are bored in class, their skills are terrible, and their homework is barely legible. Most of
them couldn’t care less about how important music is in today’s world; they just want to take the
minimum number of music courses and be done with it. I guess there are just music people and
non-music people. I had this one kid, though, man was she sensational! Her sheets were
impeccable— every note in the right place, perfect calligraphy, sharps, flats, just beautiful.
She’s going to make one hell of a musician someday.”
Waking up in a cold sweat, the musician realizes, gratefully, that it was all just a crazy
dream. “Of course!” he reassures himself, “No society would ever reduce such a beautiful and
meaningful art form to something so mindless and trivial; no culture could be so cruel to its
children as to deprive them of such a natural, satisfying means of human expression. How absurd!
Meanwhile, on the other side of town, a painter has just awakened from a similar nightmare…
I was surprised to find myself in a regular school classroom— no easels, no tubes of paint.
“Oh we don’t actually apply paint until high school,” I was told by the students. “In seventh
grade we mostly study colors and applicators.” They showed me a worksheet. On one side were
swatches of color with blank spaces next to them. They were told to write in the names. “I like
painting,” one of them remarked, “they tell me what to do and I do it. It’s easy!”
After class I spoke with the teacher. “So your students don’t actually do any painting?” I
asked. “Well, next year they take Pre-Paint-by-Numbers. That prepares them for the main
Paint-by-Numbers sequence in high school. So they’ll get to use what they’ve learned here and
apply it to real-life painting situations— dipping the brush into paint, wiping it off, stuff like that.
Of course we track our students by ability. The really excellent painters— the ones who know
their colors and brushes backwards and forwards— they get to the actual painting a little sooner,
and some of them even take the Advanced Placement classes for college credit. But mostly
we’re just trying to give these kids a good foundation in what painting is all about, so when they
get out there in the real world and paint their kitchen they don’t make a total mess of it.”
“Um, these high school classes you mentioned…”
“You mean Paint-by-Numbers? We’re seeing much higher enrollments lately. I think it’s
mostly coming from parents wanting to make sure their kid gets into a good college. Nothing
looks better than Advanced Paint-by-Numbers on a high school transcript.”
“Why do colleges care if you can fill in numbered regions with the corresponding color?”
“Oh, well, you know, it shows clear-headed logical thinking. And of course if a student is
planning to major in one of the visual sciences, like fashion or interior decorating, then it’s really
a good idea to get your painting requirements out of the way in high school.”
“I see. And when do students get to paint freely, on a blank canvas?”
“You sound like one of my professors! They were always going on about expressing
yourself and your feelings and things like that—really way-out-there abstract stuff. I’ve got a
degree in Painting myself, but I’ve never really worked much with blank canvasses. I just use
the Paint-by-Numbers kits supplied by the school board.”
Sadly, our present system of mathematics education is precisely this kind of nightmare. In
fact, if I had to design a mechanism for the express purpose of destroying a child’s natural
curiosity and love of pattern-making, I couldn’t possibly do as good a job as is currently being
done— I simply wouldn’t have the imagination to come up with the kind of senseless, soulcrushing
ideas that constitute contemporary mathematics education.
Everyone knows that something is wrong. The politicians say, “we need higher standards.”
The schools say, “we need more money and equipment.” Educators say one thing, and teachers
say another. They are all wrong. The only people who understand what is going on are the ones
most often blamed and least often heard: the students. They say, “math class is stupid and
boring,” and they are right.
Mathematics and Culture
The first thing to understand is that mathematics is an art. The difference between math and
the other arts, such as music and painting, is that our culture does not recognize it as such.
Everyone understands that poets, painters, and musicians create works of art, and are expressing
themselves in word, image, and sound. In fact, our society is rather generous when it comes to
creative expression; architects, chefs, and even television directors are considered to be working
artists. So why not mathematicians?
Part of the problem is that nobody has the faintest idea what it is that mathematicians do.
The common perception seems to be that mathematicians are somehow connected with
science— perhaps they help the scientists with their formulas, or feed big numbers into
computers for some reason or other. There is no question that if the world had to be divided into
the “poetic dreamers” and the “rational thinkers” most people would place mathematicians in the
latter category.
Nevertheless, the fact is that there is nothing as dreamy and poetic, nothing as radical,
subversive, and psychedelic, as mathematics. It is every bit as mind blowing as cosmology or
physics (mathematicians conceived of black holes long before astronomers actually found any),
and allows more freedom of expression than poetry, art, or music (which depend heavily on
properties of the physical universe). Mathematics is the purest of the arts, as well as the most
So let me try to explain what mathematics is, and what mathematicians do. I can hardly do
better than to begin with G.H. Hardy’s excellent description:
A mathematician, like a painter or poet, is a maker
of patterns. If his patterns are more permanent than
theirs, it is because they are made with ideas.
So mathematicians sit around making patterns of ideas. What sort of patterns? What sort of
ideas? Ideas about the rhinoceros? No, those we leave to the biologists. Ideas about language
and culture? No, not usually. These things are all far too complicated for most mathematicians’
taste. If there is anything like a unifying aesthetic principle in mathematics, it is this: simple is
beautiful. Mathematicians enjoy thinking about the simplest possible things, and the simplest
possible things are imaginary.
For example, if I’m in the mood to think about shapes— and I often am— I might imagine a
triangle inside a rectangular box:

[figure: a triangle inscribed in a rectangular box]
I wonder how much of the box the triangle takes up? Two-thirds maybe? The important
thing to understand is that I’m not talking about this drawing of a triangle in a box. Nor am I
talking about some metal triangle forming part of a girder system for a bridge. There’s no
ulterior practical purpose here. I’m just playing. That’s what math is— wondering, playing,
amusing yourself with your imagination. For one thing, the question of how much of the box the
triangle takes up doesn’t even make any sense for real, physical objects. Even the most carefully
made physical triangle is still a hopelessly complicated collection of jiggling atoms; it changes
its size from one minute to the next. That is, unless you want to talk about some sort of
approximate measurements. Well, that’s where the aesthetic comes in. That’s just not simple,
and consequently it is an ugly question which depends on all sorts of real-world details. Let’s
leave that to the scientists. The mathematical question is about an imaginary triangle inside an
imaginary box. The edges are perfect because I want them to be— that is the sort of object I
prefer to think about. This is a major theme in mathematics: things are what you want them to
be. You have endless choices; there is no reality to get in your way.
On the other hand, once you have made your choices (for example I might choose to make
my triangle symmetrical, or not) then your new creations do what they do, whether you like it or
not. This is the amazing thing about making imaginary patterns: they talk back! The triangle
takes up a certain amount of its box, and I don’t have any control over what that amount is.
There is a number out there, maybe it’s two-thirds, maybe it isn’t, but I don’t get to say what it
is. I have to find out what it is.
So we get to play and imagine whatever we want and make patterns and ask questions about
them. But how do we answer these questions? It’s not at all like science. There’s no
experiment I can do with test tubes and equipment and whatnot that will tell me the truth about a
figment of my imagination. The only way to get at the truth about our imaginations is to use our
imaginations, and that is hard work.
In the case of the triangle in its box, I do see something simple and pretty:

[figure: the box chopped by a vertical line through the triangle's tip, each piece cut diagonally in half]
If I chop the rectangle into two pieces like this, I can see that each piece is cut diagonally in
half by the sides of the triangle. So there is just as much space inside the triangle as outside.
That means that the triangle must take up exactly half the box!
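(For those who want the symbolic version, a sketch: write b and h for the box's base and height, and x for the horizontal position of the triangle's tip over the base. The two diagonal halves contribute (1/2)xh + (1/2)(b−x)h = (1/2)bh, exactly half of the box's area bh.)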
This is what a piece of mathematics looks and feels like. That little narrative is an example
of the mathematician’s art: asking simple and elegant questions about our imaginary creations,
and crafting satisfying and beautiful explanations. There is really nothing else quite like this
realm of pure idea; it’s fascinating, it’s fun, and it’s free!
Now where did this idea of mine come from? How did I know to draw that line? How does
a painter know where to put his brush? Inspiration, experience, trial and error, dumb luck.
That’s the art of it, creating these beautiful little poems of thought, these sonnets of pure reason.
There is something so wonderfully transformational about this art form. The relationship
between the triangle and the rectangle was a mystery, and then that one little line made it
obvious. I couldn’t see, and then all of a sudden I could. Somehow, I was able to create a
profound simple beauty out of nothing, and change myself in the process. Isn’t that what art is
all about?
This is why it is so heartbreaking to see what is being done to mathematics in school. This
rich and fascinating adventure of the imagination has been reduced to a sterile set of “facts” to be
memorized and procedures to be followed. In place of a simple and natural question about
shapes, and a creative and rewarding process of invention and discovery, students are treated to
Triangle Area Formula:
A = (1/2) b h
“The area of a triangle is equal to one-half its base times its height.” Students are asked to
memorize this formula and then “apply” it over and over in the “exercises.” Gone is the thrill,
the joy, even the pain and frustration of the creative act. There is not even a problem anymore.
The question has been asked and answered at the same time— there is nothing left for the
student to do.
Now let me be clear about what I’m objecting to. It’s not about formulas, or memorizing
interesting facts. That’s fine in context, and has its place just as learning a vocabulary does— it
helps you to create richer, more nuanced works of art. But it’s not the fact that triangles take up
half their box that matters. What matters is the beautiful idea of chopping it with the line, and
how that might inspire other beautiful ideas and lead to creative breakthroughs in other
problems— something a mere statement of fact can never give you.
By removing the creative process and leaving only the results of that process, you virtually
guarantee that no one will have any real engagement with the subject. It is like saying that
Michelangelo created a beautiful sculpture, without letting me see it. How am I supposed to be
inspired by that? (And of course it’s actually much worse than this— at least it’s understood that
there is an art of sculpture that I am being prevented from appreciating).
By concentrating on what, and leaving out why, mathematics is reduced to an empty shell.
The art is not in the “truth” but in the explanation, the argument. It is the argument itself which
gives the truth its context, and determines what is really being said and meant. Mathematics is
the art of explanation. If you deny students the opportunity to engage in this activity— to pose
their own problems, make their own conjectures and discoveries, to be wrong, to be creatively
frustrated, to have an inspiration, and to cobble together their own explanations and proofs— you
deny them mathematics itself. So no, I’m not complaining about the presence of facts and
formulas in our mathematics classes, I’m complaining about the lack of mathematics in our
mathematics classes.
If your art teacher were to tell you that painting is all about filling in numbered regions, you
would know that something was wrong. The culture informs you— there are museums and
galleries, as well as the art in your own home. Painting is well understood by society as a
medium of human expression. Likewise, if your science teacher tried to convince you that
astronomy is about predicting a person’s future based on their date of birth, you would know she
was crazy— science has seeped into the culture to such an extent that almost everyone knows
about atoms and galaxies and laws of nature. But if your math teacher gives you the impression,
either expressly or by default, that mathematics is about formulas and definitions and
memorizing algorithms, who will set you straight?
The cultural problem is a self-perpetuating monster: students learn about math from their
teachers, and teachers learn about it from their teachers, so this lack of understanding and
appreciation for mathematics in our culture replicates itself indefinitely. Worse, the perpetuation
of this “pseudo-mathematics,” this emphasis on the accurate yet mindless manipulation of
symbols, creates its own culture and its own set of values. Those who have become adept at it
derive a great deal of self-esteem from their success. The last thing they want to hear is that
math is really about raw creativity and aesthetic sensitivity. Many a graduate student has come
to grief when they discover, after a decade of being told they were “good at math,” that in fact
they have no real mathematical talent and are just very good at following directions. Math is not
about following directions, it’s about making new directions.
And I haven’t even mentioned the lack of mathematical criticism in school. At no time are
students let in on the secret that mathematics, like any literature, is created by human beings for
their own amusement; that works of mathematics are subject to critical appraisal; that one can
have and develop mathematical taste. A piece of mathematics is like a poem, and we can ask if
it satisfies our aesthetic criteria: Is this argument sound? Does it make sense? Is it simple and
elegant? Does it get me closer to the heart of the matter? Of course there’s no criticism going on
in school— there’s no art being done to criticize!
Why don’t we want our children to learn to do mathematics? Is it that we don’t trust them,
that we think it’s too hard? We seem to feel that they are capable of making arguments and
coming to their own conclusions about Napoleon, why not about triangles? I think it’s simply
that we as a culture don’t know what mathematics is. The impression we are given is of
something very cold and highly technical, that no one could possibly understand— a self-fulfilling
prophecy if there ever was one.
It would be bad enough if the culture were merely ignorant of mathematics, but what is far
worse is that people actually think they do know what math is about— and are apparently under
the gross misconception that mathematics is somehow useful to society! This is already a huge
difference between mathematics and the other arts. Mathematics is viewed by the culture as
some sort of tool for science and technology. Everyone knows that poetry and music are for pure
enjoyment and for uplifting and ennobling the human spirit (hence their virtual elimination from
the public school curriculum) but no, math is important.
SIMPLICIO: Are you really trying to claim that mathematics offers no useful or
practical applications to society?
SALVIATI: Of course not. I’m merely suggesting that just because something
happens to have practical consequences, doesn’t mean that’s what it is
about. Music can lead armies into battle, but that’s not why people
write symphonies. Michelangelo decorated a ceiling, but I’m sure he
had loftier things on his mind.
SIMPLICIO: But don’t we need people to learn those useful consequences of math?
Don’t we need accountants and carpenters and such?
SALVIATI: How many people actually use any of this “practical math” they
supposedly learn in school? Do you think carpenters are out there
using trigonometry? How many adults remember how to divide
fractions, or solve a quadratic equation? Obviously the current
practical training program isn’t working, and for good reason: it is
excruciatingly boring, and nobody ever uses it anyway. So why do
people think it’s so important? I don’t see how it’s doing society any
good to have its members walking around with vague memories of
algebraic formulas and geometric diagrams, and clear memories of
hating them. It might do some good, though, to show them
something beautiful and give them an opportunity to enjoy being
creative, flexible, open-minded thinkers— the kind of thing a real
mathematical education might provide.
SIMPLICIO: But people need to be able to balance their checkbooks, don’t they?
SALVIATI: I’m sure most people use a calculator for everyday arithmetic. And
why not? It’s certainly easier and more reliable. But my point is not
just that the current system is so terribly bad, it’s that what it’s missing
is so wonderfully good! Mathematics should be taught as art for art’s
sake. These mundane “useful” aspects would follow naturally as a
trivial by-product. Beethoven could easily write an advertising jingle,
but his motivation for learning music was to create something beautiful.
SIMPLICIO: But not everyone is cut out to be an artist. What about the kids who
aren’t “math people?” How would they fit into your scheme?
SALVIATI: If everyone were exposed to mathematics in its natural state, with all
the challenging fun and surprises that that entails, I think we would
see a dramatic change both in the attitude of students toward
mathematics, and in our conception of what it means to be “good at
math.” We are losing so many potentially gifted mathematicians—
creative, intelligent people who rightly reject what appears to be a
meaningless and sterile subject. They are simply too smart to waste
their time on such piffle.
SIMPLICIO: But don’t you think that if math class were made more like art class
that a lot of kids just wouldn’t learn anything?
SALVIATI: They’re not learning anything now! Better to not have math classes at
all than to do what is currently being done. At least some people
might have a chance to discover something beautiful on their own.
SIMPLICIO: So you would remove mathematics from the school curriculum?
SALVIATI: The mathematics has already been removed! The only question is
what to do with the vapid, hollow shell that remains. Of course I
would prefer to replace it with an active and joyful engagement with
mathematical ideas.
SIMPLICIO: But how many math teachers know enough about their subject to
teach it that way?
SALVIATI: Very few. And that’s just the tip of the iceberg…
Mathematics in School
There is surely no more reliable way to kill enthusiasm and interest in a subject than to make
it a mandatory part of the school curriculum. Include it as a major component of
standardized testing and you virtually guarantee that the education establishment will suck the
life out of it. School boards do not understand what math is, neither do educators, textbook
authors, publishing companies, and sadly, neither do most of our math teachers. The scope of
the problem is so enormous, I hardly know where to begin.
Let’s start with the “math reform” debacle. For many years there has been a growing
awareness that something is rotten in the state of mathematics education. Studies have been
commissioned, conferences assembled, and countless committees of teachers, textbook
publishers, and educators (whatever they are) have been formed to “fix the problem.” Quite
apart from the self-serving interest paid to reform by the textbook industry (which profits from
any minute political fluctuation by offering up “new” editions of their unreadable monstrosities),
the entire reform movement has always missed the point. The mathematics curriculum doesn’t
need to be reformed, it needs to be scrapped.
All this fussing and primping about which “topics” should be taught in what order, or the use
of this notation instead of that notation, or which make and model of calculator to use, for god’s
sake— it’s like rearranging the deck chairs on the Titanic! Mathematics is the music of reason.
To do mathematics is to engage in an act of discovery and conjecture, intuition and inspiration;
to be in a state of confusion— not because it makes no sense to you, but because you gave it
sense and you still don’t understand what your creation is up to; to have a breakthrough idea; to
be frustrated as an artist; to be awed and overwhelmed by an almost painful beauty; to be alive,
damn it. Remove this from mathematics and you can have all the conferences you like; it won’t
matter. Operate all you want, doctors: your patient is already dead.
The saddest part of all this “reform” are the attempts to “make math interesting” and
“relevant to kids’ lives.” You don’t need to make math interesting— it’s already more
interesting than we can handle! And the glory of it is its complete irrelevance to our lives.
That’s why it’s so fun!
Attempts to present mathematics as relevant to daily life inevitably appear forced and
contrived: “You see kids, if you know algebra then you can figure out how old Maria is if we
know that she is two years older than twice her age seven years ago!” (As if anyone would ever
have access to that ridiculous kind of information, and not her age.) Algebra is not about daily
life, it’s about numbers and symmetry— and this is a valid pursuit in and of itself:
Suppose I am given the sum and difference of two numbers. How
can I figure out what the numbers are themselves?
Here is a simple and elegant question, and it requires no effort to be made appealing. The
ancient Babylonians enjoyed working on such problems, and so do our students. (And I hope
you will enjoy thinking about it too!) We don’t need to bend over backwards to give
mathematics relevance. It has relevance in the same way that any art does: that of being a
meaningful human experience.
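(And for the reader who wants to check an answer after playing with it: call the numbers x and y, with sum s and difference d. Adding the two equations x + y = s and x − y = d gives 2x = s + d, so x = (s + d)/2 and, in turn, y = (s − d)/2.)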
In any case, do you really think kids even want something that is relevant to their daily lives?
You think something practical like compound interest is going to get them excited? People
enjoy fantasy, and that is just what mathematics can provide— a relief from daily life, an
anodyne to the practical workaday world.
A similar problem occurs when teachers or textbooks succumb to “cutesyness.” This is
where, in an attempt to combat so-called “math anxiety” (one of the panoply of diseases which
are actually caused by school), math is made to seem “friendly.” To help your students
memorize formulas for the area and circumference of a circle, for example, you might invent this
whole story about “Mr. C,” who drives around “Mrs. A” and tells her how nice his “two pies
are” (C = 2πr) and how her “pies are square” (A = πr²) or some such nonsense. But what about
the real story? The one about mankind’s struggle with the problem of measuring curves; about
Eudoxus and Archimedes and the method of exhaustion; about the transcendence of pi? Which
is more interesting— measuring the rough dimensions of a circular piece of graph paper, using a
formula that someone handed you without explanation (and made you memorize and practice
over and over) or hearing the story of one of the most beautiful, fascinating problems, and one of
the most brilliant and powerful ideas in human history? We’re killing people’s interest in circles
for god’s sake!
Why aren’t we giving our students a chance to even hear about these things, let alone giving
them an opportunity to actually do some mathematics, and to come up with their own ideas,
opinions, and reactions? What other subject is routinely taught without any mention of its
history, philosophy, thematic development, aesthetic criteria, and current status? What other
subject shuns its primary sources— beautiful works of art by some of the most creative minds in
history— in favor of third-rate textbook bastardizations?
The main problem with school mathematics is that there are no problems. Oh, I know what
passes for problems in math classes, these insipid “exercises.” “Here is a type of problem. Here
is how to solve it. Yes it will be on the test. Do exercises 1-35 odd for homework.” What a sad
way to learn mathematics: to be a trained chimpanzee.
But a problem, a genuine honest-to-goodness natural human question— that’s another thing.
How long is the diagonal of a cube? Do prime numbers keep going on forever? Is infinity a
number? How many ways can I symmetrically tile a surface? The history of mathematics is the
history of mankind’s engagement with questions like these, not the mindless regurgitation of
formulas and algorithms (together with contrived exercises designed to make use of them).
A good problem is something you don’t know how to solve. That’s what makes it a good
puzzle, and a good opportunity. A good problem does not just sit there in isolation, but serves as
a springboard to other interesting questions. A triangle takes up half its box. What about a
pyramid inside its three-dimensional box? Can we handle this problem in a similar way?
I can understand the idea of training students to master certain techniques— I do that too.
But not as an end in itself. Technique in mathematics, as in any art, should be learned in context.
The great problems, their history, the creative process— that is the proper setting. Give your
students a good problem, let them struggle and get frustrated. See what they come up with.
Wait until they are dying for an idea, then give them some technique. But not too much.
So put away your lesson plans and your overhead projectors, your full-color textbook
abominations, your CD-ROMs and the whole rest of the traveling circus freak show of
contemporary education, and simply do mathematics with your students! Art teachers don’t
waste their time with textbooks and rote training in specific techniques. They do what is natural
to their subject— they get the kids painting. They go around from easel to easel, making
suggestions and offering guidance:
“I was thinking about our triangle problem, and I noticed something. If the triangle is really
slanted then it doesn’t take up half its box! See, look:

[figure: a triangle in its box whose tip does not lie over the base]
“Excellent observation! Our chopping argument assumes that the tip of the triangle lies
directly over the base. Now we need a new idea.”
“Should I try chopping it a different way?”
“Absolutely. Try all sorts of ideas. Let me know what you come up with!”
So how do we teach our students to do mathematics? By choosing engaging and natural
problems suitable to their tastes, personalities, and level of experience. By giving them time
to make discoveries and formulate conjectures. By helping them to refine their arguments and
creating an atmosphere of healthy and vibrant mathematical criticism. By being flexible and
open to sudden changes in direction to which their curiosity may lead. In short, by having an
honest intellectual relationship with our students and our subject.
Of course what I’m suggesting is impossible for a number of reasons. Even putting aside the
fact that statewide curricula and standardized tests virtually eliminate teacher autonomy, I doubt
that most teachers even want to have such an intense relationship with their students. It requires
too much vulnerability and too much responsibility— in short, it’s too much work!
It is far easier to be a passive conduit of some publisher’s “materials” and to follow the
shampoo-bottle instruction “lecture, test, repeat” than to think deeply and thoughtfully about the
meaning of one’s subject and how best to convey that meaning directly and honestly to one’s
students. We are encouraged to forego the difficult task of making decisions based on our
individual wisdom and conscience, and to “get with the program.” It is simply the path of least resistance:
TEXTBOOK PUBLISHERS : TEACHERS ::
A) pharmaceutical companies : doctors
B) record companies : disk jockeys
C) corporations : congressmen
D) all of the above
The trouble is that math, like painting or poetry, is hard creative work. That makes it very
difficult to teach. Mathematics is a slow, contemplative process. It takes time to produce a work
of art, and it takes a skilled teacher to recognize one. Of course it’s easier to post a set of rules
than to guide aspiring young artists, and it’s easier to write a VCR manual than to write an
actual book with a point of view.
Mathematics is an art, and art should be taught by working artists, or if not, at least by people
who appreciate the art form and can recognize it when they see it. It is not necessary that you
learn music from a professional composer, but would you want yourself or your child to be
taught by someone who doesn’t even play an instrument, and has never listened to a piece of
music in their lives? Would you accept as an art teacher someone who has never picked up a
pencil or stepped foot in a museum? Why is it that we accept math teachers who have never
produced an original piece of mathematics, know nothing of the history and philosophy of the
subject, nothing about recent developments, nothing in fact beyond what they are expected to
present to their unfortunate students? What kind of a teacher is that? How can someone teach
something that they themselves don’t do? I can’t dance, and consequently I would never
presume to think that I could teach a dance class (I could try, but it wouldn’t be pretty). The
difference is I know I can’t dance. I don’t have anyone telling me I’m good at dancing just
because I know a bunch of dance words.
Now I’m not saying that math teachers need to be professional mathematicians— far from it.
But shouldn’t they at least understand what mathematics is, be good at it, and enjoy doing it?
If teaching is reduced to mere data transmission, if there is no sharing of excitement and
wonder, if teachers themselves are passive recipients of information and not creators of new
ideas, what hope is there for their students? If adding fractions is to the teacher an arbitrary set
of rules, and not the outcome of a creative process and the result of aesthetic choices and desires,
then of course it will feel that way to the poor students.
Teaching is not about information. It’s about having an honest intellectual relationship with
your students. It requires no method, no tools, and no training. Just the ability to be real. And if
you can’t be real, then you have no right to inflict yourself upon innocent children.
In particular, you can’t teach teaching. Schools of education are a complete crock. Oh, you
can take classes in early childhood development and whatnot, and you can be trained to use a
blackboard “effectively” and to prepare an organized “lesson plan” (which, by the way, insures
that your lesson will be planned, and therefore false), but you will never be a real teacher if you
are unwilling to be a real person. Teaching means openness and honesty, an ability to share
excitement, and a love of learning. Without these, all the education degrees in the world won’t
help you, and with them they are completely unnecessary.
It’s perfectly simple. Students are not aliens. They respond to beauty and pattern, and are
naturally curious like anyone else. Just talk to them! And more importantly, listen to them!
SIMPLICIO: All right, I understand that there is an art to mathematics and that we
are not doing a good job of exposing people to it. But isn’t this a
rather esoteric, highbrow sort of thing to expect from our school
system? We’re not trying to create philosophers here, we just want
people to have a reasonable command of basic arithmetic so they can
function in society.
SALVIATI: But that’s not true! School mathematics concerns itself with many
things that have nothing to do with the ability to get along in society—
algebra and trigonometry, for instance. These studies are utterly
irrelevant to daily life. I’m simply suggesting that if we are going to
include such things as part of most students’ basic education, that we
do it in an organic and natural way. Also, as I said before, just because
a subject happens to have some mundane practical use does not mean
that we have to make that use the focus of our teaching and learning.
It may be true that you have to be able to read in order to fill out
forms at the DMV, but that’s not why we teach children to read. We
teach them to read for the higher purpose of allowing them access to
beautiful and meaningful ideas. Not only would it be cruel to teach
reading in such a way— to force third graders to fill out purchase
orders and tax forms— it wouldn’t work! We learn things because
they interest us now, not because they might be useful later. But this
is exactly what we are asking children to do with math.
SIMPLICIO: But don’t we need third graders to be able to do arithmetic?
SALVIATI: Why? You want to train them to calculate 427 plus 389? It’s just not a
question that very many eight-year-olds are asking. For that matter,
most adults don’t fully understand decimal place-value arithmetic, and
you expect third graders to have a clear conception? Or do you not
care if they understand it? It is simply too early for that kind of
technical training. Of course it can be done, but I think it ultimately
does more harm than good. Much better to wait until their own
natural curiosity about numbers kicks in.
SIMPLICIO: Then what should we do with young children in math class?
SALVIATI: Play games! Teach them Chess and Go, Hex and Backgammon,
Sprouts and Nim, whatever. Make up a game. Do puzzles. Expose
them to situations where deductive reasoning is necessary. Don’t
worry about notation and technique, help them to become active and
creative mathematical thinkers.
SIMPLICIO: It seems like we’d be taking an awful risk. What if we de-emphasize
arithmetic so much that our students end up not being able to add and subtract?
SALVIATI: I think the far greater risk is that of creating schools devoid of creative
expression of any kind, where the function of the students is to
memorize dates, formulas, and vocabulary lists, and then regurgitate
them on standardized tests—“Preparing tomorrow’s workforce today!”
SIMPLICIO: But surely there is some body of mathematical facts of which an
educated person should be cognizant.
SALVIATI: Yes, the most important of which is that mathematics is an art form
done by human beings for pleasure! Alright, yes, it would be nice if
people knew a few basic things about numbers and shapes, for
instance. But this will never come from rote memorization, drills,
lectures, and exercises. You learn things by doing them and you
remember what matters to you. We have millions of adults wandering
around with “negative b plus or minus the square root of b squared
minus 4ac all over 2a” in their heads, and absolutely no idea whatsoever
what it means. And the reason is that they were never given the
chance to discover or invent such things for themselves. They never
had an engaging problem to think about, to be frustrated by, and to
create in them the desire for technique or method. They were never
told the history of mankind’s relationship with numbers— no ancient
Babylonian problem tablets, no Rhind Papyrus, no Liber Abaci, no Ars
Magna. More importantly, no chance for them to even get curious
about a question; it was answered before they could ask it.
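(The incantation in symbols, for reference: x = (−b ± √(b² − 4ac)) / 2a, the solutions of the quadratic equation ax² + bx + c = 0.)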
SIMPLICIO: But we don’t have time for every student to invent mathematics for
themselves! It took centuries for people to discover the Pythagorean
Theorem. How can you expect the average child to do it?
SALVIATI: I don’t. Let’s be clear about this. I’m complaining about the complete
absence of art and invention, history and philosophy, context and
perspective from the mathematics curriculum. That doesn’t mean that
notation, technique, and the development of a knowledge base have no
place. Of course they do. We should have both. If I object to a
pendulum being too far to one side, it doesn’t mean I want it to be all
the way on the other side. But the fact is, people learn better when
the product comes out of the process. A real appreciation for poetry
does not come from memorizing a bunch of poems, it comes from
writing your own.
SIMPLICIO: Yes, but before you can write your own poems you need to learn the
alphabet. The process has to begin somewhere. You have to walk
before you can run.
SALVIATI: No, you have to have something you want to run toward. Children can
write poems and stories as they learn to read and write. A piece of
writing by a six-year-old is a wonderful thing, and the spelling and
punctuation errors don’t make it less so. Even very young children can
invent songs, and they haven’t a clue what key it is in or what type of
meter they are using.
SIMPLICIO: But isn’t math different? Isn’t math a language of its own, with all
sorts of symbols that have to be learned before you can use it?
SALVIATI: Not at all. Mathematics is not a language, it’s an adventure. Do
musicians “speak another language” simply because they choose to
abbreviate their ideas with little black dots? If so, it’s no obstacle to
the toddler and her song. Yes, a certain amount of mathematical
shorthand has evolved over the centuries, but it is in no way essential.
Most mathematics is done with a friend over a cup of coffee, with a
diagram scribbled on a napkin. Mathematics is and always has been
about ideas, and a valuable idea transcends the symbols with which you
choose to represent it. As Gauss once remarked, “What we need are
notions, not notations.”
SIMPLICIO: But isn’t one of the purposes of mathematics education to help
students think in a more precise and logical way, and to develop their
“quantitative reasoning skills?” Don’t all of these definitions and
formulas sharpen the minds of our students?
SALVIATI: No they don’t. If anything, the current system has the opposite effect
of dulling the mind. Mental acuity of any kind comes from solving
problems yourself, not from being told how to solve them.
SIMPLICIO: Fair enough. But what about those students who are interested in
pursuing a career in science or engineering? Don’t they need the
training that the traditional curriculum provides? Isn’t that why we
teach mathematics in school?
SALVIATI: How many students taking literature classes will one day be writers?
That is not why we teach literature, nor why students take it. We
teach to enlighten everyone, not to train only the future professionals.
In any case, the most valuable skill for a scientist or engineer is being
able to think creatively and independently. The last thing anyone
needs is to be trained.
The Mathematics Curriculum
The truly painful thing about the way mathematics is taught in school is not what is missing—
the fact that there is no actual mathematics being done in our mathematics classes— but
what is there in its place: the confused heap of destructive disinformation known as “the
mathematics curriculum.” It is time now to take a closer look at exactly what our students are up
against— what they are being exposed to in the name of mathematics, and how they are being
harmed in the process.
The most striking thing about this so-called mathematics curriculum is its rigidity. This is
especially true in the later grades. From school to school, city to city, and state to state, the same
exact things are being said and done in the same exact way and in the same exact order. Far
from being disturbed and upset by this Orwellian state of affairs, most people have simply
accepted this “standard model” math curriculum as being synonymous with math itself.
This is intimately connected to what I call the “ladder myth”— the idea that mathematics can
be arranged as a sequence of “subjects” each being in some way more advanced, or “higher”
than the previous. The effect is to make school mathematics into a race— some students are
“ahead” of others, and parents worry that their child is “falling behind.” And where exactly does
this race lead? What is waiting at the finish line? It’s a sad race to nowhere. In the end you’ve
been cheated out of a mathematical education, and you don’t even know it.
Real mathematics doesn’t come in a can— there is no such thing as an Algebra II idea.
Problems lead you to where they take you. Art is not a race. The ladder myth is a false image of
the subject, and a teacher’s own path through the standard curriculum reinforces this myth and
prevents him or her from seeing mathematics as an organic whole. As a result, we have a math
curriculum with no historical perspective or thematic coherence, a fragmented collection of
assorted topics and techniques, united only by the ease with which they can be reduced to step-by-step procedures.
In place of discovery and exploration, we have rules and regulations. We never hear a student
saying, “I wanted to see if it could make any sense to raise a number to a negative power, and I
found that you get a really neat pattern if you choose it to mean the reciprocal.” Instead we have
teachers and textbooks presenting the “negative exponent rule” as a fait accompli with no
mention of the aesthetics behind this choice, or even that it is a choice.
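(For the record, the pattern in question fits in one line; this is a sketch of the standard derivation, assuming only the usual rule for multiplying powers:

$$a^{m}a^{n}=a^{m+n}\ \Longrightarrow\ a^{n}a^{-n}=a^{0}=1\ \Longrightarrow\ a^{-n}=\frac{1}{a^{n}},$$

so the reciprocal is precisely the choice that lets the pattern continue unbroken.)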
In place of meaningful problems, which might lead to a synthesis of diverse ideas, to
uncharted territories of discussion and debate, and to a feeling of thematic unity and harmony in
mathematics, we have instead joyless and redundant exercises, specific to the technique under
discussion, and so disconnected from each other and from mathematics as a whole that neither
the students nor their teacher have the foggiest idea how or why such a thing might have come
up in the first place.
In place of a natural problem context in which students can make decisions about what they
want their words to mean, and what notions they wish to codify, they are instead subjected to an
endless sequence of unmotivated and a priori “definitions.” The curriculum is obsessed with
jargon and nomenclature, seemingly for no other purpose than to provide teachers with
something to test the students on. No mathematician in the world would bother making these
senseless distinctions: 2 1/2 is a “mixed number,” while 5/2 is an “improper fraction.” They’re
equal for crying out loud. They are the same exact numbers, and have the same exact properties.
Who uses such words outside of fourth grade?
Of course it is far easier to test someone’s knowledge of a pointless definition than to inspire
them to create something beautiful and to find their own meaning. Even if we agree that a basic
common vocabulary for mathematics is valuable, this isn’t it. How sad that fifth-graders are
taught to say “quadrilateral” instead of “four-sided shape,” but are never given a reason to use
words like “conjecture,” and “counterexample.” High school students must learn to use the
secant function, ‘sec x,’ as an abbreviation for the reciprocal of the cosine function, ‘1 / cos x,’
(a definition with as much intellectual weight as the decision to use ‘&’ in place of “and.”) That
this particular shorthand, a holdover from fifteenth century nautical tables, is still with us
(whereas others, such as the “versine” have died out) is mere historical accident, and is of utterly
no value in an era when rapid and precise shipboard computation is no longer an issue. Thus we
clutter our math classes with pointless nomenclature for its own sake.
In practice, the curriculum is not even so much a sequence of topics, or ideas, as it is a
sequence of notations. Apparently mathematics consists of a secret list of mystical symbols and
rules for their manipulation. Young children are given ‘+’ and ‘÷.’ Only later can they be
entrusted with ‘√¯,’ and then ‘x’ and ‘y’ and the alchemy of parentheses. Finally, they are
indoctrinated in the use of ‘sin,’ ‘log,’ ‘f(x),’ and if they are deemed worthy, ‘d’ and ‘∫.’ All
without having had a single meaningful mathematical experience.
This program is so firmly fixed in place that teachers and textbook authors can reliably
predict, years in advance, exactly what students will be doing, down to the very page of
exercises. It is not at all uncommon to find second-year algebra students being asked to calculate
[ f(x + h) – f(x) ] / h for various functions f, so that they will have “seen” this when they take
calculus a few years later. Naturally no motivation is given (nor expected) for why such a
seemingly random combination of operations would be of interest, although I’m sure there are
many teachers who try to explain what such a thing might mean, and think they are doing their
students a favor, when in fact to them it is just one more boring math problem to be gotten over
with. “What do they want me to do? Oh, just plug it in? OK.”
Another example is the training of students to express information in an unnecessarily
complicated form, merely because at some distant future period it will have meaning. Does any
middle school algebra teacher have the slightest clue why he is asking his students to rephrase
“the number x lies between three and seven” as |x - 5| < 2 ? Do these hopelessly inept textbook
authors really believe they are helping students by preparing them for a possible day, years
hence, when they might be operating within the context of a higher-dimensional geometry or an
abstract metric space? I doubt it. I expect they are simply copying each other decade after
decade, maybe changing the fonts or the highlight colors, and beaming with pride when a
school system adopts their book, and becomes their unwitting accomplice.
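(The equivalence itself is a one-line observation, once you notice that 5 is the midpoint of the interval and 2 its half-width:

$$3 < x < 7 \;\iff\; -2 < x - 5 < 2 \;\iff\; |x - 5| < 2.$$

Whether that translation serves any middle schooler is, of course, the question.)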
Mathematics is about problems, and problems must be made the focus of a student’s
mathematical life. Painful and creatively frustrating as it may be, students and their teachers
should at all times be engaged in the process— having ideas, not having ideas, discovering
patterns, making conjectures, constructing examples and counterexamples, devising arguments,
and critiquing each other’s work. Specific techniques and methods will arise naturally out of this
process, as they did historically: not isolated from, but organically connected to, and as an
outgrowth of, their problem-background.
English teachers know that spelling and pronunciation are best learned in a context of reading
and writing. History teachers know that names and dates are uninteresting when removed from
the unfolding backstory of events. Why does mathematics education remain stuck in the
nineteenth century? Compare your own experience of learning algebra with Bertrand Russell’s recollection:
“I was made to learn by heart: ‘The square of the sum of two
numbers is equal to the sum of their squares increased by twice
their product.’ I had not the vaguest idea what this meant and
when I could not remember the words, my tutor threw the book at
my head, which did not stimulate my intellect in any way.”
Are things really any different today?
SIMPLICIO: I don’t think that’s very fair. Surely teaching methods have improved
since then.
SALVIATI: You mean training methods. Teaching is a messy human relationship;
it does not require a method. Or rather I should say, if you need a
method you’re probably not a very good teacher. If you don’t have
enough of a feeling for your subject to be able to talk about it in your
own voice, in a natural and spontaneous way, how well could you
understand it? And speaking of being stuck in the nineteenth century,
isn’t it shocking how the curriculum itself is stuck in the seventeenth?
To think of all the amazing discoveries and profound revolutions in
mathematical thought that have occurred in the last three centuries!
There is no more mention of these than if they had never happened.
SIMPLICIO: But aren’t you asking an awful lot from our math teachers? You
expect them to provide individual attention to dozens of students,
guiding them on their own paths toward discovery and enlightenment,
and to be up on recent mathematical history as well?
SALVIATI: Do you expect your art teacher to be able to give you individualized,
knowledgeable advice about your painting? Do you expect her to
know anything about the last three hundred years of art history? But
seriously, I don’t expect anything of the kind, I only wish it were so.
SIMPLICIO: So you blame the math teachers?
SALVIATI: No, I blame the culture that produces them. The poor devils are
trying their best, and are only doing what they’ve been trained to do.
I’m sure most of them love their students and hate what they are being
forced to put them through. They know in their hearts that it is
meaningless and degrading. They can sense that they have been made
cogs in a great soul-crushing machine, but they lack the perspective
needed to understand it, or to fight against it. They only know they
have to get the students “ready for next year.”
SIMPLICIO: Do you really think that most students are capable of operating on
such a high level as to create their own mathematics?
SALVIATI: If we honestly believe that creative reasoning is too “high” for our
students, and that they can’t handle it, why do we allow them to write
history papers or essays about Shakespeare? The problem is not that
the students can’t handle it, it’s that none of the teachers can. They’ve
never proved anything themselves, so how could they possibly advise a
student? In any case, there would obviously be a range of student
interest and ability, as there is in any subject, but at least students
would like or dislike mathematics for what it really is, and not for this
perverse mockery of it.
SIMPLICIO: But surely we want all of our students to learn a basic set of facts and
skills. That’s what a curriculum is for, and that’s why it is so
uniform— there are certain timeless, cold hard facts we need our
students to know: one plus one is two, and the angles of a triangle add
up to 180 degrees. These are not opinions, or mushy artistic feelings.
SALVIATI: On the contrary. Mathematical structures, useful or not, are invented
and developed within a problem context, and derive their meaning
from that context. Sometimes we want one plus one to equal zero (as
in so-called ‘mod 2’ arithmetic) and on the surface of a sphere the
angles of a triangle add up to more than 180 degrees. There are no
“facts” per se; everything is relative and relational. It is the story that
matters, not just the ending.
SIMPLICIO: I’m getting tired of all your mystical mumbo-jumbo! Basic arithmetic,
all right? Do you or do you not agree that students should learn it?
SALVIATI: That depends on what you mean by “it.” If you mean having an
appreciation for the problems of counting and arranging, the
advantages of grouping and naming, the distinction between a
representation and the thing itself, and some idea of the historical
development of number systems, then yes, I do think our students
should be exposed to such things. If you mean the rote memorization
of arithmetic facts without any underlying conceptual framework, then
no. If you mean exploring the not at all obvious fact that five groups
of seven is the same as seven groups of five, then yes. If you mean
making a rule that 5 x 7 = 7 x 5, then no. Doing mathematics should
always mean discovering patterns and crafting beautiful and
meaningful explanations.
SIMPLICIO: What about geometry? Don’t students prove things there? Isn’t High
School Geometry a perfect example of what you want math classes to be?
High School Geometry: Instrument of the Devil
There is nothing quite so vexing to the author of a scathing indictment as having the primary
target of his venom offered up in his support. And never was a wolf in sheep’s clothing as
insidious, nor a false friend as treacherous, as High School Geometry. It is precisely because it
is school’s attempt to introduce students to the art of argument that makes it so very dangerous.
Posing as the arena in which students will finally get to engage in true mathematical
reasoning, this virus attacks mathematics at its heart, destroying the very essence of creative
rational argument, poisoning the students’ enjoyment of this fascinating and beautiful subject,
and permanently disabling them from thinking about math in a natural and intuitive way.
The mechanism behind this is subtle and devious. The student-victim is first stunned and
paralyzed by an onslaught of pointless definitions, propositions, and notations, and is then slowly
and painstakingly weaned away from any natural curiosity or intuition about shapes and their
patterns by a systematic indoctrination into the stilted language and artificial format of so-called
“formal geometric proof.”
All metaphor aside, geometry class is by far the most mentally and emotionally destructive
component of the entire K-12 mathematics curriculum. Other math courses may hide the
beautiful bird, or put it in a cage, but in geometry class it is openly and cruelly tortured.
(Apparently I am incapable of putting all metaphor aside.)
What is happening is the systematic undermining of the student’s intuition. A proof, that is,
a mathematical argument, is a work of fiction, a poem. Its goal is to satisfy. A beautiful proof
should explain, and it should explain clearly, deeply, and elegantly. A well-written, well-crafted
argument should feel like a splash of cool water, and be a beacon of light— it should refresh the
spirit and illuminate the mind. And it should be charming.
There is nothing charming about what passes for proof in geometry class. Students are
presented a rigid and dogmatic format in which their so-called “proofs” are to be conducted— a
format as unnecessary and inappropriate as insisting that children who wish to plant a garden
refer to their flowers by genus and species.
Let’s look at some specific instances of this insanity. We’ll begin with the example of two
crossed lines:
Now the first thing that usually happens is the unnecessary muddying of the waters with
excessive notation. Apparently, one cannot simply speak of two crossed lines; one must give
elaborate names to them. And not simple names like ‘line 1’ and ‘line 2,’ or even ‘a’ and ‘b.’
We must (according to High School Geometry) select random and irrelevant points on these
lines, and then refer to the lines using the special “line notation.”
You see, now we get to call them AB and CD. And God forbid you should omit the little bars
on top— ‘AB’ refers to the length of the line AB (at least I think that’s how it works). Never
mind how pointlessly complicated it is, this is the way one must learn to do it. Now comes the
actual statement, usually referred to by some absurd name like
PROPOSITION 2.1.1.
Let AB and CD intersect at P. Then ∠APC ≅ ∠BPD.
In other words, the angles on both sides are the same. Well, duh! The configuration of two
crossed lines is symmetrical for crissake. And as if this wasn’t bad enough, this patently obvious
statement about lines and angles must then be “proved.”
    Statement                                  Reason
 1. m∠APC + m∠APD = 180                        1. Angle Addition Postulate
    m∠BPD + m∠APD = 180
 2. m∠APC + m∠APD = m∠BPD + m∠APD              2. Substitution Property
 3. m∠APD = m∠APD                              3. Reflexive Property of Equality
 4. m∠APC = m∠BPD                              4. Subtraction Property of Equality
 5. ∠APC ≅ ∠BPD                                5. Angle Measurement Postulate
Instead of a witty and enjoyable argument written by an actual human being, and conducted
in one of the world’s many natural languages, we get this sullen, soulless, bureaucratic form-letter
of a proof. And what a mountain being made of a molehill! Do we really want to suggest
that a straightforward observation like this requires such an extensive preamble? Be honest: did
you actually even read it? Of course not. Who would want to?
The effect of such a production being made over something so simple is to make people
doubt their own intuition. Calling into question the obvious, by insisting that it be “rigorously
proved” (as if the above even constitutes a legitimate formal proof) is to say to a student, “Your
feelings and ideas are suspect. You need to think and speak our way.”
Now there is a place for formal proof in mathematics, no question. But that place is not a
student’s first introduction to mathematical argument. At least let people get familiar with some
mathematical objects, and learn what to expect from them, before you start formalizing
everything. Rigorous formal proof only becomes important when there is a crisis— when you
discover that your imaginary objects behave in a counterintuitive way; when there is a paradox
of some kind. But such excessive preventative hygiene is completely unnecessary here—
nobody’s gotten sick yet! Of course if a logical crisis should arise at some point, then obviously
it should be investigated, and the argument made more clear, but that process can be carried out
intuitively and informally as well. In fact it is the soul of mathematics to carry out such a
dialogue with one’s own proof.
So not only are most kids utterly confused by this pedantry— nothing is more mystifying
than a proof of the obvious— but even those few whose intuition remains intact must then
retranslate their excellent, beautiful ideas back into this absurd hieroglyphic framework in order
for their teacher to call it “correct.” The teacher then flatters himself that he is somehow
sharpening his students’ minds.
As a more serious example, let’s take the case of a triangle inside a semicircle:
Now the beautiful truth about this pattern is that no matter where on the circle you place the
tip of the triangle, it always forms a nice right angle. (I have no objection to a term like “right
angle” if it is relevant to the problem and makes it easier to discuss. It’s not terminology itself
that I object to, it’s pointless unnecessary terminology. In any case, I would be happy to use
“corner” or even “pigpen” if a student preferred.)
Here is a case where our intuition is somewhat in doubt. It’s not at all clear that this should
be true; it even seems unlikely— shouldn’t the angle change if I move the tip? What we have
here is a fantastic math problem! Is it true? If so, why is it true? What a great project! What a
terrific opportunity to exercise one’s ingenuity and imagination! Of course no such opportunity
is given to the students, whose curiosity and interest is immediately deflated by:
THEOREM 9.5. Let ΔABC be inscribed in a semicircle with diameter AC.
Then ∠ABC is a right angle.
    Statement                                  Reason
 1. Draw radius OB. Then OB = OC = OA          1. Given
 2. m∠OBC = m∠BCA                              2. Isosceles Triangle Theorem
    m∠OBA = m∠BAC
 3. m∠ABC = m∠OBA + m∠OBC                      3. Angle Sum Postulate
 4. m∠ABC + m∠BCA + m∠BAC = 180                4. The sum of the angles of a triangle is 180
 5. m∠ABC + m∠OBC + m∠OBA = 180                5. Substitution (line 2)
 6. 2 m∠ABC = 180                              6. Substitution (line 3)
 7. m∠ABC = 90                                 7. Division Property of Equality
 8. ∠ABC is a right angle                      8. Definition of Right Angle
Could anything be more unattractive and inelegant? Could any argument be more
obfuscatory and unreadable? This isn’t mathematics! A proof should be an epiphany from the
Gods, not a coded message from the Pentagon. This is what comes from a misplaced sense of
logical rigor: ugliness. The spirit of the argument has been buried under a heap of confusing formalism.
No mathematician works this way. No mathematician has ever worked this way. This is a
complete and utter misunderstanding of the mathematical enterprise. Mathematics is not about
erecting barriers between ourselves and our intuition, and making simple things complicated.
Mathematics is about removing obstacles to our intuition, and keeping simple things simple.
Compare this unappetizing mess of a proof with the following argument devised by one of
my seventh-graders:
“Take the triangle and rotate it around so it makes a four-sided
box inside the circle. Since the triangle got turned
completely around, the sides of the box must be parallel,
so it makes a parallelogram. But it can’t be a slanted box
because both of its diagonals are diameters of the circle, so
they’re equal, which means it must be an actual rectangle.
That’s why the corner is always a right angle.”
Isn’t that just delightful? And the point isn’t whether this argument is any better than the
other one as an idea, the point is that the idea comes across. (As a matter of fact, the idea of the
first proof is quite pretty, albeit seen as through a glass, darkly.)
More importantly, the idea was the student’s own. The class had a nice problem to work on,
conjectures were made, proofs were attempted, and this is what one student came up with. Of
course it took several days, and was the end result of a long sequence of failures.
To be fair, I did paraphrase the proof considerably. The original was quite a bit more
convoluted, and contained a lot of unnecessary verbiage (as well as spelling and grammatical
errors). But I think I got the feeling of it across. And these defects were all to the good; they
gave me something to do as a teacher. I was able to point out several stylistic and logical
problems, and the student was then able to improve the argument. For instance, I wasn’t
completely happy with the bit about both diagonals being diameters— I didn’t think that was
entirely obvious— but that only meant there was more to think about and more understanding to
be gained from the situation. And in fact the student was able to fill in this gap quite nicely:
“Since the triangle got rotated halfway around the circle, the tip
must end up exactly opposite from where it started. That’s why
the diagonal of the box is a diameter.”
So a great project and a beautiful piece of mathematics. I’m not sure who was more proud,
the student or myself. This is exactly the kind of experience I want my students to have.
The problem with the standard geometry curriculum is that the private, personal experience
of being a struggling artist has virtually been eliminated. The art of proof has been replaced by a
rigid step-by-step pattern of uninspired formal deductions. The textbook presents a set of
definitions, theorems, and proofs, the teacher copies them onto the blackboard, and the students
copy them into their notebooks. They are then asked to mimic them in the exercises. Those that
catch on to the pattern quickly are the “good” students.
The result is that the student becomes a passive participant in the creative act. Students are
making statements to fit a preexisting proof-pattern, not because they mean them. They are
being trained to ape arguments, not to intend them. So not only do they have no idea what their
teacher is saying, they have no idea what they themselves are saying.
Even the traditional way in which definitions are presented is a lie. In an effort to create an
illusion of “clarity” before embarking on the typical cascade of propositions and theorems, a set
of definitions are provided so that statements and their proofs can be made as succinct as
possible. On the surface this seems fairly innocuous; why not make some abbreviations so that
things can be said more economically? The problem is that definitions matter. They come from
aesthetic decisions about what distinctions you as an artist consider important. And they are
problem-generated. To make a definition is to highlight and call attention to a feature or
structural property. Historically this comes out of working on a problem, not as a prelude to it.
The point is you don’t start with definitions, you start with problems. Nobody ever had an
idea of a number being “irrational” until Pythagoras attempted to measure the diagonal of a
square and discovered that it could not be represented as a fraction. Definitions make sense
when a point is reached in your argument which makes the distinction necessary. To make
definitions without motivation is more likely to cause confusion.
This is yet another example of the way that students are shielded and excluded from the
mathematical process. Students need to be able to make their own definitions as the need
arises— to frame the debate themselves. I don’t want students saying, “the definition, the
theorem, the proof,” I want them saying, “my definition, my theorem, my proof.”
All of these complaints aside, the real problem with this kind of presentation is that it is
boring. Efficiency and economy simply do not make good pedagogy. I have a hard time
believing that Euclid would approve of this; I know Archimedes wouldn’t.
SIMPLICIO: Now hold on a minute. I don’t know about you, but I actually enjoyed
my high school geometry class. I liked the structure, and I enjoyed
working within the rigid proof format.
SALVIATI: I’m sure you did. You probably even got to work on some nice
problems occasionally. Lots of people enjoy geometry class (although
lots more hate it). But this is not a point in favor of the current
regime. Rather, it is powerful testimony to the allure of mathematics
itself. It’s hard to completely ruin something so beautiful; even this
faint shadow of mathematics can still be engaging and satisfying.
Many people enjoy paint-by-numbers as well; it is a relaxing and
colorful manual activity. That doesn’t make it the real thing, though.
SIMPLICIO: But I’m telling you, I liked it.
SALVIATI: And if you had had a more natural mathematical experience you would
have liked it even more.
SIMPLICIO: So we’re supposed to just set off on some free-form mathematical
excursion, and the students will learn whatever they happen to learn?
SALVIATI: Precisely. Problems will lead to other problems, technique will be
developed as it becomes necessary, and new topics will arise naturally.
And if some issue never happens to come up in thirteen years of
schooling, how interesting or important could it be?
SIMPLICIO: You’ve gone completely mad.
SALVIATI: Perhaps I have. But even working within the conventional framework
a good teacher can guide the discussion and the flow of problems so as
to allow the students to discover and invent mathematics for
themselves. The real problem is that the bureaucracy does not allow
an individual teacher to do that. With a set curriculum to follow, a
teacher cannot lead. There should be no standards, and no curriculum.
Just individuals doing what they think best for their students.
SIMPLICIO: But then how can schools guarantee that their students will all have
the same basic knowledge? How will we accurately measure their
relative worth?
SALVIATI: They can’t, and we won’t. Just like in real life. Ultimately you have to
face the fact that people are all different, and that’s just fine. In any
case, there’s no urgency. So a person graduates from high school not
knowing the half-angle formulas (as if they do now!) So what? At least
that person would come away with some sort of an idea of what the
subject is really about, and would get to see something beautiful.
In Conclusion…
To put the finishing touches on my critique of the standard curriculum, and as a service to the
community, I now present the first ever completely honest course catalog for K-12 mathematics:
The Standard School Mathematics Curriculum
LOWER SCHOOL MATH. The indoctrination begins. Students learn that mathematics is not
something you do, but something that is done to you. Emphasis is placed on sitting still, filling
out worksheets, and following directions. Children are expected to master a complex set of
algorithms for manipulating Hindu-Arabic symbols, unrelated to any real desire or curiosity on their part,
and regarded only a few centuries ago as too difficult for the average adult. Multiplication tables
are stressed, as are parents, teachers, and the kids themselves.
MIDDLE SCHOOL MATH. Students are taught to view mathematics as a set of procedures,
akin to religious rites, which are eternal and set in stone. The holy tablets, or “Math Books,” are
handed out, and the students learn to address the church elders as “they” (as in “What do they
want here? Do they want me to divide?”) Contrived and artificial “word problems” will be
introduced in order to make the mindless drudgery of arithmetic seem enjoyable by comparison.
Students will be tested on a wide array of unnecessary technical terms, such as ‘whole number’
and ‘proper fraction,’ without the slightest rationale for making such distinctions. Excellent
preparation for Algebra I.
ALGEBRA I. So as not to waste valuable time thinking about numbers and their patterns, this
course instead focuses on symbols and rules for their manipulation. The smooth narrative thread
that leads from ancient Mesopotamian tablet problems to the high art of the Renaissance
algebraists is discarded in favor of a disturbingly fractured, post-modern retelling with no
characters, plot, or theme. The insistence that all numbers and expressions be put into various
standard forms will provide additional confusion as to the meaning of identity and equality.
Students must also memorize the quadratic formula for some reason.
GEOMETRY. Isolated from the rest of the curriculum, this course will raise the hopes of
students who wish to engage in meaningful mathematical activity, and then dash them. Clumsy
and distracting notation will be introduced, and no pains will be spared to make the simple seem
complicated. The goal of this course is to eradicate any last remaining vestiges of natural
mathematical intuition, in preparation for Algebra II.
ALGEBRA II. The subject of this course is the unmotivated and inappropriate use of coordinate
geometry. Conic sections are introduced in a coordinate framework so as to avoid the aesthetic
simplicity of cones and their sections. Students will learn to rewrite quadratic forms in a variety
of standard formats for no reason whatsoever. Exponential and logarithmic functions are also
introduced in Algebra II, despite not being algebraic objects, simply because they have to be
stuck in somewhere, apparently. The name of the course is chosen to reinforce the ladder
mythology. Why Geometry occurs in between Algebra I and its sequel remains a mystery.
TRIGONOMETRY. Two weeks of content are stretched to semester length by masturbatory
definitional runarounds. Truly interesting and beautiful phenomena, such as the way the sides of
a triangle depend on its angles, will be given the same emphasis as irrelevant abbreviations and
obsolete notational conventions, in order to prevent students from forming any clear idea as to
what the subject is about. Students will learn such mnemonic devices as “SohCahToa” and “All
Students Take Calculus” in lieu of developing a natural intuitive feeling for orientation and
symmetry. The measurement of triangles will be discussed without mention of the
transcendental nature of the trigonometric functions, or the consequent linguistic and
philosophical problems inherent in making such measurements. Calculator required, so as to
further blur these issues.
PRE-CALCULUS. A senseless bouillabaisse of disconnected topics. Mostly a half-baked
attempt to introduce late nineteenth-century analytic methods into settings where they are neither
necessary nor helpful. Technical definitions of ‘limits’ and ‘continuity’ are presented in order to
obscure the intuitively clear notion of smooth change. As the name suggests, this course
prepares the student for Calculus, where the final phase in the systematic obfuscation of any
natural ideas related to shape and motion will be completed.
CALCULUS. This course will explore the mathematics of motion, and the best ways to bury it
under a mountain of unnecessary formalism. Despite being an introduction to both the
differential and integral calculus, the simple and profound ideas of Newton and Leibniz will be
discarded in favor of the more sophisticated function-based approach developed as a response to
various analytic crises which do not really apply in this setting, and which will of course not be
mentioned. To be taken again in college, verbatim.
And there you have it. A complete prescription for permanently disabling young minds— a
proven cure for curiosity. What have they done to mathematics!
There is such breathtaking depth and heartbreaking beauty in this ancient art form. How
ironic that people dismiss mathematics as the antithesis of creativity. They are missing out on an
art form older than any book, more profound than any poem, and more abstract than any abstract.
And it is school that has done this! What a sad endless cycle of innocent teachers inflicting
damage upon innocent students. We could all be having so much more fun.
SIMPLICIO: Alright, I’m thoroughly depressed. What now?
SALVIATI: Well, I think I have an idea about a pyramid inside a cube… | {"url":"https://www.creatingrealmathematicians.com/a-mathematicians-lament","timestamp":"2024-11-14T23:55:14Z","content_type":"text/html","content_length":"331785","record_id":"<urn:uuid:1eb02dfd-952d-4942-b2b1-dab907678a49>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00178.warc.gz"} |
Find-S Algorithm: Finding maximally specific hypotheses
After learning the concept of general-to-specific ordering of hypotheses, it is time to use this partial ordering to organize the search for a hypothesis that is consistent with the observed
training examples. One way is to begin with the most specific possible hypothesis in H, then generalize this hypothesis each time it fails to cover an observed positive training example.
The FIND-S algorithm is used for this purpose. The steps of the FIND-S algorithm are illustrated below.
To illustrate this algorithm, assume the learner is given the sequence of training examples from the EnjoySport task
1. The first step of FIND-S is to initialize h to the most specific hypothesis in H: h ← (Ø, Ø, Ø, Ø, Ø, Ø)
2. First training example x1 = <Sunny, Warm, Normal, Strong, Warm, Same>, EnjoySport = +ve. Observing the first training example, it is clear that hypothesis h is too specific. None of the “Ø”
constraints in h are satisfied by this example, so each is replaced by the next more general constraint that fits the example: h1 = <Sunny, Warm, Normal, Strong, Warm, Same>.
3. Consider the second training example x2 = <Sunny, Warm, High, Strong, Warm, Same>, EnjoySport = +ve. The second training example forces the algorithm to further generalize h, this time
substituting a “?” in place of any attribute value in h that is not satisfied by the new example. Now h2 = <Sunny, Warm, ?, Strong, Warm, Same>
4. Consider the third training example x3 = <Rainy, Cold, High, Strong, Warm, Change>, EnjoySport = −ve. The FIND-S algorithm simply ignores every negative example. So the hypothesis remains as
before: h3 = <Sunny, Warm, ?, Strong, Warm, Same>
5. Consider the fourth training example x4 = <Sunny, Warm, High, Strong, Cool, Change>, EnjoySport = +ve. The fourth example leads to a further generalization of h as h4 = <Sunny, Warm, ?, Strong, ?, ?>
6. So the final hypothesis is <Sunny, Warm, ?, Strong, ?, ?>.
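A minimal Python sketch of this procedure may help. The function name, the "+"/"-" labels, and the use of "0" to stand in for Ø are illustrative choices for this example, not part of any standard library:

```python
def find_s(examples):
    """Return the maximally specific hypothesis consistent with the positive examples."""
    n_attrs = len(examples[0][0])
    h = ["0"] * n_attrs              # most specific hypothesis: all Ø
    for x, label in examples:
        if label != "+":             # FIND-S simply ignores negative examples
            continue
        for i, value in enumerate(x):
            if h[i] == "0":          # first positive example: adopt its values
                h[i] = value
            elif h[i] != value:      # mismatch: generalize to the wildcard "?"
                h[i] = "?"
    return h

training = [
    (("Sunny", "Warm", "Normal", "Strong", "Warm", "Same"),   "+"),
    (("Sunny", "Warm", "High",   "Strong", "Warm", "Same"),   "+"),
    (("Rainy", "Cold", "High",   "Strong", "Warm", "Change"), "-"),
    (("Sunny", "Warm", "High",   "Strong", "Cool", "Change"), "+"),
]

print(find_s(training))
# ['Sunny', 'Warm', '?', 'Strong', '?', '?']
```

Running it on the four EnjoySport examples reproduces the trace above, ending in <Sunny, Warm, ?, Strong, ?, ?>.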
The above example can be illustrated with the figure below.
The search begins (h0) with the most specific hypothesis in H, then considers increasingly general hypotheses (h1 through h4) as mandated by the training examples. The search moves from hypothesis to
hypothesis, searching from the most specific to progressively more general hypotheses along one chain of the partial ordering. At each step, the hypothesis is generalized only as far as necessary to
cover the new positive example. Therefore, at each stage the hypothesis is the most specific hypothesis consistent with the training examples observed up to this point.
The key properties of the FIND-S algorithm:
• FIND-S is guaranteed to output the most specific hypothesis within H that is consistent with the positive training examples
• FIND-S algorithm’s final hypothesis will also be consistent with the negative examples provided the correct target concept is contained in H, and provided the training examples are correct.
Unanswered questions by FIND-S
There are several questions still left unanswered, such as:
1. Has the learner converged to the correct target concept? Although FIND-S will find a hypothesis consistent with the training data, it has no way to determine whether it has found the only
hypothesis in H consistent with the data (i.e., the correct target concept), or whether there are many other consistent hypotheses as well.
2. Why prefer the most specific hypothesis? In case there are multiple hypotheses consistent with the training examples, FIND-S will find the most specific. It is unclear whether we should prefer
this hypothesis over, say, the most general, or some other hypothesis of intermediate generality.
3. Are the training examples consistent? In most practical learning problems there is some chance that the training examples will contain at least some errors or noise. Such inconsistent sets of
training examples can severely mislead FIND-S, given the fact that it ignores negative examples.
4. What if there are several maximally specific consistent hypotheses? There can be several maximally specific hypotheses consistent with the data. FIND-S finds only one.
| {"url":"https://www.tutorialtpoint.net/2022/07/find-s-algorith-finding-maximally-specific-hypotheses.html","timestamp":"2024-11-14T00:33:16Z","content_type":"application/xhtml+xml","content_length":"194465","record_id":"<urn:uuid:c61d6934-8399-4204-8b67-88b855cb3d8a>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00434.warc.gz"}
2012 MathFour Manifesto
I gave up New Year’s resolutions years ago. I never kept them, so I figured they were pointless.
Instead I took up creating a yearly manifesto – a short declaration of my current life stance. This year I’m going to do a professional manifesto as well. This is it.
The MathFour.com Mission
Old Document by storebukkebruse | Flickr.com | CCBY
The mission of MathFour.com is to enhance the effectiveness of student math programs by helping every adult have a more positive influence on children with respect to math.
In less fancy terms – I want all the programs like Mathnasium, TenMarks, Stinky Kid Math, Sokikom and ABC Mouse to be successful. Negative math-talk from parents can diminish the effects these
wonderful programs can have. MathFour.com’s goal is to help parents turn off negative math-talk and turn on helpful and effective math influences.
I will follow the research.
I have partnered with a licensed professional counselor. He is currently reading through the social science and psychological research and discovering the value of positive adult influences on
children’s success in math. He is presenting a paper on the subject at the Western Social Science Association Conference this year.
The programs in development at MathFour.com will be based on this research.
I will broaden the reach.
The Facebook fan page isn’t working for the mission anymore. I will turn it into something useful. @MathFour on twitter is strong, so I will continue to interact there and be helpful.
I’ll also up the quantity and quality of articles posted here on MathFour.com– making them helpful for parents and teachers and maximizing SEO so they can be found easily.
by stuartpilbrow | Flickr.com | CC BY-SA
I will increase the scope.
The target of the mission is parents. And parents who access MathFour.com must be able to read English and have access to a computer.
These two assumptions could eliminate much of the world’s population from this content.
I’m in the process of forming a non-profit organization so MathFour.com can reach those parents. I have a grant writer on standby ready to get the ball rolling as soon as it’s formed.
I will work more efficiently.
In 2011 I diluted the efforts with various projects. This year holds only one project. And it speaks directly to the mission.
That’s Math! is launching March 1, with a soft launch for MathFourTicians on January 23. Over the course of the year my partner and I will be creating new content, managing the social learning aspect
of the project and promoting it like crazy.
MathFour.com will continue as a foundation of robust and helpful content, free from outside advertisements.
I will find “competitors.”
There are hundreds, maybe even thousands, of student math programs in existence: on the web, software-based, speakers, books, workbooks, kits, summer camps, tutoring companies…
There are also many websites, books and programs to help adults with math.
MathFour.com is not one of these. MathFour.com is for parents. Not adult students. It’s for parents who are uncomfortable with math. It’s also for parents who are comfortable with math – because even
engineers who are parents sometimes use negative math-talk!
The goal is not to turn grown-ups into lovers of math. It’s to increase the effectiveness of parents who are trying to positively influence their children in math.
The research indicates that student math programs could be much more effective if students’ adult influences in math were positive.
by familymwr | Flickr.com | CC BY
Parents need to know that. And I can’t get the word out alone. The world is large and this is a global problem. I need to find websites and organizations with a similar mission.
I will help.
I will solicit questions from parents and teachers so I can be sure MathFour.com is the resource they need. It’s not helpful if I’m not helping.
So here’s my first solicitation: As a parent, what would you like to see MathFour.com do for you this year? If you are a mathematician and/or math teacher, how can I help you and the parents of your students?
Please let me know in the comments.
10 Responses to 2012 MathFour Manifesto
1. A brilliant and inspiring article. It’s clear where you’re headed and how you want to help parents and teachers help students.
□ Thanks, Wil! Indeed the first of the year is a place to really lay out plans and get my head straight!
2. So, as a parent, could I hire you to come over and teach math? 🙂 LOL
□ Well, not really. But if you’re interested, I could do a webinar teaching you how to teach math.
Let me know – that could be cool!
3. Bon, in all honesty you were not a very pleasant human being in the last year, hopefully you will change for the better and be more polite in 2012. Happy new year. Cheers.
□ Thanks for your comment, Tom. I have to say, though, that I’m not really sure where I was unpleasant or impolite.
Of course the person who is unpleasant is usually unaware of it (I’ve seen it before in others). If you could help me out and share where you’ve seen this I’ll be happy to examine and do what
I can to be more inviting.
4. If the research shows, and the media reports, that students don’t do as well in math and science in our country, you might really have hit on a raw nerve…could it be that we the parents are much
of the problem? Parents need training too, and this might be the breakthrough we need to turn the tide. Imagine if our graduating high schoolers clamored to concentrate their university studies
on math and the sciences?
□ Imagine indeed! I was just telling someone the other day that all kids should choose their professions with these words:
“Yeah, I could be a mathematician, but there are so many of them. I know it’s fun and easy, but I think I’ll try my hand at being a __________.”
Thanks for your comments, MaryFran!
5. Thanks for this post, Bon.
It has inspired me to be more focussed in my own website and business. All the very best with your endeavours in 2012!
P.S. Oh, and I am not related to Tom Price!
□ Thanks so much, Peter!
Let me know how I can support your business. We’ve got similar goals for sure.
| {"url":"https://mathfour.com/general/2012-mathfour-manifesto","timestamp":"2024-11-08T09:31:52Z","content_type":"text/html","content_length":"57374","record_id":"<urn:uuid:a8c072cc-f1d1-4a05-a3e0-c38ad990fff0>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00386.warc.gz"}
How many bags of gravel in a cubic yard
How many bags of gravel in a cubic yard | how many 50lb bags of gravel in a cubic yard | how much does a 50lb bag of gravel cover | how much does a 50lb bag of gravel cost | how much is a 50lb bag of gravel
To calculate the number of 50lb bags of gravel in a cubic yard, you’ll need to know the density of the gravel. Gravel can vary in density, but a common value is around 2,700 pounds per cubic yard.
Here’s the calculation: Number of bags = (Cubic yards of gravel) x (Density of gravel in pounds per cubic yard) / (Weight per bag in pounds). Number of bags = (1 cubic yard) x (2,700 pounds per cubic
yard) / (50 pounds per bag). Number of bags ≈ 54 bags. So, there are approximately 54 bags of 50lb gravel in a cubic yard, assuming a gravel density of 2,700 pounds per cubic yard.
How many bags of gravel in a cubic yard
A 50 lb bag of 3/4″ gravel from the brand Quikrete is an all-purpose gravel: a multi-use product for a variety of decorative and landscaping applications and for making concrete. It is well-washed,
properly graded gravel generally used for landscaping, patios, gardening, tree wells, walks, rooftops, fish ponds and aquariums.
A 50 pound bag of gravel mixed with portland cement and all-purpose sand makes a concrete mix used for casting slabs, beams, columns and footings, and for filling post holes for fencing. It is ideal
for new construction and renovation of paver applications.
Gravel is one of the most important building materials, collected from river basins, mountains and rocks; it includes small rocks, pebbles, loose and dry sand, aggregate and pea gravel.
The weight of gravel depends on its dry or wet condition, its loose or dense (compacted) state, its mixture composition and particle size, and on whether it is made from crushed stone.
If you are looking to buy gravel and crushed stone for your construction work or for preparing concrete mortar, you need to know how much gravel you should purchase and how much you can load in
your vehicle. Most gravel suppliers near you will give you the option of delivering gravel and crushed stone to your home; for this they will charge some money for transportation. If you have
a truck or vehicle that you can use to bring gravel to your destination or construction site, then that is a cheaper and faster option for you.
A cubic yard of typical gravel weighs about 2830 pounds or 1.42 tons. Generally, a cubic yard of gravel provides enough material to cover a 100-square-foot area with 3 inches of gravel.
On average, a cubic yard of gravel, which visually is 3 feet long by 3 feet wide by 3 feet tall, weighs approximately 2,970 pounds or 1.5 tons. For estimating purposes, contractors and builders take
one cubic yard of gravel to weigh 3000 pounds or 1.5 tons. But one yard of 50 pound bagged gravel would weigh around 2700 pounds, and one cubic foot of bagged gravel weighs 100 pounds.
How many bags of gravel in a cubic yard
There are 54 bags of 50 pound (50 lb) of gravel in a cubic yard. A typical bag of gravel comes in 50 lbs (pounds), which yields 0.5 cubic feet or 0.0185 cubic yards, and a cubic yard of gravel is 27
cubic feet, so number of 50 lb bags of gravel in a cubic yard = (27÷0.5) = 54 bags. Hence, there are 54 bags of 50 lb gravel in a cubic yard.
How much is a 50lb bag of gravel
A 50lb (pound) bag of typical gravel yields around 0.5 cubic feet or 0.0185 cubic yards, will cover approximately 3 square feet at a standard depth of 2 inches, and will cost
between $4 and $6 per bag.
How much does a 50lb bag of gravel cover
A 50lb bag of gravel will cover approximately 3 square feet at a standard depth of 2 inches, 2 square feet at 3 inches, 1.5 square feet at 4 inches, and 6 square feet at 1 inch.
How many square feet in a 50lb bag of gravel
There are 2 square feet of coverage in a 50lb bag of gravel at a standard depth of 3 inches, 3 square feet at 2 inches, 1.5 square feet at 4 inches, and 6 square feet at 1 inch.
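These figures follow from simple unit conversion, and you can reproduce them with a few lines of code. The helper below is hypothetical (the names and the 0.5 cubic-foot yield per 50 lb bag come from the discussion above, not from any library):

```python
# Hypothetical helper: coverage and bag counts for 50 lb gravel bags,
# each yielding about 0.5 cubic feet.
BAG_YIELD_CUFT = 0.5

def coverage_sqft(depth_in):
    # Area one bag covers at the given depth (depth converted from inches to feet).
    return BAG_YIELD_CUFT / (depth_in / 12.0)

def bags_for_area(area_sqft, depth_in):
    return area_sqft * (depth_in / 12.0) / BAG_YIELD_CUFT

print(coverage_sqft(2))         # 3.0 sq ft per bag at 2 inches deep
print(coverage_sqft(4))         # 1.5 sq ft per bag at 4 inches deep
print(bags_for_area(100, 3))    # 50.0 bags for 100 sq ft at 3 inches
print(27 / BAG_YIELD_CUFT)      # 54.0 bags per cubic yard
```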
There are 54 50-pound bags of gravel in one cubic yard. A typical bag of gravel weighs 50 pounds (lbs), which is equal to 0.5 cubic feet or 0.0185 cubic yards, and one cubic yard of gravel is equal
to 27 cubic feet, so the number of 50 pound bags of gravel in a cubic yard = (27 ÷ 0.5) = 54 bags. | {"url":"https://civilsir.com/how-many-bags-of-gravel-in-a-cubic-yard/","timestamp":"2024-11-06T19:56:05Z","content_type":"text/html","content_length":"91658","record_id":"<urn:uuid:1153c33f-75b0-4e1d-b795-f671a1d7c460>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00590.warc.gz"}
Peer review in Efficient estimation for large-scale linkage disequilibrium patterns of the human genome
Efficient estimation for large-scale linkage disequilibrium patterns of the human genome
Detlef Weigel, Max Planck Institute for Biology Tübingen, Germany
Alexander Young, University of California, Los Angeles, United States
In this paper, the authors point out that the standard approach of estimating LD is inefficient for datasets with large numbers of SNPs, with a computational cost of O(nm^2), where n is the number of
individuals and m is the number of SNPs. Using the known relationship between the LD matrix and the genomic-relatedness matrix, they can calculate the mean level of LD within the genome or across
genomic segments with a computational cost of O(n^2m). Since in most datasets, n<<m, this can lead to major computational improvements. They have produced software written in C++ to implement this algorithm, which they call X-LD. Using the output of their method, they estimate the LD decay and the mean extended LD for various subpopulations from the 1000 Genomes Project data.
Generally, for computational papers like this, the proof is in the pudding, and the authors have been successful at their aim of producing an efficient computational tool. The most compelling
evidence of this in the paper are Figure 2 and Supplementary Figure S2. In Figure 2, they report how well their X-LD estimates of LD compare to estimates based on the standard approach using PLINK.
They appear to have very good agreement. In Figure S2, they report the computational runtime of X-LD vs PLINK, and as expected X-LD is faster than PLINK as long as it is evaluating LD for more than
8000 SNPs.
This method seems to be limited to calculating average levels of LD in broad regions of the genome. While it would be possible to make the regions more fine-grained, doing so appears to make this
approach much less efficient. As such, applications of this method may be limited to those proposed in the paper, for questions where average LD of large chromosomal segments is informative.
This approach seems to produce real gains for settings where broad average levels of LD are useful to know, but it will likely have less of an impact in settings where fine-grained levels of LD are
necessary (e.g., accounting for LD in GWAS summary statistics).
The following is the authors’ response to the original reviews.
Reviewer #1 (Public Review)
Huang and colleagues present a method for approximation of linkage disequilibrium (LD) matrices. The problem of computing LD matrices is the problem of computing a correlation matrix. In the
cases considered by the authors, the number of rows (n), corresponding to individuals, is small compared to the number of columns (m), corresponding to the number of variants. Computing the
correlation matrix has cubic time complexity $O(nm^2)$, which is prohibitive for large samples. The authors approach this using three main strategies:
1. they compute a coarsened approximation of the LD matrix by dividing the genome into variant-wise blocks which statistics are effectively averaged over;
2. they use a trick to get the coarsened LD matrix from a coarsened genomic relatedness matrix (GRM), which, with $O(n^2m)$ time complexity, is faster when n << m (a numerical sketch of this identity follows below);
3. they use the Mailman algorithm to improve the speed of basic linear algebra operations by a factor of log(max(m,n)). The authors apply this approach to several datasets.
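To make strategy 2 concrete, here is a toy NumPy check of the underlying identity (an illustration only, not the X-LD implementation; the sizes, seed, and use of Gaussian noise as a stand-in for standardized genotypes are arbitrary). The average squared LD can be recovered from the small n x n matrix XX' alone, because tr((X'X)^2) = tr((XX')^2):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 2000                     # individuals << SNPs
X = rng.normal(size=(n, m))          # stand-in for standardized genotypes

# Direct route, O(n m^2): form the full m x m LD matrix.
R = X.T @ X / n
mean_ld_direct = np.sum(R**2) / m**2

# GRM route, O(n^2 m): only the n x n matrix X X' is ever formed.
K = X @ X.T / n
mean_ld_grm = np.sum(K**2) / m**2    # tr((XX'/n)^2) = tr((X'X/n)^2)

print(np.isclose(mean_ld_direct, mean_ld_grm))   # True
```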
The authors demonstrate that their proposed method performs in line with theoretical explanations.
The coarsened LD matrix is useful for describing global patterns of LD, which do not necessarily require variant-level resolution.
They provide an open-source implementation of their software.
The coarsened LD matrix is of limited utility outside of analyzing macroscale LD characteristics. The method still essentially has cubic complexity--albeit the factors are smaller and Mailman
reduces this appreciably. It would be interesting if the authors were able to apply randomized or iterative approaches to achieve more fundamental gains. The algorithm remains slow when n is
large and/or the grid resolution is increased.
Thanks for your positive and accurate evaluation! We acknowledge the weakness and have included some sentences in the Discussion.
“The weakness of the proposed method is obvious: the algorithm remains slow when the sample size is large or the grid resolution is increased. With the availability of data such as the UK Biobank data
(Bycroft et al., 2018), the proposed method may not be adequate, and more advanced methods, such as randomized implementations of the proposed methods, are needed.”
Reviewer #2 (Public Review)
In this paper, the authors point out that the standard approach of estimating LD is inefficient for datasets with large numbers of SNPs, with a computational cost of $O(nm^2)$, where n is the
number of individuals and m is the number of SNPs. Using the known relationship between the LD matrix and the genomic-relatedness matrix, they can calculate the mean level of LD within the
genome or across genomic segments with a computational cost of $O(n^2m)$. Since in most datasets, n<<m, this can lead to major computational improvements. They have produced software written in
C++ to implement this algorithm, which they call X-LD. Using the output of their method, they estimate the LD decay and the mean extended LD for various subpopulations from the 1000 Genomes
Project data.
Generally, for computational papers like this, the proof is in the pudding, and the authors appear to have been successful at their aim of producing an efficient computational tool. The most
compelling evidence of this in the paper is Figure 2 and Supplementary Figure S2. In Figure 2, they report how well their X-LD estimates of LD compare to estimates based on the standard approach
using PLINK. They appear to have very good agreement. In Figure S2, they report the computational runtime of X-LD vs PLINK, and as expected X-LD is faster than PLINK as long as it is evaluating
LD for more than 8000 SNPs.
While the X-LD software appears to work well, I had a hard time following the manuscript enough to make a very good assessment of the work. This is partly because many parameters used are not
defined clearly or at all in some cases. My best effort to intuit what the parameters meant often led me to find what appeared to be errors in their derivation. As a result, I am left worrying if
the performance of X-LD is due to errors cancelling out in the particular setting they consider, making it potentially prone to errors when taken to different contexts.
Thanks for your critical reading and evaluation. We do apologize for the typos, which have been corrected and clearly defined now (see Eq 1 and Table 1). In addition, we include more detailed
mathematical steps, which explain how the LD decay regression is constructed and consequently how it finds its interpretation (see the detailed derivation steps between Eq 3 and Eq 4).
I feel like there is value in the work that has been done here if there were more clarity in the writing. Currently, LD calculations are a costly step in tools like LD score regression and
Bayesian prediction algorithms, so a more efficient way to conduct these calculations would be useful broadly. However, given the difficulty I had following the manuscript, I was not able to
assess when the authors’ approach would be appropriate for an extension such as that.
See our replies below in responding to your more detailed questions.
Reviewer #1 (Recommendations For The Authors)
There are numerous linguistic errors throughout, making it challenging to read.
It is unclear how the intercepts were chosen in Figure S2. Since theory only gives you the slopes, it seems like it would make more sense to choose the intercept such that it aligns with the
empirical results in some way.
Thanks for your critical evaluation. We apologize for the typos; we have read the manuscript through and clarified the text as much as possible. In addition, we have included Table 1, which introduces the mathematical symbols used in the paper.
In Figure S2, the two algorithms being compared have different software implementations, PLINK vs X-LD. Their real-world performance depends not only on the time complexity of the algorithms (right-side y-axis), but also on how the software is coded. PLINK is known for its excellent programming. If we could program as well as Chris Chang, the performance of X-LD would have been even better and would approach the ratio m/n. However, even with less skilled programming, X-LD outperformed PLINK.
Reviewer #2 (Recommendations For The Authors):
Thank you for the chance to review your manuscript. It looks like compelling work that could be improved by greater detail. Providing the level of detail necessary may require creating a
Supplementary Note that does a lot of hand-holding for readers like me who are mathematically literate but who don’t have the background that you do. Then you can refer readers to the Supplement
if they can’t follow your work.
We have fixed the problems and style issues as best we can.
Regarding the weakness section in the public review, here are a few examples of where I got confused, though this list is not exhaustive.
1. Consider Equation 1 (line 100), which I believe must be incorrect. Imagine that g consists of two SNPs on different chromosomes with correlation rho. Then ell_g (which is defined as the
average squared elements of the correlation matrix) would be
ell_g = 1/4 (1 + 1 + rho^2 + rho^2) = (1+rho^2)/2.
But ell_1=1 and ell_2=1 and ell_12=rho^2 (The average squared elements of the chromosome-specific correlation matrices and the cross-chromosome correlation matrix, respectively). So
sum(ell_i)+sum(ell_ij) = 1 + 1 + rho^2 + rho^2 = (1+rho^2)*2.
I believe your formulas would hold if you defined your LD values as the sum of squared correlations instead of the mean, but then I don’t know if the math in the subsequent sections holds. I
think this problem also holds for Eq 2 and therefore makes Eqs 3 and 4 difficult to interpret.
Thanks for your attentive review and invaluable suggestions. We acknowledge the typo in calculating the mean in Eq 1, which caused difficulties in understanding the equations. We sincerely apologize for this oversight. To address this issue and ensure clarity in the interpretation of Eq 3 and Eq 4, we have provided more detailed explanations (see the derivation between Eq 3 and Eq 4).
2. I didn’t know what the parameters are in Equation 3. The vector ell needs to be defined. Is it the vector of ell_i for each chromosomal segment i? I’m also confused by the definition of m_i,
which is defined on line 113 as the “SNP number of the i-th chromosome.” Do the authors mean the number of SNPs on the i-th chromosomal segment? If so, it wasn’t clear to me how Eq 2 and Eq 3
imply Eq 4. Further, it wasn’t clear to me why E(b1) quantifies the average LD decay of the genome. I’m used to seeing plots of average LD as a function of distance between SNPs to calculate
this, though I’m admittedly not a population geneticist, so maybe this is standard. Standard or not, readers deserve to have their hands held a bit more through this either in the text or in a
Supplementary Note.
Thanks for your insightful feedback. When we were writing this paper, our actual focus was Eq 3: to establish the relationship between chromosomal LD and the reciprocal of the length of a chromosome (Fig 6A) – which was surrogated by the number of SNPs, i.e., the correlation between ell_i and 1/m_i.
We asked our friends who are population geneticists, and they anticipated the correlation between chromosomal LD (ell) and 1/m. The rationale is simple if one knows the very basics of population genetics. A long chromosome experiences more recombination, which weakens LD for a pair of loci. In particular, for a pair of loci, D_t = D_0(1-c)^t, where D_t is the LD at generation t, D_0 the LD at generation 0, and c the recombination fraction. As recombination hotspots are nearly evenly distributed along the genome, as reported in Science 2019;363:eaau8861, the chromosome will be broken into the shape shown in Author response image 1 (Fig 1C, newly added). Along the diagonal you see tight LD blocks, which will vanish in the future as predicted by the D_t equation, and any loci far away from each other will not be in LD unless LD is raised by, e.g., population structure. Ideally, we assume diagonal blocks of average size m×m, with the average LD of a SNP with other SNPs inside a diagonal block (red) being l_u and, in contrast, the off-diagonal average LD (light red) being l_uv. This logic is hidden but employed in, e.g., LD score regression and PRS refinement using LD structure.
But how can one estimate chromosomal LD (ell), which costs an overwhelming $O(nm^2)$, as our friends said! So, Figure 6A is logically anticipated by a seasoned population geneticist but has never been realized, because $O(nm^2)$ is a nightmare. Often, such signature patterns should have been employed as showcases when releasing new reference data, such as HapMap. However, to our knowledge, this signature linear relationship has never been illustrated in those reference data.
If you further test a population geneticist by asking whether any chromosome will deviate from this line (Fig 6A), the answer will most likely be chromosome 6, because of the tight-LD HLA region. However, it is chromosome 11, because it has the most completely sequenced centromere. Chr 11 is a surprise! With T2T-sequenced populations, we predict that Chr 11 will not deviate much.
However, suspecting that readers might not appreciate this point, we shifted our focus to the efficient computation of LD, which is more readily understood. We acknowledge the lack of clarity in the notation definitions and the absence of a derivation for the interpretation of b1 and b0 in the LD decay regression. We have therefore added a table explaining the notation (see Table 1) and provided additional derivations, which explain how the LD decay regression was derived (see the derivation between Eq 3 and Eq 4). Figure 1C illustrates the underlying assumption about LD.
The technique that bridges Eqs 2–3 to Eq 4 is called “building interpretation”. It was once one of the kernel tasks of population genetics and statistical genetics; a classical example is Haseman–Elston regression (Behavior Genetics, 1972, 2:3–19). As the field moves towards a data-driven style, the culture becomes “shut up and calculate”. Finding an interpretation for a regression is a vanishing craftsmanship, and people often end up with unclear results!
3. In line 135, it’s not clear to me what is meant by $G_{io}^2$. If it is $G_{io}G_{io}$, then wouldn’t the resulting matrix be a matrix of zeros since $G_{io}$ is zero everywhere except the lower off-diagonal? So maybe it is $G_{io}^T G_{io}$? But then later in that line, you say that the square of this matrix is the sum of several terms of the form $G_{k_1 k_2}$. Are these the scalar elements of the G matrix? But then the sum is a scalar, which can’t be true since $E(G_{io}^2)$ is a matrix.
Thanks for your attentive review. We indeed confused the definitions of matrices and their elements; $G_{io}$ should refer to the stacked off-diagonal elements of the matrix $G_i$. So $G_{io}$ is a vector of the variables $g_{ij}$ – the relatedness between samples i and j. Assuming the reviewer uses R, $E(G_{io}^2)$ corresponds to `mean(G[row(G) < col(G)]^2)`.
See the text between Eq 5 and Eq 6.
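For readers who prefer Python to R, here is a minimal numpy analogue of the quoted expression (illustrative only, using a random symmetric stand-in matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
G = (A + A.T) / 2                  # symmetric stand-in for a relatedness matrix

iu = np.triu_indices_from(G, k=1)  # strictly off-diagonal (upper-triangular) entries
G_io = G[iu]                       # the stacked vector called G_io in the response
print(np.mean(G_io ** 2))          # analogue of R's mean(G[row(G) < col(G)]^2)
```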
“We extract two vectors $k_{io}$, which stacks the off-diagonal elements of $K_i$, and $k_{id}$, which takes the diagonal elements of $K_i$.”
In addition, $E(G_{diag}) \times n + E(G_{off\text{-}diag}) \times n(n-1) = 0$, so the ground truth is that $E(G_{io}) = -\frac{1}{n-1}$, not zero.
To clarify these math symbols, we replace G with K, so as to be consistent with our other works (see Table 1).
To derive the means and the sampling variances for $\ell_i$ and $\ell_{i \cdot j}$, Eq 7 can be established by some modifications of the Delta method, as exemplified in Appendix I of Lynch and Walsh’s book (Lynch and Walsh, 1998). We added this sentence near Eq 7 in the main text.
1. Xin Huang
2. Tian-Neng Zhu
3. Ying-Chao Liu
4. Guo-An Qi
5. Jian-Nan Zhang
6. Guo-Bo Chen
Efficient estimation for large-scale linkage disequilibrium patterns of the human genome
eLife 12:RP90636.
Axisymmetric harmonic interpolation polynomials in $\textbf {R}^{N}$
Trans. Amer. Math. Soc. 196 (1974), 385-402
DOI: https://doi.org/10.1090/S0002-9947-1974-0348130-6
Corresponding to a given function $F(x,\rho)$ which is axisymmetric harmonic in an axisymmetric region $\Omega \subset \mathbf{R}^3$ and to a set of $n+1$ circles $C_n$ in an axisymmetric subregion $A \subset \Omega$, an axisymmetric harmonic polynomial $\Lambda_n(x,\rho;C_n)$ is found which on the $C_n$ interpolates to $F(x,\rho)$ or to its partial derivatives with respect to $x$. An axisymmetric subregion $B \subset \Omega$ is found such that $\Lambda_n(x,\rho;C_n)$ converges uniformly to $F(x,\rho)$ on the closure of $B$. Also a $\Lambda_n(x,\rho;x_0,\rho_0)$ is determined which, together with its first $n$ partial derivatives with respect to $x$, coincides with $F(x,\rho)$ on a single circle $(x_0,\rho_0)$ in $\Omega$ and converges uniformly to $F(x,\rho)$ in a closed torus with $(x_0,\rho_0)$ as central circle.
Bibliographic Information
• © Copyright 1974 American Mathematical Society
• Journal: Trans. Amer. Math. Soc. 196 (1974), 385-402
• MSC: Primary 31B99
• DOI: https://doi.org/10.1090/S0002-9947-1974-0348130-6
• MathSciNet review: 0348130
Why Do Objects Have Different Weights in Water? in context of weight to force
26 Aug 2024
Title: The Effect of Buoyancy on the Weight of Objects in Water: A Study on the Relationship between Force and Mass
When an object is partially or fully submerged in water, its weight appears to change due to the upward force exerted by the surrounding fluid. This phenomenon is known as buoyancy, which is a
fundamental concept in physics that has significant implications for various fields, including engineering, biology, and environmental science. In this article, we will explore the relationship
between force and mass, examining why objects have different weights in water.
The weight of an object is typically measured by its mass multiplied by the acceleration due to gravity (g). However, when an object is submerged in a fluid like water, the situation becomes more
complex. The surrounding fluid exerts an upward force on the object, known as buoyancy, which depends on the density of the fluid and the volume of the object.
The Buoyant Force:
The buoyant force (Fb) can be calculated using Archimedes’ Principle:
Fb = ρVg
where ρ is the density of the fluid (water), V is the volume of fluid displaced by the object (equal to the object’s volume when it is fully submerged), and g is the acceleration due to gravity.
When an object is partially or fully submerged in water, the buoyant force acts upward on the object, reducing its apparent weight. The magnitude of the buoyant force depends on the density of the
fluid and the volume of the object.
The Relationship between Force and Mass:
In the absence of buoyancy, the weight (W) of an object is equal to its mass (m) multiplied by the acceleration due to gravity:
W = mg
However, when an object is submerged in water, the situation becomes more complex. The apparent weight (Wa) of the object can be calculated as:
Wa = W - Fb
Substituting Archimedes’ Principle into this equation, we get:
Wa = mg - ρVg = mg(1 - ρ/ρobj)
where the second form uses m = ρobj·V (with ρobj the density of the object) and makes the role of the density comparison explicit.
The apparent weight of the object depends on its mass, the density of the fluid, and the volume of the object. When an object is denser than water (ρobj > ρwater), its weight exceeds the buoyant force: the net force is downward and the object sinks, although its apparent weight is still smaller than its true weight. Conversely, when an object is less dense than water (ρobj < ρwater), the buoyant force exceeds its weight: the net force is upward and the object floats.
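A minimal numerical sketch of these relations (illustrative values; fully submerged objects assumed):

```python
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def apparent_weight(mass, volume, rho_fluid=RHO_WATER, g=G):
    """Wa = m*g - rho*V*g for a fully submerged object; negative => it floats."""
    return mass * g - rho_fluid * volume * g

print(apparent_weight(mass=2.7, volume=0.001))  # aluminum block (2700 kg/m^3): ~16.7 N
print(apparent_weight(mass=0.5, volume=0.001))  # wood block (500 kg/m^3): ~-4.9 N, floats
```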
In conclusion, the weight of objects appears to change when they are submerged in water due to the buoyant force exerted by the surrounding fluid. The relationship between force and mass is more
complex in this situation, as the apparent weight of an object depends on its mass, the density of the fluid, and the volume of the object. Understanding these principles is crucial for various
applications, including engineering design, biological research, and environmental monitoring.
• Archimedes (212 BCE). On Floating Bodies.
• Halliday, D., Resnick, R., & Walker, J. (2013). Fundamentals of Physics. Wiley.
Note: The article does not provide numerical examples, but rather focuses on the theoretical aspects of buoyancy and its relationship to force and mass.
30% Discount on “Massive MIMO Networks” Book
The new book Massive MIMO Networks: Spectral, Energy, and Hardware Efficiency (by Björnson, Sanguinetti, Hoydis) is currently available for the special price of $70 (including worldwide shipping). The
original price is $99.
This price is available until the end of April when buying the book directly from the publisher through the following link:
Note: The book’s authors will give a joint tutorial on April 15 at WCNC 2018. A limited number of copies of the book will be available for sale at the conference, and if you attend the tutorial, you will receive an even better deal on buying the book!
4 thoughts on “30% Discount on “Massive MIMO Networks” Book”
1. I have a question about the channel model in your new book:
+ In my understanding, the channel response in the book is in the time domain; we assume flat fading, and the common model is the Rayleigh fading model.
+ But if we use OFDM, how can we model the channel? In the paper “Noncooperative cellular wireless with unlimited numbers of base station antennas” by Marzetta, the author talks about OFDM; they consider one subcarrier, and the model is still the same Rayleigh fading model (the channel response is just multiplied with the signal) ==> Is the channel now in the frequency domain? I ask because I think we should use convolution, instead of multiplying the channel response with the signal, under frequency-selective fading.
1. My book and also Marzetta’s works are based on the block fading channel model, which is a simplification of reality. In this model, we consider a flat-fading time-invariant channel with a
coherence block, thus the channel is described by an impulse response h*delta(t) in the time domain and h in the frequency domain, where delta(t) is the Dirac delta function. So h is the
coefficient that describes the channel.
In OFDM, the coherence block can be approximately viewed as spanning a certain number of subcarriers in the frequency domain and OFDM symbols in the time domain. This is what Marzetta
describes in his book.
If you want to consider a more detailed model of OFDM, you will have to leave the block fading model. The following two papers show how to deal with channel variations in the frequency domain
and in the time domain:
There are many other relevant papers on OFDM in Massive MIMO which you can find in the reference lists of the two papers above.
2. There is one thing I still do not get: the channel model. For example, with frequency-selective fading over L taps, we have the channel response in the time domain h = [h_1, h_2, …, h_L], where each tap h_i follows Rayleigh fading, h_i ~ CN(0,1). So the received signal in the time domain is y[n] = x[n](*)h + w[n], where (*) is convolution. Changing this model to the frequency domain gives Y[k] = X[k]·H[k] + W[k], where · is ordinary multiplication ==> What is the model for H[k]? Does it still follow ~ CN(0,1)?
==> In your book, the channel h ~ CN(0,1) is Rayleigh fading, and in my understanding Rayleigh fading is in the time domain; but when we change to the frequency domain, what happens?
1. For an L-tap channel, you can check out the first of the two papers that I referenced. G^m_k(s) is the channel on subcarrier s between antenna m and user k. It is formed by taking the L-tap channel between antenna m and user k and then computing an inner product with a vector that comes from a DFT matrix. Since this is a weighted sum of Gaussian channel taps, it will also be Gaussian distributed. It will be i.i.d. between antennas/users, but correlated between subcarriers.
Regarding my book, let’s recall my previous answer: In my book “we consider a flat-fading time-invariant channel with a coherence block, thus the channel is described by an impulse response h*delta(t) in the time domain and h in the frequency domain, where delta(t) is the Dirac delta function. So h is the coefficient that describes the channel.” Hence, if h is i.i.d. Rayleigh fading, this occurs in both the time and frequency domains.
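This answer is easy to verify numerically. The sketch below (illustrative parameters, not taken from the book or the papers) draws power-normalized i.i.d. Rayleigh taps, computes the DFT, and checks that each subcarrier coefficient H[k] is CN(0,1) while adjacent subcarriers are correlated:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, trials = 4, 64, 50_000   # taps, subcarriers, channel realizations

# i.i.d. Rayleigh-fading taps, normalized so the total channel power is 1
h = (rng.standard_normal((trials, L)) + 1j * rng.standard_normal((trials, L))) \
    * np.sqrt(0.5 / L)

H = np.fft.fft(h, n=N, axis=1)   # frequency response on N subcarriers

print(np.var(H[:, 0]))                              # ~1.0: each H[k] is CN(0,1)
print(np.abs(np.mean(H[:, 0] * np.conj(H[:, 1]))))  # ~0.99: adjacent subcarriers
                                                    # are strongly correlated
```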
Finding The Area Of A Triangle And Rectangle Worksheet - TraingleWorksheets.com
Calculating The Area Of A Triangle Worksheet – Triangles are among the most fundamental shapes in geometry. Understanding the triangle is essential to learning more advanced geometric concepts. In this blog we will look at the various kinds of triangles, triangle angles, how to calculate the perimeter and area of a triangle, and provide some examples to illustrate each. Types of Triangles: There are three types of triangles: equilateral, isosceles, …
XLPack Solver – Nonlinear Least Squares Problems
Solves nonlinear least squares problems.
Let’s solve the following example (Reference: https://www.itl.nist.gov/div898/strd/nls/data/misra1a.shtml).
Find the parameters p1 and p2 that approximate the following data using the model function f(x) = p1*(1 - exp(-p2*x)) with the least sum of squared residuals.
x y
77.6 10.07
114.9 14.73
141.1 17.94
190.8 23.93
239.9 29.61
289.0 35.18
332.8 40.02
378.4 44.82
434.8 50.76
477.3 55.05
536.8 61.01
593.1 66.4
689.1 75.47
760.0 81.78
The solver program will write the values of p1 and p2 into the variable value cells (B8 and B9 in this case). The function value cells (G7:G20 in this case) must contain the formulas that compute the residuals y - f(x) from the given p1 and p2. To solve this example, the formula for G7 is =F7-B$8*(1-EXP(-B$9*E7)), for G8 it is =F8-B$8*(1-EXP(-B$9*E8)), and so on.
Jacobian cells contain the formulas of the Jacobian matrix. These are necessary only when Lmder1 or N2g is selected as the solver program.
Initial value cells contain the initial values of p1 and p2. If these are not specified (the range is left blank), the initial values are assumed to be already contained in the variable value cells.
The obtained solutions are output to the output cell range. If it is not specified (the range is left blank), the solutions will be written to the variable value cells.
Lmdif1, Lmder1, N2f or N2g can be selected as the solver program.
The standard value of the tolerance is 1.0e-8. The tolerance value will be set to Tol for Lmdif1 and Lmder1, and to Xctol for N2f and N2g.
Click “Compute”. Then the solutions will be computed and output.
The cell ranges may be specified larger than required. In that case, only the necessary part, starting from the upper-left corner, will be used.
Nonlinear least squares problems may have several solutions. A different solution may be obtained depending on the initial values, and different solver programs may return different solutions even for the same initial values. It is therefore recommended to give initial values as close as possible to the desired solution.
If no solution is obtained, the #NUM! error will be output. The major suspected causes are: the algorithm did not converge because of bad initial values, or the given problem has no solution.
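For readers who want to cross-check the same fit outside Excel, here is a minimal sketch using scipy (not part of XLPack; scipy's 'lm' method wraps MINPACK's Levenberg-Marquardt routines, the same family as Lmdif1). The NIST-certified values for this Misra1a problem are approximately p1 = 238.94 and p2 = 5.5016e-4:

```python
import numpy as np
from scipy.optimize import least_squares

x = np.array([77.6, 114.9, 141.1, 190.8, 239.9, 289.0, 332.8,
              378.4, 434.8, 477.3, 536.8, 593.1, 689.1, 760.0])
y = np.array([10.07, 14.73, 17.94, 23.93, 29.61, 35.18, 40.02,
              44.82, 50.76, 55.05, 61.01, 66.40, 75.47, 81.78])

def residuals(p):
    p1, p2 = p
    return y - p1 * (1.0 - np.exp(-p2 * x))   # same residuals as cells G7:G20

sol = least_squares(residuals, x0=[500.0, 1e-4], method='lm', xtol=1e-8)
print(sol.x)   # expect approximately [2.3894e+02, 5.5016e-04]
```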
Please refer to here for “Save/Restore” button.
When “Help” button is clicked, this page will be displayed if the network connection is available.
The “?” button in the upper right corner will not work correctly. Please use the “Help” button.
Data-driven computation of molecular reaction coordinates
The identification of meaningful reaction coordinates plays a key role in the study of complex molecular systems whose essential dynamics are characterized by rare or slow transition events. In a
recent publication, precise defining characteristics of such reaction coordinates were identified and linked to the existence of a so-called transition manifold. This theory gives rise to a novel
numerical method for the pointwise computation of reaction coordinates that relies on short parallel MD simulations only, but yields accurate approximation of the long time behavior of the system
under consideration. This article presents an extension of the method towards practical applicability in computational chemistry. It links the newly defined reaction coordinates to concepts from
transition path theory and Markov state model building. The main result is an alternative computational scheme that allows for a global computation of reaction coordinates based on commonly available
types of simulation data, such as single long molecular trajectories or the push-forward of arbitrary canonically distributed point clouds. It is based on a Galerkin approximation of the transition
manifold reaction coordinates that can be tuned to individual requirements by the choice of the Galerkin ansatz functions. Moreover, we propose a ready-to-implement variant of the new scheme, which
computes data-fitted, mesh-free ansatz functions directly from the available simulation data. The efficacy of the new method is demonstrated on a small protein system.
In recent years, it has become possible to numerically explore the chemically relevant slow transition processes in systems with several thousands of atoms. This was made possible due to the increase
of raw computational power and deployment of specialized computing architectures,^1 as well as by the development of accelerated integration schemes that bias the dynamics in favor of the slow
transition processes, yet preserve the original statistics.^2–4
To obtain chemical insight about the essential dynamics of the system, this vast amount of high-dimensional data has to be adequately processed and filtered. One desirable goal often is a simplified
model of the mechanism of action, in which the fast, unimportant processes are averaged out or otherwise disregarded. One way is to construct kinetic models of the system, i.e., identifying
metastable reactant-, product-, and possibly intermediate states, and reducing the dynamics to a jump process between them. Under certain regularity assumptions on the root model that are readily
fulfilled, such a model can be built in an automated, data-driven fashion.^5,6 However, the simplicity of the resulting so-called Markov state model (MSM) comes with a price: since the long-time
relaxation kinetics is described just by jumps between finitely many discrete states, any information about the transition process and its dynamical features is lost.
An alternative collection of approaches, to which this paper ultimately contributes, thus aims at the automated identification of good reaction coordinates or order parameters, mappings from the full
to some lower-dimensional, but still continuous state space, onto which the full dynamics can be projected without loss of the essential processes. Often enough, this reaction coordinate alone (i.e.,
without the corresponding dynamical model) already contains more valuable chemical information than the kinetic models, as, for example, the free energy profile along the reaction coordinate allows
the determination of the activation energy of the respective transition process.^7
The systematic and mathematically rigorously motivated construction of reaction coordinates is an area of active research; for an overview, see Ref. 7. Where it is available, chemical expert
knowledge can be used to guide the construction.^8,9 In the context of transition path theory (TPT),^10,11 the committor function is known to be an ideal reaction coordinate^12 for transitions
between preselected metastable sets. Related to this, approximations to the dominant eigenfunctions of the transfer operator are also often considered ideal reaction coordinates,^6,13,14 which has
been confirmed in Ref. 15 for a subclass of time scale separated systems. However, the computation of both committor functions and transfer operator eigenfunctions is infeasible for very
high-dimensional systems. Moreover, the authors have recently shown that said eigenfunctions yield redundant reaction coordinates, in the sense that often a further reduction is possible.^16
In the same work, the authors identified necessary characteristics that reaction coordinates have to exhibit in order to retain the slow processes (a “quality criterion”). In short, it must be
possible to relate them to the dominant transfer operator eigenfunctions in a specific non-linear way. However, as we will see, the criterion is also interpretable in the context of TPT.
What is more, it was shown that the existence of reaction coordinates that fulfill the quality criterion is tied to the existence of a so-called transition manifold $\mathbb{M}$, a low-dimensional manifold in the function space $L^1$. The property that defines $\mathbb{M}$ is that, on moderate time scales $t_{fast} < t \ll t_{slow}$, the transition density functions of the dynamics concentrate around $\mathbb{M}$. A firm mathematical theory for the existence and identification of reaction coordinates was developed around this transition manifold.
The main practical result of Ref. 16 was the insight that any parametrization of $M$ can be turned into a good reaction coordinate. A numerical algorithm was proposed that allows the pointwise
computation of this reaction coordinate and only requires the ability to generate trajectories of the aforementioned moderate length that start at the desired evaluation point.
While the method has a solid theoretical foundation and is directly applicable in many cases, there yet exists a certain gap between the theoretical advantages and the practical applications of the
proposed scheme: While the ability to efficiently compute the reaction coordinate only in specific points is quite remarkable, in practice one often wishes to learn the reaction coordinate in all of
the accessible state space (i.e., where pre-generated simulation data are available), as the location of the “interesting” points is unknown in advance. The originally proposed method cannot compute
the reaction coordinate from dynamical “bulk data”—such as long equilibrated trajectories or the push-forward of point clouds that sample the canonical ensemble—that is preferably generated by
contemporary simulation methods and software.
In the present work, we attempt to close this gap by proposing an alternative, purely data-driven algorithm for computing the transition manifold reaction coordinate. It is based on a classical
Galerkin approximation of the reaction coordinate with freely selectable ansatz space. Its numerical realization requires only a so-called transition matrix between its discretization elements. A
wide variety of techniques for building MSMs and similar algorithms are available for the construction of this matrix from the aforementioned types of bulk data.^14,17,18 This makes it possible to
transfer many techniques from the extensive toolbox of MSMs, as, for example, the use of customized Galerkin ansatz spaces explicitly adapted for molecular dynamical problems.^19 Furthermore, this
makes our approach instantly applicable whenever the construction of an (arguably less informative) MSM is possible.
Finally, with the objective to create an algorithm that requires only a minimum of a priori information about the system, we propose a very practical implementation of this Galerkin approximation
that constructs a mesh-free set of Voronoi cell-based ansatz functions directly from the available simulation data. Interestingly, the task of optimally choosing the Voronoi centers leads to two
well-known and highly scalable algorithms from data mining, namely, the k-means clustering algorithm and Poisson disk sampling algorithm, depending on the chosen error measure. We demonstrate the
efficacy of this method by identifying chemically interpretable essential degrees of freedom of a 66-dimensional model of alanine dipeptide and a 1600-dimensional model of a fast-folding protein.
The paper is organized as follows: In Sec. II, the basic concepts of time scale-separated systems and reaction coordinates are introduced. Also, our central quality criteria for the characterization
of good reaction coordinates are derived and a comparison with TPT is drawn. Section III introduces the concept of transition manifolds and explains the local burst-based algorithm. In Sec. IV, the
new Galerkin approximation of the transition manifold reaction coordinate is derived as well as the Voronoi-based implementation. Section V demonstrates the application of our new method to a simple
synthetic example system, as well as to the realistic molecular systems. Concluding remarks and an outlook can be found in Sec. VI.
A. Metastable molecular dynamics
We model our molecular dynamical system as a continuous-time stochastic process $X_t$ on some high-dimensional state space $\mathbb{X} \subset \mathbb{R}^n$. Here $\mathbb{X}$ may consist of either full Cartesian atomic coordinates or
some other suitable degrees of freedom that adequately describe the micro state of the system. We require the process to fulfill common technical assumptions from the Markov approach to molecular
dynamics,^18,20 namely, Markovianity, ergodicity, and time-reversibility. Aside from that, the specific dynamical law that governs the evolution of X[t] is ultimately arbitrary, but we in general
think of $X_t$ as a “random walk in a potential energy landscape.” The first example that comes to mind would be the Smoluchowski dynamics (also called overdamped Langevin dynamics)
$dX_t = -\nabla V(X_t)\, dt + \sqrt{2\beta^{-1}}\, dW_t,$ (1)
where V denotes the potential energy function, β = 1/k[B]T denotes the inverse temperature, and W[t] denotes a standard Wiener process. However, our theory can also be applied to the non-overdamped
Langevin dynamics (projected onto the positional degrees of freedom), or any other thermostated molecular dynamics that samples the stationary probability density
$\rho(x) = \frac{1}{Z}\, e^{-\beta V(x)}.$ (2)
Here, $Z = \int_{\mathbb{X}} e^{-\beta V(x)}\, dx$ is a normalizing constant.
B. Reaction coordinates
Formally, a reaction coordinate is a low-dimensional variable of the full system, i.e., a smooth function $\xi: \mathbb{R}^n \to \mathbb{R}^k$ with $k \ll n$. In practice, $k$ will often be only one- or two-dimensional and correspond to some chemically interpretable quantity, e.g., a certain collection of backbone dihedral angles in a peptide, or the distance between important functional groups. The reduced or projected system is then given by $\xi(X_t)$, which is now a stochastic process on $\mathbb{R}^k$.
While the map $x \mapsto \xi(x)$ describes the pointwise projection of the system, the projection of densities that evolve with the system is described by the Zwanzig projection operator,^21 denoted by $Q_\xi$,
$Q_\xi p(z) = \frac{1}{W(z)} \int_{\mathbb{X}} p(x)\, \rho(x)\, \delta_z(\xi(x))\, dx,$
where $\delta_z$ is the delta distribution and $W(z)$ is a normalization term. Its action can be described as follows: if at some time $t$ the random variable $X_t$ is distributed according to some density $p_t$, then the random variable $\xi(X_t)$ is distributed according to $Q_\xi p_t$, which is a density over $\mathbb{R}^k$.
By the definition above, any function over $\mathbb{R}^n$ may be called a reaction coordinate. Thus, one of the key questions we aim to answer in this article is as follows: what criterion distinguishes “good”
from “bad” reaction coordinates?
In many MD systems, the chemically interesting reaction processes correspond to transitions between two or more metastable states, regions of state space that “trap” the dynamics for long times
before a sudden transition to another metastable state occurs. Typical examples include protein- and peptide folding, receptor-ligand binding, and conformational change of large biomolecules. It is
customary to picture these transitions as occurring along certain transition pathways in the potential energy landscape, but there is no uniformly accepted definition of these pathways. Proposed
variants include the minimum energy path,^22 minimum free energy path,^23 and the principal curve.^24 By a first intuitive definition, reaction coordinates should thus describe the “progress of the
reaction along the transition pathway.” A common computational scheme thus goes as follows:
1. Compute the transition pathway (using, for example, the string method^25).
2. Parameterize the transition pathway.
3. Project the state space onto the transition pathway.
The value of the parametrization in a projected point then gives the reaction coordinate value in that point.
However, due to the ambiguous concept of transition pathways, this approach lacks rigor. Variants of transition pathways that are based on local features of the energy landscape only, such as the
minimum energy pathway, can be shown to fail to describe the (global) slow transition processes.^24 Moreover and most importantly, the question of how to globally project state space onto the
transition path in a “dynamically correct” way remains unanswered. A nearest-point projection, as is, for example, used in the definition of principal curves, can be shown to fail with simple examples.
Thus, in order to find a rigorous criterion for good reaction coordinates, we need to take a closer look at the “global” stochastic evolution of X[t] and its slow parts. We will, however, eventually
come back to the picture of potential energy surfaces and interpret our criterion with regard to transition pathways (see Example 1.1).
C. The transfer operator
Regardless of the specific dynamical model, the stochastic evolution of $X_t$ is entirely described by its transition probability density $p^t$: given a starting point $x$, the probability density of finding the system at some point $y$ after time $t \ge 0$ is denoted by $p^t(x, y)$, i.e.,
$X_t \sim p^t(x, \cdot) \quad \text{for } X_0 = x,$
where “∼” means “distributed according to.” $p^t(x, \cdot)$ can be estimated by starting a large number of parallel simulations of the stochastic dynamics, all with starting point $x$, and estimating the resulting endpoint density (for example, using histogram or kernel density estimation methods).
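As an illustration of this sampling procedure, a sketch in Python might look as follows; `simulate(x0, t)` is a placeholder for any integrator of the dynamics, not a specific MD package:

```python
import numpy as np

def estimate_transition_density(simulate, x0, t, M=1000, bins=50):
    """Histogram estimate of the transition density p^t(x0, .) from M bursts."""
    endpoints = np.array([simulate(x0, t) for _ in range(M)])
    endpoints = endpoints.reshape(M, -1)            # (M, d) sample matrix
    density, edges = np.histogramdd(endpoints, bins=bins, density=True)
    return density, edges
```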
With $p^t$, the evolution of a general starting density $X_0 \sim u_0$ can then be expressed as
$u_t(x) = \int_{\mathbb{X}} u_0(y)\, p^t(y, x)\, dy =: \mathcal{P}^t u_0(x).$
The operator $\mathcal{P}^t$ is known as the Perron-Frobenius operator or transfer operator of the system. In the case of the Smoluchowski dynamics (1), it is equal to the solution operator of the associated Fokker-Planck equation.
We see that $p^t$, and by extension $\mathcal{P}^t$, describes the complete stochastic evolution. While the analytical derivation of $p^t$ and $\mathcal{P}^t$ is possible only for the most simple of systems (for example, for the Ornstein-Uhlenbeck process^26), they will play the central role in the description of slow sub-processes of the dynamics and the computation of optimal reaction coordinates.
Closely related to $\mathcal{P}^t$ is the Koopman operator $\mathcal{K}^t$, defined by
$\mathcal{K}^t \eta_0(x) = \int_{\mathbb{X}} \eta_0(y)\, p^t(x, y)\, dy.$ (3)
It acts as the push-forward of observables, i.e., $\eta_t(x)$ is the conditional expectation of $\eta_0(X_t)$, provided we started in $x$ at time $t = 0$:
$\eta_t(x) = \mathcal{K}^t \eta_0(x) = \mathbb{E}\big[\eta_0(X_t) \mid X_0 = x\big].$
This operator will be of relevance later when we describe the numerical computation of reaction coordinates.
D. Dominant time scales
Under fairly general conditions, it can be shown that the spectrum of $\mathcal{P}^t$ consists of discrete real eigenvalues
$1 = \lambda_0^t > \lambda_1^t \ge \lambda_2^t \ge \cdots,$
and that the eigenvalue $\lambda_0 = 1$ is simple and belongs to the eigenfunction $\rho$ (the stationary density). We denote by $v_i$ the eigenfunction belonging to $\lambda_i^t$. Except for $\lambda_0$, all eigenvalues decay exponentially as $t \to \infty$, which corresponds to the relaxation of the process towards the stationary ensemble, regardless of the starting density. The relaxation rate of the $i$-th slowest process, known as the $i$-th implied time scale,^18 is given by
$t_i = -t / \log \lambda_i^t.$ (4)
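As a small numerical illustration of this formula (toy eigenvalues, not from any particular system), note how a gap in the eigenvalues translates directly into a gap in the implied time scales:

```python
import numpy as np

def implied_timescales(eigenvalues, lag):
    """Implied time scales t_i = -lag / log(lambda_i^lag)."""
    return -lag / np.log(np.asarray(eigenvalues, dtype=float))

print(implied_timescales([0.99, 0.95, 0.40], lag=1.0))
# -> approx [99.5, 19.5, 1.09]: two slow processes, then a clear gap
```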
From now on, we assume the system to possess d slow sub-processes, typically (but not necessarily) corresponding to the rare transitions between d metastable sets, and that we are primarily
interested in accurately describing these slow processes. In this case, the dominant $d + 1$ eigenvalues $\{\lambda_0^t, \dots, \lambda_d^t\}$ will be positive and separated from the remaining eigenvalues by a spectral gap, i.e., $\lambda_d^t \gg \lambda_{d+1}^t$. We can then express the action of the operator $\mathcal{P}^t$, and thus the stochastic evolution of the process, in terms of the dominant eigenfunctions,
$\mathcal{P}^t u_0 \approx \sum_{i=0}^{d} \lambda_i^t\, c_i\, v_i,$
where $c_i = \int u_0\, v_i$. This means that the information about the long-term evolution of the slow processes is entirely contained in the $d$ dominant eigenpairs $(\lambda_i^t, v_i)$. Consequently, we consider
the preservation of the dominant eigenpairs under projection onto the reaction coordinate a suitable objective for optimally choosing the reaction coordinate.
The dominant eigenpairs of the transfer operator are also the primary object of interest in the Markov approach to coarse graining molecular dynamics, as mentioned in the Introduction. Here, the goal
is to use the eigenfunctions to build a discrete Markov State Model (MSM),^5,6,27 which replaces the original molecular dynamics by a finite-state Markov jump process between the metastable states.
Though all information about the transition regions and paths is lost by this approach, the long-time transition rates between the states are preserved. These models have been successfully applied to
a wide range of real-life molecular systems.^13,27–29
The reaction coordinate we will ultimately define and compute will preserve the dominant eigenfunctions; thus, the projected process ξ(X[t]) also still contains all the information about the
long-term transition processes. In this sense, the motivation of ours and the MSM approach are deeply linked.
E. A criterion for good reaction coordinates
Our investigation so far points out an apparent discrepancy in the concurrent understanding of what criterion defines “good” reaction coordinates: On one hand, it is a common perception that good
reaction coordinates should parameterize some sort of transition pathway, along which a reaction event progresses “most likely.” On the other hand, if one is interested in the longterm behavior of
the system, the projection onto the reaction coordinate must preserve the slowest processes; so, a definition based on the dominant eigenpairs of the transfer operator seems natural. However, this
second requirement is applicable to very general and not necessarily metastable systems and thus does not even require the existence of a transition pathway in the classical sense.
We will now see that these two viewpoints can still be unified and that there exists a criterion for good reaction coordinates based on the transfer operator that also leads to the parametrization of
the transition pathway.
Let the projected transfer operator, transporting probability densities of the projected process $\xi(X_t)$, be denoted by $\mathcal{P}_\xi^t$. Let $(\mu_i^t, w_i)$ denote the eigenpairs of $\mathcal{P}_\xi^t$. By the preceding reasoning, we now call $\xi$ a good reaction coordinate if for the dominant eigenpairs it holds that
$\mu_i^t \approx \lambda_i^t, \quad i = 0, \dots, d,$ (5)
i.e., the full and projected dominant eigenvalues are similar, and
$v_i \approx (w_i \circ \xi)\, \rho, \quad i = 0, \dots, d,$ (CI)
i.e., the eigenfunctions of $\mathcal{P}^t$ can be approximately reconstructed from the eigenfunctions of $\mathcal{P}_\xi^t$ and $\xi$. This way, all information about the $d$ slowest processes is contained in $\xi(X_t)$.
It has been shown in Ref. 16 that (5) follows from (CI), so (CI) is a sufficient criterion for good reaction coordinates (in the sense of preserving the long time scales). If the approximation in (CI) holds sharply, we call $\xi$ an optimal reaction coordinate.
The first idea that comes to mind is to define the reaction coordinate directly as the dominant eigenfunctions (weighted by the stationary density for technical reasons),
$\xi(x) = \big(v_1(x)/\rho(x), \dots, v_d(x)/\rho(x)\big).$ (6)
This reaction coordinate is indeed optimal, as was shown in Ref. 16. Indeed, the authors in Ref. 15 have also identified (6) as an ideal reaction coordinate, though only for a narrower sub-class of
time scale-separated systems. However, there are two major practical disadvantages in choosing the eigenfunctions as reaction coordinates that ultimately prevent us from computing and using them:
1. The eigenproblem is global and thus prohibitively expensive to solve numerically in high dimensions. If we wish to compute the value of an eigenfunction at only a single position in $\mathbb{X}$, we need an approximation of $\mathcal{P}^t$ that is accurate on all of $\mathbb{X}$. There have been attempts to mitigate this, but the conceptual problem remains.
2. The eigenfunction reaction coordinate is often redundant. In systems where the slow processes correspond to the transitions between d metastable sets, i.e., d potential wells, (6) would define a
d-dimensional reaction coordinate. However, in practice, many of these potential wells often lie along the same transition path, and consequently the transitions between those wells would be
describable by just a one-dimensional reaction coordinate. See the example in Fig. 1 for an illustration.
We will now reformulate criterion (CI) in a way that addresses the concerns above and at the same time makes it compatible with the transition path intuition of reaction coordinates. Consider a reaction coordinate $\xi$ of some dimension $r \le d$ fulfilling (CI). Furthermore, assume that for each starting point $x$, we can write the transition density $p^t(x, \cdot)$ as a linear combination of the eigenfunctions $v_i$,
$p^t(x, \cdot) = \sum_{i=0}^{\infty} d_i(x, t)\, v_i,$
where $p^t(x, \cdot)$ denotes the $y$-dependent function $p^t(x, y)$. It can be shown (see Appendix A) that the prefactors are again connected to the eigenpairs: $d_i(x, t) = \lambda_i^t\, v_i(x) / \rho(x)$. As we are still interested only in long lag times $t$ at which the non-dominant eigenvalues have already decayed, we can truncate the series,
$p^t(x, \cdot) \approx \sum_{i=0}^{d} \lambda_i^t\, \frac{v_i(x)}{\rho(x)}\, v_i.$
Finally, we use that $\xi$ fulfills criterion (CI) and that $\rho$ is the 0-th dominant eigenfunction of $\mathcal{P}^t$ and get
$p^t(x, \cdot) \approx \rho + \sum_{i=1}^{d} \lambda_i^t\, w_i(\xi(x))\, v_i.$
The right-hand side of this equation depends only on the reaction coordinate value ξ(x) and not the full state space coordinate x. This means that the left-hand side also can depend only on ξ. Thus,
in order for ξ to be a good reaction coordinate, the transition density function p^t(x, ·) must only depend on the r-dimensional ξ(x) and not on the full n-dimensional x. We thus get the following
equivalent criterion: $\xi$ is a good reaction coordinate if and only if
$p^t(x, \cdot) \approx \tilde{p}^t\big(\xi(x), \cdot\big)$ (CII)
for some function $\tilde{p}^t$, all $x$, and “intermediate” lag times $t$. Intermediate here means that $t$ must be larger than the equilibration time scale of the fast processes, but can be chosen much smaller than the equilibration time scale of the slow processes. In terms of the implied time scales (4), this reads $t_d > t > t_{d+1}$.
F. Connection to transition path theory
Unlike (CI), criterion (CII) now allows an interpretation in the context of Transition Path Theory (TPT). To be precise, we argue that the committor function, which is seen in TPT as the optimal
reaction coordinate,^30 fulfills criterion (CII).
In a system with two metastable sets $A$ and $B$, the forward committor function $q_A(x)$ is defined as the probability that the process $X_t$ first visits $A$ rather than $B$, given the starting point $X_0 = x$. For a starting point outside the metastable sets, and for “intermediate” lag times $t$ as required by (CII), the probability to find the system in one of the metastable sets after time $t$ is
essentially 1, as the process quickly leaves the transition region. Moreover, the system equilibrates quickly inside the metastable sets. Thus, the transition density essentially depends only on
whether it is more likely to find the evolved system in $A$ or in $B$,
$p^t(x, \cdot) \approx c_A^t(x)\, \rho_A + c_B^t(x)\, \rho_B,$
where $\rho_A \propto \mathbb{1}_A\, \rho$ and $\rho_B \propto \mathbb{1}_B\, \rho$ are the normalized restrictions of the stationary density to the metastable sets. Here, $\mathbb{1}_A$ denotes the indicator function over $A$, and $c_A^t(x), c_B^t(x)$ are the probabilities to find the evolved system in $A$ and $B$, respectively,
$c_A^t(x) = \Pr\big[X_t \in A \mid X_0 = x\big], \qquad c_B^t(x) = \Pr\big[X_t \in B \mid X_0 = x\big] \approx 1 - c_A^t(x).$
As we have chosen $t$ as intermediate, i.e., so short that it is unlikely to leave a metastable set within time $t$ once it has been reached, $c_A^t(x)$ is essentially equal to the committor function. Thus we have
$p^t(x, \cdot) \approx q_A(x)\, \rho_A + \big(1 - q_A(x)\big)\, \rho_B,$
where we see that the right-hand side now depends only on $q_A(x)$ and not on the full value $x$. With the function
$\tilde{p}^t(z, \cdot) := z\, \rho_A + (1 - z)\, \rho_B,$
the reaction coordinate $\xi(x) := q_A(x)$ thus fulfills criterion (CII). Our new criterion thus confirms the committor function as a good reaction coordinate in the sense of preserving the slow transition process between the metastable sets.
Note, however, that while the definition of committor functions depends on the existence (and the knowledge) of metastable states, criterion (CII) can also be applied in systems where the slowest
processes do not correspond to transitions between metastable states (such as systems with explicit time scale separation). Criterion (CI) does not even require a spectral gap at all, i.e., reaction
coordinates fulfilling (CI) will preserve the d slowest processes even if the subsequent processes live on similar time scales. Thus, our theory offers a much more general characterization of good
reaction coordinates that, however, agrees with the concept of committor functions in the special cases where the latter is applicable. What is more, the usage of committor functions as reaction
coordinates is susceptible to the same computational problems as transfer operator eigenfunctions that were detailed in Sec. II E.
The following example demonstrates that criterion (CII) can formally distinguish “intuitively good” from “intuitively bad” reaction coordinates:
Example 1.1.
In Fig. 2, we consider a diffusion process $X_t$ in the curved double well potential that was first analyzed in the context of reaction coordinates in Ref. … . The inverse temperature was chosen as $\beta = 0.5$. First, consider the one-dimensional reaction coordinate $\xi$ shown in the figure. The transition pathway (here taken as the minimum energy pathway) is parameterized by $\xi$, i.e., no two points on the transition pathway take the same value under $\xi$. Furthermore, the isolines (sets of constant value) of $\xi$ intersect the transition pathway perpendicularly. $\xi$ was identified in Ref. … as the ideal reaction coordinate and can also be considered “intuitively good” from the standpoint of transition pathways. Note, however, that $\xi$ is not equal to the committor function.
On the other hand, the reaction coordinate $\zeta$ is obviously bad, as it does not parameterize the transition pathway.
We can distinguish ξ and ζ, without any knowledge of the transition pathway, the metastable sets, or the potential, by considering only the transition density functions p^t(x, ·) along their
isolines. We observe that for different starting points along any isoline of ξ, the densities p^t(x, ·) for intermediate lag time t look very similar. That means that p^t(x, ·) effectively depends
only on ξ(x). The same property does not hold for the bad reaction coordinate ζ: here the densities p^t(x, ·) differ substantially for starting points along a single isoline. In conclusion, ξ
fulfills criterion (CII), whereas ζ does not, i.e., criterion (CII) can distinguish good from bad reaction coordinates (in this example).
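The density comparison of this example is easy to imitate numerically. The sketch below uses a plain, uncurved double well (so it only mimics the setup, not the actual potential of the figure) and an Euler-Maruyama discretization of the Smoluchowski dynamics (1); two starting points on the same isoline of the good reaction coordinate ξ(x, y) = x should produce near-identical endpoint statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_V(p):                      # V(x, y) = (x^2 - 1)^2 + y^2
    x, y = p
    return np.array([4.0 * x * (x * x - 1.0), 2.0 * y])

def simulate(x0, t, beta=0.5, dt=5e-3):
    """Euler-Maruyama for dX = -grad V dt + sqrt(2/beta) dW; returns the endpoint."""
    x = np.array(x0, dtype=float)
    for _ in range(int(t / dt)):
        x += -grad_V(x) * dt + np.sqrt(2.0 * dt / beta) * rng.standard_normal(2)
    return x

ends_a = np.array([simulate((0.0,  0.8), t=2.0) for _ in range(500)])
ends_b = np.array([simulate((0.0, -0.8), t=2.0) for _ in range(500)])

# Both starting points lie on the isoline x = 0; their endpoint distributions
# should be nearly identical, e.g. a similar left/right split between the wells:
print(np.mean(ends_a[:, 0] > 0), np.mean(ends_b[:, 0] > 0))
```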
The equivalent criteria (CI) and (CII) allow for a rigorous characterization of good reaction coordinates such that the long time scales of the full molecular system are inherited by the projection
of the dynamics onto the low-dimensional state space spanned by the reaction coordinates. At the same time, these criteria agree with but extend and refine the comprehension of good reaction
coordinates that is pervasive in transition path theory.
A. The transition manifold
From now on, we always assume t to be an “intermediate” lag time as required by criterion (CII). This criterion implies that two transition density functions p^t(x[1], ·) and p^t(x[2], ·) are close
to each other for two points x[1], x[2] of similar reaction coordinate value, even if x[1] and x[2] themselves are not close. We will now render this “neighborhood relation” of densities more precise
and exploit it in order to efficiently compute good reaction coordinates.
For each state space point $x$, the transition density $p^t(x, \cdot)$ is a function in the infinite-dimensional function space $L^1$, i.e., the space of absolutely integrable functions. However, the insight that $p^t(x, \cdot)$ effectively depends only on $\xi(x)$, i.e., an $r$-dimensional coordinate, implies that the set of all transition density functions,
$\mathbb{M} := \big\{\, p^t(x, \cdot) : x \in \mathbb{X} \,\big\},$
effectively forms an only $r$-dimensional manifold in this function space. In the common case of $r = 1$, $\mathbb{M}$ is effectively a curve in $L^1$. We call $\mathbb{M}$ the transition manifold of the system.
While there is a connection between transition path theory and transition manifolds as shown in Sec. II E, to the best of the authors’ knowledge, there is no formal equivalence between the transition
manifold and any existing definition of transition pathway.
Assume now that we are able to find any parametrization of $\mathbb{M}$, i.e., a smooth invertible function $\mathcal{E}: \mathbb{M} \to \mathbb{R}^r$. Then one can show that the reaction coordinate defined as
$\xi(x) := \mathcal{E}\big(p^t(x, \cdot)\big)$ (8)
fulfills the criterion (CI) [or equivalently (CII)]. This is the reaction coordinate we will ultimately compute numerically.
B. Embedding of the transition manifold
In order to find a parametrization of the transition manifold $M$, we employ the general-purpose Diffusion Maps manifold learning algorithm.^32,33 Explaining the algorithm in detail would go well
beyond the scope of this article, so we only coarsely sketch its usage: Let a sufficiently large collection of data points {z[1], …, z[M]} on or near a manifold in some vector space be given. The
algorithm then detects the dimension $r$ of this manifold and returns for each data point $z$ an $r$-dimensional vector $(\mathcal{E}_1(z), \dots, \mathcal{E}_r(z))^\top$ that represents a parametrization $\mathcal{E}$ of the manifold, evaluated
at z. Application of Diffusion Maps requires the choice of a certain kernel bandwidth parameter that essentially determines what distance should be considered “far away.” We assume from now on that
this parameter can be chosen reliably; an optimal strategy has been detailed in Ref. 34.
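For concreteness, a bare-bones version of the Diffusion Maps computation might look as follows (a sketch only: it uses the standard alpha = 1 normalization and dense linear algebra, and omits the bandwidth-selection strategy of Ref. 34):

```python
import numpy as np

def diffusion_maps(Z, eps, r=1):
    """Return r diffusion-map coordinates for the points in the (L, q) array Z."""
    D2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-D2 / eps)
    q = K.sum(axis=1)
    K = K / np.outer(q, q)                # alpha = 1: remove sampling-density bias
    P = K / K.sum(axis=1, keepdims=True)  # row-stochastic diffusion matrix
    w, V = np.linalg.eig(P)
    idx = np.argsort(-w.real)
    return V[:, idx[1:r + 1]].real        # skip the trivial constant eigenvector
```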
The Diffusion Maps algorithm, in principle, works in arbitrary metric spaces, as it requires only an appropriate notion of distance between data points. We will, however, not attempt to parameterize
the transition manifold directly in L^1, as the calculation of distances between L^1 functions is numerically costly. Instead, we will first embed the transition manifold $M$ into a Euclidean space
and use the standard Euclidean distance there.
Surprisingly, constructing such an embedding requires virtually no knowledge about $\mathbb{M}$. Let $\mathcal{F}: L^1 \to \mathbb{R}^q$ be an arbitrarily chosen map from the function space $L^1$ to the Euclidean space of dimension $q = 2r + 1$ (or greater), where $r$ is the dimension of the transition manifold. Then, slightly simplified, the famous Whitney embedding theorem^35,36 states that for any such $\mathcal{F}$, the probability of $\mathcal{F}(\mathbb{M})$ again being an $r$-dimensional manifold in $\mathbb{R}^{2r+1}$ is exactly one. For the purpose of this article, this means that we can effectively choose $\mathcal{F}$ randomly, if only its image dimension is large enough, and be sure that the manifold structure of $\mathbb{M}$ is preserved under $\mathcal{F}$. We can then compute a parametrization of the embedded manifold $\mathcal{F}(\mathbb{M})$ using the Diffusion Maps algorithm, which then corresponds to a parametrization of the original manifold $\mathbb{M}$. A sketch of the overall embedding procedure is shown in Fig. 3.
Specifically, we will work with the $2r + 1$ embedding functions
$\mathcal{F}_i\big(p^t(x, \cdot)\big) = \int_{\mathbb{X}} \eta_i(y)\, p^t(x, y)\, dy, \quad i = 1, \dots, 2r + 1,$ (9)
with linear observables of the form
$\eta_i(y) = \sum_{j=1}^{n} a_{ij}\, y_j,$ (10)
where the factors $a_{ij}$ are chosen randomly (e.g., uniformly drawn from the interval $[0, 1]$). Note, however, that we have great freedom in the choice of the functions $\eta_i$. Linear functions were chosen simply out of convenience.
We see immediately that, by this choice of the embedding, the embedded density (9) is the Koopman operator (3) applied to $\eta$, i.e., the expectation value of $\eta$ under the evolved dynamics,
$\mathcal{F}\big(p^t(x, \cdot)\big) = \mathbb{E}\big[\eta(X_t) \mid X_0 = x\big].$ (11)
The right-hand side can now be computed numerically by a simple Monte Carlo sampling procedure. Let $\Phi_j^t(x)$, $j = 1, \dots, M$, denote the endpoints of $M$ independent trajectories of length $t$, all starting in $x$. Thus, we create the data points in $\mathbb{R}^{2r+1}$ that we apply the Diffusion Maps algorithm to as follows:
The above algorithm requires the knowledge of two intrinsic parameters of the system: (1) the “intermediate” lag time t, in order to simulate trajectories of the right length, and (2) the expected
dimension r of the reaction coordinate, in order to choose the right number of embedding observables. For both quantities, rough estimates can be used in practice.
The weak requirement $t_{slow} > t > t_{fast}$ on the lag time $t$ permits a high tolerance with respect to numerical errors. Thus, rough Markov models can, for example, be used to estimate the time scales. Also, in real-life chemical systems, one often has a general idea about the nature of the fast and slow processes (e.g., whether one is interested in the re-configuration of individual dihedral angles or the forming of higher-level structures) that can guide the choice.
For the dimension r, an iterative procedure can be used: First start with a low estimate for r (e.g., r = 1) and perform Algorithm 1. If the chosen r was equal to or higher than the correct dimension
of the transition manifold, the Diffusion Maps algorithm should detect an r- or lower-dimensional manifold in the embedded data points. If it fails to do so, increase r by choosing additional
observables and restart the embedding procedure. This strategy generates only little overhead as the simulations $Φjt(xi)$ and the previously embedded points can be reused.
Algorithm 1.
1: Choose L points {x[1], …, x[L]} that cover the relevant parts of state space, i.e., the metastable sets and the transition regions.
2: Choose the factors a[ij] in (10), e.g., uniformly randomly in [0, 1].
3: For each x[i], simulate M independent trajectories of length t. Let the end points be denoted by $\Phi_j^t(x_i)$.
4: Compute the data points in $\mathbb{R}^{2r+1}$ as $z_i = \frac{1}{M} \sum_{j=1}^{M} \eta\big(\Phi_j^t(x_i)\big)$.
5: Apply the Diffusion Maps algorithm to {z[1], …, z[L]}.
Output: Approximation to the r-dimensional reaction coordinate (8), evaluated at the points {x[1], …, x[L]}, i.e., {ξ(x[1]), …, ξ(x[L])}.
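A minimal sketch of steps 3-4 of Algorithm 1 might look as follows; simulate_burst is a hypothetical stand-in for the user's MD integrator, and eta is the observable map from the sketch above.

import numpy as np

def embed_points(points, simulate_burst, eta, M, t):
    # Steps 3-4 of Algorithm 1: for each evaluation point x_i, run M
    # independent trajectories of length t and average the observables over
    # their endpoints, z_i = (1/M) * sum_j eta(Phi_j^t(x_i)).
    # simulate_burst(x, M, t) is assumed to return the M endpoints as an
    # (M, n_dim) array.
    z = []
    for x in points:
        endpoints = simulate_burst(x, M, t)    # shape (M, n_dim)
        z.append(eta(endpoints).mean(axis=0))  # shape (2r+1,)
    return np.asarray(z)  # feed these points to the Diffusion Maps algorithm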
Alternatively, assuming the rough Markov model mentioned above can correctly identify the number d of dominant time scales, this can be used as an upper bound for r. Even if d vastly overestimates r,
the final reaction coordinates ξ (after application of the Diffusion Maps algorithm) will have the correct low dimension r.
As we have seen, the transition manifold-based reaction coordinate (8) fulfills rigorous optimality criteria regarding the preservation of the long time scales and being of the smallest possible
dimension. Unfortunately, the above algorithm to compute it has two major practical shortcomings:
1. ξ can only be computed pointwise and has no closed analytic form. For every new evaluation point, many numerical MD simulations have to be started. Furthermore, the evaluation points have to be
chosen in regions relevant to the slow transition processes (i.e., in the transition regions and metastable sets), which is a non-trivial task, especially in high-dimensional systems.
2. The computation of ξ is based on multiple short, instead of one, long MD simulations. Although this can also be seen as an advantage, the way modern MD software works often favors the simulation
of single long trajectories. Furthermore, there is a vast archive of already pre-computed trajectories for many interesting metastable molecular systems. If this data could be used to compute ξ,
those systems could be coarse-grained with minimal effort.
In the following, we will thus describe a Galerkin discretization of the embedding function (9). Importantly, this discretization will be very similar to the discretization of the dominant transfer
operator eigenfunctions performed in MSM analysis, further emphasizing the close connection of the methods. Moreover, this will allow us to calculate our reaction coordinates from the same data
sources also used in MSM building and utilize a wide range of analogous discretization techniques.
A. Galerkin approximation of reaction coordinates
We first write the embedded density (11) directly as a function of the starting point x,
$\tilde{\xi}(x) := \mathcal{F}\, p^t(x,\cdot) = \mathbb{E}\big[\eta(X_t)\,\big|\,X_0 = x\big].$
We make the weak assumption that all the components of $\tilde{\xi}$ are square-integrable with respect to the stationary density, i.e., $\tilde{\xi}$ lies in the function space $L_\rho^2$ with inner product
$\langle f, g \rangle_\rho = \int_X f(x)\, g(x)\, \rho(x)\,\mathrm{d}x.$  (12)
The function $ξ̃$ can already be understood as a 2r + 1-dimensional reaction coordinate; that is, we could in theory accept a reaction coordinate with higher than optimal dimension in order to save us
the application of the Diffusion Maps algorithm. Thus, we will refer to $ξ̃$ as the “pre-reaction coordinate.”
We now discretize $ξ̃$ using a Galerkin approximation,^6 i.e., we seek the function $ξ̃N$ inside a finite-dimensional function space $VN$ that best approximates $ξ̃$. Classical choices of the ansatz
space $VN$ are, for example, the space of all polynomials over $X$ up to a certain degree, the space consisting of N characteristic functions over a finite partition of $X$, or some other finite
element space. The Galerkin approximation is performed independently on the 2r + 1 individual components of $ξ̃$. However, we will omit the subscripts in order to help readability and simply treat
$ξ̃$ and η as one-dimensional functions for the remainder of this section.
Let {φ[1], …, φ[N]} be a basis of $V_N$. Then the Galerkin approximation $\tilde{\xi}_N$ has the following closed form:
$\tilde{\xi}_N = \sum_{k=1}^{N} \big(S^{-1}\, T\, c\big)_k\, \varphi_k,$  (13)
with the Gram matrix
$S_{kl} = \langle \varphi_k, \varphi_l \rangle_\rho$
and the transition matrix
$T_{kl} = \big\langle \mathcal{P}^t(\varphi_k\, \rho),\ \varphi_l \big\rangle = \int_X \int_X \varphi_k(x)\, \varphi_l(y)\, p^t(x,y)\, \rho(x)\,\mathrm{d}y\,\mathrm{d}x,$
where $\mathcal{P}^t$ is again the transfer operator of the system, and the factors $c_l$ are the basis coefficients of the Galerkin projection of η, the randomly chosen observable from (10). The precise derivation of Eq. (13) is given in Appendix B. The quantities T, S, and c can now be computed numerically, and thus $\tilde{\xi}_N$ can be evaluated at any state space point x ∈ $X$.
The exact same matrices S and T are also commonly found at the heart of methods that aim to reconstruct long-term dynamics directly via the transfer operator eigenfunctions,^14,28 such as Markov state
models. The Galerkin approximation of $ξ̃$ is thus applicable whenever those methods are.
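Assuming the closed form reconstructed above, evaluating the discretized pre-reaction coordinate reduces to two matrix operations; this sketch (names ours) presumes S is invertible and that basis(X) returns the matrix of basis-function values at the rows of X.

import numpy as np

def evaluate_xi_tilde(x_eval, basis, S, T, c):
    # Evaluate xi_tilde_N(x) = sum_k (S^{-1} T c)_k * phi_k(x) at arbitrary
    # points; basis(x_eval) returns the (L, N) matrix of basis values.
    alpha = np.linalg.solve(S, T @ c)  # (N, 2r+1) coefficient matrix
    return basis(x_eval) @ alpha       # (L, 2r+1) embedded evaluation points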
B. Data-based computation of the transition matrix
The entries of the transition matrix T and Gram matrix S can now be approximated based on simulation data. Consider two sets of data points on $X$,
$X_M = \{x_1, \dots, x_M\}, \qquad Y_M = \{y_1, \dots, y_M\},$  (14)
where $X_M$ samples the stationary density ρ and $Y_M \subset X$ is the time-t evolution of $X_M$ under the dynamics. To be precise, $y_i = \Phi^t x_i$, with t being again the "intermediate" lag time. These data can, for example, be obtained from a single equilibrated numerical trajectory of step size τ (assuming that t is a multiple of τ),
$x_i = \Phi^{(i-1)\tau} x_0, \qquad y_i = \Phi^{(i-1)\tau + t} x_0, \qquad i = 1, \dots, M,$  (15)
or the concatenation of multiple trajectories that together sufficiently sample ρ. Alternatively, $XM$ could be the output of an enhanced sampling algorithm, such as Markov chain Monte Carlo methods,
^37 and $YM$ the endpoints of individual trajectories starting in $XM$.
As frequently used in the Markov state approach,^38 the inner product ⟨·,·⟩[ρ] can be approximated from ρ-distributed data via Monte Carlo quadrature. T and S can thus be approximated as
$T_{kl} \approx \frac{1}{M} \sum_{i=1}^{M} \varphi_k(x_i)\, \varphi_l(y_i), \qquad S_{kl} \approx \frac{1}{M} \sum_{i=1}^{M} \varphi_k(x_i)\, \varphi_l(x_i).$
Moreover, the factors c[l] become the solution of the linear system $S c = d$ with
$d_l \approx \frac{1}{M} \sum_{i=1}^{M} \eta(x_i)\, \varphi_l(x_i).$
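A possible NumPy realization of these estimators, under our reading of the formulas above, is the following; basis and eta are assumed to be vectorized callables as in the earlier sketches.

import numpy as np

def estimate_matrices(X, Y, basis, eta):
    # Monte Carlo estimators of the Gram matrix S, the transition matrix T,
    # and the observable coefficients c from paired samples (x_i, y_i).
    PhiX = basis(X)            # (M, N) basis values at the x_i
    PhiY = basis(Y)            # (M, N) basis values at the y_i
    M = X.shape[0]
    S = PhiX.T @ PhiX / M      # S_kl ~ <phi_k, phi_l>_rho
    T = PhiX.T @ PhiY / M      # T_kl ~ <phi_k, K^t phi_l>_rho
    d = PhiX.T @ eta(X) / M    # d_l  ~ <eta, phi_l>_rho (componentwise)
    c = np.linalg.solve(S, d)  # coefficients of the projected observable
    return S, T, c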
Subsequently, (13) can be evaluated at arbitrary state space points without significant additional costs. Choosing evaluation points x[i] that again cover the relevant parts of state space, for
example, a subsample of the data points $XM$, we can apply the Diffusion Maps algorithm to the embedded points ${ξ̃N(x1),…,ξ̃N(xL)}$ and again extract the final r-dimensional reaction coordinate.
Algorithm 2 shows an accordingly modified version of Algorithm 1.
Algorithm 2.
Input: Data sets $X_M, Y_M$ as in (14).
1: Choose a Galerkin basis {φ[1], …, φ[N]} that adequately approximates smooth functions over the relevant parts of state space.
2: Choose the factors a[ij] in (10), e.g., uniformly randomly in [0, 1].
3: Compute the matrices T, S and the vector c via $T_{kl} \approx \frac{1}{M} \sum_{i} \varphi_k(x_i)\, \varphi_l(y_i)$, $S_{kl} \approx \frac{1}{M} \sum_{i} \varphi_k(x_i)\, \varphi_l(x_i)$, and $S c = d$ with $d_l \approx \frac{1}{M} \sum_{i} \eta(x_i)\, \varphi_l(x_i)$.
4: Choose L evaluation points {x[1], …, x[L]} that cover the relevant parts of state space, i.e., the metastable sets and the transition regions.
5: Compute the data points in $\mathbb{R}^{2r+1}$ as $z_i = \tilde{\xi}_N(x_i)$ via (13).
6: Apply the Diffusion Maps algorithm to {z[1], …, z[L]}.
Output: Approximation to the r-dimensional reaction coordinate (8), evaluated at the points {x[1], …, x[L]}, i.e., {ξ(x[1]), …, ξ(x[L])}.
Another advantage of Algorithm 2 is that when adding a new evaluation point x[L+1], no new simulations have to be started. Only the Diffusion Map algorithm has to be re-applied to the now extended
embedded points {z[1], …, z[L+1]}.
C. Implementation: Voronoi-based Galerkin approximation
For the Markov State Model construction, there exists an extensive collection of elaborate Galerkin basis sets that have been successfully applied to real-world biomolecular systems, and all of them
can, in principle, be used to approximate the reaction coordinate ξ. Examples are hierarchical wavelet bases,^39 meshfree basis functions based on Shepard’s approach,^40,41 and specialized
problem-adapted basis sets, such as a tensor basis for peptide chains.^19 In this section, we detail a simple, yet practical algorithm that constructs a particular meshfree ansatz space directly from
the available simulation data. Similar basis functions have been explored in the context of MSMs in Ref. 42.
Let {A[1], …, A[N]} be sets that partition $X$, i.e., $\bigcup_i A_i = X$ and A[i] ∩ A[j] = ∅, i ≠ j. Choosing the indicator functions over the sets A[i],
$\varphi_i = \mathbf{1}_{A_i},$
as the basis of $V_N$, the entry T[kl] of the transition matrix is effectively just the relative number of transitions from set A[k] to set A[l] within the data sets $X_M, Y_M$. The Gram matrix is diagonal, with S[kk] being the relative number of data points in $X_M$ that lie in A[k]. This partition-based Galerkin approximation of the transfer operator is known as Ulam's method in the MSM context.
The evaluation of $\tilde{\xi}_N$ at a specific point x ∈ A[k] then becomes
$\tilde{\xi}_N(x) = \big(S^{-1}\, T\, c\big)_k.$
1. Choice of the partition sets
Choosing the partition sets naively, for example, as a regular box grid, invokes the infamous curse of dimensionality, as the number of boxes rises exponentially with the system’s dimension. We thus
propose a partition into grid-free Voronoi cells {A[1], …, A[N]} with center points adapted to the dynamical data $XM$. With this, we will also be able to avoid the explicit construction (and
storage) of the transition matrix.
Our objective is to approximate $ξ̃$ in the region of state space that is covered with the available data points $XM$. The question is then how the Voronoi centers $E={e1,…,eN}⊂X$ should be chosen in
order to achieve this. In the following, we demonstrate that two different criteria on the approximation quality of $ξ̃$ lead to two different algorithms for selecting the Voronoi centers.
a. Minimizing the L^2 error.
Since $\tilde{\xi} \in L_\rho^2$, we may ask to minimize the error
$\|\tilde{\xi} - \tilde{\xi}_N\|_\rho,$
where ∥·∥[ρ] is the norm induced by the inner product (12). In Appendix C, we show that under weak assumptions, this error is minimized by choosing as the Voronoi centers the output of the k-means
clustering algorithm^44 applied to the data $XM$ with k = N. k-means is highly scalable for both large amounts of clusters N and a large number of data points M and is readily available in many
software packages.
b. Minimizing uniform error.
Thinking of $\tilde{\xi}$ as an observable, it is natural to minimize the uniform observable error
$\sup_{x \in X} \big\|\tilde{\xi}(x) - \tilde{\xi}_N(x)\big\|.$
In Appendix C, we show that, again under weak assumptions, the minimum is achieved if the centers cover the region of $X$ where data are available evenly such that the Voronoi cells all have similar
diameters. This problem is related to Poisson disk or blue noise (sub)sampling in computer vision.^45 The following picking algorithm^41 computes an approximately equidistant subsample of $XM$.
In conclusion, minimizing the L^2 error of $ξ̃$ leads to k-means clustering as an algorithm for picking the Voronoi centers, while minimizing the uniform error of $ξ̃$ leads to the farthest point
picking Algorithm 3. In Sec. V, we compare both alternatives. In general, k-means will lead to denser Voronoi cells in metastable regions, while Algorithm 3 will lead to evenly sized Voronoi cells.
Algorithm 3.
Input: $X_M$, N
1: e[1] ← random point from $X_M$
2: for j = 2, …, N do
3: pick the point with the maximum distance from the previous points:
$e_j \leftarrow \underset{x \in X_M}{\arg\max}\ \min_{i = 1, \dots, j-1} \|x - e_i\|$
4: end for
Output: Voronoi centers E = {e[1], …, e[N]}
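The arg-max-min step above can be implemented incrementally so that each of the N picks costs only one pass over the data; the following sketch (our naming) is an equivalent O(MN) formulation.

import numpy as np

def picking_algorithm(X, N, rng=None):
    # Greedy farthest-point subsampling of the data X to obtain N
    # approximately equidistant Voronoi centers (Algorithm 3).
    rng = np.random.default_rng() if rng is None else rng
    centers = [X[rng.integers(len(X))]]
    # Squared distance of every data point to its nearest chosen center.
    d2 = ((X - centers[0]) ** 2).sum(axis=1)
    for _ in range(1, N):
        idx = int(np.argmax(d2))  # the point farthest from all current centers
        centers.append(X[idx])
        d2 = np.minimum(d2, ((X - X[idx]) ** 2).sum(axis=1))
    return np.asarray(centers)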
In order to compute T, S, and c, the data points from $XM$ and $YM$ have to be assigned to their respective partition set. In the case of Voronoi cells, this is easily done by a nearest point search
between $XM$ and E and $YM$ and E, respectively.
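A sketch of this nearest-center assignment and the resulting indicator basis, consistent with the estimators above, could read as follows; the dense distance matrix would be computed in chunks in practice.

import numpy as np

def voronoi_basis(centers):
    # Indicator basis over the Voronoi cells: basis(X)[i, k] = 1 iff x_i is
    # closest to center e_k. With this basis, S is diagonal (cell populations)
    # and T counts transitions between cells, as described above.
    def basis(X):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(d2, axis=1)  # nearest-center assignment
        B = np.zeros((len(X), len(centers)))
        B[np.arange(len(X)), labels] = 1.0
        return B
    return basis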
The criteria (CI) and (CII) and the concept of transition manifolds offer a new perspective on optimal reaction coordinates. Reaction coordinates that fulfill these criteria can in fact be computed
using the same data sources and state space discretization techniques as classical MSMs, which means that the entire machinery invented for building MSMs can be utilized for their computation.
A. Curved double well potential
As our first demonstration, we compute the reaction coordinate of the simple curved double-well potential from Example 1.1 using Algorithm 2. It will allow us to visualize the (embedded) transition
manifold and compare the computed reaction coordinate with the minimum energy pathway and the committor function.
In this low-dimensional example, the relaxation time scales associated with the slowest processes of the full system can be computed numerically; see Table I. They were computed by a sufficiently
fine approximation of the transfer operator, computation of its eigenvalues, and using formula (4).
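We assume formula (4) denotes the usual implied-time-scale relation $t_i = -t / \ln \lambda_i(t)$; under that assumption, the computation is a one-liner (pass only the nontrivial eigenvalues, since $\lambda_1 = 1$ gives an infinite time scale).

import numpy as np

def implied_timescales(eigvals, lag):
    # t_i = -lag / ln(lambda_i) for the nontrivial dominant eigenvalues.
    lam = np.abs(np.asarray(eigvals, dtype=float))
    return -lag / np.log(np.clip(lam, 1e-300, 1.0 - 1e-12))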
TABLE I. Dominant time scales.

                          t[1]     t[2]     t[3]
Full system              5.9332   0.9021   0.6031
ξ, k-means alg.          5.8899   0.8615   0.5625
ξ, picking alg.          5.9034   0.8789   0.5838
ζ(x[1], x[2]) = x[1]     5.7130   0.7964   0.5380
As expected, the system is time scale-separated, with the single slow time scale representing the mean expected waiting time^46 for a single transition between the two wells. The lag time t = 2 falls
in between the slow and fast time scales, so we use it as the “intermediate” lag time for Algorithm 2. Moreover, we assume the dimension r = 1 of the transition manifold to be known.
As the source of dynamical data, we utilize a single well-equilibrated trajectory
$x_0,\ \Phi^\tau x_0,\ \Phi^{2\tau} x_0,\ \dots$
of the dynamics with step size τ = 10^−2 and overall 2 · 10^7 steps. This trajectory is used to construct the data sets $X_M, Y_M$ via formula (15).
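Constructing $X_M$ and $Y_M$ from such a trajectory via (15) amounts to a pair of shifted slices; a minimal sketch:

def datasets_from_trajectory(traj, lag_steps):
    # Formula (15): x_i runs along the trajectory and y_i is the state
    # lag_steps later, so that y_i = Phi^t x_i with t = lag_steps * tau.
    return traj[:-lag_steps], traj[lag_steps:]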
We partition the interesting region of $R2$ into 1000 Voronoi cells. The characteristic functions over the cells then form the Galerkin basis for Algorithm 2, as detailed in Sec. IV C. The centers of
the cells can be chosen using either the k-means algorithm or the picking algorithm (Algorithm 3); we compare both methods in the following. As the evaluation points {x[1], …, x[L]} that are required
for Algorithm 2, we simply re-use the 1000 Voronoi center points, as they already cover the interesting state space regions.
1. Results
Figure 4(a) shows the computed Voronoi center points. While the points based on the picking algorithm cover $XM$ evenly (by construction), the k-means-based center points appear to emphasize the
metastable regions and slightly under-sample the transition regions.
Figure 4(b) shows the approximation to $ξ̃$ evaluated at {x[1], …, x[L]}, computed via (13). These are the 2r + 1-dimensional data points z[i] in Algorithm 2. The points quite obviously concentrate
around a one-dimensional manifold. This is the embedding of the transition manifold $M$ into $R3$ via (9).
The Diffusion Maps algorithm applied to the points indeed finds the correct dimension r = 1 of the embedded manifold and parameterizes it. The coloring in Fig. 4(b) indicates the value of the
one-dimensional parametrization at the respective embedded point. This is also the value of the final reaction coordinate at the respective evaluation point, i.e., ξ(x[i]). Assigning this value to
the whole Voronoi cell that x[i] belongs to yields the final reaction coordinate ξ that is defined in all $R2$, shown as the coloring in Fig. 4(c).
ξ clearly parameterizes the minimum energy pathway, with a smooth gradient in the transition region. Moreover, ξ qualitatively resembles the system’s committor function that is shown in Fig. 4(d).
2. Time scale analysis
To quantitatively verify the quality of the computed reaction coordinate, we compare the time scales of the full process X[t] and the projected process ξ(X[t]). Note that this is equivalent to
comparing the dominant eigenvalues (5), i.e., a necessary condition for the criterion (CI). In fact, the time scales of the projected process were computed by first approximating the eigenvalues of
the projected transfer operator [using the projected trajectory $ξ(x0),ξ(Φτx0),ξ(Φ2τx0),…$] and then again using formula (4).
Table I shows that our Galerkin-approximated reaction coordinate ξ approximates the dominant time scale t[1] of the full system very well, both for Voronoi centers chosen by the k-means and the
picking algorithm. In fact, even the non-dominant time scales t[2], t[3], … are reproduced quite well, even though our theory only holds for the dominant time scales. Compared to the naively chosen
reaction coordinate, ζ(x[1], x[2]) = x[1], our approximation error is noticeably lower, although ζ still preserves the time scales surprisingly well.
B. Alanine dipeptide
We demonstrate that with Algorithm 2, one can successfully use longtime simulation data to identify quantitatively good reaction coordinates in realistic molecular systems, that the resulting
reaction coordinates are interpretable chemically, and that the reaction coordinates can be used to quantitatively restore the information about the long-time transition processes (in the form of the
transfer operator eigenfunctions).
For this, we consider a single alanine dipeptide molecule in aqueous solution at temperature 400 K. The molecule consists of 22 atoms (including hydrogen atoms); thus the full Cartesian state space
$X$ is 66-dimensional. We chose to analyze this rather small example system as it still possesses a clearly defined time scale separation that bigger systems often lack. Furthermore, the system
possesses a chemically intuitive reaction coordinate that will serve as a benchmark: usually, two backbone dihedral angles φ, ψ are considered responsible for the long-term kinetics of alanine
dipeptide, with four configurations of these angles forming metastable states (see Fig. 5). We emphasize, however, that this information is used only for illustration and comparison purposes and that
we compute our reaction coordinate ξ based on the full 66-dimensional data.
The relaxation time t = 20 ps and the embedding dimension r = 2 are assumed to be known. We will see later that t indeed falls into a timescale gap. For the dynamical data, a single 40 ns long
trajectory of the system was generated using the MD software Gromacs. The trajectory was stripped of the solvent molecules, downsampled to step width τ = 0.02 ps, and its center of mass was fixed at the center of the simulation box, yielding the 66-dimensional trajectory
$x_0,\ \Phi^\tau x_0,\ \dots,\ \Phi^{(M-1)\tau} x_0$
with M = 2 · 10^6. Using (15), we generated the data sets $X_M, Y_M$.
We computed 2000 Voronoi centers in the region covered by the trajectory using both the k-means and the picking algorithm. The projection of these points onto the (φ, ψ)-plane can be seen in Fig. 6
(b). While this projection offers only an incomplete insight into the distribution of the full 66-dimensional center points, it indicates that the k-means algorithm again emphasizes the metastable
sets, whereas Algorithm 3 covers the total range of values more evenly. Again, for the evaluation points {x[1], …, x[L]}, we re-purposed the 2000 Voronoi center points.
For the embedding functions $η:R66→R5$, linear functions with coefficients drawn uniformly randomly from [0, 1] were chosen just as in Sec. V A.
1. Results
Figure 6 visualizes the computed reaction coordinates with Voronoi center points chosen by the k-means algorithm (left) and picking algorithm (right).
As the dimension of the transition manifold was assumed to be r = 2, the dimension of the embedding space and thus the values $ξ̃(xi), i=1,…,L$, is 2r + 1 = 5, which makes it impossible to directly
visualize the embedded transition manifold. However, plotting just the first three of the five components still offers a good insight into the structure of the embedded transition manifold; see Fig.
6(a). Unlike in the first example, the two-dimensional manifold structure in the embedded points is not obviously apparent. Instead, the points $ξ̃(xi)$ appear to be mainly concentrated around four
clusters that form two connected pairs. The Diffusion Maps algorithm still recognizes the point cloud as parametrizable by a two-dimensional coordinate and computes the parametrization, i.e., our
final reaction coordinate ξ at the evaluation points. Figures 6(a) and 6(b) show in color the two components of ξ at the embedded evaluation points and at the (φ, ψ)-projection of the evaluation
points, respectively. The latter confirms that the observed four clusters correspond to the four metastable states, and the connections between the pairs of clusters correspond to points that are
located along the transition pathways. It also explains why there is seemingly no connection between the two pairs of clusters: the transition pathway connecting clusters A and C is too sparsely
populated by evaluation points—especially in the k-means case—in order to show the connection. Overall, we see a clear correlation between the computed reaction coordinate ξ and the reference
reaction coordinate (φ, ψ).
2. Time scale analysis
We again compute the implied time scales of the reduced process ξ(X[t]). To yield the highest accuracy possible for the given data set, we utilize the PyEMMA software package^47 with its built-in
methods to discretize the transfer operator, estimate its eigenvalues, and compute the time scales.
Computing the time scales of the full 66-dimensional process with the necessary accuracy is not possible, so we cannot conduct a rigorous error analysis for this system. Instead, we utilize the
variational principle of conformation dynamics^48 which states that the time scales of the full process are always underestimated by those of any projection of the process. Thus, larger dominant time
scales of the projected process in general correspond to a better reaction coordinate. However, due to the possibility of systematic errors in approximating the projected time scales (discretization
of the transfer operator, finite amount of dynamical data), this variational principle might be violated. Thus, we additionally offer a comparison to the time scales of a manually chosen
two-dimensional reaction coordinate that can generally be considered “good,” namely, the backbone dihedrals (φ, ψ). Still, we emphasize that these time scales do not represent the “ground truth.” The
coordinate (φ, ψ) is also not necessarily optimal in the sense of the variational principle, and thus again gives only an approximation of the full system’s true dominant time scales.
Using these two error estimators, we compare our reaction coordinates ξ for both the k-means and the picking algorithm to a two-dimensional TICA (time-lagged independent component analysis)
projection, a dimensionality reduction method that is popular in MD analysis.^14 TICA finds the directions in the data set with maximal global autocorrelation for a specified lag time and thus always yields linear reaction coordinates. Here, the lag time τ = 120 ps was chosen as it maximizes the cumulative kinetic variance (95.5%).^49
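For orientation, a minimal symmetrized TICA estimator consistent with this description might look as follows; this is a bare sketch and not the production implementation used here, and it assumes a full-rank instantaneous covariance.

import numpy as np

def tica_components(X, lag, k=2):
    # Time-lagged pairs from a single trajectory, mean-free coordinates.
    X0, Xt = X[:-lag], X[lag:]
    X0 = X0 - X0.mean(axis=0)
    Xt = Xt - Xt.mean(axis=0)
    C0 = X0.T @ X0 / len(X0)                      # instantaneous covariance
    Ct = (X0.T @ Xt + Xt.T @ X0) / (2 * len(X0))  # symmetrized lagged covariance
    # Solve Ct w = lambda C0 w by whitening with C0^(-1/2).
    evals0, evecs0 = np.linalg.eigh(C0)
    W = evecs0 / np.sqrt(evals0)
    lam, V = np.linalg.eigh(W.T @ Ct @ W)
    order = np.argsort(lam)[::-1]
    return lam[order[:k]], (W @ V)[:, order[:k]]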
The three (nontrivial) dominant time scales and their deviation from the benchmark (φ, ψ)-projection can be seen in Table II. The remaining time scales t[i], i ≥ 4 are significantly smaller (<5 ps)
and are considered non-dominant and thus irrelevant.
TABLE II. Dominant time scales (ps).

                      t[1]      t[2]     t[3]
ξ, k-means alg.      194.58    62.50    41.80
ξ, picking alg.      194.41    62.25    41.63
TICA                 191.78    61.27    29.84
(φ, ψ)               194.71    62.93    41.27
Judging by both the variational principle and the comparison to the benchmark projection, both of our new reaction coordinates provide a measurably better approximation of the dominant time scales
than the TICA reaction coordinate, though the latter remains competitive.
3. Eigenfunction reconstruction
As the reaction coordinate ξ was constructed to fulfill criterion (CI), it should be possible to reconstruct the full system’s dominant transfer operator eigenfunctions $vi$, i = 1, 2, 3, which are
functions over the 66-dimensional state space $X$, from the eigenfunctions $wi$ of the projected transfer operator, i.e., functions over $R2$. As the reaction coordinates computed with the k-means
and the picking algorithm variant of Algorithm 2 are qualitatively equal, we limit the investigation to the k-means reaction coordinate.
Even though the state space is 66-dimensional, the eigenfunctions of the full transfer operator can still be approximated with reasonable accuracy by a Galerkin method if an appropriate mesh-free
basis is used. Luckily, we have already constructed such a basis, namely, the Voronoi basis used for computing the reaction coordinates. Thus, we are able to re-use exactly the same transition matrix
T and Gram matrix S assembled in Algorithm 2. Computing the Galerkin approximation of the eigenfunctions $vi$ then corresponds to solving a 2000 × 2000 eigenvector problem.
On the other hand, as ξ is only two-dimensional, computing the eigenfunctions $w_i$ of the projected transfer operator $\mathcal{T}_\xi^t$ is possible by a fine grid-based Galerkin method. To construct the corresponding transition matrix, the projected trajectory
$\xi(x_0),\ \xi(\Phi^\tau x_0),\ \xi(\Phi^{2\tau} x_0),\ \dots$
is used. The functions $\hat{v}_i(\cdot) := w_i\big(\xi(\cdot)\big)$ then should reconstruct the $v_i$.
Of course, being functions over the 66-dimensional state space, the $v_i$ and $\hat{v}_i$ are difficult to visualize. We thus again project them onto the (φ, ψ)-plane using a simple interpolation procedure.
The result can be seen in Fig. 7. We observe excellent qualitative agreement between the full and the reconstructed eigenfunctions, or at least their (φ, ψ)-projections.
This last section has again shown the close relationship between the transfer operator eigenfunctions and the newly defined reaction coordinates, both in their expressive power and the data required
to compute them. In this concrete example, even the computational effort is identical, as both the computation of the full transfer operator eigenfunctions and the application of the Diffusion Maps
algorithm to the embedded evaluation points require the solution of a 2000 × 2000 eigenproblem. Therefore, our proposed numerical method is not necessarily computationally advantageous over directly
computing the eigenfunctions.
However, we want to stress again that the newly defined transition manifold-based reaction coordinates are advantageous on a conceptual level. First, they obey a rigorous optimality criterion and
thus are guaranteed to preserve the system’s slowest time scales. Second, they are interpretable in the context of transition pathways, as detailed in Sec. II F. For the alanine dipeptide, the
computed two-dimensional reaction coordinate ξ is directly interpretable as a transformation of the two principal dihedral angles, whereas using the three dominant eigenfunctions as reaction
coordinates would yield a three-dimensional reaction coordinate that is redundant in describing the molecule’s internal slow dynamics.
C. Conformational analysis of NTL9
Finally, we demonstrate the applicability of our method to a realistic high-dimensional system. For this, we analyze a 1.11 ms long molecular trajectory of the fast-folding protein NTL9, generated on
the Anton supercomputer.^50 Instead of Cartesian coordinates, we use the amino acid chain’s contact map, i.e., the matrix containing the pairwise distances of the residues, as coordinates for the
further analysis. This eliminates the need to remove the global translational and rotational motion from the trajectory. As the protein consists of 40 residues, this results in a 1600-dimensional
state space (although it could be reduced due to symmetry of the contact map matrix).
As we are interested in the forming of secondary structures such as α helices and β sheets, we choose a lag time 1-2 orders of magnitude faster than those processes, τ = 10 ns. To generate the data
set $XM$, M = 1.11 · 10^6 frames were uniformly subsampled from the trajectory. $YM$ was generated the same way, only with a lag time of τ. From $XM$, we drew L = 5550 Voronoi center points x[i]
using the picking algorithm.
For the expected transition manifold dimension r, and the corresponding number of embedding functions 2r + 1, we used a simplified version of the iterative procedure proposed at the end of Sec. III B
: Start with a low value (r = 1 in this example) and see if useful structure can be identified in the embedded transition manifold. If not, increase r and repeat the embedding procedure.
1. Results
Quite surprisingly, the transition manifold in this case already reveals its structure under an embedding into $R3$. In the embedded Voronoi center points $ξ̃(xi)$, four clusters are clearly visible
[Fig. 8(a)]. The clusters are robust under the choice of the embedding functions.
For simplicity, i.e., in order to avoid the parameter tuning of an automated clustering algorithm, we assigned the points to the clusters manually. Their average contact maps and secondary structure
are shown in Figs. 8(b) and 8(c).
Interpreting the four clusters as conformations, our results are to a large degree consistent with those of Mardt et al.,^51 who performed analysis on the same dataset using deep learning methods.
Our conformations “Unfolded,” “Folded 1,” and “Folded 2” correspond very well to the main conformations identified by their algorithm. Note that “Unfolded” is not a conformation in the chemical
sense, but rather a loose collection of various unfolded configurations. The populations of the conformations [percentages in Fig. 8(c)] are also comparable to those in Ref. 51. Our slightly lower
values can be explained by the difference in how the populations are calculated. However, our conformation “Folded 3” does not appear in their analysis. While its population is quite low, its
structure subtly yet distinctively differs from the other conformations, so we do not consider its existence a statistical artifact. Furthermore, we were not able to find the finer sub-structures of
the “Unfolded” conformation that were identified in Ref. 51.
Let ξ(x[i]) denote the first diffusion maps coordinate on the embedded points, which indicates the direction of largest variance. We see a strong correlation between ξ and the mean inter-residuum
distance, i.e., the average of all entries of the contact map matrix (Fig. 9). Thus, ξ describes the “degree of foldedness” of the protein, which can be considered a reasonable one-dimensional
reaction coordinate of this system. However, unlike in the dialanine example, the second and higher diffusion map coordinates here did not correspond to an easily interpretable physical property that resolves the transitions between the identified conformations more finely; instead, they seemed to consist only of higher modes of the first diffusion map coordinate.
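The observable used in this comparison is simple to state in code; the array shape below is our assumption for how the contact-map trajectory is stored.

import numpy as np

def mean_interresidue_distance(contact_maps):
    # contact_maps: (n_frames, 40, 40) pairwise residue-distance matrices;
    # returns the per-frame average over all entries.
    return contact_maps.mean(axis=(1, 2))

# e.g., np.corrcoef(xi_first_coord, mean_interresidue_distance(maps))[0, 1]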
Although the results are already very encouraging and show the potential usefulness of the method for very high-dimensional systems, the setup can be refined in a number of ways. Most importantly,
instead of the simple Voronoi cell-based Galerkin method, specialized ansatz spaces such as meshfree basis functions with global support might be able to better approximate the reaction coordinate,
in particular, in the undersampled transition regions. We are planning to explore these and other refinements of the method as well as its application to further high-dimensional molecular systems in
an upcoming study.
In this paper, we reviewed a novel framework for the characterization and computation of optimal reaction coordinates, originally introduced in Ref. 16, and presented efficient algorithms for the
computational identification of such reaction coordinates that allow for direct application to real-world molecular systems. Moreover, we found that the new framework agrees with the TPT
characterization of good reaction coordinates in classical metastable systems, but offers more rigorous criteria that are applicable to much broader classes of multiscale systems with and without
time scale gap.
In particular, we introduced a discretization approach to the data-driven computation of reaction coordinates that fulfill these rigorous criteria. This approach is usable whenever a transition
matrix between the discretization elements can be computed from available simulation data, e.g., if the data represent a long (equilibrated) trajectory so that the entire machinery invented for
building MSMs can now be utilized for the computation of reliable reaction coordinates with provable approximation quality.
As a demonstration, we provided two algorithms to construct a meshfree basis of Voronoi ansatz functions directly from the data. Both algorithms are highly scalable and readily available, making this
method straightforward to apply for practitioners who have existing simulation data on their hard drives. We showed that in molecular systems of medium size, the resulting approximation error in the
dominant time scales is competitive with state-of-the-art dimension reduction techniques. This demonstrates that the reaction coordinates we compute here can be used to build efficient coarse-grained models. Of course, the computed reaction coordinates themselves are also of independent value. In the case of the alanine dipeptide, we showed that our computed reaction coordinates and the dihedral
angles, which are typically used as reaction coordinates for this system, produce a similar portrait of the system when viewed in this reduced space.
As a next step, we plan to apply our method to higher-dimensional systems with a priori unknown reaction coordinates using specialized Galerkin ansatz spaces borrowed from Markov state model theory.
For these systems, however, the requirement to have simulation data that sample the stationary density is of course somewhat strict. We thus also plan to work on relaxing this requirement to a
“local” version, i.e., we will work with samples that are equilibrated only in some smaller region of the state space, in order to compute reaction coordinates in that region.
This research has been funded by Deutsche Forschungsgemeinschaft (DFG) through Grant No. CRC 1114 “Scaling Cascades in Complex Systems.”
We derive the exact form of the coefficients d[i](x, t) in the decomposition of the transition density function p^t(x, ·),
$p^t(x,\cdot) = \sum_{i=1}^{\infty} d_i(x,t)\, v_i.$
Recall that the stochastic process was assumed to be reversible, which formally equates to the transition density function and the stationary density ρ fulfilling the detailed balance condition
$p^t(x,y)\, \rho(x) = p^t(y,x)\, \rho(y) \quad \text{for all } x, y \in X.$
With that, it is easy to see that the transfer operator $Pt$, defined in (2), is self-adjoint with respect to the weighted inner product
$\langle f, g \rangle_{\rho^{-1}} := \int_X f(x)\, g(x)\, \frac{1}{\rho(x)}\,\mathrm{d}x\,$:
$\big\langle \mathcal{P}^t f, g \big\rangle_{\rho^{-1}} = \int_X \mathcal{P}^t f(x)\, g(x)\, \frac{1}{\rho(x)}\,\mathrm{d}x = \int_X \Big( \int_X f(y)\, p^t(y,x)\,\mathrm{d}y \Big)\, g(x)\, \frac{1}{\rho(x)}\,\mathrm{d}x = \int_X f(y) \int_X g(x)\, p^t(y,x)\, \frac{1}{\rho(x)}\,\mathrm{d}x\,\mathrm{d}y \overset{(*)}{=} \int_X f(y) \int_X g(x)\, p^t(x,y)\,\mathrm{d}x\, \frac{1}{\rho(y)}\,\mathrm{d}y = \big\langle f, \mathcal{P}^t g \big\rangle_{\rho^{-1}},$
where in (*) the detailed balance condition was used. Thus, the eigenfunctions $vi$ of $Pt$ form an orthogonal basis of the associated inner product space.
We assume from now on that the function p^t(x, ·) lies in this space (or can be approximated in it with sufficient accuracy). Then we have
$p^t(x,y) = \sum_{i=1}^{\infty} \big\langle p^t(x,\cdot),\ v_i \big\rangle_{\rho^{-1}}\, v_i(y).$
Now, p^t(x, ·) can be seen as the time-t evolution of the Dirac density δ[x] under the dynamics; thus,
$d_i(x,t) = \big\langle \mathcal{P}^t \delta_x,\ v_i \big\rangle_{\rho^{-1}}.$
Using the self-adjointness of $\mathcal{P}^t$, we get
$d_i(x,t) = \big\langle \delta_x,\ \mathcal{P}^t v_i \big\rangle_{\rho^{-1}}.$
As $v_i$ is an eigenfunction of $\mathcal{P}^t$ to eigenvalue $\lambda_i^t$, this is
$d_i(x,t) = \lambda_i^t\, \big\langle \delta_x,\ v_i \big\rangle_{\rho^{-1}}.$
Finally, taking the inner product with δ[x] is equivalent to a point evaluation at x,
$\langle \delta_x, v_i \rangle_{\rho^{-1}} = \int_X \delta_x(y)\, v_i(y)\, \frac{1}{\rho(y)}\,\mathrm{d}y = \frac{v_i(x)}{\rho(x)}.$
The overall decomposition thus reads
$p^t(x,y) = \sum_{i=1}^{\infty} \lambda_i^t\, \frac{v_i(x)}{\rho(x)}\, v_i(y).$
Let $V_N$ be a finite-dimensional function space spanned by the basis {φ[1], …, φ[N]}. The Galerkin projection of $\tilde{\xi}$ onto $V_N$ with respect to the inner product ⟨·,·⟩[ρ] is defined as
$\tilde{\xi}_N = \sum_{k=1}^{N} \big(S^{-1} b\big)_k\, \varphi_k, \qquad b_j = \langle \tilde{\xi}, \varphi_j \rangle_\rho,$  (B1)
with the nonnegative, symmetric Gram matrix
$S_{kl} = \langle \varphi_k, \varphi_l \rangle_\rho.$
The Galerkin projection of the observable η is analogously defined, $\eta_N = \sum_{l=1}^{N} c_l\, \varphi_l$, with the factors $c = S^{-1} d$, $d_l = \langle \eta, \varphi_l \rangle_\rho$.
We assume that the Galerkin ansatz space is suitable to approximate η, i.e., ∥η − η[N]∥[ρ] is small, where ∥·∥[ρ] is the norm induced by ⟨·,·⟩[ρ].
Recall that $\tilde{\xi}$ can also be written as the Koopman operator applied to η: $\tilde{\xi}(x) = \mathcal{K}^t \eta(x)$. With this, the scalar product in (B1) can be estimated as follows:
$\langle \tilde{\xi}, \varphi_j \rangle_\rho = \langle \mathcal{K}^t \eta, \varphi_j \rangle_\rho \overset{(*)}{\approx} \langle \mathcal{K}^t \eta_N, \varphi_j \rangle_\rho = \sum_{l=1}^{N} c_l\, \langle \mathcal{K}^t \varphi_l, \varphi_j \rangle_\rho.$
For (*), it was used that $\|\mathcal{K}^t\|_\rho = 1$, i.e., $\mathcal{K}^t$ does not amplify the approximation error of η[N]. Define the N × N transition matrix by
$T_{jl} = \langle \mathcal{K}^t \varphi_l, \varphi_j \rangle_\rho.$
Then the Galerkin approximation (B1) becomes
$\tilde{\xi}_N = \sum_{k=1}^{N} \big(S^{-1}\, T\, c\big)_k\, \varphi_k.$
We show that different choices of the error measure for approximating $ξ̃$ by its Voronoi Galerkin projection $ξ̃N$ lead to different optimal strategies in choosing the Galerkin center points.
1. Minimizing the L^2 error
Assume that by choosing the Voronoi center points {e[1], …, e[N]} that define $\tilde{\xi}_N$, we want to minimize the error
$\|\tilde{\xi} - \tilde{\xi}_N\|_\rho.$  (C1)
The difficulty is that neither $\tilde{\xi}$ nor $\tilde{\xi}_N$ is known in advance. We thus construct a Monte Carlo estimator of $\|\tilde{\xi} - \tilde{\xi}_N\|_\rho$ based on the sampled data $X_M = \{x_1, \dots, x_M\}$. Here
$\bar{\xi}_{A_k} := \frac{\langle \mathbf{1}_{A_k}, \tilde{\xi} \rangle_\rho}{\langle \mathbf{1}_{A_k}, \mathbf{1} \rangle_\rho}$  (C2)
is the mean of $\tilde{\xi}$ in cell A[k]. First, since A[1], …, A[N] partition $X$,
$\|\tilde{\xi} - \tilde{\xi}_N\|_\rho^2 = \sum_{k=1}^{N} \int_{A_k} \big\|\tilde{\xi}(x) - \tilde{\xi}_N(x)\big\|^2\, \rho(x)\,\mathrm{d}x.$
The integral can be approximated by a Monte Carlo sum over the M ρ-distributed samples x[i],
$\int_{A_k} \big\|\tilde{\xi}(x) - \tilde{\xi}_N(x)\big\|^2\, \rho(x)\,\mathrm{d}x \approx \frac{1}{M} \sum_{x_i \in A_k} \big\|\tilde{\xi}(x_i) - \tilde{\xi}_N(x_i)\big\|^2,$
where ∥ · ∥ is the Euclidean norm in $\mathbb{R}^{2r+1}$. Finally, since $\tilde{\xi}_N = \sum_k \frac{\langle \mathbf{1}_{A_k}, \tilde{\xi} \rangle_\rho}{\langle \mathbf{1}_{A_k}, \mathbf{1} \rangle_\rho}\, \mathbf{1}_{A_k}$, we may approximate $\tilde{\xi}_N(x)$ for x ∈ A[k] with another Monte Carlo sum,
$\tilde{\xi}_N(x) \approx \frac{1}{|\{x_i \in A_k\}|} \sum_{x_i \in A_k} \tilde{\xi}(x_i) = \bar{\xi}_{A_k}.$
Combining everything gives
$\|\tilde{\xi} - \tilde{\xi}_N\|_\rho^2 \approx \frac{1}{M} \sum_{k=1}^{N} \sum_{x_i \in A_k} \big\|\tilde{\xi}(x_i) - \bar{\xi}_{A_k}\big\|_{\mathbb{R}^{2r+1}}^2 =: S_{\tilde{\xi}}(A_1, \dots, A_N).$
$S_{\tilde{\xi}}(A_1, \dots, A_N)$ can be recognized as the objective function of k-means clustering in the image space of the pre-reaction coordinate $\tilde{\xi}$. To minimize this objective function directly, one would have to know $\tilde{\xi}$. If we, however, additionally assume that $\tilde{\xi}$ is Lipschitz continuous with Lipschitz constant L, then
$S_{\tilde{\xi}}(A_1, \dots, A_N) \leq \frac{L^2}{M} \sum_{k=1}^{N} \sum_{x_i \in A_k} \|x_i - e_k\|^2,$
where e[k] is such that $\tilde{\xi}(e_k) = \bar{\xi}_{A_k}$ and ∥ · ∥ now is the Euclidean norm in $\mathbb{R}^n$. Minimizing this upper bound is now achieved by k-means clustering the data set $X_M$ in the original state space.
2. Minimizing the uniform error
Assume that we now want to minimize the uniform error
$\|\tilde{\xi} - \tilde{\xi}_N\|_\infty = \sup_{x \in X} \big\|\tilde{\xi}(x) - \tilde{\xi}_N(x)\big\|.$
Assume again that $\tilde{\xi}$ is Lipschitz continuous with Lipschitz constant L. Evidently, we have
$\tilde{\xi}_N(x) = \bar{\xi}_{A_k} \quad \text{for } x \in A_k,$
with $\bar{\xi}_{A_k}$ as defined in (C2). Let now e[k] ∈ A[k] be such that $\tilde{\xi}(e_k) = \bar{\xi}_{A_k}$ (such an e[k] exists by continuity of $\tilde{\xi}$). Then, with $\|\cdot\|_{\infty, A_k}$ denoting the uniform norm in A[k],
$\|\tilde{\xi} - \tilde{\xi}_N\|_{\infty, A_k} = \sup_{x \in A_k} \big\|\tilde{\xi}(x) - \tilde{\xi}(e_k)\big\| \leq L \sup_{x \in A_k} \|x - e_k\| \leq L\, \mathrm{diam}(A_k),$
where diam(A[k]) is the diameter of the Voronoi cell A[k]. Since A[1], …, A[N] partition $X$, we have
$\|\tilde{\xi} - \tilde{\xi}_N\|_\infty \leq L\, \max_{k} \mathrm{diam}(A_k).$
Minimizing this upper bound then means looking for Voronoi centers such that the diameter of the largest Voronoi cell is minimized. Since the number of Voronoi cells and the volume of the set $XM$ to
be covered are fixed, the minimum is achieved if the centers cover $XM$ evenly such that the Voronoi cells all have similar diameters. Therefore, we may alternatively maximize the diameter of the
smallest Voronoi cell, which is bounded from below by the minimal internal point distance,
$\mathrm{diam}(A_i) \geq \frac{1}{2} \min_{j \neq i} \|e_i - e_j\|.$
The inequality holds because min ∥e[i] − e[j]∥ is twice the distance from e[i] to that face of A[i] which is closest to e[i], while the diameter of A[i] is by definition larger. Maximizing the lower
bound then leads to the objective function of maximal minimal internal point distance
$E = \underset{\{e_1, \dots, e_N\} \subset X_M}{\arg\max}\ \min_{\substack{i, j = 1, \dots, N \\ i \neq j}} \|e_i - e_j\|.$
mp_arc 99-422
99-422 Fernando J. Sanchez-Salas
Horseshoes with infinitely many branches and a characterization of Sinai-Ruelle-Bowen measures (798K, .ps .dvi) Nov 9, 99
Abstract. Let $f$ be a $C^2$ diffeomorphism of a compact Riemannian manifold $M^m$ and $\mu$ an ergodic $f$-invariant Borel probability with nonzero Lyapunov exponents. We prove that $\mu$ is a Sinai-Ruelle-Bowen (SRB) measure if and only if we can reduce the dynamics on an invariant set of total measure to a horseshoe with infinitely many branches and variable return times. Also, as a consequence of our approach, we give a new proof of the well-known Ledrappier-Young characterization theorem.
Files: 99-422.src( 99-422.keywords , articleETDS.ps , articleETDS.dvi.mm )
Numerical resistivity
Numerical resistivity is a problem in computer simulations of ideal magnetohydrodynamics (MHD). It is a form of numerical diffusion. In near-ideal MHD systems, the magnetic field can diffuse only very slowly through the plasma or fluid of the system; the diffusion is rate-limited by the resistivity of the fluid. In Eulerian simulations where the field is arbitrarily aligned with respect to the simulation grid, the numerical diffusion rate takes a form similar to an additional resistivity, causing non-physical and sometimes bursty magnetic reconnection in the simulation. Numerical resistivity is a function of the resolution, the alignment of the magnetic field with the grid, and the numerical method. In general, numerical resistivity will not behave isotropically, and there can be different effective numerical resistivities in different parts of the computational domain. As of 2005, for simulations of the solar corona and inner heliosphere, this numerical effect can be several orders of magnitude larger than the physical resistivity of the plasma.
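As a hedged illustration of the resolution dependence, the modified-equation analysis of first-order upwind advection gives a numerical diffusion coefficient proportional to the grid spacing; the helper below encodes only that leading-order estimate, not the resistivity of any particular MHD code.

def upwind_numerical_diffusion(v, dx):
    # Leading-order numerical diffusion coefficient of first-order upwind
    # advection, eta_num ~ |v| * dx / 2 (modified-equation estimate); it
    # shrinks with the grid spacing, illustrating the resolution dependence.
    return 0.5 * abs(v) * dx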
How is solving for a specified variable in a formula similar to finding a solution for an equation or inequality? How is it different?
I am a parent of an 8th grader:The software itself works amazingly well - just enter an algebraic equation and it will show you step by step how to solve and offer clear, brief explanations,
invaluable for checking homework or reviewing a poorly understood concept. The practice test with printable answer key is a great self check with a seemingly endless supply of non-repeating
questions. Just keep taking the tests until you get them all right = an A+ in math.
Don Woodward, ND
Thank you very much for your help!!!!! The program works just as was stated. This program is a priceless tool and I feel that every student should own a copy. The price is incredible. Again, I
appreciate all of your help.
Julieta Cuellar, PN
OK here is what I like: much friendlier interface, coverage of functions, trig. better graphing, wizards. However, still no word problems, pre-calc, calc. (Please tell me that you are working on it
- who is going to do my homework when I am past College Algebra?!?
Michael, OH
As proud as my Mom was every time I saw her cheering me after I ran in a touchdown from 5 yards out, she was always just as worried about my grades. She said if I didnt get my grades up, nobody
would ever give me a scholarship, no matter how many rushing yards I got. Even when my coach showed me your program, I didnt want no part of it. But, it started making sense. Now, I do algebra with
as much confidence as play football and my senior year is gonna be my best yet!
Madison Childress, FL
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
Search phrases used on 2014-08-13:
• Solving Formulas for a variable
• reducing rational expressions to lowest terms
• "linear interpolation" "TI-86" "how to"
• calculating log base on ti-89
• integers of number "game"
• solution to Hardest Easy Geometry Problem
• intermediate algebra vocabulary
• double cross algebra with pizzazz
• solving quadratic in maple
• What is the least common factor of 5,6,7?
• graphing system of equations+life
• How to teach exponents/elementary
• how to use ti82
• trig mother functions of logs
• ti 83-parallel and perpendicular equations program
• Antiderivative Solver
• use T1-83 plus to solve matrices
• scott foresman math grade 6/chapter 7 workbook
• free intermediate algebra solutions
• program to factor equations
• "four numbers game"
• solveing for an exponent
• mcgraw hill online calculator
• mixed decimal as a mixed number
• algabra for dummies
• physics formula sheet GCSE
• easy ways to pass alegbra
• simplifying calculation before the days of the calculator
• ti-84 games downloads
• Mcdougal Littell Algebra 2
• finding least common multiple of variable expressions
• finding area worksheets
• TI-84 quadratic formula program
• complete the square using a ti-84 plus
• pre test fluids grade 8
• how to find eigenvalues of a matrix using t1-83
• free rational expression calculators
• maths test ks3
• ti 89 how to wronskian
• arithematic
• geometry Notes glencoe chapter 5
• percentage printable worksheets
• california mathematics standards, slope intercept 8th grade
• printable samples of mean for elementary
• LCD in 5th grade math
• system of square equation
• sample SOL warmup math questions on 6th graders analyzing charts and graphs
• simplifying powers of i calculator
• math answers for equations in algebra 1
• 72374415157711
• alg. 1 homework cheat
• printable graph paper for long division
• quick answers, multiplying radicals
• simplifying decimals calculator
• Rewrite radicals using rational expressions
• graphing cube calculator
• algebra programs you can type in for ti 83
• 6th grade algebra problems
• math anwsers
• free book of algebra
• precalculus balance equations
• free anwers for homework
• Free Algebra step-by-step problem solver
• Y intercept algebra worksheets
• math scale factors explanation
• all the answers to Vocabulary Power Plus For The New Sat: Book 3
• simplify square root using distributive property
• free log calculator
• convert from negative decimal to binary with java code
• partial fraction simplifying calculators
• determine if graph represents a function
• balancing equations in TI 86
• McGraw-Hill Inc worksheet answers
• Canadian money free worksheets
• how to teach basic algebra problem
• free online rational expressions calculator
• equations percentages maths
• 7th-8th grade math worksheets
• Algebra Formulas
• free ebook basic accounting book
• 1st grade accounting
• how to solve polynomial equations and inequalities
• how to multiply algebraic expressions
• expand algebra calcul
• properties of additon and multiplication
• saxon math algebra 2 answers
• adding polynomials with algebra tiles worksheets
• permutations of sets in mathcad
• free online holt algebra 1 teachers book
Experimental Study of a Sphere Bouncing on the Water
• 1
School of Energy and Power Engineering, Nanjing University of Science and Technology, Nanjing, 210094, China
• 2
School of Oceanography, Shanghai Jiao Tong University, Shanghai, 200240, China
In this paper, the flow physics and impact dynamics of a sphere bouncing on a water surface are studied experimentally. During the experiments, high-speed camera photography techniques are used
to capture the cavity and free surface evolution when the sphere impacts and skips on the water surface. The influences of the impact velocity ($ {v}_{1} $) and impact angle ($ {\theta }_{1} $) of the sphere on the bouncing flow physics are also investigated, including the cavity evolution, motion characteristics, and bouncing law. Criteria on the relationship between $ {v}_{1} $ and $ {\theta }_{1} $ for judging whether the sphere can bounce on the water surface are presented and analyzed by summarizing a large amount of experimental data. In addition, the effect of $ {\theta }_{1} $ on the energy loss of the sphere is also analyzed and discussed. The experimental results show that there is a fitted curve $ {v}_{1}=17.5{\theta }_{1}-45.5 $ determining the critical relationship between the initial velocity and angle at which the sphere bounces on the water surface.
Article Highlights
• High-speed photography and methods of force analysis based on these images are deployed to reveal the performance of spheres bouncing on the water surface.
• There is a critical curve indicating the relationship between the initial velocity and angle to determine whether the sphere bounces on the water surface.
• For a sphere to enter the water stably, the phenomenon of bouncing should be avoided.
• 1 Introduction
The impact dynamics of structures on the water have been studied for several decades because of their wide engineering applications (Glasheen and Mcmahon 1996; Miloh and Shukron 1991; Shlien 1994
; Von Karman 1929), such as the landing of a seaplane and a speedboat gliding on water. Bouncing and skipping a stone and the swift running of insects or basilisk lizards are other interesting
natural phenomena on a water surface that attract the attention of scientists (Clanet et al. 2004, Glasheen and Mcmahon 1996). The bouncing mechanism of an elastic sphere on a water surface was
studied by Belden et al. (2016) through experiments and numerical simulations. They found that the bouncing dynamics is decided by the ratio of the material shear modulus to hydrodynamic pressure
and the wave propagation speed. Hurd et al. (2019) revealed a new mode of bouncing on the water for relatively soft spheres with low impact angles by experiments, which shows good agreement
between measured accelerations, numbers of skipping events, and distances traveled. Liu and Smith (2014) proposed a new prediction formula for repeated impacts and rebounds, especially over short
time scales. Korobkin et al. (2011) summarized and analyzed the mathematical problems of modern hydroelastic mechanics and analyzed some models and techniques. Hewitt et al. (2011) experimentally
studied the planning and bouncing of rectangular paddles on a water surface and established a model using shallow water theory. By studying the rebound effect of objects in the shallow layer of a
liquid, Hicks and Smith (2011) proposed a model that couples the motion of an object to fluid dynamics.
In addition, projectile and stone skipping is also quite an interesting phenomenon that attracts the attention of researchers. Nishida et al. (2010) studied the dynamic properties of a projectile
impinging on a granular medium and derived a formula for determining the critical angle by the density ratio and diameter ratio through experimental techniques. Rosellini et al. (2005) studied
the hydrodynamic characteristics of a rotating disk impacting water with multiple bounces. They found that the source of dissipation lies in the dependence of this reaction force on the angle
between the water surface and the trajectory of the stone. In addition, they proposed a simple model to measure bounce dissipation. Bocquet (2003) analyzed the physics of stone skipping and found
that the maximum number of bounces can be estimated by the deceleration and angular stability of the stone. Faltinsen (2000) reviewed hydroelastic slamming experiments and theoretical studies and
noted that hydrodynamics research on slamming problems must be from a structural point of view. Johnson (1998) summarized the theory of ricochet from a liquid surface of a non-rotating sphere and
improved the non-rotating projectile portion. Truscott et al. (2014) summarized experimental, theoretical, and numerical studies of an object impacting on the water and introduced several
problems to be solved, including the high-fidelity measurements of acceleration, surface stress, and cavity pressure.
The above literature shows that most studies addressed aspects of the bouncing law of plate-shaped objects on the water, for instance, the number of bounces. The cavity evolution and kinetic energy dissipation of a symmetric, regularly shaped object (i.e., a sphere) are far from being clearly explained. This forward-looking paper reveals the cavity dynamics and the mechanism of a sphere bouncing on a water surface. Experimental evaluations of the effect of the initial impact angle on the energy dissipation, and an estimate of the law governing the critical impact angle, are provided for the bouncing of a sphere on a free water surface.
2 Experimental Methods of High-Speed Photograph
The experiment was conducted in a water tank located in Nanjing University of Science and Technology, China. During the experiments, the temperature was 23 to 27 ℃, and the closed laboratory
eliminated the interference of natural light and airflow. The size of the tank was 1500 (L) × 400 (W) × 600 mm (H), and the water depth was set to 350 mm, as shown in Figure 1. The water tank was
made of 10-mm thick optical glass to reduce the optical path loss. The steel sphere used in the experiment has a mass $ m\approx 1.397 $ g, a diameter $ d\approx 7 $ mm, and density $ {\rho }_{s}
\approx 7778.6 $ kg/m^3. The measurement error of these parameters was less than 0.5%. The sphere was launched by an electromagnetic launcher similar to the equipment described in Yun et al. (
2020), and the arrangement is shown in Figure 1(a).
A high-speed digital video camera, Phantom VEO410L, with a 105-mm fixed-focus Nikon lens and a resolution of 1280 × 400 pixels, was used to capture the sphere's movement and the associated cavity
evolution. The horizontal distance between the camera and the electromagnetic launcher was 3.4 m, which allows the microlens to capture the whole flow physics during the impact process, and the
camera center was approximately 0.35 m above the ground, which is nearly the same height as the free water surface. For the quality of the flow visualization experiment, an adequate supplementary
light source is fundamental. In the experiment, four high-frequency lamps of the type JINBEI EF-200 LED and a rectangular LED light box were used to provide sufficient light intensity. Four
high-frequency lamps were used as the foreground lighting arranged in front of the water tank, and the light box was used as the backlighting to provide a uniform white background arranged at the
back of the water tank, as shown in Figure 1. On the basis of this strong lighting, the frame rate was set as high as 10 000 fps to capture sufficiently crisp images to accurately analyze the
trajectory of the spheres, and the exposure time was decreased to 10 μs.
Seven typical cases with similar initial impact velocities and different impact angles were selected as analysis objects, and the parameters are shown in Table 1. The main parameters include the
initial impact velocity v[1], initial impact angle θ[1], bouncing velocity v[2], and bouncing angle θ[2]; definitions of the variables (i.e., v[1], θ[1], v[2], and θ[2]) are shown in Figure 1(a).
Table 1 Main parameters of typical cases
Case v[1] (m/s) θ[1] (°) v[2] (m/s) θ[2] (°)
1 58.58 1.04 56.81 0.94
2 58.41 2.60 50.86 2.68
3 58.66 3.69 47.42 3.26
4 56.69 4.53 38.75 4.32
5 57.88 5.89 6.22 5.57
6 57.68 6.01 - -
7 56.78 10.23 - -
3 Results and Discussion on the Sphere Bouncing and Cavity Characteristics
Cavity dynamic characteristics and the kinematics law of a steel sphere bouncing on the water will be discussed in this section.
3.1 Cavity Formation of a Sphere Bouncing on the Water
To analyze the hydrodynamic characteristics of the sphere, t[0] is used to indicate the time when the sphere just touches the water surface. Figure 2(a) shows the selected image sequence for case
1 with an initial impact velocity of v[1] = 58.58 m/s and an initial impact angle of θ[1] = 1.04°. Because of the very small initial impact angle and the larger initial impact velocity, the
center of the sphere was always above the static-free surface throughout the bouncing process, and the sphere was in contact with water for a short period of only 1.5 ms. The cavity is seen to be
the shallowest and the splash is weak. Furthermore, the bouncing velocity of v[2] = 56.81 m/s is quite near the initial impact velocity of v[1] = 58.58 m/s. The bouncing angle of θ[2] = 0.94° is
near the impact angle, indicating that a slight amount of energy is lost during the water surface impact.
The typical images of case 2 are shown in Figure 2(b), and the initial impact angle of the sphere was 2.60°. It can be shown that the splash above the free surface is much stronger, and the
cavity is relatively deep and large compared with that of case 1. Moreover, the duration of the sphere touching the water is extended slightly to Δt = 2.5 ms, compared with Δt = 1.5 ms for case
1. Figure 2(c) and (d) show the selected image sequences for case 3 and 4, respectively. Note that the initial impact velocities for the four cases are similar and the initial impact angles are
gradually increased. It is seen that the larger the initial impact angle is, the deeper and larger the profile of the cavity formed. Furthermore, it can be seen in case 4 that a smooth and
complete water curtain is formed, which almost encloses the entire sphere as it leaves the water surface (Figure 2(d)). Another significant phenomenon seen in cases 1 to 4 is that the bouncing
velocity decreases as the initial impact angle increases, although the initial impact velocity is the same in each case.
It is obvious that the flow physics of case 5 is different from those of the other cases, as shown in Figure 3. For case 5, v[1] is 57.88 m/s and θ[1] is 5.89°. Notably, the duration that the
sphere moved under the free surface is much longer for case 5, reaching 46 ms. Furthermore, the sphere is displaced 675 mm in the water, nearly 100-fold its diameter. A complete cavity, including
the smooth water film above the static water surface, encloses the sphere as it moves in the water. The cavity wall behind the sphere finally becomes rough because of the cavity collapse and the
effect of capillary waves on the free surface, as described by Grumstrup et al. (2007).
The cavity finally collapses into many small air bubbles, which is possibly associated with the hydrostatic pressure under the free surface. The cavity also causes a violent free-surface elevation,
where an obvious boundary between the cavity and the surface water film can be noticed. The water film later shrinks into a long water column, possibly associated with the surface tension. The
bouncing velocity is just v[2] = 6.22 m/s, much smaller than the initial impact velocity of v[1] = 57.88 m/s. More than 98.8% of the sphere's kinetic energy is lost during the impact process,
which is largely transferred to the energy of the water film and cavity.
To clearly analyze the phenomenon and mechanism of a sphere bouncing on a water surface, two experimental cases with different initial impact angles were selected. Figure 4 shows the image
sequence of cavity evolution for the sphere in case 6. The initial impact velocity for case 6 (v[1] = 57.68 m/s) is almost the same as that of case 5, while the initial impact angle (θ[1] =
6.01°) is slightly larger. The early-stage flow physics for case 5 and case 6 is quite similar, including the cavity and evolution of water film. Nevertheless, obvious differences are noticed in
the flow physics and movement of the sphere for the two cases. In case 6, the sphere eventually enters the water after sliding a long distance near the free surface, as shown in Figure 4. The
main reason for the difference is that the lift force of the water on the sphere is less than the gravitational and inertial forces associated with the larger initial impact angle for case 6.
Eventually, the kinetic energy of the sphere gradually dissipates under the free surface and the sphere continues to move underwater.
Figure 5 shows a selected image sequence of the sphere in case 7 hitting the water with a larger initial impact angle of θ[1] = 10.23°. The initial impact velocity was v[1] = 56.78 m/s, close to
those of other cases. It can be seen that the sphere enters directly into the water. A perfect and smooth cavity is formed, which expands and is elongated as the sphere descends in the water. The
contraction of the tail of the cavity is obvious under the free surface in Figure 5. Moreover, the water curtain above the free surface has a clear tendency to expand outward, accompanied by the
forward movement of the sphere. At the later stage, the cavity wall becomes rough when it collapses because of the effects of hydrostatic pressure and surface tension.
3.2 Mechanism of a Sphere Bouncing on the Water
The typical images for four cases are selected to analyze the mechanism of a sphere bouncing on the water. As shown in Figure 6, the cavity length between the impact point and bouncing point for
the sphere in the four cases is 90.4, 114.8, 134.7, and 172.6 mm, respectively. The maximum depth of the cavity for the four bouncing cases is 4.2, 9.1, 10.4, and 14 mm, respectively. The
percentage of kinetic energy loss η can be estimated from $ \eta =({v}_{1}^{2}-{v}_{2}^{2})/{v}_{1}^{2} $, where v[1] is the initial impact velocity, and v[2] is the bouncing velocity. This
energy loss is used to maintain the development of the cavity. The values of bouncing velocity and η for the four cases are shown in Figure 6. It can be seen that, at the same v[1], the bouncing
velocity decreases as the initial angle of incidence increases, which leads to an increase in the percentage of kinetic energy loss. Furthermore, the maximum cavity length and depth increase
because of the increase in θ[1].
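As a quick numerical check of this estimate, the kinetic energy loss can be computed directly from the tabulated velocities. The following short Python sketch (illustrative only, not part of the original analysis; the values are taken from Table 1) reproduces the trend of increasing η with θ[1]:
import numpy as np

# (v1, v2) pairs in m/s for the bouncing cases 1-5 of Table 1
v1 = np.array([58.58, 58.41, 58.66, 56.69, 57.88])
v2 = np.array([56.81, 50.86, 47.42, 38.75, 6.22])

# percentage of kinetic energy lost during impact: eta = (v1^2 - v2^2) / v1^2
eta = (v1**2 - v2**2) / v1**2
for case, e in enumerate(eta, start=1):
    print(f"case {case}: eta = {e:.1%}")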
Figure 7 sketches the evolution of the cavity profile and the brief force analysis for case 4, including the whole process of the sphere impacting, gliding, and bouncing on the water surface. The
blue dotted lines indicate the profile of the splash, and the red lines are the contact area between the sphere and water. Because of the interaction between the sphere and water in the contact
area, the sphere is subjected to a lift force, as shown in Figure 7. During the process of the impact, the external forces acting on the sphere are the weight mg, the buoyancy force F[b], the
steady-state drag force F[d], the capillary force F[c] due to surface tension, and the hydrodynamic force F[h], also called the resistive force. On the basis of Newton's second law, the equation
of motion of a sphere bouncing on the water can be written as follows:
$$ ma=mg-{F}_{h}-{F}_{b}-{F}_{c}-{F}_{d} $$ (1)
In the process of the sphere bouncing on the water, the hydrodynamic force F[h] is the dominant force, and the direction and magnitude of F[h] change with the direction of movement. The entire
force on the sphere can be decomposed into two components: horizontal drag and vertical lift. The drag reduces the velocity, and the lift eventually changes the direction of the sphere. Because
of the external forces, the sphere bounces on the free surface.
3.3 Force Characteristics Analysis of a Sphere Bouncing on the Water
In this section, the kinematics and dynamic characteristics of a sphere bouncing on a water surface are analyzed quantitatively. Figure 8 presents the displacement, velocity, and acceleration
curves associated with cases 5 and 6 in the horizontal and vertical directions. The analytical method is similar to that described by Epps et al. (2010) and Wei and Hu (2014, 2015). The line (x,
0) indicates the static-free surface, and the point (0, 0) is the impact point of the sphere on the water. The initial θ[1] and v[1] for the two cases are quite similar, while the impact
phenomena are significantly different.
Displacement curves in cases 5 and 6 show that the sphere trajectories are the same at the initial stage of impact (referring to Figure 8(a)). As the sphere moves forward, its trajectory
gradually is deflected upward and eventually sends it rushing out of the free surface in case 5. The obvious difference is that the sphere's trajectory in case 6 shifts downward and is gradually
moved away from the static-free surface. The velocity variations of spheres along the x- and y-directions are shown in Figure 8(b) and (c). It is found that the tendencies for the two cases are
similar along the x-direction (Figure 8(b)), while the tendencies along the y-direction gradually differ over time, as shown in Figure 8(c). It can be seen that the vertical velocity of the
sphere in case 5 changes from negative to positive at t = 11 ms, which indicates that the sphere reaches the deepest point in the cavity and begins to move upward at this time. In contrast, the
vertical velocity of the sphere in case 6 almost levels off after a rapid decrease in the early stage of impact.
The acceleration curves in the x- and y-directions in Figure 8(d) almost overlap in the early stage. Nevertheless, slight differences are noticed along the y-direction after impact, as shown in
detail in Figure 8(d). The acceleration in case 5 is positive, which causes the sphere to ascend gradually until rushing out of the water. In contrast, the acceleration in case 6 is negative,
which leads the sphere to move downward throughout the process of impacting the water.
3.4 Statistics of a Successful Bounce
To reveal the bouncing law of the sphere on a free water surface, numerous experiments have been conducted. A schematic diagram of the statistical distribution of θ[1] and v[1] during the
experiments is shown in Figure 9. The black hollow squares stand for the cases in which the sphere bounces on the water surface, and the red solid triangles are the cases without bouncing. A
conspicuous boundary is found to differentiate the phenomena of bouncing and non-bouncing. The boundary line is shown with a blue dotted line in Figure 9 and can be fitted as follows:
$$ {v}_{1}=17.5{\theta }_{1}-45.5 $$ (2)
Definitely, the flow physics of bouncing or no bouncing for a sphere impacting on the free water surface within this velocity range (i.e., < 60 m/s) can be predicted by this boundary line. Figure
9 also indicates that the maximum bounce angle is lower than 6.5°, which is consistent with the law $ {\theta }_{c}=18/\sqrt{\sigma } $ given by Johnson and Reid (1975), where $ {\theta }_{c} $
is the critical angle and σ is the density ratio of the sphere to water. Because of the interference of uncertain factors, some data points are located on the opposite side of the boundary line,
which is not in accordance with the law.
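Both criteria are simple enough to evaluate directly. The sketch below (illustrative only; the coefficients come from Eq. (2) and the Johnson and Reid law quoted above, and the density ratio is taken from the sphere properties in Section 2, assuming water at 1000 kg/m^3) classifies an impact as bouncing or non-bouncing:
import numpy as np

def bounces(v1, theta1, sigma=7778.6 / 1000.0):
    """Predict whether a sphere impact bounces.

    v1     : initial impact velocity (m/s), valid for v1 < 60 m/s
    theta1 : initial impact angle (degrees)
    sigma  : sphere-to-water density ratio (assumed from Section 2)
    """
    above_boundary = v1 > 17.5 * theta1 - 45.5       # empirical boundary, Eq. (2)
    below_critical = theta1 < 18.0 / np.sqrt(sigma)  # Johnson and Reid (1975)
    return above_boundary and below_critical

print(bounces(58.58, 1.04))   # case 1 -> True (bounces)
print(bounces(56.78, 10.23))  # case 7 -> False (no bounce)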
For the process of a sphere bouncing on the water, another significant characterization parameter is the bouncing angle $ {\theta }_{2} $. The distribution of $ {\theta }_{2} $ and $ {\theta }_
{1} $ is shown in Figure 10. Experimental data points of the bouncing cases can be distinguished through the duration t, which is the period from the sphere impacting on the water to leaving the
static-free surface. The black solid squares represent the cases in which the duration t is smaller than 10 ms, and the red hollow triangles represent the cases with t larger than 10 ms. Note that
the blue dotted line corresponds to $ {\theta }_{1}={\theta }_{2} $. It is clearly seen that the statistical data points are mainly distributed near the blue dotted line, and the distribution is
more concentrated for the cases of t ≤ 10 ms. For cases with the spheres gliding on the water for a longer time (t > 10 ms), the bouncing angle $ {\theta }_{2} $ tends to be smaller than the
initial impact angle $ {\theta }_{1} $.
As mentioned above, the energy is transmitted from the sphere to water during the bouncing process. Figure 11 shows the statistical data points for the variation in kinetic energy loss percentage
η with the initial impact angle. The loss of kinetic energy is more pronounced for the cases with larger initial impact angles. Furthermore, the percentage of kinetic energy loss is below 50%
when the impact angle is smaller than 3°. The energy attenuation rate corresponding to different angles is between two curves for all the cases, as shown in Figure 11, and the two curves can be
fitted using the following equations:
$$ \eta =8.67{\theta }_{1}^{2}-3.33{\theta }_{1} $$ (3)
$$ \eta =2.13{\theta }_{1}^{2}-0.5{\theta }_{1} $$ (4)
4 Conclusions
In this work, the cavity configurations and kinematic characteristics of a sphere bouncing on water are experimentally investigated. Through continuous improvement of experimental methods,
high-resolution images of the cavity evolution of a sphere bouncing on the water are obtained; they show the evolution process of the cavity more intuitively and facilitate the analysis of
complex flow phenomena during the process. The motion and force characteristics of the sphere in typical cases are extracted and analyzed by an advanced image processing technique. More than 200
cases of a sphere impacting on a water surface have been considered, and the effect of initial impact velocity and initial impact angle on the phenomena of sphere bouncing has been investigated.
The results indicate that a critical curve of $ {v}_{1}=17.5{\theta }_{1}-45.5 $ shows the relationship between the initial velocity and angle and determines whether the sphere bounces on the
water surface. In addition, for the impacting cases with the same initial velocity, the energy attenuation of the sphere is closely associated with the initial impact angle during bouncing, and
more than half of the kinetic energy is lost for an impact angle of $ {\theta }_{1} $ > 3°.
Open Access
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material
in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons
licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this
licence, visit
Belden J, Hurd RC, Jandron MA, Bower AF, Truscott TT (2016) Elastic spheres can walk on the water. Nat Commun 7: 10551 https://doi.org/10.1038/ncomms10551
Bocquet L (2003) The physics of stone skipping. Am J Phys 71: 150–155 https://doi.org/10.1119/1.1519232
Clanet C, Hersen F, Bocquet L (2004) Secrets of successful stone-skipping. Nature 427(6969): 29 https://doi.org/10.1038/427029a
Epps BP, Truscott TT, Techet AH (2010) Evaluating derivatives of experimental data using smoothing splines. Mathematical Methods in Engineering International Symposium, 29–38
Faltinsen OM (2000) Hydroelastic Slamming. J Mar Sci Tech-Japan 5: 49–65
Glasheen JW, Mcmahon TA (1996) A hydrodynamical model of locomotion in the basilisk lizard. Nature 380: 340–342 https://doi.org/10.1038/380340a0
Grumstrup T, Keller JB, Belmonte A (2007) Cavity ripples observed during the impact of solid objects into liquids. Phys Rev Lett 99: 114502 https://doi.org/10.1103/PhysRevLett.99.114502
Hewitt IJ, Balmforth NJ, Mcelwaine JN (2011) Continual skipping on the water. J Fluid Mech 669: 328–353 https://doi.org/10.1017/S0022112010005057
Hicks PD, Smith FT (2011) Skimming impacts and rebounds on shallow liquid layers. Proc R Soc A: Math Phys Eng Sci 467: 653–674
Hurd RC, Belden J, Bower AF, Holekamp S, Jandron MA, Truscott TT (2019) Water walking as a new mode of free surface skipping. Sci Rep-UK 9: 6042 https://doi.org/10.1038/s41598-019-42453-x
Johnson W (1998) Ricochet of non-spinning projectiles, mainly from water part I: some historical contributions. Int J Impact Eng 21: 15–24 https://doi.org/10.1016/S0734-743X(97)00032-8
Johnson W, Reid SR (1975) Ricochet of spheres off water. J Mech Eng Sci 17: 71–81 https://doi.org/10.1243/JMES_JOUR_1975_017_013_02
Korobkin A, Parau EI, Vanden-Broeck JM (2011) The mathematical challenges and modelling of hydroelasticity. Philos Trans R Soc A: Math Phys Eng Sci 369: 2803–2812
Liu K, Smith FT (2014) Collisions, rebounds and skimming. Philos Trans A Math Phys Eng Sci 372: 20130351
Miloh T, Shukron Y (1991) Ricochet off water of spherical projectiles. J Ship Res 35: 91–100 https://doi.org/10.5957/jsr.1991.35.2.91
Nishida M, Okumura M, Tanaka K (2010) Effects of density ratio and diameter ratio on critical incident angles of projectiles impacting granular media. Granul Matter 12: 337–344 https://doi.org/
Rosellini L, Hersen F, Clanet C, Bocquet L (2005) Skipping stones. J Fluid Mech 543: 137 https://doi.org/10.1017/S0022112005006373
Shlien DJ (1994) Unexpected ricochet of spheres off water. Exp Fluids 17: 267–271 https://doi.org/10.1007/BF00203046
Truscott TT, Epps BP, Belden J (2014) Water entry of projectiles. Annu Rev Fluid Mech 46: 355–378 https://doi.org/10.1146/annurev-fluid-011212-140753
Von Karman T (1929) The impact on seaplane floats during landing. NACA Technical Note, 321
Wei ZY, Hu CH (2014) An experimental study on the water entry of horizontal cylinders. J Mar Sci Tech-Japan 19: 338–350 https://doi.org/10.1007/s00773-013-0252-z
Wei ZY, Hu CH (2015) Experimental study on the water entry of circular cylinders with inclined angles. J Mar Sci Tech-Japan 20: 722–738 https://doi.org/10.1007/s00773-015-0326-1
Yun HL, Lyu XJ, Wei ZY (2020) Experimental study on vertical water entry of two tandem spheres. Ocean Eng 201: 107143 https://doi.org/10.1016/j.oceaneng.2020.107143 | {"url":"http://html.rhhz.net/jmsa/html/20210410.htm","timestamp":"2024-11-12T20:44:59Z","content_type":"text/html","content_length":"206057","record_id":"<urn:uuid:7a8c7ccc-39d2-4b9f-92d0-34e334d5fcea>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00455.warc.gz"} |
4.8.4 1d Real-odd DFTs (DSTs)
The Real-odd symmetry DFTs in FFTW are exactly equivalent to the unnormalized forward (and backward) DFTs as defined above, where the input array X of length N is purely real and also has odd
symmetry. In this case, the output has odd symmetry and is purely imaginary.
For the case of RODFT00, this odd symmetry means that X[j] = -X[N-j], where we take X to be periodic so that X[N] = X[0]. Because of this redundancy, only the first n real numbers starting at j=1 are
actually stored (the j=0 element is zero), where N = 2(n+1).
The proper definition of odd symmetry for RODFT10, RODFT01, and RODFT11 transforms is somewhat more intricate because of the shifts by 1/2 of the input and/or output, although the corresponding
boundary conditions are given in Real even/odd DFTs (cosine/sine transforms). Because of the odd symmetry, however, the cosine terms in the DFT all cancel and the remaining sine terms are written
explicitly below. This formulation often leads people to call such a transform a discrete sine transform (DST), although it is really just a special case of the DFT.
In each of the definitions below, we transform a real array X of length n to a real array Y of length n:
RODFT00 (DST-I)
An RODFT00 transform (type-I DST) in FFTW is defined by:
Y[k] = 2 ∑_{j=0}^{n-1} X[j] sin(π (j+1)(k+1) / (n+1))
RODFT10 (DST-II)
An RODFT10 transform (type-II DST) in FFTW is defined by:
Y[k] = 2 ∑_{j=0}^{n-1} X[j] sin(π (j+1/2)(k+1) / n)
RODFT01 (DST-III)
An RODFT01 transform (type-III DST) in FFTW is defined by:
Y[k] = (-1)^k X[n-1] + 2 ∑_{j=0}^{n-2} X[j] sin(π (j+1)(k+1/2) / n)
In the case of n=1, this reduces to Y[0] = X[0].
RODFT11 (DST-IV)
An RODFT11 transform (type-IV DST) in FFTW is defined by:
Y[k] = 2 ∑_{j=0}^{n-1} X[j] sin(π (j+1/2)(k+1/2) / n)
Inverses and Normalization
These definitions correspond directly to the unnormalized DFTs used elsewhere in FFTW (hence the factors of 2 in front of the summations). The unnormalized inverse of RODFT00 is RODFT00, of RODFT10
is RODFT01 and vice versa, and of RODFT11 is RODFT11. Each unnormalized inverse results in the original array multiplied by N, where N is the logical DFT size. For RODFT00, N=2(n+1); otherwise, N=2n.
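These inverse relationships can be checked numerically: SciPy's scipy.fft.dst with the default norm follows the same unnormalized definitions, so applying a transform and its unnormalized inverse returns the input scaled by the logical DFT size N. A small illustrative sketch (not part of the FFTW manual):
import numpy as np
from scipy.fft import dst

n = 8
x = np.random.default_rng(1).normal(size=n)

# RODFT00 (DST-I) is its own unnormalized inverse, up to a factor N = 2(n+1)
y = dst(dst(x, type=1), type=1)
assert np.allclose(y, 2 * (n + 1) * x)

# RODFT10 (DST-II) and RODFT01 (DST-III) invert each other, up to N = 2n
y = dst(dst(x, type=2), type=3)
assert np.allclose(y, 2 * n * x)
print("unnormalized inverses verified")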
In defining the discrete sine transform, some authors also include additional factors of √2 (or its inverse) multiplying selected inputs and/or outputs. This is a mostly cosmetic change that makes
the transform orthogonal, but sacrifices the direct equivalence to an antisymmetric DFT. | {"url":"http://ftp.fftw.org/fftw3_doc/1d-Real_002dodd-DFTs-_0028DSTs_0029.html","timestamp":"2024-11-03T10:38:47Z","content_type":"text/html","content_length":"8032","record_id":"<urn:uuid:ae17b487-7d09-4e23-956b-fcc5c895a836>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00572.warc.gz"} |
Types of Graphs
Learn about different types of graphs.
We can classify graphs into several types depending on the use case or the context. Let's begin by taking a look at the following types.
Undirected graph
This is a simple graph in which the edges don't have a directed nature. In other words, E(u,v) = E(v,u); the links don't have a sense of direction and are considered the same whether traversed from node u
to node v or from node v to node u. A social network of friends is an excellent example of an undirected graph in which nodes represent individuals, and the edges represent their friendship.
Directed graph
A directed graph (or a digraph) is a graph in which the edges are ordered, and the flow of direction is well defined. In such a case, E(u,v) ≠ E(v,u).
In the example below, node A is called a head, and node B is called a tail. This nomenclature specifies the direction of the connection between the two nodes. Also, the presence of this sense of
direction makes it necessary to add more detail to the degree property of a node. Here, we introduce indegree and outdegree to denote the number of inward and outward connections of a node. For
example, node A has an outdegree of two and an indegree of zero, whereas node B has an outdegree of one and an indegree of one.
A simple instance of a directed graph would be a network of Twitter followers in which the nodes are individual accounts and the edges are the follows. A person could follow Elon Musk, but Musk might
not follow them back, making the connection one way. This adds a property to the edge, which is the direction of the relationship.
Multigraph
This is a graph type that allows multiple edges or connections between the nodes. Imagine a transport graph with a few cities like Paris and London that act as nodes. The different possible travel
routes between the two cities are the edges. This represents an instance of a multigraph, which can be directed or undirected.
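As a minimal sketch (the city and route names are just the example above), a multigraph in NetworkX keeps both parallel routes as distinct edges:
import networkx as nx

# a multigraph allows several parallel edges between the same pair of nodes
M = nx.MultiGraph()
M.add_edge("Paris", "London", key="ferry")
M.add_edge("Paris", "London", key="train")
# both routes are kept as distinct edges
print(M.number_of_edges("Paris", "London")) # 2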
Single-relational graph
This is a graph in which the nodes and edges represent one type of property—for instance, the property of friendship in a social network graph.
Multi-relational graph
Here, we allow the nodes and edges to have multiple properties and coexist in the graph. A simple example would be a social network that represents different types of friends, like school and
university friends. A knowledge graph, which we'll discuss in more detail later, is an excellent example of a multi-relational graph, and they are very popular nowadays.
Bipartite graph
As the name suggests, the nodes in the graph are divided into two groups. Check out the visualization of an example below:
The two groups are shown in different colors. The groups can have any property, like football players, clubs, and so on.
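A small sketch of such a graph in NetworkX (the node names are illustrative; the bipartite node attribute marks the two groups):
import networkx as nx

B = nx.Graph()
# group 0: players, group 1: clubs
B.add_nodes_from(["Messi", "Ronaldo"], bipartite=0)
B.add_nodes_from(["PSG", "Al Nassr"], bipartite=1)
B.add_edges_from([("Messi", "PSG"), ("Ronaldo", "Al Nassr")])
print(nx.is_bipartite(B)) # True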
Directed acyclic graphs
Also denoted as DAGs, the directed acyclic graphs have no directed cycles since the directed edges never form a closed loop. All tree-based graphs are DAGs.
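A quick illustrative check with NetworkX:
import networkx as nx

# directed edges that never form a closed loop
D = nx.DiGraph([(1, 2), (1, 3), (2, 4), (3, 4)])
print(nx.is_directed_acyclic_graph(D)) # True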
Weighted graph
In this type of graph, we use a function that assigns a weight to either a node or an edge. Subsequently, graphs like these are called node-weighted graphs or edge-weighted graphs.
An edge-weighted directed graph
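A sketch of building an edge-weighted directed graph like the one pictured above (the weights are illustrative):
import networkx as nx

W = nx.DiGraph()
# the weight attribute stores the edge weight
W.add_edge("A", "B", weight=4)
W.add_edge("B", "C", weight=7)
print(nx.is_weighted(W)) # True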
Note: A given graph can be a mixture of multiple types mentioned above. We could build a graph that’s edge-weighted, directed, and multi-relational all at the same time.
Graph visualization
Let's see a code that makes a directed graph (or a DiGraph) using NetworkX.
import networkx as nx
import matplotlib.pyplot as plt
# create an instance of graph
G = nx.DiGraph()
# add nodes from a list
G.add_nodes_from([1, 2, 3, 4])
# add edges from a list
G.add_edges_from([(1, 2), (1, 3), (1, 4)])
# visualize graph
nx.draw(G, with_labels=True, node_size=1000, font_size=30, arrowsize=30)
Let’s have a look at the code explanation below:
• Line 5: We use an nx.DiGraph() instance to generate a directed graph.
• Lines 8–11: Instead of adding nodes one by one, we can use the add_nodes_from() and add_edges_from() methods to input a list directly.
There are methods in the NetworkX library that can be used to check if a graph is directed or weighted.
Let's run the following code to check if the graph is directed:
import networkx as nx
G = nx.erdos_renyi_graph(100, 0.15)
print("The graph is directed: ",nx.is_directed(G))
Let's run the following code to check if the graph is weighted.
import networkx as nx
G = nx.erdos_renyi_graph(100, 0.15)
print("The graph is weighted: ",nx.is_weighted(G)) | {"url":"https://www.educative.io/courses/introduction-to-graph-machine-learning/types-of-graphs","timestamp":"2024-11-05T10:39:33Z","content_type":"text/html","content_length":"833323","record_id":"<urn:uuid:21ebdb1a-a535-48ed-9e16-de9dbd818817>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00225.warc.gz"} |
Multiply Rational Numbers (examples, solutions, videos, worksheets)
Related Topics:
Common Core for Grade 7 Common Core for Mathematics Lesson Plans and Worksheets for all Grades More Lessons for Grade 7
Examples, solutions, worksheets, videos, and lessons to help Grade 7 students learn how to apply and extend previous understandings of multiplication and division and of fractions to multiply and
divide rational numbers.
Understand that multiplication is extended from fractions to rational numbers by requiring that operations continue to satisfy the properties of operations, particularly the distributive property,
leading to products such as (–1)(–1) = 1 and the rules for multiplying signed numbers. Interpret products of rational numbers by describing real-world contexts.
Common Core: 7.NS.2a
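To see why the distributive property forces (–1)(–1) = 1, here is the standard one-line argument (included for illustration): since 0 = (–1) × 0 = (–1) × (1 + (–1)) = (–1)(1) + (–1)(–1) = –1 + (–1)(–1), adding 1 to both sides gives (–1)(–1) = 1.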
Suggested Learning Targets
• I can multiply and divide rational numbers (integers, fractions, and decimals).
• I can use the multiplication rules for integers and apply them to multiplying decimals and fractions.
• I can use real-world contexts to describe the product of rational numbers.
• I can interpret products of rational numbers in real-world contexts.
• I can create an equivalent mathematical expression when given an expression by using the distributive property or other properties of operations.
• I can identify equivalent expressions when given two or more expressions.
Extending multiplication of fractions to rational numbers (Common Core 7.NS.2)
Multiplying Rational Numbers
How to multiply rational numbers like decimals and fractions?
Multiplying Rational Numbers - 7th Grade
| {"url":"https://www.onlinemathlearning.com/multiply-rational-numbers-7ns2a.html","timestamp":"2024-11-02T15:27:39Z","content_type":"text/html","content_length":"36927","record_id":"<urn:uuid:45f563c2-ec3f-438d-bff0-0f703599a6e5>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00255.warc.gz"}
Foundations of Statistics and Machine Learning
Admission requirements
Prerequisites are basic probability theory (including laws of large numbers, central limit theorem) and statistics (maximum likelihood, least squares). Knowledge of machine learning or more advanced
probability/statistics may be useful but is not essential. In particular, all stochastic process/martingale theory that is needed will be developed from scratch.
A large fraction (some claim > 1/2) of published research in top journals in applied sciences such as medicine and psychology is irreproducible. In light of this 'replicability crisis', classical
statistical methods, most notably testing based on p-values, have recently come under intense scrutiny. Indeed, p-value-based tests, but also other methods like confidence intervals and Bayesian
methods, were mostly developed in the 1930s - and they are not really suitable for many 21st-century applications of statistics. Most importantly, they do not deal well with situations in
which new data can keep coming in. For example, based on the results of existing trials, one decides to do a new study of the same medication in a new hospital; or: whenever you type in new search
terms, google can adjust the model that decides what advertisements to show to you.
In this class we first review the classical approaches to statistical testing, estimation and uncertainty quantification (confidence) and discuss what each of them can and cannot achieve. These
include Fisherian testing, Neyman-Pearson testing, Jeffreys-Bayesian (all from the 1930s), sequential testing (1940s) and pure likelihood-based (1960s) approaches. From the confidence perspective, it
includes classical (Neyman-Pearson) confidence intervals and Bayesian posteriors. For each of these we treat the mathematical results underlying them (such as complete class theorems and the 'law of
likelihood') and we give examples of common settings in which they are mis-used. All these approaches, while quite different and achieving different goals, have difficulties in the modern age, in
which "optional continuation" is the rule rather than the exception. We will also treat approaches from the 1980s and 1990s based on data-compression ideas.
We will then treat the one approach which seems more suitable for the modern context: the always-valid-confidence sets of Robbins, Darling and Lai (late 1960s), which has its roots in sequential
testing (Wald, 1940s). The always-valid-approach has recently been re-invigorated and extended. The mathematics behind it involves martingale-based techniques such as Doob's optional stopping
theorem, advanced concentration inequalities such as a finite-time law of the iterated logarithm and information-theoretic concepts such as the relative entropy.
The central organizing principle in our treatment is the concept of likelihood and its generalization, nonnegative supermartingales.
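As a taste of the always-valid machinery, the sketch below (purely illustrative; not course material) simulates a likelihood-ratio test martingale under the null and checks Ville's inequality, which bounds the probability that a nonnegative martingale with initial value 1 ever exceeds 1/α by α, at any stopping time:
import numpy as np

rng = np.random.default_rng(0)
alpha, n_steps, n_runs = 0.05, 1000, 20000

# H0: X_i ~ N(0, 1); alternative: N(0.5, 1).
# The likelihood ratio M_t is a nonnegative test martingale under H0,
# so P(sup_t M_t >= 1/alpha) <= alpha even under optional stopping.
crossed = 0
for _ in range(n_runs):
    x = rng.normal(0.0, 1.0, n_steps)          # data generated under H0
    log_m = np.cumsum(0.5 * x - 0.5 ** 2 / 2)  # log-likelihood-ratio increments
    crossed += np.exp(log_m).max() >= 1 / alpha
print(f"crossing frequency: {crossed / n_runs:.4f} (Ville bound: {alpha})")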
Course Objectives
• Understand the notions of likelihood and its application in the classical statistical paradigms (frequentist, Bayesian, sequential)
• Understand the notion of nonnegative test martingale and its application in always-valid testing and estimation
• Understand the powers and limitations of existing statistical methods
You will find the timetables for all courses and degree programmes of Leiden University in the tool MyTimetable (login). Any teaching activities that you have successfully registered for in MyStudyMap
will automatically be displayed in MyTimetable. Any timetables that you add manually will be saved and automatically displayed the next time you sign in.
MyTimetable allows you to integrate your timetable with your calendar apps such as Outlook, Google Calendar, Apple Calendar and other calendar apps on your smartphone. Any timetable changes will be
automatically synced with your calendar. If you wish, you can also receive an email notification of the change. You can turn notifications on in ‘Settings’ (after login).
For more information, watch the video or go to the 'help page' in MyTimetable. Please note: Joint Degree students Leiden/Delft have to merge their two different timetables into one. This video
explains how to do this.
Mode of instruction
Weekly lectures. Bi-weekly exercise sessions in which homework of type (a) is discussed.
Homework consisting of (a) math exercises and (b) a project involving doing a few experiments with an R package.
Assessment method
The final grade consists of homework (40%) and a written (retake) exam (60%). To pass the course, the grade for the (retake) exam should be at least 5 and the (unrounded) weighted average of the two
partial grades at least 5.5. No minimum grade is required for the homework in order to take the exam or to pass the course. The homework counts as a practical and there is no retake for it; it
consists of at least 5 written assignments, of which the lowest grade is dropped, as well as a small programming assignment.
Reading list
Parts of
• R. Royall, Statistical Evidence: a likelihood paradigm ( Chapman & Hall/CRC, 1999)
• P. Grünwald, the Minimum Description Length Principle (MIT Press, 2007, freely available on internet)
• handouts that will be made available during the lectures
From the academic year 2022-2023 on, every student has to register for courses with the new enrollment tool MyStudyMap. There are two registration periods per year: registration for the fall semester
opens in July and registration for the spring semester opens in December. Please see this page for more information.
Please note that it is compulsory to both preregister and confirm your participation for every exam and retake. Not being registered for a course means that you are not allowed to participate in the
final exam of the course. Confirming your exam participation is possible until ten days before the exam.
Extensive FAQ's on MyStudymap can be found here.
By email: pdg@cwi.nl
The course is present on brightspace and we will use it heavily. | {"url":"https://www.studiegids.universiteitleiden.nl/courses/111448/foundations-of-statistics-and-machine-learning","timestamp":"2024-11-05T03:27:52Z","content_type":"text/html","content_length":"21463","record_id":"<urn:uuid:f65ae795-da9e-41cf-ae1c-d978406335aa>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00289.warc.gz"} |
Viewing PHA responses
This notebook is intended to show off the “rich display” features that allow objects like ARF, RMF, and images to be displayed graphically.
First load up the modules:
import numpy as np
from matplotlib import pyplot as plt
from sherpa.astro import io
from sherpa.astro import instrument
from sherpa.astro.plot import ARFPlot
from sherpa.utils.testing import get_datadir
For this notebook we shall use some of the data files from the Sherpa test repository, which is normally only installed when testing Sherpa.
def datafile(filename):
"""Access a data file from the sherpa test data"""
return get_datadir() + "/" + filename
The ARF
First we start with a Chandra ACIS Auxiliary Response File (ARF):
arf = io.read_arf(datafile("9774.arf"))
# Hide the full path name to make the plot title look nicer
arf.name = "9774.arf"
The first thing to do is print the structure, which displays the basic components:
• the energy grid over which the response is defined (the energ_lo and energ_hi fields, which are in keV)
• the specresp field, which gives the response (the effective area, in cm\(^2\)) for each energy bin
name = 9774.arf
energ_lo = Float64[1078]
energ_hi = Float64[1078]
specresp = Float64[1078]
bin_lo = None
bin_hi = None
exposure = 75141.231099099
ethresh = 1e-10
For notebook users we can just ask the notebook to display this object, which displays a plot of the data and some of the metadata stored with it:
<DataARF data set instance '9774.arf'>
<sherpa.astro.plot.ARFPlot object at 0x7f8bb7dc3910>
[Rich display: plot of the ARF, plus a summary (exposure 75141.2 s; 1078 bins; energy range 0.22 - 11 keV, bin size 0.0100002 keV; area range 0.533159 - 681.944 cm^2) and metadata including the mission/instrument, the object (3C 186, program description "The cluster around the powerful radio-loud quasar 3C186 at z=1.1"), the observation date, and the program that created the ARF (mkarf v0.6.2-0).]
We can also create a Sherpa plot object directly:
aplot = ARFPlot()
This structure contains the data needed to create a plot (and can be created even if no Sherpa plotting backend is available):
xlo = [ 0.22, 0.23, 0.24,...,10.97,10.98,10.99]
xhi = [ 0.23, 0.24, 0.25,...,10.98,10.99,11. ]
y = [61.9221,71.4127,81.3637,..., 0.5701, 0.5516, 0.5332]
xlabel = Energy (keV)
ylabel = cm$^2$
title = 9774.arf
histo_prefs = {'xlog': False, 'ylog': False, 'label': None, 'xerrorbars': False, 'yerrorbars': False, 'color': None, 'linestyle': 'solid', 'linewidth': None, 'marker': 'None', 'alpha': None, 'markerfacecolor': None, 'markersize': None, 'ecolor': None, 'capsize': None}
When a plotting backend is available we can display the data, which shows a plot essentially the same as the “rich display” above, but without the metadata:
The RMF
Displaying the Redistribution Matrix File (RMF) is harder, because it is an intrinsically two-dimensional object, as it describes how the physical properties of the X-ray signal (in this case, the
energy or wavelength) are mapped onto the detector properties (channel).
rmf = io.read_rmf(datafile("9774.rmf"))
rmf.name = "9774.rmf"
The matrix is stored in a compressed form, and hard to understand from the object display:
name = 9774.rmf
energ_lo = Float64[1078]
energ_hi = Float64[1078]
n_grp = UInt64[1078]
f_chan = UInt64[1481]
n_chan = UInt64[1481]
matrix = Float64[438482]
e_min = Float64[1024]
e_max = Float64[1024]
detchans = 1024
offset = 1
ethresh = 1e-10
The “rich display” picks 5 energies, spaced logarithmically across the energy response of the RMF, and shows the behavior of monochromatic emission at this energy, along with some of the metadata
related to the file.
We can see why fitting X-ray data can be hard, since 3 keV photons do peak at 3 keV but can also be observed down to 1 keV.
<DataRMF data set instance '9774.rmf'>
[Rich display: RMF plot of the response at five monochromatic energies, plus a summary (1024 channels; 1078 energies; energy range 0.22 - 11 keV, bin size 0.0100002 keV; channel range 1 - 1024) and metadata including the mission/instrument, the program that created the RMF (mkrmf - Version CIAO 4.3), the channel type, the minimum probability threshold, and the matrix contents.]
Note that there is no equivalent to the ARFPlot class for RMF.
New in Sherpa 4.16.0 is the ability to convert a RMF into a 2D image, which shows the relationship between channel (X axis) and energy (Y axis). It is essentially the same as the CIAO tool rmfimg.
We can convert a RMF into a DataIMG structure:
image_rmf = instrument.rmf_to_image(rmf)
As always, let’s see what is stored in it. Although the data in in 2D, the DataIMG structrure flattens it out into 1D arrays:
name = 9774.rmf
x0 = Int64[1103872]
x1 = Int64[1103872]
y = Float64[1103872]
shape = (1078, 1024)
staterror = None
syserror = None
sky = None
eqpos = None
coord = logical
However, we can use the rich display to show this data. Note that this uses a linear scale for the data, and so all we see is the “main” response, which shows the main peaks we saw in the line plot
Although not labelled, the X axis is in channel space. For the Chandra ACIS detector this has 1024 channels. The Y axis is energy range, which depends on how the RMF was built (it maps to the
ENERG_LO and ENERG_HI columns from the MATRIX block of the RMF, in this case accessible as rmf.energ_lo and rmf.energ_hi).
<DataIMG data set instance '9774.rmf'>
[Rich display: image of the RMF matrix, plus metadata (mission/satellite and instrument/detector).]
The matrix can be retrieved directly with rmf_to_matrix rather than rmf_to_image (we could reconstruct the data from the image_rmf structure, but the following is a lot more informative):
matinfo = instrument.rmf_to_matrix(rmf)
This object does not have a “nice” string representation, but it contains three fields, including the matrix itself (matinfo.matrix) and the energy grid (matinfo.energies), both used below.
The matrix is the 2D data shown above:
matinfo.matrix.min(), matinfo.matrix[matinfo.matrix>0].min(), matinfo.matrix.max()
(0.0, 8.81322481660618e-09, 0.13163885474205017)
This can be displayed with a log scale, to show off some of the secondary features we saw in the monochromatic energy response above. The horizontal lines are added to indicate rows which we shall
investigate later.
from matplotlib import colors
plt.imshow(matinfo.matrix, origin='lower', norm=colors.LogNorm(vmin=1e-3, vmax=0.2))
for pos in [200, 400, 600, 800]:
    plt.axhline(pos, alpha=0.5, c='orange')
We can use this data to try and reconstruct the monochromatic response plot from above. We can pick a row from the matrix, which will be the response for a photon at a fixed energy (well, a photon in
the finite energy range given by the corresponding element from the energ_lo and energ_hi fields).
Selecting values along the Y axis selects different ranges (and let’s us explore some of the features seen above). One difference to the rich display above is that this plot uses channel number for
the X axis rather than converting this to an “approximate” energy (as done above), by using the E_MIN and E_MAX fields from the EBOUNDS block of the RMF (available as rmf.e_min and rmf.e_max).
for idx in [200, 400, 600, 800]:
    # We could use matinfo.energies, but as we have the RMF object we use that instead.
    elo = rmf.energ_lo[idx]
    ehi = rmf.energ_hi[idx]
    plt.plot(np.arange(1, 1025), matinfo.matrix[idx, :], label=f"{elo:.2f} - {ehi:.2f} keV")
Looking at a different detector
The Sherpa test data directory contains a response file for the ROSAT PSPC-C instrument, which operated in the 1990s, and used a different detector to the CCD detector used in ACIS. We can see how
different by viewing the response using the techniques from above:
rsp_pspcc = io.read_rmf(datafile("pspcc_gain1_256.rsp"))
rsp_pspcc.name = "pspcc_gain1_256.rsp"
<DataRosatRMF data set instance 'pspcc_gain1_256.rsp'>
[Rich display: RMF plot of the response at five monochromatic energies, plus a summary (256 channels; energy range 0.054608 - 3.01 keV, bin size 0 - 0.016932 keV; channel range 1 - 256) and metadata including the mission/instrument, the channel type, the minimum probability threshold, and the matrix contents.]
<DataIMG data set instance 'pspcc_gain1_256.rsp'>
[Rich display: image of the RMF matrix, plus metadata (mission/satellite and instrument/detector).]
Note that the ROSAT RMF includes the effective area (i.e. ARF) terms, which is why the matrix values are greater than 1 and why the plot shows some vertical structure. The lower resolving power of
the instrument - compared to the ACIS CCD - is shown by the fact that the line is not as sharp as in the ACIS version above. If we used a Chandra grating RMF, the line would be much narrower (but it was
not included here as it is harder to see as there’s a lot more pixels).
Using the responses
How do we apply the response to a model?
from sherpa.astro.instrument import ARF1D, RMF1D, RSPModelNoPHA
from sherpa.models.basic import Delta1D, NormGauss1D
There are several ways of applying the response. Here we chose to use the “wrapper” models ARF1D and RMF1D to convert the DataARF and DataRMF structures into “convolution-style” models[\(\dagger\)].
[\(\dagger\)] technically only the RMF needs to be handled as a convolution model, but for historical reasons the ARF is handled the same way.
aconv = ARF1D(arf)
rconv = RMF1D(rmf)
Let’s create a model consisting of a delta function, at 2 keV, together with a gaussian centered at 6 keV and with a FWHM of 1 keV:
dmodel = Delta1D("delta")
gmodel = NormGauss1D("gauss")
dmodel.pos = 2
gmodel.pos = 6
gmodel.fwhm = 1
# Adjust the gaussian amplitude so that it is more visible.
gmodel.ampl = 100
model_base = dmodel + gmodel
<BinaryOpModel model instance '(delta + gauss)'>
Expression: delta + gauss
Component Parameter Thawed Value  Min   Max   Units
delta     pos       yes      2.0  -MAX  MAX
delta     ampl      yes      1.0  -MAX  MAX
gauss     fwhm      yes      1.0  TINY  MAX
gauss     pos       yes      6.0  -MAX  MAX
gauss     ampl      yes    100.0  -MAX  MAX
Let’s just check that both components have the integrate flag set to True (the composite model does not pass through the integrate setting of its components in Sherpa 4.16.0):
dmodel.integrate, gmodel.integrate
We can evaluate this to get “the truth” (for XSPEC additive models the per-bin value would have units of photon / cm\(^2\) / s, but for the Sherpa models we can give the ampl parameter (for these two
models) whatever units we want, so let’s also assume the same units as XSPEC.
y_base = model_base(rmf.energ_lo, rmf.energ_hi)
emid = (rmf.energ_lo + rmf.energ_hi) / 2
plt.plot(emid, y_base)
plt.xlabel("Energy (keV)")
plt.ylabel("photon cm$^{-2}$ s$^{-1}$");
The ARF is included by “convolving” the base model with the ARF model. Note that, as the ARF contains an exposure time, the model automatically includes this, which means that the output is now not a
rate. In fact, because the ARF has units of cm\(^2\), the model evaluation will calculate the number of photons per bin.
model_arf = aconv(model_base)
<ARFModelNoPHA model instance 'apply_arf((75141.231099099 * (delta + gauss)))'>
Expression: apply_arf((75141.231099099 * (delta + gauss)))
Component Parameter Thawed Value  Min   Max   Units
delta     pos       yes      2.0  -MAX  MAX
delta     ampl      yes      1.0  -MAX  MAX
gauss     fwhm      yes      1.0  TINY  MAX
gauss     pos       yes      6.0  -MAX  MAX
gauss     ampl      yes    100.0  -MAX  MAX
Each bin is multiplied by the ARF, so - since the ARF is not flat - the relative signal will change. The ARF is shown in orange (see the right axis) to also show the effective area at each energy.
y_arf = model_arf(rmf.energ_lo, rmf.energ_hi)
plt.plot(emid, y_arf)
plt.xlabel("Energy (keV)")
ax2 = plt.twinx();
emid2 = (arf.energ_lo + arf.energ_hi) / 2
ax2.plot(emid2, arf.specresp, alpha=0.4, c="orange", label="ARF")
We can repeat this for the RMF, noting that the output - because the ARF is not included - will have the unusual units of count / cm\(^2\) / s:
model_rmf = rconv(model_base)
<RMFModelNoPHA model instance 'apply_rmf((delta + gauss))'>
Expression: apply_rmf((delta + gauss))
Component Parameter Thawed Value  Min   Max   Units
delta     pos       yes      2.0  -MAX  MAX
delta     ampl      yes      1.0  -MAX  MAX
gauss     fwhm      yes      1.0  TINY  MAX
gauss     pos       yes      6.0  -MAX  MAX
gauss     ampl      yes    100.0  -MAX  MAX
An interesting part of this is that the RMF converts between physical units (energy or wavelength), which is used to evaluate the “wrapped” model (in this case model_base), and returns values in
channel space. This means that we no-longer supply the convolved model with energies, but with channels.
The obvious differences from above are that the relative intensity of the delta function and gaussian has drastically changed, and the blurring created by the RMF is visible (well, it is once
you change to a logarithmic scale for the Y axis).
channels = np.arange(1, 1025, dtype=np.int16)
y_rmf = model_rmf(channels)
plt.plot(channels, y_rmf)
plt.ylabel("count cm$^{-2}$ s$^{-1}$")
We can use the “approximate” energies from the RMF to get a plot more similar to the plot_data command (UI) or DataPHAPlot (direct use of the plotting classes):
emid_approx = (rmf.e_min + rmf.e_max) / 2
plt.plot(emid_approx, y_rmf)
plt.xlabel("Approximate Energy (keV)")
plt.ylabel("count cm$^{-2}$ s$^{-1}$");
We could combine both with an expression like rconv(aconv(model_base)), but we shall also use the RSPModelNoPHA class, which takes ARF, RMF, and the model as arguments:
model_both = RSPModelNoPHA(arf, rmf, model_base)
model_check = rconv(aconv(model_base))
<RSPModelNoPHA model instance 'apply_rmf(apply_arf((delta + gauss)))'>
Expression: apply_rmf(apply_arf((delta + gauss)))
Component Parameter Thawed Value  Min   Max   Units
delta     pos       yes      2.0  -MAX  MAX
delta     ampl      yes      1.0  -MAX  MAX
gauss     fwhm      yes      1.0  TINY  MAX
gauss     pos       yes      6.0  -MAX  MAX
gauss     ampl      yes    100.0  -MAX  MAX
<RMFModelNoPHA model instance 'apply_rmf(apply_arf((75141.231099099 * (delta + gauss))))'>
Expression: apply_rmf(apply_arf((75141.231099099 * (delta + gauss))))
Component Parameter Thawed Value  Min   Max   Units
delta     pos       yes      2.0  -MAX  MAX
delta     ampl      yes      1.0  -MAX  MAX
gauss     fwhm      yes      1.0  TINY  MAX
gauss     pos       yes      6.0  -MAX  MAX
gauss     ampl      yes    100.0  -MAX  MAX
The model display for model_both is interesting, since it does not include the exposure time![\(\dagger\dagger\)]
This means that the y_both output is a rate (count / s), whereas y_check has units of count.
[\(\dagger\dagger\)] Please check the Sherpa issues page as this behaviour may change.
y_both = model_both(channels)
y_check = model_check(channels)
plt.plot(channels, y_both, label="model_both", alpha=0.5)
plt.plot(channels, y_check, label="model_check", alpha=0.5)
So, here we have evaluated a model and passed it through both the ARF and RMF. | {"url":"https://sherpa.readthedocs.io/en/latest/ViewingPHAResponses.html","timestamp":"2024-11-13T18:47:06Z","content_type":"text/html","content_length":"334272","record_id":"<urn:uuid:2e0fe97a-9370-456e-b520-f8dcaef9eb53>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00295.warc.gz"} |
Leagues (statute) to Kens Converter
How to use this Leagues (statute) to Kens Converter
Follow these steps to convert a given length from the units of Leagues (statute) to the units of Kens.
1. Enter the input Leagues (statute) value in the text field.
2. The calculator converts the given Leagues (statute) into Kens in real time using the conversion formula, and displays the result under the Kens label. You do not need to click any button. If the input changes, the Kens value is re-calculated automatically.
3. You may copy the resulting Kens value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the Reset button present below the input field.
What is the Formula to convert Leagues (statute) to Kens?
The formula to convert given length from Leagues (statute) to Kens is:
Length[(Kens)] = Length[(Leagues (statute))] / 0.00043876171383121275
Substitute the given value of length in leagues (statute), i.e., Length[(Leagues (statute))] in the above formula and simplify the right-hand side value. The resulting value is the length in kens,
i.e., Length[(Kens)].
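For illustration, the same conversion as a small Python function (the function and constant names are ours; the numeric factor is this page's conversion factor):
# conversion factor used throughout this page: kens per statute league
KENS_PER_STATUTE_LEAGUE = 1 / 0.00043876171383121275

def statute_leagues_to_kens(leagues):
    """Convert a length from statute leagues to kens."""
    return leagues * KENS_PER_STATUTE_LEAGUE

print(round(statute_leagues_to_kens(25), 4))  # 56978.5358, matching Example 1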
Consider that an ancient road stretches for 25 statute leagues.
Convert this distance from statute leagues to Kens.
The length in leagues (statute) is:
Length[(Leagues (statute))] = 25
The formula to convert length from leagues (statute) to kens is:
Length[(Kens)] = Length[(Leagues (statute))] / 0.00043876171383121275
Substitute given weight Length[(Leagues (statute))] = 25 in the above formula.
Length[(Kens)] = 25 / 0.00043876171383121275
Length[(Kens)] = 56978.5358
Final Answer:
Therefore, 25 st.league is equal to 56978.5358 ken.
The length is 56978.5358 ken, in kens.
Consider that a historical expedition covered 50 statute leagues.
Convert this distance from statute leagues to Kens.
The length in leagues (statute) is:
Length[(Leagues (statute))] = 50
The formula to convert length from leagues (statute) to kens is:
Length[(Kens)] = Length[(Leagues (statute))] / 0.00043876171383121275
Substitute given weight Length[(Leagues (statute))] = 50 in the above formula.
Length[(Kens)] = 50 / 0.00043876171383121275
Length[(Kens)] = 113957.0715
Final Answer:
Therefore, 50 st.league is equal to 113957.0715 ken.
The length is 113957.0715 ken, in kens.
Leagues (statute) to Kens Conversion Table
The following table gives some of the most used conversions from Leagues (statute) to Kens.
Leagues (statute) (st.league) Kens (ken)
0 st.league 0 ken
1 st.league 2279.1414 ken
2 st.league 4558.2829 ken
3 st.league 6837.4243 ken
4 st.league 9116.5657 ken
5 st.league 11395.7072 ken
6 st.league 13674.8486 ken
7 st.league 15953.99 ken
8 st.league 18233.1314 ken
9 st.league 20512.2729 ken
10 st.league 22791.4143 ken
20 st.league 45582.8286 ken
50 st.league 113957.0715 ken
100 st.league 227914.143 ken
1000 st.league 2279141.4302 ken
10000 st.league 22791414.3025 ken
100000 st.league 227914143.025 ken
Leagues (statute)
A league (statute) is a unit of length used to measure distances. One statute league is equivalent to 3 miles or approximately 4.828 kilometers.
The statute league is defined as three miles, and it was historically used in various English-speaking countries for measuring distances, especially in land navigation and mapping.
Statute leagues are less commonly used today but may still appear in historical documents, literature, and some regional contexts. They provide a way to express distances in a scale larger than miles
but smaller than other large units like leagues nautical.
A ken is a historical unit of length used in various cultures, particularly in Asia. The length of a ken can vary depending on the region and context. In Japan, one ken is approximately equivalent to
6 feet or about 1.8288 meters.
The ken was traditionally used in architectural and construction measurements, particularly in the design of buildings and layout of spaces.
Ken measurements were utilized in historical architecture and construction practices in Asian cultures. Although not commonly used today, the unit provides historical context for traditional
measurement standards and practices in building and design.
Frequently Asked Questions (FAQs)
1. What is the formula for converting Leagues (statute) to Kens in Length?
The formula to convert Leagues (statute) to Kens in Length is:
Leagues (statute) / 0.00043876171383121275
2. Is this tool free or paid?
This Length conversion tool, which converts Leagues (statute) to Kens, is completely free to use.
3. How do I convert Length from Leagues (statute) to Kens?
To convert Length from Leagues (statute) to Kens, you can use the following formula:
Leagues (statute) / 0.00043876171383121275
For example, if you have a value in Leagues (statute), you substitute that value in place of Leagues (statute) in the above formula, and solve the mathematical expression to get the equivalent value
in Kens. | {"url":"https://convertonline.org/unit/?convert=leagues_statute-kens","timestamp":"2024-11-09T16:28:51Z","content_type":"text/html","content_length":"91501","record_id":"<urn:uuid:46ff7509-9626-4c20-bf25-98cd93459d97>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00850.warc.gz"} |
Graduate Thesis Papers | Mathematics & Computer Science | Bemidji State University
• Awe, Will: How Geometer’s Sketchpad Improves Student Learning
• Anderson, Bryan: Cognitively Guided Instruction (CGI): What's the Point? A Look into CGI and CGI's Potential Role in an 8th Grade Mathematics Class
• Benner, Jean: Anxiety in the Math Classroom
• Bolhuis, LeAnn: Interactive White Boards in a Secondary Mathematics Classroom
• Carter, Joan: Implementation of Statway for Non-STEM Majors at Two-Year Community Colleges with Focus on the Teacher’s Experience
• Cox, Ralph: Are Developmental Math Courses Being Offered at the Appropriate Time of Day to Optimize Student Success?
• Dahl, Laura: The Impact of Manipulatives on Learning in the Elementary and Middle School Mathematics Classroom
• Fairchild, Cadie: How to Reach all Types of Learners Through the Use of Manipulatives in Grades Three, Four, and Five
• Fairchild, Dan: Common Misconceptions and Instructional Best Practices for Teaching Fractions to Students in Grades Three, Four, and Five
• Geisler, Alexis: Tracking in Middle School Mathematics
• Hansen, Heidi: The Effects of the Use of Dynamic Geometry Software on Student Achievement and Interest
• Kruger, Sherri: Differentiated Instruction in the High School Mathematics Classroom
• Lightner, Larry: The Effects of Block Scheduling on AP Calculus AB Student Achievement
• MchLachlan, Jennifer: Improving Student Achievement Through Feedback
• Melby, Marcella: The Critical Components of a School-College Partnership Formed to Reduce the Enrollment in Remedial Courses in Mathematics of Incoming Freshman
• Mix, Amanda: The Impact of Math Anxiety in the Primary Grades
• Morris, Carly Jeff: Can the Differentiated Math Classroom be a Reality?
• Mutnansky, Christina: Manipulatives in the Secondary Mathematics Classroom Using a Traditional Algebra Text
• Nohner, Matthew: Year-Round Calendars at the High School Level
• Prestegord, Heather: Technology Use in the Middle School Mathematics Classroom
• Richgels, Amber Rae: Why are School Districts Abandoning the Core-Plus Mathematics Curriculum?
• Regneir, Jane: Cooperative Learning in the Mathematics Classroom
• Salscheider, Lynnea Marrie: Inclusion: Problems and Potential Solutions in Mathematics Instruction
• Seaver, Shannon: Learning Style Relationship to Motivation and Success in the Flipped vs Non-Flipped Classroom
• Seyfried, Nicole: Effects of Tracking Students in a Secondary Mathematics Classroom
• Smieja, Adams: Mathematics Classrooms at the Middle School Level: Does Grouping Make a Difference in Achievement?
• Smieja, Katie Ann Garrity: How IPads can be Used in the Mathematics Classroom to Improve Student Learning
• Sorenson, Amy Katherine: Student-Centered Mathematics in an Isolated Skill Environment: Constructionist Methods in Mathematics
• Stoddard, Cheryl: Teaching Euclidean Geometry Using Proof and Dynamic Geometry Software
• Stuewe, Jessica: Professional Learning Community Impacts on Student Achievement in Middle School Mathematics
• Strom, Jessica: Manipulatives in Mathematics Instruction
• Vettleson, Lawrence Jr.: Problem Solving Based Instruction in the High School Mathematics Classroom
• Westberg, Amie P.: The Impact and Effectiveness of Student-Centered Classroom Structure
• Wilke, Maureen K.: To What Degree will Differentiated Instruction Impact Student Grades in a Middle School Classroom?
• Wurdock, Timothy Michael: A Comparative Analysis of Japanese and U.S. Teaching Styles of Mathematics | {"url":"https://www.bemidjistate.edu/academics/mathematics-computer-science/student-resources/student-research/graduate-thesis-papers/","timestamp":"2024-11-03T01:06:59Z","content_type":"text/html","content_length":"58769","record_id":"<urn:uuid:f3067039-0f36-4b14-985b-e1815f89dacd>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00811.warc.gz"} |
Math for Data Science: self-study materials | EPAM Campus
Data is often referred to as "new oil" or "new gold" for a good reason. Data helps businesses to grow and prosper, predict trends, identify opportunities, and stay ahead of competitors by providing
insights into consumer behavior or certain market conditions before they actually occur. That's why specialists capable of transforming scattered data into valuable business insights are highly
prized. If you are interested in stepping into the field of Data Science, start with the materials recommended by EPAM Data specialists.
Fundamentals of mathematical analysis
Maxima and Minima
Differential equations
Linear algebra
Eigenvectors and Eigenvalues
Quadratic Forms
Mathematics for Machine Learning: Linear Algebra
A comprehensive course on Linear Algebra covering such topics as vectors and matrices, eigenvalues and eigenvectors, and their implementation in working with datasets. The course aims to help
students bridge the gap into linear algebra problems, and understand how to apply these concepts to machine learning.
Probability theory fundamentals
Probability theory
Bayes inference
Basic Concepts
• Eight-hour course that covers the essentials of statistics, and introduces the various methods used to collect, organize, summarize, interpret and reach conclusions about data
Hypothesis Testing
Maximum Likelihood Estimation
Optimization theory
Optimization for Data Scientists
Optimization is one of the three pillars that Data Science professionals must understand thoroughly. Familiarize yourself with its fundamentals.
Algorithms and data structures
Data structures
Sorting algorithms
Algorithm Complexity
• A part of the Data structures and Algorithms course, dedicated to Big O notation
Basics of Python/SQL
Python Environment
Introduction to Python and Data Science stack
• A quick crash course on both the Python programming language and its use for scientific computing
SQL basics
This is the “starter pack” to begin your Data science journey. If you find this specialization exciting and would like to dive deep into the world of data, check out our educational programs in Data
Science and join us to broaden your knowledge and enrich it with hands-on experience. | {"url":"https://campus.epam.uz/en/blog/555","timestamp":"2024-11-02T02:10:02Z","content_type":"text/html","content_length":"77289","record_id":"<urn:uuid:3ccb13b9-6292-4dfd-83ff-d6d02e39ef7f>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00264.warc.gz"} |
Shapes - Documentation - Unigine Developer
While a body simulates various types of physical behavior, a shape represents the volume of space occupied by a physical body. A physically simulated object usually has one body and one or several
shapes which allow objects to collide with each other (therefore, shapes are often referred to as collision shapes). Objects with shapes also fall under gravity, bounce off static surfaces or slide
along them. A body without a single shape assigned behaves as a dummy body that can be connected to other bodies using joints, but does not collide and is immune to gravity.
Basic shape types are as follows:
• Simple primitives. They are very fast and memory efficient. Simple primitives should be used whenever possible.
• Complex collision shapes composed of triangles. These shapes are slower and more memory demanding.
Simple primitives make collision calculations easier while keeping performance high and accuracy acceptable. Convex hulls provide higher precision, however, continuous collision detection is not
available for this type of shape. Therefore, convex hulls should not be used for fast-moving objects.
A shape doesn't have to duplicate the mesh it approximates. It is recommended to use simple primitives. Even though they are not precise, in the majority of cases they provide acceptable results.
The number of shapes should be kept as low as possible. Otherwise, heavy physics calculations will decrease the performance.
A shape cannot be created without a body and does not have its own position in world coordinates. It is always assigned to a body and positioned relative to it.
See also#
Programming implementation:
Shape Parameters#
All shapes, regardless of their type, have the following common parameters:
• Enabled: A flag indicating if a shape is enabled.
• Continuous: A flag indicating if continuous collision detection is enabled for the shape. Continuous collision detection is available for sphere and capsule shapes only.
• Mass: Mass of the shape. Changing the mass influences the density, which is calculated by dividing the mass by the shape volume. If there are several shapes assigned to a body (e.g., a set of convex hulls), the mass is distributed among them.
• Density: Density of the shape. Changing the density influences the mass, which is calculated by multiplying the shape volume by the density.
• Friction: Coefficient of friction of the shape. Defines how rough the shape's surface is. The higher the value, the less tendency the shape has to slide. If an object contains both a body and a shape with a specified friction parameter, only the shape's parameter is used.
• Restitution: Coefficient of restitution of the shape. Defines how bouncy the shape is when colliding. The minimum value of 0 indicates inelastic collisions (a piece of soft clay hitting the floor); the maximum value of 1 represents highly elastic collisions (a rubber ball bouncing off a wall). If an object contains both a body and a shape with a specified restitution parameter, only the shape's parameter is used.
• Position: Position of the shape in the coordinates of the body.
• Rotation: Rotation of the shape in the coordinates of the body.
• Physics Intersection: Physics Intersection bit mask of the shape.
• Collision mask: Collision bit mask of the shape. This mask is used to specify collisions of the shape with other ones.
• Exclusion mask: Exclusion bit mask of the shape. This mask is used to prevent collisions of the shape with other ones.
Our video tutorial on physics contains an overview of the shape parameters and clarification on how to use the Exclusion and Collision masks.
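To make the mass/density coupling concrete, here is a toy Python sketch (this is not the Unigine API; the class and method names are invented purely for illustration):

    class ToyShape:
        """Illustrative stand-in for a collision shape; not Unigine's API."""

        def __init__(self, volume, density=1.0):
            self.volume = volume    # fixed by the shape's geometry, e.g. m^3
            self.density = density  # e.g. kg/m^3

        @property
        def mass(self):
            # Mass is derived from density and the (fixed) shape volume.
            return self.density * self.volume

        @mass.setter
        def mass(self, m):
            # Changing the mass influences the density: density = mass / volume.
            self.density = m / self.volume

    sphere = ToyShape(volume=4.18879)  # unit sphere: 4/3 * pi * 1^3
    sphere.mass = 10.0
    print(sphere.density)              # ~2.3873, recomputed from the new mass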
Adding a Shape#
To add a shape via UnigineEditor, perform the following steps:
• Open the World Hierarchy window
• Select an object you want to assign a physical shape to.
• Go to the Physics tab in the Parameters window and assign a physical body to the selected object: a rigid body, ragdoll body or a dummy body.
• In the Shapes section below choose an appropriate type of shape and click Add.
• Set necessary shape parameters.
You can enable visualization of shapes by checking Helpers panel → Physics item → Shapes option (Visualizer should be enabled).
A sphere is the simplest and the fastest shape, as it has only one parameter: a radius. Continuous collision detection is available for spherical shapes. Therefore, passing through other objects even
when moving at a high speed is avoided.
Using the spherical shape for any arbitrary mesh ensures that its collisions will always be detected.
For a shape to fit your object, you can adjust the Radius of the sphere.
A capsule is also a very fast collision shape with continuous collision detection available. Capsules are convenient for approximation of elongated objects (pillars, etc.) as well as humanoid
characters, because it allows them to go up and down the stairs smoothly, without stumbling at each step (if the steps are not too high). It also ensures that character's limb will not get stuck
somewhere unexpectedly.
For a shape to fit your object, you can adjust the Radius and the Height of the capsule.
A cylinder can be used to approximate elongated shapes with flat ends (e.g., shafts, pylons, etc.). It is similar to a box shape.
For a shape to fit your object, you can adjust the Radius and the Height of the cylinder.
A box is a cuboid shape which can be used for approximation of the volume of various objects. It is suitable for walls, doors, stairs, parts of mechanisms, car bodies, and many other things. The length
of a box shape in each dimension can be chosen arbitrarily.
For a shape to fit your object, you can adjust the size of the box along each axis: Size X, Size Y, Size Z.
Convex Hull#
Convex hull is the slowest of all shapes and is used for objects having complex geometry. The created shape will always be convex, that is, holes and cavities of the mesh are ignored when generating
a convex hull. Instead, they are included into the shape volume. Convex shape is the smallest shape that can enclose vertices of the approximated mesh.
To generate a convex hull specify an approximation error value:
The Approximation error parameter makes it possible to reduce the number of vertices of the created shape. Simple and rough convex hulls with small number of vertices are processed faster, therefore,
it is recommended to keep the number of vertices as low as possible.
• By the value of 0, the shape precisely duplicates the mesh; the whole volume of it is enclosed.
• The higher the value, the less vertices there are in the created shape, but the more details are skipped.
Approximation error = 0 (left); approximation error = 0.1 (right)
To approximate a complex concave object and exclude cavities from its volume, use a set of autogenerated convex hulls.
A complex concave object approximated by a single convex hull (left) and an autogenerated set of convex hulls (right)
To add an autogenerated set of shapes specify the following parameters:
Recursion depth determines the degree of mesh decomposition. If 0 or a negative value is provided, only one shape will be created. The higher the value, the more convex shapes are to be generated.
Approximation error makes it possible to reduce the number of vertices of generated shapes. Simple and rough convex hulls with small number of vertices are processed faster, therefore, it is
recommended to keep the number of vertices as low as possible.
• By the value of 0, the shape precisely duplicates the mesh; the whole volume of it is enclosed.
• The higher the value, the less vertices there are in the created shape, but the more details are skipped.
Merging threshold determines the volume threshold for merging convex shapes after decomposition and can be used to reduce the number of generated shapes: the higher the value, the less convex shapes
are to be generated.
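The "holes and cavities are ignored" behavior described above is simply the mathematical convex hull. A small SciPy sketch (my choice of library for illustration; nothing here is Unigine-specific) shows a concave corner being swallowed into the enclosed volume:

    import numpy as np
    from scipy.spatial import ConvexHull

    # An L-shaped (concave) outline; the corner at (1, 1) forms the cavity.
    points = np.array([[0, 0], [2, 0], [2, 1], [1, 1], [1, 2], [0, 2]])
    hull = ConvexHull(points)

    print(points[hull.vertices])  # outer corners only; (1, 1) is excluded
    print(hull.volume)            # 3.5: in 2-D, "volume" is the enclosed area
                                  # (the L-shape itself has area 3.0)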
Last update: 2021-12-13
| {"url":"https://developer.unigine.com/en/docs/2.15/principles/physics/shapes/?rlang=cpp","timestamp":"2024-11-11T13:48:56Z","content_type":"text/html","content_length":"472223","record_id":"<urn:uuid:8b8b0ff4-6042-45c4-a5c2-225a84333f62>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00690.warc.gz"}
ML Lab VTU Archives - VTUPulse
Soft Computing Video Tutorial – Solved Numerical Examples and Implementation in Python
Artificial Intelligence Video Tutorial – Solved Numerical Examples and Implementation in Python
Explain the concepts of Entropy and Information Gain in Decision Tree Learning. While constructing a decision tree, the very first question to be answered is, Which Attribute Is the Best Classifier?
The central choice in the ID3 algorithm is selecting which attribute to test at each node in the tree. We would like to select
Entropy and Information Gain in Decision Tree Learning Read More »
What are appropriate problems for Decision tree learning? Although a variety of decision-tree learning methods have been developed with somewhat differing capabilities and requirements, decision-tree
learning is generally best suited to problems with the following characteristics: Video Tutorial 1. Instances are represented by attribute-value pairs. “Instances are described by a fixed set of
attributes (e.g.,
What are decision tree and decision tree learning? Explain the representation of the decision tree with an example. Decision Trees is one of the most widely used Classification Algorithm Features of
Decision Tree Learning Method for approximating discrete-valued functions (including boolean) Learned functions are represented as decision trees (or if-then-else rules) Expressive hypotheses space,
18CSL76 Artificial Intelligence and Machine Learning Laboratory – VTU AIML Lab and Theory 18CS71 Artificial Intelligence and Machine Learning Laboratory – 18CSL76 (VTU AIML Lab) covers the different
algorithms such as A* Search, A** Search, Find-S algorithms, Candidate elimination algorithm, Decision tree (ID3) algorithm, Artificial Neural Networks, Backpropagation Algorithm, Naïve Bayes
classifier for text classification,
Backpropagation Algorithm – Machine Learning – Artificial Neural Network In this tutorial i will discuss the Backpropagation Algorithm and its implementation in Python. Video Tutorial on
Backpropagation Algorithm BACKPROPAGATION(training_examples, η, n_in, n_out, n_hidden) Each training example is a pair of the form (𝑥, 𝑡), where 𝑥 is the vector of network input values, and
Python Program to Implement the Locally Weighted Regression Algorithm Exp. No. 10. Implement the non-parametric Locally Weighted Regression algorithm in Python in order to fit data points. Select the
appropriate data set for your experiment and draw graphs. Locally Weighted Regression Algorithm Regression: Regression is a technique from statistics that is used to predict values | {"url":"https://vtupulse.com/tag/ml-lab-vtu/","timestamp":"2024-11-12T15:50:36Z","content_type":"text/html","content_length":"110330","record_id":"<urn:uuid:951be9bf-f437-4f41-a260-6cb2f23353ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00603.warc.gz"}
Day 40: Clustering
A clustering algorithm looks at a number of data points and automatically finds data points that are related or similar to each other. In unsupervised learning, you are given a dataset with features
(x), but not target labels (y).
Because we don't have target labels (y), we are not able to tell the algorithm the "right answer" (y) that we want it to predict. Instead, we're going to ask the algorithm to find some
interesting structure in this data. A clustering algorithm looks for one particular type of structure in the data.
K-Means Intuition
The 2 key steps of k-means algorithm are:
• Assign every point to the cluster centroid, depending on what cluster centroid it's nearest to
• Move each cluster centroid to the average or mean of all the points that were assigned to it
Let's look at an example:
• The first thing that the k-means algorithm does is it will take a random guess at where might be the centers of the 2 clusters that you might ask it to find.
• In this example, we're going to ask it to find 2 clusters.
• The red and blue cross shown is a random initial guess, they are not particularly good guesses, but it's a start.
• After making the initial guess for the location of the cluster centroids (center of the cluster), it will go through all of the examples (orange/yellow dots), and for each of them, it will check
if it's closer to the red cross or blue cross
• It will then assign each of these points to whichever of the cluster centroids it's closer to.
• It will then take the mean of all the red points and move the red cross to the average (mean) location of the red dots; it will do the same for the blue.
• Now, we have a new location for the red cross and the blue cross.
• It will look through all 30 examples again, check whether each one is closer to the new red or blue cross location, and re-assign it to that cluster. After going through all
30, it will take the average/mean again, and it repeats this process over and over until there are no more changes to the colors of the points or to the locations of the cluster centroids.
• At this point, the k-means clustering algorithm has converged, because applying these 2 steps over and over results in no further changes to either the assignment of points to
centroids or the locations of the cluster centroids.
K-means algorithm
The first step is to randomly initialize k cluster centroids: Mu1, Mu2, ..., Muk
Remember, in our example, we have set k=2, the red cross would be the location of Mu1, and blue cross=Mu2
FYI, Mu1 and Mu2 are vectors, which have the same dimension as your training examples
Repeat {
# assign points to cluster centroids
for i = 1 to m:
c[i] := index (from 1 to k) of cluster centroid closest to x[i]
When you implement this algorithm, you may find that it's actually a little bit more convenient to minimize the square distance because the cluster centroid with the smallest square distance should
be the same as the cluster centroid with the smallest distance
The next step is to move cluster centroids (after all the points have been assigned to its respective cluster):
# move cluster centroids
for k=1 to K:
Muk := average of points assigned to cluster k
Let's say we have 4 training examples in cluster Mu1 (red cross), we would calculate the new Mu1 location like this: (x1 + x5 + x6 + x10)/4
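Putting the two steps together, here is a compact NumPy sketch of k-means (my own illustrative implementation, not taken from the course; it follows the notation above):

    import numpy as np

    def k_means(X, K, n_iters=100, seed=0):
        """Plain k-means: X has shape (m, n); K is the number of clusters."""
        rng = np.random.default_rng(seed)
        # Random initial guess: pick K training examples as the centroids.
        mu = X[rng.choice(len(X), size=K, replace=False)].astype(float)
        for _ in range(n_iters):
            # Step 1: assign each point to its closest centroid
            # (squared distance picks the same winner as distance).
            d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
            c = d2.argmin(axis=1)                  # c[i] in {0, ..., K-1}
            # Step 2: move each centroid to the mean of its assigned points.
            new_mu = mu.copy()
            for k in range(K):
                members = X[c == k]
                if len(members):                   # skip (keep) empty clusters
                    new_mu[k] = members.mean(axis=0)
            if np.allclose(new_mu, mu):            # converged: nothing moved
                return c, mu
            mu = new_mu
        return c, mu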
There is one corner case of this algorithm, which is what happens if a cluster has zero training examples assigned to it? In that case, during the second step, Muk would be trying to compute the
average of zero points, and that's not well defined. If that ever happens, the most common thing to do is to eliminate that cluster, and you will end up with (K - 1) clusters, or if you really need
K clusters, an alternative would be to randomly re-initialize that cluster centroid and hope that it gets assigned at least some points next time around, but it's more common when running k-means to
eliminate a cluster if no points are assigned to it. | {"url":"https://www.joankusuma.com/post/day-40-clustering","timestamp":"2024-11-08T00:48:39Z","content_type":"text/html","content_length":"1050481","record_id":"<urn:uuid:51c93a01-75c5-4566-a248-4fdeb7b03ec1>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00560.warc.gz"}
University of Alabama Repository :: Browsing Department of Information Systems, Statistics & Management Science by Author "Barrett, Bruce E."
Department of Information Systems, Statistics & Management Science
Permanent URI for this community
Browsing Department of Information Systems, Statistics & Management Science by Author "Barrett, Bruce E."
Now showing 1 - 8 of 8
Results Per Page
Sort Options
• Contributions to joint monitoring of location and scale parameters: some theory and applications
(University of Alabama Libraries, 2012) McCracken, Amanda Kaye; Chakraborti, Subhabrata; University of Alabama Tuscaloosa
Since their invention in the 1920s, control charts have been popular tools for use in monitoring processes in fields as varied as manufacturing and healthcare. Most of these charts are designed
to monitor a single process parameter, but recently, a number of charts and schemes for jointly monitoring the location and scale of processes which follow two-parameter distributions have been
developed. These joint monitoring charts are particularly relevant for processes in which special causes may result in a simultaneous shift in the location parameter and the scale parameter.
Among the available schemes for jointly monitoring location and scale parameters, the vast majority are designed for normally distributed processes for which the in-control mean and variance are
known rather than estimated from data. When the process data are non-normally distributed or the process parameters are unknown, alternative control charts are needed. This dissertation presents
and compares several control schemes for jointly monitoring data from Laplace and shifted exponential distributions with known parameters as well as a pair of charts for monitoring data from
normal distributions with unknown mean and variance. The normal theory charts are adaptations of two existing procedures for the known parameter case, Razmy's (2005) Distance chart and Chen and
Cheng's (1998) Max chart, while the Laplace and shifted exponential charts are designed using an appropriate statistic for each parameter, such as the maximum likelihood estimators.
• Contributions to outlier detection methods: some theory and applications
(University of Alabama Libraries, 2011) Dovoedo, Yinaze Herve; Chakraborti, Subhabrata; University of Alabama Tuscaloosa
Tukey's traditional boxplot (Tukey, 1977) is a widely used Exploratory Data Analysis (EDA) tools often used for outlier detection with univariate data. In this dissertation, a modification of
Tukey's boxplot is proposed in which the probability of at least one false alarm is controlled, as in Sim et al. 2005. The exact expression for that probability is derived and is used to find the
fence constants, for observations from any specified location-scale distribution. The proposed procedure is compared with that of Sim et al., 2005 in a simulation study. Outlier detection and
control charting are closely related. Using the preceding procedure, one- and two-sided boxplot-based Phase I control charts for individual observations are proposed for data from an exponential
distribution, while controlling the overall false alarm rate. The proposed charts are compared with the charts by Jones and Champ, 2002, in a simulation study. Sometimes, the practitioner is
unable or unwilling to make an assumption about the form of the underlying distribution but is confident that the distribution is skewed. In that case, it is well documented that the application
of Tukey's boxplot for outlier detection results in increased number of false alarms. To this end, in this dissertation, a modification of the so-called adjusted boxplot for skewed distributions
by Hubert and Vandervieren, 2008, is proposed. The proposed procedure is compared to the adjusted boxplot and Tukey's procedure in a simulation study. In practice, the data are often
multivariate. The concept of a (statistical) depth (or equivalently outlyingness) function provides a natural, nonparametric, "center-outward" ordering of a multivariate data point with respect
to data cloud. The deeper a point, the less outlying it is. It is then natural to use some outlyingness functions as outlier identifiers. A simulation study is performed to compare the outlier
detection capabilities of selected outlyingness functions available in the literature for multivariate skewed data. Recommendations are provided.
• The development of diagnostic tools for mixture modeling and model-based clustering
(University of Alabama Libraries, 2016) Zhu, Xuwen; Melnykov, Volodymyr; University of Alabama Tuscaloosa
Cluster analysis performs unsupervised partition of heterogeneous data. It has applications in almost all fields of study. Model-based clustering is one of the most popular clustering methods
these days due to its flexibility and interpretability. It is based on finite mixture models. However, the development of diagnostic tools and visualization tools for clustering procedures is
limited. This dissertation is devoted to assessing different properties of the clustering procedure. This report has four chapters. The summary of each chapter is given below: In the first
chapter we provide the practitioners with an approach to assess the certainty of a classification made in model-based clustering. The second chapter introduces a novel finite mixture model called
Manly mixture model. It is capable of modeling skewness in data and performs diagnostics on the normality of variables. In the third chapter we develop an extension of the traditional K-means
procedure that is capable of modeling skewness in data. The fourth chapter contributes to the ManlyMix R package, which is the developed software corresponding to our paper in Chapter 2.
• GA-Boost: a genetic algorithm for robust boosting
(University of Alabama Libraries, 2012) Oh, Dong-Yop; Gray, J. Brian; University of Alabama Tuscaloosa
Many simple and complex methods have been developed to solve the classification problem. Boosting is one of the best known techniques for improving the prediction accuracy of classification
methods, but boosting is sometimes prone to overfit and the final model is difficult to interpret. Some boosting methods, including Adaboost, are very sensitive to outliers. Many researchers have
contributed to resolving boosting problems, but those problems are still remaining as hot issues. We introduce a new boosting algorithm "GA-Boost" which directly optimizes weak learners and their
associated weights using a genetic algorithm, and three extended versions of GA-Boost. The genetic algorithm utilizes a new penalized fitness function that consists of three parameters (a, b, and
p) which limit the number of weak classifiers (by b) and control the effects of outliers (by a) to maximize an appropriately chosen p-th percentile of margins. We evaluate GA-Boost performance
with an experimental design and compare it to AdaBoost using several artificial and real-world data sets from the UC-Irvine Machine Learning Repository. In experiments, GA-Boost was more
resistant to outliers and resulted in simpler predictive models than AdaBoost. GA-Boost can be applied to data sets with three different weak classifier options. We introduce three extended
versions of GA-Boost, which performed very well on two simulation data sets and three real world data sets.
• On robust estimation of multiple change points in multivariate and matrix processes
(University of Alabama Libraries, 2017) Melnykov, Yana; Perry, Marcus B.; University of Alabama Tuscaloosa
There are numerous areas of human activities where various processes are observed over time. If the conditions of the process change, it can be reflected through the shift in observed response
values. The detection and estimation of such shifts is commonly known as change point inference. While the estimation helps us learn about the process nature, assess its parameters, and analyze
identified change points, the detection focuses on finding shifts in the real-time process flow. There is a vast variety of methods proposed in the literature to target change point detections in
both settings. Unfortunately, the majority of procedures impose very restrictive assumptions. Some of them include the normality of data, independence of observations, or independence of subjects
in multisubject studies. In this dissertation, a new methodology, relying on more realistic assumptions, is developed. This dissertation report includes three chapters. The summary of each
chapter is provided below. In the first chapter, we develop methodology capable of estimating and detecting multiple change points in a multisubject single variable process observed over time. In
the second chapter, we introduce methodology for the robust estimation of change points in multivariate processes observed over time. In the third chapter, we generalize the ideas presented in
the first two chapters by developing methodology capable of identifying multiple change points in multisubject matrix processes observed over time.
• Some contributions to univariate nonparametric tests and control charts
(University of Alabama Libraries, 2017) Zheng, Rong; Chakraborti, Subhabrata; University of Alabama Tuscaloosa
In general, statistical methods have two categories: parametric and nonparametric. Parametric analysis is usually made based on information regarding the probability distribution of the random
variable. While, nonparametric method is also referred as a distribution-free procedure, which does not require prior knowledge of the distribution of the random variable. In reality, few cases
allow practitioners to gain full knowledge of a random variable and tell the probability distribution for sure. Hence, there are two choices for practitioners. One can still use the parametric
methods due to the scientific evaluations or the simplification of situation, with an assumption of the parametric distribution. Alternatively, one can directly apply the nonparametric methods
without having much knowledge of the distribution. The conclusions from the parametric methods are valid as long as the assumptions are substantiated. These assumptions would help solving
problems, but also risky because making a wrong assumption might be dangerous. Hence, nonparametric techniques would be a preferable alternative. One chief advantage of the nonparametric methods
lies in its relaxation of the shapes of the distributions, namely, distribution-free property. Hence, from a research point of view, new methodology with nonparametric techniques applied, or
further investigation related to existing nonparametric techniques could be interesting, informative and valuable. All research in this matter contributes to univariate nonparametric tests and
control charts.
• Three essays on improving ensemble models
(University of Alabama Libraries, 2013) Xu, Jie; Gray, J. Brian; University of Alabama Tuscaloosa
Ensemble models, such as bagging (Breiman, 1996), random forests (Breiman, 2001a), and boosting (Freund and Schapire, 1997), have better predictive accuracy than single classifiers. These
ensembles typically consist of hundreds of single classifiers, which makes future predictions and model interpretation much more difficult than for single classifiers. Breiman (2001b) gave random
forests a grade of A+ in predictive performance, but a grade of F in interpretability. Breiman (2001a) also mentioned that the performance of an ensemble model depends on the strengths of the
individual classifiers in the ensemble and the correlations among them. Reyzin and Schapire (2006) stated that "the margins explanation basically says that when all other factors are equal,
higher margins result in lower error," which is referred to as the "large margin theory." Shen and Li (2010) showed that the performance of an ensemble model is related to the mean and the
variance of the margins. In this research, we improve ensemble models from two perspectives, increasing the interpretability and/or decreasing the test error rate. We first propose a new method
based on quadratic programming that uses information on the strengths of the individual classifiers in the ensemble and their correlations, to improve or maintain the predictive accuracy of an
ensemble while significantly reducing its size. In the second essay, we improve the predictive accuracy of random forests by adding an AdaBoost-like improvement step to random forests. Finally,
we propose a method to improve the strength of the individual classifiers by using fully-grown trees fitted on weighted resampling training data and then combining the trees by using the AdaBoost
• Three essays on the use of margins to improve ensemble methods
(University of Alabama Libraries, 2012) Martinez Cid, Waldyn Gerardo; Gray, J. Brian; University of Alabama Tuscaloosa
Ensemble methods, such as bagging (Breiman, 1996), boosting (Freund and Schapire, 1997) and random forests (Breiman, 2001) combine a large number of classifiers through (weighted) voting to
produce strong classifiers. To explain the successful performance of ensembles and particularly of boosting, Schapire, Freund, Bartlett and Lee (1998) developed an upper bound on the
generalization error of an ensemble based on the margins, from which it was concluded that larger margins should lead to lower generalization error, everything else being equal (sometimes
referred to as the "large margins theory"). This result has led many researchers to consider direct optimization of functions of the margins (see, e.g., Grove and Schuurmans, 1998; Breiman, 1999;
Mason, Bartlett and Baxter, 2000; and Shen and Li, 2010). In this research, we show that the large margins theory is not sufficient for explaining the performance of AdaBoost. Shen and Li (2010)
and Xu and Gray (2012) provide evidence suggesting that generalization error might be reduced by increasing the mean and decreasing the variance of the margins, which we refer to as "squeezing"
the margins. For that reason, we also propose several alternative techniques for squeezing the margins and evaluate their effectiveness through simulations with real and synthetic data sets. In
addition to the margins being a determinant of the performance of ensembles, we know that AdaBoost, the most common boosting algorithm, can be very sensitive to outliers and noisy data, since it
assigns observations that have been misclassified a higher weight in subsequent runs. Therefore, we propose several techniques to identify and potentially delete noisy samples in order to improve
its performance. | {"url":"https://ir.ua.edu/browse/author?scope=3a17eea7-5599-498d-97a5-a89192781f24&value=Barrett,%20Bruce%20E.","timestamp":"2024-11-04T15:06:13Z","content_type":"text/html","content_length":"529245","record_id":"<urn:uuid:255eb2bd-3abd-4b35-aea8-7efc3c0423d0>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00059.warc.gz"} |
Conditional Remix & Share Permitted
CC BY-NC-SA
Module 2 builds on students' previous work with units and with functions from Algebra I, and with trigonometric ratios and circles from high school Geometry. The heart of the module is the study of
precise definitions of sine and cosine (as well as tangent and the co-functions) using transformational geometry from high school Geometry. This precision leads to a discussion of a mathematically
natural unit of rotational measure, a radian, and students begin to build fluency with the values of the trigonometric functions in terms of radians. Students graph sinusoidal and other trigonometric
functions, and use the graphs to help in modeling and discovering properties of trigonometric functions. The study of the properties culminates in the proof of the Pythagorean identity and other
trigonometric identities.
Find the rest of the EngageNY Mathematics resources at https://archive.org/details/engageny-mathematics.
| {"url":"https://openspace.infohio.org/browse?f.keyword=circles","timestamp":"2024-11-11T04:27:53Z","content_type":"text/html","content_length":"161155","record_id":"<urn:uuid:aae0b41f-50b9-4f13-a770-6d7bd1e2ad78>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/WARC/CC-MAIN-20241111024756-20241111054756-00525.warc.gz"}
VIF of Descriptors
The command Tools > VIF of Descriptors... supports the detection of multicollinearities by means of the variance inflation factor (VIF). The user has to select the variables to be included by ticking
off the corresponding check boxes. In general one starts with the selection of all variables, and proceeds by repeatedly deselecting variables showing a high VIF. Ideally, the VIF values should be
below 10.
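For readers who want the underlying computation (this sketch is my own illustration, not part of the ImageLab tool): the VIF of descriptor j is 1 / (1 - R_j^2), where R_j^2 is the coefficient of determination from regressing descriptor j on all the other descriptors.

    import numpy as np

    def vif(X):
        """VIF per column of X (rows = pixels, columns = spectral descriptors)."""
        X = np.asarray(X, dtype=float)
        out = []
        for j in range(X.shape[1]):
            y = X[:, j]
            # Regress descriptor j on all other descriptors (with intercept).
            A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            r2 = 1.0 - ((y - A @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
            # An exact linear combination gives R^2 = 1, hence an infinite VIF.
            out.append(np.inf if r2 >= 1.0 else 1.0 / (1.0 - r2))
        return np.array(out)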
As the calculation of the VIF can be quite time consuming, you may choose to use only a random sample of 1000 pixels to calculate the VIF. This increases the speed of calculation considerably, even
though the accuracy of the VIF values is degraded. However this should be sufficient to get a rough overview.
For descriptor sets with fewer than 25 spectral descriptors, the calculation of the VIF values is performed automatically following any change of the selected descriptors. If a set contains a
higher number of descriptors, the user can decide when to calculate the VIF values by clicking the "start the calculation" button.
Hint: An infinite VIF value indicates that the corresponding variable may be expressed exactly by a linear combination of other variables (which show an infinite VIF as well). | {"url":"http://imagelab.at/help/vif_descriptors.htm","timestamp":"2024-11-04T08:41:27Z","content_type":"text/html","content_length":"3657","record_id":"<urn:uuid:005346e5-1eaa-4c87-a7c0-235b019fffc6>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00286.warc.gz"} |
Why Learn Mathematics?
A strong background in mathematics is vital for many careers, particularly those in sciences, technology, engineering, finance, and data analysis. Here's a list of 20 popular jobs where a solid
understanding of mathematics is important:
1. Nurse: Uses mathematics to calculate medication dosages and measure vital signs.
2. Teacher: Employs basic to advanced mathematics depending on the educational level to teach students.
3. Pilot: Uses mathematics for navigation, calculating distances, and fuel requirements.
4. Architect: Applies geometry and mathematical modelling to design buildings and structures.
5. Pharmacist: Utilises mathematics to ensure correct dosages and compound medications.
6. Accountant: Employs mathematics for financial reporting, tax calculations, and auditing.
7. Real Estate Agent: Uses mathematics to calculate mortgage payments, commissions, and property values.
8. Graphic Designer: Applies proportions and geometry in design layouts and visual concepts.
9. Construction Manager: Uses mathematics for project estimations, budgeting, and resource allocation.
10. Software Engineer: Applies principles of computer science and mathematical analysis to design, develop, and test software applications and systems.
11. Marketing Analyst: Uses statistics and data analysis to understand market trends and measure campaign effectiveness.
12. Chef: Utilises ratios and measurements in the preparation and scaling of recipes.
13. Interior Designer: Applies geometry and spatial reasoning to create harmonious and functional living spaces.
14. Electrician: Uses mathematics to calculate current, voltage, resistance, and power requirements.
15. Mechanic: Employs mathematics to diagnose vehicle issues and calculate the needed parts and labour.
16. Plumber: Utilises mathematics to measure, cut, and install piping correctly.
17. Land Surveyor: Applies trigonometry and geometry to determine land boundaries.
18. Insurance Agent: Uses probability and statistics to determine insurance rates and risk assessment.
19. Human Resources Specialist: Applies mathematics in the analysis of salary data and the calculation of benefits.
20. Logistics Coordinator: Uses mathematics for optimising route planning and managing inventory levels. | {"url":"https://www.transum.org/Maths/Skills/Why/?ID=249","timestamp":"2024-11-04T19:04:48Z","content_type":"text/html","content_length":"28924","record_id":"<urn:uuid:1e5ca9e0-75b4-4017-bc67-ba88ceaf6d02>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00520.warc.gz"} |
Fortnightly links (127)
• Liquid Tensor Experiment is the work-in-progress on Peter Scholze's challenge to formalise an important milestone theorem from his work with Dustin Clausen. See his guest blog post on the Xena
Project blog.
It's also a nice pun on a progressive metal band's name.
• Grothendieck's schemes in algebraic geometry is another update from the formalisation crowd, but now not from the Lean users (not this one), and rather from Isabelle users. They've successfully
implemented schemes and proven some foundational results about them. Exciting times!
• Dylan Spence: A note on semiorthogonal indecomposability of some Cohen–Macaulay varieties discusses the indecomposability of two flavours of the derived category (perfect complexes resp. bounded
derived category of coherent sheaves) of singular varieties. For a Cohen–Macaulay variety the dualizing complex is a sheaf, and one can talk about its base locus. Dylan shows that the
Kawatani–Okawa result for indecomposability of derived categories of smooth varieties with finite canonical base locus generalises to this setting, suitably replacing skyscrapers of possibly
singular points by Koszul zero-cycles, which define perfect complexes with support a closed point. Cool stuff! | {"url":"https://pbelmans.ncag.info/blog/2021/04/28/fortnightly-links-127/","timestamp":"2024-11-11T16:58:56Z","content_type":"text/html","content_length":"21326","record_id":"<urn:uuid:8d7adfd0-ecd6-41bd-8aec-875820be2d69>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00853.warc.gz"} |
Pre-allocate arrays to improve efficiency
Recently Charlie Huang showed how to use the SAS/IML language to compute an exponentially weighted moving average of some financial data. In the commentary to his analysis, he said:
I found that if a matrix or a vector is declared with specified size before the computation step, the program’s efficiency would be significantly improved. It may suggest that declaring
matrices explicitly could be a good habit for SAS/IML programming.
Charlie has stated a general rule: in matrix/vector languages such as SAS/IML, MATLAB, or R, you should allocate space for a matrix outside of a loop, rather than using concatenation to grow the
matrix inside of a loop. I cover this in Chapter 2 (p. 42-43) of my book, Statistical Programming with SAS/IML Software. You can download Chapter 2 at no cost from the book's Web site.
Do Not Grow Arrays Dynamically
This general rule is relevant when you compute values of a matrix one element at a time. That is, you are using np separate computations to fill all elements of an n x p matrix.
Suppose that you want to compute a vector that contains the results of several similar computations. Naturally, you write a DO loop and compute each value within the loop. For example, the following
SAS/IML statements define a SAS/IML module and call it eight times within a DO loop:
proc iml;
start Func(x);
/** compute any quantity **/
return (ssq(x)); /** sums of squares **/
finish;
/** NOT EFFICIENT **/
/** grow the result array dynamically **/
do i = 1 to 8;
x = i:8;
result = result || Func(x);
end;
print result;
The program is not efficient because it uses the horizontal concatenation operator (||) to grow the result vector dynamically within the DO loop. After the ith iteration, the vector contains i
elements, but during the ith iteration, the previous i – 1 elements are needlessly copied from the old array to the new (larger) array. If the DO loop iterates n times, then
1. the first element is copied n – 1 times
2. the second element is copied n – 2 times
3. the third element is copied n – 3 times, and so forth
In summary, when you grow an array dynamically within a loop, there are n allocations and n (n – 1) / 2 elements are needlessly copied.
A Better Approach: Pre-Allocate the Array
When you know the ultimate size of an array, it is best to allocate the array prior to the loop. You can then assign values to the elements of the array inside the loop, as shown in the following
/** EFFICIENT: Pre-allocate result array **/
result = j(1, 8);
do i = 1 to 8;
x = i:8;
result[i] = Func(x);
end;
This computation does not contain any needless allocations, nor any unnecessary copying of elements that were previously computed.
This technique is essential for efficient sampling in the SAS/IML language: when you want random values from a specified distribution (such as the normal distribution), you should pre-allocate a
matrix and then call the RANDGEN subroutine once in order to fill the matrix with random numbers.
6 Comments
Hello Rick!
Thank you for your blog! I know that it is very difficult to come up with interesting ideas on a schedule:) Even though I am not using IML currently, I still can find a lot of very useful things
I am using Matlab, and I also find that if you have two nested loops, then it runs much faster if you set the inside loop to go through the rows and the outside loop to go through the columns,
especially for big matrices. I think it has to do with the way the matrices are stored in the memory. How is it done in IML?
Yes, MATLAB and R both store their matrices column-wise. In contrast, SAS/IML software stores matrices row-wise. I haven't done the computation, but I'd wager that for large matrices it is more
efficient to access the matrix in a way that is compatible with its storage structure.
| {"url":"https://blogs.sas.com/content/iml/2011/06/20/pre-allocate-arrays-to-improve-efficiency.html","timestamp":"2024-11-03T16:09:12Z","content_type":"text/html","content_length":"46392","record_id":"<urn:uuid:f202990e-e2c1-49e5-90de-4ee7562fcd72>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00882.warc.gz"}
Slope of the sine curve
In another post about representing the sine and cosine as infinite series, I tried to show how the series form easily demonstrates the correctness of the derivatives of these functions: that if y = sin(x), then dy/dx = cos(x).
But unfortunately this argument is circular (I think), since the series comes from a Taylor series, which is generated using the first derivative (and the second, third and more derivatives as well).
In this post, I want to do two things: show Strang's picture relating position and velocity on the unit circle, and then sketch the start of his more rigorous derivation.
The triangles for velocity and position are similar, just rotated by π / 2.
It is clear from the diagram that at any point, the y-component of the velocity is cos(t), while the y-position is sin(t). Thus, the rate of change of sin(t) is cos(t). This is the result we've been
seeking. Similarly, the rate of change of the x-position, cos(t), is -sin(t).
Strang also derives this result more rigorously starting on p. 64. That derivation is a bit complicated, although not too bad, and I won't follow the whole thing here. It uses his standard approach
as follows:
(sin(x + h) - sin(x)) / h
Applying a result found using just the Pythagorean theorem earlier (p. 31) for sin(s + t):
sin(s + t) = sin(s) cos(t) + cos(s) sin(t)
He comes up with this expression for the difference quotient:
(sin(x + h) - sin(x)) / h = sin(x) (cos(h) - 1) / h + cos(x) sin(h) / h
The problem is then to determine what happens to these two expressions in the limit as h -> 0. The first one is more interesting. As h gets small, | cos(h)-1 | gets smaller like h^2, so the ratio
goes to 0.
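As a quick numeric sanity check of both limits (my addition, not from the original post, and in Python rather than the post's R):

    import numpy as np

    h = np.array([1.0, 0.1, 0.01, 0.001])
    print((np.cos(h) - 1) / h)  # -> 0; the numerator shrinks like h^2
    print(np.sin(h) / h)        # -> 1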
R can help us see better: a plot of the first expression, (cos(h) - 1)/h, shows it converging to 0, and a plot of the second, sin(h)/h, shows it converging to 1, leaving us with simply cos(x). | {"url":"https://telliott99.blogspot.com/2009/08/slope-of-sine-curve.html","timestamp":"2024-11-12T21:37:40Z","content_type":"application/xhtml+xml","content_length":"83579","record_id":"<urn:uuid:7d4aa50c-f7cc-49b2-9430-4c62b88946e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00242.warc.gz"}
Introduction to probability and mathematical statistics 2nd edition pdf download
25 Oct 2009 List of download links for free statistics e-books (in PDF format), level ranges from “Special Distributions,” “More Models,” and “Mathematical Statistics. IPSUR: Introduction to
Probability and Statistics Using R by G. Jay
Jul 14, 2018 · Statistical Inference (PDF) 2nd Edition builds theoretical statistics from the first principles of probability theory. Starting from the basics of probability, the authors develop the
theory of statistical inference using techniques, definitions, and concepts that are statistical and are natural extensions and consequences of previous concepts. (PDF) Statistics: An Introduction
Using R (2nd Edition) 2 Statistics: An In troduction Using R (2nd Edition) e.g., deriving the least squares co efficien ts yet going directly to lm – but for the most part he uses this metho d to go od
effect. Schaum's Outline of Probability, Second Edition (2nd ed.) Schaum's Outline of Probability, Second Edition (2nd ed.) by Seymour Lipschutz.
An Introduction to Probability and Statistical Inference ...
Introduction to Probability and Statistical Inference, Second Edition, guides you through probability models and statistical methods and helps you to think critically about various concepts. Written
by award-winning author … - Selection from An Introduction to Probability and Statistical Inference, 2nd Edition [Book]
Introduction to Probability and Statistics Second Edition ... Download Introduction to Probability and Statistics Second Edition PDF eBook Introduction to Probability and Statistics Second Edition
INTRODUCTION TO PROBABILITY AND STATISTICS SECOND EDITION EBOOK AUTHOR BY UNITED STATES GOVERNMENT PRINTING OFFICE Introduction To Probability And Statistics Second Edition eBook - Free of
Registration Rating: Introduction To Probability And Mathematical Statistics ... It's easier to figure out tough problems faster using Chegg Study. Unlike static PDF Introduction To Probability And
Mathematical Statistics 2nd Edition solution manuals or printed answer keys, our experts show you how to solve each problem step-by-step. Introduction to Probability and Mathematical Statistics ...
An Introduction to Probability and Statistics, 3rd Edition ... An Introduction to Probability and Statistics, Third Edition is an ideal reference and resource for scientists and engineers in the
fields of statistics, mathematics, physics, industrial management, and engineering. The book is also an excellent text for upper-undergraduate and graduate-level students majoring in probability and
statistics. Download Probability and Statistics for Engineers Pdf Ebook Note: If you're looking for a free download links of Probability and Statistics for Engineers Pdf, epub, docx and torrent then
this site is not for you. Ebookphp.com only do ebook promotions online and we does not distribute any free download of ebook on this site. Introduction to Probability and Mathematical Statistics ...
The Second Edition of INTRODUCTION TO PROBABILITY AND MATHEMATICAL STATISTICS focuses on developing the skills to build probability (stochastic) models. Lee J. Bain and Max Engelhardt focus on the
mathematical development of the subject, with examples and exercises oriented toward applications.
Introduction to Probability 2nd Edition Problem Solutions
Today, probability theory is a well-established branch of mathematics that Read more about Introduction to Probability Publisher: American Mathematical Society. Language: English. Read this book. PDF
· Hardcover Color Reviewed by Hasan Hamdan, Professor of Statistics, James Madison University on 6/20/17. 25 Oct 2009 List of download links for free statistics e-books (in PDF format), level ranges
from “Special Distributions,” “More Models,” and “Mathematical Statistics. IPSUR: Introduction to Probability and Statistics Using R by G. Jay 1.7 Intersection probability of random phenomena . . .
. . . . . . . 19 For further understanding it is necessary to introduce some concepts that we try to explain equal to the first machinery 90%, the 2nd machine and 80% in the third 85% of. Probability
& Statistics - books for free online reading: probability theory, randomness, stochastic processes, Markov chains, mathematical statistics. 2008, 634 pp, 6.2MB, PDF. Applied Statistics by Mohammed A.
Shayib, 2013, 300 pp, 7.3MB, PDF 412 pp, 1.3MB, PDF. Introduction to Probability: Second Revised Edition A dump of all the data science materials (mostly pdf's) that I have accumulated over the years
- tohweizhong/pdf-dump. | {"url":"https://megadocsjunv.netlify.app/introduction-to-probability-and-mathematical-statistics-2nd-edition-pdf-download-fab.html","timestamp":"2024-11-10T02:23:44Z","content_type":"text/html","content_length":"33687","record_id":"<urn:uuid:335d8d27-822b-499f-a7ab-3ad673fcd0d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00857.warc.gz"} |
Reciprocal Angle Converter
Measurement of various quantities has been an integral part of our lives since ancient times. In this modern era of automation, we need to measure quantities more so than ever. So, what is the
importance of the Reciprocal Angle converter? The purpose of the Reciprocal Angle converter is to provide the Reciprocal Angle in the unit that you require, irrespective of the unit in which it was
previously defined. Conversion of these quantities is just as important as measuring them. Reciprocal Angle conversion helps in converting different units of Reciprocal Angle. A reciprocal angle is the
reciprocal of the angle formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. There are various units which help us define Reciprocal Angle,
and we can convert the units according to our requirement. unitsconverters.com provides a simple tool that gives you conversion of Reciprocal Angle from one unit to another. | {"url":"https://www.unitsconverters.com/en/Reciprocal-Angle-Conversions/Measurement-1186","timestamp":"2024-11-05T12:55:43Z","content_type":"application/xhtml+xml","content_length":"109392","record_id":"<urn:uuid:7532c159-fa5d-4911-b307-b7ba945a6ca2>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00773.warc.gz"}
Free Printable Roman Number 2 in PDF
Roman Numeral 2-Roman numerals are a numeric system that originated in ancient Rome. They are used to represent numbers in the base 10 decimal system. Roman numerals use the letters I, V, X, L, C, D,
and M to represent the values 1, 5, 10, 50, 100, 500, and 1000 respectively.
The value of a number represented by Roman numerals is determined by the sum of the values of the individual symbols. For example, the number XV (15) is represented as 10 + 5 = 15. In addition to addition, Roman numerals can be used for subtraction when a smaller value precedes a larger one. For instance, IV (4) represents 5 − 1 = 4.
Number 2 Roman Numeral is II. This is a simple numeral that can be easily memorized. The number two is one of the most important numbers in mathematics and geometry. It is the only even prime number.
The number two also appears in the Fibonacci Sequence.
Roman Numeral 2 Chart
Looking for a Roman Numeral 2 chart? You’ve come to the right place! This free, printable chart shows the Roman numerals from 1 to 100. Just click on the ‘print’ button below and you’re good to go!
If you’re new to Roman numerals, start by memorizing the first 10: I (1), II (2), III (3), IV (4), V (5), VI (6), VII (7), VIII (8), IX (9), and X (10). Once you’ve got those down, the rest is a
piece of cake!
II – The most basic way to write 2 in Roman numerals is with two I symbols.
Two Roman Numeral PDF is going to be very beneficial for learning numerals.
There are a few different ways to write Roman numerals. The most common way is to use the basic symbols I, V, X, L, C, D, and M. These symbols represent the values 1, 5, 10, 50, 100, 500, and 1000
respectively. To write a number using Roman numerals, you simply combine these symbols together. For example, the number 12 can be written as XII (10+1+1), and the number 123 can be written as CXXIII (100+10+10+1+1+1).
Another way to write Roman numerals is to use subtraction. This is done by writing a smaller-value symbol immediately before a larger one, so that the smaller value is subtracted from the larger; for example, IV represents 5 − 1 = 4.
Roman numeral 2 is often used on its own to represent the number 2. This can be seen in many everyday situations, such as when someone writes a check for $2 or when a clock says it is 2:00. Two is
also a very common number in mathematical equations, so the Roman numeral 2 shows up there quite often as well.
Roman Numeral 2
The Roman numeral 2 represents the number 2. It is written as II in upper case and ii in lower case.
The number 2 has been represented by various symbols throughout history. The Egyptians used the symbol for a pair of eyes, which they believed was the key to seeing everything in the world. The
Sumerians and Babylonians used the symbol of a wedge, which represented strength and stability. The early Romans used the symbol of two arrows crossed to represent war. This was later replaced by the
letter ‘N’ with two lines through it, which represented peace. The Roman numeral 2 is still used today, primarily in mathematical and scientific notation. It is also used to indicate a second-place
finish, as in Olympic games or horse racing.
Number 2 Roman Numeral PDF is the best tool for understanding the concept of numerals.
Roman Numeral 2 is one of the most common numerals used in the world. It is used to represent a number of things, including:
-The number two (2)
-A quantity or amount consisting of two units
-Something that is divided into two parts
Roman Numeral 2 is also used to represent a number of other things, including:
-The second in a series or sequence
-An ordinal number meaning “second”
-A person or thing ranked second
Roman Numeral 2 is an important numeral in many different fields, including mathematics, science, and history.
How to write 2 in roman numerals
There are a few rules to remember when writing Roman numerals. First, only use the basic symbols: I, V, X, L, C, D and M. You can use these symbols as many times as necessary to create the number you need. Secondly, placement matters when it comes to Roman numerals: a smaller symbol written after a larger one is added, while a smaller symbol written before a larger one is subtracted. Lastly, only I, X and C may be used subtractively, and each only before the next two larger symbols: IV, IX, XL, XC, CD and CM are valid, but forms such as VL or IC are not.
With those guidelines in mind, let’s take a look at how to write 2 in Roman numerals:
Roman Number two is one of the most basic and common symbols in Roman numerals. It is represented by the letter II, and its value is two. This symbol is used in a variety of ways, including
representing the number two on its own, as a part of larger numbers, and as a way to represent other concepts such as ×2 (two times) and 2nd (second).
Roman numeral 2 can also be combined with other symbols to form larger numbers.
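A minimal sketch in Python of the standard greedy conversion these rules describe; the value table and function name are illustrative, not from the printable template.

```python
# Greedy conversion using the additive symbols and the six valid
# subtractive pairs described above.
ROMAN_VALUES = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n):
    if not 0 < n < 4000:
        raise ValueError("standard Roman numerals cover 1 to 3999")
    parts = []
    for value, symbol in ROMAN_VALUES:
        count, n = divmod(n, value)
        parts.append(symbol * count)
    return "".join(parts)

print(to_roman(2))     # II
print(to_roman(123))   # CXXIII
print(to_roman(1994))  # MCMXCIV
```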
With this printable Roman numeral 2 template, you can quickly and easily create Roman numerals. Just print out the template and fill in the blanks with the numbers you want to convert.
This printable Roman numeral template is a great way to teach kids how to read and write Roman numerals. Just print out the template and have them fill in the blanks with the numbers they want to
convert. This is also a great way to learn Roman numerals yourself.
| {"url":"https://romannumerals.site/roman-numeral-2/","timestamp":"2024-11-12T21:58:02Z","content_type":"text/html","content_length":"84462","record_id":"<urn:uuid:e945db1f-275c-41e2-afb9-d20f8c498cd1>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00444.warc.gz"} |
How do you use order of operations to simplify 3(7-2)-8? | Socratic
How do you use order of operations to simplify #3(7-2)-8#?
1 Answer
Think about it like this: if you get hurt in PE, call an MD ASap.
PE = a gym class so do Parenthesis and Exponents first.
MD = a medical doctor. This is one person, so do Multiplication and Division at the same time, working from left to right.
AS(ap) = As Soon As Possible. This is one time, so do Addition and Subtraction at the same time, working from left to right.
Do the parenthesis first.
$3 \left(7 - 2\right) - 8 = 3 \left(5\right) - 8$
Do the multiplication next
$3 \left(5\right) - 8 = 15 - 8$
Now do the subtraction
$15 - 8 = 7$
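The same steps can be checked mechanically; a quick Python sketch of the worked example above:

```python
# Each step of the worked example, checked in code.
step1 = 7 - 2        # Parentheses first         -> 5
step2 = 3 * step1    # Multiplication next       -> 15
step3 = step2 - 8    # Addition/Subtraction last -> 7
assert step3 == 3 * (7 - 2) - 8 == 7
print(step3)  # 7
```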
Impact of this question
9713 views around the world | {"url":"https://api-project-1022638073839.appspot.com/questions/how-do-you-use-order-of-operations-to-simplify-3-7-2-8","timestamp":"2024-11-06T02:03:31Z","content_type":"text/html","content_length":"33317","record_id":"<urn:uuid:727f6161-72b5-4f90-8d6d-72005cfc4625>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00162.warc.gz"} |
Free Mathematics Mini Practice Paper (LS2013A)
One of the stated aims of this Gifted Mathematics website is to publish practice papers modelled on mathematics competitions throughout the world. I am pleased to announce that the first such paper
has now been published.
This new paper is called a Mini Practice Paper as it is about half the length of a full paper. One advantage of this is that it can be used within a whole classroom period without having to edit it
down. However, it has the same proportion of easier and harder questions so should be challenging for most students.
This paper is aimed at what we call Lower Secondary, so is roughly equivalent to the American AMC8 papers and the UKMT Junior Mathematical Challenges. It is therefore suitable for students in about
the age range of 11 to 14 years old, but younger gifted mathematicians should find plenty to enjoy too!
The Mini Practice Papers are free and will be stored on Dropbox for now. You can join Dropbox here for free. Once you join Dropbox you can synch your account with our Gifted Mathematics Free Resources and you will automatically receive new files as they are published, plus updates to any existing files that have been edited but have not changed their filename.
Today’s paper can be viewed and downloaded for free here:
Mini Practice Paper – Lower Secondary – 2013A
The answers, solutions and extension problems have now been published and are available for free download here.
Similar papers for Middle and Upper Secondary are coming soon!
Please feel free to leave any feedback below in the Comments box run by Disqus.
| {"url":"http://www.giftedmathematics.com/2013/03/free-mathematics-mini-practice-paper.html","timestamp":"2024-11-13T23:05:04Z","content_type":"application/xhtml+xml","content_length":"82896","record_id":"<urn:uuid:f118aa97-6e44-417e-9bfb-d3fe931289df>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00347.warc.gz"} |
Concept information
p-xylene amount fraction
• Amount fraction is used in the construction mole_fraction_of_X_in_Y, where X is a material constituent of Y. The chemical formula for p-xylene is C8H10. P-xylene is a member of the group of hydrocarbons known as aromatics. Its IUPAC name is 1,4-dimethylbenzene, also written as 1,4-xylene.
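As a small illustration of what an amount (mole) fraction is, here is a sketch in Python; the mixture composition is invented for the example and the function name is not from the ACTRIS vocabulary.

```python
def amount_fraction(component, mixture):
    """Amount (mole) fraction of X in Y: moles of X over total moles."""
    return mixture[component] / sum(mixture.values())

# Hypothetical mixture; the mole numbers are invented for illustration.
mixture = {"N2": 78.0, "O2": 21.0, "Ar": 0.9, "p-xylene": 1e-6}
print(f"{amount_fraction('p-xylene', mixture):.3e}")  # ~1.001e-08
```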
| {"url":"https://vocabulary.actris.nilu.no/skosmos/actris_vocab/en/page/p-xyleneamountfraction","timestamp":"2024-11-14T12:36:25Z","content_type":"text/html","content_length":"20087","record_id":"<urn:uuid:d8fa9aa3-a78d-46ec-b684-6320db791814>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00229.warc.gz"} |
Is 22 a Triangular Number?
Detailed Answer
How to check if 22 is a triangular number
A triangular number is one that can be arranged in an equilateral triangle. The n-th triangular number is given by the formula:
\( T_n = \frac{n(n+1)}{2} \)
To check if a given number is a triangular number, we can rearrange the formula to solve for \( n \):
\( n = \frac{-1 + \sqrt{1 + 8x}}{2} \)
Where \( x \) is the number in question. The resulting \( n \) should be an integer if \( x \) is a triangular number.
In this case, \( x = 22\). So,
\( n = \frac{-1 + \sqrt{1 + 8 \times 22}}{2} \)
\( n = \frac{-1 + \sqrt{1 + 176}}{2} \)
\( n = \frac{-1 + \sqrt{177}}{2} \)
\( n = \frac{-1 + 13.30413469565}{2} \)
\( n = 6.15206734783 \)
As you can see, 6.15206734783 is NOT an integer (0, 1, 2, ...). So, 22 is NOT a triangular number.
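A short sketch of this check in Python; it uses exact integer arithmetic (math.isqrt) instead of floating point, so large inputs are handled safely, and it returns the index when the number is triangular. The function name is illustrative.

```python
import math

def triangular_index(x):
    """Return n if x is the n-th triangular number, otherwise None."""
    if x < 0:
        return None
    d = 1 + 8 * x
    r = math.isqrt(d)      # exact integer square root
    if r * r != d:
        return None        # 1 + 8x is not a perfect square
    return (r - 1) // 2    # n = (-1 + sqrt(1 + 8x)) / 2

print(triangular_index(22))  # None -> 22 is not triangular
print(triangular_index(10))  # 4    -> 10 = 1 + 2 + 3 + 4
print(triangular_index(0))   # 0    -> the zeroth triangular number
```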
What is a Triangular Number?
A triangular number or triangle number counts objects arranged in an equilateral triangle, as in the diagram below. The n-th triangular number is the number of dots composing a triangle with n dots
on a side, and is equal to the sum of the n natural numbers from 1 to n.
This is the triangular number formula to find the n^th triangular number.
\(T_n = \frac{n(n+1)}{2}\)
Triangular Number Index Formula
Let's firstly define "triangular number index".
The index of a triangular number is the number of rows in a triangular grid of points that represents the number. For example, the smallest triangular number with two digits is 10, and its index is
4. The smallest triangular number with three digits is 105, and its index is 14.
To deduce the index of a given triangular number, we need to reverse-engineer the triangular number formula:
\(T_n = \frac{n(n+1)}{2}\)
Given a number \(x\), we need to find the index \(n\) such that:
\(x = \frac{n(n+1)}{2}\)
Solving for \(n\) involves the quadratic formula, since the equation can be rearranged into a quadratic equation:
\(n^2 + n - 2x = 0\)
Using the quadratic formula:
\(n = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\)
For our equation, \(a = 1\), \(b = 1\), and \(c = -2x\). Taking the positive root, this leads to the formula for the index of a triangular number \(x\):
\(n = \frac{-1 + \sqrt{1 + 8x}}{2}\)
Is 0 a Triangular Number?
Yes, 0 is considered a triangular number. It corresponds to the zeroth triangle in the sequence of triangular numbers, which can be thought of as an empty set of dots (i.e., a triangle with zero dots).
Mathematical Explanation:
Triangular numbers are represented by the formula:
\( T_n = \frac{n(n + 1)}{2} \)
Where \( T_n \) is the nth triangular number. If you substitute 0 for \( n \) in this formula:
\( T_0 = \frac{0(0 + 1)}{2} = 0 \)
This result shows that 0 is indeed a triangular number, representing the conceptual "triangle" with no dots.
About this Calculator
The Triangular Number Calculator is an online tool specifically designed to ascertain the triangularity of a given whole number. Triangular numbers are the kind that can be used in arranging dots so
as to form an equilateral triangle. The calculator suits students, teachers, or any other individuals who may need to explore numerical patterns or sequences in mathematics. Zero is also such a
triangular number.
How It Works:
By definition, a triangular number is one whose dots can fill out the full shape of an equilateral triangle. The n-th triangular number equals the sum of the first n natural numbers. For example, ten is a triangular number because a triangle with four dots along its base contains 1 + 2 + 3 + 4 = 10 dots.
Using the Calculator:
1. Enter a Number: In the appropriate field on your calculator, key in your desired number.
2. Submit the Number: Push the calculate button so it can process your entry.
3. View the Result: Once you have entered a number and pressed "calculate," the calculator will tell you instantly whether it is a triangular number. If it is, the calculator also reports its index within the sequence and displays a picture showing how the dots arrange into a triangle.
• Instant Verification: Quickly determine whether you are dealing with one of the numbers in the series.
• Visual Aid: It draws an equilateral triangle which represents the triangular number.
• Educational Resource: Explore the properties of triangular numbers by simply changing the number in the input field.
• User-Friendly Interface: Its simplicity makes it very easy to use for anyone, regardless of their level of understanding.
• Educational Value: Great tool for educational purposes, especially in mathematical learning environments.
• Accessibility: Designed to be accessible on various devices, ensuring that anyone can use it at any time.
Ideal for:
• Students learning about number sequences in math classes.
• Teachers looking for a visual tool to explain triangular numbers.
• Enthusiasts and hobbyists interested in number theory and mathematical patterns.
Try It Out: Whether you’re solving homework problems, preparing lessons, or just curious about mathematical patterns, our Triangular Number Calculator is here to assist you. Input a number and
discover its unique triangular properties today! | {"url":"https://clickcalculators.com/is-triangular/22","timestamp":"2024-11-05T16:30:07Z","content_type":"text/html","content_length":"51982","record_id":"<urn:uuid:57d7158b-9f22-4ed0-bb16-0291860570c1>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00324.warc.gz"} |
1958 -- Strange Towers of Hanoi
Charlie Darkbrown sits in another one of those boring Computer Science lessons: At the moment the teacher just explains the standard Tower of Hanoi problem, which bores Charlie to death!
The teacher points to the blackboard (Fig. 4) and says: "So here is the problem:
• There are three towers: A, B and C.
• There are n disks. The number n is constant while working the puzzle.
• All disks are different in size.
• The disks are initially stacked on tower A increasing in size from the top to the bottom.
• The goal of the puzzle is to transfer all of the disks from tower A to tower C.
• One disk at a time can be moved from the top of a tower either to an empty tower or to a tower with a larger disk on the top.
So your task is to write a program that calculates the smallest number of disk moves necessary to move all the disks from tower A to C."
Charlie: "This is incredibly boring—everybody knows that this can be solved using a simple recursion.I deny to code something as simple as this!"
The teacher sighs: "Well, Charlie, let's think about something for you to do: For you there is a fourth tower D. Calculate the smallest number of disk moves to move all the disks from tower A to
tower D using all four towers."
Charlie looks irritated: "Urgh. . . Well, I don't know an optimal algorithm for four towers. . . "
So the real problem is that problem solving does not belong to the things Charlie is good at. Actually, the only thing Charlie is really good at is "sitting next to someone who can do the job". And
now guess what — exactly! It is you who is sitting next to Charlie, and he is already glaring at you.
Luckily, you know that the following algorithm works for n <= 12: At first k >= 1 disks on tower A are fixed and the remaining n-k disks are moved from tower A to tower B using the algorithm for four
towers. Then the remaining k disks from tower A are moved to tower D using the algorithm for three towers. At last the n - k disks from tower B are moved to tower D, again using the algorithm for four towers (and thereby not moving any of the k disks already on tower D). Do this for all k ∈ {1, ..., n} and find the k with the minimal number of moves.
So for n = 3 and k = 2 you would first move 1 (3-2) disk from tower A to tower B using the algorithm for four towers (one move). Then you would move the remaining two disks from tower A to tower D
using the algorithm for three towers (three moves). And the last step would be to move the disk from tower B to tower D using again the algorithm for four towers (another move). Thus the solution for
n = 3 and k = 2 is 5 moves. To be sure that this really is the best solution for n = 3 you need to check the other possible values 1 and 3 for k. (But, by the way, 5 is optimal. . . )
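A minimal Python sketch of the described algorithm (the classic Frame-Stewart approach): f3 holds the three-tower optimum 2^n - 1, and f4 tries every split k exactly as the statement suggests. The variable names are illustrative.

```python
# f3[n]: optimal moves for n disks on three towers (2**n - 1).
# f4[n]: best result over every split k described in the statement.
MAX_N = 12
f3 = [2 ** n - 1 for n in range(MAX_N + 1)]
f4 = [0] * (MAX_N + 1)
for n in range(1, MAX_N + 1):
    f4[n] = min(2 * f4[n - k] + f3[k] for k in range(1, n + 1))

for n in range(1, MAX_N + 1):
    print(f4[n])  # 1, 3, 5, 9, 13, 17, 25, 33, 41, 49, 65, 81
```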
There is no input.
For each n (1 <= n <= 12) print a single line containing the minimum number of moves to solve the problem for four towers and n disks. | {"url":"http://poj.org/problem?id=1958","timestamp":"2024-11-13T22:42:48Z","content_type":"text/html","content_length":"8264","record_id":"<urn:uuid:9c9d3a8f-c953-42e1-9375-06ac9d23dacd>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00406.warc.gz"} |
What is Sdof
What is Sdof system?
A single degree of freedom (SDOF) system is one for which only a single coordinate is required to completely specify the configuration of the system. (This is a suitable working definition for now.)
There are typically many possible choices for the coordinate to be used, although some are more natural than others.
What is Sdof and Mdof system?
The main difference between MDoF and SDoF systems is the number of movements that each system can simulate: the former allows movements along several axes, while the latter only along one axis.
What is Mdof?
Multiple Degree of Freedom (MDOF) Systems.
What is free vibration equation?
Therefore, the free vibration frequency is \( f_q = \frac{1}{2\pi}\sqrt{\frac{2 E b_1 h_1^3}{m a_1^3}} \). For the data given in the above example, the vibration frequency of the structure is \( f_q = 944 \) Hz.
What is Sdof in earthquake?
A semi-active, single degree-of-freedom (SDOF), controlled system subjected to an earthquake ground motion with a control force applied to the first mass, as illustrated in Figure 1.
What is full form of Sdof?
SDOF. Semantic Depth of Field.
What is free vibration?
Free vibration refers to the vibration of a damped (as well as undamped) system of masses with motion entirely influenced by their potential energy.
What is multi degree of freedom?
Multi-degree-of-freedom (multi-DOF) systems are defined as those requiring two or more coordinates to describe their motion. This excludes continuous systems, which theoretically have an infinite
number of freedoms.
What is a 2 degree of freedom system?
A two degree of freedom system is one that requires two coordinates to completely describe its equation of motion. These coordinates are called generalized coordinates when they are independent of each other. Thus a system with two degrees of freedom will have two equations of motion and hence two natural frequencies.
What is free vibration with example?
Free vibration occurs when there is no external force causing the motion, and the vibration of the system is caused by the initial displacement of the system from the equilibrium position. A plucked
guitar string is an example of free vibration.
How do you create a response spectrum?
How to Create a Response Spectrum
1. Select a frequency range for which the spectrum should be generated.
2. Select a frequency step that determines how many points on the response spectrum should be computed.
3. Select a certain damping ratio.
4. For each of the selected frequencies, compute the response of an SDOF oscillator with that frequency and damping, and record its peak.
What is Sdof vibration?
The mass is allowed to travel only along the spring elongation direction. Such systems are called Single Degree-of-Freedom (SDOF) systems (illustrated in the source by a figure titled "Equation of Motion for SDOF Systems"). SDOF vibration can be analyzed by Newton's second law of motion, F = m*a.
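A minimal sketch of that analysis in Python for an undamped SDOF system, m x'' + k x = 0: the natural frequency follows from k and m, and free vibration from an initial displacement is a cosine. The numerical values are invented for illustration.

```python
import math

m = 2.0    # mass [kg]       (illustrative value)
k = 800.0  # stiffness [N/m] (illustrative value)

omega_n = math.sqrt(k / m)     # natural circular frequency [rad/s]
f_n = omega_n / (2 * math.pi)  # natural frequency [Hz]

# Undamped free vibration from an initial displacement x0 at rest:
# x(t) = x0 * cos(omega_n * t)
x0 = 0.01  # initial displacement [m]
def x(t):
    return x0 * math.cos(omega_n * t)

print(f"omega_n = {omega_n:.2f} rad/s, f_n = {f_n:.2f} Hz")
print(f"x(0.1 s) = {x(0.1):.5f} m")
```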
What is K in vibration?
The constant of proportionality k is the spring constant, or stiffness. Mass: a mass is a rigid body (Fig. 2.2) whose acceleration \( \ddot{x} \), according to Newton's second law, is proportional to the resultant force acting on it.
What is free damped and forced vibration?
There are two types of vibrations; free vibrations and forced vibrations. Damped vibrations are a subset of forced vibrations where the force is applied to resist the motion of the system, while in
free vibrations, there is no external force applied.
What are the 3 degrees of freedom?
Three degrees of freedom (3DOF), a term often used in the context of virtual reality, refers to tracking of rotational motion only: pitch, yaw, and roll.
How many degrees of freedom are there for a continuous system?
Systems with one degree of freedom and two or more degrees of freedom are discussed elsewhere. If one continues to add DOFs, the limit at an infinite DOF defines a continuous system.
What is degree of freedom with examples?
Example – Degrees of freedom for calculating mean To calculate the mean of the sample data, the degrees of freedom is equal to count of the data in the sample that are free to vary. For example, in
the example given below, the degrees of freedom is 5. This means that all 5 data is equally independent to vary.
What are the three types of free vibrations?
Types of Free Vibrations
• Longitudinal vibrations.
• Transverse vibrations.
• Torsional vibrations.
What is the difference between free and forced vibration?
In free vibration, energy will remain the same. Energy is not added or removed from the body. The body keeps vibrating at the same amplitude. In forced vibration, energy gets added to the vibrating system. | {"url":"https://stockingisthenewplanking.com/what-is-sdof-system/","timestamp":"2024-11-05T23:41:01Z","content_type":"text/html","content_length":"53947","record_id":"<urn:uuid:571f1034-58a8-438f-a2fc-91d00c288f51>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00556.warc.gz"} |
2022/2023 Season Program Announcements
Lilah Fear & Lewis Gibson (GBR)
Vivir Mi Vida by Marc Anthony, Nadir Khayat, Bilal Hajji, Achraf Jannusi, Alex Papaconstantinou, Björn Djupström, Cheb Khaled aka Sam Debbie
Vivir Mi Vida (Version Pop) by Marc Anthony, Nadir Khayat, Bilal Hajji, Achraf Jannusi, Alex Papaconstantinou, Björn Djupström, Cheb Khaled aka Sam Debbie
No Me Ames - duet with Marc Anthony (Ballad Version) by Jennifer Lopez, Giancarlo Bigazzi, Marco Falagiani, Aleandro BaldiIgnacio Ballesteros
Born This Way by Lady Gaga, Stefani Germanotta, Jeppe Laursen
Million Reasons by Lady Gaga, Stefani Germanotta, Hillary Lindsey, Mark Ronson
Source: ISU Bio
Camden Pulkinen (USA)
SP: Fly Me to The Moon by Chris Mann choreographed by Alex Johnson
FS: Invierno Porteno by Astor Piazzola choreographed by Shae-Lynn Bourne
Lara Naki Gutmann (ITA)
FS: soundtracks from Hitchcock's movies
SP: "Un año de amor" by Luz Casal
Anastasiia Arkhypova (UKR)
SP: Cornfield Chase
Choreographed by Anastasiia Arkhypova
Misato Komatsubara & Tim Koleto (JPN)
FD: The Fifth Element
Kyrylo Marsak (UKR)
FS: Star Wars soundtrack: "Across the stars", "Imperial March (Anakin’s Suffering)" and "Duel of the Fates" by John Williams and Samuel Kim
Choreographed by Adam Solya
| {"url":"https://planethanyu.com/topic/1773-20222023-season-program-announcements/page/10/","timestamp":"2024-11-05T23:25:45Z","content_type":"text/html","content_length":"164931","record_id":"<urn:uuid:1d809a0d-7107-47a7-99c3-4fa27f6011fc>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00442.warc.gz"} |
Threesology Research Journal
Mathematics Perspective: Page 14
~ The Study of Threes ~
Yet, just like sequential counting in that there may be no quantity beyond a thousands place (interpreted as a four-quantity), we can find examples which may include but not exceed a ten thousands
place, whereby some interpret this as a five, but could readily be interpreted as a 4 to 1 or 3 by 2. For example, while some people say we have five fingers, another person may count the digits as
four fingers and one thumb. Still another person may single out one or more fingers as have some particular distinction (pinky, ring finger, middle finger), whereby the described count takes on a
different proportion. The point is, while we humans may be counting in one fashion, Nature may be "thinking" of quantity or functionality in a different fashion. In any respect, the aforementioned
repetitive usage of a dichotomous orientation being used by Mathematicians may be a false configuration which acts as an obstacle to further development. We know how to look, but we don't know how to
see. Our seeming obsession of using dualities as an underlying standard of mathematics appears to be a Westernized recitation of a differently styled yin/yang compilation over-which we embellish with
patterns-of-three to provide us with the scenario that we add, subtract, multiply or divide one number in conjunction with another number in order to get a presumed third. I say "presumed" because
humans rationalistically orient themselves towards adopting an attitude of an accomplished or achieved progression, when in actuality it may be little more than an adaptation to an overlooked
incremental deterioration requiring a specific type of survival mechanism.
When one sees set figure sequences such as (1...), (1, 2...) (1, 2, 3...) and think this is a reflection of superior mathematics and not some repetitive cognitive illustration found in other subjects
which use their own vernaculars to express the same pattern; how are we truly going to be able to discern a progressive development in mathematical thinking if we are simply using the same underlying
patterns expressed with different symbols regulated as part of a larger survival mechanism that has been forced into usage by the presence of an incrementally deteriorating environment which most
people would readily dismiss by using one or another rationalization as a defense mechanism?
This is not to say that mathematics doesn't have great utility, much like a stick used by a chimpanzee to flesh out a tidy morsel of an insect from some crevice, but the long term facility of such an
adaptive behavior benefiting an individual in their respective life, ineffectually provides them with an ability to notice that the assumed progress defined by tool usage also embraces a foolhardy
interpretation about the prospects of longevity for the entire species. In the world of chimpanzees, the usage of a stick can be useful... just like the usage of mathematics for humans. In this
sense, mathematics is little more than a type of stick with which humans can probe, poke at and doodle a myriad of geometric forms with and call it science, art, poetry, music or even futility.
No less, let us view Nature itself (that which humans recognize in the context of our reality) as a primitive life form or life form originator which uses primitive patterns, whereby human deductions
of basic patterns routinely portray the exhibition of small numbers (for example, we have a triplet DNA code and not a larger number code), which may thus reflect a stagnant developmental sequence
that some readers might dismiss as due to the length of time required for Evolution to make changes. Yet, it is not certain if Evolution can make changes to its own Nature... that is, provide life
forms with a different model of Evolution with which to evolve, or use some other mechanism other than that which we at present call "Evolution". Indeed, why do we see so many recurring small number
patterns in basic Natural phenomena and yet attempt to dismiss this notion by claiming there are larger number patterns (beyond 1s, 2s, 3s, 4s, 5s...) such as the 8 (octet) pattern in chemistry, the
8 pattern in (octomerous) life forms, and the 7 or 8 Hox genes in free-living members of Platyhelminthes (Acoelomorphs have four or five Hox genes: pg. 292, Integrated principles of
Zoology, ISBN 978-8-07-304050-9) yet fail to recognize that such an "8" pattern apparently occurs more often in primitive instances and is not widespread in multiple other subjects? And to this we
might add the presence of a primitive "4" occurrence in quadripedalism with a lowered number bipedal gait used as a criteria for judging evolutionary advancement, at least in primates.
For example, why isn't there 8 families to atomic particles or people routinely being born with eight fingers, eight toes, eight eyes, eight heads, hearts, etc.? Why no eight-coding system for DNA
and RNA? Why no 8-forms of DNA or a standardized 8-speed bicycle or 8-position selection on appliances and automatic transmissions, or eight children being born as a standard birthing quantity of all
life forms or 8 cores as a standard in all computers? Why does Nature stop at using a recurrence of small number patterns unless for some reason it actually doesn't, and it is human perception/
consciousness which is imposing the recurrent usage of small number patterns? Why does mathematics, not only in the presence of an infinity of numbers and a suite of small numbers to choose from
(zero through nine), set a standardized dominant focus on using dichotomies, much in the same manner as the usage of a binary system for an electric circuit based computer language? Are
mathematicians simply regurgitating some ancestral obsession of using patterns-of-two (like some Western born Yin/Yang formula), and will rationalize some presumed value to describe why such is the
case, without taking the time to statistically arrive at a value which makes such a usage an improbability, or that such a probability is indicating we are dealing with an influence that is being
overlooked? In other words, is the system of mathematics rigged to configure a mathematical rationale to offer some plausible reason for the persistent usage of an underlying dichotomization of views
implanted in the construction of mathematics equations, typically resorting to some biological, atomic or mechanistic model to offer up a supposed proofing exposition?
We can recognize a recurrence of different patterns being use to express ideas in all subjects. All of them have some level in which basic patterns of a given subject matter are being indexed and in
some cases such as comparative anatomy and embryology, we come to discern similarities of patterns exhibiting the same quantity. This is not to say that researchers are making the correct deductions,
but that... nonetheless, some pattern is offered as being exemplary of a given process, function or design. At least how humans are perceiving such occasions. All told, we are dealing with
information which can be filed under a heading of information theory that, not surprisingly, encompasses a usage of number relationships:
(Information theory is a) a mathematical representation of the conditions and parameters affecting the transmission and processing of information. Most closely associated with the work of the
American electrical engineer Claude Shannon in the mid-20th century; information theory is chiefly of interest to communication engineers, though some of the concepts have been adopted and used
in such fields as psychology and linguistics. Information theory overlaps heavily with communication theory, but it is more oriented toward the fundamental limitations on the processing and
communication of information and less oriented toward the detailed operation of particular devices.
The formal study of information theory did not begin until 1924, when Harry Nyquist, a researcher at Bell Laboratories, published a paper entitled "Certain Factors Affecting Telegraph Speed."
Nyquist realized that communication channels had maximum data transmission rates, and he derived a formula for calculating these rates in finite bandwidth noiseless channels. Another pioneer was
Nyquist's colleague R.V.L. Hartley, whose paper "Transmission of Information" (1928) established the first mathematical foundations for information theory.
Clearly, the recurrence of number patterns in different subjects is information that we do not yet understand what is being conveyed to us. While the information is being transmitted just as
electrical impulses of the early telegraph, suggesting that the patterns being revealed are like a type of Morse code or Braille, we have also come upon a circumstance reflecting the causal nature of
certain patterns being more frequently repeated than others. Does such a situation reflect a primitivity of thinking or an accurate depiction of a phenomenon indicating we are subjected to a primitive
formula of Nature in the present context of Earth, and that other forms of Nature exist with more complex formulations of processes, functions and structures?
While humans have unraveled the code of genetics and atomic particles (to some extent), the patterns being presented to us by recurrences in different subjects is not understood because such a
discipline of study is in the position where all subjects begin such as for example paleontology. Whereas there may have initially been one or two who collected a few bones and used them as
door-stops (so to speak), later on there were those who collected more bones and other items (which were later labeled as fossils), and still others who eventually saw the collections as
representative of a pattern of life and geologic stratification. For example:
Paleontological research dates back to the early 1800s. In 1815 the English geologist William Smith demonstrated the value of using fossils for the study of strata. About the same time, the
French zoologist Georges Cuvier initiated comparative studies of the structure of living animals with fossil remains. ("paleontology." Encycloædia Britannica.)
All present day disciplines had their formative beginnings with fits and fashions, starts and stumbles, as well as detractors. Whereas Charles Darwin (for example) made a collection of life forms
while on his five year voyage: (The HMS Beagle was a) British naval vessel aboard which Charles Darwin served as naturalist on a voyage to South America and around the world (1831–36). The specimens
and observations accumulated on this voyage gave Darwin the essential materials for his theory of evolution by natural selection. ("Beagle." Encyclopædia Britannica.)
If Darwin had not made the trip or had not made a collection with which to refer to, it is suspect whether he would have developed his "On the Origin of Species by Means of Natural Selection" (later
called a theory of Evolution), even though the publication of the idea took two decades after his initial three-volume beginning and after he gained the distinction of being a Justice of the Peace.
("Darwin, Charles." Encyclopædia Britannica.) The point is others were not only thinking along the same lines, but were gathering dispersed amounts of examples which would later prove to help enlarge
the underlying premise of Darwin's ideas. All the collections were necessary even if he did not yet have a conceptual framework for inclusion in his own ideas. Needless to say, the rather disparate
looking information began to fit together into a puzzle from which a picture of life could be ascertained, and is extensively applied today.
The recurrence of enumerated patterns and those patterns not yet enumerated are presenting us with a sketch we can tentatively describe as representing a blueprint or map or etching of human
cognitive activity. If you prefer, let us call it a paint-by-numbers dot-to-dot illustration that we have not collected all the numbers for in their respective placements. If low numbers being
exhibited in ideas of fundamental occurrences represent that Nature itself is a primitive life form (so to speak), let us also identify what sort of species this life form is and whether it is able
to evolve to a more modern formulation. If Mathematic's usage of a top-heavy dichotomous orientation is to be viewed as a fundamental/basic scaffolding, then is this due to an imposition created by
the human psyche, the indication of an inherent disposition due to a primitive design (like an expressed development using two instead of three germ layers), or a relative "cry for help" because
mathematics is being forcibly subjected to an imprisonment which forces it to curtail its desire for expressive freedom?
The relatively small group of number patterns being used in different subjects is a puzzle whose answer may or may not lie in retracing our thoughts in addressing other kinds of puzzles with an
underlying numerical reference.
(The Konigsberg bridge problem is) a recreational mathematical puzzle, set in the old Prussian city of Königsberg (now Kaliningrad, Russia), that led to the development of the branches of
mathematics known as topology and graph theory. In the early 18th century, the citizens of Königsberg spent their days walking on the intricate arrangement of bridges across the waters of the
Pregel (Pregolya) River, which surrounded two central land-masses connected by a bridge. Additionally, the first landmass (an island) was connected by two bridges (5 and 6) to the lower bank of
the Pregel and also by two bridges (1 and 2) to the upper bank, while the other landmass (which split the Pregel into two branches) was connected to the lower bank by one bridge (7) and to the
upper bank by one bridge (4), for a total of seven bridges. According to folklore, the question arose of whether a citizen could take a walk through the town in such a way that each bridge would
be crossed exactly once.
In 1735 the Swiss mathematician Leonhard Euler presented a solution to this problem, concluding that such a walk was impossible. To confirm this, suppose that such a walk is possible. In a single
encounter with a specific landmass, other than the initial or terminal one, two different bridges must be accounted for: one for entering the landmass and one for leaving it. Thus, each such
landmass must serve as an endpoint of a number of bridges equaling twice the number of times it is encountered during the walk. Therefore, each landmass, with the possible exception of the
initial and terminal ones if they are not identical, must serve as an endpoint of an even number of bridges. However, for the land-masses of Königsberg, A is an endpoint of five bridges, and B,
C, and D are end-points of three bridges. The walk is therefore impossible.
It would be nearly 150 years before mathematicians would picture the Königsberg bridge problem as a graph consisting of nodes (vertices) representing the land-masses and arcs (edges) representing
the bridges. The degree of a vertex of a graph specifies the number of edges incident to it. In modern graph theory, an Eulerian path traverses each edge of a graph once and only once. Thus,
Euler's assertion that a graph possessing such a path has at most two vertices of odd degree was the first theorem in graph theory.
Euler described his work as geometria situs—the "geometry of position." His work on this problem and some of his later work led directly to the fundamental ideas of combinatorial topology, which
19th-century mathematicians referred to as analysis situs—the "analysis of position." Graph theory and topology, both born in the work of Euler, are now major areas of mathematical research.
"Königsberg bridge problem." Encyclopædia Britannica article by Stephan C. Carlson, Professor of Mathematics, Rose-Hulman Institute of Technology, Terre Haute, Indiana. Author of Topology of
Surfaces, Knots, and Manifolds: A First Undergraduate Course.
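Euler's degree criterion is easy to check mechanically. The sketch below encodes the seven bridges as a multigraph edge list (labeling the island A, the two banks B and C, and the second landmass D, as in the passage above) and counts odd-degree vertices; an Eulerian path requires a connected graph with at most two of them. The encoding is an illustrative assumption consistent with the degrees reported in the article.

```python
from collections import Counter

# The seven bridges as a multigraph edge list: A = island,
# B = upper bank, C = lower bank, D = second landmass.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd_vertices = [v for v, d in degree.items() if d % 2 == 1]
print(dict(degree))            # {'A': 5, 'B': 3, 'C': 3, 'D': 3}
# An Eulerian path exists in a connected graph only if at most
# two vertices have odd degree; here all four are odd.
print(len(odd_vertices) <= 2)  # False -> the walk is impossible
```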
Although the foregoing presented us with a "7" problem, we need to contrast this with problems exhibiting a configuration of three elements, to which let us add the three shells and card monte games
problem, the dichotomous "checks and balances" schematic to provide a solution involving the three (Executive, Legislate, Judicial) branches of government, the And, Or, Not logic circuitry problem,
the God- Humanity- Satan problem, the Christian Trinity problem, as well as the following:
From celestial mechanics we have the 3 body problem and from Mathematics we find three classical problems. (Note, the exercise I am providing concerning the part played by the incremental
deteriorations of the Sun- Earth- Moon complex with respect to the usage of a top-heavy reliance on dichotomies used as an accommodating survival mechanism, is another type of 3-body problem).
The three-body problem
The inclusion of solar perturbations of the motion of the Moon results in a “three-body problem” (Earth-Moon-Sun), which is the simplest complication of the completely solvable two-body problem
discussed above. When Earth, the Moon, and the Sun are considered to be point masses, this particular three-body problem is called "the main problem of the lunar theory," which has been studied
extensively with a variety of methods beginning with Newton. Although the three-body problem has no complete analytic solution in closed form, various series solutions by successive
approximations achieve such accuracy that complete theories of the lunar motion must include the effects of the non-spherical mass distributions of both Earth and the Moon as well as the effects
of the planets if the precision of the predicted positions is to approach that of the observations. Most of the schemes for the main problem are partially numerical and therefore apply only to
the lunar motion. An exception is the completely analytic work of the French astronomer Charles-Eugène Delaunay (1816–72), who exploited and developed the most elegant techniques of classical
mechanics pioneered by his contemporary, the Irish astronomer and mathematician William R. Hamilton (1805–65). Delaunay could predict the position of the Moon to within its own diameter over a
20-year time span. Since his development was entirely analytic, the work was applicable to the motions of satellites about other planets where the series expansions converged much more rapidly
than they did for the application to the lunar motion.
"celestial mechanics." Encyclopædia Britannica. Article by Stanton J. Peale, Professor of Physics, University of California, Santa Barbara.
The three classical problems
(H.O.B. note: notice Euclid used two items as a means of expressing a pattern-of-two styled dichotomy.)
Although Euclid solves more than 100 construction problems in the Elements, many more were posed whose solutions required more than just compass and straightedge. Three such problems stimulated
so much interest among later geometers that they have come to be known as the “classical problems”: doubling the cube (i.e., constructing a cube whose volume is twice that of a given cube),
trisecting the angle, and squaring the circle. Even in the pre-Euclidean period the effort to construct a square equal in area to a given circle had begun. Some related results came from
Hippocrates (see Sidebar: Quadrature of the Lune); others were reported from Antiphon and Bryson; and Euclid's theorem on the circle in Elements, Book XII, proposition 2, which states that
circles are in the ratio of the squares of their diameters, was important for this search. But the first actual constructions (not, it must be noted, by means of the Euclidean tools, for this is
impossible) came only in the 3rd century BC. The early history of angle trisection is obscure. Presumably, it was attempted in the pre-Euclidean period, although solutions are known only from the
3rd century or later.
"mathematics." Encyclopædia Britannica. Article by Wilbur R. Knorr, Professor of the History of Science, Stanford University, California. Author of Ancient Tradition of Geometric problems and
Here we have an example of three logic functions applied to an underlying dichotomy of zeros and ones in electrical circuitry for computer languages.
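A quick Python sketch of those three functions over the binary values, printing their truth table:

```python
# The three basic logic functions over the binary values 0 and 1.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

print("a b | AND OR | NOT a")
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "|", AND(a, b), " ", OR(a, b), "|", NOT(a))
```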
While the And- Not- Or logic circuit presents us with a pattern-of-three formulation, when speaking of electrical circuitry we often encounter two underlying models labeled as AC (alternating
current) and DC (Direct current). Hence, we are presented with a dichotomy to which we can add a few others:
● open/closed circuit
● Hot/Ground (also neutral). A color code for a DC circuit is that the Red wire is positive and the Black wire is negative. In an AC circuit, you might frequently find a black wire for positive or
"hot" side and a white wire for the negative (non-hot) side. A green wire might be used for a grounding wire.
● Series/Parallel
● (While we have the idea of a "short" circuit to indicate a faulty one, we do not customarily say "long" circuit as a means to convey the idea of a non-short- circuit.)
If counted separately a person arrives at five different types of circuits, but upon recognizing the presence of dichotomies, we see there are 3, or at least 2 and one-half. Metaphorically speaking,
it presents us with an overall circuitous route similar to the quantity of turns one might find in the human cochlea (snail-like process in the ear), attendant with the usage of electrical,
mechanical and fluidic types of circuitry used in the process of hearing.
With respect to patterns-of-two and math puzzles, we find a "classical" representation in what is called the Prisoners' Dilemma. It is worth noting that the problem presents us with (at the very least) a dichotomy over which three possible alternatives are inscribed, much like a binary computing language adopting the usage of three alternatives (And- Or- Not), with multiples being used as an expressed circumvention.
The study of dualities is an important perspective of analyzing human cognitive activity but it must be done in the larger context of collating the several (but extremely limited) variety of
enumerated examples to be found in multiple subject areas, even if an author(s) does not use number and instead) whether consciously or unconsciously) uses symbols or words or an overall writing/
illustration portrayal of enumeration. For example, the recurring usage of a basic heading- body- ending profile is typically used, as well as the period- question mark- exclamation point
patterns-of-three. In other words, an author can use Dichotomization without even being aware of doing so, because they take such a pattern as normal, usual and customary.
Here is an example of what can be described as an illustration concerning the development of enumeration:
Verse Forty Two
Wing-Tsit Chan, 1963
● Tao produced the One.
● The One produced the two.
● The two produced the three.
● And the three produced the ten thousand things.
The ten thousand things carry the yin and embrace the yang, and through the blending of the material force they achieve harmony.
People hate to be children without parents, lonely people without spouses, or men without food to eat, And yet kings and lords call themselves by these names. Therefore it is often the case that
things gain by losing and lose by gaining. What others have taught, I teach also: "Violent and fierce people do not die a natural death." I shall make this the father of my teaching.
Here's another variation:Threes poster column 5
*** Laozi (author of the Tao Te Ching): Dao produces one. One produces two. Two produces three. Three produces the ten thousand things. (In classical Chinese, the "ten thousand things" means
"everything." Commentators have long disagreed over what the "one, two, and three" refers to, usually plugging in their favorite cosmological, cosmogonic, or metaphysical model.) Laozi further
writes: Something amorphous & consummate existed before Heaven & Earth. Solitude! Vast! Standing alone, unaltering. Going everywhere, yet unthreatened. It can be considered the Mother of the World. I
don't know its name, so I designate it "Dao." Compelled to consider it, name it "Great." (Dao is considered indistinct & undefinable. It is not the vision of a visionary that helps others see more
clearly, but that which they are able to articulate from memory of their visionary trek.)
In the above expression we can note that the counting sequence describes a 1- 2 -3... many grouping. Altogether it is a set... that is, a singular set of a singular idea and not a single idea
quantifying a multiplicity of examples. While it can be argued that "ten thousand things" references a multiplicity, the sequence itself is a single cognitive set. It is the same sort of "set theory"
from which our present human mathematics is derived. It is not a singular set of multiplicities nor multiple sets of singularity. All three are different models of cognitive orientation. the present
mathematics being used around the Earth is based on a primary orientation where the "two" is the dominant theme where the "many" is describe as three or more, even though some mathematicians use a
(1,2,3...) as a recurring cognitive set, and our series of notated numbers uses three items before demarcating them from the next three such as in the case of ones-tens-hundreds (comma) thousands-ten
thousands- hundred thousands (comma)... etc...
Then again, does counting quantitative sets instead of a singular set constitute a greater complexity of thinking to be approximated with a "higher" form of conceptualization from which can arise a
"better" mathematics... particularly when a presently used mathematics is mastered by those asserting there is no greater model to be born from a biologically based encephalization process?
Despite my short digression from the main presentation of Mathematics being stagnant, let me provide an example from Biology concerning the repetition of a dual-based division of multiplication with
respect to cell generation. Whereas one might well characterize development by way of such a two-patterned development scenario as Natural and Normal as well as dynamic, the repetitious usage of the
same pattern suggests a stagnant design. Though the two-patterned design (just like in mathematics and psychology) has utility, this does not detract from the observation that what we are perceiving
is a stagnant model. Similarly, we see a stagnant model in DNA with its reliance on a triplet code, instead of evolving towards the usage of another pattern. The lack of change, the lack of such a
dynamic, leads me to consider that such patterns are being reinforced by pressures which require such low number repetitions to occur as a survival mechanism. However, because we can note that the
overall Sun/Earth/Moon complex is on a course of incremental deterioration, it may well serve us preferentially to conclude that we are dealing with a tell-the-tale design of accommodation... whereby
our ideas involving mathematics, psychology and other subjects, take on the position of being rationalizations which delude us into thinking we are engaged in activities of sustained survivability
when actually we are not.
The dichotomizations being used in Mathematics (and other subjects) indicate human conceptualization is in a state of stagnant repetition reminiscent of an early counting scheme which the
three-patterned phrase of "one-two-many" can suffice as a generality exhibiting humanity's initial developmental attempts to forge a trail into the conceptual grasp of group or set theory initiated
by a behavior of pairing and then simply addition, such as is expressed in the Fibonacci series conveyed as an exercise of mathematical playfulness that may have been thought of as a serious
intellectual formulation by Fibonacci himself.
One of the (dichotomously) oppositional stances one might take against the present discussion is to point out that we typically align human life in concert with a previous era's interest in geology,
whereby the life span (origination and ultimate extinction) is typically assessed in terms of a geological standard of time stratification. In other words, it is very common to typically see life in
conjunction with some geological period, whereby geology is the standard by which we come to judge a life form's survivalness. Occurrences of different Life forms is expressed in geological terms
such as the following images describes as an example of this modeling, though one must look with a different type of perspective to discern the usage of singularity to multiplicity as well as
stratifications encompassing the ideas of from left to right and top to bottom, though one could easily flip/flop these ideas around:
For some readers, the geologic time scale coupled with the belief that the Sun/Earth/Moon (3-body) complex has been around from billions of years and is estimated (by way of educated guesstimation)
to remain for billions of more years, since there is no indication (according to present day determinations) that we should suspect any dramatic alteration to occur suggesting a shorter period of
time is to ensue, barring some unforeseen cataclysmic occurrence. However, the means by which mathematics was developed by what I believe to have been a single model of counting expressed in
culturally adopted different ways of presentation, describes a lineage of cognitive descent much like so many other lineages seen in biology... from a singularity of multiplicity by way of using a
pairing model as an underlying model over which a triadic or "many" expression is used to embellish the one-two formula. From the value of three onward appears to suggest a multiplicity, though it
might be argued that a "two" instance is a multiplicity of one, though multiplicity typically becomes defined in terms of three or more, occurring with different symbols and labels. Yet, the
multiplicity appears to be governed by a conservation which repeats itself... often by way of doubling itself.
As it is, Mathematics is a model of cognitive activity which developed by way of an associative pairing such as pairing one item with a representative word or symbol that came to stand for the object
or item being observed, such as one apple. While I do not know how long it took humanity to develop the concept of one or singularity in terms of a conscious acknowledgment of it being a quantity,
nor the sequence of time events before the concept of two, and so forth came to mind, or if there was some sort of "Eureka!" moment of conceptualization for one or more quantity identities emerging
consecutively or jointly; the point is that we of today can appreciate that the developmental scenario of numbers was a progression... even if there were one or more individuals who gained a more
comprehensive grasp of enumeration and quantity prior to their peers. It may well have been the case that past eras, deep into the recesses of hominin development there were those whose ability to
conceptualize exceeded their counterparts and may have brought about occasions for them to be subjected to ostracism, trephination or even death, as a coping mechanism for those whose inability to
grasp what was being (crudely?) expressed and felt was an indication of something being wrong with a particular individual who did not maintain the visual perspicuity of the many, particularly the
reigning leadership.
In any respect, I am claiming that the present models of mathematics are a divergent lot which originate from a single model of counting whose adaptability for future conditions is limited and that
the present usage of mathematics to persist in efforts to survive by encouraging all social activities to "do the math" according to the prevailing axioms, also limits the ability of present human
societies to adapt to changes requiring a model of mathematics which is born from a different model of counting. Yet, because present social authority imposes the belief among the public that the
current model of thinking mathematically is a vital necessity, any other model which may surface due to a developmental change in human consciousness will be oppressed to comply and leading to
misunderstood conflicts of conceptualized orientation.
By viewing present Mathematics as a type of game with a given set of rules that become mastered by a few while the majority come to struggle with them, it is believed that those who come to master
mathematics are somehow more intellectually astute or even gifted, while the rest of humanity, the majority of humanity, does not have the intellectual fortitude to learn the rules and apply them
according to the dictates of those who have become convinced by their math instructors, and who thus instruct others with the same message, that a thorough grasp of higher mathematics is needed to work
competently in many fields of research. Even though it is realized that mathematics is a contour of thinking that can be expressed within the parameters of different mediums such as art and music,
mathematics is not typically viewed as a genre of art or musical composition using an alternative set of symbols, language and application. Even while it is known that those skilled in the use of
an abacus can perform simple calculations equal to other processes, the abacus, as an expressed model of computational effort, is not typically acknowledged as a language medium, just as
musical instruments and the instruments used by artistic illustrators are.
If Humanity encounters a sentient species born on a different planet, it may well be the case that their kind of mathematics originated by way of a different model of counting. Whereas
humanity began the trek of its mathematics along non-set progressions, a counting methodology which uses set progressions would be one in which larger amounts of information are taken into account.
The sets would not only exhibit a volume of objects to be counted, but a volume of objects/entities with more referencing to the originations of multiple complexities. In other words, one child
is on a trek of developing a mathematics by way of counting wooden blocks, while another is on a different path defined by counting sets of different kinds of blocks with different labels. Such a
situation is made more difficult when we have child development experts who have created what are called developmental milestones, which are used by multiple others to gauge the developmental
performance of children by their corresponding ability to perform a task according to guidelines, such that an adult observer may not be competently able to deduce that a given child is able to
mentally use a different model for developing rudimentary counting associations. The problem is made more acute because society is governed by a present model of mathematics which sees the world
through a predominant two-patterned prism/kaleidoscope governed by axioms which are initiated in accord with such a binary orientation. Society and social behavior are exhibited to conform to this
model because leaderships in business, government and religions expect compliance with their alternative forms of dichotomization such as qualitative/quantitative products and productivity,
Patriotism/Treason, good/evil, etc...
Another problem with the foregoing reliance on orienting human existence to a geologic scale is to overlook the possibility that the human life-span as a species might be better described
in terms similar to the life-cycle of a vinegar (fruit) fly, which has a relatively short life span (of about two weeks in duration). I mention the fruit fly not only because it has been used
extensively in genetics experiments but also in a metaphorical sense because of its two wings, though other two-winged life forms might be used to express the notion of "flightedness", which can be
used to illustrate the phenomenon known as pure math and its suggested "loftedness" and ability to soar above conventional (applied) mathematics; such as the view I believe to have been illustrated
by Godfrey Harold Hardy in his distaste for applied mathematics, because he viewed pure mathematics as having an exalted character, in line with those who think that mathematics is the Queen of all
other sciences.
Regarding the question of how we might develop a "better" mathematics, in terms of creating conditions for improving the quality of life by creating society in accord with such a mathematics, the
first step is to recognize that the present mathematics has an origin based on a formula that comes to assert itself as a dominant theme in the practices of business, government and religion. By being
made aware that the current model of mathematics is a limitation, that it is but a species of cognitive activity which may be akin to a primitive mentality... as one might describe the human brain
having developed along a three-patterned (Paul MacLean's triune brain complex) course of Reptilian- Mammalian- New-mammalian, then one might come to view mathematics in a state of primitivity it
cannot grow out of. Like a developmental stage experienced by those life forms which use some type of cocooning process (to give but one type of analogy), the current model of mathematics is a
primitive stage which is binding human development... as if humanity's mentality is akin to a cyclic state awaiting some change in the environment to trigger its further development; despite many who
would claim that humanity's model of mathematics is a fully developed butterfly (along with the moths and skippers which make up the insect order Lepidoptera), winging its way among the nectars of
different subject-flowers. (Skippers are considered an intermediate form between moths and butterflies.)
When I speak of a "Dynamic Calculus", I am not referring explicitly to the model of mathematics named Calculus. Calculus, as defined by the Encyclopædia Britannica, is: (Calculus is a) branch
of mathematics concerned with the calculation of instantaneous rates of change (differential calculus) and the summation of infinitely many small factors to determine some whole (integral calculus).
As defined, this does actually pertain to the development of Mathematics, unless for some reason a reader thinks Mathematics was developed instantaneously (spontaneously). In the sense that I am
using the word, it refers to the whole of mathematics as a type of calculating methodology, which one might refer to as a tree with multiple branches and shoots from its roots, or as a single species
with different racial classifications, or as a library with different genres of books, magazines, newspapers, etc...
When using a biology metaphor to describe mathematics' development from a basic (what I presently believe to be) singular origin, one might also be inclined to continue using the analogy to present
the view that mathematics may be exhibiting not a fully-fledged development but some pre-stage position of its metamorphosis. Then again, if we look at the underlying skeleton (blueprint/
scaffolding) of mathematics as expressed by its heavy usage of cognitively described dualities, one might want to say this equates with a basic cellular doubling event or a bilateral body plan, of
which we may note the existence of 3 types of body plans viewed from different vantage points, though the two-word phrase "body plan" is not characteristically used in several of the contexts from
which the following thimble-full of examples have been culled:
Although the foregoing example of sponge body plans would seem to indicate that this represents a fundamental structure, it needs to be placed into the larger context of other animal body plans
which are described by the quantity of germ layers, namely diploblastic (two layers) and triploblastic (three layers). The usage of an underlying two-patterned orientation found in mathematics,
computer language and other subjects suggests we are dealing with the prominent presence of a diploblastic formation, found in the developmental body plans of the cnidarians (sea anemones, corals,
and jellyfish), which are diploblastic, the inner endoderm and outer ectoderm being separated by an acellular mesoglea. ("circulation." Encyclopædia Britannica.) A triploblastic model of development
is said to occur in animals from earthworms to humans, while Porifera (sponges) and Placozoa lack clearly defined tissues and organs, (yet) their cells specialize and integrate their activities.
("animal." Encyclopædia Britannica.) And though the term "monoblastic" (one germ layer) is not part of the routine vocabulary of comparative anatomy/embryology, let us nonetheless make note of it,
including the idea that in some perspectives the sponges are considered to be a representative animal thereof. In other words, we see an increase in germ layer quantity from simple to complex life forms.
Similarly, let us speculate that a prominent usage of two-patterned ideas found in Mathematics and other subject areas provides us with the consideration that we are dealing with a circumstance of
primitivity... unless, as in embryological studies, the prominent usage of such is like a student who has a preoccupation with the body functions which arise from the 2nd, middle (mesodermal) layer,
though in actual evolutionary terms it appears to have occurred third in the overall developmental sequence, even while some other researchers claim the neural crest as a fourth germ layer. However,
the neural crest is said to arise from groupings of ectodermal cells, developed as a column on each side of the neural tube. ("nervous system, human." Encyclopædia Britannica.)
In several cases one may find illustrations exhibiting a simple numerical pattern that might otherwise be viewed as a set (set of one, set of two, set of three, etc...), though in different
applications the actual quantity of layers, departments, structures, levels, forms, compartments, distinctions, appendages, etc., may be fewer or several more when viewed from a different vantage
point. Nonetheless, the standard cognitive formula appears to be the usage of only a few, such as one, two, three or multiples thereof, but infrequently a large quantity seen as a standard
repetition in multiple subjects. For example, though mathematics has an infinity of numbers at its disposal, most equations are formulas advancing simple patterns configured into assumed
complexities, like someone playing chess, checkers, or a card game against themselves or multiple imaginary players, which entail different types of betting, bargaining and baiting in order to best
some effort set as an objective before them... and frequently appears to be like a child with an imaginary playmate carrying on an either/or silent/outspoken dramatized conversation. Examples of
this are when someone talks to their vehicle accusingly, or in a manner trying to coax out another mile before they run out of gas, which may alternatively be measured in terms of full, half,
quarter and eighth of a tank, unless some other system of measurement is employed. Another example is when someone tries to beat themselves at a previous (sports, exercise or other) record, likely
using some numerical indexing formula. Mathematicians are not exempt from competing against themselves to pursue some "higher ground" of formulaic accomplishment.
Yet, we also see lots of mimicry, which we might use to explain the recurrence of a few numerically indicated patterns cropping up in different subjects. Even the exceptions to recurring patterns
are themselves a recurring pattern with a low quantitative repetition. For example, we might speak of the recurring octet formula found in chemistry, but we don't see the value "8" as a wide-spread
occurrence of notability as we do other patterns such as twos and threes. Whereas we can witness a lot of people engaging in the activity (in different contexts) of using one enumerated value that
is added, subtracted, multiplied or divided from/to/with another number quantity to arrive at a third actual value or approximation, they may nonetheless hold an unrealized adherence to a formulated
dichotomous theme. Whether one examines musical notation or the notation sometimes employed in the counting of squares in checkers or chess, the latter two rely on a system of play dictated by being
able to move playing pieces in one of three directions noted as horizontal, diagonal and vertical, with musical notation often subscribing to the formula of using fractions and not enlarged sets of
whole numbers.
Whereas we are dealing with the language of mathematics, one can also look at common verbal expressions involving grammar, which can be subsumed under the heading of linguistics; in terms of
grammar, Noam Chomsky's "transformational grammar" idea can be reviewed to identify the established usage of a dichotomy called Surface and Deep structures, though one might also refer to the
dichotomy of consonants and vowels, which are "added" together to produce syllables, whereby a third feature known as suprasegmentals arises. In addition to this "two" notation found in language, we
find that the idea of word order in Linguistics distinguishes three-patterned linear orders of arranging a subject, object and verb.
These are all (six) possible word orders for the subject, object, and verb, in the order of most common to rarest (the examples use "she" as the subject, "loves" as the verb, and "him" as the object):
1. SOV is the order used by the largest number of distinct languages; languages using it include Japanese, Korean, Mongolian, Turkish, the Indo-Aryan languages and the Dravidian languages. Some,
like Persian, Latin and Quechua, have SOV normal word order but conform less to the general tendencies of other such languages. A sentence glossing as "She him loves" would be grammatically
correct in these languages.
2. SVO languages include English, Bulgarian, Macedonian, Serbo-Croatian,[10] the Chinese languages and Swahili, among others. "She loves him."
3. VSO languages include Classical Arabic, Biblical Hebrew, the Insular Celtic languages, and Hawaiian. "Loves she him."
4. VOS languages include Fijian and Malagasy. "Loves him she."
5. OVS languages include Hixkaryana. "Him loves she."
6. OSV languages include Xavante and Warao. "Him she loves."
(Wikipedia: Word Order)
The following (first) list comes from (What is Word Order?), which I truncated into fewer word-association compilations. The link provides information on the three-patterned word order idea, yet
presents a seventh category called "free word order". Yet if one looks at this assumed "free-ness", it too must contain one of the six orders claimed in the foregoing Wikipedia article, unless one is
speaking without using a subject, object or verb as distinctions. However, if these three are used, there are only six possibilities, unless one alternatively uses different types of word categories
or uses a pattern-of-one or a pattern-of-two combination method, such as using only one word or two words instead of three. It is an intellectual juggling mechanism seen in every single subject, be
it mathematics, philosophy, politics, religion, etc...
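As a quick illustration of the "only six possibilities" claim above (a sketch of my own, not from the linked pages), the six orders are just the 3! permutations of the three categories:

```python
# Enumerate the 3! = 6 linear orders of Subject, Verb, Object, using the
# article's example sentence "she loves him".
from itertools import permutations

words = {"S": "she", "V": "loves", "O": "him"}
for order in permutations("SVO"):
    label = "".join(order)                        # e.g. "SOV"
    print(label + ":", " ".join(words[c] for c in order))
```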
The idea of a "free" word order needs to be seen as a cognitive orientation that has many parallels if we group the different variations of the 3-patterned Subject-Object-Verb as a separate triad
identity... so to speak, and the idea of a separate "free" category as a different identity... this combination can be seen as a 3-to-one complex or as a duality. These identities might not come to
mind as patterns if we are looking only for compatibilities in a single subject area. However, if we look beyond the territory of a single subject area, we come face to face with similarities which
can be viewed in the context of illustrating a repetition of cognitive behavior. This link (3-to-1 ratio examples) and the following example illustrating another "free" reference clearly indicate
similar cognitive patterns in different subject areas:
Date of Origination:
19th March 2022... 5:23 AM
Secondary page split:
24th March 2022... 3:24 AM
Updated Posting:
2nd January 2023... 11:01 AM
Initial Posting: | {"url":"https://www.threesology.org/math-perspective-14.php","timestamp":"2024-11-15T03:09:31Z","content_type":"application/xhtml+xml","content_length":"78621","record_id":"<urn:uuid:a74ffbac-f6ce-45ce-b6b3-bbade61d4d58>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00806.warc.gz"} |
Gravitational Wave Memory: A Tool for Measuring Spacetime Symmetries | TechieTonics
Gravitational Wave Memory: A Tool for Measuring Spacetime Symmetries
When we talk about the fabric of reality in terms of physics, we deal with interesting abstractions and tonnes of complexities. One such intriguing concept of Einstein's theory of general relativity
is the existence of gravitational waves.
As the name suggests, these ripples are generated in spacetime by some of the universe's most violent and energetic processes, such as mergers of compact stars or the dents that black holes make in
the spacetime landscape. Interestingly, whenever these waves pass through, they leave a measurable imprint on the relative positions of objects—a phenomenon known as gravitational wave memory.
Gravitational wave memory is the small but permanent change in the positions of objects after a gravitational wave has passed through. Think of it as a tiny, lasting shift in the “fabric” of space
caused by these waves. Studying these shifts can help us understand the basic properties of spacetime itself.
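As a rough back-of-the-envelope illustration (my own sketch, not from the article), displacement memory can be expressed as a permanent offset in the gravitational-wave strain h(t): two free test masses initially a distance L apart remain displaced even after the wave has passed,

```latex
\[
\frac{\Delta L}{L} \;=\; \frac{1}{2}\,\Delta h_{\mathrm{mem}},
\qquad
\Delta h_{\mathrm{mem}} \;=\; \lim_{t\to+\infty} h(t) \;-\; \lim_{t\to-\infty} h(t),
\]
```

so a nonzero offset means the "fabric" does not fully relax to its original state.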
Researchers at Gran Sasso Science Institute (GSSI) and the International School for Advanced Studies (SISSA) recently carried out a study into this possibility. The team explored the use of
gravitational wave memory as a tool to measure spacetime symmetries.
Spacetime is the framework in which all physical events occur. It combines the three dimensions of space (length, breadth and height) with the one dimension of time into a four-dimensional
continuum. Without spacetime, it is difficult for us to comprehend space and time independently.
Symmetry, here, refers to a property where a system remains unchanged under certain transformations. For example, a circle remains symmetrical no matter how we rotate it.
So, combining spacetime and symmetries means that there are specific symmetries related to the structure of spacetime. These symmetries show that the fundamental properties of spacetime remain
unchanged under various transformations.
Displacement and Spin Memory in Focus
To understand these symmetries, the study focused on the following two key phenomena:
• Displacement Memory: It refers to the permanent shift in the distance between objects caused by a gravitational wave.
• Spin Memory: It involves changes in the angular momentum of objects.
By tracking these two effects, the scientists hope to unravel new dimensions of our understanding of spacetime, or perhaps to surface some hidden aspects of symmetry and structure of which we are
still not aware.
Connecting Gravitational Wave Memory and Quantum Mechanics
Boris Goncharov, co-author of the paper, explained how he connected the dots between gravitational wave memory and its connection with low-energy physics and quantum mechanics.
Low-energy physics refers to the energy scales where quantum field theories accurately describe physical processes, and quantum gravitational effects are not significant. At this level, the behavior
of quantum particles, such as those involved in gravitational waves, can be observed.
During his Ph.D. studies, Goncharov encountered Weinberg's soft graviton theorem while discussing gravitational wave memory. This steered his research trajectory towards the "Infrared
Triangle," which connects the soft theorem with gravitational wave memory and the symmetries of spacetime at infinity from gravitational wave sources.
Weinberg’s soft graviton theorem and the ‘infrared triangle’ are both mathematical frameworks that describe aspects of gravitational wave memory.
• Weinberg's soft graviton theorem helps us understand how gravitational waves can permanently alter the shape of spacetime, even after the waves move away. It's like dropping a stone into a pond,
which causes waves to spread out across the water surface. Even after the waves have stopped, the water surface isn't exactly the same as it was before; the waves leave tiny, long-lasting ripples.
• On the other hand, the infrared triangle connects complex ideas to things we can actually observe. It shows how the soft graviton theorem relates to bigger ideas about the nature of spacetime.
Goncharov added that by leveraging gravitational wave memory to probe spacetime symmetries, they are exploring how Einstein’s General Relativity aligns with the rules of quantum physics.
Despite General Relativity's century-long success and its lack of a natural fit with the microscopic world, he is optimistic about the approach, believing it could lead to a substantial and
promising unification in physics.
Detecting these memory effects and spacetime symmetries could provide grounded interpretations of general relativity and quantum field theory. There is also a possibility that it could lead to new
or refined theories about the fundamental nature of the universe.
These advanced methods are helping us probe the depths of the universe. And I feel we inch closer to unravelling some of the most profound mysteries of existence.
Via: Physics Magazine | {"url":"https://techietonics.com/space-tonics/gravitational-wave-memory-spacetime-symmetries.html","timestamp":"2024-11-05T09:50:14Z","content_type":"text/html","content_length":"103174","record_id":"<urn:uuid:db712fa6-7bda-485f-9fc5-dbabd781d446>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00703.warc.gz"} |
The amazing singing banana – James Grime – does it again: What’s the probability you live in an odd numbered house?
Posted by: Gary Ernest Davis on: August 5, 2011
If only the accomplished, clever James Simons (see previous video) were this entertaining.
James Grime stands a serious chance of turning people onto mathematics.
How are houses numbered where you live? In the US, odd-numbered houses are on one side of the street and even-numbered houses are on the other side. In Philadelphia, houses on the North side of
East-West streets, and on the East side of North-South streets, are odd-numbered. Not every street has the same number of houses on the even and odd sides. I wonder if the probability would be
different if the Facebook poll reached mostly Americans? Mostly Philadelphians?
Well done, very entertaining vid, and very interesting topic. Two comments:
(1) You say, “What’s the probability that you live in an odd-numbered house?” Actually, that probability for me is zero!
(2) I like your analysis of a street having either an odd or even number of houses. However, the illustration, with a small number of houses, seems to equate “street” with “block”. And of course,
looking at house numbers block by block has to allow for lots of variation, such as blocks that begin with even numbers.
I think a slightly more precise way of saying it would be to say that house numbers tend to be doled out in strips, e.g., 501-599 or 2801-2860, and that these strips of numbers usually start with an
odd number, something ending in 1, and then end with either an odd or even number.
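A quick numerical check of the strip-length estimate worked out in the next paragraph (a sketch of my own, using the poll figures quoted there):

```python
# If numbering "strips" start at an odd number and half of them end odd
# (one surplus odd house each), then N strips of average length L give a
# surplus of ~N/2 odd houses out of ~N*L total, i.e. a fraction 1/(2L).
odd, even = 211_803, 209_964
total = odd + even                  # 421,767 responses
surplus = odd - even                # 1,839 extra odd houses
fraction = surplus / total          # roughly 1/229
L_estimate = 1 / (2 * fraction)     # solve 1/(2L) = fraction  ->  L ~ 115
print(total, surplus, round(total / surplus), round(L_estimate))
```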
The second set of poll numbers you provide is: odd = 211,803, even = 209,964, for a difference of 1839, which is roughly 1/229 of the total of 421,767. If we assume that half the strips end in odd
numbers, and half end in even numbers, then we would need an average strip length of about 115 to add those additional 1839 odd-numbered houses. I would go on to hypothesize that the true average
strip length is less than 100 – just based on observations of how houses are numbered – and that the difference between the true value, and 115 – i.e., why we would need *more* strips to get our
extra odd-numbered houses – is accounted for by the fact that more strips end with an even number than with an odd, by virtue of the fact that houses are most often situated in pairs facing each
other across the street. | {"url":"http://www.blog.republicofmath.com/the-amazing-singing-banana-james-grime-does-it-again-whats-the-probability-you-live-in-an-odd-numbered-house/","timestamp":"2024-11-03T13:47:12Z","content_type":"application/xhtml+xml","content_length":"50500","record_id":"<urn:uuid:52205bfa-d727-4247-8458-d5b2c13949f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00565.warc.gz"} |
Is it ethical to pay for MATLAB assignment help in simulation of renewable energy integration into smart cities? | Pay Someone To Do My Matlab Assignment
Is it ethical to pay for MATLAB assignment help in simulation of renewable energy integration into smart cities? Many specialists in the EU have advised on MATLAB
integration to solve problems in renewable finance. These specialists ask about MATLAB control files (see the following). For MATLAB integration into smart cities, see the following guidance. Matlab
Checkout Tool: A MATLAB check-out tool, based on MATLAB and an R-based checker, was first used by the State of England (SIE) to assess how MATLAB can be used to predict the market price of homes for a
given customer group on the basis of their price perception. Today, MATLAB integration into smart cities is easy to read, quick, and easy to understand. The Check-Out Tool requires MATLAB environment
tool(s) such as MATLAB's FindIn and FindOut tools to operate. Since MATLAB integration into smart cities is not easy to use, a check-out solution may be a good solution. The MATLAB Checkout Tool was
created by the State of England and is part of a larger MATLAB integration in smart cities. FindIn test library: In the MATLAB check-out test software, MATLAB checked out to extract data from MATLAB
installation files, thus producing results. Since MATLAB checks out scripts for MATLAB tests, the MATLAB Checkout Tool worked as follows. Check-out Tool: The Check-out Tool acts as a MATLAB interface
to check that the MATLAB check-out process is performed. The Matlab Checkout Tool was created by the State of England, while the MATLAB Checkout Tool is created by the State of India. The Check-out
Tool is composed of two parts: check-out data plus a MATLAB example; the Check-out Tool on the MATLAB example would be the second part. The Check-out Tool on the MATLAB example works according to the
MATLAB Script in MATLAB.
Is it ethical to pay for MATLAB assignment help in simulation of renewable energy integration into smart cities? I was previously writing in Matlab but came across this piece when someone suggested I may
not write in just Matlab because I am in Windows. I tried to explain my problem and understood that what people may be confused about is they would not always use MATLAB for project. In particular,
MATLAB should be on the desktop and would provide the user a lot of help if he or she was aware of some tutorials for doing that, and for some reason that wasn't possible. For example, we
have a team of MATLAB engineers who are tasked with solving some engineering problems (with the promise to generate a database of our project "procedure/metrics/synthesis") in MATLAB.
The motivation for this discussion was that if the project required a lot of help, then here we are: A team of engineers in a small academic project had some understanding (by the
way) of what the business requirements are and what they intended from a risk perspective. They came up with something called "data fusion" that was meant to do "analysis" of the data. A
cool example paper describing the procedure being used was by Stéphane Verbeau, who had worked on MATLAB simulations a few years ago as a lead engineer.
He wondered if MATLAB used the same functions for modeling the calculations in the simulations. There are lots of very common mistakes people make and they are usually understood in context and
clearly understood. I was confused by the “how it works” or “why it matters” or even the use of a function when they were talking about the application (code). This is where I came up with all the
examples mentioned above: I was interested to see if there was a problem in MATLAB when in the course of simulation of the procedure I was working on (one can just search for "functions" and find).
Is it ethical to pay for MATLAB assignment help in simulation of renewable energy integration into smart cities? What if the number of partners is around ten and a half million and there are a
thousand and a half suppliers? Is the future of energy integration and renewable energy integration into smart cities the same as or more than the number of partners? Does the need for the energy integration and
renewable energy integration into smart cities a part of the existing strategic infrastructure of the future? There are a few strategies we can make to find out the number of partners in renewable
energy integration and renewable energy integration into smart cities. First of all, the number of partners in renewable energy integration and renewable energy integration into smart cities is often
very small. For example, the total number of participants and contributors is about 2 million in electricity networks, which is close to 5% of total energy. Secondly, there are a few strategies
that you can make to maximize the number of partners by: •Creating a financial model for energy integration. This model should start from the initial energy from
the sources such as the air, water and surface. It should run up to 100% of the energy to generate the required operating power. •Building a financial model for energy integration into smart cities.
This should start from the initial energy from the sources such as the air, water and surface. It should run up to 100% of the energy to generate the required operating power. •Building a financial
model to generate the necessary marketable products of energy and clean energy standards. This should start from the initial energy from power plants at the factory level. It should run up to
100% of energy to generate the required operating power. Another more established strategy that you can make is to build a financial model for renewable energy integration into companies. This model
should start from the initial energy from the platforms such as photovoltaic and wind turbines and start to develop the necessary strategies for development and deployment. Depending on the
definition of a startup space, there may | {"url":"https://domymatlab.com/is-it-ethical-to-pay-for-matlab-assignment-help-in-simulation-of-renewable-energy-integration-into-smart-cities","timestamp":"2024-11-12T11:50:58Z","content_type":"text/html","content_length":"111440","record_id":"<urn:uuid:c4d3709f-a575-428f-bba5-75a3c45ec7aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00219.warc.gz"} |
Monday, 01 January 2024
Title: Mini-course Global Analysis of Locally Symmetric Spaces with Indefinite Metric Lecture 1
The local to global study of geometries was a major trend of 20th century geometry, with remarkable developments achieved particularly in Riemannian geometry. In contrast, in areas such as
pseudo-Riemannian geometry, familiar to us as the space-time of relativity theory, and more generally in pseudo-Riemannian geometry of general signature, surprisingly little was known about global
properties of the geometry even if we impose a locally homogeneous structure. This theme has been developed rapidly in the last three decades.
In the series of lectures, I plan to discuss two topics by the general theory and some typical examples.
1. Global geometry: Properness criterion and its quantitative estimate for the action of discrete groups of isometries on reductive homogeneous spaces, existence problem of compact manifolds modeled
on homogeneous spaces, and their deformation theory.
2. Spectral analysis: Construction of periodic L2-eigenfunctions for the Laplacian with indefinite signature, stability question of eigenvalues under deformation of geometric structure, and spectral
decomposition on the locally homogeneous space of indefinite metric.
Title: Classification, reduction and stability of toric principal bundles
Let X be a complex toric variety equipped with the action of an algebraic torus T, and let G be a complex linear algebraic group. We classify all T-equivariant principal G-bundles E over X and the
morphisms between them. We then give a criterion for the equivariant reduction of the structure group of E to a Levi subgroup of G in terms of the automorphism group of the bundle. (With A. Dey, J.
Dasgupta, B. Khan and M. Poddar.)
Alex Lubotzky & Bharatram Rangarajan
Title: Mini-course Uniform Stability of Higher-rank Arithmetic Groups Lecture 1
Lattices in high-rank semisimple groups enjoy a number of rigidity properties like super-rigidity, quasi-isometric rigidity, first-order rigidity, and more. In these lectures, we will add another
one: uniform (a.k.a. Ulam) stability. Namely, it will be shown that (most) such lattices D satisfy: every finite-dimensional unitary "almost-representation" of D (almost w.r.t. a
sub-multiplicative norm on the complex matrices) is a small deformation of a true unitary representation. This extends a result of Kazhdan (1982) for amenable groups and of Burger-Ozawa-Thom (2013)
for SL(n,Z), n > 2. The main technical tool is a new cohomology theory ("asymptotic cohomology") that is related to bounded cohomology in a similar way to the connection of the last one with ordinary
cohomology. The vanishing of H2 w.r.t. a suitable module implies the above stability. The talks are based on a joint work of the speakers with L. Glebsky and N. Monod. See arXiv:2301.00476.
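For orientation, one common way to make "uniform stability" precise (my paraphrase, not taken from the abstract): for a map φ: Γ → U(n) into a unitary group, set

```latex
\[
\operatorname{def}(\varphi) \;=\; \sup_{g,h \in \Gamma} \bigl\| \varphi(gh) - \varphi(g)\varphi(h) \bigr\|,
\qquad
\operatorname{dist}(\varphi, \pi) \;=\; \sup_{g \in \Gamma} \bigl\| \varphi(g) - \pi(g) \bigr\|,
\]
```

and call Γ uniformly stable if every φ of small defect lies at correspondingly small distance from a genuine unitary representation π, with estimates independent of the dimension n.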
Title: The existence problem of compact quotients of pseudo-Riemannian homogeneous spaces
Let G/H be a homogeneous space. If a discrete subgroup of G acts properly and freely on G/H, the quotient space becomes a manifold locally modelled on G/H, and is called a Clifford-Klein form. Since the
late 1980s, the existence problem of compact Clifford-Klein forms of pseudo-Riemannian homogeneous spaces has been studied by various methods (e.g. Cartan projection of reductive Lie groups, rigidity
theory in homogeneous dynamics, tempered unitary representations, de Rham and relative Lie algebra cohomology, Anosov representations). In this talk, I will explain my joint work-in-progress with
Fanny Kassel (IHES) and Nicolas Tholozan (ENS) on a new necessary condition for the existence of compact Clifford-Klein forms. It is formulated in terms of the homotopy theory of sphere bundles and
hence related to the Adams operations in KO-theory.
Tuesday, 02 January 2024
Title: Mini-course Global Analysis of Locally Symmetric Spaces with Indefinite Metric Lecture 2
(Abstract as for Lecture 1 above.)
Title: Moduli of binary cubic forms (zoom)
We discuss the moduli of binary cubic forms. We give a description of this moduli space in terms of triples of an associated CM elliptic curve E, a degree-3 isogeny from E to E, and a point on E. We
will also discuss an application of our construction.
Title: Standard compact Clifford-Klein forms and Lie algebra decompositions
Let G be a non-compact linear simple Lie group and H ⊂ G a reductive subgroup. We say that the homogeneous space G/H admits a standard compact Clifford-Klein form if there exists a reductive subgroup
L ⊂ G such that L acts properly and co-compactly on G/H. I will describe relations between real root decompositions of Lie triples (g, h, l) corresponding to standard compact Clifford-Klein forms,
under the assumption that g is not equal to h + l. This gives new classes of homogeneous spaces G/H which do not admit standard compact Clifford-Klein forms. For instance proper regular subalgebras h
of g never generate homogeneous spaces G/H which admit standard compact Clifford-Klein forms (other than g = h + l).
Alex Lubotzky & Bharatram Rangarajan
Title: Mini-course Uniform Stability of Higher-rank Arithmetic Groups Lecture 2
(Abstract as for Lecture 1 above.)
Title: On the Hasse principle for reductive algebraic groups over finitely generated fields (zoom)
One of the major results in the arithmetic theory of algebraic groups is the validity of the cohomological local-global (or Hasse) principle for simply connected and adjoint semisimple groups over
number fields. Over the last several years, there has been growing interest in studying Hasse principles for reductive groups over arbitrary finitely generated fields with respect to suitable sets of
discrete valuations. In particular, we have conjectured that for divisorial sets, the corresponding Tate-Shafarevich set, which measures the deviation from the local-global principle, should be
finite for all reductive groups. I will report on recent progress on this conjecture, focusing in particular on the case of algebraic tori as well as on connections to groups with good reduction.
Wednesday, 03 January 2024
Title: Mini-course Global Analysis of Locally Symmetric Spaces with Indefinite Metric Lecture 3
(Abstract as for Lecture 1 above.)
Marek Kaluba & Piotr Nowak
Title: Property (T) for automorphism groups of free groups Lecture 1
Talk 1 (Kaluba) We will discuss the notion of positivity in rings, understood as being a sum of squares. This problem is classically relevant for polynomials, and in the context of property (T) it
appears in the group ring and its augmentation ideal. We will discuss a computer-assisted approach, via semidefinite programming, of verifying whether an element of the group ring can be expressed as
a sum of squares. An important feature of this method is that despite using numerical information, in certain situations it provides a rigorous proof of positivity. Together with a characterization
of property (T) via positivity in the group ring this provides an approach to proving Kazhdan’s property (T) for some groups.
Talk 2 (Nowak) We will use the methods described in the first talk to prove property (T) for automorphism groups of free groups. First we will consider a singular case of Aut(F5) but the main focus
will be the infinite case. We will describe an inductive procedure that allows one to deduce property (T) for families of groups like SLn(Z) and Aut(Fn) from positivity of a single group ring element.
Using this inductive approach we will prove property (T) for Aut(Fn) for all n ≥ 6, and reprove property (T) for SLn(Z), n ≥ 3. We will also discuss several consequences and applications, in
particular we will show new estimates for Kazhdan constants.
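As a toy illustration of the positivity check described in Talk 1 (my own sketch, not the authors' code; in practice the Gram matrix comes out of a semidefinite-programming solver, and the numerical certificate is then turned into a rigorous one): an element is a sum of squares exactly when it admits a positive-semidefinite Gram matrix, so the final verification reduces to a PSD test.

```python
# Check whether a symmetric Gram matrix Q is positive semidefinite;
# Q >= 0 certifies that the element it represents is a sum of squares.
import numpy as np

def is_sos_certificate(Q, tol=1e-9):
    Q = np.asarray(Q, dtype=float)
    eigenvalues = np.linalg.eigvalsh((Q + Q.T) / 2)  # symmetrize, then eigenvalues
    return bool(eigenvalues.min() >= -tol)

# Toy examples: L @ L.T is PSD by construction; the second matrix is not.
L = np.array([[1.0, 0.0], [2.0, 1.0]])
print(is_sos_certificate(L @ L.T))            # True
print(is_sos_certificate([[0, 1], [1, 0]]))   # False (eigenvalues -1, 1)
```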
Title: Random walks on group extensions
Lindenstrauss and Varju asked the following question: for every prime p, let S_p be a symmetric generating set of G_p := SL(2, F_p) × SL(2, F_p). Suppose the family of Cayley graphs {Cay(SL(2, F_p),
pr_i(S_p))} is a family of expanders, where pr_i is the projection to the i-th component. Is it true that the family of Cayley graphs {Cay(G_p, S_p)} is a family of expanders? We answer this question
and go beyond that by describing random walks in various group extensions. (This is a joint work with Srivatsa Srinivas.)
Alex Lubotzky & Bharatram Rangarajan
Title: Mini-course Uniform Stability of Higher-rank Arithmetic Groups Lecture 3
(Abstract as for Lecture 1 above.)
Thursday, 04 January 2024
Title: Hasse principle for reductive groups over p-adic function fields
In this talk we survey results on Hasse principle for homogeneous spaces under reductive groups over semiglobal fields, i.e., function fields of curves over complete discrete valued fields. We state
some conjectures and explain recent progress on these conjectures for function fields of p-adic curves.
Title: Linear representations of the Grothendieck-Teichmüller group
I will speak on work with F. Bleher and A. Lubotzky arising from a construction by Lubotzky and Grunewald of linear representations of the automorphism group Aut(Fn) of a free group on n elements. It
turns out that if one replaces Fn by its profinite completion F̂n, the construction of linear representations with large image of Aut(F̂n) becomes simpler and more general than in the discrete case.
This leads to a solution of an open problem from the 1990's having to do with constructing natural non-abelian representations of the so-called Grothendieck-Teichmüller group GT inside Aut(F̂2). I
will discuss where GT comes from and current open problems in this area.
Title: The genus of division algebras over discrete valued fields
Given a field with a set of discrete valuations, we show how the genus of any division algebra over certain fields is related to the genus of some residue algebras at various valuations and the
ramification data. Applications include showing triviality of the genus of quaternions over many fields such as higher local fields, function fields of curves over higher local fields and function
fields of curves over real closed fields. When the base field is a function field of a curve over a global field with a rational point, the genus of any quaternion is related to the 2-torsion of the
Tate-Shafarevich group of the Jacobian. As a consequence, when the curve is elliptic, the size of the genus can be computed directly using arithmetic data of the elliptic curve.
Title: Lower dimensional Betti numbers of homogeneous spaces of Lie groups
We will present our joint work with I. Biswas and C. Maity on certain explicit descriptions of lower dimensional Betti numbers of homogeneous spaces of Lie groups. We will begin by recalling some of
the earlier works in this subject and give our motivation. We will then describe our results and some of the applications in special cases.
Title: Property (T) for fiber and semidirect products
Extension of a Property (T) group by another is a Property (T) group. It follows that Property (T) is closed under product. The same is true for fiber products of two homomorphisms from Property (T)
groups, provided one of them splits. In general there are counterexamples. The special case of fiber product of two copies of a homomorphism reduces to the question of Property (T) for a semidirect
product given the same for the non-normal factor. Here necessary and sufficient conditions can be given. This is a joint work with M. Mj. The general case of relative Property (T) for semidirect
products, where the normal factor need not be abelian, is an ongoing work with S. Nayak.
Title: Conjugacy width in higher rank arithmetic groups of orthogonal type (zoom)
A basic question in group theory is what can we learn about a finitely generated and residually finite group from its profinite completion? In this talk, we will focus on the relation between the
widths of conjugacy classes in a higher rank arithmetic group to the corresponding widths in the profinite completion of this group and explain the connection to the Congruence Subgroup Property.
This is joint work with Nir Avni.
Title: Central extensions of (arithmetic) lattices
I will describe joint work with Domingo Toledo on residual finiteness for cyclic central extensions of fundamental groups of aspherical manifolds, its application to central extensions of certain
arithmetic lattices, and discuss some open problems on residual properties of central extensions of lattices and their connections to the geometry of locally symmetric spaces.
Friday, 05 January 2024
Title: Thin Orbits and Applications (zoom)
We will discuss some recent progress on problems that can be reformulated as Diophantine questions on orbits of ”thin” groups.
Title: Regularity and mod-p invariance for elliptic curves
Let K be an imaginary quadratic field and E/K be an elliptic curve with good ordinary reduction at an odd prime number p. We study the rational points of E along the anticyclotomic Zp-extension of K.
Title: Closure of orbits of the pure mapping class group on the character variety
For every surface S, the pure mapping class group GS acts on the SL2-character variety ChS of the fundamental group P of S. The character variety ChS is a scheme over the ring of integers.
Classically this action on the real points ChS(R) of the character variety has been studied in the context of the Teichmuller theory and SL(2,R)-representations of P. In a seminal work, Goldman
studied this action on a subset of ChS(R) which comes from SU(2)-representations of P. In this case, Goldman showed that if S is of genus g > 1 and zero punctures, then the action of GS is ergodic.
Previte and Xia studied this question from a topological point of view, and when g > 0, proved that the orbit closure is as large as algebraically possible. Bourgain, Gamburd, and Sarnak studied this
action on the Fp-points ChS(Fp) of the character variety where S is a punctured torus. They conjectured that in this case this action has only two orbits, where one of the orbits has only one point.
Recently, this conjecture was proved for large enough primes by Chen. When S is an n-punctured sphere, the finite orbits of this action on ChS(C) are connected to the algebraic solutions of Painlevé
differential equations.
Title: Heights on character varieties, free subgroups and spectral gaps
Given a finite set of elements in a semisimple algebraic group over the field of algebraic numbers, one can define its normalized height as a weighted sum over all places of its associated joint
spectral radii. The height gap theorem asserts that this height is bounded away from zero provided the subgroup generated by the finite set is Zariski-dense. This result can be seen as a
non-commutative analogue to the Lehmer or Bogomolov problem in diophantine number theory. In recent joint work with Oren Becker, we show how this can be used to deduce uniform spectral
gaps for actions of Zariski-dense subgroups on homogeneous spaces of algebraic groups and establish the existence of a uniform lower bound on the first eigenvalue of Cayley graphs of certain finite
simple groups of Lie type.
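Schematically (my own paraphrase of the standard normalization, not taken from the abstract): for a finite subset F of G(K), with K a number field and V_K its set of places, the normalized height is

```latex
\[
\hat{h}(F) \;=\; \sum_{v \in V_K} \frac{[K_v : \mathbb{Q}_v]}{[K : \mathbb{Q}]}\,
\log^{+} \Lambda_v(F),
\]
```

where Λ_v(F) is the joint spectral radius of F in a matrix norm at the place v and log⁺ = max(log, 0); the height gap theorem then asserts that ĥ(F) is bounded below by a positive constant whenever the subgroup generated by F is Zariski-dense.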
Title: Orbit closures on homogeneous spaces and applications to number theory
We study actions of algebraic tori on homogeneous spaces. A conjecture of Cassels and Swinnerton-Dyer about the purely real (homogeneous) forms, formulated in 1955 and still open, can be translated
in terms of such actions. We prove that the natural generalization of the conjecture to the non-purely real forms is not valid. The proof is based on the complete description of the forms with a
discrete set of values at the integer vectors which do not represent zero non-trivially over the rational numbers.
Monday, 08 January 2024
Title: Mini-course Bounded generation of linear groups and Diophantine approximation Lecture 1
We shall present different notions of bounded generation for Zariski-dense subgroups of linear algebraic groups; in particular, we shall treat the notion of exponential and purely exponential
parametrization. After surveying classical results, we shall show recent applications of Diophantine approximation techniques in the theory of linear groups which lead to a classification of
groups admitting purely exponential parametrization or bounded generation.
Title: On non-commensurable isospectral locally symmetric spaces
We give examples of non-commensurable but isospectral locally symmetric spaces, thereby completing the work of Lubotzky, Samuels and Vishne. The main step is to show that adelic conjugation of
lattices in SL(1,D) by the adjoint group preserves the spectrum, where D is a division algebra over a number field F (under some additional hypothesis), extending the work done by one of us when D
is a quaternion algebra. This is joint work with Sandeep Varma.
Title: Random character varieties
We study the G-character variety of a random group, where G is a semisimple complex Lie group. The typical behavior depends on the defect of the random presentation. We are able to describe what
happens for all but an exponentially small probability of exceptions. In particular we compute the dimension and show the absolute irreducibility of the character variety in defect at least two. In
defect one we also exhibit a phenomenon of Galois rigidity showing the finiteness of the character variety and proving lower bounds on its cardinality. The proofs are conditional on GRH via the use
of effective Chebotarev type theorems and uniform mixing bounds for Cayley graphs of finite simple groups. This is joint work with Peter Varju and Oren Becker.
Title: Mini-course Introduction to Bruhat-Tits theory Lecture 1
For connected reductive groups G over non-archimedean local fields k, Bruhat-Tits theory provides a metric space B(G) (called the building of G) equipped with a G(k)-action that is a valuable tool
for working with "large" compact open subgroups of G(k). In particular, the conjugacy classes of maximal compact open subgroups (there is often more than one, unlike for connected Lie groups) can be
understood through the study of G(k)-stabilizers of certain points in B(G). The understanding of the subgroup structure of G(k) provided by Bruhat-Tits theory has made it a very powerful tool in
number theory and representation theory. In these lectures, after some preliminaries about local fields and BN-pairs, we'll give an axiomatic overview of the general theory (including some examples),
discuss the utility of passage to "large" residue fields, and deduce some applications (such as useful group-theoretic decompositions). Further applications in representation theory will be discussed
separately by Fintzen.
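A standard first example, added for orientation (my own note, not part of the abstract): for G = SL2 and k = Qp, the building B(G) is the (p+1)-regular tree, on whose vertices G(k) acts with two orbits; the two conjugacy classes of maximal compact open subgroups are represented by the stabilizers of two adjacent vertices,

```latex
\[
K_0 = \mathrm{SL}_2(\mathbb{Z}_p),
\qquad
K_1 = g\, \mathrm{SL}_2(\mathbb{Z}_p)\, g^{-1}
\quad \text{with} \quad
g = \begin{pmatrix} p & 0 \\ 0 & 1 \end{pmatrix}.
\]
```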
Title: From representation rigidity to profinite rigidity
A finitely generated residually finite group G is called profinitely rigid if for any other finitely generated residually finite group H, whenever the profinite completions of H and G are isomorphic,
then H is isomorphic to G. In this talk we will discuss some recent work that constructs finitely presented groups that are profinitely rigid amongst finitely presented groups but not amongst
finitely generated ones. This builds on previous work that used "controlled SL(2)-representations" to construct examples of arithmetic lattices in PSL(2,R) and PSL(2,C) that are profinitely rigid.
Tuesday, 09 January 2024
Title: Mini-course Bounded generation of linear groups and Diophantine approximation Lecture 2
(Abstract as for Lecture 1 above.)
Title: On the construction of noncom- mensurable locally symmetric spaces (zoom)
In 2005, Lubotzky, Samuels and Vishne provided the first examples of noncommensurable compact isospectral locally symmetric spaces. These examples were constructed as quotients of the symmetric space
SLn(R)/K associated to the group SLn(R) by suitable arithmetic subgroups, and the proof of their isospectrality relied on some results of Harris and Taylor. In this talk, I will introduce a simpler
proof of isospectrality of the relevant locally symmetric spaces which is based on a direct comparison of the Selberg trace formulas. The method is likely to be applicable to other types of locally
symmetric spaces. This is joint work with Andrei Rapinchuk.
Title: The systolic geometry of arith- metic locally symmetric spaces
The systole of a compact Riemannian manifold M is the least length of a non-contractible loop on M. In this talk I will survey some recent work with S. Lapan and J. Meyer on the systolic geometry of
arithmetic locally symmetric spaces, emphasizing lower bounds for systoles and systole growth along congruence covers.
Title: Mini-course Introduction to Bruhat-Tits theory Lecture 2
For connected reductive groups G over non-archimedean local fields k, Bruhat-Tits theory provides a metric space B(G) (called the building of G) equipped with a G(k)-action that is a valuable tool
for working with “large” compact open subgroups of G(k). In particular, the conjugacy classes of maximal compact open subgroups (there is often more than one, unlike for connected Lie groups) can be
understood through the study of G(k)-stabilizers of certain points in B(G). The understanding of the subgroup structure of G(k) provided by Bruhat-Tits theory has made it a very powerful tool in
number theory and representation theory. In these lectures, after some preliminaries about local fields and BN-pairs, we’ll give an axiomatic overview of the general theory (including some examples),
discuss the utility of passage to “large” residue fields, and deduce some applications (such as useful group-theoretic decompositions). Further applications in representation theory will be discussed
separately by Fintzen.
Title: Non-virtually abelian discontinuous group actions vs. proper SL(2,R)-actions on homogeneous spaces
We develop algorithms and computer programs which verify criteria of properness of discrete group actions on semisimple homogeneous spaces. We apply these algorithms to find new examples of
non-virtually abelian discontinuous group actions on homogeneous spaces which do not admit proper $SL(2,\mathbb{R})$-actions.
Wednesday, 10 January 2024
Title: Mini-course Bounded generation of linear groups and Diophantine approximation Lecture 3
We shall present different notions of bounded generation for Zariski-dense subgroups of linear algebraic groups; in particular, we shall treat the notion of exponential and purely exponential
parametrization. After surveying classical results, we shall show recent applications of Diophantine approximation techniques in the theory of linear groups which lead to a classification of
groups admitting purely exponential parametrization or bounded generation.
Title: CSP and unipotent generators for higher rank nonuniform arithmetic groups
Given a finite index subgroup Γ of G(Z), where G is a Q-simple algebraic group of Q-rank at least one and R-rank at least two, consider the group N generated by Γ ∩ U+(Z) and Γ ∩ U−(Z). Here U+ is
the unipotent radical of a minimal parabolic Q-subgroup of G, and U− is its opposite. An old result of Tits, Vaserstein, Raghunathan and myself says that N has finite index in G(Z). The proof
presented here is more streamlined and is much simpler. The same proof shows the centrality of the congruence subgroup kernel, showing that the group G(Z) has the congruence subgroup property.
Title: Hypergeometric groups and their arithmeticity
A hypergeometric group is a subgroup of GLn(C) generated by the companion matrices of two monic coprime polynomials of degree n. It arises as the monodromy group of a hypergeometric differential
equation, and if the defining polynomials are also self-reciprocal and form a primitive pair, then its Zariski closure inside GLn(C) is either a symplectic or an orthogonal group. In this talk, we
will discuss the arithmeticity and thinness of the hypergeometric groups whose defining polynomials also have integer coefficients.
Title: Mini-course Introduction to Bruhat-Tits theory Lecture 3
For connected reductive groups G over non-archimedean local fields k, Bruhat-Tits theory provides a metric space B(G) (called the building of G) equipped with a G(k)-action that is a valuable tool
for working with “large” compact open subgroups of G(k). In particular, the conjugacy classes of maximal compact open subgroups (there is often more than one, unlike for connected Lie groups) can be
understood through the study of G(k)-stabilizers of certain points in B(G). The understanding of the subgroup structure of G(k) provided by Bruhat-Tits theory has made it a very powerful tool in
number theory and representation theory. In these lectures, after some preliminaries about local fields and BN-pairs, we’ll give an axiomatic overview of the general theory (including some examples),
discuss the utility of passage to “large” residue fields, and deduce some applications (such as useful group-theoretic decompositions). Further applications in representation theory will be discussed
separately by Fintzen.
Title: Convolution and square on abelian groups
The aim of this talk will be to construct functions on a cyclic group of odd order d whose “convolution square” is proportional to their square. For that, we will have to interpret the cyclic group
as a subgroup of an abelian variety with complex multiplication, and to use the modularity properties of their theta functions.
Thursday, 11 January 2024
Title: Representations of p-adic groups (zoom)
The theory of Bruhat and Tits opened the door to studying and constructing representations of general p-adic reductive groups. I will give an overview of our understanding of the construction and
category of representations of p-adic groups and indicate how it crucially relies on Bruhat–Tits theory and the Moy–Prasad filtration.
Title: Prasad’s volume formula and its applications
In 1989, G. Prasad established a hands-on formula to compute the covolume of S-arithmetic subgroups of semisimple groups over global fields. The formula has since then found many applications in the
theory of arithmetic groups, starting with the contemporary Borel–Prasad finiteness theorem. In the talk, we aim to describe Prasad’s volume formula and its contents. We will then survey some of its
most striking applications, such as the finiteness theorem, the classification of fake projective planes, and the study of lattices of small covolume in various simple groups.
Title: Finite step rigidity
In this talk I will introduce Hitchin representations and show that only a finite part of the Jordan-Lyapunov spectra of a Hitchin representation is enough to completely determine it up to conjugacy.
Title: Images of Homomorphisms of Algebraic Groups
Let G be an algebraic group over a local field k. We show that the image of G(k) under every continuous homomorphism into a (Hausdorff) topological group is closed if and only if the center of G(k)
is compact. This is joint work with Uri Bader.
Title: Homogeneity results for invariant distributions on p-adic symmetric spaces
Let G be the group of rational points of a connected reductive linear algebraic group defined over a nonarchimedean local field k. Under some conditions that are satisfied when the residue
characteristic of k is large, S. DeBacker used Moy-Prasad theory to sharpen, refine and generalize some results of Waldspurger, establishing an explicit form of the Howe conjecture for G and proving
the conjecture of T. Hales, A. Moy and G. Prasad on the range of validity for the Harish-Chandra–Howe local expansion for characters of irreducible admissible representations of G. We will report on
joint work with J. Adler and E. Sayag, in which we pursue similar questions for p-adic symmetric spaces.
Title: Subspace stabilisers in hyperbolic lattices (zoom)
In a joint work with Nikolay Bogachev, Alexander Kolpakov and Leone Slavich we show that immersed totally geodesic m-dimensional suborbifolds of an n-dimensional arithmetic hyperbolic orbifold
correspond to finite subgroups of the commensurator whenever m is sufficiently large. In particular, for n = 3 this condition includes all totally geodesic suborbifolds. We call such totally geodesic
subspaces finite centraliser subspaces (or fc-subspaces for short) and use them to formulate an arithmeticity criterion for hyperbolic lattices. We show that a hyperbolic orbifold is arithmetic if
and only if it has infinitely many fc-subspaces, while in the non-arithmetic case the number of fc-subspaces is finite and bounded in terms of the volume. The case of special interest is that of
exceptional trialitarian 7-dimensional orbifolds – we show that every such orbifold contains totally geodesic arithmetic hyperbolic 3-orbifolds of exceptional type.
Friday, 12 January 2024
Title: Things we can learn by looking at random manifolds (zoom)
The theory of invariant random subgroups (IRS), which has been developed quite rapidly during the last decade, has been very fruitful to the study of lattices and their asymptotic invariants.
However, restricting to invariant measures limits the scope of problems that one can approach (in particular since the groups involved are highly non-amenable). It was recently realised that the more
general notion of stationary random subgroups (SRS) is still very effective and opens paths to deal with questions which previously seemed out of reach.
In the talk I will describe various old and new results concerning arithmetic groups and general locally symmetric manifolds of finite as well as infinite volume that can be proved using
‘randomness’, e.g.:
1. Kazhdan-Margulis minimal covolume theorem;
2. Most hyperbolic manifolds are non-arithmetic (joint work with A. Levit);
3. Higher rank manifolds of large volume have a large injectivity radius (joint with Abert, Bergeron, Biringer, Nikolov, Raimbault and Samet);
4. Margulis’ infinite injectivity radius conjecture: For manifolds of rank at least 2, finite volume is equivalent to bounded injectivity radius (joint with M. Fraczyk).
Title: Arithmetic groups of higher real rank are not left-orderable (after Deroin and Hurtado) (zoom)
Bertrand Deroin and Sebastian Hurtado recently proved the 30-year-old conjecture that if G is an almost-simple algebraic Q-group, and the real rank of G is at least two, then no arithmetic subgroup of
G is left-orderable. We will discuss this theorem, and explain some of the main ideas of the proof, by illustrating them in the simpler case where the real field R is replaced with a p-adic field.
Harmonic functions and continuous group actions are key tools.
Title: Local-global principles for norm one tori and multinorm tori over semi-global fields
A well-known result of Hasse states that the local-global principle holds for norms over number fields for cyclic extensions. In other words, if L/F is a cyclic extension of number fields then an
element λ ∈ F× is in the image of norm map NL/F : L× → F× if and only if λ is in the image of the norm map locally everywhere, i.e., for completions associated to all archimedean and non-archimedean
places of F. In this talk, we will consider local-global principles for norms and products of norms over fields which are function fields of curves over complete discretely valued fields, for
example, C((t))(x).
Title: On kissing number of hyperbolic manifolds (zoom)
Motivated by a recent result of M. F. Bourque and B. Petri, we constructed a sequence of hyperbolic manifolds with a large number of closed geodesics of shortest length. The aim of this talk is to
explain what “large” means, and how arithmetic groups enter in this context. This will involve results obtained in collaboration with Cayo Dória and Emanoel Freire.
Title: Stationary dynamics on character spaces and applications to arithmetic groups
To any group G is associated the space Ch(G) of all characters on G. After defining this space and discussing its interesting properties, I’ll turn to discuss dynamics on such spaces. Our main result
is that the action of any arithmetic group on the character space of its amenable/solvable radical is stiff, i.e., any probability measure which is stationary under random walks must be invariant.
This generalizes a classical theorem of Furstenberg for dynamics on tori. Relying on works of Bader, Boutonnet, Houdayer, and Peterson, this stiffness result is used to deduce dichotomy statements
(and ’charmenability’) for higher rank arithmetic groups pertaining to their normal subgroups, dynamical systems, representation theory and more. The talk is based on a joint work with Uri Bader. | {"url":"https://www.icts.res.in/program/zdsg/title-and-abstract","timestamp":"2024-11-11T14:49:11Z","content_type":"text/html","content_length":"147858","record_id":"<urn:uuid:c6495170-d4e5-4f71-b929-26bf6134e254>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00381.warc.gz"} |
How to Solve a Matrix in Excel: A Step-by-Step Guide
Working with matrices in Excel can seem daunting at first, but with the right approach, you can efficiently solve matrix equations and perform matrix operations. In this guide, we'll walk you through
how to solve a matrix in Excel step by step. Whether you're a student tackling linear algebra or a professional handling complex data, mastering matrix operations in Excel can significantly enhance
your productivity. Let’s dive in! 🚀
Understanding Matrices and Their Importance
What is a Matrix?
A matrix is a rectangular array of numbers arranged in rows and columns. In mathematics, matrices are used to represent and solve systems of linear equations, perform transformations, and much more.
Excel allows you to leverage matrix operations to analyze data efficiently.
Types of Matrix Operations
1. Matrix Addition: Adding two matrices of the same dimensions.
2. Matrix Subtraction: Subtracting one matrix from another.
3. Matrix Multiplication: Multiplying matrices, which is a bit more complex than addition or subtraction.
4. Matrix Inversion: Finding the inverse of a matrix, if it exists.
5. Solving Matrix Equations: Finding solutions to equations of the form Ax = B.
Setting Up Your Matrix in Excel
Step 1: Inputting the Matrix
Begin by organizing your data in a clear matrix format. Here’s an example of how to set up a simple 2x2 matrix in Excel:
1. Open Excel and click on the cell where you want to start your matrix.
2. Input the values in a rectangular arrangement of rows and columns (for a 2x2 matrix, two rows by two columns).
Step 2: Entering Data
For a more complex matrix, ensure you input all the values correctly. Use consistent rows and columns to avoid confusion.
Step 3: Formatting for Clarity
To improve readability, consider using borders, shading, or bold text for headers or significant figures.
Performing Matrix Operations in Excel
Matrix Addition and Subtraction
To add or subtract matrices, follow these steps:
1. Identify the Matrices: Assume you have Matrix A in cells A1:B2 and Matrix B in cells C1:D2.
2. Add cell by cell (no special function is needed for element-wise addition):
□ In a new area (say F1), use the formula =A1+C1, and drag to fill down and across to cover the size of the matrices.
3. Subtract cell by cell (Excel has no SUBTRACT function; use the minus operator):
□ In another area (say H1), input =A1-C1 and drag to fill.
Example of Matrix Addition and Subtraction
F   G   |  H   I
6   8   | -4  -4
10  12  | -4  -4
(Columns F:G hold A + B; columns H:I hold A − B.)
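Working backwards from these values (since A = ((A+B) + (A−B))/2 and B = ((A+B) − (A−B))/2), the example matrices must have been A = {1, 2; 3, 4} in A1:B2 and B = {5, 6; 7, 8} in C1:D2 — a quick way to sanity-check your formulas.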
Matrix Multiplication
Matrix multiplication involves a bit more calculation.
1. Identify the Matrices: Let’s say you want to multiply Matrix A (2x2) with Matrix B (2x2).
2. Use the MMULT Function:
□ Select a cell (e.g., J1) to output the result and select a range (2x2) for your result.
□ Type the formula =MMULT(A1:B2, C1:D2) and press Ctrl + Shift + Enter to execute as an array formula.
Result of Matrix Multiplication
The resulting matrix will be displayed in the selected range.
Matrix Inversion
To find the inverse of a matrix in Excel, use the MINVERSE function.
1. Input the matrix: Select an area for the result.
2. Type the formula: For matrix A in A1:B2, select a 2x2 range and type =MINVERSE(A1:B2) and press Ctrl + Shift + Enter.
Important Note:
The inverse of a matrix only exists if the matrix is square and has a non-zero determinant.
Solving Matrix Equations
To solve an equation of the form Ax = B, where A is your matrix and B is your result vector, you can use:
1. Input the matrices: As discussed earlier, A in A1:B2 and B in C1:C2.
2. Use the MINVERSE and MMULT combination:
□ Select a range for the result and input =MMULT(MINVERSE(A1:B2), C1:C2) and again use Ctrl + Shift + Enter.
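For a concrete (illustrative) check: with A = {1, 2; 3, 4} in A1:B2 and B = {5; 6} in C1:C2, MINVERSE(A1:B2) returns {-2, 1; 1.5, -0.5}, so =MMULT(MINVERSE(A1:B2), C1:C2) returns the solution x = {-4; 4.5}. You can verify it by hand: 1·(−4) + 2·(4.5) = 5 and 3·(−4) + 4·(4.5) = 6.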
Solving matrices in Excel can be straightforward once you understand the tools at your disposal. Functions such as MMULT and MINVERSE, together with the array-formula capability, are powerful tools for anyone
dealing with data analysis or mathematical computations.
By mastering these steps, you’ll enhance your ability to work with complex data sets and perform advanced calculations with ease. Remember, practice makes perfect, so don’t hesitate to try these
techniques with different matrices to become proficient! 🧠💡 | {"url":"https://tek-lin-pop.tekniq.com/projects/how-to-solve-a-matrix-in-excel-a-step-by-step-guide","timestamp":"2024-11-10T02:58:32Z","content_type":"text/html","content_length":"85790","record_id":"<urn:uuid:bc5e6307-6c57-43de-b533-461d21109e67>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00324.warc.gz"} |
Origami/Techniques/Model bases - Wikibooks, open books for an open world
In origami, a model base is the structural skeleton of an origami model in its simplest form. The usage of a base has many benefits; the folding sequence will be easier to remember and create
diagrams for, and further models can be developed from the same base, or from slight modifications thereof. However, the use of a base does, to a certain extent, limit what can be done with the model.
Traditional origami models were often developed from similar patterns. While some of them are rarely used, there are six that are used quite frequently: the waterbomb base, preliminary fold, kite
base, fish base, bird base and frog base. Most recently published books only assume the knowledge of these bases; however, some authors still use unconventional bases.
These bases are referred to as the classic bases, and were used as the primary design technique until the 1960s. They all share the same symmetry, and certain structural properties. In the 1960s,
paperfolders started trying to find new bases to create more complex models. One of the new techniques they developed was to fold the corners into the center (a blintz base), fold the base, and then
unwrap the extra layers of paper. This technique allowed folders to multiply the number of points on the base, and is referred to as "blintzing" the base.
The windmill base, helmet base, umbrella base and pig base are found in traditional models, but not used or accepted as widely.
Models are typically classified by the number of simultaneous folds required. The higher that number, the more complex the techniques involved in the construction.
• The book fold is a valley fold that folds the paper by the middle and entirely covers the side up.
Book fold
• The cupboard fold is two valley folds that push the edges of the paper to the middle.
Cupboard fold
• The pleat fold is several evenly-spaced parallel mountain and valley folds. It is also called an "accordion fold."
Accordion or pleat fold
• The radial pleat fold is an angled pleat fold, usually with a focus point on an edge or corner.
• The kite base is merely two valley folds that bring two adjacent edges of the square together to lie on the square's diagonal.
Kite base
• The helmet base consists of a diagonal valley fold that bisects two corners of the square and two perpendicular valley folds that push two corners of the triangle to the third one.
Helmet base
• The blintz fold is made by folding the corners of a square into the center. This can be achieved with higher accuracy by folding and unfolding two reference creases through the center.
Blintz fold
Origami models that are made by creating only one fold at a time are called pureland origami. Because of these restrictions, proponents of the theory have devised alternate methods of folding more
complicated steps that have very similar results.
• The preliminary fold or square base consists of a diagonal mountain fold that bisects two corners of the square and two perpendicular valley folds that bisect the edges of the square. The paper
is then collapsed to form a square shape with four isosceles-right triangular flaps.
Preliminary fold or Square Base
• The waterbomb base consists of two perpendicular valley folds down the diagonals of the square and a mountain fold down the center of the square. This crease pattern is then compressed to form
the waterbomb base, which is an isosceles-right triangle with four isosceles-right triangular flaps. The waterbomb base is an inside-out square base.
Waterbomb base
• The pig base consists of a cupboard fold with all the paper corner squash folded to the center of the paper. All the paper edge is along the middle line.
Pig base
• The umbrella base consists of a square base with all the flaps squash folded and the isosceles-right triangular flaps valley folded on the big isosceles triangle.
Umbrella base
• The windmill base consists of four square bases on a single square.
Windmill base
• The fish base consists of two radial folds against a diagonal reference crease on each of two opposite corners. The flaps that result on the other two corners are carefully folded downwards in
the same direction. In other words, it consists of two side-by-side rabbit ears.
Fish base
• The bird base, or crane base, consists of a square base with both the front and the back sides petal folded upward.
Bird base constructed on a preliminary fold
• The frog base starts with a waterbomb base or square base. All four flaps are squash-folded (the result is the same in either case), and then the corners are petal folded upward.
Frog base constructed on a preliminary fold
• Most of the creases in a stretched bird base are present in the regular bird base. When forming this bird base, make sure to crease the triangle at the center corner through all layers. (If you
unfold completely, you will see a small square at the center of the paper.) After forming the bird base, either partially unfold the paper, and/or "stretch" two opposite corners of the bird base.
These two corners, their associated flaps, and the central square will all lie flat. The other two flaps will form a pyramid. Rabbit ear each flap that is in the pyramid, so that the model lies
flat. All of the raw edges will lie along the centerline of the model. The stretched bird base is used in Lang's Bald Eagle, Greenberg's Eeyore, and some other high-intermediate and complex
If a square is blintz folded, then a kite/fish/bird/frog base is folded, and the blintzed edges teased out and collapsed in a certain fashion, this is called a blintzed kite/fish/bird/frog base, which
doubles the complexity and adds more points and edges to the original kite/fish/bird/frog base, for a more complex model that requires more points. It's possible to double blintz for a double
blintzed kite/fish/bird/frog base if needed. Theoretically an infinite number of blintzes could be performed to yield an infinitely complex multipointed base, but paper thickness restricts this to
generally two blintzes. | {"url":"https://en.m.wikibooks.org/wiki/Origami/Techniques/Model_bases","timestamp":"2024-11-02T08:10:33Z","content_type":"text/html","content_length":"42378","record_id":"<urn:uuid:289cd4e7-7331-4ce1-be48-7c08515eb613>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00381.warc.gz"} |
Quantization tools | Concrete ML
Quantizing data
Concrete-ML has support for quantized ML models and also provides quantization tools for Quantization Aware Training and Post-Training Quantization. The core of this functionality is the conversion
of floating point values to integers and back. This is done using QuantizedArray in concrete.ml.quantization.
The QuantizedArray class takes several arguments that determine how float values are quantized:
n_bits that defines the precision of the quantization
values are floating point values that will be converted to integers
is_signed determines if the quantized integer values should allow negative values
is_symmetric determines if the range of floating point values to be quantized should be taken as symmetric around zero
See also the UniformQuantizer reference for more information:
from concrete.ml.quantization import QuantizedArray
import numpy
A = numpy.random.uniform(-2, 2, 10)
print("A = ", A)
# array([ 0.19525402, 0.86075747, 0.4110535, 0.17953273, -0.3053808,
# 0.58357645, -0.24965115, 1.567092 , 1.85465104, -0.46623392])
q_A = QuantizedArray(7, A)
print("q_A.qvalues = ", q_A.qvalues)
# array([ 37, 73, 48, 36, 9,
# 58, 12, 112, 127, 0])
# the quantized integers values from A.
print("q_A.quantizer.scale = ", q_A.quantizer.scale)
# 0.018274684777173276, the scale S.
print("q_A.quantizer.zero_point = ", q_A.quantizer.zero_point)
# 26, the zero point Z.
print("q_A.dequant() = ", q_A.dequant())
# array([ 0.20102153, 0.85891018, 0.40204307, 0.18274685, -0.31066964,
# 0.58478991, -0.25584559, 1.57162289, 1.84574316, -0.4751418 ])
# Dequantized values.
It is also possible to use symmetric quantization, where the integer values are centered around 0:
q_A = QuantizedArray(3, A)
print("Unsigned: q_A.qvalues = ", q_A.qvalues)
print("q_A.quantizer.zero_point = ", q_A.quantizer.zero_point)
# Unsigned: q_A.qvalues = [2 4 2 2 0 3 0 6 7 0]
# q_A.quantizer.zero_point = 1
q_A = QuantizedArray(3, A, is_signed=True, is_symmetric=True)
print("Signed Symmetric: q_A.qvalues = ", q_A.qvalues)
print("q_A.quantizer.zero_point = ", q_A.quantizer.zero_point)
# Signed Symmetric: q_A.qvalues = [ 0 1 1 0 0 1 0 3 3 -1]
# q_A.quantizer.zero_point = 0
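To make the scale and zero-point mechanics concrete, here is a minimal stand-alone sketch of uniform affine quantization (illustrative only — these helper names and internals are not Concrete-ML's actual implementation, which lives in QuantizedArray and its UniformQuantizer):
import numpy

# Uniform affine quantization: q = round(x / S) + Z, with x_hat = S * (q - Z).
# Assumes values.max() > values.min().
def quantize(values, n_bits, is_signed=False):
    if is_signed:
        q_min, q_max = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    else:
        q_min, q_max = 0, 2 ** n_bits - 1
    v_min, v_max = values.min(), values.max()
    scale = (v_max - v_min) / (q_max - q_min)       # the scale S
    zero_point = int(round(q_min - v_min / scale))  # the zero point Z
    q = numpy.clip(numpy.round(values / scale) + zero_point, q_min, q_max)
    return q.astype(int), scale, zero_point

def dequantize(qvalues, scale, zero_point):
    # Reconstruct approximate floats: x_hat = S * (q - Z)
    return scale * (qvalues.astype(float) - zero_point)

A = numpy.random.uniform(-2, 2, 10)
q_values, scale, zero_point = quantize(A, n_bits=7)
print(numpy.max(numpy.abs(A - dequantize(q_values, scale, zero_point))))
# The per-element reconstruction error is at most about scale / 2.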
In the following example, showing the de-quantization of model outputs, the QuantizedArray class is used in a different way. Here it uses pre-quantized integer values and has the scale and zero-point
set explicitly. Once the QuantizedArray is constructed, calling dequant() will compute the floating point values corresponding to the integer values qvalues, which are the output of the
forward_fhe.encrypt_run_decrypt(..) call.
import numpy
def dequantize_output(self, qvalues: numpy.ndarray) -> numpy.ndarray:
    # .....
    # Assume: qvalues is the decrypted integer output of the model
    # .....
    # ....
Quantized modules
Machine learning models are implemented with a diverse set of operations, such as convolution, linear transformations, activation functions and element-wise operations. When working with quantized
values, these operations cannot be carried out in an equivalent way as for floating point values. With quantization, it is necessary to re-scale the input and output values of each operation to fit
in the quantization domain.
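As a concrete, generic example of this re-scaling (the textbook form, shown here for intuition — not Concrete-ML's exact internal code): consider a dot product between an input x with quantizer (S_x, Z_x) and weights w with quantizer (S_w, Z_w), producing an output with quantizer (S_o, Z_o). Since x_i ≈ S_x·(q_x,i − Z_x) and w_i ≈ S_w·(q_w,i − Z_w), the quantized output is q_o = Z_o + (S_x·S_w / S_o) · Σ_i (q_x,i − Z_x)(q_w,i − Z_w), where the sum is computed in integer arithmetic and the factor S_x·S_w / S_o performs the re-scaling into the output's quantization domain.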
In Concrete-ML, the quantized equivalent of a scikit-learn model or a PyTorch nn.Module is the QuantizedModule. Note that only inference is implemented in the QuantizedModule, and it is built through
a conversion of the inference function of the corresponding scikit-learn or PyTorch module.
Built-in neural networks expose the quantized_module member, while a QuantizedModule is also the result of the compilation of custom models through compile_torch_model and compile_brevitas_qat_model.
The quantized versions of floating point model operations are stored in the QuantizedModule. The ONNX_OPS_TO_QUANTIZED_IMPL dictionary maps ONNX floating point operators (e.g. Gemm) to their
quantized equivalent (e.g. QuantizedGemm). For more information on implementing these operations, please see the FHE compatible op-graph section.
The computation graph is taken from the corresponding floating point ONNX graph exported from scikit-learn using HummingBird, or from the ONNX graph exported by PyTorch. Calibration is used to obtain
quantized parameters for the operations in the QuantizedModule. Parameters are also determined for the quantization of inputs during model deployment.
Calibration is the process of determining the typical distributions of values encountered for the intermediate values of a model during inference.
To perform calibration, an interpreter goes through the ONNX graph in topological order and stores the intermediate results as it goes. The statistics of these values determine quantization parameters.
The QuantizedModule generates the Concrete-Numpy function that is compiled to FHE. The compilation will succeed if the intermediate values conform to the 8-bit precision limit of the Concrete
stack. See the compilation section for details. | {"url":"https://docs.zama.ai/concrete-ml/0.5-1/developer-guide/inner-workings/quantization_internal","timestamp":"2024-11-05T03:49:46Z","content_type":"text/html","content_length":"382092","record_id":"<urn:uuid:ac2b4d71-186e-413f-b978-6481f3ab5090>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00846.warc.gz"} |
The Stacks project
Lemma 29.43.16. Let $S$ be a scheme which admits an ample invertible sheaf. Then
Proof. The assumptions on $S$ imply that $S$ is quasi-compact and separated, see Properties, Definition 28.26.1 and Lemma 28.26.11 and Constructions, Lemma 27.8.8. Hence Lemma 29.43.12 applies and we see that (1) implies (2). Let $\mathcal{E}$ be a finite type quasi-coherent $\mathcal{O}_S$-module. By our definition of projective morphisms it suffices to show that $\mathbf{P}(\mathcal{E}) \to S$ is H-projective. If $\mathcal{E}$ is generated by finitely many global sections, then the corresponding surjection $\mathcal{O}_S^{\oplus n} \to \mathcal{E}$ induces a closed immersion
\[ \mathbf{P}(\mathcal{E}) \longrightarrow \mathbf{P}(\mathcal{O}_S^{\oplus n}) = \mathbf{P}^{n - 1}_S \]
as desired. In general, let $\mathcal{L}$ be an ample invertible sheaf on $S$. By Properties, Proposition 28.26.13 there exists an integer $n$ such that $\mathcal{E} \otimes_{\mathcal{O}_S} \mathcal{L}^{\otimes n}$ is globally generated by finitely many sections. Since $\mathbf{P}(\mathcal{E}) = \mathbf{P}(\mathcal{E} \otimes_{\mathcal{O}_S} \mathcal{L}^{\otimes n})$ by Constructions, Lemma 27.20.1 this finishes the proof. $\square$
| {"url":"https://stacks.math.columbia.edu/tag/087S","timestamp":"2024-11-13T02:54:06Z","content_type":"text/html","content_length":"16940","record_id":"<urn:uuid:011e0b8f-2bf0-4b40-a8d5-074ddf3a3786>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00727.warc.gz"}
Jess's mathematics
Creative Commons CC BY 4.0
LaTeX is the best way to write mathematics. It completely pisses all over Word. However, it does take some time to get used to, so it might not be worth your while if you won't write too much. The way I
use it is to first download and install a LaTeX editor and then get writing, but I would recommend that you use this website instead since you can get going a lot quicker. The upshot of the whole
business is that you type in here and then a pdf is generated with all the equations looking ace. I'll give you some examples. | {"url":"https://pt.overleaf.com/latex/examples/jesss-mathematics/cwtkcdvbpjcx","timestamp":"2024-11-04T00:46:04Z","content_type":"text/html","content_length":"39017","record_id":"<urn:uuid:2418de1d-a8d7-476e-a267-562cfbe7bfdc>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00270.warc.gz"} |
In Maths, a function f(x) is said to be discontinuous at a point ‘a’ of its domain D if it is not continuous there. The point ‘a’ is then called a point of discontinuity of the function. In limits
and continuity, you must have learned that a continuous function can be traced without lifting the pen from the graph. A discontinuity may arise in any of the following situations:
1. The right-hand limit or the left-hand limit or both of a function may not exist.
2. The right-hand limit and the left-hand limit of the function may exist but are unequal.
3. The right-hand limit, as well as the left-hand limit of a function, may exist, but either of the two or both may not be equal to f(a).
Discontinuity in Maths Definition
A function whose graph is not connected is known as a discontinuous function. A function f(x) is said to have a discontinuity of the first kind at x = a, if the left-hand
limit of f(x) and the right-hand limit of f(x) both exist but are not equal. f(x) is said to have a discontinuity of the first kind from the left at x = a, if the left-hand limit of the function exists but is not
equal to f(a).
In the above graph, the limits of the function to the left and to the right are not equal, and hence the limit at x = 3 does not exist. Such a function is said to have a discontinuity of the first kind (a jump discontinuity).
Types of Discontinuity
There are three types of discontinuity.
• Jump Discontinuity
• Infinite Discontinuity
• Removable Discontinuity
Now let us discuss all its types one by one.
Jump Discontinuity
Jump discontinuity is of two types:
• Discontinuity of the First Kind
• Discontinuity of the Second Kind
Discontinuity of the First Kind: A function f(x) is said to have a discontinuity of the first kind from the right at x = a, if the right-hand limit of the function exists but is not equal to f(a). In Jump
Discontinuity, the Left-Hand Limit and the Right-Hand Limit exist and are finite but not equal to each other.
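For example, the step function f(x) = 0 for x < 0 and f(x) = 1 for x ≥ 0 has a jump discontinuity at x = 0: the left-hand limit is 0 and the right-hand limit is 1, both finite but unequal.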
Discontinuity of the Second Kind: A function f(x) is said to have discontinuity of the second kind at x = a, if neither left-hand limit of f(x) at x = a nor right-hand limit of f(x) at x = a exists.
Removable Discontinuity
A function f(x) is said to have a removable discontinuity at x = a, if the left-hand limit as x tends to 'a' is equal to the right-hand limit as x tends to 'a', but their common value is not
equal to f(a). A removable discontinuity often occurs when a rational expression has common factors in the numerator and denominator. Since these factors can be cancelled, the discontinuity is removable.
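For example, f(x) = (x² − 1)/(x − 1), defined for x ≠ 1, has a removable discontinuity at x = 1: cancelling the common factor (x − 1) leaves x + 1, so both one-sided limits equal 2, while f(1) is undefined. Redefining f(1) = 2 removes the discontinuity.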
Infinite Discontinuity
In Infinite Discontinuity, either one or both of the Right-Hand and Left-Hand Limits do not exist or are infinite. It is also known as Essential Discontinuity. Whenever the graph of a function f(x) has the
line x = k as a vertical asymptote, f(x) becomes positively or negatively infinite as x→k⁺ or x→k⁻, and the function f(x) is said to have an infinite discontinuity. | {"url":"https://mathlake.com/article-331-Discontinuity.html","timestamp":"2024-11-05T23:35:46Z","content_type":"text/html","content_length":"10831","record_id":"<urn:uuid:044d8a42-1169-4695-a577-f4206d4966e0>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/WARC-path","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00671.warc.gz"}
The Klein–Gordon equation subject to a nonlinear and locally distributed damping, posed on a complete and noncompact $n$-dimensional Riemannian manifold $(\mathcal{M}^n,\mathbf{g})$ without boundary,
is considered. Let us assume that the dissipative effects are effective in $(\mathcal{M}\backslash \overline{\Omega}) \cup (\Omega \backslash V)$, where $\Omega$ is an arbitrary open bounded set with
smooth boundary. In the present article we introduce a new class of non compact Riemannian manifolds, namely, manifolds which admit a smooth function $f$, such that the Hessian of $f$ satisfies the
{\em pinching conditions} (locally in $\Omega$); for such manifolds, there exists a finite number of disjoint open subsets $V_k$ free of dissipative effects such that $\bigcup_k V_k \subset V$ and for
all $\varepsilon>0$, $meas(V)\geq meas(\Omega)-\varepsilon$; in other words, the dissipative effect inside $\Omega$ has arbitrarily small measure. It is important to mention that if
the function $f$ satisfies the pinching conditions everywhere, then it is not necessary to consider dissipative effects inside $\Omega$. | {"url":"https://aimsconference.org/AIMS-Conference/conf-reg2014/ss/detail.php?abs_no=1892","timestamp":"2024-11-04T11:05:08Z","content_type":"text/html","content_length":"4654","record_id":"<urn:uuid:cf08ac4d-80af-43dd-9816-abb6928e8f15>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00286.warc.gz"} |
Holiday fun with template<class> and template<typename>
Let’s start simple.
template<class T> struct S1;
template<typename T> struct S1;
These declarations are synonymous.
How about this one? (Godbolt.)
template<class T, class T::U> struct S1;
template<typename T, typename T::U> struct S2;
In this case, S1 and S2 both take one type parameter (T) and one non-type template parameter (unnamed). However, S1 will give a hard error if T’s nested type U is not a class type, whereas S2 will
permit T’s nested type U to be anything — enum, reference, whatever.
struct Y1 { class U {}; };
using X1 = S1<Y1, Y1::U{}>; // new in C++2a
struct Y2 { using U = int*; };
using X2 = S2<Y2, nullptr>;
How about this one?
template<class T::U> struct S1;
template<typename T::U> struct S2;
In this case, S1 is valid if and only if it’s preceded by a declaration of T::U. For example, T might be a namespace or a class type, and U must be a class type nested within T:
struct T { class U {}; };
template<class T::U> struct S1;
using X1 = S1<T::U{}>; // new in C++2a
The keyword class in front of T::U is the class-key introducing an elaborated-type-specifier.
S2 is never valid, as far as I know, but GCC doesn’t complain about it. (GCC essentially treats that typename keyword as a synonym for class.)
However, if T is a template parameter, everything changes!
template<typename T, typename T::U> struct S3;
struct Y { using U = int; };
using X3 = S3<Y, 42>;
(Hat tip to Jon Kalb for this snippet.) Here, S3 is a perfectly valid template (all the way back to C++03); it takes one type parameter formally named T and one unnamed non-type parameter of type
T::U. The first instance of the typename keyword is a type-parameter-key, but the second instance of the typename keyword is part of a typename-specifier instead.
Let’s throw CTAD into the mix! (Godbolt.)
template<class Policy>
struct S {
    template<typename Policy::FixedString> // INCORRECT!
    struct N {};
};
struct Y {
    template<class T> struct FixedString {
        constexpr FixedString(T) {}
    };
};
using X1 = S<Y>::N<42>;
In keeping with CTAD’s general “ignore all the rules” approach, it seems there is no way to express that S<Y>::N wants to take a non-type template parameter of type Y::FixedString<...auto...> (to
borrow GCC’s way of writing CTAD placeholders).
We might try to work around this deficiency in CTAD with a layer of indirection — a local alias template (Godbolt) —
template<class Policy>
struct S {
    template<class T> using PFS = Policy::FixedString<T>;
    template<PFS> // Workaround?
    struct N {};
};
— but GCC doesn’t implement P1814 “Wording for Class Template Argument Deduction for Alias Templates” yet, so it’s full of ICEs in this area.
GCC is the only vendor that implements CTAD-in-NTTPs so far. Recall that CTAD-in-NTTPs is the “killer app” for NTTPs of class type — formerly P0732, now P1907, a feature that I consider unbaked
and would rather not have in C++2a at all. For that matter, I wish CTAD had never gone into C++17, and I advise against using CTAD in any production codebase (“Contra CTAD,” 2018-12-07).
Observe that class T introduces a template type parameter whose formal name is T, but class T* introduces a non-type template parameter with no formal name. Thus:
class T {} t;
template<class T> struct S1;
template<class T*> struct S2;
using X1 = S1<int>;
using X2 = S2<&t>;
Notice that class T* also has the interesting side effect of adding a declaration of class T to the current (very tiny) scope. Declaring new meanings for names that were already used as template
parameter names in an outer scope (that is, “shadowing” a template parameter with some other declaration) is ill-formed, and implementations give a wide range of fun error messages if you try.
Clang’s is perhaps the clearest, but even so, I almost filed a bug about it before deciding that it was correct to complain. Godbolt:
class T {} t;
class U {} u;
template<class T>
struct S {
    template<class T*, T*> struct N {};
};
using X = S<U>::N<&u, &u>;
This code is 100% ill-formed, because class T* causes a shadowing declaration of T in a scope where T already refers to a template parameter. GCC and Clang give error messages; MSVC is utterly
confused by the class-key; and EDG emits a warning before proceeding to treat class T* as if the class keyword weren’t there. (So EDG successfully compiles the above code, even though ideally it shouldn’t.)
Posted 2019-12-27 | {"url":"https://quuxplusone.github.io/blog/2019/12/27/template-typename-fun/","timestamp":"2024-11-13T17:24:47Z","content_type":"text/html","content_length":"22066","record_id":"<urn:uuid:d3539899-6cad-4bbe-ab3d-b46047802d6b>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00232.warc.gz"} |
Locally compact space
Non-Hausdorff spaces
Much of the theory of locally compact Hausdorff spaces also works for preregular spaces. For example, just as any locally compact Hausdorff space is a Tychonoff space, so any locally compact
preregular space is a completely regular space. Since straight regularity is a more familiar condition than either preregularity (which is usually weaker) or complete regularity (which is usually
stronger), locally compact preregular spaces are normally referred to in the mathematical literature as locally compact regular spaces. The theory of locally compact regular spaces can be derived
from the theory of locally compact Hausdorff spaces by considering Kolmogorov equivalence.
The study of local compactness for spaces that aren't even regular is much less developed. In fact, even the definition of "locally compact" is not universally agreed upon. The various definitions are:
• every point has a compact neighbourhood;
• every point has a closed compact neighbourhood;
• every point has a local base of compact neighbourhoods (the definition used in Wikipedia).
All of these definitions are equivalent for Hausdorff (or even preregular) spaces, but only after some time has it become clear that the last definition is the most useful for the general case.
However, that general case has not been developed in this article. | {"url":"http://www.fact-index.com/l/lo/locally_compact_space.html","timestamp":"2024-11-15T03:01:07Z","content_type":"text/html","content_length":"19127","record_id":"<urn:uuid:818b9da5-f08a-4c2f-b447-c6af26b37d42>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00724.warc.gz"} |
Find the sum $2, 3\dfrac{1}{4}, 4\dfrac{1}{2}, \ldots$ to 20 terms — Class 11 Maths, JEE Main
Hint: Here, the given series is an AP (arithmetic progression), so we apply the formula for the sum of $n$ terms of an AP.
The given series is $2, 3\dfrac{1}{4}, 4\dfrac{1}{2}, \ldots$ up to $20$ terms.
The given series can be written as $\dfrac{8}{4}, \dfrac{13}{4}, \dfrac{18}{4}, \ldots$ to $20$ terms.
Taking $\dfrac{1}{4}$ common, we get
$ \Rightarrow \dfrac{1}{4}(8 + 13 + 18 + \ldots)$ to $20$ terms.
Observe that $8, 13, 18, \ldots$ is an Arithmetic Progression with first term $8$ and common difference $5$.
So, the sum of the first $20$ terms of that series will be
$ = \dfrac{1}{4}\left( \dfrac{20}{2}\left( 2 \times 8 + (20 - 1) \times 5 \right) \right)$ $\because S_n = \dfrac{n}{2}\left( 2a + (n - 1)d \right)$
Here $n = $ number of terms, $a = $ first term and $d = $ common difference.
$ = \dfrac{1}{4}(1110) = \dfrac{555}{2}$ Answer.
Note: Whenever such a series is given, first convert it into a simple form and then identify which type of series it is. Then apply the formula for that series to get the answer.
OLAP Overview
Online Analytical Processing (OLAP) is a technology that is used to organize large business databases and support business intelligence. More simply, in Excel you can convert your EBM cube to
OLAP formulas in order to format tables in a way that's more customized than a standard pivot table and create a reference input for other sheets in your report.
This article contains the following topics:
What are the advantages of OLAP formulas?
• The most common reason to use OLAP formulas in your reporting is to create a more finished, refined, or custom report than one could achieve with pivot tables alone. You would only use OLAP for
reports that won't change in their general structure.
• For example, you wouldn't use this when reporting on something like Customers, because Customers will regularly change over time. Rather, you'd probably only use this for standard P&L and Balance
Sheet reporting.
• The main benefit of using OLAP formulas is in overcoming the inherent challenges presented in pivot table formatting limitations. Because pivot tables have restrictions on how columns and rows
are formatted and arranged, OLAP can become a useful tool when you want to break free from the traditional view.
How to convert to OLAP?
Converting a pivot table cube to OLAP formulas is easy, but you'll first want to build out your view in the pivot table so that your formulas make sense once converted. A good example of this is
creating a balance sheet or an income statement and arranging your fields and filters in a way that generally makes sense.
Here's a simple example:
Once you've built out your basic report in a pivot table, you're now ready to convert it to OLAP formulas.
1. Create a copy of your pivot table, either in the same sheet or in a new sheet. As an option, you can make a copy of the PivotTable worksheet by pressing the Ctrl key while dragging the
PivotTable's worksheet tab. This allows you to both retain your original PivotTable and create a formula-based version of that report from the PivotTable copy,
as shown below.
2. Click anywhere in the pivot table.
3. Convert to OLAP formulas by toggling to PivotTable Analyze > OLAP Tools > Convert to Formulas.
Hot Tip: You can use the keyboard shortcut to quickly convert to OLAP formulas: ALT + JTOC
4. Check the box to Convert Report Filters. Click Convert.
5. Done! You now have formulas for each field of your pivot table. Now that you have formulas associated with each field or filter (e.g. 2020 Actuals, Cases, Gross Sales, etc.) you can then
reference these elsewhere. You can now also reformat your table to customize it in any way that helps you consume the data easier.
Here's what that same example looks like after being converted to OLAP formulas:
OLAP formulas
Once you convert your pivot table to formulas you'll notice there are two primary formula types present. Cubemember formulas and Cubevalue formulas.
Cubemember formulas
When you see "CUBEMEMBER" in a formula, it means it is referencing either a Filter, Row, or Column field from the pivot table. Some examples, might be Scenario, Account Level, Company, Month, etc.
Cubevalue formulas
When you see "CUBEVALUE" in a formula, it means it is referencing a value field, such as Activity, Ending Balances, or Calculations. Think of these formulas as the actual numerical values for their
respective cubemembers.
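For illustration (the connection name, member expressions, and cell references here are hypothetical — yours will depend on your workbook and cube): a converted filter cell might contain =CUBEMEMBER("ThisWorkbookDataModel", "[Scenario].[Scenario].&[2020 Actuals]"), while a value cell might contain =CUBEVALUE("ThisWorkbookDataModel", $A$3, $B$1, C$2), where the extra arguments point at the cells holding the CUBEMEMBER formulas for the row, column, and filter fields.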
Callout: Handling N/A Errors in OLAP Formulas
If you see an N/A being produced by an OLAP formula, the typical solution is to refresh all cubes via the EBM Office Bridge Ribbon. This is often the first and most effective step in resolving
the issue. If refreshing all cubes does not resolve the issue, we have additional diagnostic steps to help identify and fix the problem. Please contact support and attach the file for further assistance.
How do I put this into practice?
A common use-case for using OLAP formulas is in creating a reference or inputs tab in your workbook, which you can then reference in other sheets.
Building out an inputs tab (above) gives you the freedom to utilize all the formatting and formulaic tools that Excel offers, so you can create really dynamic reports that still refresh to the
EBM data lake.
| {"url":"https://support.ebmsoftware.com/hc/en-us/articles/8190175439117-OLAP-Overview","timestamp":"2024-11-08T02:15:02Z","content_type":"text/html","content_length":"37382","record_id":"<urn:uuid:92f95699-476d-458e-907d-3a8d91d7a074>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00496.warc.gz"}
Modeling Risk and Realities Coursera Quiz Answers – Networking Funda
All Weeks Modeling Risk and Realities Coursera Quiz Answers
Modeling Risk and Realities Week 01 Quiz Answers
Q1. This question relates to the Hudson Readers Example discussed in Sessions 1 and 2, and assumes that the value of the advertising budget is equal to $195 million. You may use the file Hudson
Readers.xlsx developed in Session 2 to answer this question.
Consider the following spending allocation of the advertising budget: A[SI] = 60, A[SC] = 20, A[EI] = 0, A[EC] = 115. What is the total net sales increase (in $ millions) corresponding to this budget
allocation? Choose the closest value among the ones presented below.
Q2. This question relates to the Hudson Readers Example discussed in Sessions 1 and 2, and assumes that the value of the advertising budget is equal to $195 million. You may use the file Hudson
Readers.xlsx developed in Session 2 to answer this question.
Consider the following two ways to allocate the advertising budget:
(S1) A[SI] = 60, A[SC] = 22, A[EI] = 3, A[EC] = 110
(S2) A[SI] = 55, A[SC] = 10, A[EI] = 15, A[EC] = 115
Which of the following statements is correct:
• Both S1 and S2 are feasible
• S1 is infeasible, and S2 is feasible
• S1 is feasible, and S2 is infeasible
• Both S1 and S2 are infeasible
Q3. This question relates to the Hudson Readers Example discussed in Sessions 1 and 2, and assumes that the value of the advertising budget is equal to $195 million. You may use the file Hudson
Readers.xlsx developed in Session 2 to answer this question.
Consider a version of the Hudson Readers Problem where the only constraints are the advertising budget constraint and the non-negativity constraints on the decision variables (in other words, ignore
the constraints for net sales increase in India and China and on the net sales increase of the enhanced version). What is the optimal value of the total net sales increase (in $ millions) for such a
problem? Choose the closest value among the ones presented below.
Q4. This question relates to the Hudson Readers Example discussed in Sessions 1 and 2, and assumes that the value of the advertising budget is equal to $195 million. You may use the file Hudson
Readers.xlsx developed in Session 2 to answer this question.
Ignore the setting of Q3 and consider the original problem formulation. One of the senior managers at the Hudson Readers believes that the constraint on the net sales increase for the enhanced
version severely limits company’s ability to generate the total net sales increase. Suppose that this constraint is ignored, while all other constraints in the original problem formulation remain
unchanged. Which of the following statements describes the optimal advertising spending plan in the absence of this constraint?
• The total optimal amount of advertising spending in India is 0
• The total optimal amount of advertising spending in China is 0
• The total optimal amount of advertising spending on the enhanced product is 0
• The total optimal amount of advertising spending on the standard product is 0
Q5. This question relates to the Hudson Readers Example discussed in Sessions 1 and 2, and assumes that the value of the advertising budget is equal to $195 million. You may use the file Hudson
Readers.xlsx developed in Session 2 to answer this question.
Ignore the settings of Q3 and Q4 and consider the original problem formulation. Suppose that the company decides to change the requirement on the minimum net sales increase in China from the current
value of $4 million to $3 million. The other constraints remain unchanged. What is the new value of the optimal total net sales increase, in $ millions? Choose the closest value among the ones below.
• 3.14
• 9.56
• 3.00
• 7.52
• 7.38
• 9.15
Q6. This question relates to the Hudson Readers Example discussed in Sessions 1 and 2, and assumes that the value of the advertising budget is equal to $195 million. This question tests your
understanding of the algebraic model formulation and does not require Excel.
The Hudson Readers is considering imposing the following additional requirement: the total amount of advertising spending in India must be at least 55 percent of the total amount of advertising
spending in China. In terms of the problem’s decision variables, which algebraic expression represents this requirement?
• A[SC] + A[EC] ≥ 0.55*(A[SI] + A[EI])
• A[SI] + A[EI] ≥ 0.55*(A[SC] + A[EC])
• A[SI] + A[SC] ≥ 0.55*(A[EI] + A[EC])
• A[EI] + A[EC] ≥ 0.55*(A[SI] + A[SC])
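Worked reasoning for Q6 (based on the naming convention used throughout this problem, where S/E denote the standard/enhanced product and I/C denote India/China): total advertising spending in India is A[SI] + A[EI] and total spending in China is A[SC] + A[EC], so the requirement "India spending is at least 55 percent of China spending" translates to A[SI] + A[EI] ≥ 0.55*(A[SC] + A[EC]).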
Q7. This question relates to the Epsilon Delta Capital example introduced in Session 3, and assumes that the value of the investment budget is equal to $125 million. You may use the file Epsilon
Delta Capital.xlsx developed in Session 3 to answer this question.
Suppose that the Epsilon Delta Capital invests an equal amount, $31.25 million, into each of the four groups of financial products. What is the weighted quality score of such an investment? Choose the closest
among the values below.
• 2.75
• 2.00
• 2.25
• 3.00
• 2.50
• 1.75
Q8. This question relates to the Epsilon Delta Capital example introduced in Session 3, and assumes that the value of the investment budget is equal to $125 million. You may use the file Epsilon
Delta Capital.xlsx developed in Session 3 to answer this question.
Is the equal-amount investment of Q7 feasible for the Epsilon Delta Capital problem?
Q9. This question relates to the Epsilon Delta Capital example introduced in Session 3, and assumes that the value of the investment budget is equal to $125 million. You may use the file Epsilon
Delta Capital.xlsx developed in Session 3 to answer this question.
For the equal-amount investment of Q7, what is the expected annual return, in $ millions? Choose the closest among the values below.
• 6.23
• 4.25
• 6.76
• 5.31
• 5.23
• 4.03
Q10. This question relates to the Epsilon Delta Capital example introduced in Session 3, and assumes that the value of the investment budget is equal to $125 million. You may use the file Epsilon
Delta Capital.xlsx developed in Session 3 to answer this question.
The Epsilon Delta Capital considers dropping the minimum investment requirement of $20 million on all product groups. If this requirement is removed from the Epsilon Delta Capital model, and the rest
of the model remains unchanged, what is the new optimal expected return, in $ millions? Choose the closest among the values below.
• 6.46
• 6.66
• 6.56
• 6.36
• 6.16
• 6.26
Modeling Risk and Realities Week 02 Quiz Answers
Q1. This question relates to content of Session 1 and is based on the following example. Consider a model for describing a random return on Stock C next week, R[C]. According to this model, R[C] can
be described using the following 5 scenarios. You can find these data in the posted file Stock C.xlsx.
Scenario RC Value Probability of Scenario
1 -0.01 0.1
2 -0.03 0.2
3 0.01 0.4
4 0.02 0.2
5 0.04 0.1
What is the expected value of the return on Stock C next week, i.e., what is the value of E[R[C]]? Choose the closest from the answers below.
• 0.000
• 0.015
• 0.010
• 0.005
• -0.005
• 0.020
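Worked calculation for Q1 (directly from the scenario table): E[R[C]] = 0.1·(−0.01) + 0.2·(−0.03) + 0.4·(0.01) + 0.2·(0.02) + 0.1·(0.04) = −0.001 − 0.006 + 0.004 + 0.004 + 0.004 = 0.005, so the closest listed value is 0.005.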
Q2. This question relates to content of Session 1 and is based on the following example. Consider a model for describing a random return on Stock C next week, R[C]. According to this model, R[C] can
be described using the following 5 scenarios. You can find these data in the posted file Stock C.xlsx.
Scenario RC Value Probability of Scenario
1 -0.01 0.1
2 -0.03 0.2
3 0.01 0.4
4 0.02 0.2
5 0.04 0.1
What is the standard deviation of the return on Stock C next week, i.e., what is the value of SD[R[C]]? Choose the closest from the answers below.
• 0.031
• 0.021
• 0.011
• 0.041
• 0.051
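Worked calculation for Q2: with E[R[C]] = 0.005 from Q1, Var[R[C]] = 0.1·(−0.015)² + 0.2·(−0.035)² + 0.4·(0.005)² + 0.2·(0.015)² + 0.1·(0.035)² = 0.000445, so SD[R[C]] = √0.000445 ≈ 0.0211 — closest to 0.021.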
Q3. This question relates to content of Session 1 and is based on the following example. Consider a model for describing a random return on Stock C next week, R[C]. According to this model, R[C] can
be described using the following 5 scenarios. You can find these data in the posted file Stock C.xlsx.
Scenario RC Value Probability of Scenario
1 -0.01 0.1
2 -0.03 0.2
3 0.01 0.4
4 0.02 0.2
5 0.04 0.1
What is the probability that the return on Stock C next week is negative? Choose the closest from the answers below.
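Worked calculation for Q3: the return is negative in scenarios 1 and 2 only, so the probability is 0.1 + 0.2 = 0.3.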
Q4. This question relates to content of Sessions 1 and 2 and is based on the following example. Consider a model for describing random returns on Stocks D and E next week, R[D] and R[E]. According to
this model, R[D] and R[E] can be described using the following 3 scenarios. You can find these data in the posted file Stocks DE.xlsx.
Scenario R[D] Value R[E] Value Probability of Scenario
1 -0.04 0.01 0.3
2 0.03 0.02 0.5
3 0.01 -0.005 0.2
Let E[R[D]] and E[R[E]] be the expected return values for Stocks D and E next week, respectively, and let SD[R[D]] and SD[R[E]] be the standard deviations of the returns for Stocks D and E next week,
respectively. Which of the following statements is correct?
• E[R[D]] > E[R[E]] and SD[R[D]] > SD[R[E]]
• E[R[D]] ≤ E[R[E]] and SD[R[D]] ≤ SD[R[E]]
• E[R[D]] > E[R[E]] and SD[R[D]] ≤ SD[R[E]]
• E[R[D]] ≤ E[R[E]] and SD[R[D]] > SD[R[E]]
Q5. This question relates to content of Sessions 1 and 2 and is based on the following example. Consider a model for describing random returns on Stocks D and E next week, R[D] and R[E]. According to
this model, R[D] and R[E] can be described using the following 3 scenarios. You can find these data in the posted file Stocks DE.xlsx.
Scenario R[D] Value R[E] Value Probability of Scenario
1 -0.04 0.01 0.3
2 0.03 0.02 0.5
3 0.01 -0.005 0.2
What is the value of the correlation coefficient between R[D] and R[E]? Choose the closest answer from the ones presented below.
• 0
• 1
• -1
• -0.379
• -0.165
• 0.165
• 0.379
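Q5 can be verified the same way, by computing the probability-weighted covariance and standard deviations from the scenario table. A minimal sketch, with illustrative names:

```python
# Scenario model for Stocks D and E (from the table above)
rd = [-0.04, 0.03, 0.01]
re = [0.01, 0.02, -0.005]
p = [0.3, 0.5, 0.2]

mean_d = sum(pi * x for pi, x in zip(p, rd))
mean_e = sum(pi * y for pi, y in zip(p, re))

# Probability-weighted covariance and standard deviations
cov = sum(pi * (x - mean_d) * (y - mean_e) for pi, x, y in zip(p, rd, re))
sd_d = sum(pi * (x - mean_d) ** 2 for pi, x in zip(p, rd)) ** 0.5
sd_e = sum(pi * (y - mean_e) ** 2 for pi, y in zip(p, re)) ** 0.5

print(round(cov / (sd_d * sd_e), 3))  # correlation coefficient, ~0.379
```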
Q6. This question relates to content of Sessions 1 and 2 and is based on the following example. Consider a model for describing random returns on Stocks D and E next week, R[D] and R[E]. According to
this model, R[D] and R[E] can be described using the following 3 scenarios. You can find these data in the posted file Stocks DE.xlsx.
Scenario R[D] Value R[E] Value Probability of Scenario
1 -0.04 0.01 0.3
2 0.03 0.02 0.5
3 0.01 -0.005 0.2
Suppose that a financial company invests $100,000 in Stock D and $200,000 in Stock E now. What is the highest possible value of profit, in $, associated with this investment that the company
can earn next week? Choose the closest answer from the ones presented below.
Q7. This question relates to content of Sessions 1 and 2 and is based on the following example. Consider a model for describing random returns on Stocks D and E next week, R[D] and R[E]. According to
this model, R[D] and R[E] can be described using the following 3 scenarios. You can find these data in the posted file Stocks DE.xlsx.
Scenario R[D] Value R[E] Value Probability of Scenario
1 -0.04 0.01 0.3
2 0.03 0.02 0.5
3 0.01 -0.005 0.2
Under the investment plan of Q6, what is the expected value of profit, in $, that the company will earn next week? Choose the closest answer from the ones presented below.
Q8. This question relates to the two-stock example considered in Session 3. In answering these questions, you can use the Excel file TwoStocks_Solved.xlsx.
Suppose that an investor is considering a portfolio with X[A] = 75,000, X[B] = 25,000. In other words, the investor decides to put $75,000 in Stock A and $25,000 in Stock B "today". What is
the expected profit, in $, such a portfolio will earn tomorrow? Choose the closest answer from the ones presented below.
Q9. This question relates to the two-stock example considered in Session 3. In answering these questions, you can use the Excel file TwoStocks_Solved.xlsx.
What is the value of the standard deviation of profits, in $, for the portfolio considered in Q8? Choose the closest answer from the ones presented below.
Q10. This question relates to the two-stock example considered in Session 3. In answering these questions, you can use the Excel file TwoStocks_Solved.xlsx.
Suppose that an investor would like to split $100,000 between Stocks A and Stock B “today” so as to maximize the expected profit “tomorrow” irrespective of the standard deviation of the resulting
profit. In other words, suppose that the investor “drops” the constraint on the maximum allowable value of the standard deviation of profits, while keeping the rest of the constraints in the
portfolio problem. Which of the following choices describes the optimal portfolio in this case?
• X[A] = 0, X[B] = 100,000
• X[A] = 100,000, X[B] = 0
• X[A] = 25,000, X[B] = 75,000
• X[A] = 50,000, X[B] = 50,000
• X[A] = 75,000, X[B] = 25,000
Modeling Risk and Realities Week 03 Quiz Answers
Q1. A sports team named Philadelphia Streets has a probability of 2/3 of winning each game against their division rivals Hockeytown. They play 12 games against each other during the season. Assume
that the outcome of any particular game is independent of the outcome of any other game. Let X be the random variable that stands for the number of wins that Philadelphia Streets will have in those
12 games. What is the expected value of X?
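Since X is binomial with n = 12 and p = 2/3, its expected value is n times p; a one-line check:

```python
n, p = 12, 2 / 3
print(n * p)  # expected number of wins: 8.0
```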
Q2. Re-examine the medical drug success example in the videos. Recall that the number of the successes is distributed binomially (i.e., according to a binomial distribution).
Based on the definition of the mode, what is the mode of the distribution of successes? (Recall that the mode is the most likely value that a random variable can take).
Q3. The number of shares of a stock traded during a day for a firm is approximated by a random variable that is normally distributed with mean 3192 and standard deviation 1181.
What is the probability that the number of shares traded is less than or equal to 4200?
• 0.50
• 0.9998
• 0.20
• 0.0002
• 0.002
• 0.80
Q4. The number of shares of a stock traded during a day for a firm is approximated by a random variable that is normally distributed with mean 3192 and standard deviation 1181.
Calculate the pdf value at x=3200.
• 0.202
• 0.9997
• 0.502
• 0.0003
• 0.003
• 0.801
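Q3 and Q4 reduce to evaluating the normal CDF and PDF at the given points. A sketch, assuming scipy is available:

```python
from scipy.stats import norm

mu, sigma = 3192, 1181

# Q3: P(X <= 4200) for X ~ Normal(mu, sigma)
print(round(norm.cdf(4200, loc=mu, scale=sigma), 2))  # ~0.80

# Q4: density (pdf) evaluated at x = 3200
print(round(norm.pdf(3200, loc=mu, scale=sigma), 4))  # ~0.0003
```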
Q5. The forecast monthly revenues for a firm are modeled using a random variable that is distributed according to a normal distribution with mean $850,000 and standard deviation $165,000.
What is the median value of this distribution, in $?
• 520,000
• 200,000
• 685,000
• 1,015,000
• 850,000
• 1,180,000
Q6. The forecast monthly revenues for a firm are modeled using a random variable that is distributed according to a normal distribution with mean $850,000 and standard deviation $165,000.
What is the probability that the revenues will be less than $700,000? Choose the closest numerical answer.
• 0.27
• 0.10
• 0.82
• 0.18
• 0.73
• 0.50
• 0.90
Q7. The forecast monthly revenues for a firm are modeled using a random variable that is distributed according to a normal distribution with mean $850,000 and standard deviation $165,000.
What is the probability that revenues will exceed 1 million dollars? Choose the closest answer.
• 0.18
• 0.73
• 0.90
• 0.82
• 0.27
• 0.10
• 0.50
Q8. A financial advisor at a financial consulting firm spends time with his investing clients throughout the year. Based on the historical data, he finds that the consulting time T spent with a
client can be modeled as a continuous, uniformly distributed random variable, with the minimum value of 50 minutes and the maximum value of 183 minutes.
What is the pdf value of this distribution at T=67 minutes?
• 0.9825
• 0.47
• 0.33
• 0.0075
• 0.53
• 0.67
Q9. A financial advisor at a financial consulting firm spends time with his investing clients throughout the year. Based on the historical data, he finds that the consulting time T spent with a
client can be modeled as a continuous, uniformly distributed random variable, with the minimum value of 50 minutes and the maximum value of 183 minutes.
What is the probability that his consulting time with an investor client will not exceed 2 hours (i.e., 120 minutes)? Choose the closest answer.
• 0.33
• 0.53
• 0.0075
• 0.67
• 0.47
• 0.9825
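For the uniform model in Q8 and Q9, the density is flat at 1/(b - a) over [a, b], and probabilities are proportional to interval length; a quick check:

```python
a, b = 50, 183  # minutes

pdf = 1 / (b - a)                # density anywhere inside [a, b], e.g. at T = 67
p_le_120 = (120 - a) / (b - a)   # P(T <= 120)

print(round(pdf, 4), round(p_le_120, 2))  # ~0.0075 and ~0.53
```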
Q10. Suppose you are working on a project based on some complex data from your firm. You have broken down the 1344 data points that you have into 35 buckets or bins. You are now testing the goodness
of fit, using a chi-square test for a distribution that is characterized by 3 parameters.
What is the number of degrees of freedom associated with your chi-square test?
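Under the usual convention for a chi-square goodness-of-fit test, the degrees of freedom are (number of bins) minus 1 minus (number of estimated parameters):

```python
bins, fitted_params = 35, 3
print(bins - 1 - fitted_params)  # 31 degrees of freedom
```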
Modeling Risk and Realities Week 04 Quiz Answers
Q1. All questions in this quiz relate to the Stargrove example covered during this week. You can use the file Stargrove.xlsx to answer these questions. In this question, we assume that the Stargrove
decides to build R=96 regular and L=12 luxury apartments.
Suppose that the demand for regular apartments turns out to be D[R] = 94. How much profit, in $ millions, will the company earn from the sales of regular apartments, including the sales at the
$500,000 profit margin as well as the sales at the $100,000 profit margin? Note that you should not count the profit from the sales of luxury apartments. Choose the closest from the answers below.
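One way to check Q1 by hand is to encode the profit rule directly. This is a hypothetical Python sketch, not taken from Stargrove.xlsx; it reads the question as: units sell at the $500,000 margin up to demand, and everything left unsold earns the $100,000 salvage margin.

```python
def regular_profit(demand, built=96, high=0.5, salvage=0.1):
    """Profit in $ millions from regular apartments: high-margin sales
    up to demand, salvage-margin sales on whatever is left unsold."""
    sold_high = min(demand, built)
    return sold_high * high + (built - sold_high) * salvage

print(regular_profit(94))  # 94 * 0.5 + 2 * 0.1 = 47.2
```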
Q2. All questions in this quiz relate to the Stargrove example covered during this week. You can use the file Stargrove.xlsx to answer these questions. In this question, we assume that the Stargrove
decides to build R=96 regular and L=12 luxury apartments.
What is the maximum amount of profit, in $ millions, that the company can earn from the sales of regular apartments, including the sales at the $500,000 profit margin as well as the sales at the $100,000
profit margin? Note that you should not count the profit from the sales of luxury apartments. Choose the closest from the answers below.
Q3. All questions in this quiz relate to the Stargrove example covered during this week. You can use the file Stargrove.xlsx to answer these questions. In this question, we assume that the Stargrove
decides to build R=96 regular and L=12 luxury apartments.
Suppose that the actual demand for regular apartments at the $500,000 profit margin, D[R], is such that the Stargrove realized a profit of $500,000 from selling regular apartments to the real estate
investment company at the salvage profit margin of $100,000 per apartment. How much profit, in $ millions, did the Stargrove earn from the sales of the remaining regular apartments at the $500,000
profit margin for the same realization of demand D[R]? Choose the closest from the answers below.
• 45.5
• 45
• 45.2
• 46
• 46.2
• 46.5
Q4. All questions in this quiz relate to the Stargrove example covered during this week. You can use the file Stargrove.xlsx to answer these questions. In this question, we assume that the Stargrove
decides to build R=96 regular and L=12 luxury apartments.
For what value of the demand for regular apartments, D[R], is the profit from selling regular apartments at the high profit margin of $500,000 equal to the profit from selling regular apartments to
the real estate investment company at the salvage profit margin of $100,000?
Q5. All questions in this quiz relate to the Stargrove example covered during this week. You can use the file Stargrove.xlsx to answer these questions. In this question, we assume that the Stargrove
decides to build R=96 regular and L=12 luxury apartments.
Suppose that we have set up a simulation with n=4 simulation runs that generated the following random instances for the demand for regular apartments, D[R]: 88, 91, 97, and 103. Calculate the four
corresponding values of the profit from the sales of regular apartments (i.e., the sum of profits at both the high profit margin of $500,000 and the low profit margin of $100,000) and use Excel to
generate the descriptive statistics for this sample of four profit values. What is the sample mean, in millions of $, of these four profit values? Choose the closest from the answers below.
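Reusing the hypothetical regular_profit sketch from Q1 above on the four demand draws:

```python
profits = [regular_profit(d) for d in (88, 91, 97, 103)]
print(profits)                      # [44.8, 46.0, 48.0, 48.0]
print(sum(profits) / len(profits))  # sample mean, ~46.7
```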
Q6. All questions in this quiz relate to the Stargrove example covered during this week. You can use the file Stargrove.xlsx to answer these questions. In this question, we assume that the Stargrove
decides to build R=96 regular and L=12 luxury apartments.
Suppose that the same simulation as in Q5 generated the following random instances for the demand for luxury apartments, D[L]: 5, 7, 12, and 13. Calculate the four corresponding values of the profit
from the sales of luxury apartments (i.e., the sum of profits at both the high profit margin of $900,000 and the low profit margin of $150,000) and use Excel to generate the descriptive statistics
for this sample of four profit values. What is the sample standard deviation, in millions of $, of these four profit values? Choose the closest from the answers below.
Q7. All questions in this quiz relate to the Stargrove example covered during this week. You can use the file Stargrove.xlsx to answer these questions. In this question, we assume that the Stargrove
decides to build R=96 regular and L=12 luxury apartments.
Using four random instances of the demand for regular apartments from Q5 and four random instances of the demand for luxury apartments from Q6, calculate the four corresponding total profit values
obtained from sales of both regular and luxury apartments. Based on these four values, estimate the likelihood that the total profit is above $52 million. Choose the closest from the answers below.
Q8. All questions in this quiz relate to the Stargrove example covered during this week. You can use the file Stargrove.xlsx to answer these questions. In this question, we assume that the Stargrove
decides to build R=96 regular and L=12 luxury apartments.
Use Excel to generate descriptive statistics for the four profit values in Q7 and calculate the 95% confidence interval for the true expected value of the total profit. If this interval has the form
[$X, $Y], what is the value of X, expressed in millions? Choose the closest from the answers below.
Q9. All questions in this quiz relate to the Stargrove example covered during this week. You can use the file Stargrove.xlsx to answer these questions. This question is focused on an alternative
decision to build R=88 regular and L=16 luxury apartments.
Consider the decision to build R=88 regular and L=16 luxury apartments. Using the four random instances of the demand for regular apartments from Q5 and four random instances of the demand for luxury
apartments from Q6, calculate the four corresponding total profit values obtained from sales of both regular and luxury apartments under this decision. Based on these four values, estimate the
likelihood that the total profit is above $52 million. Choose the closest from the answers below.
Q10. All questions in this quiz relate to the Stargrove example covered during this week. You can use the file Stargrove.xlsx to answer these questions. This question is focused on an alternative
decision to build R=88 regular and L=16 luxury apartments.
Use Excel to generate descriptive statistics for the four profit values in Q9 and calculate the 95% confidence interval for the true expected value of the total profit. If this interval has the form
[$N, $M], what is the value of M-N, i.e., what is the width of the 95% confidence interval for the expected value of the total profit? Express the value in millions and choose the closest from the
answers below.
| {"url":"https://networkingfunda.com/modeling-risk-and-realities-coursera-quiz-answers/","timestamp":"2024-11-13T06:48:05Z","content_type":"text/html","content_length":"176682","record_id":"<urn:uuid:392ea306-bf43-4f08-9c96-9e996ae4ad88>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00746.warc.gz"}
Green (3a): Converting NFAs to DFAs
Consider the following NFA.
1. What is its transition table?
2. Use the subset construction method to convert it to an equivalent DFA.
3. Draw the state diagram of the resulting DFA.
4. Which of the following sets of NFA states is not a state of the resulting DFA?
1. Transition table:
\[ \begin{array}{|rl||c|c|} \hline && 0 & 1 \\ \hline\hline \to & A & \{A\} & \{B\}\\ & B & \{A, C\} & \emptyset\\ * & C & ....... & .......\\ \hline \end{array} \]
1. Conversion into an equivalent DFA using the subset construction method.
\[ \begin{array}{|rc||c|c|} \hline && 0 & 1 \\ \hline\hline \to & \{A\} & ....... & ....... \\ & \{B\} & \{A, C\} & \emptyset\\ * & \{A, C\} & ....... & ....... \\ & \emptyset & \emptyset & ....... \\ & ....... & ....... & ....... \\ \hline \end{array} \]
1. State diagram of the resulting DFA.
2. Which of those sets is not in your table after performing the subset construction method?
Use the subset construction method to convert the following NFA to an equivalent DFA.
\[ \begin{array}{|rl||c|c|} \hline && 0 & 1 \\ \hline \hline \to & A & \{A, B\} & \{C\} \\ & B & \{D\} & \{B\} \\ & C & \{C\} & \{E\} \\ * & D & \emptyset & \{D\} \\ * & E & \emptyset & \{E\} \\ \hline \end{array} \]
The resulting DFA has 13 states, 8 of which are accepting states.
\[ \begin{array}{|rr||c|c|} \hline & & 0 & 1 \\ \hline\hline \to & \{A\} & ....... & ....... \\ & \{A,B\} & ....... & ....... \\ & \{C\} & ....... & ....... \\ * & \{A,B, D\} & ....... & ....... \\ &
\{B,C\} & ....... & ....... \\ * & \{ E\} & ....... & ....... \\ * & \{B,C, D\} & ....... & ....... \\ * & \{C, D\} & ....... & ....... \\ * & \{B, E\} & ....... & ....... \\ & \emptyset & \emptyset
& \emptyset \\ * & \{B, D, E\} & ....... & ....... \\ * & \{ D, E\} & ....... & ....... \\ * & \{ D\} & ....... & ....... \\ \hline \end{array} \]
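The subset construction can also be mechanized. Here is a minimal Python sketch; the dict encoding of the NFA is illustrative, not from the lab files, and it handles NFAs without epsilon-transitions, which is all that Section 3a needs.

```python
from collections import deque

def subset_construction(nfa, start, alphabet):
    """nfa maps (state, symbol) -> set of states; returns the DFA table
    whose states are frozensets of NFA states, starting from {start}."""
    dfa = {}
    queue = deque([frozenset([start])])
    while queue:
        current = queue.popleft()
        if current in dfa:
            continue  # already expanded
        dfa[current] = {}
        for a in alphabet:
            # Union of all moves on symbol a from the states in `current`
            target = frozenset(s for q in current for s in nfa.get((q, a), set()))
            dfa[current][a] = target
            queue.append(target)
    return dfa

# Toy example (hypothetical transitions, not one of the lab's NFAs)
nfa = {("A", "0"): {"A"}, ("A", "1"): {"A", "B"}, ("B", "0"): {"C"}}
print(subset_construction(nfa, "A", ["0", "1"]))
```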
FAs for strings ending in aaa
Design an NFA that accepts strings over \(\{\texttt{a},\texttt{b}\}\) which end with aaa, then convert it to an equivalent DFA.
FAs for strings with 1 in the second position from the end
Design an NFA that accepts strings over \(\{0,1\}\) which have 1 in the second position from the end (e.g. 0010, 1011,10, etc.), then convert it to an equivalent DFA.
Green (3b): Regular Expressions
Making sense of RegEx's
Complete the descriptions of the following regular expressions (write in the shaded boxes). Assume the alphabet \(\Sigma=\{0,1\}\) in all the parts.
Operators precendence
Recall that, unless brackets are used to explicitly denote precedence, the operators precendence for the regular operations is: star, concatenation, then union.
1. \(01+ 10 =\{\_\_\_,\_\_\_\}\)
2. \(({\varepsilon}+ 0)({\varepsilon}+ 1)=\{{\varepsilon},0,1,\_\_\_\}\)
3. \((0+ \varepsilon)1^* = 01^* + 1^* = \{w\mid w \text{ has at most ___ and it is at the start of $w$}\}\)
4. \(\Sigma^*0 = \{w\mid w \text{ ends with a ___} \} = \{w\mid w \text{ respresents an ___ number in binary}\}\)
5. \(0^*10^* = \{w\mid w \text{ contains a single ___ }\}\)
6. \(\Sigma^*0\Sigma^*=\{w\mid w \text{ has at least one ___} \}\)
7. \(\Sigma^*001\Sigma^*=\{w\mid w \text{ contains the string ___ as a substring}\}\)
8. \(\Sigma^*000^*\Sigma^*=\{w\mid w \text{ contains at least ___ consecutive ___'s}\}\)
9. \((011^*)^* = \{w\mid \text{ every ___ in $w$ is followed by at least one ___}\}\)
10. \(\Sigma\Sigma + \Sigma\Sigma\Sigma = \Sigma\Sigma({\varepsilon}+\Sigma) = \{w\mid \text{ the length of $w$ is exactly ___ or ___}\}\)
11. \((\Sigma\Sigma)^*=\{w\mid w \text{ is a string of ___ length}\}\)
12. \((\Sigma\Sigma\Sigma)^*=\{w\mid \text{ the length of $w$ is a multiple of ___}\}\)
13. \(0\Sigma^*0+ 1\Sigma^*1+ 0 + 1=\{w\mid w \text{ starts and ends with the ___ symbol}\}\)
In what follows:
• the dots "......." below are an informal notation to indicate omitted parts of a string.
• the square \(\square\) is an informal notation for "any symbol".
These are used to help you "see" the patterns encoded by the RegEx's.
1. \(01+10\), i.e. first string \(01\) or second string \(10\) only.
2. \(({\varepsilon}+ 0)({\varepsilon}+ 1)\). Select one from \(\{\varepsilon, 0\}\) then concatenate it (glue it) to one from \(\{\varepsilon, 1\}\).
3. \((0+ {\varepsilon})1^*=01^*+ 1^*\). Think: \(\{01.......1, 1.......1\}\)
4. \(\Sigma^*0\). Think: \(......0\)
5. \(0^*10^*\). Think: \(0.......010.......0\)
6. \(\Sigma^*0\Sigma^*\). Think: \(.......0.......\)
7. \(\Sigma^*001\Sigma^*\). Think: \(.......001.......\)
8. \(\Sigma^*000^*\Sigma^*\). Think: \(.......00...0.......\)
9. \((011^*)^*=(011.......1)^*\). Think: "011.......1" repeated, i.e. 011.......1011.....1...................011.....1
10. \(\Sigma\Sigma + \Sigma\Sigma\Sigma\). Two symbols \(\square\square\) or three symbols \(\square\square\square\) only.
11. \((\Sigma\Sigma)^*\). \(\varepsilon\) or \(\square\square\) or \(\square\square\square\square\) or \(\square\square\square\square\square\square\) etc. The lengths are: 0,2,4,6,...
12. \((\Sigma\Sigma\Sigma)^*\). \(\varepsilon\) or \(\square\square\square\) or \(\square\square\square\square\square\square\) or \(\square\square\square\square\square\square\square\square\square\)
etc. The lengths are: 0,3,6,9,...
13. \(0\Sigma^*0+ 1\Sigma^*1+ 0 + 1\). Think \(\{0.......0, 1.......1, 0, 1\}\).
RegEx's and ε-NFAs
Produce a regular expression for the following languages over the alphabet \(\{\texttt{a}, \texttt{b}\}\)
1. The language \(L_\texttt{a}\) of all strings that start with a.
2. The language \(L_\texttt{b}\) of all strings that end with b.
3. The union \(L_\texttt{a}\cup L_\texttt{b}\).
4. The concatenation \(L_\texttt{a}L_\texttt{b}\).
5. \(L=(L_\texttt{a}\cup L_\texttt{b})L_\texttt{a}L_\texttt{b}\).
6. The star closure of \(L\): \(L^*\).
Produce \({\varepsilon}\)-NFAs for each of the above using the constructions shown in the lecture for the union, concatenation, and star.
1. \(L_\texttt{a}\). Think: \(\texttt{a}.......\)
2. \(L_\texttt{b}\). Think: \(.......\texttt{b}\)
3. \(L_\texttt{a}\cup L_\texttt{b}\). Think: \(\texttt{a}.......\) or \(.......\texttt{b}\)
4. \(L_\texttt{a}L_\texttt{b}\). Think: \(\texttt{a}..............\texttt{b}\)
5. \(L\). Think: \((\texttt{a}....... + .......\texttt{b})\texttt{a}.......\texttt{b}\)
6. \(L^*=\left((L_\texttt{a}\cup L_\texttt{b})L_\texttt{a}L_\texttt{b}\right)^*\). Think: \(\Big((\texttt{a}....... + .......\texttt{b})\texttt{a}.......\texttt{b}\Big)^*\)
A second look at RegEx's
For each of the following RegEx's, give two strings that are members of the corresponding language, and two strings that are not. (A total of 4 strings for each part.)
Assume the alphabet \(\Sigma=\{a,b\}\) in all the parts.
1. \(a^*b^*\)
2. \(a(ba)^*b\)
3. \(a^*+ b^*\)
4. \((aaa)^*\)
5. \(\Sigma^*a\Sigma^*b\Sigma^*a\Sigma^*\)
6. \(aba+ bab\)
7. \(({\varepsilon}+ a)b\)
8. \((a+ ba+ bb)\Sigma^*\)
RegEx Examples Non-examples
\(a^*b^*\) a, b ba, aba.
\(a(ba)^*b\) ab, abab aab, aabb
\(a^*+ b^*\) \({\varepsilon}\), a ab, ba
\((aaa)^*\) \({\varepsilon}\), aaa a, aa
\(\Sigma^*a\Sigma^*b\Sigma^*a\Sigma^*\) aba, aaba a, ab
\(aba+ bab\) aba, bab a, ab
\(({\varepsilon}+ a)b = b + ab\) b, ab a, ba
\((a+ ba+ bb)\Sigma^*\) a, ba \({\varepsilon}\), b
Designing RegEx's
Give regular expressions generating the languages below over \(\Sigma=\{0,1\}\)
1. \(\{w\mid w \text{ begins with 1 and ends with a 0} \}\)
2. \(\{w\mid w \text{ contains at least three 1's} \}\)
3. \(\{w\mid w \text{ contains the substring 0101} \}\)
4. \(\{w\mid w \text{ has length at least 3 and its third symbol is 0} \}\)
5. \(\{w\mid w \text{ starts with 0 and has odd length, or starts with 1 and has even length} \}\)
6. \(\{w\mid w \text{ does not contain the substring 110} \}\)
7. \(\{w\mid \text{ the length of $w$ is at most 5} \}\)
8. \(\{w\mid w \text{ is any string except 11 and 111} \}\)
9. \(\{w\mid \text{ every odd position of $w$ is 1} \}\)
10. \(\{w\mid w \text{ contains at least two 0's and at most one 1} \}\)
11. \(\{{\varepsilon}, 0\}\)
12. \(\{w\mid w \text{ contains an even number of 0's, or contains exactly two 1's} \}\)
13. The empty set.
14. All strings except the empty string.
Please note that multiple solutions are possible. If yours looks different then check if they are equivalent.
1. \(1 ....... 0\)
2. \(....... 1 ....... 1 ....... 1 .......\)
3. \(....... 0101 .......\)
4. \(\square\square 0 .......\)
5. The first few cases for the odd length strings are:
\[0, 0(\Sigma\Sigma), 0(\Sigma\Sigma)(\Sigma\Sigma), \ldots\]
and the first few cases for the even length strings are:
\[1\Sigma, 1\Sigma(\Sigma\Sigma), 1\Sigma(\Sigma\Sigma)(\Sigma\Sigma), \ldots\]
From these two case we infer the general RegEx by taking their union.
6. This is challenging -- you can create a DFA first then use the GNFA algorithm to get the required RegEx.
Once you have the expression, can you see why it works?
Another way to get to the solution is as follows:
If \(\texttt{110}\) is not a substring of a string \(w\), then there are no consecutive \(\texttt{1}\)'s other than possibly at the end of \(w\).
So \(w\) can be written as \(w=u\ell\) where \(u\) has no consecutive \(\texttt{1}\)'s and \(\ell\) is made exclusively of zero or more \(\texttt{1}\)'s.
\(u\) can be taken to be \((0 + 10)^*\) or \(0^*(100^*)^*\) (or any other equivalent RegEx), while \(\ell = 1^*\).
\((0 + 10)^*\) gives you two types of "bricks" (0 and 10) to build your string by concatenation (gluing of the bricks), and these will never produce \(11\). Once you have \(11\) then you are not
allowed any 0's, hence the part: \(1^*\).
Just because you may have found this one difficult does not mean the rest are even harder; in fact the next one is straightforward!
7. Strings of lengths: \(0,1,2,3,4,5\).
Informally, \(\varepsilon\) or \(\square\) or \(\square\square\) or \(\square\square\square\) or \(\square\square\square\square\) or \(\square\square\square\square\square\).
8. This is obtained by listing all the acceptable strings of length \(\leq 3\) other than \(11\) and \(111\), then adding the option for strings of length \(\geq 4\).
\({\varepsilon}+ 0+1 + \_\_+\_\_+\_\_ + \_\_\_+\_\_\_+\_\_\_+\_\_\_+\_\_\_+\_\_\_+\_\_\_ + \Sigma^4\Sigma^*\).
9. Think: \({\varepsilon}, 1, 1\square, 1\square1, 1\square1\square, 1\square1\square1, 1\square1\square1\square, ...\)
10. The condition "at most one 1" means we have two cases:
□ No 1 at all. This gives a string of 0s only, and since we must have "at least two 0s" then the RegEx is: \(000^*\) (or any equivalent).
□ Exactly one 1. We must have "at least two 0s", and these can either:
☆ both be before this 1: \(001\)
☆ both be after this 1: \(100\)
☆ be around this 1: \(010\)
These three possibilities give us \(001 + 010 + 100\), and then accounting for any other 0s we get the Regex: \(0^*(001 + 010 + 100)0^*\)
11. Simple enumeration of the finite set \(\{\varepsilon, 0\}\).
12. The language is a union because of the "or" in the condition:
\[\{w\mid w \text{ contains an even number of 0's} \} \;{\textcolor{red}{\cup}}\; \{w\mid w \text{ contains exactly two 1's} \}\]
Hint: \((\ldots \square\ldots \square\ldots)^*\), where "..." contains no \(\square\)s, produces strings with 0 or 2 or 4 or 6 etc. \(\square\)s.
13. One of the three base cases in the definition of regular expressions.
14. Think: \(\Sigma.......\)
This is also written as \(\Sigma^+\) as a shorthand (i.e. strings of length \(\geq 1\)).
Be careful: that is a "superscript plus", not the "union plus"; e.g. \({1^++0^*}\).
Green (3c): NFA to GNFA to RegEx
GNFA algorithm
Use the GNFA algorithm to find regular expressions for the languages recognized by the following NFAs.
Can you interpret the results?
We can convert any NFA into a GNFA as follows:
• Add a new start state with an \({\varepsilon}\)-transition to the NFA's start state.
• Add a new accept state with \({\varepsilon}\)-transitions from the NFA's accept states.
• If a transition has multiple labels then replace them with their union. (e.g. \(a,b\to a+ b\).)
Once the GNFA is produced, start removing states, one at a time, and "patch" any affected transitions using regular expressions (RegEx's). Repeat until only two states (initial and accept) remain.
The RegEx on the only remaining transition is the equivalent RegEx to the NFA.
Convert to GNFA, then remove A, then B.
Convert to GNFA. Remove B, C, A, D.
Convert to GNFA. Remove D, A, B, S.
Convert to GNFA. Remove B, C, A, D.
Convert to GNFA. Remove 2, 3, 4, 5, 1, 0.
RegEx's for similar NFAs
Give RegEx's for the languages recognized by the following similar NFAs, using the GNFA algorithm. What do you notice?
The RegEx for an NFA, whose accepting states include the accepting states of another, also includes its RegEx as a sub-expression.
1 accepting state: \(( \_\_ + \_\_ ) ( \_\_ + \_\_ )^*\). Call this \(R\).
3 accepting states: \(R ( 1 + 0 + \varepsilon) + 1 + 0\)
4 accepting states: \(R ( 1 + 0 + \varepsilon) + 1 + 0 + \varepsilon\)
If all the states of an NFA are accepting then it does not necessarily mean it accepts all possible strings.
Let \(L_n\) be the language of all strings over \(\Sigma=\{1\}\) that have length a multiple of \(n\), where \(n\) is a natural number (i.e. \(n\in\mathbb{N}=\{1,2,3,\ldots\}\)).
1. Design an NFA to recognize \(L_3\), and another to recognize \(L_5\).
2. Write down RegEx's for \(L_3\) and \(L_5\), then for their union \(L_3\cup L_5\).
3. Construct the \({\varepsilon}\)-NFA that recognizes \(L_3\cup L_5\).
4. Use the GNFA algorithm to obtain a RegEx for \(L_3\cup L_5\).
1. Think of cycles/circles, and one accept state which is also the start state.
2. \(L_3\colon (\_\_\_)^*\)
\(L_5\colon (\_\_\_\_\_)^*\)
\(L_3\cup L_5\colon (\_\_\_)^* + (\_\_\_\_\_)^*\).
3. Add a new start state. Connect it (with empty transitions) to the two start states of the \(L_3\) and \(L_5\) automata.
4. Follow the GNFA algorithm steps.
Strings whose length is a multiple of a fixed number
Let \(B_n = \{\texttt{a}^{m}\mid m \text{ is a multiple of $n$}\} = \{\texttt{a}^{kn}\mid k\in\mathbb{Z}_{\geq 0}\}\) over the alphabet \(\Sigma=\{\texttt{a}\}\).
Show that the language \(B_n\) is regular for any \(n\in\mathbb{N}\) by writing a regular expression for it.
Outline the description of an NFA that can recognize it.
RegEx: \({\texttt{a}\ldots\texttt{a}}\) (\(n\) symbols) repeated zero or more times (star).
The corresponding NFA is a generalization of the case for \(L_3\) and \(L_5\) above. It would have \(n\) states in a circular shape, with the start state being the only accepting state.
Closure of regular languages under reversal of strings
For any string \(s=\texttt{s}_1\texttt{s}_2\ldots \texttt{s}_n\), where \(\texttt{s}_i\) are symbols from the alphabet, the reverse of \(s\) is the string \(s\) written in reverse order: \(s^R=\texttt{s}_n\texttt{s}_{n-1}\ldots \texttt{s}_1\).
Given an NFA or RegEx that recognizes a language \(A\), describe how you can transform this NFA/RegEx to recognize the language \(A^R=\{w^R\mid w\in A\}\), i.e. the language that contains all the
strings from \(A\) but in the reverse order.
Basic idea: reverse the arrows in the state diagram, but need to address the case with many accepting states... etc.
Test your ideas on the languages given by the RegEx's: (\(\Sigma=\{a,b\}\))
\[a, b, aa, ab, aa+bb,ab+bb, a^*b^*, \Sigma^*a, a\Sigma^*, ab^*a^*b, (ab)^*, (aa+bb)^*, (ab+bb)^*.\]
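If you want to experiment programmatically, the arrow-flipping step can be sketched as below (dict encoding as in the earlier sketch; handling of the start and accept states is left as described above):

```python
def reverse_transitions(nfa):
    """nfa maps (state, symbol) -> set of states; returns the same
    relation with every arrow reversed: p --a--> q becomes q --a--> p."""
    reversed_nfa = {}
    for (src, symbol), targets in nfa.items():
        for dst in targets:
            reversed_nfa.setdefault((dst, symbol), set()).add(src)
    return reversed_nfa
```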
Subset construction method applied to an e-NFA
Convert the following \({\varepsilon}\)-NFA to an equivalent DFA.
Take extra care with the epsilon transition between 1 and 3. Every time you make it to 1, you "slide" to 3 too, so you are in two places at the same time. You may informally think of these two states
as one state "1-3".
Technical cases with RegEx's
Show that
1. \(1^*\emptyset=\emptyset\)
(Concatenating the empty RegEx \(\emptyset\) to any RegEx yields the empty RegEx again)
2. \(\emptyset^*=\{{\varepsilon}\}\)
You may find it helpful to construct the corresponding \({\varepsilon}\)-NFAs.
Check the lecture slides.
This seems impossible at a first look...!
Let \(\Sigma=\{0,1\}\) and let
\[D = \{w\mid w \text{ contains an equal number of the occurrences of the substrings 01 and 10}\}.\]
As an example, \(101\in D\) but \(1010\not\in D\).
Show that \(D\) is a regular language (by producing an NFA for it, or otherwise).
Does this hold for \(\{w\mid w \text{ contains an equal number of 0's and 1's}\}\)?
Can you see why? What is the difference!?
How about the language \(\{w\mid w \text{ contains a non-equal number of 0's and 1's}\}\)?
We will see next week that the language defined by \(a^n b^n\) for \(n\geq 0\) is not regular. The language \(D\) seems to defy this and suggests that if we set \(a=01\) and \(b=10\) then we can
build an NFA for it... the trick is that a string like \(101\) is in \(D\) because overlap is allowed, while this particular string cannot be decoded using \(a=01\) and \(b=10\).
Suppose we have a programming language where comments are delimited by @= and =@. Let \(L\) be the language of all valid delimited comment strings, i.e. a member of \(L\) must begin with @= and end
with =@.
Use the page at https://regex101.com/r/Ez1kqp/3 and try the following RegEx searches:
Programming Mathematical notation Interpretation
@ @ Just the symbol @
@= @= Just the string @=
. \(\Sigma\) Any symbol from the alphabet
.* \(\Sigma^*\) Any string over the alphabet
@.* @\(\Sigma^*\) ............................
@.*|.*@ @\(\Sigma^*\)+\(\Sigma^*\)@ ............................
@.*@ @\(\Sigma^*\)@ ............................
@=.*=@ @=\(\Sigma^*\)=@ ............................
Interpret the results for the last 4 searches. Try alternative searches to develop your understanding of how RegEx is used in practice. What is the correct RegEx for \(L\)?
Extend your class for simulating DFAs and NFAs from the last lab to convert a given NFA into an equivalent DFA or to a RegEx. | {"url":"https://github.coventry.ac.uk/pages/ab3735/5002CEM/labs/hints/lab3/","timestamp":"2024-11-10T11:22:41Z","content_type":"text/html","content_length":"102342","record_id":"<urn:uuid:780cafb6-007c-4cb4-8a58-c7444efb8142>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00559.warc.gz"} |
USCF Midstream Downside Deviation | UMI NYSE ARCA
UMI Etf USD 46.49 0.46 0.98%
USCF Midstream downside-deviation technical analysis lookup allows you to check this and other technical indicators for USCF Midstream Energy or any other equities. You can select from a set of
available technical indicators by clicking on the link to the right. Please note, not all equities are covered by this module due to inconsistencies in global equity categorizations and data
normalization techniques. Please also check Equity Screeners to view more equity screening tools.
USCF Midstream Energy has current Downside Deviation of 0.8701. Downside Deviation (or DD) is measured by target semi-deviation (the square root of target semi-variance) and is termed downside risk.
It is expressed in percentages and therefore allows for rankings in the same way as standard deviation. An intuitive way to view the downside risk is the annualized standard deviation of returns
below the target.
Downside Deviation = SQRT(DV) = 0.8701
where SQRT denotes the square root and DV is the downside variance of returns over the selected period.
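In code, a downside deviation of this form can be sketched as below; the return series is a placeholder, not UMI data, and conventions differ on whether to divide by all observations or only the below-target ones.

```python
import math

def downside_deviation(returns, target=0.0):
    """Square root of the mean squared below-target return (one common convention)."""
    shortfalls = [min(r - target, 0.0) ** 2 for r in returns]
    return math.sqrt(sum(shortfalls) / len(shortfalls))

# Hypothetical periodic returns, in percent
print(downside_deviation([0.4, -1.2, 0.3, -0.9, 1.1]))
```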
USCF Midstream Downside Deviation Peers Comparison
USCF Downside Deviation Relative To Other Indicators
USCF Midstream Energy's rating in downside deviation as compared to similar ETFs, its maximum drawdown relative to similar ETFs, and its ratio of Maximum Drawdown to Downside Deviation are currently under evaluation.
Downside deviation is the square root of the probability-weighted squared below-target returns. The squaring of the below-target returns penalizes failures at an exponential rate, which is consistent with observations made on the behavior of most private investors.
| {"url":"https://www.macroaxis.com/invest/technicalIndicator/UMI/Downside-Deviation","timestamp":"2024-11-02T15:22:37Z","content_type":"text/html","content_length":"237260","record_id":"<urn:uuid:d3c39129-08fa-464a-bc59-546e7e7e1ab5>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00636.warc.gz"}
The Stacks project
Lemma 31.21.8. Let $i : Z \to Y$ and $j : Y \to X$ be immersions of schemes. Assume that the sequence
\[ 0 \to i^*\mathcal{C}_{Y/X} \to \mathcal{C}_{Z/X} \to \mathcal{C}_{Z/Y} \to 0 \]
of Morphisms, Lemma 29.31.5 is exact and locally split.
1. If $j \circ i$ is a quasi-regular immersion, so is $i$.
2. If $j \circ i$ is a $H_1$-regular immersion, so is $i$.
3. If both $j$ and $j \circ i$ are Koszul-regular immersions, so is $i$.
| {"url":"https://stacks.math.columbia.edu/tag/068Z","timestamp":"2024-11-14T20:51:42Z","content_type":"text/html","content_length":"16360","record_id":"<urn:uuid:ef9b4f3f-525b-4f11-bf64-4e977ab7ea14>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00643.warc.gz"}
Eureka Math Grade 7 Module 6 Mid Module Assessment Answer Key
Engage NY Eureka Math 7th Grade Module 6 Mid Module Assessment Answer Key
Eureka Math Grade 7 Module 6 Mid Module Assessment Task Answer Key
Question 1.
In each problem, set up and solve an equation for the unknown angles.
a. Four lines meet at a point. Find the measures m° and n°.
b. Two lines meet at the vertex of two rays. Find the measures m° and n°.
c. Two lines meet at a point that is the vertex of two rays. Find the measures m° and n°.
d. Three rays have a common vertex on a line. Find the measures m° and n°.
a. n° = 90° , vertical angles
25° + (90°) + 40° + m° = 180°
155° + m° = 180°
155° – 155° + m° = 180° – 155°
m° = 25°
b. 50° + 90° + n° = 180°
140° + n° = 180°
140° – 140° + n° = 180° – 140°
n° = 40°
m° + 50° = 90°
m° + 50° – 50° = 90° – 50°
m° = 40°
c. m° + 52° = 90°
m° + 52° – 52° = 90° – 52°
m° = 38°
40° + 52° + (38°) + n° = 180°
130° + n° = 180°
130° – 130° + n° = 180° – 130°
n° = 50°
d. n° + 62° = 90°
n° + 62° – 62° = 90° – 62°
n° = 28°
m° + 62° + (28°) + 27° = 180°
m° + 117° = 180
m° + 117° – 117° = 180° – 117°
m° = 63°
Question 2.
Use tools to construct a triangle based on the following given conditions.
a. If possible, use your tools to construct a triangle with angle measurements 20°, 55°, and 105°, and leave evidence of your construction. If it is not possible, explain why.
b. Is it possible to construct two different triangles that have the same angle measurements? If it is, construct examples that demonstrate this condition, and label all angle and length
measurements. If it is not possible, explain why.
a. Solutions will vary. An example of a correctly constructed triangle is shown here.
b. Solutions will vary; refer to the rubric.
Question 3.
In each of the following problems, two triangles are given. For each: (1) state if there are sufficient or insufficient conditions to show the triangles are identical, and (2) explain your reasoning.
a. The triangles are identical by the two angles and included side condition. The marked side is between the given angles.
△ABC ↔ △YXZ
b. There is insufficient evidence to determine that the triangles are identical. In △DEF, the marked side is between the marked angles, but in △ABC, the marked side is not between the marked angles.
c. The triangles are identical by the two sides and included angle condition. △DEF ↔ △GIH
d. The triangles are not identical. In △ABC , the marked side is opposite ∠B . In △WXY , the marked side is opposite ∠W . ∠B and ∠W are not necessarily equal in measure.
Question 4.
Use tools to draw rectangle ABCD with AB = 2 cm and BC = 6 cm. Label all vertices and measurements.
Question 5.
The measures of two complementary angles have a ratio of 3:7. Set up and solve an equation to determine the measurements of the two angles.
3x + 7x = 90
10x = 90
(\(\frac{1}{10}\))10x = (\(\frac{1}{10}\))90
x = 9
Measure of Angle 1: 3(9) = 27 . The measure of the first angle is 27° .
Measure of Angle 2: 7(9) = 63 . The measure of the second angle is 63° .
Question 6.
The measure of the supplement of an angle is 12° less than the measure of the angle. Set up and solve an equation to determine the measurements of the angle and its supplement.
Let y° be the number of degrees in the angle.
y + (y – 12) = 180
2y – 12 = 180
2y – 12 + 12 = 180 + 12
2y = 192
(\(\frac{1}{2}\))2y = (\(\frac{1}{2}\))192
y = 96
Measure of the angle: 96°
Measure of its supplement: (96)° – 12° = 84°
Question 7.
Three angles are at a point. The ratio of two of the angles is 2:3, and the remaining angle is 32° more than the larger of the first two angles. Set up and solve an equation to determine the
measures of all three angles.
2x + 3x + (3x + 32) = 360
8x + 32 = 360
8x + 32 – 32 = 360 – 32
8x = 328
(\(\frac{1}{8}\))8x = (\(\frac{1}{8}\))328
x = 41
Measure of Angle 1: 2(41)° = 82°
Measure of Angle 2: 3(41)° = 123°
Measure of Angle 3: 3(41)° + 32° = 155°
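A quick numeric check of Question 7, outside the worked solution (plain arithmetic only):

```python
x = (360 - 32) / 8            # solve 2x + 3x + (3x + 32) = 360
angles = [2 * x, 3 * x, 3 * x + 32]
print(angles, sum(angles))    # [82.0, 123.0, 155.0] 360.0
```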
Question 8.
Draw a right triangle according to the following conditions, and label the provided information. If it is not possible to draw the triangle according to the conditions, explain why. Include a
description of the kind of figure the current measurements allow. Provide a change to the conditions that makes the drawing feasible.
a. Construct a right triangle ABC so that AB = 3 cm, BC = 4 cm, and CA = 5 cm; the measure of angle B is 90°.
b. Construct triangle DEF so that DE = 4 cm, EF = 5 cm, and FD = 11 cm; the measure of angle D is 50°.
b. It is not possible to draw this triangle because the lengths of the two shorter sides do not sum to be greater than the longest side. In this situation, the total lengths of \(\overline{D E}\) and
\(\overline{E F}\) are less than the length of \(\overline{F D}\); there is no way to arrange \(\overline{D E}\) and \(\overline{E F}\) so that they meet. If they do not meet, there is no arrangement
of three non-collinear vertices of a triangle; therefore, a triangle cannot be formed. I would change \(\overline{E F}\)to 9 cm instead of 5 cm so that the three sides would form a triangle.
| {"url":"https://ccssmathanswers.com/eureka-math-grade-7-module-6-mid-module-assessment/","timestamp":"2024-11-11T22:36:14Z","content_type":"text/html","content_length":"257616","record_id":"<urn:uuid:61ca3568-a12f-4133-9e7b-e25aa1f32a78>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00801.warc.gz"}
Far Eastern Mathematical Journal
[1] J.L. van Hemmen, “Classical Spin-Glass Model”, Phys. Rev. Lett., 49:6 (1982), 409–412.
[2] S.F. Edwards, P.W. Anderson, “Theory of spin glasses”, Phys. F: Metal Phys., 5 (1975), 965–74.
[3] R. Rammal, G. Toulouse, M.A. Virasoro, “Ultrametricity for physicists”, Rev. Mod. Phys., 58 (1986), 765.
[4] J.J. Hop?eld, D.W. Tank, “Neural Computation of Decisions in Optimization Problems”, Biol. Cybern., 52 (1985), 141–152.
[5] R. McEliece, E. Posner, E. Rodemich, S. Venkatesh, “The capacity of the Hop?eld associative memory”, IEEE Transactions on Information Theory, 33:4 (1987), 461–482.
[6] J.L. van Hemmen, “Spin-glass models of a neural network”, Phys. Rev. A, 34:4 (1986), 3435–3445.
[7] G.S. Hartnett, E. Parker, E. Geist, “Replica symmetry breaking in bipartite spin glasses and neural networks”, Phys. Rev. E, 98:2 (2018), 022116.
[8] R. Salakhutdinov, G. Hinton, “Deep Boltzmann machines”, Phys. Rev. E, 5:2 (2009), 448–455.
[9] C. Amoruso, A.K. Hartmann, M.A. Moore, “Determining energy barriers by iterated optimization: The two-dimensional Ising spin glass”, Phys. Rev. B, 73:18 (2006), 184405.
[10] B. Waclaw, Z. Burda, “Counting metastable states of Ising spin glasses on arbitrary graphs”, Phys. Rev. E, 77:4 (2008), 041114.
[11] Z. Burda, A. Krzywicki, O.C. Martin, Z. Tabor, “From simple to complex networks: Inherent structures, barriers, and valleys in the context of spin glasses”, Phys. Rev. E, 73:3 (2006), 036110.
[12] S. Schnabel, W. Janke, “Distribution of metastable states of Ising spin glasses”, Phys. Rev. E, 97:17 (2018), 174204.
[13] M.W. Johnson, M.H. Amin, S. Gildert, T. Lanting, F. Hamze, N. Dickson, R. Harris, A.J. Berkley, J. Johansson, P. Bunyk, E.M. Chapple, C. Enderud, J.P. Hilton, K. Karimi, E. Ladizinsky, N.
Ladizinsky, T. Oh, I. Perminov, C. Rich, M. C. Thom, E. Tolkacheva, C.J. Truncik, S. Uchaikin, J. Wang, B. Wilson, G. Rose, “Quantum annealing with manufactured spins”, Nature, 473:7346 (2011),
[14] P.I. Bunyk, E.M. Hoskinson, M.W. Johnson, E. Tolkacheva, F. Altomare, A.J. Berkley, R. Harris, J.P. Hilton, T. Lanting, A.J. Przybysz, J. Whittaker, "Architectural considerations in the design of a superconducting quantum annealing processor", IEEE Transactions on Applied Superconductivity, 24:4 (2014), 1–20.
[15] D. Perera, F. Hamze, J. Raymond, M. Weigel, H. Katzgraber, “Computational hardness of spin-glass problems with tile-planted solutions”, Phys. Rev. E, 101:2 (2020), 023316.
[16] I. Hen, “Equation Planting: A Tool for Benchmarking Ising Machines”, Phys. Rev. Applied, 12:1 (2019), 011003.
[17] D. Pierangeli, M. Rafayelyan, C. Conti, S. Gigan, “Scalable Spin-Glass Optical Simulator”, Phys. Rev. Applied, 15:3 (2019), 034087.
[18] T. Kadowaki, H. Nishimori, "Quantum annealing in the transverse Ising model", Phys. Rev. E, 58:5 (1998), 5355–5363.
[19] G.E. Santoro, R. Martonak, E. Tosatti, R. Car, "Theory of Quantum Annealing of an Ising Spin Glass", Science, 295:5564 (2002), 2427–2430.
[20] J. Machta, “Population annealing with weighted averages: A Monte Carlo method for rough free-energy landscapes”, Phys. Rev. E, 82:2 (2010), 026704.
[21] J. Houdayer, O.C. Martin, “Hierarchical approach for computing spin glass ground states”, Phys. Rev. E, 64:5 (2001), 056704.
[22] W. Wang, J. Machta, H.G. Katzgraber, “Population annealing: Theory and application in spin glasses”, Phys. Rev. E, 92:6 (2015), 063307.
[23] D.P. Morelo, A. Ramirez-Pastor, F. Rom, “Ground-state energy and entropy of the two-dimensional Edwards–Anderson”, Physica A: Statistical Mechanics and its Applications, 391 (2011).
[24] N. Hatano, “Evidence for the double degeneracy of the ground state in the three-dimensional ±J spin glass”, Phys. Rev. B, 66 (2002).
[25] A. Galluccio, “New Algorithm for the Ising Problem: Partition Function for Finite Lattice Graphs”, Phys. Rev. Lett., 84:26 (2000), 5924–5927.
[26] A.K. Hartmann, H. Rieger, New Optimization Algorithms in Physics, Wiley-VCH, Berlin, 2004.
[27] A.K. Hartmann, “Cluster-exact approximation of spin glass ground states”, Physica A, 224:480 (1996).
[28] A.K. Hartmann, “Ground States of Two-Dimensional Ising Spin Glasses: Fast Algorithms, Recent Developments and a Ferromagnet-Spin Glass Mixture”, J Stat Phys, 144:519 (2011).
[29] G. Pardella, F. Liers, “Exact Ground States of Large Two-Dimensional Planar Ising Spin Glasses”, Physical Review. E, Statistical, nonlinear, and soft matter physics, 78 (2011), 056705.
[30] B. Kaufman, "Crystal Statistics. II. Partition Function Evaluated by Spinor Analysis", Phys. Rev., 76 (1949), 1232–1243.
[31] M. Suzuki, “Transfer-matrix method and Monte Carlo simulation in quantum spin systems”, Phys. Rev. B., 31:5 (1985), 2957–2965.
[32] M. Suzuki, "Generalized Trotter's Formula and Systematic Approximants of Exponential Operators and Inner Derivations with Applications to Many-Body Problems", Commun. Math. Phys., 51 (1976), | {"url":"http://iam.khv.ru/article_eng.php?art=453&iss=38","timestamp":"2024-11-10T14:37:28Z","content_type":"text/html","content_length":"8634","record_id":"<urn:uuid:f5a4beb7-999d-4b81-8371-98d23241dfdc>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00384.warc.gz"}
Mathematics On Support Vector Machines - machinelearningconsulting
Support vector machines (SVMs) are a class of supervised learning models used for classification and regression problems in machine learning. This powerful tool is known for its ability to classify
data into different categories using hyperplanes in multidimensional space. To appreciate the full potential of SVM, it is important to learn the mathematics behind the technology, trace how these
models determine the optimal separator, explore different kernel functions, and deal with complex data structures.
Understanding Hyperplanes And Classification
Support vector machines use hyperplanes to classify data points in a space defined by the features of a data set. A hyperplane is a flat affine subspace that has one less dimension than the
surrounding space. For a binary classification problem in two-dimensional space, a hyperplane is a line that divides the plane into two parts, each corresponding to a different class. In higher
dimensions, the hyperplane becomes a flat affine subspace that divides the space into two half-spaces.
Mathematically, a hyperplane in n-dimensional space is defined by the equation "w*x + b = 0", where "w" is a weight vector perpendicular to the hyperplane, "x" is the feature vector of a data point, and "b" is the offset term. The vector "w" defines the orientation of the hyperplane, while "b" shifts the hyperplane in space.
The goal of SVM is to determine the hyperplane that best separates data classes with maximum margin. The field is the closest distance from any of the data points to the hyperplane. Increasing this
margin yields the most reliable classification, helping the learned decision boundary generalize well to new data points. The points that lie exactly on the margin boundaries are called
support vectors, and they are key because they are the data points closest to the boundary and hence the ones most likely to be misclassified.
The distance from a data point to the hyperplane is given by the formula `|w*x + b| / ||w||`, and for classification purposes, the sign of `w*x + b` indicates the side of the
hyperplane on which the point lies. This means that for a point `x`, if `w*x + b > 0`, it belongs to one class, and if `w*x + b < 0`, it belongs to the other.
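In code, the classification rule is just the sign of w*x + b; a minimal sketch with illustrative numbers:

```python
import numpy as np

w = np.array([2.0, -1.0])  # illustrative weight vector
b = -0.5                   # illustrative offset

def classify(x):
    """Return +1 or -1 depending on which side of the hyperplane x lies."""
    return 1 if np.dot(w, x) + b > 0 else -1

print(classify(np.array([1.0, 0.5])))  # 2*1.0 - 1*0.5 - 0.5 = 1.0 > 0, so +1
```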
A Quadratic Programming Approach
To maximize the margin between classes in the SVM model, we transform the problem into a quadratic programming optimization problem. The margin is defined as the distance from the separating
hyperplane to the nearest data points from each class, known as support vectors. Mathematically, this means maximizing `2/||w||`, where `||w||` is the Euclidean norm of the weight vector ‘w’. To
achieve this, it is convenient to reformulate the objective as minimizing `1/2 * ||w||^2`, since minimizing the square of the norm simplifies differentiation.
The optimization must satisfy specific constraints to ensure that all points are properly classified. These restrictions are expressed as:
\( y_i(w \cdot x_i + b) \geq 1 \) for each data point \(x_i\),
where \(y_i\) is the class label of the data point \(x_i\), which is either +1 or -1. This condition ensures that each data point is not only correctly classified, but also lies at a functional margin of at least 1 from the decision boundary, which corresponds to a geometric distance of at least \(1/||w||\).
Given these requirements, the SVM problem turns into finding values for the weight vector \(w\) and the bias term \(b\) that minimize \(1/2 * ||w||^2\), subject to the constraint above. This is a
standard convex optimization problem characterized by a quadratic objective function and linear inequality constraints.
Solutions are usually sought through a dual formulation of the optimization problem. The introduction of Lagrange multipliers, denoted \(\alpha_i\), one for each constraint, allows the primal problem, posed over the single vector of variables \(w\), to be transformed into a dual problem posed over the Lagrange multipliers. The dual problem is to maximize:
\( \sum_i \alpha_i – \frac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j (x_i \cdot x_j) \),
taking into account \( \sum_i \alpha_i y_i = 0 \) and \( \alpha_i \geq 0 \).
This transformation expresses the problem purely in terms of dot products between input data points, which makes the computation efficient. The solution has non-zero values of \(\alpha_i\) only for points on the margin, the so-called support vectors. These support vectors are important because they alone define the optimal hyperplane.
Specialized optimization algorithms such as sequential minimal optimization (SMO) can efficiently solve the dual problem. Once solved, the parameters for \(w\) and \(b\) are computed from the values
of \(\alpha_i\), offering a robust classification rule with maximum class separation in the feature space.
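Once the multipliers are known, prediction needs only the support vectors: f(x) = sum over i of alpha_i * y_i * K(x_i, x) + b. A sketch, with all names illustrative:

```python
import numpy as np

def decision_function(x, support_vectors, alphas, labels, b, kernel):
    """Evaluate f(x) = sum_i alpha_i * y_i * K(x_i, x) + b over the support vectors."""
    return sum(a * y * kernel(sv, x)
               for a, y, sv in zip(alphas, labels, support_vectors)) + b

# With a linear kernel this reduces to w.x + b, where w = sum_i alpha_i * y_i * x_i
linear = lambda u, v: float(np.dot(u, v))
```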
Kernel Functions And Non-Linear Data
Support Vector Machines excel at processing linearly separable data; however, real-world data often exhibit complex, non-linear patterns. To overcome this limitation, SVMs use kernel functions that
allow them to work well with nonlinear data by implicitly transforming the input feature space into a higher dimensional space. This transformation allows the SVM to find a linear separating
hyperplane in this new space that corresponds to a nonlinear boundary in the original input space.
The essence of this transformation is a “kernel trick” that allows the SVM to compute scalar products between data points in the transformed feature space without explicitly mapping the data into
that higher-dimensional space. This is computationally efficient, as the actual transformation can be computationally expensive, especially for high-dimensional mappings.
Common kernel functions include:
1. Linear kernel: Simply computes the dot product between two vectors. It coincides with the dot product in the original input space and is used when the data is approximately linearly separable.
2. Polynomial kernel: Converts the original feature space to polynomials, allowing curved solution boundaries. It is defined as \( (x \cdot x’ + c)^d \), where \(d\) is the degree of the polynomial
and \(c\) is a free parameter that can be adjusted.
3. Radial basis function (RBF) kernel: One of the most popular kernels; it implicitly maps data into an infinite-dimensional feature space. It is defined as \( \exp(-\gamma ||x - x'||^2) \), where \(\gamma\) is a free
parameter that defines the width of the RBF kernel. RBF can model very complex boundaries.
4. Sigmoid kernel: Similar to neural networks, it is defined as \( \tanh(\alpha x \cdot x' + c) \), where \(\alpha\) and \(c\) are kernel parameters. It behaves like a perceptron model used in
neural networks.
The choice of the appropriate kernel function and its parameters is crucial because it directly affects the performance of the model. Approaches such as cross-validation are often used to determine
optimal parameters for a kernel function, minimizing errors and avoiding overfitting.
The kernel trick ensures that the computational cost depends on the size of the dataset rather than the dimensionality of the feature space, making it possible to apply SVM to complex datasets with
non-linear structures. Thus, kernel functions greatly increase the flexibility and applicability of SVMs in a variety of domains, allowing them to adapt to the subtleties of nonlinear data while
maintaining computational efficiency.
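The four kernels above translate directly into code; a minimal sketch, with the free parameters shown as illustrative keyword defaults:

```python
import numpy as np

def linear_kernel(x, z):
    return float(np.dot(x, z))

def polynomial_kernel(x, z, c=1.0, d=3):
    return (float(np.dot(x, z)) + c) ** d

def rbf_kernel(x, z, gamma=0.5):
    return float(np.exp(-gamma * np.sum((x - z) ** 2)))

def sigmoid_kernel(x, z, alpha=0.01, c=0.0):
    return float(np.tanh(alpha * np.dot(x, z) + c))
```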
Handling Multi-Class Classification And Data Complexity
SVMs were originally developed for binary classification problems. However, many real-world applications require multi-class classification. Extensions to the standard binary SVM approach, such as
the One-versus-One and One-versus-All methods, have been developed to handle multiple classes. In One-versus-All, a separate SVM is trained for each class against all other classes, while One-versus-One
builds an SVM for each pair of classes.
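In practice, libraries handle the multi-class reduction for you; scikit-learn's SVC, for instance, applies a one-versus-one scheme internally. A sketch, assuming scikit-learn is installed; X_train, y_train, and X_test are placeholders for your data:

```python
from sklearn.svm import SVC

clf = SVC(kernel="rbf", gamma="scale", C=1.0)  # multi-class input handled internally
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
```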
In addition to classification, SVMs can also perform regression, which aims to predict continuous values. This approach, known as support vector regression (SVR), works by fitting a model that keeps
all training points within a tube (the ε-insensitive band) around the regression function while minimizing prediction error.
Data preprocessing, normalization, and feature scaling are often necessary to mitigate problems such as overfitting and ensure efficient model training. Multidimensional datasets particularly benefit
from dimensionality reduction techniques such as principal component analysis (PCA) to simplify the structure of the data, allowing SVMs to efficiently find decision boundaries. | {"url":"https://machinelearningconsulting.net/mathematics-on-support-vector-machines/","timestamp":"2024-11-04T08:02:44Z","content_type":"text/html","content_length":"53020","record_id":"<urn:uuid:d0060179-f5b0-4023-acbe-56add5612a34>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00345.warc.gz"} |
Free Fall Model
written by Andrew Duffy
This simulation allows students to examine the motion of an object in free fall. The user can control the initial height (0-20 m), set an initial velocity from -20 to 20 m/s, and change the rate of gravitational acceleration from zero to 20 m/s/s (Earth's gravitational acceleration is ~9.8 m/s/s). Students can also launch the ball upward from any point on the line of motion. The free fall is displayed as a motion diagram, while graphs are simultaneously displayed showing position vs. time, velocity vs. time, and acceleration vs. time. Download below.
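For readers who want to check the graphs against the underlying kinematics, here is a minimal numerical sketch (not part of the simulation itself) of constant-acceleration free fall; the parameter values are arbitrary but lie within the ranges quoted above.

```python
import numpy as np

y0, v0, g = 10.0, 5.0, 9.8        # initial height (m), initial velocity (m/s), gravity (m/s^2)
t = np.linspace(0.0, 2.5, 6)      # sample times (s); impact with the ground is not handled

y = y0 + v0 * t - 0.5 * g * t**2  # position vs. time (parabola)
v = v0 - g * t                    # velocity vs. time (straight line)
a = np.full_like(t, -g)           # acceleration vs. time (constant)

for ti, yi, vi, ai in zip(t, y, v, a):
    print(f"t={ti:4.1f} s  y={yi:7.2f} m  v={vi:7.2f} m/s  a={ai:5.1f} m/s/s")
```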
See Annotations Below for an editor-recommended tutorial that further explains how graphs are used to represent free fall motion.
This item was created with Easy Java Simulations (EJS), a modeling tool that allows users without formal programming experience to generate computer models and simulations. To run the
simulation, simply click the Java Archive file below.
Please note that this resource requires at least version 1.5 of Java (JRE).
Subjects: Classical Mechanics - Motion in One Dimension (Acceleration, Gravitational Acceleration, Position & Displacement, Velocity)
Levels: Middle School, High School, Lower Undergraduate, Upper Undergraduate
Resource Types: Instructional Material (Curriculum support, Interactive Simulation); Audio/Visual (Movie/Animation)
Appropriate Courses: Physical Science, Physics First, Conceptual Physics, Algebra-based Physics, AP Physics
Categories: Activity, New teachers
Intended Users:
Access Rights:
Free access
This material is released under a GNU General Public License Version 3 license.
Rights Holder:
Andrew Duffy, Boston University
EJS, Easy Java Simulations, acceleration, free fall, free fall simulation, gravity, position, position vs. time, velocity, velocity vs. time
Record Cloner:
Metadata instance created April 27, 2010 by Mario Belloni
Record Updated:
March 8, 2016 by wee lookang
Last Update when Cataloged:
April 16, 2010
Other Collections:
Next Generation Science Standards
Crosscutting Concepts (K-12)
Patterns (K-12)
• Graphs and charts can be used to identify patterns in data. (6-8)
NGSS Science and Engineering Practices (K-12)
Analyzing and Interpreting Data (K-12)
• Analyzing data in 9–12 builds on K–8 and progresses to introducing more detailed statistical analysis, the comparison of data sets for consistency, and the use of models to generate
and analyze data. (9-12)
□ Analyze data using computational models in order to make valid and reliable scientific claims. (9-12)
Developing and Using Models (K-12)
• Modeling in 6–8 builds on K–5 and progresses to developing, using and revising models to describe, test, and predict more abstract phenomena and design systems. (6-8)
□ Develop and use a model to describe phenomena. (6-8)
• Modeling in 9–12 builds on K–8 and progresses to using, synthesizing, and developing models to predict and show relationships among variables between systems and their components in
the natural and designed worlds. (9-12)
□ Use a model to provide mechanistic accounts of phenomena. (9-12)
Using Mathematics and Computational Thinking (5-12)
• Mathematical and computational thinking at the 9–12 level builds on K–8 and progresses to using algebraic thinking and analysis, a range of linear and nonlinear functions including
trigonometric functions, exponentials and logarithms, and computational tools for statistical analysis to analyze, represent, and model data. Simple computational simulations are
created and used based on mathematical models of basic assumptions. (9-12)
□ Create or revise a simulation of a phenomenon, designed device, process, or system. (9-12)
□ Use mathematical or computational representations of phenomena to describe explanations. (9-12)
NGSS Nature of Science Standards (K-12)
Analyzing and Interpreting Data (K-12)
• Analyzing data in 9–12 builds on K–8 and progresses to introducing more detailed statistical analysis, the comparison of data sets for consistency, and the use of models to generate
and analyze data. (9-12)
Developing and Using Models (K-12)
• Modeling in 6–8 builds on K–5 and progresses to developing, using and revising models to describe, test, and predict more abstract phenomena and design systems. (6-8)
• Modeling in 9–12 builds on K–8 and progresses to using, synthesizing, and developing models to predict and show relationships among variables between systems and their components in
the natural and designed worlds. (9-12)
Using Mathematics and Computational Thinking (5-12)
• Mathematical and computational thinking at the 9–12 level builds on K–8 and progresses to using algebraic thinking and analysis, a range of linear and nonlinear functions including
trigonometric functions, exponentials and logarithms, and computational tools for statistical analysis to analyze, represent, and model data. Simple computational simulations are
created and used based on mathematical models of basic assumptions. (9-12)
AAAS Benchmark Alignments (2008 Version)
4. The Physical Setting
4B. The Earth
• 6-8: 4B/M3. Everything on or anywhere near the earth is pulled toward the earth's center by gravitational force.
4G. Forces of Nature
• 9-12: 4G/H1. Gravitational force is an attraction between masses. The strength of the force is proportional to the masses and weakens rapidly with increasing distance between them.
11. Common Themes
11B. Models
• 6-8: 11B/M1. Models are often used to think about processes that happen too slowly, too quickly, or on too small a scale to observe directly. They are also used for processes that are
too vast, too complex, or too dangerous to study.
• 6-8: 11B/M2. Mathematical models can be displayed on a computer and then modified to see what happens.
Common Core State Standards for Mathematics Alignments
Standards for Mathematical Practice (K-12)
MP.4 Model with mathematics.
High School — Algebra (9-12)
Creating Equations (9-12)
• A-CED.1 Create equations and inequalities in one variable and use them to solve problems. Include equations arising from linear and quadratic functions, and simple rational and
exponential functions.
Reasoning with Equations and Inequalities (9-12)
• A-REI.3 Solve linear equations and inequalities in one variable, including equations with coefficients represented by letters.
High School — Functions (9-12)
Linear, Quadratic, and Exponential Models (9-12)
• F-LE.1.b Recognize situations in which one quantity changes at a constant rate per unit interval relative to another.
(Editor: Caroline Hall)
The Physics Front editors recommend supplementing the Free Fall Model simulation with this interactive tutorial by Tom Henderson, developer of The Physics Classroom web site. It will help
students gain insight into why the v/t and p/t graphs of free fall motion appear as they do.
The Physics Classroom: Representing Free Fall by Graphs (html)
This resource is part of a Physics Front Topical Unit.
Kinematics: The Physics of Motion
Unit Title:
Modeling Motion
We like the simplicity of this model for introducing free fall and gravitational acceleration. Students can control the initial height, set initial velocity from -20 to 20 m/s and change
the gravitational constant. The free fall is displayed as a motion diagram, while graphs are simultaneously displayed showing position, velocity, and acceleration vs. time.
Link to Unit:
<a href="https://www.compadre.org/precollege/items/detail.cfm?ID=10001">Duffy, Andrew. "Free Fall Model."</a>
A. Duffy, Computer Program FREE FALL MODEL (2010), WWW Document, (https://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639).
A. Duffy, Computer Program FREE FALL MODEL (2010), <https://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639>.
Duffy, A. (2010). Free Fall Model [Computer software]. Retrieved November 12, 2024, from https://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639
Duffy, Andrew. "Free Fall Model." https://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639 (accessed 12 November 2024).
Duffy, Andrew. Free Fall Model. Computer software. 2010. Java (JRE) 1.5. 12 Nov. 2024 <https://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639>.
@misc{ Author = "Andrew Duffy", Title = {Free Fall Model}, Month = {April}, Year = {2010} }
%A Andrew Duffy %T Free Fall Model %D April 16, 2010 %U https://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639 %O application/java
%0 Computer Program %A Duffy, Andrew %D April 16, 2010 %T Free Fall Model %8 April 16, 2010 %U https://www.compadre.org/Repository/document/ServeFile.cfm?ID=10001&DocID=1639
ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications.
Citation Source Information
The AIP Style presented is based on information from the AIP Style Manual.
The APA Style presented is based on information from APA Style.org: Electronic References.
The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
The MLA Style presented is based on information from the MLA FAQ. | {"url":"https://www.compadre.org/precollege/items/detail.cfm?ID=10001","timestamp":"2024-11-12T06:12:10Z","content_type":"application/xhtml+xml","content_length":"61309","record_id":"<urn:uuid:cc339f29-cd8e-4d15-9b4b-0584c9664984>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00237.warc.gz"} |
On the generalised Springer correspondence for groups of type $E_8$
We complete the determination of the generalised Springer correspondence for connected reductive algebraic groups, by proving a conjecture of Lusztig on the last open cases, which occur for groups of type $E_8$.
1. Introduction
Let $G$ be a connected reductive algebraic group over an algebraic closure of the finite field with $p$ elements, where $p$ is a prime. Let $W$ be the Weyl group of $G$ and $\mathcal{N}_G$ be the set of all pairs $(\mathcal{O}, \mathcal{E})$ where $\mathcal{O}$ is a unipotent conjugacy class of $G$ and $\mathcal{E}$ is an irreducible local system on $\mathcal{O}$ (taken up to isomorphism) which is equivariant for the conjugation action of $G$. The Springer correspondence (originally defined in Reference 39 for $p$ not too small; for arbitrary $p$ see Reference 15) defines an injective map $\operatorname{Irr}(W) \hookrightarrow \mathcal{N}_G$ which plays a crucial role, for example, in the determination of the values of the Deligne–Lusztig Green functions of Reference 5; for recent surveys see Reference 7, Chap. 13 and Reference 10, §2.8. However, the map is not surjective in general. In order to understand the missing pairs in $\mathcal{N}_G$, Lusztig Reference 17 developed a generalisation of Springer's correspondence. This in turn constitutes a substantial part of the general problem of computing the complete character tables of finite groups of Lie type.
With very few exceptions, the problem of determining explicitly the generalised Springer correspondence has been solved in Reference 17, Reference 32, Reference 38 (see also the further references there). The remaining exceptions occur for certain groups of exceptional type in small characteristic. Recently, Lusztig (Reference 31) settled one of these cases and stated a conjecture concerning the last open cases, which occur in type $E_8$. It is the purpose of this paper to prove that conjecture, thus completing the determination of the generalised Springer correspondence in all cases.
The paper is organised as follows. In Section 2, we recall the definition of the generalised Springer correspondence, due to Reference 17. In Section 3, we explain two parametrisations of unipotent
characters and unipotent character sheaves, one in terms of Lusztig’s Fourier transform matrices associated to families in the Weyl group, and the other one in terms of Harish-Chandra series, for
simple groups of adjoint type with a split rational structure. In the last two sections, we focus on the specific case where $G$ is simple of type $E_8$ in small characteristic: Section 4 contains the description of the generalised Springer correspondence for this group, so that we can formulate Lusztig's conjecture on the last open cases as indicated above. Finally, the proof of this conjecture is given in Section 5. It is based on considering the Hecke algebra associated to the finite group $G^F$ (where $F$ is a Frobenius map defining an $\mathbb{F}_q$-rational structure, $q = p^n$ for some $n$) and its natural basis, exploiting a well-known formula relating characters of this Hecke algebra with the unipotent principal series characters of $G^F$. The fact that this formula carries subtle geometric information has been used before, for example in Reference 28.
As soon as a prime $p$ is fixed in a given setting, we denote by $k$ an algebraic closure of the finite field with $p$ elements; furthermore, we tacitly assume to have fixed another prime $\ell \neq p$, as well as an algebraic closure $\overline{\mathbb{Q}}_\ell$ of the field of $\ell$-adic numbers. It will be convenient to assume the existence of an isomorphism $\overline{\mathbb{Q}}_\ell \cong \mathbb{C}$ and to fix such an isomorphism once and for all (Footnote 1), so that we can speak of "complex" conjugation or absolute values for elements of $\overline{\mathbb{Q}}_\ell$ using this isomorphism. In this way, we will also identify the rational numbers or the real numbers as subsets of $\overline{\mathbb{Q}}_\ell$ and just write $\mathbb{Q} \subseteq \overline{\mathbb{Q}}_\ell$, $\mathbb{R} \subseteq \overline{\mathbb{Q}}_\ell$. In several places in this paper, we will need to fix a square root of $q$ (or of powers of $q$) in $\overline{\mathbb{Q}}_\ell$, so we do this right away.
Strictly speaking, the existence of such an isomorphism requires the axiom of choice. However, what we really need is an isomorphism between the algebraic closures of $\mathbb{Q}$ inside $\overline{\mathbb{Q}}_\ell$ and $\mathbb{C}$, and such an isomorphism is known to exist without reference to the axiom of choice, cf. Reference 4, Rem. 1.2.11.
For any finite group $\Gamma$, we denote by $\operatorname{CF}(\Gamma)$ the set of class functions on $\Gamma$ and by $\operatorname{Irr}(\Gamma) \subseteq \operatorname{CF}(\Gamma)$ the subset consisting of irreducible characters of $\Gamma$ over $\overline{\mathbb{Q}}_\ell$. Thus, $\operatorname{Irr}(\Gamma)$ is an orthonormal basis of $\operatorname{CF}(\Gamma)$ with respect to the scalar product $\langle f, f' \rangle = |\Gamma|^{-1} \sum_{g \in \Gamma} f(g)\,\overline{f'(g)}$.
Now let $p$ be a prime and $G$ be a connected reductive group over $k$, defined over the finite subfield $\mathbb{F}_q \subseteq k$ where $q = p^n$ for some $n \geq 1$, with corresponding Frobenius map $F \colon G \to G$. With respect to this $\mathbb{F}_q$-structure, we fix a maximally split torus $T_0 \subseteq G$ and an $F$-stable Borel subgroup $B_0 \subseteq G$ such that $T_0 \subseteq B_0$. Let $\Phi$ be the set of roots of $G$ with respect to $T_0$, and let $\Pi \subseteq \Phi$ be the subset of simple roots determined by $B_0$. We denote by $W$ the Weyl group of $G$ (relative to $T_0$). Given any closed subgroup $H \subseteq G$ (including the case where $H = G$), we denote by $H^\circ$ its identity component, by $Z(H)$ the centre of $H$ and by $H_{\mathrm{uni}} \subseteq H$ the (closed) subvariety consisting of all unipotent elements of $H$. If $H$ is $F$-stable, we set $H^F := \{ h \in H \mid F(h) = h \}$, a finite subgroup of $H$. In particular, $G^F$ is the finite group of Lie type associated to $(G, F)$.
2. The generalised Springer correspondence
The notation is as in Section 1. In this section, we give the definition of the generalised Springer correspondence for , due to Lusztig Reference 17.
We start by briefly introducing some of the most important notions of Lusztig’s theory of character sheaves Reference 24 and the underlying theory of perverse sheaves Reference 1. For the details we
refer to Reference 17, Reference 18–Reference 22. Recall that is a fixed prime. Let be the bounded derived category of constructible -sheaves on in the sense of Beilinson–Bernstein–Deligne Reference
1. To each and each is associated the th cohomology sheaf , whose stalks (for ) are finite-dimensional -vector spaces. The support of such a is defined as
(where the bar stands for the Zariski closure, here in ). We set
Let be the full subcategory of consisting of the perverse sheaves on ; the category is abelian. Assume that is a locally closed subvariety of and that is an irreducible -local system on . (We will
just speak of a “local system” when we mean a -local system from now on.) There is a unique extension of to , namely, the intersection cohomology complex (where denotes shift), due to
Deligne–Goresky–MacPherson (see Reference 12, Reference 1). The following notation will be convenient: For and as above, and for any closed subvariety such that , we denote by
the extension of to , by on . Now let us bring the Frobenius endomorphism into the picture. Let be its inverse image functor. Let be such that in , and let be an isomorphism. For each and , induces
linear maps
Since is non-zero only for finitely many , one can define a characteristic function Reference 19, 8.4
In Reference 18, §2, Lusztig defines the character sheaves on as certain simple objects of which are equivariant for the conjugation action of on itself. We denote by a set of representatives for the
isomorphism classes of character sheaves on . An important subset of is the one consisting of cuspidal character sheaves, as defined in Reference 18, 3.10. We denote by a set of representatives for
the isomorphism classes of cuspidal character sheaves on . The inverse image functor may also be regarded as a functor , and we have . Thus, we can consider the subset
consisting of the -stable character sheaves in . Of particular relevance for our purposes will be the set of (representatives for the isomorphism classes of) unipotent character sheaves. These are,
by definition, the simple constituents of a perverse cohomology sheaf for some and some , where is as defined in Reference 18, 2.4, with respect to the trivial local system on the torus .
Let be the set of all pairs where is a unipotent conjugacy class and is an irreducible local system on (up to isomorphism) which is equivariant for the conjugation action of on . Let us consider a
triple consisting of a Levi complement of some parabolic subgroup of , a unipotent class of and an irreducible local system on (up to isomorphism) which is equivariant for the conjugation action of
on ; furthermore, assume that is a cuspidal pair for in the sense of Reference 17, 2.4, where denotes the inverse image of under the canonical map . The group acts naturally on the set of all such
triples by means of -conjugacy:
where is the inner automorphism given by conjugation with . Denote by the set of equivalence classes of triples as above under this action; however, by a slight abuse of notation, we will still just
write rather than or the like. We typically write for elements of and for elements of . By Reference 17, §6, any gives rise to a certain perverse sheaf , whose endomorphism algebra is isomorphic to
the group algebra of the relative Weyl group , see Reference 17, Thm. 9.2. Thus, the isomorphism classes of the simple direct summands of are naturally parametrised by , so we have
where is the simple direct summand of corresponding to , and where . Then, for any , there exists a unique for which
and the isomorphism class of is uniquely determined by this property among the simple perverse sheaves which are constituents of Reference 22, 24.1. So for each , the above procedure gives rise to an
injective map
Conversely, given any , there exists a unique such that is in the image of the map just defined. So there is an associated surjective map
whose fibres are called the blocks of . For any , the elements in the block are thus parametrised by the irreducible characters of . The collection of the bijections
is called the generalised Springer correspondence. Given a pair and corresponding to under Equation 2.2.3, we will write . Considering the element , map Equation 2.2.3 defines an injection
which is called the (ordinary) Springer correspondence. The problem of determining the generalised Springer correspondence (that is, explicitly describing bijections Equation 2.2.3 for all ) can be
reduced to considering simple algebraic groups of simply connected type, thus can be approached by means of a case-by-case analysis. This has been accomplished for almost all such , thanks to the
work of Lusztig Reference 17, Lusztig–Spaltenstein Reference 32, Spaltenstein Reference 38 (see also the references there for earlier results concerning the ordinary Springer correspondence), and
again Lusztig Reference 31, the only remaining open problems occur for of type in characteristic (for which a conjecture is made in Reference 31, §6). In particular, the ordinary Springer
correspondence Equation 2.2.4 is explicitly known in complete generality.
We keep the setting of 2.2 and consider the Frobenius map . It defines actions on and , given by
Let , be the respective sets of fixed points under these actions where, in terms of the local systems, this is only meant up to isomorphism, and for the triples in in addition only up to -conjugacy.
The map commutes with the action of on , , so it gives rise to a surjective map . Furthermore, the generalised Springer correspondence Equation 2.2.3 induces bijections
(Here, we denote by the subset consisting of all irreducible characters of which are invariant under the automorphism of induced by .) Let and , and assume that is the corresponding element of .
Choosing an isomorphism which induces a map of finite order at the stalk of at any element of allows the definition of a unique isomorphism (see Reference 22, 24.2 and also 3.11). We set
We then have
so we can define an isomorphism by the requirement that is equal to the isomorphism induced by . It is shown in Reference 22, (24.2.4) that for any , the induced map on the stalk of at is of finite
order. Now consider the two functions
defined by
for . Both and are invariant under the conjugation action of on .
Theorem 2.4 (Lusztig Reference 22, §24).
In the setting of 2.3, the following hold.
The functions , , form a basis of the vector space consisting of all functions which are invariant under -conjugacy.
There is a system of equations
for some uniquely determined .
See Reference 22, (24.2.7) and (24.2.9). Note that the restrictions Reference 22, (23.0.1) on the characteristic of can be removed, thanks to the remarks in Reference 28, 3.10.
As an immediate consequence of Theorem 2.4 (and of 2.3), we see that:
If , then implies and .
If belong to different blocks, we have .
Let us fix any total order on such that for , we have
(Note that the latter defines a partial order on the set of unipotent classes of .) Then the matrix has upper unitriangular shape with respect to . In Reference 22, §24, Lusztig provides an algorithm
for computing this matrix , which entirely relies on combinatorial data. This algorithm is implemented in Michel’s development version of CHEVIE Reference 33 and is accessible via the functions
UnipotentClasses and ICCTable. | {"url":"https://www.ams.org/journals/ert/2023-27-26/S1088-4165-2023-00661-9/viewer/","timestamp":"2024-11-03T17:23:05Z","content_type":"text/html","content_length":"1049158","record_id":"<urn:uuid:86edc923-8ca2-4ca7-8a32-d0f66ea1ced6>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00404.warc.gz"} |
Have You Heard Of This Banach-Tarski Paradox?
Can you get your mind around THIS??
The concept is very bizarre and the ever-inquisitive "VSauce" walks us through it. It is hard to get one's mind around this one, but he goes through the details thoroughly, and it involves thinking
about 3-dimensional planes in a new way. One commenter even says:
I lost you after like 6 minutes
Some one else has a similar comment:
17 minutes in and i’m just here like, what is he talking about…
Here is a bit more information from wikipedia on the background of this theorem:
The Banach–Tarski paradox is a theorem in set-theoretic geometry, which states the following: Given a solid ball in 3‑dimensional space, there exists a decomposition of the ball into a finite
number of disjoint subsets, which can then be put back together in a different way to yield two identical copies of the original ball. Indeed, the reassembly process involves only moving the
pieces around and rotating them, without changing their shape. However, the pieces themselves are not "solids" in the usual sense, but infinite scatterings of points. The reconstruction can work
with as few as five pieces.[1]
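For the mathematically inclined, one standard way to state the theorem precisely (a paraphrase of the usual textbook formulation, not a quote from the video) is:

```latex
\textbf{Theorem (Banach--Tarski).} Let $B \subset \mathbb{R}^3$ be a closed ball.
Then there exist a partition $B = A_1 \cup \dots \cup A_n$ into pairwise
disjoint sets, isometries $g_1, \dots, g_n$ of $\mathbb{R}^3$, and an index
$1 \le k < n$ such that
\[
  g_1 A_1 \cup \dots \cup g_k A_k
  \quad\text{and}\quad
  g_{k+1} A_{k+1} \cup \dots \cup g_n A_n
\]
are each congruent to $B$. With a suitable decomposition, $n = 5$ pieces suffice.
```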
Let’s find out more about this amazing paradox in this video on page 2
158 Comments
2. The orange and purple points are identical, so just like back tracking they can’t exist.
3. It’s because your though proses in two dimensions
5. Well in the height of it’s use before it’s upgrade, the scientists working on the large hadron collider were finding subatomic particles at such an alarming rate that they would often joke that
the nobel prize will go to the scientist that DOESNT find a new particle. Now they might be exaggerating to some extent (I wasn’t there so I wouldn’t know that), but it still brings me to this
idea: what if subatomic particles are the countable infinite points in real matter. If you could make a collider or another device that was capable of extracting the infinite particles, could you
then reform them and duplicate that matter? If so this means you would have the ability to reform basic elements at the very least (since you’re duplicating atoms you would only duplicate the
element it forms, as far as I figure). Now I am certain this has been thought of before by someone, but I still think this would be an amazing achievement. At the very least we could create
hydrogen atoms infinitely out of one atom, maybe even extending the life of a star out of it (if you could effectively duplicate the hydrogen within the core). It’s all speculation on my end, so
please enlighten me if you can as to whether I am on the right track or not. Oh, and please, no berating me if I am off in any way. I am passionate, not a professional.
6. Mike Soviero Dean Ryan Bright i got 8 mins in and i started to get a headache lmao
11. Nick Lenssen Tyler McCabe
12. It’s because they’re working with infinite sets. The revolutions of the hyper sphere don’t require energy because the points already exist, we’re just identifying them.
If you think about it more like a number line it makes me sense. It’s like I have an infinite line that I cut, both ends still continue infinitely. Extreme over simplification, but I think it
13. Your high school math teacher is wrong. To determine size with countable infinite sets is itself a paradox. The sets are infinite, forever doesn’t have a size because you can always have more.
14. But there are different sizes of infinity. If you take the set of all real numbers and compare it with the set of rational and irrational numbers one is going to be much larger than the other
even though they’re both infinite
15. That’s why I said countable infinity Kaitlin. In the context the statement was made he only compared real numbers.
18. It’s a pretty simple theory. There’s only 3 big words, which are easy to figure out.
19. But at some point you are left with atoms, then sub atomic particles that can not be cut…
20. I would have read the article if it formatted to my phone properly
21. A very rigorous/beautiful way to prove something that is intuitively obvious when you consider any object to be made up of a countably infinite number of points. I wonder how does Planck length
and/or “quantum foam” affect any applicability of this to real world? Can we really have true points, or is there an indivisible space-time volume/string? If so, our random walk across the sphere
will result in repeating arrivals at the same piece and so we will not be able to make 2 from 1.
22. just watch the vsauce episode on it. Michael explains it so poignantly.
23. Watch this! Invert inner reality
25. Energy can be borrowed from quantum foam, but it must be repaid. The more energy borrowed, the quicker it must be repaid.
26. Most interesting, the possibilities are infinite…
27. Two lines of infinite length have no size by definition, both are infinite
28. you lost me at theoretic geometry….
29. Jay Kei ur gonna love this
30. This is the base science for hizenburg compensators
31. Jeremy Cantor this is a good example of something we were talking about a while back; a “paradox” of higher math that I think is really just an inability to consolidate mathematical and semantic
verbal descriptions of the same concept. They are made out to be weaknesses of math and science but I think they are usually just weaknesses in language.
32. I’m going to take a page from Bob Dylan, and not criticize what I can’t understand. But I will present Tom Weston’s paper on the subject here. If the set theory is beyond you, as it is beyond me,
I suggest skipping immediately to the last page. I read about this long ago in an article called “The Crisis in Intuition.” A particularly brilliant friend of mine (who could only explain his
ability to easily handle proofs that I could not possibly hold all of in my mind at one time by saying “I think in smooth white forms…”) said, “It said ‘The Crisis in Intuition, but it should
have said ‘The Crisis in Rigor.”
35. DMT was what made me see the science in spirituality. It made me understand the infinite power of energy. It never started, it never ends.
36. That was fabulous. Ironically, the original link here does the best job wording the issue, essentially that infinity is treated like a number on the number line when it is really a description of
all the numbers on the number line. But then it goes on to use it as a number, run into a bit of nonsense, and then pretends to be shocked by the whole thing.
37. Based off of faulty principles.
38. Prove it and I may consider it.
39. Robert Powell I’ve been wanting to try dmt… I’ve researched and watched the documentaries… I just can’t find it
40. I have a post already stating this theory. I might win a novel peace prize..
41. Casey Klipsch dude this just blew my mind, so cool
42. Matthew Bacinich try and understand this! This is very trippy but quite cool
43. MacKenzie Kummer read this and understand it. It’s quite trippy but seems logical to me
44. in a much bigger picture, this may suggest if you could walk over every surface on our earth, and never retread the same direction, you would never end up back in the same spot, and could never
finish walking the entire world, which would make our world infinite despite being contained in a sphere
45. I also believe it would explain why there is no center of the universe when it explains the hotel theory, an infinite number of rooms with one guest filling every room is never full, guest one is
simply moved to room b and opens up room a for the new guest, hence the center of the universe(or where our universe began, or was born) simply no longer exists, you could never fly a ship to
that place
46. Elena Prado David DaddyMix Prado Toscano
47. Check out this video if you’re interested in more
48. It sounds close to a Siri I heard on Star Trek. It was the basis for how the Replicators work
49. Kyle Perkins yes I did watch the whole thing
50. How does Gödel’s Incompleteness Theorem play into this? It seems as though they may be describing the same paradox from two different positions. What am I missing? Probably a lot, as I am
certainly no mathematician. But with a very elementary understanding of both, it seems as though these are relating to the same paradox. | {"url":"https://www.shockingscience.com/have-you-heard-of-this-banach-tarski-paradox/comment-page-2/","timestamp":"2024-11-14T18:08:08Z","content_type":"text/html","content_length":"144703","record_id":"<urn:uuid:1bb1418d-1b1c-49c8-adbd-1a2e84a22892>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00004.warc.gz"} |
Addition And Multiplication Rules Of Probability Worksheet
Mathematics, especially multiplication, forms the cornerstone of numerous academic disciplines and real-world applications. Yet, for many learners, mastering multiplication can pose a challenge. To address this challenge, educators and parents have embraced a powerful tool: Addition And Multiplication Rules Of Probability Worksheet.
Intro to Addition And Multiplication Rules Of Probability Worksheet
The multiplication rule and the addition rule are used for computing the probability of A and B and the probability of A or B for two given events A B In sampling with replacement each member has
1 The Addition Law As we have already noted the sample space S is the set of all possible outcomes of a given experiment Certain events A and B are subsets of S In the previous block we defined what
was meant by P A P B and their complements in the particular case in which the experiment had equally likely outcomes
Value of Multiplication Practice
Understanding multiplication is essential, laying a solid foundation for advanced mathematical concepts. Addition And Multiplication Rules Of Probability Worksheet offer structured and targeted practice, cultivating a deeper understanding of this fundamental arithmetic operation.
Evolution of Addition And Multiplication Rules Of Probability Worksheet
Learn the addition rule of probability and adding probabilities with example problems and interactive exercises. Experiment: a single 6-sided die is rolled. What is the probability of rolling a 2 or a 5? Possibilities: (1) the number rolled can be a 2; (2) the number rolled can be a 5.
Practice problem 1 (rolling dice): suppose that we are going to roll two fair 6-sided dice. Find the probability that both dice show a 3.
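A quick numeric check of the examples above (an illustrative sketch, not part of the worksheet): for mutually exclusive outcomes the addition rule gives P(2 or 5) = P(2) + P(5), and for the two independent dice the multiplication rule applies.

```python
from fractions import Fraction

p2 = Fraction(1, 6)   # P(rolling a 2)
p5 = Fraction(1, 6)   # P(rolling a 5)
print(p2 + p5)        # 1/3, since the outcomes are mutually exclusive

# Practice problem 1: independent dice, multiplication rule.
print(Fraction(1, 6) * Fraction(1, 6))  # P(both dice show a 3) = 1/36
```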
From standard pen-and-paper exercises to digital interactive formats, Addition And Multiplication Rules Of Probability Worksheet have evolved, catering to diverse learning styles and preferences.
Kinds Of Addition And Multiplication Rules Of Probability Worksheet
Standard Multiplication Sheets
Basic exercises focusing on multiplication tables, helping learners develop a solid arithmetic base.
Word Problem Worksheets
Real-life scenarios integrated into problems, enhancing critical thinking and application skills.
Timed Multiplication Drills
Tests designed to improve speed and accuracy, aiding rapid mental math.
Benefits of Using Addition And Multiplication Rules Of Probability Worksheet
Example 4.4.1: Klaus is trying to choose where to go on vacation. His two choices are A = New Zealand and B = Alaska. Klaus can only afford one vacation. The probability that he chooses A is P(A) = 0.6 and the probability that he chooses B is P(B) = 0.35.
26 customers are eating dinner at a local diner. Of the 26 customers, 20 order coffee, 8 order pie, and 7 order coffee and pie. Using this information, answer each of the following questions.
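The diner question is a textbook use of the general addition rule, P(A or B) = P(A) + P(B) - P(A and B); here is a small sketch (illustrative, not from the worksheet) working it through.

```python
from fractions import Fraction

total = 26
p_coffee = Fraction(20, total)
p_pie = Fraction(8, total)
p_both = Fraction(7, total)

p_either = p_coffee + p_pie - p_both  # subtract the overlap counted twice
print(p_either)       # 21/26 order coffee or pie (or both)
print(1 - p_either)   # 5/26 order neither
```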
Improved Mathematical Abilities
Regular practice hones multiplication proficiency, boosting overall math abilities.
Improved Problem-Solving Abilities
Word problems in worksheets develop analytical thinking and strategy application.
Self-Paced Learning Advantages
Worksheets accommodate individual learning speeds, fostering a comfortable and adaptable learning environment.
How to Create Engaging Addition And Multiplication Rules Of Probability Worksheet
Incorporating Visuals and Colors
Vivid visuals and colors catch attention, making worksheets visually appealing and engaging.
Including Real-Life Scenarios
Relating multiplication to everyday situations adds relevance and practicality to exercises.
Tailoring Worksheets to Different Skill Levels
Customizing worksheets based on varying proficiency levels ensures inclusive learning.
Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games: technology-based resources provide interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Websites and Apps: online platforms offer diverse and accessible multiplication practice, supplementing traditional worksheets.
Customizing Worksheets for Different Learning Styles
Visual Learners: visual aids and diagrams help comprehension for learners inclined toward visual learning.
Auditory Learners: spoken multiplication problems or mnemonics suit students who grasp concepts through auditory methods.
Kinesthetic Learners: hands-on activities and manipulatives support kinesthetic learners in understanding multiplication.
Tips for Effective Implementation in Learning
Consistency in Practice: regular practice strengthens multiplication skills, promoting retention and fluency.
Balancing Repetition and Variety: a mix of repetitive exercises and varied problem formats maintains interest and understanding.
Providing Constructive Feedback: feedback helps identify areas for improvement, encouraging ongoing progress.
Challenges in Multiplication Practice and Solutions
Motivation and Engagement Difficulties: boring drills can lead to disinterest; innovative approaches can reignite motivation.
Overcoming Fear of Math: negative perceptions around math can hinder progress; creating a positive learning atmosphere is essential.
Impact of Addition And Multiplication Rules Of Probability Worksheet on Academic Performance
Studies and Research Findings: research suggests a positive correlation between regular worksheet use and improved math performance.
Addition And Multiplication Rules Of Probability Worksheet serve as versatile tools, promoting mathematical proficiency in learners while accommodating diverse learning styles. From standard drills to interactive online resources, these worksheets not only enhance multiplication skills but also promote critical thinking and problem-solving abilities.
Math: Addition Rules and Multiplication Rules for Probability. Determine whether these events are mutually exclusive: 1. Roll a die: get an even number and get a number less than 3. 2. Roll a die: get a prime number and get an odd number. 3. Roll a die: get a number greater than 3 and get a number less than 3.
FAQs (Frequently Asked Questions).
Are Addition And Multiplication Rules Of Probability Worksheet suitable for all age groups?
Yes, worksheets can be tailored to different age and ability levels, making them versatile for various learners.
How often should students practice using Addition And Multiplication Rules Of Probability Worksheet?
Consistent practice is key. Regular sessions, ideally a few times a week, can yield substantial improvement.
Can worksheets alone improve math skills?
Worksheets are a useful tool but should be supplemented with varied learning approaches for thorough skill development.
Are there online platforms offering free Addition And Multiplication Rules Of Probability Worksheet?
Yes, many educational websites offer free access to a wide variety of Addition And Multiplication Rules Of Probability Worksheet.
How can parents support their children's multiplication practice at home?
Encouraging consistent practice, offering assistance, and creating a positive learning environment are valuable steps. | {"url":"https://crown-darts.com/en/addition-and-multiplication-rules-of-probability-worksheet.html","timestamp":"2024-11-13T21:25:03Z","content_type":"text/html","content_length":"30411","record_id":"<urn:uuid:63a3b4d1-4c6e-44e2-8ed0-fa971bb2a681>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00595.warc.gz"}
Spherical Droplet Deposition—Mechanistic Model
Faculty of Civil Engineering, Mechanics and Petrochemistry, Institute of Chemistry, Warsaw University of Technology, Ignacego Lukasiewicza 17, 09-400 Plock, Poland
Department of Physics and Biophysics, Institute of Biology, Warsaw University of Life Sciences, Nowoursynowska 159, Building 34, 02-776 Warsaw, Poland
Author to whom correspondence should be addressed.
Submission received: 3 February 2021 / Revised: 14 February 2021 / Accepted: 17 February 2021 / Published: 19 February 2021
In the currently existing physical models of wetting a solid substrate by a liquid drop, the contact angle is determined on the basis of the equilibrium of forces acting tangentially to the wetted
surface at any point in the perimeter of the wetted area, ignoring the forces (or their components) acting perpendicular to this area. In the solution shown in the paper, the equilibrium state of
forces acting on a droplet was determined based on the minimum mechanical energy that the droplet achieves in the state of equilibrium. This approach allows one to take into account in the model, in
addition to the forces tangential to the wetted surface, also forces perpendicular to it (also the force of adhesion), moreover, these may be dispersed forces acting on the entire interface, not on a
single point. The correctness of this approach is confirmed by the derived equations concerning the forces acting on the liquid both tangentially and perpendicularly to the wetted surface. The paper
also identifies the areas of solutions in which the obtained equilibrium of forces is stable and areas of unstable equilibrium of forces. The solution is formulated both for isothermal and isochoric
system. Based on the experimental data accessible in the literature, the condition that has to be met by the droplets (and their surroundings) during measurements performed under gravity conditions
was formulated.
1. Introduction
As technology advances, issues related to the wettability of solid surfaces by liquids become key to understanding phenomena occurring in real systems. Initially, they were used to describe the
lubrication of solid surfaces moving relative to each other, and then to coat them with a chemically resistant layer—painting and coating. This second operation was carried out by depositing sprayed
liquid drops on solid surfaces. Currently, the issue of wettability of solid surfaces is the most widely used when developing microfluidic systems and 3D printing methods.
The first physical model describing the state of equilibrium of a spherical droplet deposited on a flat substrate was formulated (in words) by Young [
] in 1805. The derivation was supported by an analysis of the geometrical relationships occurring in the considered case. However, the obtained relation was presented [
] in the form of a mathematical formula, called the Young's equation, in 1911. It correlates stresses tangential to the wetted surface, acting on the line separating wetted and non-wetted areas, with the contact angle. It should be emphasized, however, that the forces and stresses perpendicular to the wetted surface are completely neglected in the derivation of this equation, although this
contradicts the results of Laplace’s theoretical work [
] (i.e., elevated pressure in the phase limited by the convex surface) and the force of liquid–solid adhesion acting on the entire wetted surface [
In 1936, in the case of gas bubbles deposited on a solid substrate, it was found that the contact angle depends on their volume [
]. As a result, the Young model was modified, introducing a term that takes into account the effect of stresses along the three-phase contact line. This line may shrink or stretch depending on the
direction of this stress action described by its sign (positive or negative). Wenzel noticed that the energy of the liquid–solid substrate is proportional to its actual but not geometric surface area
]. In accordance with this conclusion, he introduced to the Young’s equation the roughness coefficient being the ratio of this real surface to the geometric one. He determined the roughness
coefficient based on measurements of the contact angle for solid surfaces differing in texture and dimensions of unevenness. This model was extended by Cassie and Baxter [
] in the case of porous substrates, on whose surfaces the deposited liquid enclosed gas in the pores. Another coefficient was introduced to the Young equation, which modified the value of the contact
angle on a flat substrate to the contact angle that occurs at the inlet to the pores. Unfortunately, the paper does not show relationship between this angle and the contact angle of the entire
droplet deposited on the porous substrate, only the equality of these angles was assumed.
For the case of a liquid drop surrounded by gas, but in a system not affected by external forces, the Laplace equation [
] was solved at the end of the 19th century proving the spherical shape of such a droplet [
]. However, it was not until several years later that it was established that the numerically selected parameter of the solution [
] was the pressure difference between the droplet interior and surrounding gas [
]. As a result, the Laplace–Young equation was formulated correlating this pressure difference with the curvature of the interface. The equation for determining the work of creating such a liquid
drop with a specific radius in an isothermal system was derived from the analysis of the free energy of the liquid–gas system [
Based on the equation describing the internal energy of the molecular multiphase system, Boruvka and Neumann derived an analogous relationship for the macroscopic system in which individual phases
remain continuous [
]. In the obtained equation, in addition to the main thermodynamic parameters, the internal energy of the system also depended on the interface energy, the length of three-phase contact lines, as
well as on point energy sources. However, solving this relationship for a spherical drop deposited on the substrate and surrounded by gas was very difficult due to the need to maintain equal entropy
in all phases. Moreover, the solution for the zero value of the exact differential of the internal energy of the system was identified with the minimum energy of the system, although this is only its
necessary condition.
The application of the free energy of the system to describe the phenomenon [
] significantly facilitated the minimization of such a formulated model because for the isothermal system it was possible to neglect the entropy changes of its components i.e., phases. However, on
the basis of classical thermodynamics, the obtained solution (for zero value of the exact differential of the free energy, i.e., necessary condition) could only concern the case of the thermodynamic
equilibrium of the system [
], preventing discussion of any other cases, and in particular determining the sufficient condition of occurrence of the minimum free energy of the system. This difficulty was eliminated by applying
the principles of non-equilibrium thermodynamics [
] for the system under consideration.
All of the thermodynamic models described [
] provide the same solution, i.e., the Young’s equation modified by the term taking into account the force acting along the three-phase tension line. Only in last paper [
] are the ranges of variability of physicochemical parameters for which the necessary and sufficient conditions for the minimum free energy of the system are met. It is worth noting here that in none
of the thermodynamic models of the drop deposited on the substrate so far has been used, the generalized free energy equation [
] was applied, even though some of the stresses in the liquid closely corresponds to those in elastic materials.
As early as the 1970s, it was proposed to analyze the wettability of solid surfaces by liquids based on molecular thermodynamics simulations [
] using Lenard–Jones interactions [
] inside each of the phases present in the system as well as on the interfaces. Despite a significant increase in the computational capabilities of computers, a significant limitation of these
methods is the number of liquid molecules and solid substrate (currently at around 2 × 10
) [
]. Such a number of water molecules correspond to the volume of a spherical drop 2.14 × 10
μL, i.e., a droplet diameter 80 nm. In addition, simulations are conducted by treating such a small droplet as a two-dimensional object [
] a priori assuming that the contact angle meets Young’s equation, i.e., it does not depend on its shape. Often, the simulation also ends when Rayleigh instabilities appear. All of these
simplifications do not ensure that the drop of liquid has reached a state of equilibrium of forces.
It is noteworthy that the value of stress acting in the three-phase contact line determined experimentally for macroscopic droplets with a volume greater than 0.01 μL [
] is greater than this value determined on the basis of molecular interactions [
] by 5–6 orders of magnitude. Therefore, if the value determined for molecular interactions is taken as the correct one, for macroscopic droplets deposited on the substrate the influence of stress of
the three-phase contact line should be negligible. This in turn means that the effect of droplet volume on the contact angle is still unexplained.
In a number of systems, two contact angles are experimentally observed, at which the deposited drop remains motionless [
]. This means that for each of the angles (advancing and receding) the drop reaches a state of equilibrium of forces, however, the models formulated so far based on conventional thermodynamics do not
indicate this opportunity. Only the model [
] based on non-equilibrium thermodynamics allows such two solutions, one of which is stable and the other unstable. It should be emphasized, however, that such behavior is only possible within a very
narrow range of variability of physicochemical parameters describing interfacial surfaces. This phenomenon is interpreted either by the molecular imperfection of the surface of the substrate [
] or by the existence of conjoining / disjoining pressure in the liquid near the three-phase contact line [
]. This last explanation seems doubtful in view of the need to meet the Laplace condition [
] in the entire volume of the liquid phase, i.e., a constant pressure value in it.
It is noteworthy that the absence of the adhesive force in the currently available physical models makes them practically useless for determining the adhesion, uniformity and durability of coatings
applied on solid substrates. As a result, the need to meet the requirements of industrial applications forces researchers to perform a very large number of experiments. Moreover, the lack of
theoretically justified equations does not give any chance to systematize or generalize the obtained experimental results. A clear example of such difficulties is the widely used physical vapor
deposition (PVD) coatings [
]. In the case of microfluidic systems, in addition to the above-mentioned difficulties, the transfer of substances between successive droplets via the adhesive layer and the influence of the formed
adhesive layer composed of biological substances or microorganisms on the contact angles of the immobilized drops is important [
The aim of the work is to formulate a physical model of the liquid macroscopic droplet behavior deposited on an ideal isotropic substrate in the vicinity of the state of mechanical equilibrium of the
system. All phases present in the system are treated as continuous and the system is not exposed to external force fields.
2. Theory
Before proceeding to formulate a physical model of the phenomenon, it is worth analyzing how it proceeds during an experiment. After contact of the droplet with the substrate, the liquid begins to spread spontaneously over the solid surface. The movement of the liquid causes inertial forces that produce droplet shape oscillations, which are larger the lower the viscosity of the liquid. As a result of viscosity, the oscillations decay in time, and the droplet acquires its equilibrium shape. The phenomenon described in this way indicates that during droplet deposition, the mechanical energy (including surface energy) accumulated in the droplet is converted into work, which is then transformed into heat due to viscous interactions in the liquid.
2.1. Model Development
Assumptions made:
• In the vicinity of the state of equilibrium of forces acting on the droplet, it takes the shape of a sphere segment, and the fluid velocity and its changes become so small that the inertial
forces of the moving fluid and its impact on the surface between fluids due to the so-called dynamic pressure become negligible.
• During droplet deposition on the substrate, the temperature of any of the system components does not change.
• The liquid forming the droplet does not change its volume—the entire system under consideration is an isochoric system. As a result of the assumed isothermal nature of the system, it is possible
to avoid the need to consider the issue of droplet evaporation [
]. On the other hand, the assumption of complete insolubility of the components of both fluid phases allows one to ignore the influence of the Marangoni effect.
Interactions on the spherical interface between fluid phases (the droplet and its surroundings, i.e., gas or liquid) are described by the Laplace–Young equation:

$\Delta P_I = \frac{2 \sigma_{CI}}{R}$ (1)

in which $\Delta P_I$ means the pressure difference between the inside and the surroundings of the droplet, $\sigma_{CI}$ the surface tension at the interface, and $R$ the radius of the sphere segment (Figure 1).
It is noteworthy that Equation (1) somehow converts stresses perpendicular to a curved surface into tangential stresses as well as implying a constant pressure value inside the spherical droplet. In
addition, it indicates how this pressure will automatically change as the droplet spreads.
The pressure present in the liquid cannot be compensated on the surface of a flat substrate, and therefore the liquid will be repelled upwards (Figure 1), trying to reproduce the fully spherical shape of the droplet. The expression describing this force is obtained by multiplying Equation (1) by the area of the wetted surface:
$F_P = \pi r^2 \Delta P_I = \frac{2 \pi r^2 \sigma_{CI}}{R}$ (2)

where $r$ is the radius of the wetted circular area.
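As a rough numerical illustration of Equations (1) and (2) (the values below are assumed for a small water droplet in air and are not taken from the paper):

```python
import numpy as np

sigma_CI = 0.072   # surface tension of water-air, N/m (assumed)
R = 1.0e-3         # radius of the sphere segment, m (assumed)
r = 0.5e-3         # radius of the wetted circle, m (assumed)

dP = 2.0 * sigma_CI / R        # Eq. (1): Laplace-Young pressure jump, Pa
F_P = np.pi * r**2 * dP        # Eq. (2): upward force over the wetted area, N
print(f"dP = {dP:.0f} Pa, F_P = {F_P:.3e} N")
```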
At the same time, the liquid covering the wetted area will be attracted to the substrate by the force of adhesion, described by the equation:

$F_A = -\pi r^2 \varepsilon$ (3)

where $\varepsilon$ describes the adhesive force related to the unit of the wetted surface.
It is noteworthy that this force is perpendicular to the wetted surface and directed downwards. A negative sign was introduced in the equation for formal reasons, so that the unit force of adhesion $\varepsilon$ is expressed by positive values. The magnitude of this force does not have to be equal to a simple multiple of the interaction forces between the individual molecules of the substrate and the liquid, because the distances between the molecules in the liquid and in the solid can be different.
The liquid molecules on the three-phase contact line can interact differently with the molecules inside the liquid phase and with the adjacent phase molecules surrounding the droplet. As a result, a force tangential to the substrate may arise that stretches or shrinks the wetted circumference. Similar to Young’s equation, the direction of this force is parallel to the radius of the wetted area:
$F_T = 2 π r σ_{FI}$ (4)
Parameter $σ_{FI}$ is the force which stretches (positive value) or shrinks (negative value) the circumference of the wetted area, expressed per unit of its length.
Despite its low values [ ], for formal reasons we also take into account the force acting along the boundary of the three phases, i.e., along the three-phase contact line. This force acts perpendicular to the radius of the wetted area and only along the line of contact of the three phases. In line with the suggestions presented in the literature [ ], its value was assumed as constant:
$F_L = \mathrm{const.}$ (5)
It should be noted that the positive value of this force is responsible for increasing the length of the contact lines of the three phases, and its negative value for its contraction.
All the forces specified above are distributed either on the surface ($F_P$, $F_A$) or on the wetted perimeter ($F_T$, $F_L$), so finding their common point of application would be debatable. However, during the spreading of the droplet on the surface, work is performed related to the movement of the liquid along the respective coordinate axes, i.e., along the directions of the acting forces. The differential work performed by the moving fluid can be written as the following sum:
$dW = (F_P + F_A) \, dz + F_T \, dr + F_L \, dλ$ (6)
where $λ$ is the length of the wetted perimeter.
Considering the incompressibility of the liquid, the expression for work formulated in this way can easily be associated with Pascal’s law. The constant liquid volume and the assumed spherical shape of the droplet allow linking together the liquid displacements along the axes, resulting in three equivalent equations:
$dW = F_z \, dz = \left( F_P + F_A + \frac{dr}{dz} F_T + \frac{dλ}{dz} F_L \right) dz$ (7)
$dW = F_r \, dr = \left( \frac{dz}{dr} F_P + \frac{dz}{dr} F_A + F_T + \frac{dλ}{dr} F_L \right) dr$ (8)
$dW = F_θ \, dλ = \left( \frac{dz}{dλ} F_P + \frac{dz}{dλ} F_A + \frac{dr}{dλ} F_T + F_L \right) dλ$ (9)
where $F_z$, $F_r$, $F_θ$ are the net forces acting on the droplet along the $z$, $r$, $θ$ axes, respectively.
The equilibrium of forces acting on a droplet occurs when the net force on it is equal to zero:
$\frac{dW}{dz} = F_z = F_P + F_A + \frac{dr}{dz} F_T + \frac{dλ}{dz} F_L = 0$ (10)
$\frac{dW}{dr} = F_r = \frac{dz}{dr} F_P + \frac{dz}{dr} F_A + F_T + \frac{dλ}{dr} F_L = 0$ (11)
$\frac{dW}{dλ} = F_θ = \frac{dz}{dλ} F_P + \frac{dz}{dλ} F_A + \frac{dr}{dλ} F_T + F_L = 0$ (12)
According to the convention applied in thermodynamics, the change in the value of a state function is described as the difference in its value between the state after the transition and the state
before the transition. If we apply this to isothermal and isochoric conditions (vide assumptions) then the differential change of free energy will be equal to the differential change of the
mechanical energy of the system, and this will be equal to the differential work done in the system. This means that the system will achieve stable equilibrium when the following conditions are met:
$\frac{d^2 W}{dz^2} = \frac{dF_z}{dz} > 0$ (13)
$\frac{d^2 W}{dr^2} = \frac{dF_r}{dr} > 0$ (14)
$\frac{d^2 W}{dλ^2} = \frac{dF_θ}{dλ} > 0$ (15)
If Relations (10)–(12) are met but the above relationships are not, then the droplet will be in a state of unstable equilibrium.
2.2. Model Solution
In the vicinity of the state of equilibrium of forces acting on the droplet, it assumes the shape of a sphere segment. Thus, known geometrical relationships can be used to determine its height:
$z = R (1 - \cos φ)$ (16)
the radius of the wetted area:
$r = R \sin φ$ (17)
and the length of the perimeter of the wetted area:
$λ = 2 π R \sin φ$ (18)
where $R$ is the radius of curvature of the spherical cap and $φ$ is the contact angle.
During deposition, the volume of the droplet does not change, so it can be expressed by the radius of the spherical droplet, $R_π$, which it has just before contact with the substrate. In this way, the relationship between the radius of curvature of an already deposited droplet and that of the one still levitating above the substrate can be written as:
$R = R_π \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3}$ (19)
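For completeness, this relationship follows from a simple volume balance (a short verification added here, based on the standard spherical-cap volume formula): a spherical cap of curvature radius $R$ and contact angle $φ$ has the volume
$V = \frac{π R^3}{3} (1 - \cos φ)^2 (2 + \cos φ)$
while the levitating droplet has the volume $\frac{4}{3} π R_π^3$. Equating the two for an incompressible liquid gives $R^3 (1 - \cos φ)^2 (2 + \cos φ) = 4 R_π^3$, which rearranges to Equation (19).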
After substituting Equations (2)–(4) into Relations (10)–(12) and applying Equations (16)–(19), we obtain expressions for the forces acting on the droplet perpendicularly and tangentially to the wetted surface. Setting them equal to zero gives the condition for the balance of these forces, i.e., the condition necessary for the minimum mechanical energy of the droplet.
$F_z = 2 π σ_{CI} R_π \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3} \left\{ \sin^2 φ - \frac{1}{2} \frac{ε R_π}{σ_{CI}} \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3} \sin^2 φ - \frac{σ_{FI}}{σ_{CI}} - \frac{F_L}{σ_{CI} R_π} \frac{1}{\sin φ} \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{-1/3} \right\} = 0$ (20)
$F_r = -2 π σ_{CI} R_π \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3} \sin φ \left\{ \sin^2 φ - \frac{1}{2} \frac{ε R_π}{σ_{CI}} \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3} \sin^2 φ - \frac{σ_{FI}}{σ_{CI}} - \frac{F_L}{σ_{CI} R_π} \frac{1}{\sin φ} \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{-1/3} \right\} = 0$ (21)
$F_θ = \frac{F_r}{2 π} = -σ_{CI} R_π \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3} \sin φ \left\{ \sin^2 φ - \frac{1}{2} \frac{ε R_π}{σ_{CI}} \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3} \sin^2 φ - \frac{σ_{FI}}{σ_{CI}} - \frac{F_L}{σ_{CI} R_π} \frac{1}{\sin φ} \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{-1/3} \right\} = 0$ (22)
It is noteworthy that after such transformations, the only independent variable in all equations is the contact angle, $φ$. Additionally, in all equations the expressions standing in front of the curly brackets are always different from zero for $φ ∈ (0°, 180°)$. This in turn means that the condition of the balance of forces acting on a drop can be simplified to the following form:
$\sin^2 φ - \frac{1}{2} \frac{ε R_π}{σ_{CI}} \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3} \sin^2 φ - \frac{σ_{FI}}{σ_{CI}} - \frac{F_L}{σ_{CI} R_π} \frac{1}{\sin φ} \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{-1/3} = 0$ (23)
Using the relations (13)–(15), Equations (16)–(22) and (23), it is possible to determine the relationships whose fulfillment is a sufficient condition for the minimum mechanical energy of a droplet
deposited on the substrate:
$\left( \frac{dF_z}{dz} \right)_{F_z = 0} = 2 π σ_{CI} (2 + \cos φ) \sin φ \left\{ 2 \sin φ \cos φ - \frac{1}{2} E \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3} \sin φ \left[ 2 \cos φ - \frac{(1 + \cos φ)^2}{2 + \cos φ} \right] - D \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{-1/3} \frac{1}{(1 - \cos φ)(1 + \cos φ)(2 + \cos φ)} \right\} > 0$ (24)
$\left( \frac{dF_r}{dr} \right)_{F_r = 0} = 2 π σ_{CI} (2 + \cos φ) \sin φ \left\{ 2 \sin φ \cos φ - \frac{1}{2} E \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3} \sin φ \left[ 2 \cos φ - \frac{(1 + \cos φ)^2}{2 + \cos φ} \right] - D \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{-1/3} \frac{1}{(1 - \cos φ)(1 + \cos φ)(2 + \cos φ)} \right\} > 0$ (25)
$\left( \frac{dF_θ}{dλ} \right)_{F_θ = 0} = \frac{1}{4 π^2} \left( \frac{dF_r}{dr} \right)_{F_θ = F_r = 0} = \frac{σ_{CI}}{2 π} (2 + \cos φ) \sin φ \left\{ 2 \sin φ \cos φ - \frac{1}{2} E \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3} \sin φ \left[ 2 \cos φ - \frac{(1 + \cos φ)^2}{2 + \cos φ} \right] - D \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{-1/3} \frac{1}{(1 - \cos φ)(1 + \cos φ)(2 + \cos φ)} \right\} > 0$ (26)
The parameters defined above should be treated as similarity numbers: $E = \frac{ε R_π}{σ_{CI}}$ (27), $B = \frac{σ_{FI}}{σ_{CI}}$ (28), and $D = \frac{F_L}{σ_{CI} R_π}$ (29). The first one ($E$) is a measure of the ratio of the adhesive force acting on the interface between the droplet and the solid substrate to the surface tension force acting on the boundary of the droplet and its fluid surroundings. The second ($B$) is a measure of the ratio of the force stretching (or shrinking) the wetted perimeter, tangential to the contact surface of the droplet with the solid substrate and parallel to the radius of the wetted area, to the surface tension force acting on the interface between the droplet and the fluid environment. The third ($D$) is a measure of the ratio of the force acting along the three-phase contact line (tangent to the interfacial surface of the droplet and the solid substrate) to the force of the surface tension acting at the interface between the droplet and the fluid environment.
Due to the always positive value of the expressions standing in front of the curly brackets for $φ ∈ (0°, 180°)$, Relations (24)–(26) simplify into the following form:
$2 \sin φ \cos φ - \frac{1}{2} E \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3} \sin φ \left[ 2 \cos φ - \frac{(1 + \cos φ)^2}{2 + \cos φ} \right] - D \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{-1/3} \frac{1}{(1 - \cos φ)(1 + \cos φ)(2 + \cos φ)} > 0$ (30)
3. Results and Discussion
The complexity of the derived equations makes it impossible to assess the properties of the solutions directly. For this reason, it is worth discussing a few simple systems as well as those described in the literature.
3.1. Young’s Solution
According to Young’s model, at the point of contact of the three phases, there is an equilibrium between the resultant of tangential stresses to the surface to be wetted and the projection (onto this
surface) of the stress occurring at the interface between the fluids. These tangential stresses are caused by differences in the interaction of the droplet molecules and their surrounding molecules
with the solid surface. The superposition of these tangential stresses fully corresponds to the force tangential to the surface ($F_T$) introduced during the derivation of this model. Therefore, if we assume that only this force acts in the analyzed system, the equation describing the necessary condition for the occurrence of the equilibrium of forces (23) simplifies to the form:
$\sin^2 φ - \frac{σ_{FI}}{σ_{CI}} = 0$ (31)
The above result indicates that in the considered case, real solutions of the model are possible for non-negative values of parameter $B$ limited by unity, $0 ≤ B = \frac{σ_{FI}}{σ_{CI}} ≤ 1$. This in turn indicates that in the system under consideration there can only exist forces tangential to the wetted surface that stretch the perimeter of the wetted area.
After a simple transformation, we obtain an equation in a form closer to Young’s equation:
$\cos φ = ± \sqrt{1 - \frac{σ_{FI}}{σ_{CI}}}$ (32)
It follows from the above equation that its solution will be two contact angles symmetrically distant from the asymptote $φ = 90 °$.
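As a quick numerical illustration (the value of $B$ is chosen arbitrarily for this example): for $B = \frac{σ_{FI}}{σ_{CI}} = \frac{3}{4}$, Equation (32) gives $\cos φ = ± \frac{1}{2}$, i.e., $φ = 60°$ and $φ = 120°$, two angles lying symmetrically about $90°$.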
Relation (30), describing the sufficient condition for the minimum of mechanical energy of a droplet deposited on a solid surface, simplifies to the form:
$2 \sin φ \cos φ > 0$ (33)
It is satisfied in the range of contact angles $0 < φ < 90°$. This means that in the case under consideration, the equilibrium of forces acting on the droplet will occur for two contact angles, but only within the given range will the achieved equilibrium of forces be stable.
It is very important that both Young’s model and the one formulated here confirm the independence of the contact angle from the droplet volume. However, there are also significant differences between the two models; e.g., Young’s solution provides only one contact angle, which increases as the force tangential to the surface decreases. In turn, in the formulated model, as the tangential force stretching the droplet over the surface increases, the smaller angle increases and the larger one decreases, although the balance of forces is maintained for both. This is due to the fact that in the absence of a force stretching the drop over the surface, the balance of forces in the described system occurs only when the drop spreads over the surface to form a flat liquid layer ($φ → 0°$) or when the drop retains its spherical shape ($φ → 180°$). However, only in the first case (for the smaller contact angle) will the equilibrium of forces be stable.
3.2. Improved Young’s Solution
In 1936, Vesselovsky and Pertzov [ ] proposed to introduce a term into Young’s equation that takes into account the effect of the stress force acting along the line occurring at the border of the three phases. This slightly reduced the error of fitting the experimental data to the equation formulated in this way. In more recent works, such a modified equation is used as a boundary condition for solving the equation describing the deformation of the droplet shape caused by the force of gravity [ ], i.e., by hydrostatic pressure, despite the fact that no external forces were taken into account during its derivation.
The presented model allows for the formulation of an analogous relationship. It is enough to equate, in Equation (23), the term describing the force of adhesion of the liquid to the surface of the substrate to zero, obtaining:
$\sin^2 φ - \frac{σ_{FI}}{σ_{CI}} - \frac{F_L}{σ_{CI} R_π} \frac{1}{\sin φ} \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{-1/3} = 0$ (34)
Unfortunately, the form of the obtained equation differs from that proposed in the literature.
In the considered case, Equation (34) describes the equilibrium state of forces acting on a droplet resting on a solid substrate. However, it is worth applying Relation (30) to determine the ranges of variability of parameters $B$ and $D$ in which this equilibrium is stable:
$2 \sin φ \cos φ - D \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{-1/3} \frac{1}{(1 - \cos φ)(1 + \cos φ)(2 + \cos φ)} > 0$ (35)
After a few transformations, we get:
$D = \frac{F_L}{σ_{CI} R_π} < 2 \sin^3 φ \cos φ \, (2 + \cos φ) \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3}$ (36)
Using Equation (34), the above relation can also be presented for parameter $B$:
$B = \frac{σ_{FI}}{σ_{CI}} > \sin^2 φ \left[ 1 - 2 \cos φ \, (2 + \cos φ) \right]$ (37)
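This bound follows by expressing $D$ from Equation (34), $D = \left( \sin^2 φ - B \right) \sin φ \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3}$, and inserting it into Relation (36); the bracketed factors cancel, leaving the purely trigonometric bound above (a short verification step added here for the reader).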
The dependence of the parameter $D$, determining the ratio of the force acting along the three-phase boundary line to the force acting on the interface between the fluids, on the contact angle is shown in Figure 2. The calculations made with the use of Equations (34) and (36) were used to create the graph.
As a result of Relation (36), the diagram in Figure 2 is divided into two parts, one of which concerns solutions of stable (permanent) equilibrium of forces acting on a drop, and the other (gray) solutions of unstable equilibrium. The first group of solutions is located under the line marked $W′ = 0$, $W″ = 0$.
Contrary to the solution of Young’s problem presented above, given by Equations (31) or (32), the range of variability of the coefficient $B = \frac{σ_{FI}}{σ_{CI}}$ seems unlimited. However, for $B < −1.42$, all solutions obtained from the discussed model (regardless of the contact angle) are solutions of the unstable equilibrium, while for $B > 1.92$ they correspond to a stable equilibrium of forces independently of the contact angle values.
For a constant value of the parameter $B$ lying in the range $−1.42 < B < 0$ and for the area of stable equilibrium of forces, the value of the parameter $D$ increases with the increase of the contact angle. This means that for a given system characterized by a constant value of unit stresses (used in the model), the contact angle will increase with the decrease of the radius $R_π$, i.e., the droplet volume. Moreover, in this range of variability of parameter $B$ and for a given value of parameter $D$, the system will be characterized by only one stable solution and, apart from that, it may have two more solutions for which the balance of forces will be unstable. Only for $B = 0$ will there be two contact angles corresponding to the balance of forces, the smaller one a stable equilibrium and the larger one an unstable equilibrium. On the other hand, for a constant value of parameter $B$ lying in the range $0 < B < 1.92$, a stable balance of forces occurs in two areas: for small and for large contact angles. In both of these areas, the value of the parameter $D$ increases with increasing contact angle. This means that in each of these areas, for a given system characterized by a constant value of unit stresses (used in the model), the contact angle will increase with the decrease of the radius $R_π$, i.e., the droplet volume. Moreover, in this range of variability of parameter $B$ and for a given value of parameter $D$, the system will be characterized by two stable solutions, one for small and one for large contact angles, and besides it may have one more solution (lying between the two mentioned) for which the balance of forces will be unstable. It is worth noting that for values of parameter $B > 1$, solutions can be obtained only for negative values of parameter $D$, which means that they will characterize a system in which the force acting along the three-phase contact line shrinks the wetted perimeter.
The line $D = 0$ separates two cases in which the force acting along the three-phase contact line changes its direction of action: for $D > 0$ it stretches the wetted perimeter, and for $D < 0$ it contracts it. This shows that for parameter $B$ in the range $0 < B < 1$ there is a discontinuity of the solutions, consisting in the fact that decreasing the radius $R_π$ (reducing the droplet volume) increases the parameter $D$; exceeding the value $D = 0$, however, would require a change in the direction of action of the $F_L$ force. For a given system, the values of the physicochemical parameters cannot change, and even more so the directions of the forces resulting from the stresses cannot change. It should be remembered that in the derived model a spherical drop shape and a circular shape of the wetted area were assumed; hence, this discontinuity of the solution may signal the need for a change in the shape of the deposited drop and of the wetted area. Such a change can occur without any change in the physicochemical parameters of the system components.
The dependence of the parameter $B$, defining the ratio of the stress acting tangentially to the substrate, in the direction of the $r$ axis, on the line delimiting the three phases to the stress acting on the interface between the liquid phases (the droplet and its surroundings), on the contact angle is shown in Figure 3. Calculations made with the use of Equations (34) and (37) were used to create the graph.
As in the previous figure, in Figure 3 there is also a line, determined on the basis of Equation (37), separating the graph into two parts: the part that concerns the stable equilibrium of forces acting on the droplet, and the part that shows the other solutions (gray). The line separating the two areas is denoted by $W′ = 0$, $W″ = 0$.
The graph shown in Figure 3 shows that for values of the parameter $D$ exceeding $3.5$, there are no solutions to Equation (34) determining a stable equilibrium of forces acting in the system under consideration, while for $D < 3.5$ and a given (but any) value of parameter $B$, there is only one solution that determines a stable balance of forces. Moreover, for values of this parameter lower than $D = −1.07$, all its solutions will represent a stable equilibrium of forces.
For parameter values in the ranges $−∞ < B < −1.42$ and $3.5 < D < ∞$, there can be only one solution, determining the unstable equilibrium of forces. In the ranges $−1.42 < B < 0$ and $1.33 < D < 3.5$, for a given value of parameter $B$, apart from one stable solution there may be at most two solutions determining the unstable equilibrium of forces. Similarly, in the ranges $0 < B < 1.92$ and $0 < D < 1.07$, for a given value of $B$, apart from one stable solution there may be at most two unstable solutions. It is worth noting that in all these ranges of variability of parameter $B$, for its constant value, parameter $D$ (corresponding to the solutions for a stable balance of forces) should increase with the increase of the contact angle. In the range of parameter variability $0 < B < 1$ and $0 < D < 1.33$, the lines of solutions lying on the line $D(φ) = \mathrm{const.}$ intersect the line $B = 0$. The reasons and consequences of such behavior of the system have already been described in the discussion of the diagram shown in Figure 2.
3.3. Influence of Adhesion Force on Droplet Deposition
So far, the influence of the adhesive force on the behavior of the deposited droplet has not been analyzed theoretically in the literature. Therefore, the discussion of this issue, based on the solutions of the formulated model, will be more detailed.
Consider the case of a spherical droplet acted upon by the force $F_T$, tangent to the surface of the substrate and parallel to the $r$ axis, and by the adhesive force $F_A$, acting on the droplet over the entire wetted surface. For the sake of simplicity, we assume the absence of the $F_L$ force acting along the line constituting the border of the three phases. For such a system, Equation (23) determining the balance of forces is simplified to the form:
$\sin^2 φ - \frac{1}{2} \frac{ε R_π}{σ_{CI}} \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3} \sin^2 φ - \frac{σ_{FI}}{σ_{CI}} = 0$ (38)
On the other hand, the condition for the existence of a stable balance of forces given by Relation (30) takes the form:
$2 \sin φ \cos φ - \frac{1}{2} E \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{1/3} \sin φ \left[ 2 \cos φ - \frac{(1 + \cos φ)^2}{2 + \cos φ} \right] > 0$ (39)
Due to the occurrence of an asymptote (for $\cos φ = \sqrt{2} - 1$), the above inequality is met in two ranges:
$E < \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{-1/3} \frac{2 \cos φ}{2 \cos φ - \frac{(1 + \cos φ)^2}{2 + \cos φ}}$ for $φ < \arccos(\sqrt{2} - 1)$ (40)
$E > \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{-1/3} \frac{2 \cos φ}{2 \cos φ - \frac{(1 + \cos φ)^2}{2 + \cos φ}}$ for $φ > \arccos(\sqrt{2} - 1)$ (41)
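The position of this asymptote can be checked directly (a short verification added here): setting the square bracket in Relation (39) to zero and writing $c = \cos φ$ gives
$2c - \frac{(1 + c)^2}{2 + c} = 0 \;⟹\; 2c(2 + c) = (1 + c)^2 \;⟹\; c^2 + 2c - 1 = 0 \;⟹\; c = \sqrt{2} - 1$
so the asymptote lies at $φ = \arccos(\sqrt{2} - 1) ≈ 65.5°$.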
Using Equation (39), one can determine the sufficient condition for the minimum mechanical energy of the droplet relative to parameter $B$:
$B > -\frac{1 - \cos φ}{1 + \cos φ} - \frac{2}{1 + \cos φ}$ for $φ < \arccos(\sqrt{2} - 1)$ (42)
$B < -\frac{1 - \cos φ}{1 + \cos φ} - \frac{2}{1 + \cos φ}$ for $φ > \arccos(\sqrt{2} - 1)$ (43)
The balance of forces acting on the droplet, Equation (38), can be achieved in a very wide range of variability of parameters $E$ and $B$ (Figure 4 and Figure 5). However, only in some of their variability ranges can the system achieve minimal mechanical energy (Equation (39) together with Conditions (40)–(43)), and this determines the existence of a stable equilibrium of forces. The gray areas, delimited by the lines marked $W′ = 0$, $W″ = 0$, correspond to the unstable equilibrium of forces acting on the droplet.
Initially, let us discuss only the area where the droplet achieves a stable equilibrium of forces. The graph in Figure 4 shows that over the entire area, for a constant value of parameter $B$, parameter $E$ increases with increasing contact angle. For $B < 0$, the increase in the contact angle begins at a certain value lying on the border of the area of stable equilibrium of forces acting on the droplet and ends at $180°$. However, for $B > 0$, the contact angle may vary from zero to the value lying on the border with the area of stable equilibrium of forces. Only in the case of $B = 0$ may the contact angle vary from $0°$ to $180°$.
It should be noted that the value of the $E$ parameter is directly proportional to the radius $R_π$ of the deposited spherical droplet. It follows that the contact angle of the deposited droplet should increase with its volume. It is also worth noting that for values of the parameter $B < −1$, the corresponding values of the parameter $E$ would have to be negative, which would indicate the existence of adhesive forces pushing the drop off the substrate.
In areas where there is an unstable equilibrium of forces acting on the droplet, i.e., there is no minimum mechanical energy, for $B < 0$ and $B > 0$, a decrease in the $E$ parameter is visible with
an increase in the contact angle. It is easy to conclude that in the absence of stretching or shrinking forces acting on the droplet ($B = 0$) the value of the contact angle $φ = 90 °$ occurs only
when the adhesive force balances the pressure inside the droplet.
Figure 5 shows that in the area of a stable equilibrium of forces acting on the droplet, for a constant value of parameter $E = \frac{ε R_π}{σ_{CI}}$, the contact angle increases with the increase of parameter $B = \frac{σ_{FI}}{σ_{CI}}$. This increase is faster the higher the value of parameter $E$. In the value range $0 < E < 2$, solutions satisfying the condition of a stable equilibrium of forces are limited, for small and large contact angles, by lines beyond which this condition is no longer fulfilled. However, for $E ≥ 2$, this restriction applies only to small contact angles. It is noteworthy that the lines were determined only for $E ≥ 0$, assuming the existence of only attractive forces between the liquid molecules and the solid surface.
The graphs in Figure 4 and Figure 5 show that for non-zero values of parameter $B$, the droplet reaches a state of equilibrium of forces for two values of the contact angle. At the same time, only one of them corresponds to the minimum of its mechanical energy (stable equilibrium of forces), while the other is a state of unstable equilibrium. It is also noteworthy that with a constant value of the $B$ parameter, but with an increase in the droplet volume, i.e., an increase in its $R_π$ radius and a consequent increase in the $E$ parameter, the contact angle for which there is a stable balance of forces increases, while the contact angle corresponding to the unstable balance of forces decreases. The results of experimental observations (carried out in the gravitational field) indicate the presence of two contact angles [ ], advancing and receding. Thus, the formulated model indicates a mechanism for such a phenomenon, without the need to introduce other, additional mechanisms. The more so as the results of the experiments do not determine the stability or instability of the force equilibrium in either of these cases, and the rheological properties of the liquid forming the droplets may significantly disturb the observation results, i.e., the rate of transformation of an “unstable” drop into a “stable” droplet.
One more aspect, resulting from the charts in Figure 4 and Figure 5, should be noted. Looking at the definition of parameter $E$ (27) and the graph in Figure 4, one gets the impression that for a known value of parameter $B$ and of the ratio $\frac{ε}{σ_{CI}}$, determined based on physico-chemical data, there is a limit on the volume of droplets characterized by the radius $R_π$. For $B > 0$ there should be a maximum, and for $B < 0$ a minimum droplet volume, for which solutions can be found in Figure 4 and Figure 5. However, this does not mean that droplets of other volumes will not wet the substrate, but rather that they will take a shape different from the sphere segment (vide assumption 1). For each of these shapes, in systems without external forces, the pressure in the entire volume of the liquid should be constant (the condition of keeping the curvature of the interface constant); otherwise the liquid will move (see the Navier–Stokes equation). This, in turn, would run counter to the mechanical equilibrium conditions of the system.
4. Remarks on the Model and its Experimental Verification
Although, according to Equations (27) and (29), the values of the parameters $E$ and $D$ depend on the droplet volume, their product has to be a constant value, because it is expressed only by means of physicochemical parameters characterizing the system:
$D \, E = \frac{F_L}{σ_{CI} R_π} \cdot \frac{ε R_π}{σ_{CI}} = \frac{F_L \, ε}{σ_{CI}^2}$ (44)
If we assume that the adhesion force acting on the droplet can only attract the liquid to the substrate, the sign of the above product depends on the direction of the force acting along the line separating the three phases. Thus, for a non-zero value of this product, there will be only two kinds of solutions. Only when the value of the product is equal to zero may all three systems described in this paper exist.
The Laplace–Young Equation (1) requires that the sum of the principal curvatures of the liquid–surroundings interface be constant and equal at every point of this surface. This means that at each point of this surface the pressure difference between the inside of the droplet and the surroundings has to be constant and the same. Meeting this condition in a gravitational field is difficult due to the existence of hydrostatic pressure along the drop height axis when the substrate is perpendicular to the direction of the gravity force, the more so because the shape of the drop will then differ from the spherical one [ ]. Nevertheless, it may be tempting to establish a condition for which the influence of hydrostatic pressure will be small compared to the internal pressure in the spherical droplet. The ratio of both of these pressures can be written as:
$Δ = \frac{Δρ \, g \, z_a \, R}{2 σ_{CI}}$ (45)
where $Δρ$ is the difference in density between the droplet and its surroundings, $g$ is the acceleration of gravity, and $z_a$ is the height of the drop (at its apex). The other quantities have already been described in the paper.
In order to determine the height of a droplet at its apex, it is necessary to solve the equation defining its shape in the gravitational field [ ]. It is a fairly simple numerical problem. However, the real problem is to determine the boundary condition based on the balance of forces acting on the droplet on the contact surface of the drop with the substrate. There is no such solution yet, although it is possible to obtain one using the methodology described in this paper. Taking into account the fact that the hydrostatic pressure is to be relatively small compared to the pressure inside the droplet, the droplet shape will also only slightly differ from the spherical one. Thus, Equation (16) can be applied and, after substituting the remaining geometric dependencies, Expression (45) takes the form:
$Δ = \frac{Δρ \, g \, R_π^2}{2 σ_{CI}} \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{2/3}$ (46)
Taking into account the experimental results given in the paper [ ], for which the parameters had the following values: $Δρ = 763$ [kg/m³], $R_π = 3.82 × 10^{−4}$ [m], and $σ_{CI} = 0.075$ [N/m], the value $Δ = 0.038$ was calculated. However, since even in the case of such small droplets their shape slightly differed from the spherical one, the actual value of $Δ$ should be lower:
$0.038 > \frac{Δρ \, g \, R_π^2}{2 σ_{CI}} \left[ \frac{4}{(1 - \cos φ)^2 (2 + \cos φ)} \right]^{2/3}$
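As a side note, the quoted value of $Δ$ is straightforward to explore numerically. The short Python sketch below (not part of the original paper) evaluates the reconstructed Expression (46) for the cited parameters over a range of contact angles, assuming the standard gravitational acceleration $g = 9.81$ m/s²:

import numpy as np

# Parameters cited in the paper (SI units assumed)
delta_rho = 763.0   # density difference [kg/m^3]
R_pi = 3.82e-4      # radius of the levitating droplet [m]
sigma_CI = 0.075    # surface tension [N/m]
g = 9.81            # gravitational acceleration [m/s^2]

def pressure_ratio(phi_deg):
    """Ratio of hydrostatic to internal (Laplace) pressure, per Expression (46)."""
    c = np.cos(np.radians(phi_deg))
    bracket = 4.0 / ((1.0 - c) ** 2 * (2.0 + c))
    return delta_rho * g * R_pi ** 2 / (2.0 * sigma_CI) * bracket ** (2.0 / 3.0)

for phi in (30, 60, 90, 120, 150):
    print(f"phi = {phi:3d} deg -> Delta = {pressure_ratio(phi):.4f}")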
It is noteworthy that the influence of gravity can be reduced by depositing the liquid droplets on the substrate in the surroundings of another liquid with a density as close as possible to that of the liquid forming the droplets. Of course, both liquids should mix very poorly with each other. However, the largest range of droplet volumes applicable in the measurements could be achieved by reducing the acceleration of gravity, e.g., under the conditions of the ISS space station, where the effective acceleration is only $10^{−5} · g$. It is noteworthy that the results of experiments conducted during parabolic flights may be affected by a significant error: the duration of the weightless condition during such a flight may be too short for the liquid droplets deposited on the solid substrate to reach a state of equilibrium of forces.
5. Conclusions
In the solutions of the formulated mathematical model of the deposition of a spherical liquid droplet on a solid substrate, it was found that the balance of forces acting on the droplet may be stable (minimum mechanical energy of the droplet) or unstable. Each of these solutions corresponds to a specific value of the contact angle. Depending on the values of the physicochemical parameters and the directions of the forces, in each of the analyzed systems there are either no stable solutions at all (Young case with $B < 0$; improved Young case with $B < −1.42$ or $D > 3.5$) or at least one (other cases). Likewise, depending on the values of these parameters, in the system under consideration there may either be no unstable solutions (Young case with $0 ≤ B ≤ 1$; improved Young case with $B > 1.92$ or $D < −1.07$) or at least one.
Taking into account the fact that the model was formulated for droplets having the shape of a section of a sphere, the lack of its solutions for the given values of physicochemical parameters does
not prove that the droplets cannot be deposited on such a substrate. This lack of solutions may be due to the fact that drops of a different shape can also achieve a stable equilibrium of forces on
such a substrate.
A very wide range of variability of the model parameters ($B$, $D$, $E$) was determined on the basis of the mathematical solutions. This means that the actual range of their variability should be determined based on experimental results.
Author Contributions
Both authors have equal participation in the formulation of the mathematical model of the phenomenon, its solution and discussion, and in the preparation of this publication. All authors have read
and agreed to the published version of the manuscript.
Funding
This research was funded by Warsaw University of Technology grant no. 504/04559/7192/43.060002 (J.A.M.).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new experimental data were created or analyzed in this study. Data sharing is not applicable to this article.
Acknowledgments
We thank our friend Mirosław Dolata for thousands of hours of stormy but constructive discussions, suggestions for possible solutions, and words of encouragement. Your friendship and the fact that you have worked with us was an unquestionable honour. Requiesce in pace.
Conflicts of Interest
The authors declare no conflict of interest.
Figure 1. Adopted coordinate system and designations of the geometrical dimensions of the drop being a sphere segment. The symbol marked $θ$ shows the direction of the axis of an angular variable in
a cylindrical coordinate system, perpendicular to the drawing plane.
Figure 2. The dependence of the parameter $D = \frac{F_L}{σ_{CI} R_π}$ on the contact angle for several values of the parameter $B = \frac{σ_{FI}}{σ_{CI}}$. The gray area marks solutions of the unstable balance of forces.
Figure 3. The dependence of the parameter $B = \frac{σ_{FI}}{σ_{CI}}$ on the contact angle for several values of the parameter $D = \frac{F_L}{σ_{CI} R_π}$. The gray area marks solutions of the unstable balance of forces.
Figure 4. Dependence of the $E = \frac{ε R_π}{σ_{CI}}$ parameter on the contact angle for several values of the $B = \frac{σ_{FI}}{σ_{CI}}$ parameter (the equilibrium of forces acting on the droplet). The gray area marks solutions of the unstable equilibrium of forces.
Figure 5. Dependence of the $B = \frac{σ_{FI}}{σ_{CI}}$ parameter on the contact angle for several values of the $E = \frac{ε R_π}{σ_{CI}}$ parameter (the equilibrium of forces acting on the droplet). The gray area marks solutions of the unstable equilibrium of forces.
Math Expression Generator Torrent Free Download
Math Expression Generator
Developer’s Description
By Softwareriviera
‘Math Expression Generator’ is an application which generates math expressions, taking into consideration the supplied parameters. It uses custom-defined parameters, base elements, and operations, all of which you can freely define, to generate the mathematical expressions. The parameters can look like PARAM_X, PARAM_Y, PARAM_ANYTHING, PARAM_SOMETHING, PARAM_etc. In the end result, the application knows to generate X for PARAM_X, Y for PARAM_Y, ANYTHING for PARAM_ANYTHING, SOMETHING for PARAM_SOMETHING, and etc for PARAM_etc. A base element is a custom-defined string of characters (like sin(PARAM_X), cos(PARAM_Y), myfunction(PARAM_X), func()) which can have none, one, or more parameters attached to it. It is your choice whether to attach a parameter (or several parameters) to a base element or not. The application generates results using the supplied base elements, and if a base element has parameters, it removes the “PARAM_” part from those parameters and leaves only the end part. The operations are custom-defined strings of characters which join the base elements when the result is generated. The default operations are +, -, /, and *, but you can define anything as an operation, even words.
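The vendor’s implementation is not shown on this page, but the described substitution behavior is easy to sketch. The following Python snippet is a hypothetical re-implementation of the idea (the function name and structure are illustrative, not the application’s actual code):

import random

def generate_expression(base_elements, operations, count=3):
    # Pick random base elements and strip the 'PARAM_' prefix from any
    # parameters embedded in them, as the description above explains.
    parts = [random.choice(base_elements).replace("PARAM_", "")
             for _ in range(count)]
    expression = parts[0]
    for part in parts[1:]:
        expression += f" {random.choice(operations)} {part}"
    return expression

base_elements = ["sin(PARAM_X)", "cos(PARAM_Y)", "myfunction(PARAM_X)", "func()"]
operations = ["+", "-", "*", "/"]  # any strings, even words, would work here
print(generate_expression(base_elements, operations))
# Possible output: sin(X) * cos(Y) - func()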
Interactive equation editor
MathType is an interactive equation software from developer Design Science (Dessci) that lets you create and annotate math notation for word processing, desktop publishing, presentations, eLearning,
and more. The editor is also used for creating TeX, LaTeX, and MathML documents.
For the classroom or the boardroom
Traditional word processors are limited when it comes to working with complex mathematical equations or scientific expressions. MathType is a complementary desktop program that allows users to create
formulas, edit them, and insert them into a variety of documents. With this software students, educators, and professionals can build authentic formulas for research papers and rigorous review.
A compact interface
The MathType interface is compact and slightly crowded, with four rows of buttons for the symbols and a row of tabs categorized by type of math expression: algebra, derivations, statistics, matrices, sets, trig, and geometry. Navigation is straightforward, and users can manipulate equations as they please.
It’s easy to build an equation in MathType’s editing panel. To start, click the buttons to select the desired symbol, or use the Insert Symbol command. It’s also possible to copy and paste from the editing pane into another application.
There’s a vast range of formatting options to suit a variety of equations. Another attractive feature of MathType is that you can customize the app through the Preferences dialog box. You can assign
your own keyboard shortcuts to all symbols, templates, and commands. These shortcuts consist of one or two keystrokes with CTRL, Alt, or Shift modifiers.
MathType tools
When MathType installs a toolbar into Microsoft Word, users can insert mathematical notation either in-line or centered. The commands are straightforward. You can format equations by changing the spacing, styles, and font sizes of all equations without having to open each equation individually, convert equations to mark-up languages, and export all equations into a folder as either EPS, GIF, WMF, or PICT.
With the toolbar, you can insert equation numbers on either the right- or left-hand side; these are automatically updated if you place a formula in the middle of the document. This is useful for inserting chapters, hyperlinked references, sections, and equation numbers, which is great, especially for teachers who need to make worksheets.
MathType also installs a toolbar into PowerPoint, allowing users to design attractive presentations. With this toolbar, you can color all parts of equations. MathType handles CMYK, RGB, and spot color (for page layout software).
What’s Cool
1. Also great for teachers who must prepare the tasks for their students.
2. Learn while having fun: challenge your friends on GAME CENTER.
3. Succeeding in mathematics is a matter of exercise, everyone can get good results with the right preparation.
4. Try to solve the exercises with pen and paper, if you don’t know how to do it, the app shows you the COMPLETE SOLUTION.
Equation creator and editor
MathType is an educational desktop program developed by Design Science. The graphical editor is used only for creating mathematical equations in a full graphical What You See Is What You Get or
WYSIWYG environment. This means you can directly enter various mathematical markup languages such as TeX, LaTeX, and MathML. It is also integrated with other office and productivity software like
Microsoft Word, Microsoft Powerpoint, and Apple Pages. By collaborating with these desktop applications, you can quickly add equations and formulas onto your documents. MathType for Windows is
compatible with Windows 7 or newer as well as Microsoft Office 2007 or newer.
The purpose of MathType is to help users format mathematical equations. Whether the content will appear in textbooks or technical formulas, the program has an extensive set of tools that can help you create equations that are up to publication standards. You can specify which equations were authored in MathType, as well as collaborate with other authors and post-production staff.
Causal Attributions and Root-Cause Analysis in an Online Shop
This notebook is an extended and updated version of the corresponding blog post: Root Cause Analysis with DoWhy, an Open Source Python Library for Causal Machine Learning
In this example, we look at an online store and analyze how different factors influence our profit. In particular, we want to analyze an unexpected drop in profit and identify the potential root
cause of it. For this, we can make use of Graphical Causal Models (GCM).
The scenario
Suppose we are selling a smartphone in an online shop with a retail price of $999. The overall profit from the product depends on several factors, such as the number of sold units, operational costs
or ad spending. On the other hand, the number of sold units, for instance, depends on the number of visitors on the product page, the price itself and potential ongoing promotions. Suppose we observe
a steady profit of our product over the year 2021, but suddenly, there is a significant drop in profit at the beginning of 2022. Why?
In the following scenario, we will use DoWhy to get a better understanding of the causal impacts of factors influencing the profit and to identify the causes for the profit drop. To analyze our
problem at hand, we first need to define our belief about the causal relationships. For this, we collect daily records of the different factors affecting profit. These factors are:
• Shopping Event?: A binary value indicating whether a special shopping event took place, such as Black Friday or Cyber Monday sales.
• Ad Spend: Spending on ad campaigns.
• Page Views: Number of visits on the product detail page.
• Unit Price: Price of the device, which could vary due to temporary discounts.
• Sold Units: Number of sold phones.
• Revenue: Daily revenue.
• Operational Cost: Daily operational expenses which includes production costs, spending on ads, administrative expenses, etc.
• Profit: Daily profit.
Looking at these attributes, we can use our domain knowledge to describe the cause-effect relationships in the form of a directed acyclic graph, which represents our causal graph in the following.
The graph is shown here:
from IPython.display import Image
In this scenario we know the following:
Shopping Event? impacts:
→ Ad Spend: To promote the product on special shopping events, we require additional ad spending.
→ Page Views: Shopping events typically attract a large number of visitors to an online retailer due to discounts and various offers.
→ Unit Price: Typically, retailers offer some discount on the usual retail price on days with a shopping event.
→ Sold Units: Shopping events often take place during annual celebrations like Christmas, Father’s day, etc, when people often buy more than usual.
Ad Spend impacts:
→ Page Views: The more we spend on ads, the more likely people will visit the product page.
→ Operational Cost: Ad spending is part of the operational cost.
Page Views impacts:
→ Sold Units: The more people visiting the product page, the more likely the product is bought. This is quite obvious seeing that if no one would visit the page, there wouldn’t be any sale.
Unit Price impacts:
→ Sold Units: The higher/lower the price, the less/more units are sold.
→ Revenue: The daily revenue typically consist of the product of the number of sold units and unit price.
Sold Units impacts:
→ Revenue: Same argument as before, the number of sold units heavily influences the revenue.
→ Operational Cost: There is a manufacturing cost for each unit we produce and sell. The more units we sell, the higher the revenue, but also the higher the manufacturing costs.
Operational Cost impacts:
→ Profit: The profit is based on the generated revenue minus the operational cost.
Revenue impacts:
→ Profit: Same reason as for the operational cost.
Step 1: Define causal model
Now, let us model these causal relationships. In the first step, we need to define a so-called structural causal model (SCM), which is a combination of the causal graph and the underlying generative
models describing the data generation process.
The causal graph can be defined via:
import networkx as nx
causal_graph = nx.DiGraph([('Page Views', 'Sold Units'),
('Revenue', 'Profit'),
('Unit Price', 'Sold Units'),
('Unit Price', 'Revenue'),
('Shopping Event?', 'Page Views'),
('Shopping Event?', 'Sold Units'),
('Shopping Event?', 'Unit Price'),
('Shopping Event?', 'Ad Spend'),
('Ad Spend', 'Page Views'),
('Ad Spend', 'Operational Cost'),
('Sold Units', 'Revenue'),
('Sold Units', 'Operational Cost'),
('Operational Cost', 'Profit')])
To verify that we did not forget an edge, we can plot this graph:
from dowhy.utils import plot

plot(causal_graph)
Next, we look at the data from 2021:
import pandas as pd
import numpy as np
pd.options.display.float_format = '${:,.2f}'.format # Format dollar columns
data_2021 = pd.read_csv('2021 Data.csv', index_col='Date')
data_2021.head()
│ │Shopping Event?│Ad Spend │Page Views│Unit Price│Sold Units│ Revenue │Operational Cost │ Profit │
│ Date │ │ │ │ │ │ │ │ │
│2021-01-01│False │$1,490.49│11861 │$999.00 │2317 │$2,314,683.00│$1,659,999.89 │$654,683.11│
│2021-01-02│False │$1,455.92│11776 │$999.00 │2355 │$2,352,645.00│$1,678,959.08 │$673,685.92│
│2021-01-03│False │$1,405.82│11861 │$999.00 │2391 │$2,388,609.00│$1,696,906.14 │$691,702.86│
│2021-01-04│False │$1,379.30│11677 │$999.00 │2344 │$2,341,656.00│$1,673,380.64 │$668,275.36│
│2021-01-05│False │$1,234.20│11871 │$999.00 │2412 │$2,409,588.00│$1,707,252.61 │$702,335.39│
As we see, we have one sample for each day in 2021 with all the variables in the causal graph. Note that in the synthetic data we consider here, shopping events were also generated randomly.
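Before modeling, it can also be useful to eyeball the target variable. A quick plot of the daily profit (a small visualization step added here; it is not part of the original notebook) would look like this:

import matplotlib.pyplot as plt

# Visualize the daily profit over 2021 to get a feel for its level and variability.
data_2021['Profit'].plot(figsize=(12, 3), title='Daily profit in 2021')
plt.ylabel('Profit [$]')
plt.tight_layout()
plt.show()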
We defined the causal graph, but we still need to assign generative models to the nodes. We can either manually specify those models, and configure them if needed, or automatically infer
“appropriate” models using heuristics from data. We will leverage the latter here:
from dowhy import gcm
# Create the structural causal model object
scm = gcm.StructuralCausalModel(causal_graph)
# Automatically assign generative models to each node based on the given data
auto_assignment_summary = gcm.auto.assign_causal_mechanisms(scm, data_2021, override_models=True, quality=gcm.auto.AssignmentQuality.GOOD)
Whenever available, we recommend assigning models based on prior knowledge, as such models would closely mimic the physics of the domain and not rely on nuances of the data. However, here we asked DoWhy to do this for us instead.
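For illustration, manually assigning mechanisms to two of the nodes could look like the sketch below (the model choices here are assumptions for demonstration, not the configuration used in this example):

# Root nodes get a marginal distribution; non-root nodes get a functional causal model.
scm.set_causal_mechanism('Shopping Event?', gcm.EmpiricalDistribution())
scm.set_causal_mechanism('Ad Spend',
                         gcm.AdditiveNoiseModel(gcm.ml.create_linear_regressor()))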
After automatically assigning the models, we can print a summary to obtain some insights into the selected models:

print(auto_assignment_summary)
When using this auto assignment function, the given data is used to automatically assign a causal mechanism to each node. Note that causal mechanisms can also be customized and assigned manually.
The following types of causal mechanisms are considered for the automatic selection:
If root node:
An empirical distribution, i.e., the distribution is represented by randomly sampling from the provided data. This provides a flexible and non-parametric way to model the marginal distribution and is valid for all types of data modalities.
If non-root node and the data is continuous:
Additive Noise Models (ANM) of the form X_i = f(PA_i) + N_i, where PA_i are the parents of X_i and the unobserved noise N_i is assumed to be independent of PA_i.To select the best model for f, different regression models are evaluated and the model with the smallest mean squared error is selected.Note that minimizing the mean squared error here is equivalent to selecting the best choice of an ANM.
If non-root node and the data is discrete:
Discrete Additive Noise Models have almost the same definition as non-discrete ANMs, but come with an additional constraint for f to only return discrete values.
Note that 'discrete' here refers to numerical values with an order. If the data is categorical, consider representing them as strings to ensure proper model selection.
If non-root node and the data is categorical:
A functional causal model based on a classifier, i.e., X_i = f(PA_i, N_i).
Here, N_i follows a uniform distribution on [0, 1] and is used to randomly sample a class (category) using the conditional probability distribution produced by a classification model.Here, different model classes are evaluated using the (negative) F1 score and the best performing model class is selected.
In total, 8 nodes were analyzed:
--- Node: Shopping Event?
Node Shopping Event? is a root node. Therefore, assigning 'Empirical Distribution' to the node representing the marginal distribution.
--- Node: Unit Price
Node Unit Price is a non-root node with continuous data. Assigning 'AdditiveNoiseModel using LinearRegression' to the node.
This represents the causal relationship as Unit Price := f(Shopping Event?) + N.
For the model selection, the following models were evaluated on the mean squared error (MSE) metric:
LinearRegression: 144.88116604972765
Pipeline(steps=[('polynomialfeatures', PolynomialFeatures(include_bias=False)),
('linearregression', LinearRegression)]): 144.89219813104341
HistGradientBoostingRegressor: 423.7900225174952
--- Node: Ad Spend
Node Ad Spend is a non-root node with continuous data. Assigning 'AdditiveNoiseModel using Pipeline' to the node.
This represents the causal relationship as Ad Spend := f(Shopping Event?) + N.
For the model selection, the following models were evaluated on the mean squared error (MSE) metric:
Pipeline(steps=[('polynomialfeatures', PolynomialFeatures(include_bias=False)),
('linearregression', LinearRegression)]): 16044.041781549313
LinearRegression: 16093.577562530398
HistGradientBoostingRegressor: 81795.40534390911
--- Node: Page Views
Node Page Views is a non-root node with discrete data. Assigning 'Discrete AdditiveNoiseModel using LinearRegression' to the node.
This represents the discrete causal relationship as Page Views := f(Ad Spend,Shopping Event?) + N.
For the model selection, the following models were evaluated on the mean squared error (MSE) metric:
LinearRegression: 77567.29999484058
Pipeline(steps=[('polynomialfeatures', PolynomialFeatures(include_bias=False)),
('linearregression', LinearRegression)]): 81139.23196375671
HistGradientBoostingRegressor: 1507780.28236011
--- Node: Sold Units
Node Sold Units is a non-root node with discrete data. Assigning 'Discrete AdditiveNoiseModel using LinearRegression' to the node.
This represents the discrete causal relationship as Sold Units := f(Page Views,Shopping Event?,Unit Price) + N.
For the model selection, the following models were evaluated on the mean squared error (MSE) metric:
LinearRegression: 8893.522313238336
Pipeline(steps=[('polynomialfeatures', PolynomialFeatures(include_bias=False)),
('linearregression', LinearRegression)]): 17030.325822194674
HistGradientBoostingRegressor: 238274.03868402308
--- Node: Revenue
Node Revenue is a non-root node with continuous data. Assigning 'AdditiveNoiseModel using Pipeline' to the node.
This represents the causal relationship as Revenue := f(Sold Units,Unit Price) + N.
For the model selection, the following models were evaluated on the mean squared error (MSE) metric:
Pipeline(steps=[('polynomialfeatures', PolynomialFeatures(include_bias=False)),
('linearregression', LinearRegression)]): 3.724902806292665e-19
LinearRegression: 73426026.09261553
HistGradientBoostingRegressor: 137042535745.46541
--- Node: Operational Cost
Node Operational Cost is a non-root node with continuous data. Assigning 'AdditiveNoiseModel using Pipeline' to the node.
This represents the causal relationship as Operational Cost := f(Ad Spend,Sold Units) + N.
For the model selection, the following models were evaluated on the mean squared error (MSE) metric:
Pipeline(steps=[('polynomialfeatures', PolynomialFeatures(include_bias=False)),
('linearregression', LinearRegression)]): 38.52215921263841
LinearRegression: 38.545120086403564
HistGradientBoostingRegressor: 13169282604.828556
--- Node: Profit
Node Profit is a non-root node with continuous data. Assigning 'AdditiveNoiseModel using LinearRegression' to the node.
This represents the causal relationship as Profit := f(Operational Cost,Revenue) + N.
For the model selection, the following models were evaluated on the mean squared error (MSE) metric:
LinearRegression: 1.7075812914532831e-18
Pipeline(steps=[('polynomialfeatures', PolynomialFeatures(include_bias=False)),
('linearregression', LinearRegression)]): 5.620130422430478e-06
HistGradientBoostingRegressor: 19795755289.732506
Note, based on the selected auto assignment quality, the set of evaluated models changes.
For more insights toward the quality of the fitted graphical causal model, consider using the evaluate_causal_model function after fitting the causal mechanisms.
As we see, while the auto assignment also considered non-linear models, a linear model is sufficient for most relationships, except for Revenue, which is the product of Sold Units and Unit Price.
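The near-zero MSE for Revenue is consistent with a deterministic relationship in the data. A quick sanity check (added here; it is not part of the original notebook) confirms that Revenue is exactly the product of its two parents:

# Revenue should equal Sold Units * Unit Price in this synthetic data set.
assert np.allclose(data_2021['Revenue'],
                   data_2021['Sold Units'] * data_2021['Unit Price'])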
Step 2: Fit causal models to data
After assigning a model to each node, we need to learn the parameters of the models:

gcm.fit(scm, data_2021)
Fitting causal mechanism of node Operational Cost: 100%|██████████| 8/8 [00:00<00:00, 358.42it/s]
The fit method learns the parameters of the generative model in each node. Before we continue, let’s have a quick look at the performance of the causal mechanisms and how well they capture the underlying data distributions:
print(gcm.evaluate_causal_model(scm, data_2021, compare_mechanism_baselines=True, evaluate_invertibility_assumptions=False))
Evaluating causal mechanisms...: 100%|██████████| 8/8 [00:00<00:00, 726.92it/s]
Test permutations of given graph: 100%|██████████| 50/50 [00:15<00:00, 3.17it/s]
Evaluated the performance of the causal mechanisms and the overall average KL divergence between generated and observed distribution and the graph structure. The results are as follows:
==== Evaluation of Causal Mechanisms ====
The used evaluation metrics are:
- KL divergence (only for root-nodes): Evaluates the divergence between the generated and the observed distribution.
- Mean Squared Error (MSE): Evaluates the average squared differences between the observed values and the conditional expectation of the causal mechanisms.
- Normalized MSE (NMSE): The MSE normalized by the standard deviation for better comparison.
- R2 coefficient: Indicates how much variance is explained by the conditional expectations of the mechanisms. Note, however, that this can be misleading for nonlinear relationships.
- F1 score (only for categorical non-root nodes): The harmonic mean of the precision and recall indicating the goodness of the underlying classifier model.
- (normalized) Continuous Ranked Probability Score (CRPS): The CRPS generalizes the Mean Absolute Percentage Error to probabilistic predictions. This gives insights into the accuracy and calibration of the causal mechanisms.
NOTE: Every metric focuses on different aspects and they might not consistently indicate a good or bad performance.
We will mostly utilize the CRPS for comparing and interpreting the performance of the mechanisms, since this captures the most important properties for the causal model.
--- Node Shopping Event?
- The KL divergence between generated and observed distribution is 0.00763414303757196.
The estimated KL divergence indicates an overall very good representation of the data distribution.
--- Node Unit Price
- The MSE is 150.26359524690918.
- The NMSE is 0.518162824703851.
- The R2 coefficient is 0.6670563143986281.
- The normalized CRPS is 0.10261102075374559.
The estimated CRPS indicates a very good model performance.
The mechanism is better or equally good than all 7 baseline mechanisms.
--- Node Ad Spend
- The MSE is 16260.466157225634.
- The NMSE is 0.4840474227228727.
- The R2 coefficient is 0.7584964154417491.
- The normalized CRPS is 0.28003083626186354.
The estimated CRPS indicates a good model performance.
The mechanism is better or equally good than all 7 baseline mechanisms.
--- Node Page Views
- The MSE is 82324.6.
- The NMSE is 0.15379749798186665.
- The R2 coefficient is 0.9751527905808336.
- The normalized CRPS is 0.06552464247905661.
The estimated CRPS indicates a very good model performance.
The mechanism is better or equally good than all 7 baseline mechanisms.
--- Node Sold Units
- The MSE is 8814.490410958904.
- The NMSE is 0.1259666138915681.
- The R2 coefficient is 0.982978771633477.
- The normalized CRPS is 0.05696622593277495.
The estimated CRPS indicates a very good model performance.
The mechanism is better or equally good than all 7 baseline mechanisms.
--- Node Revenue
- The MSE is 4.693074553403844e-16.
- The NMSE is 2.0090993322743605e-14.
- The R2 coefficient is 1.0.
- The normalized CRPS is 4.050472833850955e-15.
The estimated CRPS indicates a very good model performance.
The mechanism is better or equally good than all 7 baseline mechanisms.
--- Node Operational Cost
- The MSE is 38.72449973793657.
- The NMSE is 1.840786352388816e-05.
- The R2 coefficient is 0.9999999996429105.
- The normalized CRPS is 1.0138606450343845e-05.
The estimated CRPS indicates a very good model performance.
The mechanism is better or equally good than all 7 baseline mechanisms.
--- Node Profit
- The MSE is 1.6290880245822435e-18.
- The NMSE is 5.099095587576705e-15.
- The R2 coefficient is 1.0.
- The normalized CRPS is 3.7082836040834276e-16.
The estimated CRPS indicates a very good model performance.
The mechanism is better or equally good than all 7 baseline mechanisms.
==== Evaluation of Generated Distribution ====
The overall average KL divergence between the generated and observed distribution is 1.1793096223927442
The estimated KL divergence indicates some mismatches between the distributions.
==== Evaluation of the Causal Graph Structure ====
| Falsification Summary |
| The given DAG is informative because 0 / 50 of the permutations lie in the Markov |
| equivalence class of the given DAG (p-value: 0.00). |
| The given DAG violates 6/18 LMCs and is better than 80.0% of the permuted DAGs (p-value: 0.20). |
| Based on the provided significance level (0.2) and because the DAG is informative, |
| we do not reject the DAG. |
==== NOTE ====
Always double check the made model assumptions with respect to the graph structure and choice of causal mechanisms.
All these evaluations give some insight into the goodness of the causal model, but should not be overinterpreted, since some causal relationships can be intrinsically hard to model. Furthermore, many algorithms are fairly robust against misspecifications or poor performances of causal mechanisms.
The fitted causal mechanisms are fairly good representations of the data generation process, with some minor inaccuracies. However, this is to be expected given the small sample size and relatively
small signal-to-noise ratio for many nodes. Most importantly, all the baseline mechanisms did not perform better, which is a good indicator that our model selection is appropriate. Based on the
evaluation, we also do not reject the given causal graph.
The selection of baseline models or the p-value for graph falsification can be configured as well. For more details, take a look at the corresponding evaluate_causal_model documentation.
Step 3: Answer causal questions
Generate new samples
Since we learned about the data generation process, we can also generate new samples:
gcm.draw_samples(scm, num_samples=10)
│ │Shopping Event?│Unit Price│Ad Spend │Page Views│Sold Units│ Revenue │Operational Cost │ Profit │
│0│False │$999.00 │$1,252.13│11748 │2365 │$2,362,635.00│$1,683,752.65 │$678,882.35│
│1│False │$999.00 │$1,444.83│11682 │2387 │$2,384,613.00│$1,694,947.14 │$689,665.86│
│2│False │$999.00 │$1,470.88│11884 │2320 │$2,317,680.00│$1,661,475.74 │$656,204.26│
│3│False │$999.00 │$1,227.29│11739 │2334 │$2,331,666.00│$1,668,236.05 │$663,429.95│
│4│False │$999.00 │$1,488.29│11857 │2351 │$2,348,649.00│$1,676,998.87 │$671,650.13│
│5│False │$999.00 │$1,139.89│11716 │2293 │$2,290,707.00│$1,647,644.11 │$643,062.89│
│6│False │$999.00 │$1,239.78│11645 │2396 │$2,393,604.00│$1,699,242.26 │$694,361.74│
│7│False │$999.00 │$1,354.98│11746 │2431 │$2,428,569.00│$1,716,869.84 │$711,699.16│
│8│False │$999.00 │$1,437.69│11614 │2389 │$2,386,611.00│$1,695,942.91 │$690,668.09│
│9│False │$999.00 │$1,138.90│11630 │2335 │$2,332,665.00│$1,668,639.62 │$664,025.38│
We have drawn 10 samples from the joint distribution following the learned causal relationships.
What are the key factors influencing the variance in profit?
At this point, we want to understand which factors drive changes in the Profit. Let us first have a closer look at the Profit over time. For this, we plot the Profit over time for 2021, where the
produced plot shows the Profit in dollars on the Y-axis and the time on the X-axis.
data_2021['Profit'].plot(ylabel='Profit in $', figsize=(15,5), rot=45)
<Axes: xlabel='Date', ylabel='Profit in $'>
We see some significant spikes in the Profit across the year. We can further quantify this by looking at the standard deviation:
$\displaystyle 259247.66010978$
The estimated standard deviation of ~259247 dollars is quite significant. Looking at the causal graph, we see that Revenue and Operational Cost have a direct impact on the Profit, but which of them
contributes the most to the variance? To find this out, we can make use of the direct arrow strength algorithm that quantifies the causal influence of a specific arrow in the graph:
import numpy as np
# Note: The percentage conversion only makes sense for purely positive attributions.
def convert_to_percentage(value_dictionary):
total_absolute_sum = np.sum([abs(v) for v in value_dictionary.values()])
return {k: abs(v) / total_absolute_sum * 100 for k, v in value_dictionary.items()}
arrow_strengths = gcm.arrow_strength(scm, target_node='Profit')
# The plotting call was truncated on this page; reconstructed here, with argument names that are our best guess:
gcm.util.plot(causal_graph, causal_strengths=convert_to_percentage(arrow_strengths), figure_size=[15, 10])
In this causal graph, we see how much each node contributes to the variance in Profit. For simplicity, the contributions are converted to percentages. Since Profit itself is only the difference
between Revenue and Operational Cost, we do not expect further factors influencing the variance. As we see, Revenue has more impact than Operational Cost. This makes sense seeing that Revenue
typically varies more than Operational Cost due to the stronger dependency on the number of sold units. Note that the direct arrow strength method also supports the use of other kinds of measures,
for instance, KL divergence.
While the direct influences are helpful in understanding which direct parents have the most influence on the variance in Profit, this mostly confirms our prior belief. The question of which factor is
ultimately responsible for this high variance is, however, still unclear. For instance, Revenue itself is based on Sold Units and the Unit Price. Although we could recursively apply the direct arrow
strength to all nodes, we would not get a correctly weighted insight into the influence of upstream nodes on the variance.
What are the important causal factors contributing to the variance in Profit? To find this out, we can use the intrinsic causal contribution method that attributes the variance in Profit to the
upstream nodes in the causal graph by only considering information that is newly added by a node and not just inherited from its parents. For instance, a node that is simply a rescaled version of its
parent would not have any intrinsic contribution. See the corresponding research paper for more details.
Let’s apply the method to the data:
iccs = gcm.intrinsic_causal_influence(scm, target_node='Profit', num_samples_randomization=500)
Estimating Shapley Values. Average change of Shapley values in run 20 (100 evaluated permutations): 1.1812885833504427%: 100%|██████████| 1/1 [12:01<00:00, 721.87s/it]
from dowhy.utils import bar_plot
bar_plot(convert_to_percentage(iccs), ylabel='Variance attribution in %')
The scores shown in this bar chart are percentages indicating how much variance each node is contributing to Profit — without inheriting the variance from its parents in the causal graph. As we see
quite clearly, the Shopping Event has by far the biggest influence on the variance in our Profit. This makes sense, seeing that the sales are heavily impacted during promotion periods like Black
Friday or Prime Day and, thus, impact the overall profit. Surprisingly, we also see that factors such as the number of sold units or number of page views have a rather small influence, i.e., the
large variance in profit can be almost completely explained by the shopping events. Let’s check this visually by marking the days where we had a shopping event. To do so, we use the pandas plot
function again, but additionally mark all points in the plot with a vertical red bar where a shopping event occurred:
import matplotlib.pyplot as plt
data_2021['Profit'].plot(ylabel='Profit in $', figsize=(15,5), rot=45)
plt.vlines(np.arange(0, data_2021.shape[0])[data_2021['Shopping Event?']], data_2021['Profit'].min(), data_2021['Profit'].max(), linewidth=10, alpha=0.3, color='r')
<matplotlib.collections.LineCollection at 0x7f6d23cf57c0>
We clearly see that the shopping events coincide with the high peaks in profit. While we could have investigated this manually by looking at all kinds of different relationships or using domain
knowledge, the task gets much more difficult as the complexity of the system increases. With a few lines of code, we obtained these insights from DoWhy.
What are the key factors explaining the Profit drop on a particular day?
After a successful year in terms of profit, newer technologies come to the market and, thus, we want to keep the profit up and get rid of excess inventory by selling more devices. In order to
increase the demand, we therefore lower the retail price by 10% at the beginning of 2022. Based on a prior analysis following the price elasticity of demand model, we expect the price cut to
increase the number of Sold Units by around 37.5%, which would more than offset the lower price. Let us take a look if this is true by loading the data for the
first day in 2022 and taking the fraction between the numbers of Sold Units from both years for that day:
first_day_2022 = pd.read_csv('2022 First Day.csv', index_col='Date')
(first_day_2022['Sold Units'][0] / data_2021['Sold Units'][0] - 1) * 100
$\displaystyle 18.9469141130773$
Surprisingly, we only increased the number of sold units by ~19%. This will certainly impact the profit given that the revenue is much smaller than expected. Let us compare it with the previous year
at the same time:
(1 - first_day_2022['Profit'][0] / data_2021['Profit'][0]) * 100
$\displaystyle 8.57891513840979$
Indeed, the profit dropped by ~8.5%. Why is this the case seeing that we would expect a much higher demand due to the decreased price? Let us investigate what is going on here.
In order to figure out what contributed to the Profit drop, we can make use of DoWhy’s anomaly attribution feature. Here, we only need to specify the target node we are interested in (the Profit) and
the anomaly sample we want to analyze (the first day of 2022). These results are then plotted in a bar chart indicating the attribution scores of each node for the given anomaly sample:
attributions = gcm.attribute_anomalies(scm, target_node='Profit', anomaly_samples=first_day_2022)
bar_plot({k: v[0] for k, v in attributions.items()}, ylabel='Anomaly attribution score')
Estimating Shapley Values. Average change of Shapley values in run 26 (130 evaluated permutations): 1.91105736842231%: 100%|██████████| 1/1 [00:03<00:00, 3.11s/it]
A positive attribution score means that the corresponding node contributed to the observed anomaly, which is in our case the drop in Profit. A negative score of a node indicates that the observed
value for the node is actually reducing the likelihood of the anomaly (e.g., a higher demand due to the decreased price should increase the profit). More details about the interpretation of the score
can be found in the corresponding research paper. Interestingly, the Page Views stand out as a factor explaining the Profit drop that day as indicated in the bar chart shown here.
While this method gives us a point estimate of the attributions for the particular models and parameters we learned, we can also use DoWhy’s confidence interval feature, which incorporates
uncertainties about the fitted model parameters and algorithmic approximations:
gcm.config.disable_progress_bars() # We turn off the progress bars here to reduce the number of outputs.
# The call below was truncated on this page; reconstructed here, with bootstrap arguments that are our best guess:
median_attributions, confidence_intervals = gcm.confidence_intervals(
    gcm.fit_and_compute(gcm.attribute_anomalies,
                        scm,
                        bootstrap_training_data=data_2021,
                        target_node='Profit',
                        anomaly_samples=first_day_2022))
bar_plot(median_attributions, confidence_intervals, 'Anomaly attribution score')
Note, in this bar chart we see the median attributions over multiple runs on smaller data sets, where each run re-fits the models and re-evaluates the attributions. We get a similar picture as
before, but the confidence interval of the attribution to Sold Units also contains zero, meaning its contribution is insignificant. But some important questions still remain: Was this only a
coincidence and, if not, which part in our system has changed? To find this out, we need to collect some more data.
Note that the results differ depending on the selected data, since they are sample specific. On other days, other factors could be relevant. Furthermore, note that the analysis (including the
confidence intervals) always relies on the modeling assumptions made. In other words, if the models change or have a poor fit, one would also expect different results.
What caused the profit drop in Q1 2022?
While the previous analysis is based on a single observation, let us see if this was just coincidence or if this is a persistent issue. When preparing the quarterly business report, we have some more
data available from the first three months. We first check if the profit dropped on average in the first quarter of 2022 as compared to 2021. As before, we can do this by taking the fraction
between the average Profit of 2022 and 2021 for the first quarter:
data_first_quarter_2021 = data_2021[data_2021.index <= '2021-03-31']
data_first_quarter_2022 = pd.read_csv("2022 First Quarter.csv", index_col='Date')
(1 - data_first_quarter_2022['Profit'].mean() / data_first_quarter_2021['Profit'].mean()) * 100
$\displaystyle 13.0494881794224$
Indeed, the profit drop is persistent in the first quarter of 2022. Now, what is the root cause of this? Let us apply the distribution change method to identify the part in the system that has changed:
# The call below was truncated on this page; reconstructed here, with data arguments that are our best guess:
median_attributions, confidence_intervals = gcm.confidence_intervals(
    lambda: gcm.distribution_change(scm,
                                    data_first_quarter_2021,
                                    data_first_quarter_2022,
                                    target_node='Profit',
                                    # Here, we are interested in explaining the differences in the mean.
                                    difference_estimation_func=lambda x, y: np.mean(y) - np.mean(x)))
bar_plot(median_attributions, confidence_intervals, 'Profit change attribution in $')
In our case, the distribution change method explains the change in the mean of Profit, i.e., a negative value indicates that a node contributes to a decrease and a positive value to an increase of
the mean. Using the bar chart, we now get a very clear picture: the change in Unit Price actually has a slightly positive contribution to the expected Profit due to the increase in Sold Units,
but the issue seems to be coming from the Page Views, which have a negative value. While we already identified this as a main driver of the drop at the beginning of 2022, we have now isolated and
confirmed that something changed for the Page Views as well. Let’s compare the average Page Views with the previous year.
(1 - data_first_quarter_2022['Page Views'].mean() / data_first_quarter_2021['Page Views'].mean()) * 100
$\displaystyle 14.347627108364$
Indeed, the number of Page Views dropped by ~14%. Since we eliminated all other potential factors, we can now dive deeper into the Page Views and see what is going on there. This is a hypothetical
scenario, but we could imagine it could be due to a change in the search algorithm which ranks this product lower in the results and therefore drives fewer customers to the product page. Knowing
this, we could now start mitigating the issue.
Data generation process
While the exact same data cannot be reproduced, the following dataset generator should provide quite similar types of data and has various parameters to adjust:
from dowhy.datasets import sales_dataset
data_2021 = sales_dataset(start_date="2021-01-01", end_date="2021-12-31")
data_2022 = sales_dataset(start_date="2022-01-01", end_date="2022-12-31", change_of_price=0.9) | {"url":"https://www.pywhy.org/dowhy/v0.11/example_notebooks/gcm_online_shop.html","timestamp":"2024-11-08T15:50:01Z","content_type":"text/html","content_length":"99437","record_id":"<urn:uuid:07b24ef3-f124-46e3-924c-4f762ce596d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00591.warc.gz"} |
Statistics/Summary/Averages/Harmonic Mean - Wikibooks, open books for an open world
The arithmetic mean cannot be used when we want to average quantities such as speed.
Consider the example below:
Example 1: The distance from my house to town is 40 km. I drove to town at a speed of 40 km per hour and returned home at a speed of 80 km per hour. What was my average speed for the whole trip?
Solution: If we just took the arithmetic mean of the two speeds I drove at, we would get 60 km per hour. This isn't the correct average speed, however: it ignores the fact that I drove at 40 km per
hour for twice as long as I drove at 80 km per hour. To find the correct average speed, we must instead calculate the harmonic mean.
For two quantities A and B, the harmonic mean is given by: ${\displaystyle {\frac {2}{{\frac {1}{A}}+{\frac {1}{B}}}}}$
This can be simplified by combining the fractions in the denominator and multiplying by the reciprocal: ${\displaystyle {\frac {2}{{\frac {1}{A}}+{\frac {1}{B}}}}={\frac {2}{\frac {B+A}{AB}}}={\frac {2AB}{A+B}}}$
For N quantities: A, B, C......
Harmonic mean = ${\displaystyle {\frac {N}{{\frac {1}{A}}+{\frac {1}{B}}+{\frac {1}{C}}+\ldots }}}$
Let us try out the formula above on our example:
Harmonic mean = ${\displaystyle {\frac {2AB}{A+B}}}$
Our values are A = 40, B = 80. Therefore, harmonic mean ${\displaystyle ={\frac {2\times 40\times 80}{40+80}}={\frac {6400}{120}}\approx 53.333}$
Is this result correct? We can verify it. In the example above, the distance between the two towns is 40 km. So the trip from A to B at a speed of 40 km per hour will take 1 hour. The trip from B to A at a
speed of 80 km per hour will take 0.5 hours. The total time taken for the round trip (80 km) will be 1.5 hours. The average speed is then ${\displaystyle {\frac {80}{1.5}}\approx 53.33}$ km per hour.
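For readers who want to check this numerically, Python's standard library includes a harmonic mean (this snippet is ours, not part of the original page):
import statistics

print(statistics.harmonic_mean([40, 80]))  # 53.333...
print(2 * 40 * 80 / (40 + 80))             # same value via the 2AB/(A+B) formula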
The harmonic mean also has physical significance. | {"url":"https://en.m.wikibooks.org/wiki/Statistics/Summary/Averages/Harmonic_Mean","timestamp":"2024-11-10T18:34:28Z","content_type":"text/html","content_length":"52731","record_id":"<urn:uuid:696ca160-cc68-49d7-9043-b40a377fcd93>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00845.warc.gz"} |
Van Wilder 2006
Careful. This is a rather old-fashioned R movie. It is part of a trilogy from 2002, 2006 and 2009 (where the last is a prequel). The movie is a time capsule from an era when movie studios were still
allowed to do such flicks. Here is a scene containing some math (don't click if you are less than 18 or too sensitive). There are some formulas from special relativity or electromagnetism. Terms like
$4\pi\varepsilon_0 c^2$ appear in radiative fields. There are also lots of retarded terms like $t - r/c$.
MOV, Ogg Webm. IMDB link
Oliver Knill, Posted August 4th, 2024 | {"url":"https://people.math.harvard.edu/~knill/various/vanwilder/index.html","timestamp":"2024-11-08T01:27:35Z","content_type":"text/html","content_length":"2158","record_id":"<urn:uuid:90c7943b-947e-46fe-92fc-e02b2d637e89>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00127.warc.gz"} |
Unanswered Questions
HIIIIIIIIIIIIIII! :) I'm doing a problem on the restricted 3-body problem... What I want is to create a proc "equilibria(mu)" which finds all five equilibria of (DE) when mu_1 = mu and displays, in a neat
table, the (x,y)-position of each equilibrium and the eigenvalues of the vector field's Jacobian at that equilibrium. In addition, the table should indicate if the equilibrium is hyperbolic (all
eigenvalues are real and non-zero), elliptic (all eigenvalues are imaginary and non-zero), hyperbolic-elliptic (a mixture of non-zero real and imaginary eigenvalues), or otherwise. Now the
difficulty is this: I can find the equilibria and get the eigenvalues, but I have a problem with getting them into a table... I've been told that printf is a good one to use, but I have no idea at which bit to
use it... this is how far I've got | {"url":"https://mapleprimes.com/questions/unanswered/?page=347","timestamp":"2024-11-08T07:19:20Z","content_type":"text/html","content_length":"131940","record_id":"<urn:uuid:f42558b2-0b8f-42a8-85e4-bfd29291a0c5>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00239.warc.gz"}
Aquarium, rectangular tank calculator
If you have already bought an aquarium or any other tank for keeping liquids but you haven’t yet bought fish, you may estimate its volume capacity by yourself. You can do it by filling the reservoir with water up to its top edges. The amount of water in the reservoir is the volume of the reservoir. Then, with care, pour out all the water into a separate reservoir. For instance, it could be a special reservoir of regular geometrical shape or a measuring cylinder. You can visually specify the volume of your reservoir. However, if you need to take measurements of the water volume in a rectangular reservoir, it is more convenient to use our online app, which provides prompt and accurate measurements. Thus, the first definition of this free online app is aquarium volume calculator.
When you select an aquarium or any other liquid tank, depending on the purpose, you have to pay attention to the difference between the total and filled volumes of the tank. The former is more about the total capacity. The second definition of the app is aquarium capacity calculator, which means you can freely use it when you need to know the total volume of the water tank. This app is a nice tool not only for owners of aquariums and “goldfish”, but for almost anyone else. If one wants to know the fluid amount of a rectangular-shaped reservoir or tank, this is the right place.
When you have to add treatments or chemical additives, it is life-critical for the fish if you use incorrect proportions. In this case, it is better to re-estimate the volume of water if you are not sure about it. Thus, the third aim and definition of the present app is aquarium water volume calculator. Additionally, it is possible to calculate any other rectangular tank volume. To use all the functions you should take measurements, enter the figures into the boxes and click ‘calculate’.
As noted, the present app can calculate the volume of water in different rectangular reservoirs. It can be applied to estimate the reservoir surface area, and the total and filled volumes.
Before you enter the parameters into the boxes and get the result, you have to use a measuring tape to take the dimensions.
Note that all measurements should be in mm:
• H — Level of water (or another type of liquid);
• Y — Height of the reservoir;
• L — Length of the reservoir;
• X — Width of the reservoir.
As a result, you will know:
• Total area of the reservoir;
• Area of the side surface;
• Area of the bottom;
• Filled volume;
• Amount of the fluid;
• Volume of the reservoir.
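The arithmetic behind these outputs is elementary; here is a minimal sketch in Python (ours, not from the page), using the measurement names listed above:
def tank_summary(x_mm, y_mm, l_mm, h_mm):
    # Per the list above: x = width, y = height, l = length, h = water level (all in mm).
    to_litres = 1e-6  # mm^3 -> litres
    to_m2 = 1e-6      # mm^2 -> m^2
    return {
        "bottom area (m^2)": x_mm * l_mm * to_m2,
        "side surface area (m^2)": 2 * (x_mm + l_mm) * y_mm * to_m2,
        "total volume (L)": x_mm * l_mm * y_mm * to_litres,
        "filled volume (L)": x_mm * l_mm * h_mm * to_litres,
    }

print(tank_summary(400, 500, 800, 450))
# -> roughly: bottom 0.32 m^2, sides 1.2 m^2, total 160 L, filled 144 L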
The aquarium’s litre capacity could be determined in a slow and inconvenient way by filling the tank with water one one-litre jar at a time until the water reaches the required level. But now you know there is an easier way.
In case you already know exactly what you would like, the app is also useful if you want to build an aquarium and specify the dimensions of the parts to be ordered.
Some more useful information:
It is a trendy solution to set up an aquarium at home. It can occupy a notable place in the design of the room. Once you have decided to bring nature into your home by establishing a
freshwater aquarium (or you might prefer a saltwater one, which is more troublesome to keep), you first have to select an appropriate tank to keep the fish healthy and happy.
There is a large assortment of aquarium tanks on the market, so there will be no difficulty choosing one that best fits your taste or the design of the room. There are tanks of
round shape similar to a bowl or a ball, rectangular and square tanks, and parallelepiped- or pyramid-like tanks. Besides its decorative function, the aquarium should create
the optimal living conditions and environment for the fish you would like to have. The fish should move freely and develop healthily. For that, the tank should be
spacious enough. | {"url":"http://justcalc.com/aquarium-rectangular-tank-calculator/","timestamp":"2024-11-03T09:39:38Z","content_type":"application/xhtml+xml","content_length":"37255","record_id":"<urn:uuid:d5a17197-d5ec-4c79-a9e0-9cb9147a5b8d>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00061.warc.gz"}
Quantum computers will crack your encryption—maybe they already have
Part of the job at Cisco® Talos® is not only to track current cyber threats but also predict ones that might crop up in the future. For Martin Lee, technical lead of security research within Talos,
that means thinking about tomorrow’s technologies as well as tomorrow’s threats.
And perhaps one of the most tantalizing of tomorrow’s technologies is a quantum computer—a machine that could calculate almost infinite possibilities simultaneously. Here, Lee explains what such
capabilities would mean for the encryption that keeps our digital messages safe.
Q. How does quantum computing differ from today’s computers?
A. In the current architecture of a CPU, we have bits that are either one or zero. In a quantum computer, instead of bits we have qubits that exist in both states—one and zero—and everything in
between, at the same time.
So, when we do calculations, we can have all possible solutions to a problem being considered at once. The key thing is it can be much, much faster than current architectures. Also, it means we can
do certain calculations which just are not practical using current computers.
Today, this is mostly theoretical: quantum computing is probably at the level of complexity that the silicon chip was in the late 60s or early 70s. But from a security point of view, we can very
clearly see at least one of the implications.
Q. What is it? How would quantum computing threaten current enterprise security?
A. It all comes down to multiplication. Any number can be represented as multiples of prime numbers. But if you have a very large number and you want to identify its primes, it is very difficult to
do. It takes an unfeasible amount of time using current computing architectures.
In quantum computers, however, we know there is a way we can calculate this very easily, very fast. So, we can identify the prime numbers in large numbers easily. There are all sorts of applications
for this which are going to bring all sorts of advantages to our lives.
But our current encryption algorithms depend on the fact that it is difficult to calculate the primes of large numbers. When somebody creates a quantum computer of a suitable size, they will be able
to crack many of our current secure encryption algorithms very easily.
This is going to be a massive change for the way we keep data secure through encryption.
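As a toy illustration of the asymmetry Lee describes (this example is ours, not from the interview): factoring a small RSA-style modulus by trial division is instant, but the same approach on a real 2048-bit modulus is computationally infeasible, which is exactly what Shor's algorithm on a large quantum computer would change.
# Trial division cracks a textbook-sized semiprime immediately;
# it is hopeless for the 617-digit moduli used in practice.
def factor_semiprime(n):
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    return None

print(factor_semiprime(3233))  # (53, 61) -- the classic toy RSA modulus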
Q. How close are we to quantum computing being a real threat for encryption?
A. Billions of dollars are being invested in quantum computing and we do know there is progress being made in academia and in the private sector. There are private-sector organizations that have made
proof-of-concept systems that are very simplistic, but still work.
They are not at a size which is adequate to calculate the prime factors in encryption. What we don’t see is what is being developed in secret. This really is the issue. We know this is something that
interests the world’s superpowers greatly.
Of course, if you were to develop a quantum computer which is sizeable and powerful enough to crack encryption, you wouldn’t tell anyone.
We don’t know if such a system is even possible, to be fair, but we need to be prepared for the possibility that such a system could be operational within the coming years.
Q. How can you be sure someone isn’t using a quantum computer to break encryption already?
A. There’s likely to be indications that such a computer is in use. If you were a nation state that possessed such an item, I think you would find it very difficult to resist the temptation to use
it, so we would expect to see more evidence of communications systems being intercepted.
When such a system becomes available, I think there will be clues. Given the level of investment going into this, it’s something which is probably on the immediate horizon. It could be tomorrow.
Q. What should security chiefs do to avoid being hacked by a quantum computer?
A. The algorithms that are believed to be ‘quantum insecure’ are those that rely on public key cryptography.
There are candidate algorithms that are believed to be quantum secure—there is currently standardization work being undertaken by the National Institute of Standards and Technology in the United States.
Also, it is expected that keys of greater than 3,072 bits are going to be quantum secure for the foreseeable future, so we have an upgrade pathway. But we need to be developing systems so that we can
swap out encryption algorithms as and when necessary.
| {"url":"https://newsroom.cisco.com/c/r/newsroom/en/us/a/y2022/m03/is-2022-the-year-encryption-is-doomed.html","timestamp":"2024-11-09T23:49:08Z","content_type":"text/html","content_length":"89242","record_id":"<urn:uuid:2d80f4f1-ab9b-45d3-8df3-f9ff2983972b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00085.warc.gz"}
Midwest Geometry Conference
The Midwest Geometry Conference had been an annual meeting since its founding in 1991 until 2007, and was revived in 2012 at The University of Oklahoma, Norman. The University of Oklahoma is one of the
four founding institutions, and Shihshu Walter Wei and others were involved in organizing and lecturing at the very first conference in 1991. This 20th conference is committed to bringing in
researchers and scholars from around the world to discuss their research and to interact with mathematicians and students from the Midwest and other regions at all levels.
It is a wonderful event or phenomenon in mathematics that the results of one discipline of thought have startling and unexpected consequences in another. The conference is a valuable networking
opportunity, as well as a great venue for dialogue on mathematical concepts, ideas, theories, methods, interconnectedness, applications, problems, collaborations, and the direction of mathematics.
We believe that diversity is a strength, that many things are connected and woven together, that the common bond is sacred, and that scarcely one thing is foreign to another. The Midwest Geometry Conference
has consisted of the following nineteen meetings:
• 1991 Kansas
• 1992 Kansas State
• 1993 Missouri-Columbia
• 1994 Iowa
• 1995 Washington University (St. Louis)
• 1996 Oklahoma
• 1997 Kansas
• 1998 Louisiana State
• 1999 Missouri-Columbia
• 2000 Iowa
• 2007 Iowa
and three conference proceedings/research volumes:
• Proceedings of the 2006 Midwest Geometry Conference, Commun. Math. Anal. 2008, Conference 1,
Edited by En-Bing Lin and Shihshu Walter Wei
• Proceedings of the 2007 Midwest Geometry Conference in honor of Thomas P. Branson,
SIGMA Symmetry Integrability Geom. Methods Appl.3 (2007), Edited by Michael Eastwood and A. Rod Gover
• American Mathematical Society Contemporary Mathematics, 646, Providence, RI, 2015,
Edited by Weiping Li and Shihshu Walter Wei
Each of these nineteen conferences has had the support of the National Science Foundation.
The 20th Midwest Geometry Conference will take place January 17-18, 2015 at the University of Oklahoma, located in Norman, Oklahoma.
Everyone is welcome! Minorities, women, persons with disabilities, graduate students, recent Ph.D.s, postdoctoral researchers, junior faculty, and high school math teachers are especially encouraged
to participate. | {"url":"https://mgc20.math.ou.edu/history.php","timestamp":"2024-11-09T08:54:13Z","content_type":"text/html","content_length":"6163","record_id":"<urn:uuid:f0644e01-4bff-4c5e-9ae3-e92b5a9efc7a>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00532.warc.gz"} |
What is N in Chemistry: Understanding Normality Definition and Equations
In the realm of chemistry, the term ‘N’ holds significant importance as it relates to concentration and reactivity of solutions. Known as “Normality,” ‘N’ denotes the gram equivalent weight per liter
of solution and plays a crucial role in determining the reactive capacity of molecules. In this article, we will delve deeper into the concept of Normality and its various aspects, including its
definition, equations, units of measurement, examples, and potential issues. Let’s explore the world of ‘N’ and understand its significance in the study of chemistry.
Understanding Normality in Chemistry
Normality (N) is a measure of concentration that signifies the gram equivalent weight per liter of a solution. The gram equivalent weight is a representation of the reactive capacity of a molecule in
a given chemical reaction. Therefore, the role of the solute in the reaction is a critical factor in determining the solution’s normality. The concept of Normality is also synonymous with the
equivalent concentration of a solution.
Equations for Normality (N)
There are several equations used to calculate Normality:
1. Normality (N) is equal to the molar concentration (ci) divided by an equivalence factor (feq):N = ci / feq
2. Alternatively, Normality (N) can be calculated as the gram equivalent weight divided by liters of solution:N = gram equivalent weight / liters of solution (often expressed in g/L)
3. In some cases, Normality (N) can also be found by multiplying the molarity by the number of equivalents:N = molarity x equivalents
The capital letter ‘N’ is used to represent concentration in terms of Normality. Additionally, it can be expressed as eq/L (equivalent per liter) or meq/L (milliequivalent per liter of 0.001 N,
commonly used in medical reporting).
The Role of Normality in Chemical Reactions
To better understand the concept of Normality, let’s consider a few examples:
1. Acid Reactions: Suppose we have a 1 M H2SO4 (sulfuric acid) solution. In this case, the Normality (N) will be 2 N because two moles of H+ ions are present per liter of the solution.
2. Sulfide Precipitation Reactions: In the same 1 M H2SO4 solution, if the focus shifts to the SO4^2- ion’s participation, the Normality will be 1 N.
Example Problem: Calculating Normality
Let’s calculate the Normality of a 0.1 M H2SO4 (sulfuric acid) solution for the given reaction:
H2SO4 + 2 NaOH → Na2SO4 + 2 H2O
In this equation, 2 moles of H+ ions (2 equivalents) from sulfuric acid react with sodium hydroxide (NaOH) to form sodium sulfate (Na2SO4) and water. Using the equation:
N = molarity x equivalents = 0.1 x 2 = 0.2 N
In this case, the Normality of the sulfuric acid solution is calculated to be 0.2 N.
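The worked example reduces to a one-line computation; a minimal sketch in Python (the names are ours, not from the article):
def normality(molarity, equivalents_per_formula_unit):
    # N = molarity x number of equivalents supplied per formula unit.
    return molarity * equivalents_per_formula_unit

# 0.1 M H2SO4 donates 2 H+ ions per molecule in the NaOH neutralization:
print(normality(0.1, 2))  # 0.2, i.e., 0.2 N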
Potential Issues Using N for Concentration
While Normality is a valuable unit of concentration, it may not be applicable in all situations due to its dependency on the equivalence factor, which can change based on the type of chemical
reaction being studied. For example, a 1 M solution of magnesium chloride (MgCl2) is 1 N with respect to the Mg^2+ ion but 2 N with respect to the Cl^- ion.
The Significance of Normality in Laboratory Work
While Normality plays a crucial role in theoretical chemistry, its practical usage in laboratory work is relatively limited compared to other concentration units like molality. However, it remains
significant in specific scenarios, including:
1. Acid-Base Titrations: Normality is particularly useful in acid-base titrations, where the concentration of a base is determined by the volume of an acid solution of known Normality required to
neutralize it.
2. Precipitation Reactions: Normality finds application in precipitation reactions, which involve the formation of an insoluble solid (precipitate) when two solutions are mixed.
3. Redox Reactions: In redox reactions, where there is a transfer of electrons between reactants, Normality can provide valuable insights into the concentration of participating species.
In conclusion, ‘N’ in chemistry, also known as Normality, is a vital measure of concentration that helps determine the gram equivalent weight per liter of solution. It plays a significant role in
understanding the reactive capacity of molecules in various chemical reactions. Through equations and examples, we have explored the calculation of Normality and its units of measurement. While it
may not be as extensively used in practical laboratory work, Normality remains essential in specific chemical analyses. Understanding Normality is fundamental for any aspiring chemist or anyone
seeking to comprehend the intricacies of chemical reactions and solution concentrations.
What is the normality equation?
The normality equation is a mathematical representation used to calculate the Normality (N) of a solution. It can be expressed in different forms:
1. Normality (N) is equal to the molar concentration (ci) divided by an equivalence factor (feq): N = ci / feq
2. Alternatively, Normality (N) can be found by dividing the gram equivalent weight by liters of solution: N = gram equivalent weight / liters of solution (often expressed in g/L)
3. In some cases, Normality (N) can also be determined by multiplying the molarity by the number of equivalents: N = molarity x equivalents
What are the units of normality in chemistry?
Normality is represented by the capital letter ‘N’ and indicates the concentration of a solution. It can also be expressed as eq/L (equivalent per liter) or meq/L (milliequivalent per liter of 0.001
N, which is commonly used in medical reporting).
Can normality be used for all types of chemical reactions?
While Normality is a valuable unit of concentration, it cannot be used for all types of chemical reactions. Its applicability depends on the type of chemical reaction being studied and relies on an
equivalence factor that can change for different reactions. For instance, in some cases, the Normality may be different for different ions within the same solution.
What are some examples of normality in acid reactions?
In acid reactions, Normality (N) is essential in determining the concentration of acidic species in a solution. Here are some examples:
For a 1 M H2SO4 (sulfuric acid) solution, the Normality (N) will be 2 N because two moles of H+ ions are present per liter of the solution.
In the same 1 M H2SO4 solution, if the focus shifts to the SO4^2- ion’s participation, the Normality will be 1 N.
How to find the normality of a given solution in chemistry?
To find the Normality (N) of a given solution in chemistry, you can follow these steps:
1. Identify the molarity (M) of the solution, which is the number of moles of solute per liter of solution.
2. Determine the number of equivalents in the reaction, which corresponds to the reactive species in the chemical equation.
3. Use the normality equation N = molarity x equivalents or N = ci / feq, where ‘ci’ is the molar concentration and ‘feq’ is the equivalence factor, to calculate the Normality.
By following these steps and plugging in the appropriate values, you can find the Normality of the given solution in chemistry.
| {"url":"https://academichelp.net/stem/chemistry/what-is-n-in-chemistry.html","timestamp":"2024-11-11T03:42:19Z","content_type":"text/html","content_length":"108913","record_id":"<urn:uuid:a5cd911c-c710-4f6f-a58a-385ce5f2c2f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00551.warc.gz"}
qp2/README.rst at b5b0cdb27a734162c2f0ab90e0aa83b33d13d490
This module proposes the various flavours of the DFT-based basis set correction originally proposed in J. Chem. Phys. 149, 194301 (2018); https://doi.org/10.1063/1.5052714.
This basis set correction relies mainly on:
+) The definition of a range-separation function \mu(r) varying in space to mimic the incompleteness of the basis set used to represent the Coulomb interaction. This procedure needs a two-body rdm
representing qualitatively the spatial distribution of the opposite-spin electron pairs.
Two types of \mu(r) are proposed, according to the strength of correlation, through the keyword "mu_of_r_potential" in the module "mu_of_r":
a) "mu_of_r_potential = hf" uses the two-body rdm of a HF-like wave function (i.e. a single Slater determinant developped with the MOs stored in the EZFIO folder).
When HF is a qualitative representation of the electron pairs (i.e. weakly correlated systems), such an approach for \mu(r) is OK.
See for instance JPCL, 10, 2931-2937 (2019) for typical flavours of the results.
Thanks to the trivial nature of such a two-body rdm, the equation (22) of J. Chem. Phys. 149, 194301 (2018) can be rewritten in a very efficient way, and therefore the limiting factor of such an
approach is the AO->MO four-index transformation of the two-electron integrals.
b) "mu_of_r_potential = cas_ful" uses the two-body rdm of CAS-like wave function (i.e. linear combination of Slater determinants developped in an active space with the MOs stored in the EZFIO
If the CAS is properly chosen (i.e. the CAS-like wave function qualitatively represents the wave function of the systems), then such an approach is OK for \mu(r) even in the case of strong
+) The use of DFT correlation functionals with multi-determinant reference (Ecmd). These functionals are originally defined in the RS-DFT framework (see for instance Theor. Chem. Acc. 114, 305 (2005))
and designed to capture short-range correlation effects. An important quantity arising in the Ecmd is the exact on-top pair density of the system, and the main differences between approximate Ecmd
functionals lie in their different approximations for the exact on-top pair density.
The two main flavours of Ecmd depend on the strength of correlation in the system:
a) for weakly correlated systems, the ECMD PBE-UEG functional, based on seminal work in RSDFT (see JCP, 150, 084103 1-10 (2019)) and adapted for the basis set correction in JPCL, 10, 2931-2937
(2019), uses the exact on-top pair density of the UEG at large mu and the PBE correlation functional at mu = 0. As shown in JPCL, 10, 2931-2937 (2019), such a functional is more accurate than the
ECMD LDA for weakly correlated systems.
b) for strongly correlated systems, the ECMD PBE-OT, which uses the extrapolated on-top pair density of the CAS wave function thanks to the large-\mu behaviour of the on-top pair density, is
accurate, but suffers from an S_z dependence (i.e. it is not invariant with respect to S_z) because of the spin-polarization dependence of the PBE correlation functional entering at mu = 0.
An alternative is ECMD SU-PBE-OT, which uses the same on-top pair density as ECMD PBE-OT but a ZERO spin polarization to remove the S_z dependence. As shown in ???????????, this strategy is one of
the more accurate, and it respects S_z invariance and size consistency if the CAS wave function is correctly chosen. | {"url":"https://git.irsamc.ups-tlse.fr/LCPQ/qp2/src/commit/b5b0cdb27a734162c2f0ab90e0aa83b33d13d490/src/basis_correction/README.rst","timestamp":"2024-11-14T08:47:27Z","content_type":"text/html","content_length":"36301","record_id":"<urn:uuid:345143c1-44c6-48f8-bce9-cded3eeb19a7>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00143.warc.gz"}
The first and last term of an AP are 1 and 121; find the number of terms in the AP
In an arithmetic progression (AP), you can find the number of terms using the following formula:
Number of terms (n) = [(Last term - First term) / Common difference] + 1
In your case, you're given the first term (1) and the last term (121); since the sequence is an AP, the common difference between consecutive terms is constant.
n = (121 - 1) / Common difference + 1
n = 120 / Common difference + 1
To find the number of terms, we need the common difference. Since the common difference is not provided, we can't determine the exact number of terms without that information.
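For illustration (this is our addition, not part of the original answer): if we additionally assume the common difference d is a positive integer, then n = 120/d + 1 is a whole number exactly when d divides 120, and each divisor gives a valid progression:
# Every positive integer divisor d of 120 yields an AP from 1 to 121
# with n = 120/d + 1 terms.
for d in range(1, 121):
    if 120 % d == 0:
        print(f"d = {d:3d} -> n = {120 // d + 1}")
# e.g. d = 1 -> n = 121, d = 4 -> n = 31, d = 120 -> n = 2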
| {"url":"https://maths.loudstudy.com/2023/10/the-first-and-last-term-of-ap-are-1-and.html","timestamp":"2024-11-11T19:37:07Z","content_type":"application/xhtml+xml","content_length":"237757","record_id":"<urn:uuid:bf9954bc-5bc9-446c-967b-1914710e0f32>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00444.warc.gz"}
Expanding Horizons XXIX
Presenters & Topics
George Ashline
Saint Michael’s College
George Ashline is a Professor of Mathematics at Saint Michael’s College. He has also taught in the Vermont Mathematics Initiative for many years. He is a participant and faculty consultant in Project
NexT (New Experiences in Teaching), a Mathematical Association of America program created for new or recent doctorates in the mathematical sciences who are interested in improving the teaching and
learning of undergraduate mathematics. He has written and co-written a number of articles concerning mathematics education and pedagogy. For many years, he has served as a faculty consultant,
including several years as a Table Leader and also Question Leader for the Advanced Placement Calculus Reading.
I am willing to present to multiple classes at once, as well as visit different classes on the same day. I have given versions of these talks at various levels, from high school to elementary school.
I have much faculty consultant experience in grading AP Calculus Free Response questions, and I would be willing to answer questions that any AP Calculus teachers may have about that.
Video introduction to talks.
E-mail: gashline@smcvt.edu
Phone: (802) 654-2434
Correlation Properties and Applications
Through an activity and examples, we investigate properties of scatter plots and correlation in context, leading to a discussion of the correlation coefficient and challenges inherent in attempting
to find causal links between variables. If time and technology permit, students can explore the online Correlation Guessing Game.
Prerequisites: Familiarity with the concepts of the mean and standard deviation of a variable (also, two-variable statistics calculators are helpful)
CC connections:
An Introduction to Bias and Margin of Error
Through an initial activity, we explore the potential impact of bias in statistical analysis. We can also consider how bias may arise in survey questions and ways that it can be reduced. In another
activity, we can consider different types of error that may impact a survey or experiment and the meaning of margin of error.
Prerequisites: Familiarity with averages, percentages, and surveys
CC connections:
Exponential Functions in Snowflakes, Carpets, and Paper Folding
Through constructions of initial stages of several fractals, students can explore and represent underlying patterns using exponential functions. Other examples of exponential functions and their
properties can be discussed. If time permits, students can play the Chaos Game to “create” the Sierpinski Triangle.
Prerequisites: Familiarity with exponents and functions
CC connections:
• 8.F) Functions
□ Define, evaluate, and compare functions
□ Use functions to model relationships between quantities
• F-LE) Functions: Linear, Quadratic, and Exponential Models
□ Construct and compare linear, quadratic, and exponential models and solve problems
□ Interpret expressions for functions in terms of the situation they model
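For teachers who want a quick preview of the Chaos Game mentioned above, a minimal sketch in Python (ours, not part of the program listing):
import random

# Chaos game: repeatedly jump halfway toward a randomly chosen vertex
# of a triangle; the visited points trace out the Sierpinski triangle.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
x, y = 0.25, 0.25
points = []
for _ in range(10000):
    vx, vy = random.choice(vertices)
    x, y = (x + vx) / 2, (y + vy) / 2
    points.append((x, y))
# Scatter-plotting `points` (e.g., with matplotlib) reveals the fractal.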
Number Pattern Challenges
How can you predict the value of a secret number based on its location on some “magical” cards? How can you advise a game show host as to how to best award prizes from one dollar up to one thousand
dollars using only dollar bills filling a mere ten envelopes? How can we guide a local farmer about using an amazing forty pound broken rock to measure various weights from one pound up to forty
pounds? These challenges and more reveal fascinating patterns of numbers, and strategies for solving problems.
CC connections:
• 4.NBT) Numbers and Operations in Base Ten
□ Generalize place value understanding for multi-digit whole numbers
□ Use place value understanding and properties of operations to perform multi-digit arithmetic
• 5.NBT) Numbers and Operations in Base Ten
□ Understand the place value system
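The envelope and broken-rock puzzles above are both positional-notation tricks (binary and balanced ternary, respectively). A sketch of the envelope one in Python (ours, not from the program):
# Ten envelopes holding 1, 2, 4, ..., 512 dollars cover every amount
# from 1 to 1023, so any prize from $1 to $1000 is a subset of envelopes.
def envelopes_for(amount, n_envelopes=10):
    return [1 << k for k in range(n_envelopes) if amount & (1 << k)]

print(envelopes_for(1000))  # [8, 32, 64, 128, 256, 512]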
Framing the Proof of the Pythagorean Theorem and Investigating Some Interesting Pythagorean Triple Properties
We will begin this session with some hands-on proofs of the Pythagorean Theorem using sets of congruent right triangles and other famous methods, with some interesting historical connections to some ancient
mathematicians and civilizations. We will then discuss Pythagorean triples and some of their properties, including some neat connections that they have with Fibonacci numbers.
CC connections:
Encountering the Great Problems from Antiquity: Hands-On Trisection, Duplication, and Quadrature
The Ancient Greeks grappled with the three classical problems of trisecting an angle, doubling the volume of a cube, and squaring a circle using only straightedge and compass constructions. These
constructions were shown to be impossible millennia later with the evolution of abstract algebra and analysis in the nineteenth century.
We will consider some of the rich approaches that have arisen to solve these problems using additional techniques and tools, including origami. Along the way, we will encounter some interesting work
of such mathematicians as Archimedes and Eratosthenes and more recent scholars.
CC connections:
• 7.G) Geometry
□ Draw, construct, and describe geometrical figures and describe the relationships between them
• 8.G) Geometry
□ Understand congruence and similarity using physical models, transparencies, or geometry software
• G-CO) Geometry: Congruence
□ Make geometric constructions
Estimating the Circumference of the Earth – Following in the Shadow of Eratosthenes
The goal of this activity is to recreate to a certain degree the remarkable estimate of the circumference of the earth done by the Greek mathematician Eratosthenes over two millennia ago. Using the
length of the sun’s shadow at high noon (“sun transit”) at two locations, groups will estimate the “sun” angle (the angle between the sun’s rays and a vertical stick) at these two locations. Knowing
the “sun” angle at two different locations will allow us to estimate the circumference of the earth.
CC connections:
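The core computation of this activity fits in a few lines; a sketch in Python with made-up measurements (ours, not from the program):
import math

# If two sites lie roughly on the same meridian, the difference in their
# noon "sun" angles equals their difference in latitude.
stick = 1.0                           # stick height (m)
shadow_a, shadow_b = 0.1763, 0.3640   # hypothetical shadow lengths (m)
angle_a = math.degrees(math.atan(shadow_a / stick))  # ~10 degrees
angle_b = math.degrees(math.atan(shadow_b / stick))  # ~20 degrees
distance_km = 1110   # hypothetical north-south distance between the sites
print(round(360 / (angle_b - angle_a) * distance_km))  # ~40000 km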
Exploring Ancient Number Systems
We investigate how numbers are written in three ancient number systems, namely the Egyptian Hieroglyphic (ca. 3400 B.C.), the Attic-Greek Herodianic (ca. 600 B.C.), and the Mayan (ca. 400 A.D.).
These systems not only provide useful mathematical information about properties of number systems, but also offer interesting historical connections to cultural beliefs and characteristics of several
important ancient civilizations. Experience with these number systems helps to provide deeper understanding of our own decimal number system.
CC connections:
• 4.NBT) Numbers and Operations in Base Ten
□ Generalize place value understanding for multi-digit whole numbers
□ Use place value understanding and properties of operations to perform multi-digit arithmetic
• 5.NBT) Numbers and Operations in Base Ten
□ Understand the place value system
Josh Bongard
What does math have to do with robots?
We will explore the relationship between math and robots by performing two collaborative games. One will explore the mathematics of optimization: how to search very large spaces, filled with mostly
useless patterns, to find the small minority of useful ones. In the second game, we will apply this idea to find useful brains for robots, so that they perform useful or entertaining tasks.
Level: Middle or high school. This presentation can be adapted given the age level and mathematical sophistication of the audience.
Using Math to Create Robots … and Xenobots
Joanna Ellis-Monaghan
Saint Michael’s College
Joanna is currently in the Netherlands, and so is available for Zoom presentations only.
E-mail: jellismonaghan@gmail.com
A hands-on introduction to mathematical modeling with graph theory.
Graphs model intercommunications, relationships, and conflicts. We will explore a variety of applications from: the internet, the stock market, classroom scheduling, power grids, the Kevin Bacon
game, computer chips, social circles, and DNA.
Is your shoelace really knotted? How can you tell? A gentle introduction to knot theory.
Graph Theory in the Real World
“Where does math come from?” We will see some of the new math in network theory being developed today as well as some of the critical applications driving its creation. In particular, we will see new
mathematical theory created for DNA origami and tile assembly used for biomolecular computing, nanoelectronics, and cutting-edge medicine. We conclude the talk by showcasing examples of what
mathematicians do in real life, and how some of the top jobs use mathematical skills.
Level: Grade 6 and up
Length: 20 min to 2 hours (longer versions may have some hands on activities).
David Hathaway
Level: Middle School or High School
Length: 60 to 80 minutes (80 preferred)
Math content: Coordinate systems, calculating area and volume, building objects from transformations (scale, rotate, translate) and set operations (union, intersection, difference) on geometric shapes
Prerequisites: None
Other requirements:
• An electrical outlet (for the printer and the laptop to drive it).
• A table on which to set up the printer (about 3 feet by 2 feet).
• A projector to connect to my laptop
CC connections:
• 7.G) Geometry
□ Solve real-life and mathematical problems involving angle measure, area, surface area, and volume
• 8.G) Geometry
□ Solve real-world and mathematical problems involving volume of cylinders, cones, and spheres
Where am I? – How GPS Works
We begin with an overview of how the GPS system works (satellites broadcast signals and do not have to know where receivers are, or even if any are listening). We then do a hands-on exercise with tape measures to demonstrate the basic idea of trilateration. If there’s time we’ll talk about
conversion from Cartesian (x,y,z) coordinates to the latitude / longitude / altitude coordinate system, do a little review of celestial navigation methods and history, and wrap up by talking about
some of the complications in real life, like non-uniform and non-spherical earth, relativistic time dilation due to the earth’s gravity. The hands on exercise can be skipped for a larger group, and
content and emphasis can be adjusted depending on your needs.
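A minimal 2D sketch of the trilateration math in code (not from the talk; the anchor coordinates and distances below are made-up values): subtracting one circle equation from another cancels the squared unknowns, leaving a pair of linear equations that elimination (here via Cramer's rule) solves.

#include <cstdio>

// Solve for (x, y) given three anchors (xi, yi) and measured distances di.
// Subtracting circle 1's equation from circles 2 and 3 yields two linear equations.
void trilaterate(const double x[3], const double y[3], const double d[3],
                 double& px, double& py) {
    double a1 = 2 * (x[1] - x[0]), b1 = 2 * (y[1] - y[0]);
    double c1 = d[0]*d[0] - d[1]*d[1] + x[1]*x[1] - x[0]*x[0] + y[1]*y[1] - y[0]*y[0];
    double a2 = 2 * (x[2] - x[0]), b2 = 2 * (y[2] - y[0]);
    double c2 = d[0]*d[0] - d[2]*d[2] + x[2]*x[2] - x[0]*x[0] + y[2]*y[2] - y[0]*y[0];
    double det = a1 * b2 - a2 * b1;  // Cramer's rule on the 2x2 linear system
    px = (c1 * b2 - c2 * b1) / det;
    py = (a1 * c2 - a2 * c1) / det;
}

int main() {
    double x[3] = {0, 10, 0}, y[3] = {0, 0, 10};
    double d[3] = {5.0, 8.0622577, 6.7082039};  // distances measured from the point (3, 4)
    double px, py;
    trilaterate(x, y, d, px, py);
    std::printf("position: (%.2f, %.2f)\n", px, py);  // prints ~ (3.00, 4.00)
    return 0;
}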
Level: Middle School or High School
Length: 60 minutes
Math content: Pythagorean theorem, expansion of squares of binomials, Cartesian coordinates, solving simultaneous linear equations using elimination, maybe a little trigonometry and triangle geometry
Other requirements: A projector to connect to my laptop
CC connections:
Gerard T. LaVarnway
Cryptology: The Art and Science of Secret Writing
An introduction to cryptology will be given. The history of cryptology will be discussed from the time of Caesar to the present. Various ciphers will be demonstrated. The mathematical foundations of
ciphers will be discussed.
Level: Grades 9 – 12
Length: 40 – 50 minutes
The Use of Linear Algebra in Cryptology
Humankind is fascinated with message concealment. Cryptology – the art and science of secret writing – enjoys a rich history of mystery, intrigue and suspense. For mathematicians, cryptology employs
applications of mathematics from a variety of fields including linear algebra. Examples of matrix techniques for encryption and decryption will be discussed. In particular, the Hill cipher will be
demonstrated. Techniques for decrypting secret messages will be demonstrated.
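As a hedged illustration (not material from the talk itself), here is a minimal 2x2 Hill cipher encryption; the key matrix is a common textbook example:

#include <cstdio>

// Encrypt A-Z plaintext in digraphs with key K (must be invertible mod 26):
// each ciphertext pair is K * p (mod 26).
int main() {
    int K[2][2] = {{3, 3}, {2, 5}};   // det = 9, gcd(9, 26) = 1, so K is invertible mod 26
    const char* plain = "HELP";        // length must be even in this sketch
    char cipher[16] = {0};
    for (int i = 0; plain[i] && plain[i + 1]; i += 2) {
        int p0 = plain[i] - 'A', p1 = plain[i + 1] - 'A';
        cipher[i]     = 'A' + (K[0][0] * p0 + K[0][1] * p1) % 26;
        cipher[i + 1] = 'A' + (K[1][0] * p0 + K[1][1] * p1) % 26;
    }
    std::printf("%s -> %s\n", plain, cipher);  // prints HELP -> HIAT
    return 0;
}

Decryption works the same way with the inverse of K modulo 26.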
Level: Grades 9 – 12
Length: 40 – 50 minutes
Barbara O’Donovan
This presentation discusses the maximum amount of power that can be extracted from the wind by a wind turbine rotor. Using principles of fluid mechanics and physics, with concepts such as control volumes, conservation of momentum, Bernoulli's equation, thrust, and power, together with simplifying assumptions, an expression for rotor power is developed. The rotor power expression is then optimized using derivatives to find the axial induction factor that maximizes the power extracted. This is followed by a discussion of the simplifying assumptions made in developing the rotor power expression and, finally, a brief look at the current state of the wind power industry and its potential locally, nationally, and internationally.
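A sketch of the optimization step described above (the standard actuator-disk result, not text from the talk): with axial induction factor $a$, the power coefficient is

$$C_P = 4a(1-a)^2, \qquad \frac{dC_P}{da} = 4(1-a)(1-3a) = 0 \;\Rightarrow\; a = \tfrac{1}{3},\quad C_P = \tfrac{16}{27} \approx 0.593$$

which is the Betz limit on the fraction of wind power a rotor can extract.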
Prerequisites: Calculus applications of derivatives – power rule and product rule, experience with physics would be helpful but is not necessary
Michael Olinick
Cryptology: The Mathematics of Making and Breaking Secret Codes
Mathematics provides the answer.
The Near-Sighted Fly: A Topological View of the Universe
I See It but I Don’t Believe It: Some Surprising Facts About Infinite Sets
For much of the history of mathematics and Western thought, “infinity” was viewed as an unknowable subject, not susceptible to rational thought and investigation. Georg Cantor changed all this with a
seemingly simple but revolutionary breakthrough in the late 19th century. Cantor proved a number of results about infinite sets, many of which challenged our intuitions and startled the mathematicians of his time. Even Cantor himself found it hard to believe some of his own theorems. We will examine Cantor's controversial breakthrough and see why one leading mathematician labeled it
“a disease from which mathematics will one day recover”, while another boasted that “No one shall expel us from the paradise that Cantor has created.”
Darlene M. Olsen
Norwich University
Darlene Olsen, Ph.D., is a Charles A. Dana Professor of Mathematics and Norwich coordinator for the Vermont Biomedical Research Network. She is the 2013 Homer L. Dodge Award winner for Excellence in Teaching.
She joined the Norwich faculty in 2006 and routinely teaches statistics courses, such as Introductory Statistics, Statistics for Health Science majors, and Statistical Methodology for STEM majors. She
also teaches other general mathematics courses, including Linear Algebra and Liberal Arts Mathematics.
Her current research areas are biostatistics and pedagogy in mathematics and statistics. Olsen has received research grants through the Vermont Genetics Network, served as a statistical consultant,
and published work in several research journals.
She received her doctorate in mathematics from the University at Albany in 2003. She also holds an MS in biometry and statistics (2001) and an MA in mathematics (1997) from the University at Albany
and a BA in mathematics (1994) from SUNY Geneseo.
E-mail: dolsen1@norwich.edu
Phone: (802) 485-2875
Prefers to give in person presentations.
Maximizing the Flight Time of a Paper Helicopter
The mission is to design a paper helicopter that remains aloft the longest when dropped from a certain height. Various combinations of design factors contribute to the flight time.
Level: Grades 10-12
Length: 30-45 minutes
Mathematical Ties to Tying Neckties
Did you ever ask the question of how many possible ways there are to tie a necktie? Furthermore, what factors determine an aesthetic tie knot? This problem can be answered using mathematics. We will
discover the mathematical ways for describing how to tie necktie knots. We will also classify knots according to their size and shape.
Level: High school
Length: 45 minutes
3 Month Compound Interest Calculator
How do you calculate compound interest for 3 months? Compound interest for a short period like 3 months can be estimated using the formula: A = P(1 + (r/n))^(nt), where A is the future value, P is
the principal amount, r is the annual interest rate, n is the number of times interest is compounded per year, and t is the time in years. In this case, you would use n = 4 (quarterly compounding)
and t = 3/12 (3 months converted to years).
How is interest calculated for 3 months? Interest for 3 months can be calculated using the formula: Interest = Principal x Rate x Time. You would use the principal amount, the annual interest rate, and 3/12 as the time in years.
What does compounded 3 monthly mean? Compounded 3 monthly means that interest is calculated and added to the principal every 3 months. It's another way of saying that interest is compounded quarterly.
How much is $1000 worth at the end of 2 years if the interest rate of 6% is compounded daily? To estimate this, you can use the compound interest formula: A = P(1 + (r/n))^(nt), where P = $1000, r =
6% or 0.06, n = 365 (daily compounding), and t = 2 years. Plug these values into the formula to estimate the future value (A).
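As a minimal sketch (not part of the original page), the following C++ snippet evaluates the compound interest formula for the cases above; the function name and sample values are my own:

#include <cmath>
#include <cstdio>

// Future value A = P * (1 + r/n)^(n*t)
double compound(double P, double r, int n, double t) {
    return P * std::pow(1.0 + r / n, n * t);
}

int main() {
    // $1000 for 3 months at 5% APR, compounded quarterly: n = 4, t = 3/12 years
    std::printf("3 months, quarterly: %.2f\n", compound(1000.0, 0.05, 4, 3.0 / 12.0)); // 1012.50
    // $1000 for 2 years at 6% APR, compounded daily: n = 365, t = 2
    std::printf("2 years, daily: %.2f\n", compound(1000.0, 0.06, 365, 2.0)); // ~1127.49
    return 0;
}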
How do I calculate compound interest monthly? Use the same compound interest formula A = P(1 + (r/n))^(nt), but set n = 12 (for monthly compounding).
How do you calculate quarterly compound interest? Use the same compound interest formula A = P(1 + (r/n))^(nt), but set n = 4 (for quarterly compounding).
What is the formula for calculating interest? The formula for calculating interest is usually: Interest = Principal x Rate x Time.
How do you calculate interest on a calculator? To calculate interest on a calculator, input the principal amount, multiply it by the interest rate, and then multiply the result by the time period in years.
How do you calculate simple interest for 2 months? Simple interest for 2 months can be calculated using the formula: Interest = Principal x Rate x (Time/12), where Time is in months.
Is compounded quarterly every 3 months? Yes, compounded quarterly means that interest is calculated and added to the principal every 3 months.
Is it better to get interest monthly or annually? It depends on your financial goals and the interest rate. Generally, monthly compounding may result in slightly higher returns over time compared to
annual compounding for the same interest rate.
Is it better to compound monthly? Compounding monthly can often yield higher returns compared to less frequent compounding like annually or quarterly.
How long will it take for a $2000 investment to double in value? To estimate, you can use the Rule of 72: Divide 72 by the annual interest rate (in percentage terms) to estimate the number of years
it will take to double your investment. For example, with a 6% interest rate, it would take approximately 12 years (72 / 6 = 12 years) to double your $2000 investment.
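For reference, a sketch of where the Rule of 72 comes from (a standard derivation, not taken from the page): solving $P(1+r)^t = 2P$ for $t$ gives

$$t = \frac{\ln 2}{\ln(1+r)} \approx \frac{0.693}{r} = \frac{69.3}{R}$$

where $R$ is the rate in percent. The number 72 is used instead of 69.3 because it has many small divisors and slightly corrects the approximation at typical rates.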
How much is $5000 at 3% interest? To estimate, you can calculate simple interest using the formula: Interest = Principal x Rate x Time. For $5000 at 3% interest, the interest earned in one year would
be approximately $150.
Which is better monthly or quarterly interest? Monthly interest is generally better for maximizing returns, as it compounds more frequently and can result in slightly higher overall returns compared
to quarterly interest for the same rate.
What will $1 be worth in 10 years? The future value of $1 in 10 years depends on the interest rate. To estimate, use the compound interest formula with the given rate and time period.
How much interest will $100,000 earn in a year? The interest earned on $100,000 in a year depends on the interest rate. To calculate, multiply the principal by the annual interest rate.
Is compounded daily better than monthly? Compounded daily can yield even higher returns compared to monthly compounding, but the difference may not be significant for lower interest rates.
Which is better compounded quarterly or annually? Compounded quarterly is usually better for maximizing returns compared to annual compounding, as it compounds more frequently.
What is 5% interest compounded quarterly? 5% interest compounded quarterly means that you earn 5% interest per year, and it’s compounded four times a year.
How many months is quarterly compounding? Quarterly compounding means interest is calculated and added every three months, which is equivalent to 3 months.
GEG Calculators is a comprehensive online platform that offers a wide range of calculators to cater to various needs. With over 300 calculators covering finance, health, science, mathematics, and
more, GEG Calculators provides users with accurate and convenient tools for everyday calculations. The website’s user-friendly interface ensures easy navigation and accessibility, making it suitable
for people from all walks of life. Whether it’s financial planning, health assessments, or educational purposes, GEG Calculators has a calculator to suit every requirement. With its reliable and
up-to-date calculations, GEG Calculators has become a go-to resource for individuals, professionals, and students seeking quick and precise results for their calculations.
Graduate Courses
Each entry below gives the course code (where available), title, and description, followed by its prerequisite, number of units, and hours per week.
Math 201. Concepts and Techniques in Abstract Algebra: Groups, rings and homomorphisms. Prerequisite: Math 109/COI. Units: 3; hours/week: 3.
Math 202.1. Analysis I: Real numbers, sequences of real numbers and limits, continuity of functions, derivatives, Riemann integral. Prerequisite: COI. Units: 3; hours/week: 3.
Math 202.2. Analysis II: \(n\)-dimensional Euclidean space, functions of several variables, partial derivatives, multiple integrals, complex-valued functions and their derivatives. Prerequisite: Math 202.1. Units: 3; hours/week: 3.
Math 203. Matrices and Applications: Linear systems of equations and matrices, matrix operations, determinants, vector spaces, linear transformations, eigenvalues, eigenvectors, applications. Prerequisite: COI. Units: 3; hours/week: 3.
Math 204. Classical and Modern Geometry: Finite geometries, euclidean and non-euclidean geometries, projective geometry, geometric transformations. Prerequisite: COI. Units: 3; hours/week: 3.
Concepts and Methods in Probability and Statistics: Descriptive statistics, probability and probability distributions, sampling theory, estimation and test of hypothesis, linear correlation and regression analysis. Prerequisite: COI. Units: 3; hours/week: 3.
Math 208. History and Development of the Fundamental Concepts of Mathematics. Prerequisite: COI. Units: 3; hours/week: 3.
Math 209.1. Selected Topics in Applied Mathematics. Prerequisite: COI. Units: 3; hours/week: 3.
Math 209.2. Selected Topics in Discrete Mathematics. Prerequisite: Math 201. Units: 3; hours/week: 3.
Math 210.1. Modern Algebra I: Semigroups and groups; rings; fields; groups with operators. Selected topics. Prerequisite: COI. Units: 3; hours/week: 3.
Math 210.2. Modern Algebra II: A continuation of Mathematics 210.1. Prerequisite: Math 210.1. Units: 3; hours/week: 3.
Linear Algebra: Vector spaces, linear mappings; theorem of Hamilton-Cayley; modules over principal ideal domains; Jordan canonical form, rational canonical form; bilinear forms, inner products; law of inertia, spectral theorem; multilinear forms; tensor products. Prerequisite: Math 110.2/114/COI. Units: 3; hours/week: 3.
Math 214. Theory of Matrices. Prerequisite: COI. Units: 3; hours/week: 3.
Math 216. Lie Groups and Lie Algebras: Classical matrix Lie groups, Lie algebras of Lie groups, nilpotent and solvable algebras, semisimple algebras, representations. Prerequisite: Math 210.1. Units: 3; hours/week: 3.
Theory of Numbers: Linear congruences, Euler's and Wilson's theorems, quadratic residues, Quadratic Reciprocity Law, Jacobi's and Kronecker's symbols, Pellian equation, positive binary and ternary quadratic forms, theory of the sums of two and three squares. Prerequisite: COI. Units: 3; hours/week: 3.
Theory of Algebraic Numbers: Algebraic number fields; algebraic integers; basis and discriminant; ideals; fundamental theorem on the decomposition of ideals; ideal classes; Minkowski's theorem; the class formula; units; Fermat's last theorem. Selected topics. Prerequisite: COI. Units: 3; hours/week: 3.
Math 220.1. Theory of Functions of a Real Variable I: Lebesgue and other integrals; differentiation; measure theory. Prerequisite: Math 123.1/COI. Units: 3; hours/week: 3.
Math 220.2. Theory of Functions of a Real Variable II: Continuation of Math 220.1. Selected topics. Prerequisite: Math 220.1. Units: 3; hours/week: 3.
Math 221. Partial Differential Equations: Equations of the first and second order. Green's function. Boundary value problems. Prerequisite: COI. Units: 3; hours/week: 3.
Approximation Theory: Taylor's theorem, Weierstrass approximation theorem, approximation in Hilbert spaces, Fourier series and Fourier transform, direct and inverse theorems, algebraic and trigonometric interpolation, Whittaker-Shannon sampling theory, wavelet analysis. Prerequisite: Math 220.1/COI. Units: 3; hours/week: 3.
Control Theory: Elements of the calculus of variations. Naive optimal control theory; functional analysis; generalized optimal control theory; the Pontrjagin maximum principle for chattering controls; research problems. Prerequisite: Math 126, 142/COI. Units: 3; hours/week: 3.
Math 227. Calculus of Variations: Euler's equations. Legendre conditions. Jacobi's conditions. Isoperimetric problems. Lagrange's methods. Dirichlet's principle. Prerequisite: COI. Units: 3; hours/week: 3.
Math 228. Theory of Functions of a Complex Variable: Analytic functions; geometric function theory; analytic continuation; Riemann Mapping Theorem. Prerequisite: COI. Units: 3; hours/week: 3.
Functional Analysis: Linear operators, linear functionals, topological linear spaces, normed spaces, Hilbert spaces, functional equations, Radon measures, distributive and linear partial differential equations, and spectral analysis. Prerequisite: Math 220.1. Units: 3; hours/week: 3.
Math 235. Mathematics in Population Biology: Continuous and discrete population models for single species, models for interacting populations, evolutionary models, dynamics of infectious diseases. Prerequisite: Math 121.1/equiv/COI. Units: 3; hours/week: 3.
Mathematics in Biological Processes: Biological oscillators and switches, perturbed and coupled oscillators, reaction diffusion, enzyme kinetics, chemotaxis, circadian systems models, coupled cell networks. Prerequisite: COI. Units: 3; hours/week: 3.
Math 240. Geometric Crystallography: Isometries, frieze groups, crystallographic groups, lattices and invariant sublattices, finite groups of isometries, geometric and arithmetic crystal classes. Prerequisite: Math 210.1/equiv. Units: 3; hours/week: 3.
Math 241. Hyperbolic Geometry: Moebius transformations, hyperbolic plane and hyperbolic metric, geometry of geodesics, hyperbolic trigonometry, groups of isometries on the hyperbolic plane. Prerequisite: Math 210.1/equiv. Units: 3; hours/week: 3.
Math 242. General Topology: Topological spaces; metric spaces; theory of convergence; bases; axioms of countability; subspaces; homeomorphisms. Selected topics. Prerequisite: COI. Units: 3; hours/week: 3.
Math 243. Algebraic Topology: Homotopy, fundamental group, singular homology, simplicial complexes, degree and fixed point theorems. Prerequisite: Math 242. Units: 3; hours/week: 3.
Math 246. Differential Geometry: Classical theory of curves and surfaces. Mappings of surfaces. Differential structures. Lie groups and frame bundles. Prerequisite: Math 123.2/COI. Units: 3; hours/week: 3.
Math 247. Algebraic Geometry: The general projective space. Collineations and correlations in a projective space. Algebraic manifolds. Plane curves. Quadratic transformation of systems of plane curves. Prerequisite: COI. Units: 3; hours/week: 3.
Math 249. Selected Topics in Geometry and Topology. Prerequisite: COI. Units: 3; hours/week: 3. Topic to be specified for record purposes.
Math 250. Probability Theory: Random variables, laws of large numbers, special probability distributions, central limit theorem, Markov chains, Poisson process, martingales. Prerequisite: Math 220.1/COI. Units: 3; hours/week: 3.
Combinatorial Mathematics: Permutations and combinations. Generating functions. Principle of inclusion and exclusion. Recurrence relations. Occupancy. Matrices of zeros and ones. Partitions. Orthogonal Latin squares. Combinatorial designs. Prerequisite: COI. Units: 3; hours/week: 3.
Math 260. Actuarial Theory and Practice: Theoretical and practical aspects of reserves, policy values, and extended life contingency models in insurance business practice. Prerequisite: COI. Units: 3; hours/week: 3.
Math 261. Survival and Loss Models: Introduction to risk theory and an overview of various survival and loss models as applied to financial risk and short-term insurances. Prerequisite: COI. Units: 3; hours/week: 3.
Math 262. Actuarial Models: Survey of the actuarial modeling process and its resulting models with consideration to the characteristics and nuances of insurance data. Prerequisite: Math 261 for PMAM (AS) majors; COI for non-PMAM (AS) majors. Units: 3; hours/week: 3.
Life Insurance and Retirement Benefit Actuarial Practice: Application of life contingency models and long-term actuarial mathematics in traditional, universal, and participating life insurances, and retirement benefit areas of actuarial practice. Prerequisite: Math 260 for PMAM (AS) majors; COI for non-PMAM (AS) majors. Units: 3; hours/week: 3.
Group, Health and Non-life Insurance Actuarial Practice: Application of various actuarial models and short-term actuarial mathematics in group, health, and non-life insurance areas of actuarial practice. Prerequisite: Math 262 for PMAM (AS) majors; COI for non-PMAM (AS) majors. Units: 3; hours/week: 3.
Math 265. Stochastic Calculus: Conditional expectations, martingales, Brownian motion, Ito integral, Ito formula, stochastic differential equations, Girsanov Theorem, applications to mathematical finance. Prerequisite: Math 150.1/COI. Units: 3; hours/week: 3.
Math 266. Mathematical Finance: Binomial asset pricing model, vanilla options, exotic options, American options, arbitrage probabilities, profit and loss, stochastic interest rates. Prerequisite: Math 265/COI. Units: 3; hours/week: 3.
Math 268.1. Product Management Aspects in Actuarial Science: Technical and practical actuarial concepts involving the launching of insurance products and their maintenance, and valuation of liabilities. Prerequisite: Math 260 and Math 262 for PMAM (AS) majors; COI for non-PMAM (AS) majors. Units: 3; hours/week: 3.
Financial Management Aspects in Actuarial Science: Technical and practical actuarial concepts regarding insurance company solvency, capital and risk management, and financial reporting. Prerequisite: Math 268.1 for PMAM (AS) majors; COI for non-PMAM (AS) majors. Units: 3; hours/week: 3.
Math 271.1. Numerical Analysis I: Floating point representation, condition numbers, iterative methods for solving systems of linear and nonlinear equations, numerical integration, numerical linear algebra. Prerequisite: Math 171/COI. Units: 3; hours/week: 3.
Numerical Analysis II: Numerical methods for ordinary differential equations, finite difference methods for partial differential equations, numerical methods for conservation laws, multi-grid methods. Prerequisite: Math 271.1/COI. Units: 3; hours/week: 3.
Math 280. Linear Programming: Theories and methods in solving linear programs. Prerequisite: COI. Units: 3; hours/week: 3.
Math 281. Nonlinear Programming: Properties of convex sets and functions. Unconstrained optimization. Kuhn-Tucker Theorem. Lagrange multipliers. Saddle-point theorems. Algorithms. Prerequisite: COI. Units: 3; hours/week: 3.
Integer Programming and Combinatorial Optimization: Applications of integer programming. Converging dual and primal cutting plane algorithms. Branch-and-bound methods. Total unimodularity and the transportation problem. Applications of graph theory to mathematical programming. Prerequisite: Math 280/equiv. Units: 3; hours/week: 3.
Applied Dynamic Programming: Deterministic decision problems; analytical and computational methods; applications to problems of equipment replacement, resource allocation, scheduling, search and routing. Prerequisite: Graduate standing/COI. Units: 3; hours/week: 3.
Math 285. Introduction to Stochastic Optimization: Probability theory and applications to discrete and continuous time Markov chains; classification of states; algebraic methods, birth and death processes, renewal theory, limit theorems. Prerequisite: Math 114, 150.1. Units: 3; hours/week: 3.
Math 286. Finite Graphs and Networks: Basic graph theory and applications to optimal path problems; flows in networks; combinatorial problems. Prerequisite: Math 285/COI. Units: 3; hours/week: 3.
Math 288. Numerical Optimization: Deterministic descent type methods, stochastic optimization methods, numerical implementation. Prerequisite: Math 271.1/COI. Units: 3; hours/week: 3.
Math 290. Research Paper on College Mathematics. Prerequisite: COI. Units: 3; hours/week: 3.
Independent Study. Units: 3. May be credited once in the M.S. Mathematics/Applied Mathematics programs and twice in the Ph.D. Mathematics program.
Special Project. Prerequisite: COI. Units: 3.
Math 296. Graduate Seminar. Prerequisite: COI. Units: 1; hours/week: 1.
Math 297. Special Topics. Prerequisite: COI. Units: 3; hours/week: 3. May be taken at most three times; topic to be specified for record purposes.
Master's Thesis. Units: 6.
Math 400. PhD Dissertation. Units: 12.
Fast Fourier Transform on the sound
Is it possible to do a Fast Fourier Transform on a sound to get its frequency data?
"fmod Frequencies Example
This example uses fmod to perform a fast fourier transform (FFT) on the sounds waveform data to produce the frequencies in the sound. Fmod's FFT unit can then provide you with an array of 512 floats
representing the frequency data. This is then rendered to an openGL window."
Was looking for something like that, only I'd like to use SFML and OpenAL (it is OpenAL that SFML uses, isn't it?) for it.
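A minimal sketch of the underlying transform, independent of FMOD or SFML (the function names here are mine): feed it N = power-of-two samples (for instance taken from sf::SoundBuffer::getSamples(), assuming SFML's audio API) and bin k then maps to frequency k * sampleRate / N.

#include <cmath>
#include <complex>
#include <vector>

const double kPi = 3.14159265358979323846;

// Recursive radix-2 Cooley-Tukey FFT; a.size() must be a power of two.
void fft(std::vector<std::complex<double>>& a) {
    const std::size_t n = a.size();
    if (n <= 1) return;
    std::vector<std::complex<double>> even(n / 2), odd(n / 2);
    for (std::size_t i = 0; i < n / 2; ++i) {
        even[i] = a[2 * i];
        odd[i]  = a[2 * i + 1];
    }
    fft(even);
    fft(odd);
    for (std::size_t k = 0; k < n / 2; ++k) {
        std::complex<double> t = std::polar(1.0, -2.0 * kPi * double(k) / double(n)) * odd[k];
        a[k]         = even[k] + t;
        a[k + n / 2] = even[k] - t;
    }
}

// Magnitude spectrum of the first half of the bins (the rest mirror them for real input).
std::vector<double> spectrum(std::vector<std::complex<double>> samples) {
    fft(samples);
    std::vector<double> mag(samples.size() / 2);
    for (std::size_t k = 0; k < mag.size(); ++k) mag[k] = std::abs(samples[k]);
    return mag;
}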
Some Basic Algebra with Exercises: Part 4.
My hint for the following exercise is the result obtained in the Exercise of this article together with the Cauchy Theorem mentioned at the bottom of here.
The interesting part comes now!
Hint for the following exercise: again, use the result of the exercise in this article.
October 19, 2018 at 10:43
Learn OpenGL. Lesson 6.3 - Image Based Lighting. Diffuse irradiance
Image Based Lighting
Image-based lighting (IBL) is a category of lighting techniques that do not rely on analytical light sources (discussed in the previous lesson) but instead treat the entire environment surrounding the illuminated objects as one continuous source of light. In general, the technical basis of such methods lies in processing a cubemap of the environment (captured in the real world or generated from a 3D scene) so that the data stored in the map can be used directly in lighting calculations: in effect, every texel of the cubemap is treated as a light source. This makes it possible to capture the effect of global lighting in the scene, an important component that conveys the overall "tone" of the current scene and helps the illuminated objects feel better embedded in it.
Since IBL algorithms take into account lighting from the entire surrounding environment, their result can be considered a more accurate simulation of background lighting, or even a very rough approximation of global illumination. This makes IBL an interesting addition to a PBR pipeline, since using environment lighting makes objects look significantly more physically plausible.
Part 1. Getting Started
Part 2. Basic lighting
Part 3. Download 3D models
Part 4. Advanced OpenGL Features
Part 5. Advanced Lighting
Part 6. PBR
To incorporate the influence of IBL into the PBR system described so far, we return to the familiar reflectance equation:

$$L_o(p,\omega_o) = \int_{\Omega} \left(k_d\frac{c}{\pi} + k_s\frac{DFG}{4(\omega_o\cdot n)(\omega_i\cdot n)}\right) L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i$$

As described earlier, the main goal is to solve this integral for all incoming radiation directions $\omega_i$ over the hemisphere $\Omega$. In the last lesson this was not burdensome, since we knew in advance the number of light sources and, therefore, the few incident light directions corresponding to them. This time, however, radiance can arrive from any incoming direction of the surrounding environment, and the integral cannot be solved in one snap. Two requirements follow:
• You need a way to get the radiance (energy brightness) of the scene for an arbitrary direction vector $\omega_i$.
• It is necessary that the solution of the integral can occur in real time.
Well, the first point is essentially resolved already. A hint of the solution has already slipped in: one of the ways to represent the irradiance of a scene or environment is a specially processed cubemap, where each texel can be considered a separate emitter. By sampling such a map with an arbitrary direction vector $\omega_i$, we retrieve the radiance of the scene from that direction:
vec3 radiance = texture(_cubemapEnvironment, w_i).rgb;
Solving the integral, however, requires us to sample the environment map not from one direction but from all possible directions in the hemisphere, and to do so for every shaded fragment. For real-time purposes this is practically infeasible. A more effective method is to compute part of the integrand in advance, outside our application. For this we have to roll up our sleeves and dive deeper into the reflectance expression:

$$L_o(p,\omega_o) = \int_{\Omega} k_d\frac{c}{\pi} L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i + \int_{\Omega} k_s\frac{DFG}{4(\omega_o\cdot n)(\omega_i\cdot n)} L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i$$

A careful look shows that the parts of the expression related to the diffuse term $k_d$ and the specular term $k_s$ are independent of each other, so the integral splits in two. This division lets us deal with each part individually, and in this lesson we handle the part responsible for diffuse lighting.
Taking a closer look at the diffuse part of the integral, we see that the Lambertian diffuse term is effectively constant (the color $c$, the refraction coefficient $k_d$ and $\pi$ do not depend on the integration variable) and can be moved outside the integral:

$$L_o(p,\omega_o) = k_d\frac{c}{\pi} \int_{\Omega} L_i(p,\omega_i)\,(n\cdot\omega_i)\, d\omega_i$$

This leaves an integral that depends only on $\omega_i$ (assuming the sample position $p$ sits at the center of the environment map). With this knowledge, we can pre-calculate a new cubemap that stores, in every sample direction (texel) $\omega_o$, the result of the diffuse integral, obtained by convolution.
Convolution is the operation of applying some calculation to each element in a data set, taking into account the data of all other elements in the set; in this case, that data is the radiance of the scene or environment map. Thus, to compute a single value for each sample direction in the cubemap, we have to take into account the values sampled from all other possible directions in the hemisphere around the sample point.
To convolve the environment map, we solve the integral for every output sample direction $\omega_o$ by averaging a large number of samples of the radiance $L_i$ taken over the hemisphere $\Omega$ oriented along $\omega_o$. The resulting pre-calculated cubemap, which stores the integration result for each sample direction $\omega_o$, is known as an irradiance map: it holds the accumulated lighting of the environment falling on a surface oriented along $\omega_o$.
The radiance expression also depends on the position $p$ of the sampling point, which we assume lies exactly at the center of the irradiance map. This means all indirect diffuse light must come from a single environment map, which can break the illusion of reality in larger scenes. Engines typically solve this by placing reflection probes throughout the scene. Each such object is engaged in one task: it forms its own irradiance map for its immediate environment. With this technique, the irradiance (and radiance) at an arbitrary point $p$ is obtained by interpolating between the nearest probes. For our purposes, we use a single environment map and assume the point of interest at its center.
Below is an example of an environment cubemap and the irradiance map derived from it (courtesy of Wave Engine), which averages the radiance of the environment for each output direction $\omega_o$. Each texel of the resulting map (corresponding to a direction $\omega_o$) stores the convolution result, which looks like an averaged, blurred version of the environment.
PBR and HDR
In the previous lesson, it was already briefly noted that for the PBR lighting model to work correctly, it is extremely important to account for the HDR brightness range of the light sources present. Since the PBR model accepts input parameters based, one way or another, on specific physical quantities and characteristics, it is logical to require that the radiance of the light sources match their real prototypes. It does not matter how we justify the specific radiant flux of each source, whether by a rough engineering estimate or by reference to measured physical quantities: the difference in characteristics between a room lamp and the sun will be enormous in any case. Without using an HDR range, it is impossible to accurately determine the relative brightness of various light sources.
So, PBR and HDR go hand in hand, that much is clear, but how does this relate to image-based lighting methods? In the last lesson, it was shown that adapting PBR to an HDR rendering range is easy. One "but" remains: since indirect illumination from the environment is based on a cubemap of that environment, a way is needed to preserve the HDR characteristics of this background lighting in the environment map itself.
Until now, we have used environment maps created in LDR format (skyboxes, for example). We used color samples from them in rendering as-is, which is quite acceptable for direct shading of objects, but completely unsuitable when environment maps serve as sources of
physically reliable measurements.
RGBE - HDR image format
Get familiar with the RGBE image file format. Files with the extension ".hdr" are used to store images with a wide dynamic range, allocating one byte for each element of the color triad and one more byte for a shared exponent. The format also allows storing cubemaps of the environment with a color intensity range beyond the LDR range [0., 1.]. This means that light sources can maintain their real intensity when represented by such an environment map.
The network has quite a lot of free environment maps in RGBE format, shot in various real conditions. Here is an example from the
sIBL archive
site :
You may be surprised by what you see: this distorted image does not at all look like a regular cubemap with its pronounced division into 6 faces. The explanation is simple: this environment map was projected from a sphere onto a plane using an equirectangular projection. This is done so it can be stored in formats that do not natively support cubemaps. Of course, this projection has its drawbacks: the effective horizontal resolution is much higher than the vertical one. In most rendering applications this is an acceptable trade-off, since the interesting details of the environment and lighting usually lie near the horizontal plane rather than the vertical. Plus, we will need conversion code back to a cubemap.
Support for RGBE format in stb_image.h
Loading this image format on your own requires knowledge of the format specification, which is not difficult but still laborious. Fortunately for us, stb_image.h, an image loading library implemented in a single header file, supports loading RGBE files and returns an array of floating-point numbers, which is exactly what we need for our purposes! With the library added to your project, loading the image data is extremely simple:
#include "stb_image.h"
int width, height, nrComponents;
float *data = stbi_loadf("newport_loft.hdr", &width, &height, &nrComponents, 0);
unsigned int hdrTexture;
if (data)
{
    glGenTextures(1, &hdrTexture);
    glBindTexture(GL_TEXTURE_2D, hdrTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, width, height, 0, GL_RGB, GL_FLOAT, data);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    stbi_image_free(data);
}
else
{
    std::cout << "Failed to load HDR image." << std::endl;
}
The library automatically converts values from the internal HDR format to regular real 32-bit numbers, with three color channels by default. It is enough to save the data of the original HDR image
in a normal 2D floating-point texture.
Converting an equirectangular projection into a cubemap
An equirectangular map can be sampled directly with a direction vector, but this requires comparatively expensive math, whereas fetching from an ordinary cubemap is practically free in performance. For this reason, in this lesson we first convert the equirectangular image into a cubemap for further use. (The direct sampling method from an equirectangular map with a 3D vector is also shown below, so you can choose whichever approach suits you.)
To convert, we draw a unit-sized cube viewed from the inside, project the equirectangular map onto its faces, and then extract six images from the faces as the faces of the cubemap.
The vertex shader of this stage is quite simple: it processes the cube's vertices as-is and passes their untransformed positions to the fragment shader for use as 3D sample vectors:
#version 330 core
layout (location = 0) in vec3 aPos;
out vec3 localPos;
uniform mat4 projection;
uniform mat4 view;
void main()
{
    localPos = aPos;
    gl_Position = projection * view * vec4(localPos, 1.0);
}
In the fragment shader, we shade each face of the cube as if we were carefully wrapping the cube with the equirectangular sheet. To do this, the sample direction passed from the vertex shader is taken, converted with a bit of trigonometry, and used to sample the equirectangular map as if it were a cubemap. The result is stored directly as the color of the cube-face fragment:
#version 330 core
out vec4 FragColor;
in vec3 localPos;
uniform sampler2D equirectangularMap;
const vec2 invAtan = vec2(0.1591, 0.3183); // (1/2π, 1/π)

// map a direction on the unit sphere to equirectangular UV coordinates
vec2 SampleSphericalMap(vec3 v)
{
    vec2 uv = vec2(atan(v.z, v.x), asin(v.y));
    uv *= invAtan;
    uv += 0.5;
    return uv;
}
void main()
{
    // localPos must be normalized
    vec2 uv = SampleSphericalMap(normalize(localPos));
    vec3 color = texture(equirectangularMap, uv).rgb;

    FragColor = vec4(color, 1.0);
}
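For reference (a derivation added here, not part of the original text), the UV mapping above comes from spherical coordinates:

$$u = \frac{\operatorname{atan2}(z, x)}{2\pi} + 0.5, \qquad v = \frac{\arcsin(y)}{\pi} + 0.5$$

hence invAtan = $(1/2\pi,\ 1/\pi) \approx (0.1591,\ 0.3183)$.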
If you actually draw a cube with this shader and an associated HDR environment map, you get something like this:
That is, we have effectively projected the rectangular texture onto a cube. Great, but how does this help in creating a real cubemap? To finish the job, we render the same cube six times, with the camera looking at each face in turn, writing the output to a separate framebuffer object:
unsigned int captureFBO, captureRBO;
glGenFramebuffers(1, &captureFBO);
glGenRenderbuffers(1, &captureRBO);
glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
glBindRenderbuffer(GL_RENDERBUFFER, captureRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, captureRBO);
Of course, we will not forget to organize the memory for storing each of the six faces of the future cubic map:
unsigned int envCubemap;
glGenTextures(1, &envCubemap);
glBindTexture(GL_TEXTURE_CUBE_MAP, envCubemap);
for (unsigned int i = 0; i < 6; ++i)
{
    // note that each face is allocated with
    // a 16-bit floating point format
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F,
                 512, 512, 0, GL_RGB, GL_FLOAT, nullptr);
}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
After this preparation, all that remains is to transfer the equirectangular map onto the faces of the cubemap.
We will not dwell on the details, since the code largely repeats what was seen in the lessons on the framebuffer and omnidirectional shadow maps. In essence, it comes down to preparing six view matrices that orient the camera strictly at each of the cube's faces, plus a projection matrix with a 90° field of view to capture an entire face. Then rendering is simply performed six times, and the result is saved to a floating-point framebuffer:
glm::mat4 captureProjection = glm::perspective(glm::radians(90.0f), 1.0f, 0.1f, 10.0f);
glm::mat4 captureViews[] =
glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 1.0f, 0.0f, 0.0f), glm::vec3(0.0f, -1.0f, 0.0f)),
glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(-1.0f, 0.0f, 0.0f), glm::vec3(0.0f, -1.0f, 0.0f)),
glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f, 1.0f, 0.0f), glm::vec3(0.0f, 0.0f, 1.0f)),
glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f, -1.0f, 0.0f), glm::vec3(0.0f, 0.0f, -1.0f)),
glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f, 0.0f, 1.0f), glm::vec3(0.0f, -1.0f, 0.0f)),
glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3( 0.0f, 0.0f, -1.0f), glm::vec3(0.0f, -1.0f, 0.0f))
// convert the HDR equirectangular environment map to an equivalent cubemap
equirectangularToCubemapShader.use();
equirectangularToCubemapShader.setInt("equirectangularMap", 0);
equirectangularToCubemapShader.setMat4("projection", captureProjection);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, hdrTexture);

// don't forget to set the viewport parameters for a correct capture
glViewport(0, 0, 512, 512);
glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
for (unsigned int i = 0; i < 6; ++i)
{
    equirectangularToCubemapShader.setMat4("view", captureViews[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, envCubemap, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    renderCube(); // render a unit cube
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Here we attach each face of the cubemap in turn as the framebuffer's color attachment, so the render goes directly into one face of the environment map. This code needs to run only once, after which we are left with an environment cubemap (envCubemap) containing the result of converting the original equirectangular HDR environment map.
We will test the resulting cubic map by sketching the simplest skybox shader:
#version 330 core
layout (location = 0) in vec3 aPos;
uniform mat4 projection;
uniform mat4 view;
out vec3 localPos;
void main()
{
    localPos = aPos;

    // strip the translation part from the view matrix
    mat4 rotView = mat4(mat3(view));
    vec4 clipPos = projection * rotView * vec4(localPos, 1.0);

    gl_Position = clipPos.xyww;
}
Note the trick with the clipPos components: we write the transformed vertex coordinate as the xyww tetrad to guarantee that all skybox fragments end up with the maximum depth of 1.0 (the approach was already used in the cubemaps lesson). Do not forget to change the depth comparison function to GL_LEQUAL.
The fragment shader simply selects from a cubic map:
#version 330 core
out vec4 FragColor;
in vec3 localPos;
uniform samplerCube environmentMap;
void main()
{
    vec3 envColor = texture(environmentMap, localPos).rgb;

    // tone mapping (Reinhard) and gamma correction
    envColor = envColor / (envColor + vec3(1.0));
    envColor = pow(envColor, vec3(1.0/2.2));

    FragColor = vec4(envColor, 1.0);
}
Sampling the map uses the interpolated local coordinates of the cube's vertices, which is the correct sample direction in this case (again, see the cubemaps lesson). Since the translation components of the view matrix were ignored, the rendered skybox does not depend on the observer's position, creating the illusion of an infinitely distant background. Because we output HDR data directly to the default framebuffer, which is an LDR target, we need to apply tone mapping. And finally, since almost all HDR maps are stored in linear space, gamma correction must be applied as the final processing chord.
So, when outputting the obtained skybox, along with the already familiar array of spheres, something similar is obtained:
Well, it took some effort, but in the end we successfully read an HDR environment map, converted it from equirectangular form to a cubemap, and output the HDR cubemap as a skybox in the scene. Moreover, the code for converting to a cubemap by rendering to the six faces of a cubemap will be useful again in the task of convolving the environment map. The code for the entire conversion process is available here.
Convolving the cubemap
As said at the beginning of the lesson, our main goal is to solve the integral of indirect diffuse lighting over all possible directions, given the irradiance of the scene in the form of a cubemap environment map. We already know that we can obtain the radiance of the scene $L(p,\omega_i)$ for a direction $\omega_i$ by sampling the HDR environment cubemap in that direction. To solve the integral, we would have to sample the environment lighting from all possible directions in the hemisphere $\Omega$ for every fragment.
For real-time purposes, such an approach is still prohibitively expensive, because samples are taken for every fragment and the number of samples must be high enough for an acceptable result. So it is much better to precompute this data in advance, outside the rendering process. Since the orientation of the hemisphere determines from which region of space we capture irradiance, we can precompute the irradiance for every possible hemisphere orientation based on all outgoing directions $\omega_o$. Then, for a given arbitrary direction vector N, we can simply sample the precomputed irradiance map:
vec3 irradiance = texture(irradianceMap, N).rgb;
To generate the irradiance map, we need to convolve the environment lighting as converted to a cubemap. Knowing that for each fragment the hemisphere is oriented along the surface normal N, convolving the cubemap amounts to computing the total averaged radiance over every direction $\omega_i$ in the hemisphere oriented along N. Fortunately, the time-consuming preliminary work done earlier in this lesson makes it easy to convolve the environment map in a special fragment shader whose output forms a new cubemap, using the very same code that translated the equirectangular environment map into a cubemap.
It remains only to take another processing shader:
#version 330 core
out vec4 FragColor;
in vec3 localPos;
uniform samplerCube environmentMap;
const float PI = 3.14159265359;
void main()
{
    // the sample direction equals the hemisphere's orientation
    vec3 normal = normalize(localPos);

    vec3 irradiance = vec3(0.0);

    [...] // convolution code goes here

    FragColor = vec4(irradiance, 1.0);
}
Here the environmentMap sampler is the HDR cubemap of the environment previously derived from the equirectangular map.
There are many ways to convolve the environment map. In this case, for each texel of the cubemap we generate a fixed number of sample vectors across the hemisphere $\Omega$ oriented along the sample direction and average the results. The integrand of the reflectance expression depends on the solid angle $d\omega$, a quantity that is awkward to work with directly, so instead we integrate over the equivalent spherical coordinates $\theta$ and $\phi$:

$$L_o(p,\phi_o,\theta_o) = k_d\frac{c}{\pi}\int_{\phi=0}^{2\pi}\int_{\theta=0}^{\pi/2} L_i(p,\phi_i,\theta_i)\cos\theta\,\sin\theta\,d\phi\,d\theta$$

The angle $\phi$ represents the azimuth in the plane of the hemisphere's base, varying from 0 to $2\pi$, while $\theta$ is the zenith angle, varying from 0 to $\pi/2$. Solving such an integral requires taking a finite number of samples within the hemisphere $\Omega$ and averaging them, which gives the discrete Riemann sum:

$$L_o(p,\phi_o,\theta_o) = k_d\frac{c\pi}{n_1 n_2}\sum_{\phi=0}^{n_1}\sum_{\theta=0}^{n_2} L_i(p,\phi_i,\theta_i)\cos\theta\,\sin\theta$$

Since both spherical coordinates vary discretely, each sample represents an averaged area on the hemisphere. Due to the nature of a spherical surface, the size of this discrete sample area inevitably shrinks as the zenith angle $\theta$ grows and the samples converge toward the top; the $\sin\theta$ factor compensates for this, as will be seen in the code below.
vec3 irradiance = vec3(0.0);

vec3 up    = vec3(0.0, 1.0, 0.0);
vec3 right = cross(up, normal);
up         = cross(normal, right);

float sampleDelta = 0.025;
float nrSamples = 0.0;
for(float phi = 0.0; phi < 2.0 * PI; phi += sampleDelta)
{
    for(float theta = 0.0; theta < 0.5 * PI; theta += sampleDelta)
    {
        // spherical to cartesian conversion (in tangent space)
        vec3 tangentSample = vec3(sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta));
        // from tangent space to world space
        vec3 sampleVec = tangentSample.x * right + tangentSample.y * up + tangentSample.z * normal;

        irradiance += texture(environmentMap, sampleVec).rgb * cos(theta) * sin(theta);
        nrSamples++;
    }
}
irradiance = PI * irradiance * (1.0 / float(nrSamples));
The sampleDelta variable determines the size of the discrete step along the surface of the hemisphere. By decreasing or increasing this value, you can raise or lower the accuracy of the result.
Inside both loops, a regular 3-dimensional sample vector is formed from the spherical coordinates, transferred from tangent to world space, and then used to sample the HDR cubemap of the environment. The sample results are accumulated in the irradiance variable, which at the end of processing is divided by the number of samples taken to obtain the average irradiance. Note that the result of each texture sample is modulated by two quantities: cos(theta), to account for the attenuation of light at large angles, and sin(theta), to compensate for the reduction of the sample area when approaching the zenith.
It remains only to deal with the code that renders and captures the convolution results into a new cubemap. First, create the cubemap that will store the irradiance (this is done once, before entering the main render loop):
unsigned int irradianceMap;
glGenTextures(1, &irradianceMap);
glBindTexture(GL_TEXTURE_CUBE_MAP, irradianceMap);
for (unsigned int i = 0; i < 6; ++i)
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F, 32, 32, 0,
GL_RGB, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Since the irradiance map is obtained by averaging uniformly distributed samples of the environment map's radiance, it contains practically no high-frequency details, so a fairly low-resolution texture (32x32 here) with linear filtering enabled is enough to store it.
Next, set the capture framebuffer to this resolution:
glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
glBindRenderbuffer(GL_RENDERBUFFER, captureRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 32, 32);
The code for capturing the convolution results is similar to the code for transferring an environment map from an equilateral to a cubic one, only a convolution shader is used:
irradianceShader.use();
irradianceShader.setInt("environmentMap", 0);
irradianceShader.setMat4("projection", captureProjection);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, envCubemap);

// don't forget to set the viewport to the capture size
glViewport(0, 0, 32, 32);
glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
for (unsigned int i = 0; i < 6; ++i)
{
    irradianceShader.setMat4("view", captureViews[i]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, irradianceMap, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    renderCube();
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
After completing this stage, we will have a pre-calculated irradiation map on our hands that can be directly used to calculate indirect diffuse illumination. To check how the convolution went, we’ll
try to replace the skybox texture from the environment map with the irradiation map:
If, as a result, you saw something that looked like a very blurry map of the environment, then, most likely, the convolution was successful.
PBR and indirect illumination
The resulting irradiance map is used in the diffuse part of the split reflectance expression and represents the accumulated contribution of all possible directions of indirect lighting. Since in this case the light comes not from specific sources but from the environment as a whole, we treat the diffuse and specular indirect lighting as the ambient term, replacing the constant value used previously.
To begin with, do not forget to add a new sampler with an irradiation map:
uniform samplerCube irradianceMap;
With an irradiance map storing all the information about the scene's indirect diffuse radiation, and given the surface normal, retrieving the irradiance for a particular fragment is as simple as a single texture sample:
// vec3 ambient = vec3(0.03);
vec3 ambient = texture(irradianceMap, N).rgb;
However, since indirect radiation contains data for both the diffuse and specular components (as we saw in the split version of the reflectance expression), we need to modulate the diffuse component appropriately. As in the previous lesson, we use the Fresnel expression to determine the surface's degree of reflection, from which we obtain the degree of refraction, i.e. the diffuse coefficient:
vec3 kS = fresnelSchlick(max(dot(N, V), 0.0), F0);
vec3 kD = 1.0 - kS;
vec3 irradiance = texture(irradianceMap, N).rgb;
vec3 diffuse = irradiance * albedo;
vec3 ambient = (kD * diffuse) * ao;
Since background lighting falls on the surface from all directions in the hemisphere oriented along the normal N, there is no single halfway (median) vector for calculating the Fresnel coefficient. To simulate the Fresnel effect under such conditions, the coefficient has to be computed from the angle between the normal and the view vector. However, earlier we used the halfway vector, derived from the microfacet model and dependent on surface roughness, as the parameter for the Fresnel calculation. Since roughness does not enter the calculation here, the surface's degree of reflection always ends up overestimated. Indirect lighting should, on the whole, behave the same as direct lighting: we expect rougher surfaces to reflect less strongly at their edges. But because roughness is not taken into account, the indirect Fresnel reflectance looks exaggerated on rough non-metallic surfaces.
You can get around this nuisance by folding roughness into the Fresnel-Schlick expression, a process described by Sébastien Lagarde:
vec3 fresnelSchlickRoughness(float cosTheta, vec3 F0, float roughness)
{
    return F0 + (max(vec3(1.0 - roughness), F0) - F0) * pow(1.0 - cosTheta, 5.0);
}
Taking the surface roughness into account when calculating the Fresnel coefficient, the code for the ambient component takes the following form:
vec3 kS = fresnelSchlickRoughness(max(dot(N, V), 0.0), F0, roughness);
vec3 kD = 1.0 - kS;
vec3 irradiance = texture(irradianceMap, N).rgb;
vec3 diffuse = irradiance * albedo;
vec3 ambient = (kD * diffuse) * ao;
As it turns out, using image-based lighting at shading time inherently boils down to a single cubemap sample; all the difficulty lies in the preliminary preparation and convolution of the environment map into the irradiance map.
Taking the familiar scene from the lighting lesson, containing an array of spheres with varying metalness and roughness, and adding diffuse background lighting from the environment, we get something like this:
It still looks a bit strange, since materials with a high degree of metalness need actual reflections to really look like metal (metals hardly reflect diffuse lighting at all), and at this point the only reflections come from the point analytical light sources. And yet, we can already say that the spheres look more immersed in the environment (especially noticeable when switching environment maps), since their surfaces now respond correctly to the background lighting of the scene's surroundings.
The full source code for the lesson is available here. In the next lesson, we will finally deal with the second half of the reflectance expression, which is responsible for indirect specular lighting. After that step, you will truly feel the power of the PBR approach to lighting.
Additional materials
• Coding Labs: Physically Based Rendering: an introduction to the PBR model along with an explanation of how the irradiance map is constructed and why.
• The Mathematics of Shading: a brief overview from ScratchAPixel of some of the mathematical techniques used in this lesson, in particular polar coordinates and integrals.
P.S.: We have a Telegram chat for coordinating translations. If you have a serious desire to help with the translation, you are welcome!
In the year 2000, the population in Canada was about 85 million and growing at a rate of 1.32% each year. At this growth rate, the function f(x) = 85(1.0132)^x gives the population, in millions, x years after 2000. Using this model, in what year will the population reach 100 million?
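For reference, the model can be solved for x by taking logarithms:

85(1.0132)^x = 100  ⟹  x = ln(100/85) / ln(1.0132) ≈ 12.4

so, under this model, the population reaches 100 million about 12.4 years after 2000, i.e. during the year 2012.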
ACU-T: 3110 Exhaust Manifold Conjugate Heat Transfer - CFD Data Mapping
Prior to starting this tutorial, you should have already run through the introductory HyperWorks tutorial, ACU-T: 1000 HyperWorks UI Introduction, and have a basic understanding of HyperWorks CFD and
HyperView. To run this simulation, you will need access to a licensed version of HyperWorks CFD and AcuSolve.
Prior to running through this tutorial, copy HyperWorksCFD_tutorial_inputs.zip from <Altair_installation_directory>\hwcfdsolvers\acusolve\win64\model_files\tutorials\AcuSolve to a local directory.
Extract ACU-T3110_acuOptiStruct.hm from HyperWorksCFD_tutorial_inputs.zip.
Since the HyperWorks CFD database (.hm file) contains meshed geometry, this tutorial does not include steps related to geometry import and mesh generation.
Problem Description
The problem addressed in this tutorial is shown schematically in Figure 1. It consists of an exhaust manifold with four inlets and one outlet. The inlets have flanges with holes for the steel bolts that attach the manifold. The body of the manifold is made of stainless steel.
Figure 1. Schematic of Exhaust Manifold
The diameter of the inlets is 0.036 m; the inlet velocity (v) is 8.0 m/s; and the temperature (T) of the fluid entering the inlets is 700 K. The diameter of the outlet is 0.036 m. The pipe wall has a
thickness of 0.003 m and the flanges have a thickness of 0.01 m.
The combustion mixture enters the inlets and heat is transferred through conduction inside the manifold. The heat transfer causes deformations and stress in the manifold body which can be simulated
using OptiStruct.
The fluid in this problem is air, which has the following material properties:
• Density (ρ): 1.225 kg/m^3
• Viscosity (μ): 1.781 × 10^-5 kg/m-s
• Specific heat (Cp): 1005 J/kg-K
• Conductivity (k): 0.0251 W/m-K
The exhaust manifold is made of steel, which has the following material properties:
• Density (ρ): 8000 kg/m^3
• Specific heat (Cp): 500 J/kg-K
• Conductivity (k): 16.2 W/m-K
For the AcuSolve simulation, the variation in material properties of air with temperature is ignored.
The AcuSolve simulation will be set up to model steady state heat transfer to determine the temperature and pressure distribution on the walls of the manifold.
Nodal surface output needs to be activated for all the surfaces in order to create the OptiStruct input deck with the acuOptiStruct command. The temperature distribution and the forces on the wetted surfaces are used by OptiStruct to calculate the deformations and stresses in the solid body. The OptiStruct input deck is generated by the acuOptiStruct utility, which can be used for a one-way coupled simulation. The following inputs are of importance for this simulation:
• The name of the solid body/bodies where conduction heat transfer takes place.
• Density values for the solid body/bodies.
• The list of surfaces where boundary condition constraints need to be specified.
• The list of degrees of freedom for those surfaces.
• The list of degree-of-freedom values for those surfaces, which is zero by default.
• The stress analysis type for the OptiStruct solver.
For this simulation, the constrained surfaces are the flange bolts and the outlet end of the manifold. These surfaces are constrained in all six degrees of freedom (translation and rotation), and the default value of zero is used.
Figure 2.
The stress analysis type is steady linear, where the deformations remain in the elastic range; that is, the stresses, σ, are assumed to be linear functions of the strains, ε, so Hooke's law can be used to calculate the stresses.
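In its simplest one-dimensional form, Hooke's law relates the two through Young's modulus E (stated here only as a general reference; the solver applies its three-dimensional analogue):

σ = E · ε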
Start HyperWorks CFD and Open the HyperMesh Database
1. Start HyperWorks CFD from the Windows Start menu.
2. From the Home tools, Files tool group, click the Open Model tool.
Figure 3.
The Open File dialog opens.
3. Browse to the directory where you saved the model file. Select the HyperMesh file ACU-T3110_acuOptiStruct.hm and click Open.
4. Open the Save As dialog.
5. Create a new directory named Manifold_TFSI and navigate into this directory.
This will be the working directory and all the files related to the simulation will be stored in this location.
6. Enter Manifold_TFSI as the file name for the database, or choose any name of your preference.
7. Click Save to create the database.
Validate the Geometry
The Validate tool scans through the entire model, performs checks on the surfaces and solids, and flags any defects in the geometry, such as free edges, closed shells, intersections, duplicates, and slivers.
To keep the focus on the physics of the simulation, this tutorial input file contains geometry which has already been validated. Observe that a blue check mark appears on the top-left corner of the Validate icon on the Geometry ribbon. This indicates that the geometry is valid and you can proceed to the flow setup.
Figure 4.
Set Up Flow
Set Up the Simulation Parameters and Solver Settings
1. From the Flow ribbon, click the Physics tool.
Figure 5.
The Setup dialog opens.
2. Under the Physics models setting:
a. Set Time marching to Steady.
b. Select Spalart-Allmaras as the Turbulence model.
c. Activate the Include gravitational acceleration and Heat transfer options.
Figure 6.
3. Click the Solver controls setting.
4. Verify the following parameters.
Figure 7.
Assign Material Properties
1. From the Flow ribbon, click the Material tool.
Figure 8.
2. Select the exhaust volume.
3. In the microdialog, select Stainless steel (304) from the drop-down.
Figure 9.
4. On the guide bar, click the check mark to confirm the selection.
5. Select the air volume.
6. In the microdialog, select Air from the drop-down.
Figure 10.
7. On the guide bar, click the check mark to confirm the selection.
Assign the Flow Boundary Conditions
1. From the Flow ribbon, click the Constant tool.
Figure 11.
2. Select the manifold inlets highlighted in the figure below.
Figure 12.
3. In the microdialog, set the Inflow velocity type to Normal, the Normal velocity to 8.0 and the Temperature to 700.
Figure 13.
4. Click the Turbulence tab in the microdialog. Set the input type to Viscosity Ratio and the viscosity ratio value to 40.
Figure 14.
5. On the guide bar, click the check mark to confirm the selection.
6. Click the Outlet tool.
Figure 15.
7. Select the face highlighted in the figure below and verify that both the static pressure and the pressure loss factor are set to 0.
Figure 16.
8. Click the check mark on the guide bar.
9. Click the No Slip tool.
Figure 17.
10. Select the wall flanges.
Figure 18.
Note: Make sure you select the bottom and side surfaces of the flanges. In total, 12 surfaces should be selected.
11. In the microdialog, click the Temperature tab, set the Convective heat coefficient to 100, and the Convective heat reference temperature to 303.
Figure 19.
12. Click the check mark on the guide bar.
13. Using the Boundaries legend, rename the surface group to Flanges.
14. Select the flange bolts.
Figure 20.
15. In the microdialog, click the Temperature tab and assign the same parameters as the wall flanges.
16. Click the check mark on the guide bar.
17. Using the Boundaries legend, rename the surface group to Flange_Bolts.
18. Select the outlet end.
Figure 21.
19. In the microdialog, click the Temperature tab and assign the same parameters as the wall flanges and bolts.
20. Click the check mark on the guide bar.
21. Using the Boundaries legend, rename the surface group to Outlet_End.
22. Select the outer solid walls.
Figure 22.
23. In the microdialog, click the Temperature tab and assign the same parameters as the other no slip surfaces.
24. Click the check mark on the guide bar.
25. Using the Boundaries legend, rename the surface group to Outer_Solid_Walls.
26. Hide all previously assigned surfaces, then select the remaining surface.
Figure 23.
27. Check that default values are assigned to the thermal boundary conditions.
Figure 24.
28. Click the check mark on the guide bar.
29. Using the Boundaries legend, rename the surface group to Fluid_Walls.
Link Surface Output
1. In the Boundaries legend, right-click on Flanges and select Edit.
2. In the microdialog, activate the checkbox for Create linked surface output.
Figure 25.
3. Click the check mark on the guide bar.
4. Set the following linked surface output settings for the inlets, the outlet, and all the wall surfaces (Outlet_End, Flange_Bolts, Outer_Solid_Walls, Fluid_Walls).
Figure 26.
Run AcuSolve
1. From the Solution ribbon, click the Run tool.
Figure 27.
2. Set the Parallel processing option to Intel MPI.
3. Optional: Set the number of processors to 4 or 8 based on availability.
4. Deactivate the Automatically define pressure reference option.
5. Expand Default initial conditions and check that the Pre-compute flow and Pre-compute turbulence options are activated.
6. Set the Temperature to 273.16.
7. Leave the remaining options as default and click Run to launch AcuSolve.
Figure 28.
Run acuOptiStruct
1. After AcuSolve finishes running, open the AcuSolve command prompt and cd to your working directory.
2. Execute the following command:
acuOptiStruct -solids "Exhaust Steel" -spcsurfs "Flange_Bolts - Output","Outlet_End - Output" -spcsurfsdof 123456,123456 -spcsurfsdofvals 0,0 -type sl
You should see a similar output as below when the command executes successfully.
Figure 29.
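For readability, the same command can be split across lines (a sketch assuming the Windows command prompt, where the caret continues a line); the flags map onto the inputs listed in the problem description: -solids names the conducting solid, -spcsurfs lists the constrained linked surface outputs, -spcsurfsdof and -spcsurfsdofvals fix all six degrees of freedom to zero, and -type sl selects the steady linear analysis.

acuOptiStruct -solids "Exhaust Steel" ^
    -spcsurfs "Flange_Bolts - Output","Outlet_End - Output" ^
    -spcsurfsdof 123456,123456 ^
    -spcsurfsdofvals 0,0 ^
    -type sl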
Run OptiStruct
1. Start OptiStruct from the Windows Start menu.
2. Click the input file browse icon.
3. Browse to the location that you are using as your working directory.
4. Select the .fem file.
Figure 30.
5. Click Run to run the case.
Figure 31.
Post-Process the Results with HW-CFD Post
1. Once the solution is completed, navigate to the Post ribbon.
2. From the menu bar, open the results file dialog.
3. Select the AcuSolve log file in your problem directory to load the results for post-processing.
The solid and all the surfaces are loaded in the Post Browser.
4. Click the Boundary Groups tool.
Figure 32.
5. Select all the surfaces.
6. In the microdialog, set the display to temperature and activate the Legend toggle.
7. Click Rainbow Uniform.
Figure 33.
8. Click the check mark on the guide bar.
Figure 34.
9. In the Post Browser right-click on Fluid_Walls and select Edit.
10. In the microdialog, change the display to pressure and activate the Legend toggle.
11. Click Rainbow Uniform.
Figure 35.
12. Hide all other flow boundaries.
Figure 36.
Post-Process the OptiStruct Results with HyperView
1. Start HyperView from the Windows Start menu.
Once the HyperView window is loaded, the Load model and results panel should be open by default. If you do not see the panel, open it from the toolbar.
2. In the Load model and results panel, click the file browser icon next to the model file field.
3. In the Load Model File dialog, navigate to your working directory and select Manifold_TFSI.h3d.
4. Click Open.
5. Click Apply in the panel area to load the model and results from the OptiStruct results file.
Observe that this model contains only the solid domain since only the solid is included while generating the OptiStruct solver deck.
6. Open the Contour panel.
7. In the panel area, set the Result type to Displacement (v).
8. Click Apply to plot the displacement magnitude contours.
Figure 37.
9. Change the Result type to Element Stresses (2D & 3D) (t) and select vonMises from the drop-down below.
10. Click Apply.
Figure 38.
In this tutorial, you learned how to set up a conjugate heat transfer problem using HyperWorks CFD and solve it using AcuSolve. Once you computed the solution, you used acuOptiStruct to generate the input deck for OptiStruct. Once the solution for the structural analysis was computed, you post-processed the results using HyperWorks CFD Post and HyperView and created contour plots of temperature, pressure, displacement, and stress.
EViews Help: Examples
Below, we demonstrate VEC estimation using the EViews example workfile “var1.WF1”, located under the “Vector Error Correction Models (VECMs)” folder. This workfile contains a number of classic macroeconomic variables, including gross domestic product, various measures of the money supply, Treasury bills of different maturities, industrial production, the producer price index, and the unemployment rate.
Example 1: Unrestricted Constant (JHJ)
We begin with the classical problem of studying the relationship between money supply (M1), gross domestic product (GDP), and 3-month Treasury bills (TB3).
These three endogenous variables will enter the VEC system with lags 1 through 4, and we assume that there exists a single cointegrating relationship. Furthermore, we will estimate the VEC using the default deterministic specification. In this case, the constant is not restricted to the cointegrating relations, but is artificially inserted into the cointegrating vector using orthogonalization (see “The Johansen, Hendry, and Juselius Approach”).
To estimate this model, select the vector error correction specification in the dropdown menu to display the VEC estimation dialog, then enter “m1 gdp tb3” in the endogenous variables field. Furthermore, specify “1 4” in the lag intervals field and “1959m01 1982m03” in the sample edit field. We emphasize again that the lag interval specification refers to the differences of the dependent variable in the conditional error correction equation, and not to the dependent variable itself in the levels equation.
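For reference (standard VECM notation, not part of the EViews output), the error correction form estimated with lag interval “1 4” is:

\Delta y_t = \alpha \beta' y_{t-1} + \sum_{i=1}^{4} \Gamma_i \,\Delta y_{t-i} + \mu + \varepsilon_t

where β contains the cointegrating vectors, α the adjustment coefficients, and the Γ_i the short-run dynamics coefficients.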
You may leave the remaining fields and options at their default values. Hit OK to estimate the VEC with this specification. EViews will estimate the VEC and display the output in a table which contains four sections. Click on the Name button and enter a name to save the results in a var object.
At the top of the output, EViews shows a summary of the estimation procedure, including the sample, lag specification, variables, and deterministic assumptions used in constructing the estimates:
Next is a table of coefficient estimates for the cointegrating relation. In this case, estimated assuming the default of one cointegrating vector, there is a single column of coefficients representing the only column of the cointegrating matrix. Since the deterministics are assumed to follow the Johansen-Hendry-Juselius variant of Case 3, the cointegrating relation includes an orthogonalized intercept estimate of -170.6729.
Notably, there is no standard error for the orthogonalized intercept estimate.
Next, EViews displays a table containing the coefficient estimates for the error correction regressions, with the results for each dependent variable appearing in columns. The long-run portion of these results, the adjustment coefficients, appears first; the remaining coefficients are estimates of the short-run dynamics. Just below the remainder of the short-run estimates, including the estimate for C, is the last part of the output, showing summary statistics associated with the overall fit.
Example 2: Unrestricted Constant, Restricted Trend
We modify the previous example to use only the first 2 lags and to have cointegration rank 2, leaving the constant entirely unrestricted (in the short-run equation) while restricting the trend to the cointegrating relation. To proceed, copy the existing var object, click on the Estimate button to bring up the VAR estimation dialog again, and change the lag specification to “1 2”. Then click on the cointegration tab, select 2 as the rank, and change the deterministic specification dropdown to the unrestricted constant, restricted trend case.
Click on OK to estimate the revised model, then press Name and enter VEC2. The top portion of the output is given by:
Notice that there are now two cointegrating vectors, each of which includes a trend, with trend coefficient estimates -0.1129 and 4.6129, respectively, and standard errors, but no constant, since the latter is among the short-run regressors.
The error correction results, which now include the two cointegrating series COINTEQ1 and COINTEQ2 and an intercept, and the summary statistics are presented below:
Example 3: Unrestricted Constant, Restricted Trend, and Exogenous Variables
Extending the previous model, let us augment the cointegrating relation and the short-run dynamics by including exogenous variables. These exogenous variables can enter the cointegrating relation, so
that they affect the long-run relationship, and they can be in the short-run relationship where they affect the dynamics of convergence to equilibrium.
Let us assume that the 10-year Treasury bill rate (TB10Y) is an exogenous variable inside the cointegrating relation but not part of the short-run dynamics, that the Producer Price Index (PPI), a measure of inflation, impacts only the short-run dynamics of convergence, and that the unemployment rate (URATE) appears in both the short and long-run relationships.
Copy the existing var, then click on Estimate to modify the specification. We enter “PPI” in the short-run-only exogenous field, “TB10Y” in the long-run-only exogenous field, and “URATE” in the field for variables entering both relationships:
Furthermore, we’ll assume there is a single cointegrating relation, and a deterministic case in which the constant and trend affect only the adjustment-to-equilibrium (short-run) dynamics. Click on the cointegration tab, change the rank dropdown to 1, and set the deterministic specification dropdown accordingly:
Click on OK to estimate the updated specification.
Notice that in addition to a description of the deterministic trend assumption, the output header now lists the exogenous variables included in the specification, by type.
Below the header, the results for the cointegrating vector show the three endogenous variables, followed by the coefficient for the long-run-only variable TB10Y and the both long- and short-run variable URATE. Since the latter is included in the cointegrating equation via orthogonalization, there is no standard error associated with its estimated coefficient.
The error correction results include estimates for the two short-run-only deterministic variables, C and @TREND, along with the short-run-only PPI and the both long- and short-run URATE. As with other variables entering both the long and short run, the coefficient of URATE is estimated conditionally on the orthogonalization.
Example 4: VEC Restrictions
We may continue with the previous example after imposing restrictions on elements of the cointegrating vector.
Once again, copy the existing var object and click on the Estimate button to bring up the VAR estimation dialog. Leave the existing specification in place, including the exogenous variables, but click on the restrictions tab to display the restrictions settings. Tick the checkbox to enable the restrictions and enter “B(1,1)=1, B(1,2)=0.25, B(1,3)=0.5” in the edit field:
This specification restricts the first three elements of the cointegrating vector to the specified values, as sketched below. Click on OK to estimate the restricted VEC.
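In other words, assuming the endogenous ordering as entered (m1, gdp, tb3), the restricted cointegrating relation takes the form

\beta' y_t = 1 \cdot m1_t + 0.25 \cdot gdp_t + 0.5 \cdot tb3_t + \cdots

with the coefficients on the remaining long-run variables left free to be estimated.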
The familiar heading information is augmented to show the cointegrating restrictions, information about estimation and convergence, an analysis of whether the restrictions are identifying, and the results of an LR test for those restrictions that are binding.
The reported estimates of the cointegrating relation show both the restricted and unrestricted coefficient values:
Note that the elements of the cointegrating vector reflect the restrictions imposed in estimation, and that there are no standard errors for the restricted values.
The form of the remaining output (not shown), which consists of the error correction regression results and summary statistics, is unchanged from unrestricted estimation, with the exception of the number of coefficients.